
Bridging the Autonomy Gap - Part 1

Florian Pestoni

Robots are everywhere. They can be found in hospitals and hotels. On farms and construction sites. In brick-and-mortar retail stores and e-commerce distribution centers. In the air, in the sea, on the ground and even underground, as we saw recently in the DARPA Subterranean Challenge.

But what is a robot? There have been plenty of philosophical discussions on this, and probably no shortage of flame wars. We like this definition from IEEE:

A robot is an autonomous machine capable of sensing its environment, carrying out computations to make decisions, and performing actions in the real world.

So right there in the definition is the A-word: autonomy. Since autos is Greek for “self” and nomos for “law”, something autonomous makes its own laws. Pretty cool, right?

However, autonomy is relative. We’re not just talking about being constrained by the laws of physics, but also by the limits of AI, sensing technology, computing power, servos and so on. In essence, the guts and brains of robots can only go so far with current technology.

Surely, given all the brilliant minds working on this problem, we should have complete autonomy in a couple of years, right? We don’t think so. In fact, complete autonomy may never be possible.

In the self-driving car space, SAE International (formerly the Society of Automotive Engineers) has defined six levels of driving automation, ranging from no automation (level 0) to full automation (level 5), which requires no driver intervention – and perhaps no human passengers either.
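
To make the taxonomy concrete, here is a minimal sketch of the levels as a Python enum. The level names are paraphrased from the SAE J3016 standard; the helper function is our own shorthand, not part of the standard.

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """SAE J3016 levels of driving automation (names paraphrased)."""
    NO_AUTOMATION = 0           # human does all the driving
    DRIVER_ASSISTANCE = 1       # one assist feature, e.g. adaptive cruise
    PARTIAL_AUTOMATION = 2      # steering + speed combined, driver supervises
    CONDITIONAL_AUTOMATION = 3  # system drives, driver takes over on request
    HIGH_AUTOMATION = 4         # no driver needed within a limited domain
    FULL_AUTOMATION = 5         # no driver needed, anywhere, in any conditions

def needs_human_fallback(level: SAELevel) -> bool:
    # At levels 0-3 a human is still the fallback when automation runs out.
    return level <= SAELevel.CONDITIONAL_AUTOMATION
```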

Getting to level 5 will take a while. Jim Hackett, CEO of Ford Motor Co., acknowledged as much recently: “We overestimated the arrival of autonomous vehicles.” John Krafcik, CEO of Alphabet’s Waymo, made an even stronger statement: “Autonomy will always have some constraints.” More to our point, even level 5 does not mean complete autonomy in the sense of being self-directed.

Although these challenges were discussed in the context of self-driving cars, they apply more broadly to (other) robots. Small digression: a self-driving car is really a ground-based robot with a hole in the middle for carrying cargo called people – and even that is not a hard requirement; think trucks without a cab for a human driver.

For instance, ground-based autonomous mobile robots are deployed in various environments, usually for material movement or data collection. The applications are diverse, from campus security to inventory management at a retail store or goods-to-person delivery in a warehouse. These robots typically use a combination of odometry, RGB cameras, depth cameras and LIDAR to sense their environment.
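
To see why odometry alone is not enough, consider the basic dead-reckoning step for a differential-drive robot – a generic textbook sketch, not any particular vendor’s code. Any wheel slip feeds directly into the pose estimate, which is exactly why these robots also carry cameras and LIDAR:

```python
import math

def integrate_odometry(x: float, y: float, theta: float,
                       d_left: float, d_right: float,
                       wheel_base: float) -> tuple[float, float, float]:
    """One dead-reckoning step for a differential-drive robot.

    d_left / d_right are wheel-encoder distances since the last update.
    If a wheel slips, the encoders misreport motion and the error
    accumulates in (x, y, theta) with nothing here to correct it.
    """
    d_center = (d_left + d_right) / 2.0
    d_theta = (d_right - d_left) / wheel_base
    # Midpoint approximation: advance along the average heading of the step.
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    return x, y, theta + d_theta
```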

While these sensors – and the computer vision algorithms that turn their raw signals into navigation decisions – have advanced tremendously in the last decade, they still have limitations. Sometimes a reflection off a particularly clean floor is perceived as an obstacle. Other times, a small slip of the wheels may result in “mislocalization”: the robot is no longer sure where it is on the map, and is stuck without some assistance.
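
In practice, a localizer exposes its own uncertainty, which gives a fleet a concrete signal to watch. Here is a minimal sketch, assuming the localizer publishes an (x, y, yaw) pose covariance; the threshold and names are illustrative, not from any specific robot stack:

```python
import numpy as np

# Illustrative threshold – real values depend on the robot, sensors and map.
COVARIANCE_TRACE_LIMIT = 0.5

def is_mislocalized(pose_covariance: np.ndarray) -> bool:
    """Flag likely mislocalization when pose uncertainty blows up.

    pose_covariance is the 3x3 (x, y, yaw) covariance from the localizer.
    A large trace means the estimate has diverged – for example, after
    wheel slip makes odometry disagree with what the LIDAR sees.
    """
    return float(np.trace(pose_covariance)) > COVARIANCE_TRACE_LIMIT
```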

At InOrbit, we call this the autonomy gap. We’re on a mission to bridge this gap by creating DevOps and AI tools as well as developing best practices to drive human-in-the-loop robot operations at global scale.
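
What does human-in-the-loop look like in practice? As a purely hypothetical sketch – not InOrbit’s actual API – a fleet agent could turn a detection like the one above into an incident on an operator’s queue:

```python
from dataclasses import dataclass
from queue import Queue

@dataclass
class Incident:
    robot_id: str
    reason: str  # e.g. "mislocalized" or "phantom obstacle"

# A stand-in for a real operations service; a plain queue keeps the sketch small.
operator_queue: Queue = Queue()

def escalate(robot_id: str, reason: str) -> None:
    """Record an incident for a human operator.

    A real agent would also pause autonomy and attach context:
    pose, sensor snapshots, timestamps, a map thumbnail, and so on.
    """
    operator_queue.put(Incident(robot_id, reason))

escalate("amr-042", "mislocalized")
print(operator_queue.get())  # Incident(robot_id='amr-042', reason='mislocalized')
```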

In Part 2 of this article, we’ll share more about InOrbit’s plans to help the robotics industry accelerate the adoption of autonomous robots at scale by bridging the autonomy gap.