Limbic scaffolding: bridging the gap between AI and robotics
Sci-fi novels and movies have spoiled generations of people with an almost romantic view of intelligent robots whose cognitive and physical capabilities equal or exceed those of humans. However, the reality is that research and development efforts on the cognitive side often have very little overlap with those on the physical side. As a result, we have an awkward situation where AI is often unembodied, and robots are almost universally unintelligent. This discrepancy has some unexpected consequences.
The most notable consequence is fragmentation. Consider personal assistants: ideally, what we want is a single intelligent personal assistant with the level of utility of those in I, Robot. However, what we have is a series of loosely integrated assistants that are good at one (or at most a few) tasks. There is a plethora of commercially available AI ‘assistants’ that can (kind of) understand speech but otherwise have little to no sensory input. Their primary purpose is to interpret commands and answer questions. In terms of agency, anything outside the realm of personal admin (scheduling an event or calling somebody while we drive) requires linking the assistant to ‘smart’ devices (such as music players) that it can then send commands to.
We have separate and dedicated machines for tasks that require physical assistance, such as dishwashers and washing machines at home or wheelchairs and prosthetic devices for people living with disabilities. However, while these devices are for all intents and purposes robots, they are not imbued with any sort of intelligence – it is up to us to tell them what to do. Self-driving cars, which are both literally and metaphorically around the corner, are the closest that AI and robotics have ever come together – and they are naturally generating quite a bit of hype.
An immediate consequence of fragmentation is something that makes every engineer shudder: integration. When we have multiple unintelligent physical devices and AI assistants, we have to make them all talk to each other in a way that they can understand. In addition to introducing a rapidly growing number of potential points of failure (and therefore frustration) with every device added, the utility of the final solution is unlikely to offset the initial investment in terms of time, cost and effort. In other words, integrating an AI assistant with your music player or smart lighting system will probably be just as much effort as turning on the music or flicking the light switch yourself.
And even if you could integrate your ephemeral AI assistant with the dreaded dishwasher, that wouldn’t really help you, as the dishwasher cannot load itself! The lack of agency is not just a matter of integration: the discernible absence of robots in our homes stems from the concern that, although robots might be physically capable of performing various tasks, they are just not smart enough to perform them – and to do so safely.
Finally, there is also a personal element to all this. Although there are people who have been officially married to their smartphone, it is safe to say that most of us do not feel particularly connected on an emotional level to our devices or AI assistants. They are tools that serve a particular purpose, and that’s the extent of it – if one breaks, we buy another. Old models are quickly replaced with new ones. Even though your AI assistants can understand what you say, they don’t really care about it. Do not expect any of them to ask you if you’d like a glass of wine because they intuitively know that you’ve had a tough day. And certainly don’t expect any of them to pour it for you!
There is a term in the visual arts called negative space – the space that exists between and around visual subjects in a scene. In many cases, the negative space can be substantial and can represent a visual subject in itself; a prime example is the visual illusion known as the Rubin vase. Negative space is an apt metaphor for what exists between AI and robotics – the gap between the minds and bodies of robots. And just like negative space, this gap is an entire subject of its own that deserves to be studied and developed in its own right. A bridge over the gap between mind and matter, between AI and physical robots – a form of limbic scaffolding, analogous to the limbic system in the brain – can advance both AI and robotics beyond the current status quo.
What would we gain from a robot that has all the perceptual capabilities and dexterity of a human as well as a mind that can make use of those capabilities? To begin with, this would eliminate fragmentation – it is a single unit – and by extension, it would also eliminate the need for integration. Take, for example, the task of loading the dishwasher. Consider what is involved in the process: speech recognition to understand the request, object recognition to ensure that we are not adding the cat to the top tray, collision avoidance to ensure that the dishes are not broken while being stacked. Sounds like a good use case for a ‘dishwasher loading robot’! But wait, what if the same robot could also use the same capabilities to load the laundry into the washing machine? Or the shopping into the fridge? What about taking the clean dishes out of the dishwasher and into the cupboard, and the same for the clean laundry from the washing machine into the dryer? Tools in and out of a box? Or even helping you get in your (non-autonomous) car, and then driving you to work or to the hospital?
This is an obvious case of reusability, a paradigm used extensively in almost all engineering disciplines – and yet we are somehow failing to see the parallels. The reason is that, despite those parallels, every separate task currently requires a separate set of instructions. The missing link is a mind that can get the robot to learn new tricks – things that have not been explicitly programmed into it – and to perform them safely and reliably. A mind that allows it to ask questions, to discover patterns and to eventually figure out what needs to be done before we even have to say it – and to be able to do it!
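To make the reusability argument concrete, here is a minimal sketch of the idea in Python. All function names are hypothetical and the capabilities are trivial stubs; the point is only the structure: perception and safety checks are written once, and each new household task is a cheap composition of the same modules rather than a separate, fully bespoke set of instructions.

```python
# Hypothetical, illustrative capability modules. In a real robot these
# would wrap perception, safety and motion-planning subsystems.

def recognize_object(item):
    """Stub perception capability: classify an item by name."""
    return "pet" if item == "cat" else "object"

def safe_to_handle(item):
    """Stub safety check, built on top of the perception capability."""
    return recognize_object(item) != "pet"

def plan_placement(item, container):
    """Stub planning capability: decide where an item should go."""
    return f"place {item} in {container}"

def load(items, container):
    """A generic loading task composed from the capabilities above.
    The same composition covers the dishwasher, the washing machine,
    the fridge, or a toolbox - only the arguments change."""
    return [plan_placement(i, container) for i in items if safe_to_handle(i)]

print(load(["plate", "cat", "mug"], "dishwasher"))
# The plate and mug are loaded; the cat is spared the top tray.
```

The design choice mirrors the essay's point: once `safe_to_handle` and `plan_placement` exist, "load the laundry" or "load the shopping" is a new call, not a new program – which is exactly the kind of reuse that currently gets lost between separate single-purpose devices.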
A subtle point to take home is that learning is always a two-way street: while the robot is learning, so are we! We can adjust our teaching approach as we learn about the particular way it does things, and in this process we can achieve something that no product currently on the market offers: true personalisation. I, for one, know that the longer I spend around my intelligent assistant, getting to know it as it gets to know me, the more I would miss it if something happened to it. And perhaps it would reciprocate.