
Subsumption Architecture: How iRobot Enabled Scrum

A simple definition of Subsumption Architecture can be found on Wikipedia. However, that definition is a watered-down version of reality.

It is like hearing a story fourth-hand, where the understanding gets weaker and weaker as it passes from person to person. You will get a much clearer picture by reading someone who worked in the area - see Mobile Robot Control.

In the early 1990s, I leased office space for a couple of years to the inventor of the Subsumption Architecture. Rodney Brooks and his graduate students used that space to start iRobot. So I was taught directly by Brooks and his team, with the real robots as live demonstrations of what he was describing.

The robots regularly escaped from their lab and ran in and out of my office. Getting it directly from the source is always a more vivid and accurate experience. Take a look at Brooks' papers on scholar.google.com, for example, this one on robust layered control systems.

On Friday afternoons, Brooks would come by to check on his team. On one such Friday, I asked Prof. Brooks to explain how the robot (named Genghis Khan at the time) worked. "It's very interesting," I added, "when it runs into my office trying to hunt me down with its infrared sensors."

[Photo: Genghis, the robot built on the subsumption architecture. Credit: Carlton SooHoo, PanoSpin]

Brooks said, "First you need to understand that for 30 years we have been trying to build an intelligent system at MIT AI Lab and it has been a total failure." The best they had been able to do, Brooks told me, was a smart chess program.

"So I am taking a radically different approach with the subsumption architecture. This robot has no central processor. Each leg has a chip that can move a leg. There is a chip in the spine that coordinates the legs. A neural network chip in the head figures out what to do. Before you turn the robot on the chip is blank. There is no database. The world is the data and all data is created by sensors."

They would show me the blank chip, plug it into the head, and turn the robot on. Its legs would start flailing like an octopus, and it would roll upright, then stagger around like a baby learning to walk until, a few minutes later, it was scurrying around the room.
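For readers who want to see the idea rather than just hear the story, here is a minimal sketch of subsumption-style layered control in Python. The layer names, sensor fields, and priority ordering are my illustrative assumptions, not Brooks' actual firmware; the point is structural: each layer is simple, there is no central planner or stored world model, and a higher layer subsumes the layers below it the moment its sensors fire.

    import random

    class Behavior:
        """One layer of control: decides when it is active and what to do."""
        def active(self, sensors):
            raise NotImplementedError
        def act(self, sensors):
            raise NotImplementedError

    class Wander(Behavior):
        # Lowest layer: always willing to act; picks a direction at random.
        def active(self, sensors):
            return True
        def act(self, sensors):
            return random.choice(["forward", "turn_left", "turn_right"])

    class AvoidObstacle(Behavior):
        # Middle layer: subsumes Wander when the IR sensor sees something close.
        def active(self, sensors):
            return sensors["ir_distance"] < 0.3
        def act(self, sensors):
            return "back_away"

    class FollowHeat(Behavior):
        # Top layer: hunts a heat source whenever one is detected.
        def active(self, sensors):
            return sensors["heat_detected"]
        def act(self, sensors):
            return "move_toward_heat"

    # Highest-priority layer first; the first active layer suppresses the
    # rest. No central processor, no database: "the world is the data,"
    # and every decision reads the sensors directly.
    LAYERS = [FollowHeat(), AvoidObstacle(), Wander()]

    def step(sensors):
        for layer in LAYERS:
            if layer.active(sensors):
                return layer.act(sensors)

    print(step({"ir_distance": 0.9, "heat_detected": False}))  # wanders
    print(step({"ir_distance": 0.1, "heat_detected": False}))  # back_away
    print(step({"ir_distance": 0.9, "heat_detected": True}))   # move_toward_heat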

I said to Brooks that this reminded me of the really slow programmers I had back at the banking company across from the MIT Sloan Business School. "I bet if we gave the slow programmers some simple rules like the robot's, and every day they got together and synchronized their neural networks, they could boot up into a super smart, very fast team." Then I asked, "Do you think that would work?" Brooks replied, "I don't know. Why don't you try it?"

The next year, I joined Easel Corporation and got my chance. Scrum booted up just like the robot.

So Scrum arose from deep immersion in the work of primary researchers in robotics and computer science, primarily at MIT and Bell Labs. It also drew on complex adaptive systems theory, grounded in supercomputer simulations of evolving cells during my 11 years of research at the University of Colorado School of Medicine.

It has always interested me to see people trying to tweak Scrum without understanding the subsumption architecture. It is like a caveman coming upon a smartphone and trying to fix it.

I remember a VP of Engineering in Silicon Valley telling me, around the time of the Agile Manifesto meeting or maybe before, "Scrum is the biggest thing that has hit the valley since the invention of the computer chip!"

Scrum was designed to solve the problem created by Moore's law: computing power was increasing exponentially, while software development productivity was growing only linearly.
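To make that gap concrete, here is a back-of-the-envelope sketch in Python. The growth rates are assumptions for illustration - hardware capability doubling every two years against a fixed linear gain in software productivity - not measured data.

    # Exponential hardware growth vs. linear software productivity.
    # Rates are illustrative assumptions, not measurements.
    for year in range(0, 21, 4):
        hardware = 2 ** (year / 2)   # Moore's law: doubles every ~2 years
        software = 1 + 0.05 * year   # linear: +5% of baseline per year
        print(f"year {year:2d}: hardware x{hardware:6.1f}, software x{software:.2f}")

After twenty years, the assumed hardware curve has grown about a thousandfold while the assumed software curve has merely doubled - the mismatch Scrum set out to close.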

Today, with continuous deployment multiple times a day off a single code branch, as they do at Google and elsewhere, we are seeing orders-of-magnitude improvements in productivity, something I tried to explain in my Agile 2005 research paper, "The Future of Scrum."

[Photo: smart cars on display in a Google development lab]

The latest iteration of this kind of thinking is in self-driving cars. In 2016, I upgraded my Tesla to the latest version, with the new NVIDIA smart board and expanded sensors. This is a $500 box that is a compressed version of a $50M supercomputer from the year 2000.

And Elon Musk says programming it requires thousands of developers but only a few years of iteration. It would not be possible to build this without the combination of the hardware chips and agile methods.

Meanwhile, Google is still stuck with smart cars in a laboratory environment. They are not agile enough.
