The core of that demo is the Working Model 2D physics engine (http://www.design-simulation.com/WM2D/index.php), which wasn't produced at MIT as far as I know. MIT added the whiteboard interface. The recognition of circles, arrows, and other symbols is also part of Working Model. One of my friends (a Stanford prof who teaches ME) coded a large part of the engine, and he told me it worked like a charm on Intel 486 chips. I'm helping him look for some seed money for a related venture, so interested angels should drop me a line. (YC isn't a good fit for him.)
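For anyone curious what a 2D physics engine like Working Model actually does each frame, here's a minimal sketch of a simulation step using semi-implicit Euler integration. This is not Working Model's actual algorithm (its internals aren't public here); the `Body` class, constants, and loop are all illustrative assumptions:

```python
# Minimal sketch of a 2D rigid-body physics step (semi-implicit Euler).
# NOT Working Model's real internals -- just the kind of loop any
# 2D physics engine runs each frame: integrate forces -> velocities,
# then velocities -> positions.

from dataclasses import dataclass

@dataclass
class Body:
    x: float   # position (m)
    y: float
    vx: float  # velocity (m/s)
    vy: float
    mass: float

GRAVITY = -9.81  # m/s^2, acting along y

def step(body: Body, dt: float) -> None:
    """Advance one body by dt seconds under gravity."""
    body.vy += GRAVITY * dt   # acceleration -> velocity (first!)
    body.x += body.vx * dt    # velocity -> position
    body.y += body.vy * dt

# Drop a ball from 10 m with a small horizontal push,
# simulated for one second at 60 steps per second.
ball = Body(x=0.0, y=10.0, vx=1.0, vy=0.0, mass=1.0)
for _ in range(60):
    step(ball, 1.0 / 60.0)
print(round(ball.y, 2), round(ball.x, 2))
```

Updating velocity before position (semi-implicit rather than explicit Euler) is the standard choice in game-style physics because it stays stable at the large, fixed time steps an interactive demo uses.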
From Steven Levy's book Hackers, I gather that the MIT hackers of the '60s and '70s could do tasks like shape recognition with their hands behind their backs, in assembly, in under 30 lines of code.
I'm not sure whether today's MIT hackers can do robotic/visual recognition coding that efficiently, but I don't have any books on modern-day MIT.
That's truly an amazing piece of technology with a wide range of potential applications. MIT certainly deserves just as much (if not more) credit for the innovation by contributing the whiteboard interface. I'm very interested in seeing what work is currently being done on this tech, and I'm curious why they're sticking strictly to two-dimensional space.