Experimental demonstrations of apparent computer control of a living brain are largely based on the software’s ability (and that of the corresponding human programmer) to learn to identify specific, repeatable brain signals and tag them with specific, observed motions of the living test subject. So, the monkey’s arm moves to the left, and a certain picture of brain activity is recorded. The monkey’s arm moves to the left again, and a similar brain activity pattern is again mapped.

There must be a connection, right?

Next, the computer is connected to a robotic arm that stands in for the monkey’s arm. When the computer records brain activity that was previously tagged to a leftward arm movement … the computer tells the robotic motors to move the stand-in arm to the left.

Voilà! Computer control between a monkey’s brain and a robotic arm.
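The training-and-playback loop described above can be sketched as a simple look-up table. To be clear, this is my own illustrative sketch, not the actual experimental pipeline: the firing-rate vectors, movement labels, and nearest-template matching are all assumptions made up for the example.

```python
import numpy as np

def train(recordings):
    """Build the look-up table: average the activity patterns seen
    for each observed arm movement into one stored template."""
    grouped = {}
    for pattern, movement in recordings:
        grouped.setdefault(movement, []).append(pattern)
    return {move: np.mean(patterns, axis=0)
            for move, patterns in grouped.items()}

def decode(table, pattern):
    """Playback: return the movement whose stored template is
    closest (Euclidean distance) to the newly recorded pattern."""
    return min(table, key=lambda move: np.linalg.norm(table[move] - pattern))

# Training: two 'left' trials with similar activity, one 'right' trial.
table = train([
    (np.array([0.9, 0.1, 0.2]), "left"),
    (np.array([0.8, 0.2, 0.1]), "left"),
    (np.array([0.1, 0.9, 0.8]), "right"),
])

# A new recording resembling the 'left' trials decodes as 'left'.
print(decode(table, np.array([0.85, 0.15, 0.15])))  # → left
```

Note that nothing in this table captures *why* the pattern corresponds to the movement; it only memorizes the pairing, which is exactly the limitation discussed next.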

Certainly, this is a magnificent neurotechnological feat, but the experiment rests entirely on a previously mapped look-up table. It gives us very little information about the fundamental behavior of the monkey’s active neuron network, which may or may not behave in precisely reproducible ways each time the arm moves to the left.

What if a neuron in the recorded activity dies? Well, the monkey can presumably still move its arm around, but the specific network pattern of electrical activity might change and no longer match the look-up table stored in the computer’s memory. The computer might nix the new signal recorded when the monkey moves its arm to the left, because the software doesn’t understand how the network adapts at a fundamental level.

Developing software that tries to understand the neuron-computer interface at a more basic level is the goal of Lakshminarayan Srinivasan at MIT. This work is essentially the starting point for writing an all-purpose BASIC neurological language for the brain.

The general idea is to build a software language that looks at a broader swath of brain activity and links neural action with its probable relation to a specific motor task. This builds flexibility into the software communication, so that it won’t lock up and give the user the Blue Screen of Death just because one neuron didn’t fire the same way it did last week.
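As a toy illustration of that probabilistic linking (again my own sketch, not Srinivasan’s actual filter design), a decoder can assign each candidate movement a probability given the observed activity, rather than demanding an exact match against a stored table. The Gaussian noise model, uniform prior, and template values below are all assumptions for the example.

```python
import numpy as np

def posterior(templates, sigma, observed):
    """P(movement | activity): Bayes' rule with a Gaussian noise model
    around each movement's template and a uniform prior over movements."""
    log_p = {move: -np.sum((observed - mu) ** 2) / (2 * sigma ** 2)
             for move, mu in templates.items()}
    # Normalize in a numerically stable way (subtract the max log).
    top = max(log_p.values())
    unnorm = {move: np.exp(v - top) for move, v in log_p.items()}
    total = sum(unnorm.values())
    return {move: p / total for move, p in unnorm.items()}

templates = {"left": np.array([0.9, 0.1]), "right": np.array([0.1, 0.9])}

# A noisy, imperfect recording still yields graded probabilities
# instead of a hard table hit-or-miss.
post = posterior(templates, sigma=0.5, observed=np.array([0.7, 0.3]))
```

Because the output is a probability over tasks rather than a lookup, a drifting or missing neuron degrades the confidence gracefully instead of breaking the match outright.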

So far, Srinivasan’s work is based entirely on simulations, and it is now being expanded to tests with living subjects. It will be very interesting to watch the results of these new developments and discover whether this programming approach is compatible with talking directly to our neuron networks.

I would like to emphasize here that I in no way want to discount the significance and importance of the successes of “Bionic Monkey” research to date. These new techniques are absolutely critical and very exciting. But we must be clear that this does not yet amount to a pure neuron-computer interface. There is still a long way to go in continuing the advances of neurotechnology toward a deeper understanding of neuron network function… and this long road is still very exciting!

“Standardizing the Brain-Machine Interface” :: IEEE Spectrum Online :: April 2008 :: [ READ ]

Srinivasan, L., Eden, U.T., Mitter, S.K., and Brown, E.N., “General purpose filter design for neural prosthetic devices,” Journal of Neurophysiology, 98:2456-2475. [ READ ]

Also take a look at the earlier work…

“Bionic Monkeys!” :: Discover Magazine Blog :: May 29, 2008 :: [ READ ]

“Mind Over Matter: Monkey Feeds Itself Using Its Brain” :: ScienceDaily.com :: May 28, 2008 :: [ READ ]


Last updated May 22, 2018