Neuron News

AI will augment humans, so it may never overtake us

Photo by Yuyeung Lau on Unsplash.

Especially since sci-fi fans became enthralled with the ominous twist in Kubrick’s classic “2001: A Space Odyssey,” the rise of AI in our current technological environment elicits many concerns about one day succumbing to computer overlords. I believe these fears are legitimate, and they are being studied seriously. My take is that this fear should be leveraged to guide research toward making sure it doesn’t happen.
However, the reality of today’s practical implementations of artificial intelligence focuses on how machine learning techniques and predictive data analytics can *enhance* and *augment* the workflows and processes performed by humans. The intention is not to replace people, but to help them work more productively, with more accuracy, or get to the “next level” of the type of work they perform.
What I find particularly interesting is how humans can collaborate with AI to expand our creativity and innovate in ways we might never have conceived on our own, drawing on insights and capabilities of machine learning that are otherwise out of reach for the human brain.
And this is why today’s artificial intelligence applications must still be collaborative: the human brain can still perform certain tasks that are impossible for AI, and AI can now perform certain tasks that are impossible for the human brain. So, my expectation is that we will see the merging of AI and humans long before we see AI overlords squash the human species into servants of digital fiefdoms. As we instead become one with AI, we would experience a sort of evolution of our species that precludes the possibility of becoming the slave.

The State of Neurotechnology in 2018

It has been a while since we last posted about neurotechnology. So, where do things stand today? Where are the cyborgs already? Where is our unlimited memory capacity? Interesting developments bridging the brain and technology are trotting along, and there is still a long and exciting path up ahead. Two recent articles, from The Guardian and The Economist, highlight some aspects of the current state of neurotechnologies, so they seem like a great place to get back up to speed.

Just as many of the world’s most insanely rich people are deeply dabbling in out-of-this-planet endeavors, such as Elon Musk’s SpaceX and Jeff Bezos’s Blue Origin, others are dropping big dollars a bit more inwardly – into our brains. Paul Allen (of Microsoft co-founding fame) funded the Allen Institute for Brain Science, and Elon Musk (wait, where have we heard that name before?) started Neuralink, both major initiatives to jump-start our brains into a future where we are directly connected to our technological creations. Just as in the latest round of the space race, with all of these privately funded ventures, things will get real interesting, real fast.

 “Neurotechnology, Elon Musk and the goal of human enhancement,” The Guardian, January 1, 2018

So, how are we going to arrive at the point in human evolution where our brains are interfaced with non-biological computational power? What might keep us from reaching this state, and even if we do reach it, how might it change our definition of being human?

Three scientists from the Center for Sensorimotor Neural Engineering at the University of Washington take on these questions and more in their report for The Economist at … 

 “Grey matter, red tape: In search of serendipity,” The Economist, originally appearing in the print edition of the Technology Quarterly, January 6, 2018.

There are many wild, ominous, and crazy-cool efforts in progress, many of which are already appearing in our hospital recovery rooms. It will only be a matter of time before more tangible advancements in neurotechnology show up in our neighborhoods.

What do you think? Will you be ready to jack your brain into the machine? 


DARPA on the Brain

Neuron News from DPR has highlighted some interesting activity from DARPA in the past (read more) with its involvement in neurological research and technology development. With the “Grand Challenge” introduced in April 2013 by the Obama administration, called the BRAIN Initiative (“Brain Research through Advancing Innovative Neurotechnologies”), we noted that half of the dedicated $100 million in funds was to be delegated to future work out of DARPA. Now, months later, the military research branch has finally released two open calls for grant applications to spread some of these monies around to more organizations.

The first program call is for a project called SUBNETS (“Systems-Based Neurotechnology for Emerging Therapies”), which is searching for new technologies that will allow near real-time quantitative measurements of brain activity to then control implanted neural stimulation devices. Out of context, this might sound like an attempt at developing controllable cyborgs, but the focus of this proposal is a health care advancement that will support the recovery and repair of U.S. service members who have experienced neurological injuries and neuropsychological illnesses from wartime activities. According to the proposal, ten percent of veterans today receive mental health care or substance abuse counseling from the VA. With implantable devices controlled by real-time recording and analysis, as proposed by SUBNETS, neuropsychiatry would take a major leap beyond lying on the couch and talking it out with a trained professional holding a notepad.

The second call from DARPA is for RAM (“Restoring Active Memory”) and continues in the vein of supporting veterans with brain injuries. Here, the goal is to develop innovative neurotechnologies that utilize an understanding of the neural encoding of memories — something that is not yet even remotely understood — to recover memory after brain injury. The anticipated result is an implantable device that switches on to recover the lost memories.

RAM seems to carry a rather far-reaching goal that could only succeed with a complete understanding of the structural and functional neural correlates of human memory. The added difficulty is that if memories are directly encoded in the specific architecture of neuronal connections and the resulting functional relationships, then a traumatic brain injury could be defined as an event that directly destroys those connections. So, to recover lost memories, one might expect that a digital “brain dump” would need to be captured and stored (securely in the cloud?) before a soldier heads off to battle.

With both the SUBNETS and RAM programs, exciting new technologies and advancements might be possible. However, in the descriptions above there is a plethora of ethical, security, and privacy issues left unmentioned, only for the reader to speculate upon. To address the sorts of issues that always exist on the leading edge of technological developments, DARPA has also established an Ethical, Legal and Social Implications Panel composed of academics, medical ethicists, clinicians, and researchers to advise and guide the new programs as well as provide some form of independent oversight during their progress.


“DARPA Aims to Rebuild Brains,” Science, 29 November 2013, Vol. 342, No. 6162, pp. 1029-1030



Optical Illusions: Our neurons working out of our control

Looking at an image and seeing that something just isn’t quite right is always an intriguing experience. From past experience, we expect to see one thing, but often upon immediate observation we see something else quite different. Optical illusions demonstrate to us directly that reality is created by our perceptions of the environment, and that these perceptions are processed in our brains. So, maybe reality is just all in our heads?

“Reality is merely an illusion, albeit a very persistent one” – Albert Einstein
(a popular misquotation extracted from “For us believing physicists, the distinction between past, present and future is only a stubborn illusion.”  Einstein: His Life and Universe by Walter Isaacson (2008), p. 540)

Classic examples of optical illusions include the floor tiling at the Basilica of St. John Lateran in Rome and the “flashing” grid illusion first reported by Ludimar Hermann in 1870. The twentieth-century artist M. C. Escher took the phenomenon to an artistic level and created some of the most popular and aesthetically interesting illusions, and many more optical illusions may be viewed with an image search.

Rotating Snakes illusion, Copyright A.Kitaoka 2003

In 2003, Akiyoshi Kitaoka, a professor of psychology at Ritsumeikan University in Kyoto, Japan, designed a striking new example of the peripheral drift illusion called “Rotating Snakes” (read the original report, PDF). In this design, apparent motion of the image is seen in the observer’s peripheral vision. The effect is strongest when the image contains clearly graduated sections of repetitively diminishing or increasing brightness, and these sections follow fragmented or curved edges. A variety of examples of the design can be previewed on Kitaoka’s website of Rotating Snakes.

This visual phenomenon has fascinated scientists with the challenge of explaining how our brains process the image. It was not until quite recently that an answer may have been experimentally discovered (“Microsaccades and Blinks Trigger Illusory Rotation in the “Rotating Snakes” Illusion,” Otero-Millan, et al., The Journal of Neuroscience, 25 April 2012, 32(17): 6043-6051; doi: 10.1523/JNEUROSCI.5823-11.2012, Read the abstract). Researchers from the Laboratory of Visual Neuroscience at the Barrow Neurological Institute in Arizona, led by Dr. Susana Martinez-Conde, presented “Rotating Snakes” images to participants while recording their eye motion at high resolution. Previously, it had been presumed that the eyes were drifting during observation to create the apparent motion. Instead, the team found that when the observers acknowledged motion in the images, their eyes were undergoing small, rapid movements called microsaccades. These mini eye movements represent small jumps in a person’s gaze position that help to refresh the input on retinal receptors during intentional fixation on an image (“Toward a model of microsaccade generation: The case of microsaccadic inhibition,” Rolfs, et al., Journal of Vision, August 6, 2008, vol. 8, no. 11, article 5, doi: 10.1167/8.11.5, Read the full-text PDF).
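Pulling microsaccades out of an eye-tracking trace typically comes down to a velocity threshold: compute gaze velocity sample by sample, and flag short runs where it exceeds a cutoff. Here is a minimal sketch in that spirit (loosely inspired by the Engbert–Kliegl style of analysis); the toy gaze trace, the 1 kHz sampling assumption, and the 50 deg/s threshold are all illustrative inventions, not data or parameters from the studies cited above.

```python
# Toy velocity-threshold microsaccade detector.
# All numbers below are illustrative assumptions for the sketch.

def detect_microsaccades(x, y, dt, velocity_threshold):
    """Return (first, last) sample-index pairs where gaze velocity
    exceeds the threshold. x, y are gaze positions in degrees; dt is
    the sampling interval in seconds."""
    # Central-difference velocity estimate for each interior sample
    vel = []
    for i in range(1, len(x) - 1):
        vx = (x[i + 1] - x[i - 1]) / (2 * dt)
        vy = (y[i + 1] - y[i - 1]) / (2 * dt)
        vel.append((vx ** 2 + vy ** 2) ** 0.5)

    # Group consecutive supra-threshold samples into candidate events
    events, start = [], None
    for i, v in enumerate(vel):
        if v > velocity_threshold and start is None:
            start = i
        elif v <= velocity_threshold and start is not None:
            # vel[i] belongs to sample i+1, so sample i is the last
            # supra-threshold sample of the event
            events.append((start + 1, i))
            start = None
    if start is not None:
        events.append((start + 1, len(vel)))
    return events


# Toy trace: steady fixation with one small, fast excursion near sample 50
x = [0.0] * 100
for i in range(50, 55):
    x[i] = 0.2 * (i - 49)   # rapid ramp up to 1 degree
for i in range(55, 100):
    x[i] = 1.0
y = [0.0] * 100

events = detect_microsaccades(x, y, dt=0.001, velocity_threshold=50.0)
print(events)
```

With the toy ramp above, the detector flags a single brief event spanning the fast excursion, which is the qualitative signature the Barrow group correlated with the perceived rotation.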

It is quite amazing to gaze at an image that you consciously know is static, yet unquestionably see an apparent animation. Your understanding of reality conflicts directly with your observation of reality. As a quick personal experiment to see if I could control this reality distortion, I was able to temporarily pause the motion with a very focused attempt to stare only at one corner of the Rotating Snakes image. As I let my focus shift just a bit, the rotation immediately reappeared. It is only a guess whether I was inhibiting the microsaccades of my eyes, or whether I was positioning the image in some “peripheral blind spot” where the retinal receptors taking input from the eye motions couldn’t receive it. Nevertheless, I still feel quite grounded in reality; however, I am reminded to maintain an appreciation for questioning what I directly perceive around me, as my brain will continue to work in ways that are beyond my conscious control.

Citizen Scientists Mapping the Connectome

We have just begun reading the recently released book “Connectome” from Sebastian Seung of MIT. The basic notion of the book is that you are the emergent result of the interconnections of some 100 billion neurons in your brain. “You are your connectome.”

This is not a novel idea at its most basic level; however, Dr. Seung is bringing this exciting hypothesis to a broader popular understanding, which will help future generations appreciate the utterly incredible mass of flesh lodged in our skulls.

Mapping the complete interconnections of neurons remains to this day a daunting task for neuroscientists, but it is a task in which Dynamic Patterns Research is particularly interested. It took decades of manual labor by White, et al. to map the mere 302 neurons in the wee little worm C. elegans. The complete structural architecture of its neuronal connections–its connectome–is now readily available for research and exploration. Now, imagine extending this task to the human brain, a 300+ million-fold leap that would necessarily require technological advances not yet fully realized.
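Structurally, a wiring diagram like the C. elegans connectome is a weighted directed graph: neurons are nodes, synapses are edges, and the weight counts synaptic contacts. A minimal sketch of that data structure follows; the neuron names and contact counts are hypothetical placeholders, not entries from the published dataset.

```python
# Minimal connectome-as-graph sketch. Neuron names and synapse
# counts below are invented for illustration.

from collections import defaultdict

class Connectome:
    def __init__(self):
        # pre-synaptic neuron -> {post-synaptic neuron: contact count}
        self.edges = defaultdict(dict)

    def add_synapse(self, pre, post, count=1):
        # Accumulate contact counts between the same pair of neurons
        self.edges[pre][post] = self.edges[pre].get(post, 0) + count

    def out_degree(self, neuron):
        # Number of distinct downstream partners
        return len(self.edges.get(neuron, {}))

    def total_synapses(self):
        # Sum of all contact counts in the graph
        return sum(c for targets in self.edges.values()
                     for c in targets.values())

c = Connectome()
c.add_synapse("AVAL", "VA08", 3)   # hypothetical contact counts
c.add_synapse("AVAL", "DA05", 2)
c.add_synapse("VA08", "DA05", 1)
print(c.out_degree("AVAL"), c.total_synapses())
```

The appeal of the representation is that once the anatomy is captured this way, standard graph analyses (paths, hubs, circuits) apply directly, at 302 neurons or, someday, at 100 billion.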

Despite this apparent impossibility, there is just something awesome about the human brain that makes us who we are, and we just have to plow forward and try to discover more. There’s something “in there,” or, something that emerges from what’s inside that especially sets us apart from all other known life on this planet. If we could tap into that “something,” then we might just have a better understanding of who we are as an organism. Tapping into the structure of our brains–our connectome–is the best place to start.

… …

Watch Sebastian Seung’s TED Talk, “I am my connectome.” :: July 2010

… …

It seems that we will have to wait patiently for technological advances–although they are arriving at accelerating rates–to gain the ability to efficiently map our personal connectomes. In the meantime, we do have an extraordinarily powerful tool ready today to help with developing procedures for mapping neuronal connections in living brains: the brains of citizen scientists.

It is from the laboratory of Sebastian Seung that enthusiastic collaborating scientists bring to the citizen science community the exciting opportunity to directly map interconnections in neural tissue. The online system, called eyewire, provides images from 3-D stacks of neuronal tissue from the retina and guides citizen scientists through a process of identifying connecting features. By visually evaluating two-dimensional cross-sections of tissue images created with electron microscopy, users work through the layers, recognizing connecting features between each image. The cross-layer features identified through the efforts of citizen scientists can then be reconstructed into a three-dimensional structural map of the neurons–and their connections–throughout the tissue.
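The reconstruction step described above, linking segments recognized on successive 2-D slices into one 3-D structure, can be sketched as a union-find over per-slice segment labels: two segments on adjacent slices that overlap get merged into the same 3-D object. The tiny hand-drawn masks here are invented for illustration; real eyewire-style pipelines operate on electron-microscopy volumes with machine-assisted segmentation, not hand-labeled grids.

```python
# Toy sketch: merge per-slice 2-D segment labels into 3-D object ids
# whenever segments on consecutive slices overlap at the same pixel.

def link_slices(slices):
    """slices: list of 2-D grids; nonzero ints are per-slice segment ids.
    Returns a mapping from (slice_index, segment_id) to a 3-D object id."""
    parent = {}

    def find(k):
        # Union-find root lookup with path compression
        while parent[k] != k:
            parent[k] = parent[parent[k]]
            k = parent[k]
        return k

    def union(a, b):
        parent[find(a)] = find(b)

    # Register every (slice, segment) pair as its own object initially
    for z, grid in enumerate(slices):
        for row in grid:
            for v in row:
                if v:
                    parent.setdefault((z, v), (z, v))

    # Merge segments that overlap between consecutive slices
    for z in range(len(slices) - 1):
        a, b = slices[z], slices[z + 1]
        for i in range(len(a)):
            for j in range(len(a[i])):
                if a[i][j] and b[i][j]:
                    union((z, a[i][j]), (z + 1, b[i][j]))

    return {k: find(k) for k in parent}

# Two segments on slice 0; only segment 1 continues onto slice 1.
s0 = [[1, 0, 2],
      [1, 0, 2]]
s1 = [[1, 0, 0],
      [1, 0, 0]]
labels = link_slices([s0, s1])
print(labels[(0, 1)] == labels[(1, 1)], labels[(0, 2)] != labels[(0, 1)])
```

In this toy volume, segment 1 on both slices collapses into a single 3-D object, while segment 2 (which ends at slice 0) stays separate, which is the essence of turning many 2-D judgments into one 3-D neuron map.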

A waiting list is currently in place to control the influx of interested citizen scientists, so Dynamic Patterns Research has not yet had the opportunity to test out the system (but, we are on “the list”!). We hope to be in soon so that we can participate in this great project, which sits at one of the core interests of Dynamic Patterns Research. If you are already participating now, please let us know what you think of the system.

Your brain is the most awesome thing in the Universe. We know so little about it, but we are on the cusp of a revolution in understanding what it is and how it works. Now, citizen scientists can be an integral part of this revolution, so that anyone can scientifically better know their inner self.


A Live View of Unconsciousness

It’s difficult to know what you are thinking — or what is happening in your own brain — as you lose consciousness. There are many instances where this loss might happen, including getting whacked upside the head, inhaling a large volume of non-medically-inspired drugs, or, to the preference of many, falling into a deep sleep under anesthesia before an invasive operation.

Many research groups have studied the brain under the influence of anesthetic drugs, in particular that of Stuart Hameroff at the University of Arizona. The brain seems to become almost numb and nearly shuts down entirely, enabling trained professionals to freely cut into the human body without the distraction of painful screams and cries for help from the patient. But this is a rather interesting phenomenon that is not entirely understood.

fEITER image of an anaesthetised brain.
Reconstruction of the brain during the onset of anaesthesia. CREDIT: University of Manchester

Directly watching the brain as it slips into unconsciousness would certainly be an interesting approach to solving not only the mysteries of anesthesia, but also to better understanding what it means for the brain to be conscious, or at least aware. Now, with a new observational technique developed at the University of Manchester, called functional electrical impedance tomography by evoked response (fEITER), an attempt is underway to create live views of the brain’s electrical activity as it shuts down under anesthetic drugs. With this near real-time recording, the research team, led by Brian Pollard, Professor of Anaesthesia at the University of Manchester, hopes to learn more about the differences between an unaware and an aware brain, and how these differences might lead to a better understanding of what the phenomenon of consciousness really is for human beings.

Notice, here, that a subtle change of words was made from “consciousness” to “awareness” and back again. This distinction seems to be important, and the terms should not be used interchangeably. A brain might be considered “aware” of its surroundings by responding to pain induced on its body, or to the intense colors and lights surrounding its head during a walk through Times Square in New York City. But a simple diode light sensor switching off an automatic garage door motor might also be considered “aware” of the puppy dog running through its beam just before the door touches ground.

So, what seems to be an additional specialty of humans is that our brains are more than just aware. There is something more to consciousness; something to being self-aware. Or maybe not… we just don’t understand yet. However, the real-time, three-dimensional electrical views generated by fEITER devices should provide some extremely interesting comparisons between the aware and unaware brain. And since our sensation of consciousness seemingly emerges from this awareness, understanding the electrical requirements for awareness is an important step toward understanding the neural correlates of consciousness.

“3-D Images Reveal What Happens as Brain Loses Consciousness” :: LiveScience :: June 10, 2011 [ READ ]

Last updated August 13, 2022