Neuron News from DPR has highlighted some interesting activity from DARPA in the past (read more) with its involvement in neurological research and technology developments. With the “Grand Challenge” introduced in April 2013 by the Obama administration called the BRAIN Initiative (“Brain Research through Advancing Innovative Neurotechnologies”), we clearly noticed that half of the dedicated $100 million in funds were to be delegated to future work out of DARPA. Now, months later, the military research branch has finally released two open calls for grant applications to spread around some of these monies to more organizations.
The first program call is for a project called SUBNETS (“Systems-Based Neurotechnology for Emerging Therapies”), which is searching for new technologies that will allow near real-time quantitative measurements of brain activity to then control implanted neural stimulation devices. Out of context, this might sound like an attempt at developing controllable cyborgs, but the focus of this specific proposal is to create a health care advancement that will support the recovery and repair of U.S. service members who have experienced neurological injuries and neuropsychological illness from war-time activities. According to the proposal, ten percent of veterans today are receiving mental health care or substance abuse counseling from the VA. With implantable devices controlled by real-time recording and analysis as proposed by SUBNETS, neuropsychiatry will take a major leap beyond lying on the couch and talking it out with a trained professional with a notepad.
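The record-analyze-stimulate cycle that SUBNETS describes is, at its core, a closed feedback loop. The sketch below is purely illustrative (the biomarker model, gain, and target values are invented for this example and do not come from any DARPA proposal), but it shows the basic idea of adjusting stimulation from a measured signal rather than delivering a fixed, open-loop dose:

```python
# Purely illustrative closed-loop sketch: a measured brain-activity
# "biomarker" feeds back to adjust the stimulation level each cycle.
def closed_loop_step(biomarker, target, stim, gain=0.5):
    """One control step: nudge stimulation toward the target biomarker."""
    error = target - biomarker
    return max(0.0, stim + gain * error)

# Toy simulation: the measured signal responds linearly to stimulation.
# (This response model is invented for illustration.)
stim = 0.0
biomarker = 0.2
for _ in range(20):
    stim = closed_loop_step(biomarker, target=1.0, stim=stim)
    biomarker = 0.2 + 0.6 * stim
```

After a handful of cycles the toy loop settles with the biomarker near its target, which is the whole point of closing the loop: the device keeps correcting itself as the measured signal drifts.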
The second call from DARPA is called RAM (“Restoring Active Memory”) and continues in the vein of supporting veterans with brain injuries. Here, the goal is to develop innovative neurotechnologies that utilize an understanding of the neural encoding of memories — something that is not yet even remotely understood — to recover memory after brain injury. The hope is for an implantable device that can switch on to restore the lost memories.
RAM seems to carry a rather far-reaching goal that could only be met with a complete understanding of the structural and functional neural correlates of human memory. The added difficulty is that if memories are assumed to be directly encoded in the specific architecture of neuron connections and the resulting functional relationships, then a traumatic brain injury could be defined as an event that directly destroys these connections. So to recover lost memories, one might expect that a digital “brain dump” would need to be stored (securely in the cloud?) before a soldier heads off to battle.
With both the SUBNETS and RAM programs, exciting new technologies and advancements might be possible. However, in the descriptions above there are a plethora of ethical, security, and privacy issues left unmentioned only for the reader to speculate upon. To address these sorts of issues that always exist on the leading edge of technological developments, DARPA has also established an Ethical, Legal and Social Implications Panel composed of academics, medical ethicists, clinicians and researchers to advise and guide the new programs as well as provide some form of independent oversight during their progress.
Looking at an image and seeing that something just isn’t quite right is always an intriguing experience. From past experience, we expect to see one thing, but often upon immediate observation we see something else quite different. Optical illusions demonstrate to us directly that reality is created by our perceptions of the environment and these perceptions are processed in our brain. So, maybe reality is just all in our heads?
“Reality is merely an illusion, albeit a very persistent one” – Albert Einstein
(a popular misquotation extracted from “For us believing physicists, the distinction between past, present and future is only a stubborn illusion.” Einstein: His Life and Universe by Walter Isaacson (2008), p. 540)
Classic examples of optical illusions include the floor tiling at the Basilica of St. John Lateran in Rome and the “flashing” grid illusion first reported by Ludimar Hermann in 1870. The twentieth-century artist M. C. Escher took the phenomenon to an artistic level and created some of the most popular and aesthetically interesting illusions, and many more optical illusions may be viewed with an image search.
In 2003, Akiyoshi Kitaoka, a professor of psychology at Ritsumeikan University in Kyoto, Japan, designed a new visual phenomenon called the peripheral drift illusion, or “Rotating Snakes” (read the original report, PDF). In this design, an apparent motion of the image is seen in the observer’s peripheral vision. The effect is strongest when the image contains clearly graduated sections of repeating, diminishing or increasing brightness, and these sections follow fragmented or curved edges. A variety of examples of the design can be previewed on Kitaoka’s website of Rotating Snakes.
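Those “graduating sections of repetitive brightness” can be made concrete with a toy sketch. Assuming, as described above, that the illusion rests on a short, asymmetric luminance sequence repeated around a curve, a minimal version of that sequence might look like this (the specific values and patch count are illustrative, not Kitaoka’s):

```python
import numpy as np

# Peripheral-drift patterns repeat a short, asymmetric luminance sequence
# (e.g. black -> dark -> white -> light) around a ring; the asymmetry
# biases apparent motion in one direction. Values here are illustrative.
def drift_ring(n_patches=24, steps=(0.0, 0.25, 1.0, 0.75)):
    """Return the luminance value for each patch around one ring."""
    seq = np.array(steps)
    reps = n_patches // len(seq) + 1
    return np.tile(seq, reps)[:n_patches]

ring = drift_ring()
```

The key property is that the sequence is not symmetric: stepping forward through the luminance values differs from stepping backward, which is what gives the pattern a preferred direction of illusory drift.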
This visual phenomenon has fascinated scientists with the challenge of explaining how our brains process the image. It was not until quite recently that an answer may have been experimentally discovered (“Microsaccades and Blinks Trigger Illusory Rotation in the ‘Rotating Snakes’ Illusion”, Otero-Millan, et al., The Journal of Neuroscience, 25 April 2012, 32(17): 6043-6051; doi: 10.1523/JNEUROSCI.5823-11.2012, Read the abstract). Researchers from the Laboratory of Visual Neuroscience at the Barrow Neurological Institute in Arizona, led by Dr. Susana Martinez-Conde, presented “Rotating Snakes” images to participants while recording their eye motion at high resolution. Previously, it had been presumed that the eyes were drifting during observation to create the apparent motion. However, they instead found that when the observers acknowledged motion in the images, their eyes were undergoing small rapid movements called microsaccades. These mini eye movements represent small jumps in a person’s gaze position that help to refresh the input on retinal receptors during the intentional fixation on an image (“Toward a model of microsaccade generation: The case of microsaccadic inhibition”, Rolfs, et al., Journal of Vision, August 6, 2008, vol. 8, no. 11, article 5, doi: 10.1167/8.11.5, Read the full-text PDF).
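For readers curious how such small, rapid movements are picked out of an eye-position recording, one common approach (a velocity-threshold scheme in the spirit of Engbert and Kliegl’s widely used method, and not necessarily the exact analysis of Otero-Millan et al.) flags samples whose gaze velocity rises well above the recording’s typical noise level:

```python
import numpy as np

# A hedged sketch of velocity-threshold detection: compute gaze speed
# from horizontal/vertical position traces, then flag samples whose
# speed exceeds a multiple of the trace's median speed. The parameter
# names and values here are illustrative.
def detect_fast_samples(x, y, dt=0.001, lamb=6.0):
    """Flag samples where gaze speed exceeds lamb x the median speed."""
    vx = np.gradient(x, dt)          # horizontal velocity (deg/s)
    vy = np.gradient(y, dt)          # vertical velocity (deg/s)
    speed = np.hypot(vx, vy)
    threshold = lamb * np.median(speed)   # robust noise-based threshold
    return speed > threshold

# Toy trace: slow drift with one brief, fast jump (a saccade-like event).
x = np.linspace(0.0, 0.01, 100)
x[50:] += 0.2
y = np.zeros(100)
flags = detect_fast_samples(x, y)
```

Only the two samples spanning the jump are flagged; the slow drift stays below the threshold, which is the same separation of fixational drift from microsaccades described in the study.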
It is quite amazing to gaze at an image that you consciously know is static, yet you unquestionably see an apparent animation. Your understanding of reality conflicts directly with your observation of reality. For a quick personal experiment to see if I could control this reality distortion, I was able to temporarily pause the motion with a very focused attempt to stare only at one corner of the Rotating Snakes image. As I let my focus shift just a bit, the rotation immediately re-appeared. It is only a guess as to whether I was inhibiting the microsaccades of my eyes, or whether I was positioning the image in some “peripheral blind spot” where the retinal receptors couldn’t receive the input from the eye motions. Nevertheless, I do still feel quite grounded in reality; however, I am reminded to maintain an appreciation of questioning what I directly perceive around me, as my brain will continue to work in ways that are beyond my conscious control.
We have just begun reading the recently released book, “Connectome,” from Sebastian Seung of MIT. The basic notion of the book is that you are the emergent result of the interconnections of some 100 billion neurons in your brain. “You are your connectome.”
This is not a novel idea at its most basic level; however, Dr. Seung is bringing this exciting hypothesis to a broader popular understanding, which will help future generations appreciate the utterly incredible mass of flesh lodged in our skulls.
Mapping the complete interconnections of neurons remains to this day a daunting task for neuroscientists, but a task in which Dynamic Patterns Research is particularly interested. It took decades of manual labor by White, et al. to map the mere 302 neurons in the wee little worm C. elegans. The complete structural architecture of its neuronal connections–its connectome–is now readily available for research and exploration. Now, imagine extending this task to the human brain, but plan on taking a 300+ million-fold leap that would necessarily require technological advances not yet fully realized.
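Computationally, a connectome like that of C. elegans is just a directed graph: neurons as nodes, synapses as edges. A toy sketch (the wiring below is made up for illustration, not real C. elegans data) of how such a map can be stored and queried:

```python
# Toy connectome: each neuron maps to the set of neurons it synapses
# onto. The neuron names and connections here are hypothetical.
connectome = {
    "AVAL": {"VA01", "DA01"},
    "VA01": {"DA01"},
    "DA01": set(),
}

def out_degree(net, neuron):
    """Number of neurons this neuron synapses onto."""
    return len(net.get(neuron, set()))

def in_degree(net, neuron):
    """Number of neurons that synapse onto this neuron."""
    return sum(neuron in targets for targets in net.values())
```

Even this tiny structure supports the kinds of questions connectomics asks at scale: which cells are hubs, what paths link sensory input to motor output, and how the wiring diagram constrains function.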
Despite this apparent impossibility, there is just something awesome about the human brain that makes us who we are, and we just have to plow forward and try to discover more. There’s something “in there,” or, something that emerges from what’s inside that especially sets us apart from all other known life on this planet. If we could tap into that “something,” then we might just have a better understanding of who we are as an organism. Tapping into the structure of our brains–our connectome–is the best place to start.
Watch Sebastian Seung’s TED Talk, “I am my connectome.” :: July 2010
It seems that we will have to wait patiently for technological advances–although they are arriving at accelerating rates–to give us the ability to efficiently map our personal connectomes. In the meantime, we do have an extraordinarily powerful tool that is ready today to help develop procedures for mapping neuronal connections in living brains: the brains of citizen scientists.
The laboratory of Sebastian Seung and its enthusiastic collaborators bring to the citizen science community the exciting opportunity to directly map interconnections in neural tissue. The online system, called eyewire, provides images from 3-D stacks of neuronal tissue from the retina and guides citizen scientists through a process of identifying connecting features. By visually evaluating two-dimensional cross-sections of tissue images created with electron microscopy, users work through the layers by recognizing connecting features between each image. The identified cross-layer features from the efforts of citizen scientists can then be reconstructed into a three-dimensional structural map of the neurons–and their connections–throughout the tissue.
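The cross-layer matching that users perform can be sketched algorithmically: given a neuron’s profile in one slice, find the labeled region in the next slice that overlaps it most, and chain those links through the stack. This toy version (not eyewire’s actual algorithm) captures the idea:

```python
import numpy as np

# Toy cross-slice linking: given a neuron's profile (boolean mask) in one
# slice, pick the labeled region in the next slice with the greatest
# overlap. Chaining such links through the stack builds a 3-D cell.
def link_slices(mask_a, labels_b):
    """Return the label in slice B whose region best overlaps mask_a."""
    best_label, best_overlap = None, 0
    for label in np.unique(labels_b):
        if label == 0:                     # 0 marks background
            continue
        overlap = int(np.sum(mask_a & (labels_b == label)))
        if overlap > best_overlap:
            best_label, best_overlap = label, overlap
    return best_label

# Two tiny 4x4 slices: region 1 in slice B continues our masked profile.
mask_a = np.array([[1, 1, 0, 0],
                   [1, 1, 0, 0],
                   [0, 0, 0, 0],
                   [0, 0, 0, 0]], dtype=bool)
labels_b = np.array([[1, 1, 0, 0],
                     [1, 0, 0, 2],
                     [0, 0, 2, 2],
                     [0, 0, 2, 0]])
```

Real electron-microscopy data is far noisier than this, which is exactly why human pattern recognition still outperforms automated linking and why the project recruits citizen scientists at all.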
A waiting list is currently in place to control the influx of interested citizen scientists, so Dynamic Patterns Research has not yet had the opportunity to test out the system (but, we are on “the list”!). We hope to be in soon so that we can participate in this great project that is at one of the core interests of Dynamic Patterns Research. If you are already participating now, please let us know what you think of the system.
Your brain is the most awesome thing in the Universe. We know so little about it, but we are on the cusp of a revolution in a new understanding of what it is and how it works. Now, citizen scientists can be an integral part in this revolution so that anyone can scientifically better know their inner self.
It’s difficult to know what you are thinking — or what is happening in your own brain — as you lose consciousness. There are many instances where this loss might happen, including getting whacked upside the head, inhaling a large volume of non-medically-inspired drugs, or, to the preference of many, falling into a deep sleep under anesthesia before an invasive operation.
Many research groups have studied the brain under the influence of anesthetic drugs, notably that of Stuart Hameroff at the University of Arizona. The brain seems to become almost numb and nearly shuts down entirely, enabling trained professionals to freely cut into the human body without the distraction of painful screams and cries for help from the patient. But this is a rather interesting phenomenon that is not entirely understood.
Directly watching the brain as it slips into unconsciousness would certainly be an interesting approach to solving not only the mysteries of anesthesia, but also to better understanding what it means for the brain to be conscious, or at least aware. Now, with a new observational technique developed at the University of Manchester, called functional electrical impedance tomography by evoked response (fEITER), an attempt is underway to create live views of the brain’s electrical activity as it shuts down from anesthetic drugs. With this near real-time recording, the research team, led by Brian Pollard, Professor of Anaesthesia at the University of Manchester, is hoping to learn more about the differences between an unaware and an aware brain and how these differences might lead to a better understanding of what the phenomenon of consciousness really is for human beings.
Notice, here, that a subtle change of words was made from “consciousness” to “awareness” and back again. This difference seems to be important, however, and should not be made lightly. A brain might be considered “aware” of its surroundings by responding to pain inflicted on its body, or to the intense colors and lights surrounding its head during a walk through Times Square in New York City. But a simple diode light sensor switching off an automatic garage door motor might also be considered “aware” of the puppy dog running through its beam just before the door touches ground.
So, what seems to be an additional specialty of humans is that our brains are more than just aware. There is something more to consciousness; something to being self-aware. Or, maybe not… we just don’t understand, yet. However, the real-time, three-dimensional electrical views generated by fEITER devices should provide some extremely interesting comparisons between the aware and unaware brain. Since our sensation of consciousness seemingly emerges from this awareness, understanding the electrical requirements for awareness is an important step toward understanding the neural correlates of consciousness.
“3-D Images Reveal What Happens as Brain Loses Consciousness” :: LiveScience :: June 10, 2011 [ READ ]
Much of the predicted future of neurotechnology is grounded in the continuing success and development of nanotechnology. This field is broad, for sure, and is even a primary target of the US Federal Government (see the NNI).
A particularly critical aspect, however, considers the development of nanoparticles. A great deal of research is already underway on developing very tiny capsules that will one day float around in our bodies and drop off exact doses of drugs to a specific cell. Or, pint-sized nanobots with full on-board electronics will maneuver through our circulatory system looking for tissues to repair, cells to manipulate, and observations to report back to the host.
The prospects for this sort of technology might be exciting, and even a little scary. But, what is really important to think about right now is how will the human body actually get along with the nano-invaders? Will our immune system run in overdrive to try to stop the little buggers? Will we have to force an evolutionary leap to develop new symbiotic relationships with metallic pellets that are only just trying to be beneficial to our survival?
Three researchers from North Carolina State University are addressing this important issue that must be resolved before any real human trials of nano-particle infestations are implemented. Dr. Jim Riviere, Dr. Nancy Monteiro-Riviere, and Dr. Xin-Rui Xia are collaborating to figure out a way to pre-screen a nanoparticle’s characteristics in order to predict how it will behave once inside the body.
As soon as any foreign object slips into the human body, our sophisticated immune system kicks into high gear. Everything that is native to a body is essentially key-coded with a biological pass that tells any immune response that “I’m OK to be here, thank you!” If something inside isn’t coded properly, then a rapid kill response is launched through a biochemical cascade of the complement system (learn more), which attacks the surface of unrecognized cells and objects with a variety of binding proteins.
This is certainly a natural response that we would not want to occur if we were voluntarily injecting ourselves with nanobots. The brain might be able to consciously will our hands and feet to move as we see fit, but our species has not yet figured out how to mentally control our internal processes (or, can we?). Until thought-invoked immune suppression is possible, it will be more useful to clearly understand the biochemistry of the interactions between nanoparticles and our tissues, and use this characterization to correctly modify the nano-stuff to stay functional while surfing in the blood stream.
“Predicting how nanoparticles will react in the human body” :: PhysOrg.com :: August 15, 2010 :: [ READ ]
The Open Source movement has been an integral part of software development for many years now, and it is starting to explode into the science world. The latest project might even transform brain science communication and understanding to a new level as the new Whole Brain Catalog is now available for anyone to access.
The brain is complicated. It is a biological network of connected cells so intricate that a complete visualization or map of the system has yet to be developed. Neuroscientists have been trying to find a way to create this map for many years, and advances in brain imaging have helped inch us closer toward this realization. The Whole Brain Catalog, from researchers at UC San Diego, is the latest attempt at constructing this map, and they are taking a little inspiration from Google.
The software integrates imaging data and models from anyone who is able to contribute. There is still so much to discover about the structure and function of the brain, and amassing this sort of information from everyone in an organized and visually integrated way could really bring about a revolution in the fundamental understanding of the human brain.
Researchers generating data can provide 2D images, 3D reconstructions, cellular morphologies, and even functional simulations that will all be integrated into the system’s catalog. Users, who may be anyone from neuroscience professionals to interested citizen scientists, can explore actual imagery of slices of the brain and wander around 3D models of brain regions all the way down to molecular structures.
A future goal of the WBC is to integrate its extensive data with the National Institutes of Health’s Neuroscience Information Framework (NIF), currently a growing online database of all web-based neuroscience resources. Anyone can register today with this open-source program through Neuinfo and search the extensive collection of neuroscience information.
Although the WBC sounds wonderful and exciting, the software is very much in an early, beta-testing stage. We have been trying to install the program here at Neuron News on a Dell laptop running an Intel Celeron at 2 GHz with 2 GB RAM, which is just below the minimum system requirements for the software (view system requirements). The software loaded up, but quickly took everything this little computer had to offer and crashed it hard.
If you have the computing resources to try out the latest release of The Whole Brain Catalog (download now), please comment here to let us know about what interesting images and simulations you discover. And, we would also appreciate if you could share your screenshots with Neuron News.
What this sort of software and world-wide open collaboration could also foster is a Zooniverse-inspired citizen science project. The team that started the Galaxy Zoo interface is continuing to help citizen scientists look “upward” with new projects to look at the Moon, Mars, galaxies, solar storms, and more. It would be just as exciting to offer people the opportunity to look “inward,” enabling citizen scientists to help identify and discover new things in the deluge of data coming from the neuroinformatics and neuroimaging fields. In fact, participation might increase well beyond the Zooniverse’s current 300,000-plus world-wide volunteers, since trying to figure out more about ourselves might be considered even more compelling than watching for a black hole so many light years away.
It is likely that some group has already begun the initial considerations for developing a citizen science-led, at-home discovery interface for the human brain. If not, then we would like to formally propose the idea here on Neuron News, and find out what you think about the possibility of creating an open platform allowing anyone to explore the human mind and help make scientific observations and discoveries by sifting through the increasing collection of brain images and models.
“3-D brain model could revolutionize neurology” :: MSNBC.com :: July 30, 2010 :: [ READ ]
This weblog is licensed under a
Creative Commons License.