Glia are brain cells that have traditionally been believed to play supportive roles to neurons, their more glamorous cousins. However, new research is beginning to suggest that their function is more complex, extending even to information processing. Both neurobiologists and computer scientists are ready to take advantage of this insight, as Sukanya explores in this article.
“The human brain has one hundred billion neurons, each neuron connected to ten thousand other neurons. Sitting on your shoulders is the most complicated object in the known universe.”
~ Michio Kaku, theoretical physicist
Neurons, or nerve cells, have commanded a lot of attention in the scientific study of the nervous system. However, the nervous system also comprises another set of cells called “glia” that are just as abundant as neurons (if not more so). Originally, glia were considered to function only as passive helpers to neurons. Now, however, these cells are pushing us to overhaul the perspective with which we view the architecture of the mind.
In recent years, scientists have begun trying to understand the crosstalk between neurons and glia and how they work together to develop the neural circuits in our brain. Although the study of neuron-glia interactions is still in its infancy, the picture is becoming clearer with time. The intricate nature of neuron-glia interrelationships can be seen in the connections between neuronal branches and those of astrocytes, a type of star-shaped glial cell. At these junctions, information is actively exchanged between neurons and glia.
The three basic properties of any information processing system are receiving a signal, processing it, and producing an output. We already know that neurons are the masters of all three. But can the same be said for glia?
A major question raised during the “glial revolution” concerned the electrical properties of glia, or lack thereof. Neurons are regarded as the main information-processing cells in the brain because their membranes can rapidly and reliably conduct electric currents with the help of an array of ion channels expressed on their surfaces. These channels act like pathways that allow the movement of ions in response to fluctuations in the electric potential difference across the membrane. These currents are, in turn, passed on to other neurons through junctions called synapses. Glial cells, on the other hand, express significantly lower levels of some of these key channels, suggesting that they lack this important property of electrical conductivity.
The past few decades have witnessed the advent of several new molecular biology techniques. Armed with this arsenal of modern tools, researchers are now able to directly record the electrical activity of individual cells. In the late 1980s and early 90s, researchers using calcium imaging observed that fluctuations in the levels of calcium ions in glial cells alter their electrical properties. This revolutionised our view of these cells. These changes in calcium ion levels, along with the ability to chemically communicate with neurons, make glia an undeniable component of neural circuits.
Computer scientists, too, were not oblivious to this increasing evidence supporting the importance of glia. Artificial Neural Networks (ANNs) have been in the picture since the 1950s. These are information processing systems that try to replicate the way our brain thinks by simulating biological networks. The earliest of these models were capable of producing a single output signal by analysing the inputs received, much like their living counterparts.
This paved the way for computational networks that could be “trained” to learn the relationship between an input and its output. Artificial Neural Networks today are composed of complex layers and webs of computational nodes, also known as “neurons”. These are connected in intricate ways and can process enormous quantities of data to pick out similarities or patterns, a feat previously thought to be unattainable by a machine.
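To make this concrete, here is a minimal sketch in Python of such an artificial neuron and a tiny two-layer network built from it. The sigmoid nonlinearity, layer sizes, and random weights are illustrative choices, not any specific published model.

```python
import numpy as np

# A single artificial "neuron": it weighs its inputs, adds a bias, and
# squashes the sum through a nonlinearity (here, a sigmoid) to produce
# one output signal.
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def neuron(inputs, weights, bias):
    return sigmoid(np.dot(weights, inputs) + bias)

# A tiny two-layer network: a "layer" is just many such neurons, and the
# outputs of one layer become the inputs of the next.
rng = np.random.default_rng(0)
x = rng.random(4)                                 # four input signals
W1, b1 = rng.normal(size=(3, 4)), np.zeros(3)     # hidden layer of 3 neurons
hidden = sigmoid(W1 @ x + b1)
output = neuron(hidden, rng.normal(size=3), 0.0)  # single output neuron
print(output)                                     # one number between 0 and 1
```

Training such a network amounts to nudging the weights until the output matches the desired answer for each input.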
Owing to the emerging roles of glial cells, efforts have been made to extend these ANNs to incorporate glial cells as well. Where does India stand in terms of research in this exciting field? To understand this, I had a conversation with V Srinivasa Chakravarthy, Professor at the Indian Institute of Technology (IIT) Madras, whose illustrious career has spanned over two decades. He entered this field as early as 2010, when his group proposed an “energy-matching” principle while studying blood flow to the brain.
Energy is delivered to neurons from blood vessels in the form of lactate, or lactic acid, passing through astrocytes along the way. During this process, the total energy that moves into the neuron should match the neuron’s energy demand; otherwise, the neuron’s normal functioning could be disrupted. The models developed by Chakravarthy’s team suggest that there must be some kind of learning going on in these networks, both at the astrocyte-blood vessel and the neuron-astrocyte levels. It was the first time anyone had proposed Hebbian learning (explained aptly by the adage “neurons that fire together wire together”) and plasticity (the ability to change dynamically) in such networks.
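The Hebbian rule itself is simple to state: a connection strengthens in proportion to how often the two units it links are active together. Here is a minimal sketch, assuming generic activity values and an illustrative learning rate; this is the textbook rule, not the specific equations of Chakravarthy’s models.

```python
# Textbook Hebbian update: "fire together, wire together".
# w    : current strength of a connection (e.g. an astrocyte-neuron coupling)
# pre  : activity of the sending unit
# post : activity of the receiving unit
# eta  : learning rate (illustrative value)
def hebbian_update(w, pre, post, eta=0.01):
    return w + eta * pre * post  # the weight grows when both units are active
```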
The focus on artificial neuron-glia models in India does not seem to be very widespread. According to Chakravarthy, research into artificial neural networks essentially started with simple computational models and took a few decades to move towards more complex models. However, glia were modelled in detailed biophysical terms almost from the very beginning, which may have resulted in a slow pace of research in this area, a situation akin to missing the forest for the trees.
Essentially, since glial cells had been overlooked for a long time, the biological principles that govern their function were not completely clear. However, a somewhat promising realization from neurobiology was that astrocytes, like neurons, can regulate the processing and transfer of information.
In an interdisciplinary study undertaken by a group of researchers at the Universidade da Coruña, Spain, in 2012, single “astrocytes” were added as modulatory units to individual neurons in an artificial neural network. These computational glia could change the relative weights of the input signals, the output signals, or both. This system was then applied to complex problems such as disease diagnosis. The network inherently relies on the properties of astrocytes, and research in this direction is currently ongoing.
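A minimal sketch of that idea, assuming a hypothetical astrocyte unit that watches a neuron’s recent outputs and rescales its weights; the thresholds and scaling factors here are illustrative, not those used by the Coruña group.

```python
import numpy as np

# An "artificial astrocyte" as a modulatory unit: it monitors how active its
# partner neuron has been recently and scales that neuron's weights accordingly.
def astrocyte_modulate(weights, recent_outputs, high=0.8, low=0.2,
                       boost=1.25, damp=0.8):
    activity = np.mean(recent_outputs)  # average recent firing of the neuron
    if activity > high:                 # persistently active: strengthen
        return weights * boost
    if activity < low:                  # persistently quiet: weaken
        return weights * damp
    return weights                      # otherwise, leave the weights alone
```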
What are the implications of these models for Artificial Intelligence (AI)?
Looking at it from a biological angle, Chakravarthy explains, “I like to think of them [glia] as conduits between blood vessels and neurons. Vessels are carriers of huge amounts of energy, but neurons don’t have the storage to support this quantity. So, several buffers would be required at each stage.”
In the context of AI, networks can be implemented both in software and hardware. Clever computer programs need a physical shell, such as a microchip, to be fruitful. Hardware implementation in these cases relies on numerous parallel computations to accelerate the performance of the software. This, in turn, makes energy conservation crucial to prevent the devices from overheating. Normally, in a chip, a single power line fuels all the circuits. But in a futuristic realization, we could perhaps have a branching power line, similar to blood vessels in the brain. To feed current from these “power trees” into local neuron circuits, we would need an intermediary to temporarily buffer and properly redistribute the energy. Digital equivalents of glial cells could serve as these buffers.
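As a toy illustration of that buffering role (every name and number here is hypothetical, not an existing hardware design), a glia-like unit could bank surplus current from the power tree and release it when the local circuit’s demand spikes:

```python
# Toy model of a digital "glial buffer" between a power line and a circuit.
class GlialBuffer:
    def __init__(self, capacity):
        self.capacity = capacity  # maximum energy the buffer can hold
        self.stored = 0.0

    def transfer(self, supplied, demanded):
        # Bank the incoming supply (up to capacity), then serve the demand.
        self.stored = min(self.capacity, self.stored + supplied)
        delivered = min(self.stored, demanded)
        self.stored -= delivered
        return delivered

buf = GlialBuffer(capacity=10.0)
for supply, demand in [(3.0, 1.0), (3.0, 1.0), (0.5, 4.0)]:
    print(buf.transfer(supply, demand))  # the banked surplus covers the spike
```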
“The only job is to look at them [glia] in simpler ways without getting lost in the microscopic details,” advises Chakravarthy. Obtaining a complete understanding of neuronal networks is a daunting feat. Neuron-glia interactions are even more puzzling given the paucity of available data and the increased complexity of the system. The role of glia in higher-order brain functions such as cognition also remains incompletely understood.
We can hope that as biological research in neuroscience advances, the computational evidence illustrating the relevance of neuron-glia interactions will gain ground in tandem. With advances in AI technology, we might even be able to successfully emulate the brain’s plethora of interactions someday.