Roughly six million potassium ions (K+) must move from the inside of a brain cell to the outside to charge the cell membrane to -100 mV of potential (an equivalent of approx. 1 picocoulomb of electric charge).
If that sounds like a lot, think again: there are around 10^11 (100 billion) K+ ions in a single cell, tens of thousands of times more than needed to generate the -100 mV potential and hyperpolarize the cell, making it ready to carry the action potential, i.e., the signal.
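The numbers above follow from basic capacitor arithmetic (Q = CV, ions = Q/e). Here is a back-of-the-envelope check; the ~10 µm cell radius and the standard ~1 µF/cm² specific membrane capacitance are assumed ballpark values, not measurements:

```python
import math

E_CHARGE = 1.602e-19   # elementary charge, coulombs
SPEC_CAP = 1e-2        # specific membrane capacitance, F/m^2 (~1 uF/cm^2)
RADIUS = 10e-6         # assumed cell radius, meters
VOLTAGE = 0.1          # 100 mV of membrane potential

area = 4 * math.pi * RADIUS ** 2   # membrane surface area, m^2
capacitance = SPEC_CAP * area      # ~12.6 pF
charge = capacitance * VOLTAGE     # ~1.3 pC
ions = charge / E_CHARGE           # ~8 million K+ ions

total_k = 1e11                     # K+ ions in the cell (from the text)
print(f"charge ~ {charge * 1e12:.1f} pC, ions ~ {ions / 1e6:.0f} million")
print(f"fraction of the cell's K+ moved: {ions / total_k:.2%}")
```

For these assumed values the result lands in the single-digit millions of ions, around 0.01% of the cell's potassium, which is why the membrane can fire again and again without depleting its ion stores.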
There are around 86-100 billion neurons packed into a human skull, plus countless other processes involving other ions, molecules, neurotransmitters, proteins of various shapes and properties, and other cell types. The complexity of the brain's machinery is staggering, with a vast number of events happening every second.
Like it or not, the brain is a machine. An electric device. Many conditions arise when something doesn't work as it should: structural changes in psychopaths' brains, several potential causes in the brains of people with schizophrenia, autism, dementia, personality changes caused by tumors, accidents, and so on. There's a mountain of evidence suggesting a direct relationship between neuropathology and behavior. (Note that we're focusing on neuropathology rather than psychopathology and the hundreds of conditions described in the DSM.) We don't even have to point to pathological conditions: in many experiments on neurotypical brains, for example, electrical current applied to a particular brain region has triggered hallucinations. Every single aspect of our perceived reality can be linked to activity in specific brain regions.
There's no software or hardware as we know them from computers. No programming language to program a specific neural circuit to perform a given task. No single region is responsible only for storage or only for computation. There are no fully connected hidden layers as we know them from deep learning or MLPs. And yet the brain's ability to reason and adapt is far beyond what we can do with AI today. How does the brain do it? Is it sheer complexity from which these tremendous capabilities arise? In fact, what is complexity? Perhaps it's not the complexity itself, but rather the core mechanisms from which complexity can emerge, that give us the adaptability and the capacity to reason and model the world around us. After all, the brain is locked in darkness and silence its whole life. It relies only on input signals from the eyes, ears, etc. to understand the world of which it is part. To adapt is to survive.
Indeed, many mysteries remain, but neuroscience is systematically and analytically uncovering the brain's inner workings. New discoveries keep bringing us closer and, in fact, require us to change how we perceive ourselves.
Perhaps you've heard of some of the fantastic GPT-3 results (if you don't know GPT-3, it's OpenAI's pre-trained language model). Undoubtedly, the results have been excellent so far. AI still has a long way to go, though. It is the reasoning, adaptability, and constant adjustment of the model that we need to focus on. The 175 billion parameters and 300 billion training tokens of GPT-3 are impressive statistics. Indeed, one could argue it is precisely such complexity that leads to more and more intelligent results: just increase the number of parameters a thousandfold to reach numbers similar to the synapse count of the human brain, and we will have super-intelligent machines. Well, we don't believe so.
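The scale comparison is easy to verify. A thousandfold increase of GPT-3's parameter count does land inside the commonly cited range of 10^14 to 10^15 synapses in a human brain (the synapse estimates are rough figures from the literature, not exact counts):

```python
GPT3_PARAMS = 175e9                        # GPT-3 parameter count
SCALE = 1000                               # the thousandfold increase
SYNAPSES_LOW, SYNAPSES_HIGH = 1e14, 1e15   # commonly cited estimate range

scaled = GPT3_PARAMS * SCALE               # 1.75e14 "parameters"
print(f"175B params x {SCALE} = {scaled:.2e}")
print(SYNAPSES_LOW <= scaled <= SYNAPSES_HIGH)  # True: inside the range
```

The arithmetic checks out; the point of the article is that the analogy between a parameter and a synapse does not.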
The brain is not static or pre-trained. Neuroplasticity plays a crucial part in our intelligence. Our brains continually rewire themselves to adjust. They learn from far fewer examples. They filter noise more efficiently (GPT learns from the internet, Wikipedia, and millions of web pages, where the "informational" noise is significant). They actually model the world continuously and can reason about it. GPT-3 can't.
In many things, it's the computer that wins.
Every coin has two sides. Let's be fair: computers are much better at calculating difficult math problems, storing information, and processing it quickly.
As we learned in previous blog articles, we're not perfect (false memories, for example).
The speed of action potential propagation in the human brain is much slower than signal speeds in a microchip. A typical action potential travels through the brain at about 1 m/s. That's sufficient for local computation, and myelination (produced by specialized glial cells: oligodendrocytes in the central nervous system and Schwann cells in the peripheral one) helps a lot. However, it's still not enough for longer distances, such as a meter or so. The nodes of Ranvier act as action potential amplifiers, regenerating the signal and boosting conduction speed up to around 100 m/s, which is still far slower than signals in microchips. Sure, it is the brain's massive parallelism that is responsible for our superior abilities; nonetheless, there are many tasks where the sheer power and speed of computers outperform humans.
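To make the gap concrete, here is the travel time over one meter at the speeds discussed above. The 2e8 m/s figure for electrical signals in a chip is an assumed ballpark (roughly two-thirds of the speed of light in vacuum), not a measured value for any particular hardware:

```python
DISTANCE = 1.0  # meters
speeds = {
    "unmyelinated axon": 1.0,   # m/s, typical slow conduction
    "myelinated axon": 100.0,   # m/s, saltatory conduction
    "signal in a chip": 2e8,    # m/s, assumed ~0.66 c
}
for name, v in speeds.items():
    # time to cover one meter, in milliseconds
    print(f"{name:>18}: {DISTANCE / v * 1e3:.6f} ms")
```

A full second versus ten milliseconds versus a few nanoseconds: eight orders of magnitude separate the myelinated axon from the chip, which is why the brain must compensate with parallelism rather than raw speed.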
At BotX, we try to focus on the core mechanisms from which complexities of various kinds can arise. Let's be clear here: we believe that DL, DCN, DCGAN, LDA, LSA, SVM, and all the widely used machine learning algorithms (generally speaking, the AI of today) have achieved absolutely fantastic results. Those results focus on very specific tasks, though.
One of the instruments of our research is our Cognitive Diagram technology. In short, it's a hypergraph, at least at the moment. What we are working toward is for the C.D. to become a self-forming hypergraph, in which a technology such as deep learning, concerned with a specific task, is represented by only a single node of the self-adjusting hypergraph.
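A hypergraph generalizes a graph by letting one edge join any number of nodes, not just two. The following is a minimal illustrative sketch of the idea; the actual Cognitive Diagram structure is not described in this article, so the class and node names here are hypothetical:

```python
from collections import defaultdict

class Hypergraph:
    """Nodes connected by hyperedges; each hyperedge may join any number of nodes."""

    def __init__(self):
        self.edges = {}                     # edge id -> set of member nodes
        self.node_edges = defaultdict(set)  # node -> ids of edges containing it

    def add_edge(self, edge_id, nodes):
        """Register a hyperedge joining all of `nodes` at once."""
        self.edges[edge_id] = set(nodes)
        for n in nodes:
            self.node_edges[n].add(edge_id)

    def neighbors(self, node):
        """All nodes sharing at least one hyperedge with `node`."""
        out = set()
        for eid in self.node_edges[node]:
            out |= self.edges[eid]
        out.discard(node)
        return out

# Hypothetical example: task-specific modules as nodes, one hyperedge per context
hg = Hypergraph()
hg.add_edge("e1", ["vision", "language", "planning"])
hg.add_edge("e2", ["planning", "motor"])
print(sorted(hg.neighbors("planning")))  # ['language', 'motor', 'vision']
```

In a self-forming version, edges like `e1` and `e2` would be created and dissolved by the system itself rather than added by hand; the sketch only shows the static data structure.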
We believe this is a better conceptualization of what the brain does. We don't try to reverse-engineer the brain, nor do we claim we will achieve AGI by any means. It's simply the path we see as the fittest for the future of AI, and for our present and future applications and projects.