Deep Roots

Information Overflow #1

The machines are waking up.

Sounds ominous if you, like me, are a product of the guns-blazing, Schwarzenegger-busting Terminator age. Or if you've ever pondered Kurzweil's Singularity. Or even casually browsed 4chan and found yourself repulsed by gory videos of Chinese factory workers getting mauled by lathe machines. We have been severely traumatized, across generations, by the fear of a long-foretold rise of the sentient machine. Today, while the tech world is oversaturated with ever better and faster LLMs and AI tools, the fear of an impending AGI doomsday looms large, with stalwarts like Geoff Hinton and Elon Musk weighing in. It seems utterly paradoxical that the world is simultaneously being driven more and more by AI, and growing more skeptical of it. Is it true, though? Do we really feel more skeptical towards AI today than a year ago? Two years? Five? Fifty?

On the off-chance that you've never noticed the log-based nature of our cognition, and indeed of most of Nature's systems, allow me to play the fiddle for a while. In Nature, collective behavior rules the game. Evolution is a massively distributed gene pool computing stuff over exponential scales. One mutation here that doesn't suck, and before you can spell "D-A-R-W-I-N", we have flourishing generations exhibiting that particular trait like a boss. Think of snowfall. Or cyclones. Or Covid-19, although it's better not to think too much about that one and get doomsday flashbacks again. Nearly everything that persists in Nature is inherently logarithmic. So Intelligence, being a particularly critical piece of the "What is Life" puzzle, must also be logarithmic? And indeed it is.

As we embark on this wild ride towards unmasking Information, we shall set some ground rules along the way. But first, back to the premise: logarithms.

I only truly understood what logarithms meant on my own, after being disappointed by the definitions from my teachers and the available books, and I wish to share that understanding here:

A Logarithm is a function that tells you what exponent a given number (the base) must be raised to in order to reach a target number.

So, log2(x) = y implies that to get x as a power of 2, you need to raise 2 to the exponent y: that is, 2^y = x. Now this is for numbers, but it amazingly works beautifully for variables too, specifically complex variables. Remember, every function can be imagined as a map, and can be composed into a hierarchical chain of functions. And when you apply the logarithm to a complex-valued function, the result is a stunning series of Riemann surfaces (check out Tristan Needham's fantastic book, Visual Complex Analysis). This is a deep rabbit hole that we might venture into some other time, but for now, the core idea is that Logarithms are fascinating. But how do they connect to Neural Nets, and to AI at large?
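As a quick sanity check on that definition, here it is in a few lines of Python (purely illustrative, using the standard library's `math` module):

```python
import math

# The base-2 logarithm asks: "2 raised to what exponent gives me this number?"
x = 8
y = math.log2(x)   # the exponent that turns 2 into 8

# The logarithm and the exponential undo each other:
assert 2 ** y == x

# The same question works for any base, via math.log(target, base):
assert math.isclose(math.log(81, 3), 4)  # 3**4 == 81
```

The inverse relationship (`2 ** y == x`) is the whole definition; everything else about logarithms follows from it.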

Neural Networks are essentially composable functions that transform inputs into desired outputs using weights and biases.

Weights: how strongly a particular input influences the output. Biases: a constant offset that shifts the threshold at which a neuron activates.
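To make those two terms concrete, here is a minimal sketch of a single artificial neuron, with toy numbers chosen purely for illustration:

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: a weighted sum of inputs, shifted by a bias,
    squashed through a sigmoid activation."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 / (1 + math.exp(-z))  # sigmoid maps any real number into (0, 1)

# A heavier weight makes its input matter more; the bias shifts
# the point at which the neuron "fires". (Toy values, illustrative only.)
out = neuron(inputs=[1.0, 0.5], weights=[2.0, -1.0], bias=0.5)
```

Stack layers of these, make the weights and biases learnable, and you have a neural network: a composable chain of simple functions.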

Neural Nets were initially conceived by McCulloch & Pitts as a model of Brain function, which evolved into the model for the Perceptron and ushered in the era of ML. The nature of the Brain has long been probed, and the broadly agreed picture is this:

The Brain is built up of deeply connected networks of Neurons that keep growing as the Brain learns.

Imagine a graph, with edges representing transformations and nodes representing data points. Now imagine a ton of such graphs, all deeply intertwined with each other like strings of LED lights the day before Deepavali. Bingo! You have a connectome: a graphical representation of the Brain's Neural connections. Look closely, and the entire hypergraph is a perfect representation of a logarithmic system. One Neuron fires, and the chain reaction leads to a whole bunch of Neurons firing, creating a spike in a section of the Brain. This is what a Neural Network aims to achieve: to mimic the emergent computational behavior of the brain using functional graphs.
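That chain reaction can be sketched in a few lines. The wiring below is entirely made up, a five-neuron toy, not a real connectome, but it shows how one firing neuron recruits a whole subnetwork:

```python
from collections import deque

# A toy "connectome": each neuron excites its downstream neighbors.
# (Hypothetical wiring, purely illustrative.)
connections = {
    "A": ["B", "C"],
    "B": ["D"],
    "C": ["D", "E"],
    "D": [],
    "E": ["A"],  # a feedback loop back to the start
}

def cascade(start):
    """Breadth-first spread of activation: one neuron firing
    recruits its downstream neighbors, each firing at most once."""
    fired, queue = set(), deque([start])
    while queue:
        neuron = queue.popleft()
        if neuron in fired:
            continue
        fired.add(neuron)
        queue.extend(connections[neuron])
    return fired

spike = cascade("A")  # one spark lights up the whole subnetwork
```

One node fires, and the activation fans out across the graph, which is exactly the multiplicative, cascading growth that makes logarithms the natural yardstick for such systems.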

The last pair of words is a bit confusing, so I'll devote a whole post (or a series) to it, but the essentials are really quite simple:

Life is Logarithmic. Computation is when Information is mutated. Learning is a core trait of any autonomous, intelligent system.

So that is why we must understand Information from the vantage point of Computation and Artificial Intelligence. I have my own reservations about this chase after AI, and I side with Yann LeCun's framing: Autonomous Intelligence. Mutating information is a core trait of all such systems, natural or artificial; how well they do it is what sets them apart.

Until next time!