Shyam Srinivasan

BrainAlgos

Complex Data, Clear Insight

BrainAlgos is an independent consultancy specialising in complex systems. We go beyond surface metrics to understand how and why a system behaves the way it does. Using a combination of data analysis, computational modelling, statistical reasoning, and first-principles thinking, we develop genuine insight into system behaviour. Not just what the data shows, but what it means and what to do about it. That insight is what makes the difference when the goal is to scale a system or adapt it to a new environment. Knowing which levers to press, and why they work, is what separates durable solutions from temporary fixes. We bring that depth to problems in data infrastructure, system design, and research, drawing on expertise that spans neural circuits, information theory, large-scale computation, and machine learning.

Work

01

Research

Published work in computational neuroscience with direct implications for AI and large-scale system design.

Neuroscience · AI · Systems Design

Scalable Neural Systems: Principles from Biology for Engineered Design

How the brain achieves scalable, resource-efficient architecture — and what that means for AI infrastructure design.


The brain is one of nature's most sophisticated examples of a scalable, resource-efficient architecture. In this line of research, conducted in affiliation with UC San Diego and the Salk Institute, we examined how biological neural systems achieve scalability, asking why the brain is designed the way it is and what general principles that reveals for any complex system operating under resource constraints.

The core insight is that evolution, working under finite energy budgets, converges on designs that are both maximally efficient and reusable across scales. Within a group of related organisms like mammals, development follows a shared program, meaning that the most effective way to change performance is simply to change system size. Critically, the environment can demand a change in size without giving the system time or opportunity to go through another full optimisation process. For the system to meet that demand without sacrificing efficiency, its architecture must already be scalable. Scalability is not a convenience. It is a prerequisite for adaptability in dynamic environments. This same logic applies to subsystems: the modular design of sensory circuits allows evolution to scale up hearing in bats or vision in humans by adjusting the size of the relevant module rather than redesigning it from scratch.

We studied this across three neural systems: visual, olfactory, and cerebellar. Demonstrating that their computational circuits follow a scalable design and that this produces predictable performance gains across scales required substantial empirical and analytical work: mapping circuit architecture, identifying the relevant computational elements, and establishing that the scaling relationships were consistent and non-trivial. Using power law analysis, statistical reasoning, and information theory, we then showed not just that these systems scale, but why they scale the way they do.
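
As a rough illustration of the kind of analysis this involves (not the published pipeline, and with synthetic numbers), a power-law relationship y = c·x^k can be characterised by estimating the exponent k as the slope of a linear fit in log-log space:

    import numpy as np

    # Synthetic example only: circuit "size" against a performance proxy,
    # generated from an assumed power law y = c * x**k with multiplicative noise.
    rng = np.random.default_rng(0)
    sizes = np.logspace(2, 6, 40)                  # e.g. neuron counts per circuit
    true_k, true_c = 0.75, 3.0
    perf = true_c * sizes**true_k * rng.lognormal(sigma=0.1, size=sizes.size)

    # A power law is linear in log-log space: log y = log c + k * log x,
    # so the exponent k is the slope of an ordinary least-squares fit.
    k_hat, log_c_hat = np.polyfit(np.log(sizes), np.log(perf), 1)
    print(f"estimated exponent k = {k_hat:.3f}, prefactor c = {np.exp(log_c_hat):.3f}")

Establishing that such a relationship is consistent and non-trivial takes more than a single fit, which is where the statistical reasoning mentioned above comes in.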

We also examined a subtler but equally important aspect of brain architecture: the role of non-neuronal cells. Glial cells are as numerous as neurons and actively support and modulate neural function. Like the support structure of any dynamical system, they are not peripheral. They are load-bearing. We showed that glial populations scale in proportion to neural circuit size. If they did not, the efficiency gains from scalable circuit design would be undermined, because the support infrastructure would become a bottleneck as the system grew. The analogy to engineered systems is direct: you cannot simply increase the number of GPUs to improve performance. Cooling systems, memory bandwidth, and interconnects have to scale proportionally too, or the gains evaporate.

The broader implication is that any complex system operating in a resource-constrained environment faces the same fundamental pressures that shaped the brain. The principles governing biological scalability (modularity, efficient coding, and proportional scaling of complementary components) are directly applicable to the design of AI systems, data centres, and large-scale engineered architectures. As the energy cost of AI infrastructure becomes a critical concern, these principles offer a rigorous, biologically validated framework for thinking about scalable and efficient system design. This work is documented across several publications.

Affiliation: UC San Diego · Salk Institute
Methods: Power law analysis · Information theory · Statistics
Status: Published

Neuroscience · Machine Learning

Discrimination Learning: How the Brain Separates Similar Experiences

A biological mechanism for learning finer distinctions over time — with direct implications for catastrophic forgetting in AI.


All animals display perceptual learning: two stimuli that are initially indistinguishable become separable through experience. This is a fundamental feature of biological intelligence, yet the mechanism behind it is poorly understood. We investigated this question using olfactory learning in both mammals and invertebrates, giving us a comparative window into whether the underlying mechanism is conserved across very different nervous systems.

We showed that variability in olfactory responses, rather than being noise to be eliminated, plays a functional role. Coupled with a recurrent negative feedback mechanism, this variability enables the system to progressively separate the representations of similar stimuli over time. The brain is not simply storing experiences. It is actively restructuring its internal representations through feedback and controlled randomness to make finer and finer distinctions.
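
As a purely illustrative toy (not the published model, and with arbitrary parameters), the interaction between response variability and recurrent negative feedback can be sketched numerically: two nearly identical representations, repeatedly subjected to feedback proportional to their shared activity plus a small amount of noise, become progressively less similar.

    import numpy as np

    rng = np.random.default_rng(1)

    # Two highly overlapping stimulus representations (illustrative only).
    a = rng.random(200)
    b = a + 0.05 * rng.standard_normal(200)   # nearly indistinguishable from a

    def overlap(x, y):
        return float(np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y)))

    print("initial overlap:", round(overlap(a, b), 3))

    # Each "experience": subtract feedback proportional to the shared component
    # (recurrent negative feedback) and add a little response variability.
    lr, noise = 0.1, 0.02
    for _ in range(50):
        shared = (a + b) / 2
        a = np.clip(a - lr * shared + noise * rng.standard_normal(a.size), 0, None)
        b = np.clip(b - lr * shared + noise * rng.standard_normal(b.size), 0, None)

    print("overlap after repeated experiences:", round(overlap(a, b), 3))

The point of the sketch is only that suppressing the shared component while letting variability act on what remains pushes the two representations apart over time.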

The implications extend directly into machine learning. One of the persistent challenges in AI is catastrophic forgetting: when a model learns a new category, it tends to overwrite what it previously learned, requiring the whole system to be retrained. The biological mechanism we identified suggests a different approach. By exploiting variability and using recurrent negative feedback to sharpen distinctions incrementally, a system can learn new categories without disturbing existing ones. This points toward more efficient and robust learning architectures that do not require retraining from scratch every time new information is introduced.

Affiliation: UC San Diego · Salk Institute
Methods: Recurrent negative feedback · Variability analysis
Status: Published

Neuroscience · Deep Learning

Bridging Insect Learning Circuits and Deep Learning

A simple, fully mapped biological circuit that outperforms large AI systems on certain tasks — and what it can teach us.


Deep learning systems have achieved remarkable results, but they require enormous computational resources and vast amounts of training data. Biological systems, by contrast, learn efficiently under tight resource constraints. The question is whether there are lessons from biology that could inform better machine learning architectures.

This review focused on one of the most tractable model systems in neuroscience: the fruit fly. The fly's learning circuit is small enough that the full set of neurons and their connections is known. The learning mechanism is understood. And yet the fly performs remarkably sophisticated learning tasks, including fine-grained sensory discrimination and flexible behavioural adaptation, that challenge even large-scale machine learning systems. It does all of this on a fraction of the energy budget.

The argument of the review was straightforward. Here is a system where we know exactly what is happening, why it works, and how it achieves efficiency. Rather than continuing to scale up machine learning systems by brute force, why not extract the computational principles that make the fly's circuit so effective and apply them to deep learning architectures? The review drew direct connections between the fly's learning mechanisms and open problems in deep learning, making the case that biological circuits at this scale are not just a curiosity but a genuine design resource.

Affiliation: UC San Diego · Salk Institute
Type: Co-authored review
Status: Published

AI · Clinical Research · Neuroscience

Early Detection of Alzheimer's Disease Using Hybrid Machine Learning

A novel hybrid algorithm achieving up to 80% accuracy on early-stage AD prediction — designed for small clinical datasets.


One of the central challenges in Alzheimer's disease research is predicting onset before symptoms appear. This is difficult because the disease is poorly understood at the causal level, making early indicators hard to identify. At the same time, a growing body of publicly available data now captures biomarkers and clinical symptom markers across different stages of the disease, creating an opportunity to trace how these markers evolve as the disease progresses.

A core technical obstacle is data scarcity. Traditional deep learning models require datasets on the order of millions of samples. A good clinical dataset in this domain might have 5,000 patients. This rules out standard approaches and demands something more efficient. We developed a hybrid algorithm that addresses this directly: an information maximisation step first identifies the most predictive biomarkers, which are then fed into a Naive Bayes classifier augmented with a gradient descent optimisation layer to weight the most promising features appropriately. In preliminary results the model achieves up to 80% accuracy on early-stage patient data.
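
A simplified sketch of the general shape of such a pipeline, using scikit-learn on synthetic data, is shown below. It only illustrates information-based feature selection feeding a Naive Bayes classifier; the actual model, features, and results are not reproduced here.

    from sklearn.datasets import make_classification
    from sklearn.feature_selection import SelectKBest, mutual_info_classif
    from sklearn.model_selection import cross_val_score
    from sklearn.naive_bayes import GaussianNB
    from sklearn.pipeline import make_pipeline

    # Stand-in for a small clinical dataset: few samples, many candidate markers.
    X, y = make_classification(n_samples=500, n_features=60, n_informative=8,
                               random_state=0)

    model = make_pipeline(
        SelectKBest(mutual_info_classif, k=8),   # information-based marker selection
        GaussianNB(),                            # evidence accumulation across markers
    )
    scores = cross_val_score(model, X, y, cv=5)
    print("cross-validated accuracy:", scores.mean().round(3))

As described above, the published algorithm then adds a gradient descent layer that weights the most promising markers, a step omitted from this sketch.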

A further limitation of most commercial models, including tools like Precivity, is that they operate on a fixed set of biomarkers. Our Bayesian core accumulates evidence across whatever markers are available, making it both flexible and agnostic to the specific inputs a patient presents. The next phase of the work focuses on extending this further, building a model that can generalise across varying biomarker types and data sources without requiring a fixed input schema.

This work was carried out as independent research funded in 2024, in association with the Neurolinx Research Institute and in collaboration with scientists at the Alzheimer's Disease Research Center at UC San Diego.

Affiliation: Neurolinx Research Institute · UCSD ADRC
Methods: Information maximisation · Naive Bayes · Gradient descent
Status: Ongoing · Paper in preparation

02

Consulting & Engineering

Software, systems and infrastructure work across two decades — from internet backbone hardware to AI-powered document analysis.

NLP · Cryptography · Document Analysis

Sensitive Information Detection and Selective Readership

A document intelligence system for a cryptography startup — from chemical manufacturing to Hollywood film production.


Developed a document analysis module for a cryptography startup. The module scanned documents for user-defined sensitive content: personal information, financial data, or any custom category. The first phase used rule-based detection tailored to specific industries where document structures were known and predictable. A second phase explored NLP and machine learning approaches for broader matching across unknown document types.

The same core technology was extended in two directions. First, selective encryption: flagging relevant text so only authorised readers could decrypt it. Second, selective readership: given a parameter such as a character name in a film script, the system would highlight every passage involving that character and mask the rest. This allowed film productions to share scripts with actors without revealing the full story. During development the target industries included chemical manufacturing and film production, reflecting how flexibly the system could be pointed at very different problems.
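
A minimal sketch of the selective readership idea is below, assuming a naive paragraph-level match on a character name rather than the rule-based and NLP matching the production system used; the script fragment and the name "Maya" are made up for illustration.

    import re

    def selective_view(script: str, character: str, mask: str = "[passage hidden]") -> str:
        # Sketch only: passages are paragraphs separated by blank lines, and a
        # passage "involves" a character if the name appears anywhere in it.
        name = re.compile(rf"\b{re.escape(character)}\b", re.IGNORECASE)
        passages = script.split("\n\n")
        return "\n\n".join(p if name.search(p) else mask for p in passages)

    # Made-up three-passage script fragment for illustration.
    script = "\n\n".join([
        "MAYA studies the console, whispering to herself.",
        "The alarm sounds. Everyone runs for the exit.",
        "MAYA stays behind and locks the door.",
    ])

    print(selective_view(script, "Maya"))

Only the passages involving the requested character survive; everything else is replaced by the mask, which is what lets a production share a script without revealing the full story.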

Type: Consulting
Methods: Rule-based detection · NLP · Machine learning
Industries: Chemical · Film production

HPC · Supercomputing · MPI

Kabru Supercomputer — Network Architecture and Resource Management

Core design team member for Kabru, at the time the second fastest supercomputer in India, supporting physics and weather research.


Member of the core design team that built Kabru at the Institute of Mathematical Sciences, then the second fastest supercomputer in India. This was pioneering work at a scale very few teams in the country were tackling at the time. Focused on network load balancing at the architectural level and developed MPI programs to manage resource allocation for high-performance computing, supporting research including weather forecasting simulations.
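
The original resource-management code was low-level cluster software, not Python, but the general pattern can be sketched with mpi4py (an assumption for illustration): the root rank partitions a queue of work units across ranks, each rank processes its share, and results are gathered back.

    # Minimal mpi4py sketch: round-robin distribution of work units across ranks.
    # Illustration only; not the original Kabru code. Run with, e.g.:
    #   mpirun -n 4 python schedule.py
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    if rank == 0:
        jobs = list(range(32))                          # work units to schedule
        chunks = [jobs[i::size] for i in range(size)]   # round-robin load balancing
    else:
        chunks = None

    my_jobs = comm.scatter(chunks, root=0)              # each rank receives its share
    my_results = [j * j for j in my_jobs]               # placeholder computation
    results = comm.gather(my_results, root=0)           # collect results at the root

    if rank == 0:
        done = sum(len(r) for r in results)
        print(f"completed {done} jobs across {size} ranks")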

Institution: Institute of Mathematical Sciences
Methods: MPI · Network load balancing · HPC architecture
Application: Physics · Weather forecasting

Embedded Systems · Networking · Infrastructure

Core Internet Infrastructure, Lucent Technologies

Senior Software Design Engineer on the CBX-500 ATM switches that formed the backbone of the early commercial internet.


At their peak, Lucent's ATM switches were among the dominant infrastructure in internet backbone routing, handling a significant portion of global traffic. As a Senior Software Design Engineer at Lucent, I worked on the T3 and E3 interfaces of the CBX-500 ATM Gigabit switches, the hardware that sat inside local service providers and formed the backbone connecting cities across the internet. My work involved writing low-level embedded software implementing TCP/IP and networking protocols for these switches, as well as diagnosing and resolving technical issues escalated by customers in the field. This was infrastructure work at the most critical layer of the early commercial internet.

Company: Lucent Technologies
Role: Senior Software Design Engineer
Methods: Embedded systems · TCP/IP · ATM networking

Get in touch

Available for consulting, research collaboration, and advisory work.