Dots are connected...
I aspire to work on building blocks of AGI through unconventional AI inspired by how the human brain works. I build systems across deep learning, LLMs, and multi-agent approaches, and I bring hardware development and chip design thinking because the compute substrate shapes what architectures are practical.
Philosophy: how I think AGI gets built
My core thesis is opinionated on purpose. It is meant to guide research and engineering choices.
1) AGI will not come from backprop and gradient descent alone.
I believe the ultimate AGI will come from an AI that does not use backprop and gradient descent as the primary way to learn.
I care about learning where the mechanism is not just optimizing weights, but inventing structure, rules, and memory that evolves.
2) Topology and neuromorphology matter more than weights.
I am drawn to AI where the system is driven by the topology of the neural network and its neuromorphology rather than by the individual weights.
In that view, architecture is not a container for learning. Architecture is a source of intelligence.
3) Yoneda Lemma as a design instinct.
I am inspired by the Yoneda Lemma as a way to think about representation: a concept can be understood by how it relates to everything else.
I carry that into AI design as “meaning is the map of relationships and transformations”.
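Stated precisely (this is the standard category-theory result, not something specific to my work): for a functor F from a locally small category C to Set, the natural transformations out of the hom-functor of an object A correspond exactly to the elements of F(A):

```latex
\mathrm{Nat}\left(\mathrm{Hom}_{\mathcal{C}}(A,-),\, F\right) \;\cong\; F(A)
```

In particular, taking F to be another hom-functor shows that two objects relating to everything else in the same way are isomorphic: the formal version of "a concept is its relationships".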
4) Invent theory around intelligence.
I believe we need to invent theory around intelligence: a way to measure a fundamental unit of intelligence and the operations around it.
For me, intelligence is not knowing what a concept is made of. It is the concept as a node in a space that is highly connected to other nodes,
and it is this connectedness that defines intelligence. This is a key reason why LLMs are magically good at many tasks.
5) Memory as a first-class foundation.
I am interested in AI that is based on memory. I am immediately interested in merging memory with LLMs.
Long-term memory, editable memory, and structured memory are not optional features. They are part of the architecture.
6) Predictive and corrective general intelligence.
A general intelligence architecture must be not only predictive but also corrective. I want systems that revise,
reconcile, and repair their internal state when new evidence appears.
7) LLMs are calculators compared to computers.
I believe that LLMs are merely calculators compared to full computers, and we still have a lot to achieve in AI.
The next layer is memory, correction loops, symbolic structure, and robust execution.
If you are collaborating with me
I like projects that turn philosophy into buildable systems: prototypes, measurable artifacts, and iterations toward a stronger architecture.
- Start from a strong systems model: data, memory, loops, failure modes.
- Prefer explainable boundaries and debuggable representations.
- Keep hardware realities in the conversation early.
- Bias toward prototypes and iterative validation.
What I have built (and what these systems taught me)
Below is a curated view of my work themes, written for collaborators.
Neuro-Symbolic Explainable AI (GIPCA-driven)
Gigantic ANN models can produce strong predictions when datasets exist, but they often behave like black boxes. In healthcare, AI can play a big role, yet it is hard to rely on fully black-box models for critical decision making. Symbolic AI is an alternative, but it often becomes a rule engine that struggles to scale.
I worked on merging statistical AI (ANNs) with symbolic AI through architectures that break a large model into multiple stages, where each boundary produces a valid symbolic representation. This supports explainability and also helps with minimal-data training, which makes it possible to learn on higher-order datasets like paragraphs and stories without building massive labeled sets.
This work is based on an AI architecture called GIPCA (General Intelligence Predictive and Corrective Architecture). BISLU (Brain Inspired Spoken Language Understanding) is built using the GIPCA architecture.
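The staging idea can be sketched as follows. This is an illustrative toy, not the actual GIPCA implementation: the stage names, the dictionary-based state, and the completion logic are all assumptions for demonstration. The point is that every stage boundary emits a debuggable symbolic record.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class SymbolicRecord:
    """A human-readable representation emitted at each stage boundary."""
    stage: str
    symbols: dict

def run_staged(stages: List[Callable[[dict], dict]], inputs: dict) -> List[SymbolicRecord]:
    """Run sub-models in sequence, capturing a symbolic record at each boundary."""
    trace, state = [], inputs
    for fn in stages:
        state = fn(state)                                        # statistical sub-model (stub here)
        trace.append(SymbolicRecord(fn.__name__, dict(state)))   # valid symbolic boundary
    return trace

# toy stages: utterance -> words -> entities -> thought
def words(x):    return {"words": x["utterance"].split()}
def entities(x): return {"entities": [w for w in x["words"] if w.istitle()]}
def thought(x):  return {"thought": ("mentions", x["entities"])}

trace = run_staged([words, entities, thought], {"utterance": "Aspirin twice daily"})
```

Because each boundary is symbolic, a failure can be localized to one stage, and each stage can be trained on far less data than an end-to-end black box would need.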
Universal NLU: thought-level outputs, not intent labels
Conventional NLU is often an intent-classification model with a small, fixed set of intents, which limits a computer's ability to understand a human the way another human would.
Universal NLU takes a spoken utterance stream as input and generates Human Thought Representations as output. If the utterance is in-domain, it produces high-resolution thoughts; if it is out-of-domain, it produces low-resolution thoughts. Universal NLU stays always active and keeps extracting information for downstream processing.
Universal NLU separates language-specific syntactic structures and semantic meaning so it can be adapted to any spoken language.
Streaming NLU: meaning accumulation in real speech
Speech-to-text outputs are often continuous streams rather than neatly segmented sentences. Extracting intents from streaming audio is challenging, especially when people speak naturally, with half-finished phrases.
Existing solutions sometimes require new user behaviors (pauses, wake words) or depend heavily on punctuation inferred by speech engines. My work focuses on a hybrid approach using pauses, meaning accumulation, and Centom theory. The meaning accumulation engine accumulates sub-intents and forms actionable intents on top of strong clinical NER.
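A minimal sketch of the accumulation idea. The slot schema and the completion rule here are hypothetical stand-ins, not the production engine: sub-intents collect across the stream until a completion condition fires and an actionable intent is emitted.

```python
class MeaningAccumulator:
    """Accumulates sub-intents from a stream; emits an intent when complete."""

    def __init__(self, required: set):
        self.required = required   # slots an actionable intent needs
        self.slots = {}

    def feed(self, sub_intent: str, value: str):
        self.slots[sub_intent] = value
        if self.required <= self.slots.keys():   # all required slots filled
            intent, self.slots = dict(self.slots), {}
            return intent                        # actionable intent
        return None                              # keep accumulating

acc = MeaningAccumulator({"drug", "dose"})
assert acc.feed("drug", "aspirin") is None       # half-finished phrase: wait
intent = acc.feed("dose", "75 mg")               # second fragment completes the intent
```

In a real system the sub-intents would come from clinical NER over the stream, and the completion condition would be far richer than a fixed slot set.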
Centom: segmenting voice streams using atomic connected entities
Spoken language is messy and often does not follow written grammar, which makes segmentation hard. Centom is a method to segment spoken-language streams by identifying small syntactic chunks of connected entities that are also semantically correlated. I call these chunks Centoms (Atomic Connected Entities).
Centom helps break continuous streams into segments while preserving local semantic information in each segment. Those segments form a strong symbolic base for neuro-symbolic AI systems.
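As a toy illustration of the segmentation behavior (the closing rule and the entity tagging below are simplified assumptions, not the actual Centom algorithm): a chunk closes once a run of non-entity tokens follows its last entity.

```python
def centom_segments(tagged, gap=2):
    """Segment a (token, is_entity) stream into entity-connected chunks.

    A chunk closes once `gap` consecutive non-entity tokens follow its
    last entity; trailing filler tokens are trimmed from the chunk.
    """
    segments, current, misses = [], [], 0
    for token, is_entity in tagged:
        if is_entity:
            current.append(token)
            misses = 0
        elif current:                            # only track filler inside a chunk
            current.append(token)
            misses += 1
            if misses >= gap:
                segments.append(current[:-gap])  # trim trailing filler
                current, misses = [], 0
    if current:
        segments.append(current[:-misses] if misses else current)
    return segments

stream = [("the", False), ("left", True), ("knee", True),
          ("is", False), ("um", False), ("sharp", True), ("pain", True)]
segments = centom_segments(stream)
```

Each emitted segment keeps its local entities together, which is what makes it usable as a symbolic unit downstream.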
Speech-to-Text acoustic model: lightweight, accurate, medical-ready
Many ANN-based STT models are accurate but heavy, or lightweight but less accurate. Medical STT also suffers from dataset gaps for jargon and context.
I tackled the first part by building a custom ANN-based lightweight, high-performing STT system designed for resource-constrained environments. Some parts are inspired by Facebook’s Wav2Letter and Google’s Inception V3 network. Wav2Letter helps keep the model lightweight, while Inception-style parallel branches improve performance by looking at different receptive fields in parallel.
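The parallel-branch idea can be sketched with plain NumPy (toy smoothing filters, not the actual model's learned kernels): several 1D convolutions with different kernel widths observe different receptive fields over the same feature sequence, and their outputs are stacked.

```python
import numpy as np

def inception_style_branches(x, widths=(3, 5, 7)):
    """Apply parallel 1D filters of different widths and stack the outputs.

    Each branch sees a different receptive field over the same sequence,
    mirroring the Inception idea of parallel multi-scale feature extraction.
    """
    branches = [np.convolve(x, np.ones(w) / w, mode="same") for w in widths]
    return np.stack(branches)                    # shape: (num_branches, len(x))

features = np.random.default_rng(0).standard_normal(128)  # toy acoustic features
out = inception_style_branches(features)
```

In the real model the branches are learned convolutions and the concatenated output feeds the next layer; the sketch only shows the multi-receptive-field structure.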
I addressed the dataset issue by training with proprietary medical recordings, tuned for real hospital background noise. I also worked on detection of noise and other languages so resources can be optimized on unwanted audio streams.
Thought Representation and ETML (Extended Thought-Representation Mark-up Language)
ETML serves to represent "thought" in a thought cloud, a graphical representation of the thoughts in a conversation. A human thought correlates multiple entities in the physical universe along with their properties. Thoughts can also be imaginary, yet still inspired by our world.
Thought Representation is a rich graphical structure that a computer can understand. Thought Cloud is a collection of thoughts representing conversation context, story context, or any chunk of meaning. In BISLU, Thought Cloud encompasses complete conversation context.
ETML is a textual representation of those graphical structures so engineers can debug and modify easily. ETML also helps create datasets for converting text into thoughts and thoughts back into coherent text.
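To make the idea concrete, here is an illustrative stand-in (the actual ETML schema is not shown here; the tag names and attributes below are invented for the example): a thought node linking entities with properties, serialized to a markup string an engineer can read and edit.

```python
from dataclasses import dataclass, field

@dataclass
class Thought:
    """A toy thought node: entities linked by a relation, with properties."""
    relation: str
    entities: list
    properties: dict = field(default_factory=dict)

def to_etml_like(thought: Thought) -> str:
    """Serialize a thought node into an ETML-like textual form for debugging."""
    props = "".join(f' {k}="{v}"' for k, v in thought.properties.items())
    body = "".join(f"<entity>{e}</entity>" for e in thought.entities)
    return f'<thought relation="{thought.relation}"{props}>{body}</thought>'

t = Thought("experiences", ["patient", "headache"], {"severity": "mild"})
markup = to_etml_like(t)
```

The round trip matters: the same textual form that a human debugs can also be parsed back into the graph, which is what makes dataset creation in both directions (text to thoughts, thoughts to text) practical.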
Real-time visual context de-referencing and entity enrichment
In real conversations, people often convey intent via gestures. In patient consultations, missing these cues is costly. I worked on a deep learning based approach that understands visual cues in real time in a patient-doctor conversation.
The algorithm detects the moment a gesture carries meaning (for example, pointing to a body part), extracts a frame, identifies the body part, and feeds it into Universal NLU. These cues help dereference and co-reference entities, and enrich entities with extra information such as severity of pain or discomfort.
The model runs in real time to enhance BISLU system capability and improve predictions.
Spiking Neural Networks, neuromorphology, and brain-inspired simulation
Studying spiking neural networks and simulating behavior from actual neuron morphology of the human neocortex is an active area I pursue. I simulated up to a 50 cubic millimeter volume of human neocortex driven by audio and video perceptual inputs, to understand correlations among perceptual features.
I used NeuroMorpho.Org for actual human neuron SWC models, built 3D models to simulate synapses, and used the Brian2 simulator to simulate the models.
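The actual simulations used Brian2 with NeuroMorpho.Org SWC morphologies; as a self-contained illustration of the spiking dynamics involved, here is a minimal leaky integrate-and-fire neuron (parameters are illustrative, not fitted to any morphology):

```python
def simulate_lif(current, dt=1e-4, tau=0.02, v_rest=-0.070,
                 v_thresh=-0.050, r_m=1e8):
    """Leaky integrate-and-fire: integrate input current, spike, and reset.

    current: input current per time step (A); returns spike times (s).
    """
    v, spike_times = v_rest, []
    for step, i_in in enumerate(current):
        v += (dt / tau) * (v_rest - v + r_m * i_in)   # leaky integration
        if v >= v_thresh:
            spike_times.append(step * dt)             # record spike time
            v = v_rest                                # reset after spike
    return spike_times

# a constant 0.3 nA input for 100 ms drives regular spiking
spikes = simulate_lif([3e-10] * 1000)
```

Brian2 expresses the same membrane equation declaratively and scales it to large populations; the sketch only shows the single-neuron dynamics.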
Chip design and hardware (highlight)
I bring hardware development and chip design experience, and I connect it directly to AGI architecture choices.
I have worked across hardware development and chip design, alongside software and AI systems work. This matters because compute, memory, and noise properties shape what kinds of intelligence architectures are feasible.
I keep this section focused on collaboration value:
- Hardware-software co-design for latency, throughput, and power.
- Architecture thinking for AI acceleration: what should be computed, stored, and reused.
- System tradeoffs: memory bandwidth, precision, noise, and reliability.
- Partnering on the path from prototype to silicon reality.
For the specific chip design work list, please refer to my LinkedIn profile: linkedin.com/in/blusingh.
Why I care about analog directions
I am interested in moving away from digital-chip-based AI toward analog-chip-based AI. Digital systems are designed to be deterministic and intolerant of noise and error. Human intelligence can accept errors and still function robustly.
I am interested in architectures that can use analog MACs rather than only digital MACs, and systems that treat approximation and correction as normal behavior. This is a research and engineering direction I want to expand with collaborators.
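A quick numerical sketch of why this can work. The noise model here (independent 1% Gaussian error per analog product) is an assumption, not a measurement of any device: per-product errors largely cancel in the accumulation, so the aggregate stays close to the exact dot product.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.uniform(0.0, 1.0, 256)   # toy weight vector
x = rng.uniform(0.0, 1.0, 256)   # toy activation vector

exact = w @ x                                              # ideal digital MAC
noisy = np.sum(w * x * (1 + rng.normal(0.0, 0.01, 256)))   # 1% noise per analog product

relative_error = abs(noisy - exact) / exact
```

With 256 accumulated terms, independent per-product errors shrink roughly with the square root of the vector length, which is the statistical intuition behind tolerating imprecise analog multipliers.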
Key patents
A few of my patents that reflect the systems and architectures I have built.
- AUTOMATED SYSTEM FOR THE DIGITIZATION AND ANALYSIS OF THE HAND-WRITTEN MEDICAL RECORDS (IN 537504)
- BRAIN INSPIRED SPOKEN LANGUAGE UNDERSTANDING SYSTEM (BISLU), A DEVICE FOR IMPLEMENTING THE SYSTEM AND METHOD (USPTO 11756540)
- GENERAL INTELLIGENCE PREDICTIVE AND CORRECTIVE ARCHITECTURE (GIPCA) BASED SYSTEM FOR INFERRING PHONETIC BASED WORDS (USPTO 11615786)
- GENERAL INTELLIGENCE PREDICTIVE AND CORRECTIVE ARCHITECTURE (GIPCA) BASED SYSTEM TO CONVERT HUMAN THOUGHT REPRESENTATION INTO COHERENT STORIES AS THOUGHT CLOUD (USPTO 11314949)
- GENERAL INTELLIGENCE PREDICTIVE AND CORRECTIVE ARCHITECTURE (GIPCA) BASED SYSTEM FOR INFERRING HUMAN THOUGHT REPRESENTATIONS (USPTO 11314948)
- AN INTELLIGENT FOOTFALL ANALYSIS IN HOSPITALS (IN 201841042944)
- DETECTING AND ADDRESSING REVENUE LEAKAGE IN HOSPITALS (IN 201841029770)
- INFERENCING AN ADVERSE HEALTH CONDITION OF A PATIENT (IN 201841035724)
What I want to build next
These are collaboration-ready directions. If you want to build one of these, I want to talk.
1) Symbolic RAG using thought representations (BISLU and ETML-inspired)
I am immediately interested in symbolic representation of knowledge such as conversations, articles, and complex documents, using thought representation and BISLU work.
This can be used to parse complex knowledge bases for better RAG, which I call Symbolic RAG. The aim is retrieval grounded in symbolic thought structures instead of only chunked text.
- Better traceability and debuggability for retrieval decisions.
- Stronger handling of complex narrative structure and long context.
- Cleaner interfaces for correction loops and memory updates.
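A minimal sketch of the retrieval side (the graph schema and hop-based traversal are illustrative assumptions, not the BISLU/ETML data model): retrieval walks the entity graph outward from the query, and every hit carries the entity path that justified it.

```python
def symbolic_retrieve(graph, query_entity, hops=2):
    """Retrieve thoughts reachable from a query entity within `hops` links.

    graph: dict mapping entity -> list of (related_entity, thought_id).
    Returns (thought_id, entity_path) pairs, so every retrieval decision
    is traceable back through the symbolic structure.
    """
    frontier, seen, results = [(query_entity, [query_entity])], {query_entity}, []
    for _ in range(hops):
        next_frontier = []
        for entity, path in frontier:
            for neighbor, thought_id in graph.get(entity, []):
                results.append((thought_id, path + [neighbor]))
                if neighbor not in seen:
                    seen.add(neighbor)
                    next_frontier.append((neighbor, path + [neighbor]))
        frontier = next_frontier
    return results

graph = {
    "aspirin": [("headache", "t1")],
    "headache": [("migraine", "t2")],
}
hits = symbolic_retrieve(graph, "aspirin")
```

Contrast with chunked-text RAG: here the second hit ("t2") is found through a two-hop relationship that plain embedding similarity over chunks could easily miss, and the returned path documents why it was retrieved.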
2) Memory + LLMs now, then analog-memory-powered AI
I am interested in combining memory with LLMs immediately, then moving toward analog memory powered AI. My focus is persistent memory that changes behavior over time, supports correction, and can be inspected and revised.
- Long-term memory that outlives sessions and contexts.
- Editable memory with provenance and correction loops.
- Memory as a substrate, not a feature.
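A minimal sketch of what "editable memory with provenance" means in practice (the interface is hypothetical, invented for this example): every write keeps its source, and a correction supersedes a belief without erasing the audit trail.

```python
import time

class EditableMemory:
    """Key-value memory where every write records provenance and history."""

    def __init__(self):
        self._store = {}   # key -> list of (value, source, timestamp)

    def write(self, key, value, source):
        self._store.setdefault(key, []).append((value, source, time.time()))

    def revise(self, key, value, reason):
        """Correction loop: supersede the current belief, keep the trail."""
        self.write(key, value, f"revision: {reason}")

    def recall(self, key):
        return self._store[key][-1][0]                 # current belief

    def history(self, key):
        return [(v, s) for v, s, _ in self._store[key]]  # full audit trail

mem = EditableMemory()
mem.write("patient.allergy", "none reported", source="intake form")
mem.revise("patient.allergy", "penicillin", reason="doctor correction")
```

Coupled to an LLM, `recall` feeds the context window while `revise` is the hook a correction loop calls when new evidence contradicts a stored belief.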
3) Neuroscience-inspired AGI via topology, neuromorphology, and the Yoneda Lemma
I want to work on neuroscience-inspired AGI using the concepts described above: topology and neuromorphology as drivers, representation inspired by the Yoneda Lemma, and architectures that are predictive and corrective.
I want systems where intelligence is primarily the connectedness of concepts and the operations available across that connected space.
4) Moving from digital AI to analog AI (noise-tolerant compute)
I want to move away from digital-chip-based AI toward analog-chip-based AI. Digital systems are designed to be deterministic and intolerant of noise and error, while human intelligence can accept errors and still operate.
I am interested in analog MACs rather than digital MACs, and a broader shift toward systems that treat noise and approximation as part of robust computation.
5) Multi-domain curiosity that feeds next-gen AI
I believe next generation AI will use a blend of the following: deep learning, LLMs, multi-agent systems, neuro-symbolic AI, neuroscience and brain circuits, pathways, neuromorphology, spiking neural networks, brain topology and cognition, hardware and chip design, quantum computing based AI, and consciousness and philosophy.
I am an expert in some of these areas, have worked moderately in others, and have a strong reading interest in a couple (including quantum-computing-based AI). I like collaborations where cross-domain thinking becomes a tangible system.
Collaborate
Startups or enterprise teams: if you want to build something bold and practical, reach out.
If you are a startup building differentiated AI, or an enterprise team that wants to prototype future-facing architectures, I would like to collaborate.
Good collaboration fits
- Symbolic RAG for complex knowledge bases using thought representations.
- Memory systems for LLMs: persistence, correction, and auditable behavior.
- Multi-agent systems with shared memory and corrective loops.
- Neuro-symbolic pipelines for explainability and minimal-data learning.
- Brain-inspired architecture experiments: topology, spiking, morphology.
- Analog and mixed-signal AI exploration and hardware-software co-design.
Contact
Email: baljit@mankash.com
LinkedIn: linkedin.com/in/blusingh
- Send a short note with what you are building.
- Share the problem constraints: data, latency, cost, and success metric.
- I will reply with a concrete first prototype path and checkpoints.