Personal homepage • Open to collaborate

Building connected intelligence with memory, structure, and hardware in mind.

I work at the intersection of brain-inspired AI, neuro-symbolic systems, streaming language understanding, memory-centric architectures, and chip design thinking.

My north star is simple: intelligence is not only prediction. It is connectedness, correction, durable memory, and architecture that can actually live in the real world.

Core lens: Memory + correction + symbolic structure
Operating range: AI systems, language, hardware, and chip design
Best fit: Startups and advanced teams building differentiated AI

Perspective

A modern AI thesis with a hardware conscience.

I am interested in AI systems where architecture matters as much as optimization. That means topology, neuromorphology, symbolic structure, memory, and correction loops are part of the design from the start rather than retrofitted later.

Structure before scale

Topology and representation design are not containers for intelligence. They are part of its source.

Memory as architecture

Long-term, editable, inspectable memory should shape system behavior instead of sitting outside it.

Corrective intelligence

The systems I care about should revise, reconcile, and repair their internal state as new evidence arrives.

Compute substrate matters

Latency, memory movement, precision, noise, and analog behavior all shape what kind of AI is feasible.

Systems

What I have built, and what those systems taught me.

The projects below are best understood as architectural experiments in language, structure, perception, memory, and explainability.

Neuro-symbolic architecture

GIPCA and BISLU

I worked on architectures that merge statistical AI and symbolic AI by enforcing meaningful, inspectable boundaries between stages. This supports explainability and stronger learning behavior in settings where black-box models are not enough.

This direction is embodied in GIPCA (General Intelligence Predictive and Corrective Architecture) and BISLU (Brain Inspired Spoken Language Understanding).
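As a rough sketch of the boundary pattern, not the internals of GIPCA or BISLU: a statistical stage hands a typed, inspectable hypothesis record to a symbolic stage. Every name below (Hypothesis, the stages, the keyword scoring) is a hypothetical stand-in for illustration only.

```python
# A minimal sketch of a statistical -> symbolic pipeline with an enforced,
# inspectable boundary between stages. The stage names and the intermediate
# record are hypothetical illustrations of the pattern, not GIPCA or BISLU.
from dataclasses import dataclass

@dataclass(frozen=True)
class Hypothesis:
    """The boundary object: every statistical guess must pass through
    this typed, loggable record before any symbolic rule can act on it."""
    label: str
    score: float
    evidence: str

def statistical_stage(utterance: str) -> Hypothesis:
    # Stand-in for a learned model: keyword scoring, for illustration only.
    score = 0.9 if "refill" in utterance.lower() else 0.2
    return Hypothesis("medication_refill", score, evidence=utterance)

def symbolic_stage(h: Hypothesis) -> str:
    # Rules operate only on the boundary object, so every decision can be
    # traced back to an explicit hypothesis, score, and piece of evidence.
    if h.label == "medication_refill" and h.score > 0.5:
        return f"route_to_pharmacy (evidence: {h.evidence!r})"
    return "ask_clarifying_question"

print(symbolic_stage(statistical_stage("I need a refill of my statin")))
```

The point of the pattern is that the intermediate record is a first-class artifact you can log, test, and audit, rather than an opaque activation passed between layers.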

Universal NLU

Thought-level outputs, not only intent labels

Instead of compressing meaning into small intent sets, I worked toward human thought representations whose resolution varies with domain confidence.

Streaming speech

Meaning accumulation and Centom segmentation

Real speech is fragmented and messy. My work on streaming NLU and Centom focused on segmenting spoken streams using atomic connected entities while preserving local meaning.

Thought representation

ETML and symbolic thought clouds

I built toward graphical thought representations for conversation and context, plus ETML as a textual form engineers can inspect, debug, and evolve.

Speech + perception

Medical-ready STT and visual de-referencing

I worked on lightweight speech-to-text for noisy medical settings and on visual context enrichment so gestures can contribute meaning to language understanding.

Brain-inspired simulation

Spiking neural systems and neocortex simulation

I have studied spiking neural networks and simulated behavior from real neuron morphology to understand how perceptual features correlate across connected structures.
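For readers unfamiliar with spiking dynamics, here is a textbook leaky integrate-and-fire neuron in a few lines of Python. It illustrates the class of models involved, not the morphology-based neocortex simulation described above; all parameter values are conventional textbook defaults.

```python
# A textbook leaky integrate-and-fire (LIF) neuron, included only to
# illustrate spiking dynamics; NOT the morphology-based simulation above.
import numpy as np

def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=-65.0,
                 v_reset=-70.0, v_threshold=-50.0, resistance=10.0):
    """Integrate a current trace; return membrane voltages and spike times."""
    v = v_rest
    voltages, spikes = [], []
    for step, i_in in enumerate(input_current):
        # Leaky integration: decay toward rest, driven by the input current.
        v += (-(v - v_rest) + resistance * i_in) * (dt / tau)
        if v >= v_threshold:          # threshold crossing emits a spike
            spikes.append(step * dt)
            v = v_reset               # hard reset after the spike
        voltages.append(v)
    return np.array(voltages), spikes

# A constant 2.0 nA drive for 200 ms produces a regular spike train.
volts, spike_times = simulate_lif(np.full(200, 2.0))
print(f"{len(spike_times)} spikes, first at t={spike_times[0]:.0f} ms")
```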

Proof

Patents, hardware depth, and a broad systems range.

My differentiator is not just one model or one product category. It is the ability to connect language, AI architecture, memory, hardware, and execution into one practical view.

Selected patents
  • Automated system for digitization and analysis of handwritten medical records
  • Brain Inspired Spoken Language Understanding System (BISLU)
  • GIPCA based system for inferring phonetic based words
  • GIPCA based system to convert thought representation into coherent stories
  • GIPCA based system for inferring human thought representations
  • Intelligent footfall analysis in hospitals
  • Revenue leakage detection in hospitals
  • Inferencing adverse health conditions of a patient
Hardware and chip design

I have worked across hardware development and chip design alongside software and AI systems. That perspective keeps questions of memory bandwidth, precision, power, noise tolerance, and analog directions in the room from day one.

Deep Learning • Hardware Development • Chip Design • Neuro-Symbolic AI • LLMs • Multi-Agent Systems • Spiking Neural Networks • Brain Topology • Analog AI

Next

What I want to build next.

These are collaboration-ready directions where I believe a strong architectural point of view can produce outsized results.

Symbolic RAG with thought representations

Retrieval grounded in thought structures rather than only text chunks, with cleaner provenance and better debuggability.
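A minimal sketch of the idea follows. Every name in it (ThoughtNode, the subject-relation-object triple format, the overlap scoring) is a hypothetical stand-in, not an actual ETML or thought-cloud format; the point is that retrieval returns structured assertions that carry their sources with them.

```python
# A minimal sketch of retrieval over symbolic "thought" structures rather
# than raw text chunks. All names and the scoring rule are hypothetical
# illustrations, not the actual ETML or thought-cloud formats.
from dataclasses import dataclass, field

@dataclass
class ThoughtNode:
    subject: str
    relation: str
    obj: str
    source: str  # provenance: where this assertion came from

@dataclass
class ThoughtGraph:
    nodes: list[ThoughtNode] = field(default_factory=list)

    def add(self, subject, relation, obj, source):
        self.nodes.append(ThoughtNode(subject, relation, obj, source))

    def retrieve(self, query_terms):
        """Score nodes by symbolic overlap; each hit carries its source."""
        terms = {t.lower() for t in query_terms}
        scored = []
        for node in self.nodes:
            fields = {node.subject.lower(), node.relation.lower(), node.obj.lower()}
            overlap = len(terms & fields)
            if overlap:
                scored.append((overlap, node))
        return [n for _, n in sorted(scored, key=lambda pair: -pair[0])]

graph = ThoughtGraph()
graph.add("patient", "reports", "chest pain", source="note_2024_03.txt")
graph.add("chest pain", "suggests", "cardiac workup", source="guideline_v2.pdf")

for hit in graph.retrieve(["chest pain"]):
    # Provenance travels with every retrieved assertion, which is what
    # makes the answer debuggable rather than an opaque chunk match.
    print(hit.subject, hit.relation, hit.obj, "<-", hit.source)
```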

Memory-native LLM systems

Persistent, editable memory that changes behavior over time and supports correction, provenance, and long-lived context.
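A toy schema for what "persistent, editable memory" could mean in practice; the record layout below is an illustrative assumption, not an existing system. The key properties are that corrections keep the superseded value as provenance, and that the whole record stays readable.

```python
# A minimal sketch of an editable, inspectable memory record with a
# correction history. The schema is a hypothetical illustration of the
# direction described above, not an existing system.
import json
import time

class MemoryStore:
    def __init__(self):
        self._records = {}  # key -> {"value", "source", "history"}

    def write(self, key, value, source):
        self._records[key] = {"value": value, "source": source, "history": []}

    def correct(self, key, new_value, source):
        """Revise a belief in place, keeping the old value as provenance."""
        rec = self._records[key]
        rec["history"].append(
            {"value": rec["value"], "source": rec["source"], "at": time.time()}
        )
        rec["value"], rec["source"] = new_value, source

    def inspect(self, key):
        """The whole point: memory is data you can read, audit, and edit."""
        return json.dumps(self._records[key], indent=2)

memory = MemoryStore()
memory.write("user.allergy", "penicillin", source="intake form")
memory.correct("user.allergy", "none reported", source="clinician review")
print(memory.inspect("user.allergy"))
```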

Analog and corrective AI architectures

Noise-tolerant compute and hardware-software co-design where approximation and correction are normal parts of robust intelligence.
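One toy framing of "approximation plus correction": a noisy matrix-vector product repaired by redundancy. The additive noise model and the averaging fix are illustrative assumptions, not a model of any specific analog hardware.

```python
# A toy illustration of computing through noise and correcting for it:
# an "analog" matrix-vector product with additive noise, repaired by
# repetition and averaging. A hypothetical sketch of the co-design
# mindset, not a model of real analog hardware.
import numpy as np

rng = np.random.default_rng(0)

def analog_matvec(W, x, noise_std=0.05):
    """Each evaluation is cheap but perturbed, like an analog crossbar."""
    exact = W @ x
    return exact + rng.normal(0.0, noise_std, size=exact.shape)

def corrected_matvec(W, x, repeats=16, noise_std=0.05):
    """Redundancy as correction: averaging k noisy passes cuts the error
    by roughly sqrt(k), trading cheap repeats for accuracy."""
    samples = [analog_matvec(W, x, noise_std) for _ in range(repeats)]
    return np.mean(samples, axis=0)

W = rng.standard_normal((64, 64))
x = rng.standard_normal(64)
exact = W @ x

one_shot_err = np.linalg.norm(analog_matvec(W, x) - exact)
corrected_err = np.linalg.norm(corrected_matvec(W, x) - exact)
print(f"one-shot error {one_shot_err:.3f} vs corrected {corrected_err:.3f}")
```

Averaging is the bluntest possible corrector; the interesting design space is in cheaper, structured correction loops that exploit what the workload already knows about its own errors.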

Collaborate

If you are building differentiated AI, I would like to talk.

I work best with founders, advanced R&D teams, and enterprise groups that want more than a commodity implementation. The strongest collaborations begin with a clear constraint, a difficult architecture question, and a willingness to prototype seriously.

  • Symbolic RAG for complex knowledge and conversations
  • Memory systems for LLMs with persistence and correction
  • Multi-agent systems with shared memory and execution loops
  • Neuro-symbolic systems with explainability and minimal-data learning
  • Hardware-software co-design and analog AI exploration