“We Bring Brain-Like Intelligence Directly To Sensors For Ultra-Low-Power AI Processing” – Sumeet Kumar, Innatera

Sensors generate more data than the cloud can handle. Innatera’s neuromorphic chips shift AI to the edge, enabling always-on, ultra-low-power intelligence at the source. In an exclusive EFY conversation, Sumeet Kumar of Innatera discusses with EFY’s Ashwini Kumar Sinha, Nidhi Agarwal, and Saba Aafreen how hybrid spiking and conventional neural networks are defining the next wave of smart edge AI.


Q. How are neuromorphic principles implemented at the hardware level?

A. In the Pulsar chip, neuromorphic principles are implemented by mirroring the brain's structure in both hardware and software. The chip uses processing elements that function as silicon neurons and synapses. These recreate integrate-and-fire spiking neuron behaviour in silicon, enabling integration of time-series inputs and fine-grained temporal spike generation. This approach allows spiking neural networks (SNNs) to be reproduced in silicon with full fidelity. A significant part of the engineering effort focuses on making these neurons and synapses robust, high-performing, energy-efficient, and manufacturable, while organising them into a programmable fabric.
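The integrate-and-fire behaviour described above can be sketched in a few lines of Python. This is a generic leaky integrate-and-fire model for illustration only, not Pulsar's actual silicon implementation; the leak factor and threshold values are arbitrary assumptions.

```python
def lif_step(v, i_in, v_rest=0.0, v_th=1.0, leak=0.9):
    """One timestep of a leaky integrate-and-fire neuron.
    Returns (new_membrane_potential, spiked)."""
    v = v_rest + leak * (v - v_rest) + i_in   # leaky integration of input
    if v >= v_th:                             # threshold crossing
        return v_rest, True                   # fire a spike and reset
    return v, False

def run(current, steps):
    """Drive the neuron with a constant input and record spike times."""
    v, spikes = 0.0, []
    for t in range(steps):
        v, fired = lif_step(v, current)
        if fired:
            spikes.append(t)
    return spikes
```

With a constant sub-threshold input, the membrane potential charges over several timesteps and the neuron fires at a regular rate, which is how input intensity becomes spike timing.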

Q. What are the main compute elements in the neuromorphic microcontrollers?

A. The Pulsar chip integrates three major compute fabrics. The first is a spiking neural network accelerator built from processing elements that mimic biological neurons and synapses. These silicon neurons operate in parallel and consume extremely small amounts of energy per operation.

The second element is a conventional CNN accelerator that enables developers to efficiently run traditional deep learning models as needed. The third component is a RISC-V CPU subsystem with standard sensor interfaces. This CPU handles tasks such as sensor data acquisition, control logic, and actuation. Together, these elements form a complete system-on-chip that provides all the processing resources required for sensing applications while maintaining extremely low power consumption.

Q. What is SNN processing and why is it important in neuromorphic systems?

A. In conventional neural networks, data is processed as continuous digital values, and every node consumes energy regardless of whether the computation is meaningful. Spiking neural networks are fundamentally different. They are event-driven. Sensor data is encoded as simple voltage spikes, essentially single-bit events, where information is carried in the timing or frequency of the spikes. Computation occurs only when something meaningful happens. These networks inherently understand time, enabling them to be smaller and far more energy-efficient than traditional neural networks. This temporal, event-driven nature closely mirrors how the biological brain processes information, making SNNs particularly effective for edge and sensor-based applications.
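The encoding and event-driven ideas can be illustrated with a minimal rate-coding sketch in plain Python. This is illustrative only; real spike encoders and the Pulsar hardware are far more sophisticated.

```python
import random

def rate_encode(value, steps, seed=0):
    """Encode an analog value in [0, 1] as a spike train whose
    per-timestep firing probability equals the value (rate coding)."""
    rng = random.Random(seed)
    return [rng.random() < value for _ in range(steps)]

def event_driven_sum(spike_train, weight):
    """Accumulate a weighted input only when a spike (event) arrives;
    silent timesteps perform no work. Returns (total, operations)."""
    total, ops = 0.0, 0
    for spiked in spike_train:
        if spiked:            # computation happens only on events
            total += weight
            ops += 1
    return total, ops
```

A weak input produces few spikes and therefore few operations, which is the source of the energy savings: work scales with activity in the signal rather than with network size.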

Q. How do you train the SNNs?

A. From a developer’s perspective, training SNNs is similar to training conventional neural networks. Innatera has developed a software framework called Talamo that uses PyTorch as its front end. Developers define, train, and optimise their models using standard PyTorch workflows. Once the model is ready, the company’s compilation framework automatically maps it onto the hardware. Engineers do not need to understand the chip’s internal details, making deployment straightforward and familiar for teams already working with modern AI tools.

Innatera was founded in 2018 as a spin-off from Delft University of Technology with a clear goal: bringing brain-like intelligence directly to sensors. The company operates on the belief that modern sensors generate massive volumes of data, far more than can realistically be transmitted to the cloud. The solution is to process this data on chips such as Pulsar, as close to the sensor as possible. Accordingly, Innatera designs neuromorphic chips that enable always-on, ultra-low-power AI processing directly at the sensor, removing the need for constant cloud connectivity.

Q. Why do developers still prefer CNNs for image and video processing?

A. Traditional image and video data is frame-based and does not inherently contain temporal information. Spiking neural networks are particularly effective for event-driven and time-series data. While event-based vision sensors can benefit from SNNs, conventional imaging pipelines are generally better suited to convolutional neural networks. Even so, SNNs can offer advantages in specialised event-based imaging scenarios.

Q. Why do you keep both CNN and SNN on your chips?

A. Real-world applications are rarely solved using a single type of neural network. Different stages of a sensing pipeline may require different processing techniques, and developers may need to run different models at different times. For example, a video doorbell may first detect a human in radar data using an SNN, then trigger the camera to capture an image that a CNN analyses to determine whether a package was left. Integrating both accelerators on the same chip gives developers the flexibility to choose the most suitable approach for each stage without compromising power efficiency or performance.
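The doorbell example can be sketched as a gated pipeline. The detector functions below are hypothetical stand-ins for the SNN and CNN models; only the control flow, an always-on low-power stage gating a heavier one, reflects the description above.

```python
def snn_presence(radar_frame):
    """Stand-in for an always-on SNN presence detector on radar data."""
    return sum(radar_frame) > 2.0   # toy threshold, not a real model

def cnn_classify(image):
    """Stand-in for a CNN image classifier (runs only when triggered)."""
    return "package" if image.get("brightness", 0) > 0.5 else "none"

def pipeline(radar_frame, capture_image):
    """Low-power SNN front end gates the camera and the CNN stage."""
    if not snn_presence(radar_frame):
        return "idle"                       # camera and CNN never run
    return cnn_classify(capture_image())    # heavier stage runs on demand
```

The power benefit comes from the gating: the camera capture and CNN inference are skipped entirely unless the cheap front-end stage fires.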

Q. How does the chip decide whether to run a workload on CNN or SNN?

A. The chip does not make that decision automatically. Developers explicitly define where each model runs. Spiking neural network models are mapped to the spiking accelerator, convolutional neural network models to the CNN accelerator, and control logic to the RISC-V CPU. This explicit mapping provides full control over performance, accuracy, and power trade-offs, rather than relying on automated scheduling.
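Such an explicit mapping might look like a simple deployment manifest. The format and names below are purely illustrative assumptions, not Talamo's actual configuration syntax; the point is that each model is pinned to a fabric by the developer, with no automatic scheduling.

```python
# Hypothetical deployment manifest: every workload is explicitly
# assigned to one of the three compute fabrics on the chip.
DEPLOYMENT = {
    "radar_presence_snn": "snn_accelerator",
    "package_detect_cnn": "cnn_accelerator",
    "control_loop":       "riscv_cpu",
}

def fabric_for(model_name):
    """Look up the fabric a model was explicitly mapped to."""
    return DEPLOYMENT[model_name]
```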

Q. So it can be optional to use one or both?

A. Absolutely. Developers can use only the spiking neural network, only the convolutional neural network, or both. The choice depends entirely on the application requirements and the design of the sensing and processing pipeline.

Q. What hardware architecture is used to combine CNN and SNN?

A. The spiking neural network accelerator uses a near-memory architecture, with memory embedded directly within the compute fabric to reduce data movement and improve efficiency. The CNN accelerator has its own dedicated memory space optimised for its workloads. All components are connected through a common on-chip interconnect, enabling efficient data exchange between the accelerators, the CPU, and other processing blocks.

Q. Is data flow between CNN and SNN handled by software?

A. The chip integrates on-chip DMA engines and shared memory spaces addressable by both the RISC-V CPU and the accelerators. Data flow between these components is configured in software and executed efficiently in hardware.

Q. What are the main applications where combining CNN and SNN is required?

A. The company sees strong demand across consumer electronics, IoT, smart home devices, and wearables. Typical applications include audio classification, keyword spotting, radar-based human presence detection, image-based recognition, and biomedical signal analysis such as ECG monitoring. In many cases, the spiking neural network performs the always-on, low-power front-end processing and classification, sometimes alongside a CNN.

Q. But these applications are possible without combining CNN and SNN. Why combine them?

A. SNNs and CNNs are each effective on their own, and applications can be built using either approach. However, each offers different strengths. For example, a CNN may perform spatial recognition, with its output feeding into an SNN for temporal analysis. In other cases, one model reduces the amount of data processed by the other, lowering overall power dissipation. Hybrid approaches therefore enable capabilities that may not be achievable with a single network type.

Q. Which industries are likely to adopt this technology first?

A. The earliest adopters are expected to be consumer electronics, IoT, and smart-home segments. Current deployments show that devices with localised intelligence at the edge are already practical. Industrial and automotive vendors have also expressed interest, although consumer and smart-home IoT markets are moving fastest.

Q. Is Innatera planning to work on new technologies, and what does success look like?

A. As I said before, Pulsar is the company’s first neuromorphic microcontroller, but it represents only an early step towards the capabilities of biological brains. Future products will focus on enabling higher levels of autonomy, adaptability, and efficiency at the edge. For the company, success means delivering intelligent systems that operate reliably on smaller batteries, at lower cost, and with greater functional integration.
