When billions of queries hit ChatGPT simultaneously, how does the right response reach the right user, and who keeps those prompts secure? Navin Bishnoi of Marvell explains how purpose-built networking chips deliver secure, ultra-low-latency AI at scale, and why data centre networks now form the backbone of modern AI infrastructure.
Q. Can you explain what Marvell is and what it does?
A. Marvell develops and delivers semiconductor solutions that move, store, process and secure the world’s data. Trusted by top data centre operators and OEMs, Marvell is a leader in essential data infrastructure semiconductor technology, and its silicon powers innovation across the cloud and AI, carrier infrastructure, and enterprise networking markets. With a comprehensive portfolio of compute, interconnect, network switching, security and storage products and IP, the company offers merchant, semi-custom and full-custom options to address a range of customer requirements.
Q. What products does Marvell make in India?
A. We design customised semiconductors, networking switches, storage and connectivity solutions. We hold a strong competitive position in India’s AI infrastructure market, which we intend to grow. The AI infrastructure market is expected to reach $94 billion by 2028, which represents a significant growth opportunity for Marvell.
We work with leading global hyperscalers and cloud providers. Whether it is a payment, a file transfer, an upload, or heavy processing of massive AI workloads, our technology often underpins these operations. We design custom semiconductors integrated into compute systems, networking fabrics, and security layers. Essentially, we are embedded across the entire data stack.
Q. What is the difference between traditional data centres and AI-based data centres, and how does it affect networking chip requirements?
A. Traditional data centres focus on moving and storing data, so their semiconductors are built to deliver predictable performance between servers and storage. AI data centres, by contrast, interconnect thousands of AI accelerators (XPUs) with each other and with memory and other devices, constantly exchanging massive amounts of training data and model updates. Within the data centre, the scale-up domain is where hundreds of these XPUs are directly connected to form the equivalent of a single supercomputer. The scale-up switches used in this domain must deliver ultra-low latency during traffic bursts without dropping packets; otherwise, the entire training process slows down.
The scale-out domain is where thousands—or hundreds of thousands—of XPUs are interconnected. AI data centre customers are looking for high-speed ports, advanced congestion control, lossless Ethernet for sensitive traffic, and intelligent load distribution across many paths.
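To make the load-distribution idea concrete: a common technique in multi-path fabrics (shown here as a generic illustration, not a description of Marvell’s implementation) is to hash each flow’s identifying fields onto one of several equal-cost paths, so packets within a flow stay in order while distinct flows spread across links. A minimal Python sketch, with hypothetical XPU names:

```python
import hashlib

def pick_path(flow_id: tuple, num_paths: int) -> int:
    """Map a flow's identifying fields to one of the equal-cost paths.
    Hashing keeps every packet of a flow on the same path (preserving
    order) while spreading distinct flows across all available links."""
    digest = hashlib.sha256(repr(flow_id).encode()).digest()
    return int.from_bytes(digest[:4], "big") % num_paths

# Hypothetical flows between XPUs: (src, dst, src_port, dst_port).
flows = [("xpu-0", "xpu-7", 49152, 4791),
         ("xpu-1", "xpu-7", 49153, 4791),
         ("xpu-2", "xpu-5", 49154, 4791)]

for f in flows:
    print(f, "-> path", pick_path(f, num_paths=8))
```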
Q. How does Marvell ensure low latency and high bandwidth for AI training data transfer?
A. We use a switch architecture that minimises latency even under heavy traffic. By leveraging advanced physical-layer IP, we deliver high bandwidth and high radix, enabling dense, high-performance networks that are power- and cost-efficient for AI workloads.
Marvell Teralynx switches are purpose-built for AI/ML and high-performance computing. The architecture reduces the time packets spend inside the network and lowers job completion time for distributed training.
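For a sense of why radix matters: in a non-blocking two-tier leaf-spine fabric where each leaf splits its ports evenly between hosts and spines, capacity grows with the square of the switch radix. A back-of-the-envelope sketch (a generic topology calculation, not a Marvell product figure):

```python
def max_endpoints_two_tier(radix: int) -> int:
    """Maximum endpoints in a non-blocking two-tier leaf-spine fabric.

    Each leaf splits its ports evenly: radix/2 face hosts and radix/2
    face spines, and a spine of the same radix can reach up to `radix`
    leaves, so capacity grows quadratically with radix."""
    leaves = radix            # one spine port per leaf
    hosts_per_leaf = radix // 2
    return leaves * hosts_per_leaf

for r in (32, 64, 128):
    print(f"radix {r}: up to {max_endpoints_two_tier(r):,} endpoints")
```

Doubling the radix quadruples the endpoints a flat two-tier fabric can serve, which is why high-radix switches allow fewer network tiers, and therefore fewer hops and lower latency.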
Q. How is Marvell’s Teralynx switch designed to handle network congestion during heavy traffic?
A. We prioritise congestion prevention at the switch level. We monitor the network from within our switches to identify issues early and take corrective action immediately.
During sudden traffic surges, our shared memory pool lends additional buffer capacity to overloaded links. Because not all data is equal, we assign traffic to different priority levels on each port and schedule it by importance. We also control how much bandwidth each stream, queue, or port receives, ensuring no workload starves others of resources.
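The scheduling idea can be sketched in a few lines. The toy model below (illustrative only; the queue names and weights are invented) drains per-port priority queues in proportion to configured weights, so important traffic is served first without starving the rest:

```python
from collections import deque

# Hypothetical priority queues on one port: weight sets each queue's
# share of the link per scheduling round.
queues = {
    "training-gradients": {"weight": 4, "q": deque(range(8))},
    "storage":            {"weight": 2, "q": deque(range(5))},
    "best-effort":        {"weight": 1, "q": deque(range(6))},
}

def weighted_round_robin(queues):
    """Each round, a queue may send up to `weight` packets, so
    high-priority traffic dominates but no queue is starved."""
    order = []
    while any(s["q"] for s in queues.values()):
        for name, s in queues.items():
            for _ in range(s["weight"]):
                if s["q"]:
                    order.append((name, s["q"].popleft()))
    return order

for name, pkt in weighted_round_robin(queues):
    print(name, pkt)
```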
Q. How do cybersecurity threats evolve as networking and data infrastructure become more critical for AI?
A. As AI infrastructure expands across data centres, edge sites, high-speed interconnects, and APIs, the number of entry points grows, and with it, the risk for our customers. Our response is to embed security directly into hardware and the system stack, preventing models from becoming targets for poisoning, evasion, or theft.
Our products support secure boot and strong authentication, encrypt data in motion and at rest, and enable confidential computing to protect sensitive workloads. We are preparing for the post-quantum era with stronger cryptography and design for Zero Trust and continuous monitoring to detect and contain threats early.
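In software terms, encrypting data in motion typically means authenticated encryption such as AES-GCM; in networking silicon this runs in hardware at line rate. A minimal sketch of the principle using Python’s `cryptography` package (the payload and header values are invented examples):

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # 256-bit session key
aesgcm = AESGCM(key)

nonce = os.urandom(12)                     # must be unique per message
plaintext = b"gradient shard 42"           # invented payload
header = b"xpu-3 -> xpu-9"                 # authenticated, not encrypted

ciphertext = aesgcm.encrypt(nonce, plaintext, header)
assert aesgcm.decrypt(nonce, ciphertext, header) == plaintext
```

Tampering with either the ciphertext or the header makes decryption fail, which is what protects model weights and training data as they cross the network.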
Q. What challenges do Marvell’s engineers face when designing chips for cloud and AI data-centre networks?
A. A modern ‘chip’ is a complete computing system in a package, which means we must solve signal-integrity, thermal, and mechanical-stress challenges together rather than in isolation. Development timelines are tight: customers expect production silicon within about 18 months, ideally working on the first spin.
To meet this challenge, we co-design across digital, analogue, packaging, and firmware domains, validating solutions years before standards mature. Our engineering teams constantly balance scale, speed, and efficiency throughout the design process.
Q. How do data centres manage power consumption and heat generation as AI workloads grow?
A. It is a valid concern. Power and heat have become the defining constraints for modern data centres. A facility’s real limit is the power it can deliver and the cooling it can provide, as dense networking and storage generate significant heat during continuous AI processing.
Our products address this through power reduction and better heat management. We optimise power at every level: designing efficient circuits, tailoring architectures to specific workloads, and tuning software to minimise waste.
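A first-order intuition for why circuit-level tuning pays off: dynamic CMOS power follows roughly P ≈ C·V²·f, so it scales with the square of the supply voltage. A quick sketch using this generic textbook model (the numbers are illustrative, not Marvell data):

```python
def dynamic_power(c_eff: float, voltage: float, freq_hz: float) -> float:
    """First-order CMOS dynamic power: P = C_eff * V^2 * f."""
    return c_eff * voltage ** 2 * freq_hz

base  = dynamic_power(c_eff=1e-9, voltage=0.9, freq_hz=2e9)  # ~1.62 W
tuned = dynamic_power(c_eff=1e-9, voltage=0.8, freq_hz=2e9)  # ~1.28 W
print(f"0.9 V -> {base:.2f} W, 0.8 V -> {tuned:.2f} W "
      f"({1 - tuned / base:.0%} saving)")
```

An 11% voltage reduction cuts dynamic power by about 21%, which is why workload-specific architectures that let circuits run slower and cooler matter so much at data-centre scale.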