
AI research

AI's growth needs sustainable innovation. To achieve this, imec adopts a co-optimized, modular and application-driven approach.

AI evolves faster than the hardware it runs on. Frontier models land on hardware designed years before their workload existed. Meanwhile, next-generation chips and systems are developed without a clear view on the algorithms they'll actually need to run.

Imec closes this loop — co-designing next-generation AI with next-generation hardware (from silicon to systems).

Transformer-based models have defined the last decade of AI progress. But the era of scaling as a universal solution is ending: more parameters and more compute no longer guarantee better intelligence. Researchers at the forefront of innovation are therefore looking at other approaches to drive artificial intelligence forward.

For AI to reach a level that comes close to artificial general intelligence (AGI), it must take giant leaps forward. LLMs cannot be the only answer. And sustainability, in particular energy consumption, should be a prime concern.

Energy consumption is also a prime concern when it comes to Edge AI. In this crucial domain for a smart, connected environment, specific constraints in terms of power budget, but also size and (wireless) communication, require different choices in terms of algorithms, architectures and technologies.

Imec’s AI research combines deep knowledge of (beyond) CMOS semiconductor technologies with expertise on algorithms and architectures to develop the integrated technology blocks that will drive tomorrow’s AI solutions.

Our guiding principle is that these solutions will need to be co-optimized, modular, and application-driven.

AI’s scaling problem is situated on three levels:

  1. technological – The most performant model is not simply the most accurate one, but the one that delivers the most intelligence per joule. 
  2. economic – As AI is one of the defining technologies of our generation, its democratization is key to fostering innovation.
  3. environmental – The energy consumption of AI data centers threatens to expand AI’s footprint beyond what our planet can support.
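The technological level above can be made concrete with a toy metric. The sketch below is purely illustrative: the function name and all accuracy and energy numbers are invented, not imec's, but it shows why a smaller model can "win" once efficiency per joule is the yardstick.

```python
# Hedged illustration of an 'intelligence per joule' comparison.
# All model names, accuracies, and energy figures are made up.

def intelligence_per_joule(accuracy: float, energy_joules: float) -> float:
    """Toy efficiency metric: benchmark accuracy delivered per joule consumed."""
    return accuracy / energy_joules

# Hypothetical models: (accuracy on some benchmark, energy per inference in J)
models = {
    "large_model": (0.92, 50.0),  # slightly more accurate, energy-hungry
    "small_model": (0.88, 2.0),   # slightly less accurate, far cheaper
}

for name, (acc, energy) in models.items():
    print(f"{name}: {intelligence_per_joule(acc, energy):.4f} accuracy/J")
```

Under these invented numbers the small model delivers over twenty times more accuracy per joule, despite being the "weaker" model on raw benchmark score.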
     

To address this issue, scaling cannot be done only on the hardware layer. It needs innovation at three layers of the technology stack:

1. Algorithms

Imec develops novel algorithms for purposes such as neural network training and sensor processing. These often target specific applications such as health devices, or sensor fusion use cases like autonomous driving and smart manufacturing.

Especially for AI at the (extreme) edge, efficient algorithms are a way to reduce energy consumption. But software strategies can also help high-performance computing (HPC) applications get more out of available hardware resources.
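One widely used software strategy of the kind described above is post-training weight quantization. The sketch below is a minimal, self-contained illustration (not imec code): it maps float32 weights to int8, giving 4x smaller storage, and hence less data movement and energy, at the cost of a small, bounded rounding error.

```python
import random

def quantize_int8(weights):
    """Symmetric linear quantization of float weights to the int8 range [-127, 127]."""
    scale = max(abs(w) for w in weights) / 127.0
    return [round(w / scale) for w in weights], scale

def dequantize(q, scale):
    """Map int8 codes back to approximate float values."""
    return [qi * scale for qi in q]

random.seed(0)
weights = [random.gauss(0.0, 1.0) for _ in range(1000)]  # stand-in for model weights
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)

# int8 storage is 4x smaller than float32; the reconstruction error is
# bounded by half the quantization step.
max_err = max(abs(w - r) for w, r in zip(weights, restored))
print(f"quantization scale: {scale:.4f}, max abs error: {max_err:.4f}")
```

The per-weight error never exceeds half the quantization step (scale / 2), which is why such schemes often preserve accuracy well enough for edge deployment.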

2. Architectures

Imec.AI-labs is a Paris-based center of excellence that targets the future architecture of artificial intelligence, uniquely positioned at the intersection of software and hardware. The team of researchers and engineers is dedicated to creating breakthrough AI models, methodologies, and applications. Their focus is on the next generation of intelligent systems rather than incremental improvements to today's architectures.

Other imec research teams focus on subjects such as:

  • AI accelerators based on compute-in-memory architectures
  • processor architectures
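The compute-in-memory idea mentioned above can be sketched conceptually: instead of shuttling weights from memory to a separate arithmetic unit, each stored row accumulates its own dot product "where the weights live". The plain-Python model below only mimics that dataflow (real analog in-memory accelerators encode weights as conductances and sum currents along bitlines); it is an assumption-laden illustration, not an imec design.

```python
# Conceptual sketch of compute-in-memory dataflow: the matrix-vector
# multiply happens inside the 'memory array', so no weights move.

def in_memory_mvm(weight_rows, inputs):
    """Each stored row locally accumulates its dot product with the input."""
    return [sum(w * x for w, x in zip(row, inputs)) for row in weight_rows]

weights = [[1, 2], [3, 4]]  # resident in the 'memory array'
x = [10, 1]                 # broadcast to all rows at once
print(in_memory_mvm(weights, x))  # [12, 34]
```

The energy win comes from eliminating weight movement, which in conventional architectures often costs far more than the arithmetic itself.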
     

In close conjunction with all these activities, imec’s compute system architecture (CSA) team uses its expertise in system-level modeling, performance analysis and hardware validation to explore optimized architectures for scalable systems. 

3. Technology

The semiconductor technology layer is the core of imec’s decades-long expertise. Research activities range from traditional CMOS scaling to CMOS 2.0, integrated photonics, and emerging technologies such as quantum computing and superconducting digital computing.

For these innovations to have maximum effect on system performance, it’s crucial that they’re developed closely together, with that system performance as the north star.

Listen to the EE Journal podcast with imec's scientific director Axel Nackaerts: The Key to the Future of AI? Hardware Innovation!

Co-optimization of technology, architectures and algorithms

Since 2010, the computational complexity of AI models has increased a hundredfold every two years. By comparison, Moore’s law, which underpins the increase in computational power, decrees a ‘mere’ doubling of the number of transistors on a chip every two years. Evidently, the current growth of AI risks becoming unsustainable.
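The arithmetic behind that gap is stark. Taking the two growth rates from the text at face value and compounding them over a decade (five two-year periods):

```python
# Compounding the two growth rates cited above over one decade.

def growth(factor_per_period: int, periods: int) -> int:
    """Total growth after compounding a per-period factor."""
    return factor_per_period ** periods

decade = 5  # five two-year periods in a decade
ai_growth = growth(100, decade)   # 100^5 = 10^10
moore_growth = growth(2, decade)  # 2^5  = 32

print(f"over a decade: AI compute demand x{ai_growth:,}, transistors x{moore_growth}")
print(f"gap: x{ai_growth // moore_growth:,}")
```

A ten-billion-fold rise in demand against a thirty-two-fold rise in transistor count leaves a gap of more than eight orders of magnitude, which hardware scaling alone cannot close.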

Modular approach

This co-optimization of algorithms, architectures and technology will result in different functional units with an optimal configuration of technologies to address certain tasks. These ‘AI bricks’ will target specific AI workloads, such as perception, language processing, language generation, etc.

Together, these AI bricks constitute the modular configurations that can handle the heterogeneous AI workloads that, according to experts, will characterize future AI systems.
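The modular composition described above can be sketched in a few lines. The brick names and interfaces below are invented for illustration; the point is only that workload-specific units with a common interface can be chained into heterogeneous pipelines.

```python
# Hedged sketch of composing 'AI bricks': workload-specific functional
# units behind a shared interface. Names and behaviors are hypothetical.

class Brick:
    def __init__(self, name, fn):
        self.name, self.fn = name, fn
    def __call__(self, data):
        return self.fn(data)

# Hypothetical bricks for a perception -> language-generation pipeline.
perception = Brick("perception", lambda img: {"objects": ["car", "pedestrian"]})
language = Brick("language_generation",
                 lambda scene: "Detected: " + ", ".join(scene["objects"]))

def run_pipeline(bricks, data):
    """Feed each brick's output into the next, like a modular AI system."""
    for brick in bricks:
        data = brick(data)
    return data

print(run_pipeline([perception, language], "raw_image_bytes"))
# -> Detected: car, pedestrian
```

Because each brick only needs to honor the shared call interface, a perception brick built on one technology can sit next to a language brick built on another, which is the essence of the modular configurations the text describes.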

Application-driven research strategy

All this implies that designing – not necessarily building – these future AI systems requires a relatively detailed conception of the tasks they will need to fulfill. Application characteristics therefore need to be taken into account from day one.

Imec’s deep involvement in a wide array of application domains allows access to unique insights into these domains’ specific needs and challenges. It enables us to translate knowledge on sensors and algorithms to workloads at the hardware architecture and technology level.

Imec’s AI teams currently focus on a number of crucial application domains.

Additionally, imec is involved in various large-scale testbeds using current technologies, such as Mobilidata, Solid, and various projects within the OnePlanet Research Center.

Silicon AI: benchmarking next-gen workloads on next-gen hardware

Agentic AI is reshaping computational bottlenecks faster than traditional hardware design cycles can follow. Today's algorithms are still built around yesterday's systems, while tomorrow's chips are being optimized for today's workloads.

To close that gap, imec is launching Silicon AI, a publicly available benchmarking initiative that identifies how agentic workloads influence bottlenecks on the hardware and drive total cost of ownership (TCO) at the system level.

From the ideal CPU-to-GPU ratio under increasing agent parallelism to the impact of chip architectures and generations on token cost and TCO, Silicon AI delivers the neutral, system-level insight the ecosystem needs to co-optimize future AI models with future compute.
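A back-of-the-envelope version of the token-cost question makes the trade-off concrete. The sketch below is not Silicon AI methodology, and every price and throughput in it is invented; it only shows how hourly system cost and aggregate throughput combine into cost per million tokens, and why adding a cheap CPU can pay off if it relieves an orchestration bottleneck.

```python
# Hedged, illustrative token-cost model. All costs and throughputs are invented.

def cost_per_million_tokens(hourly_system_cost: float, tokens_per_second: float) -> float:
    """System cost per hour divided by hourly token throughput, per 1M tokens."""
    tokens_per_hour = tokens_per_second * 3600
    return hourly_system_cost / tokens_per_hour * 1_000_000

# Hypothetical configurations: (system cost $/h, aggregate tokens/s).
configs = {
    "8 GPUs, 1 CPU": (40.0, 12_000),
    "8 GPUs, 2 CPUs": (44.0, 15_000),  # extra CPU relieves agent-orchestration bottleneck
}

for name, (cost, tps) in configs.items():
    print(f"{name}: ${cost_per_million_tokens(cost, tps):.2f} per 1M tokens")
```

Under these made-up numbers the costlier two-CPU system still wins on cost per token, which is exactly the kind of non-obvious, system-level conclusion neutral benchmarking is meant to surface.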

Want to get involved in imec’s AI research?

Get in touch.


Steven Latré

Vice president AI

