
The complexity of large language models (LLMs) demands ever-increasing performance from the systems that execute both their training and inference computations.
This AI data center scaling challenge can only be tackled efficiently through a hardware-software-technology codesign approach, enabling post-exascale performance in HPC and AI systems.
Such an approach hinges on a clear view of how various current and future AI workloads impact the compute, communication, and memory subsystems.
That view is provided by imec.kelis, a performance modeling and design space exploration tool for LLM data centers, built on imec’s expertise in analytical performance modeling for high-performance computing and artificial intelligence.
Among its key features, the imec.kelis tool models the compute, communication, and memory subsystems of accelerators, as well as the larger systems they are part of, all the way up to data center scale.
In short, imec.kelis offers you an end-to-end framework to quickly and accurately evaluate and optimize your design choices.
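To illustrate the kind of first-order analytical estimate such a performance-modeling framework automates, here is a minimal roofline-style sketch. This is not imec.kelis code; all hardware numbers (peak throughput, memory bandwidth) and workload shapes are illustrative assumptions. It classifies an LLM matrix multiply as compute-bound or memory-bound on a hypothetical accelerator:

```python
# Minimal roofline-style estimate: is a GEMM compute- or memory-bound?
# All hardware numbers below are illustrative assumptions, not real device specs.

def gemm_arithmetic_intensity(m, n, k, bytes_per_elem=2):
    """FLOPs per byte moved for an (m x k) @ (k x n) matrix multiply,
    assuming each operand and the result cross memory exactly once."""
    flops = 2 * m * n * k                        # one multiply-add per inner-product term
    bytes_moved = bytes_per_elem * (m * k + k * n + m * n)
    return flops / bytes_moved

def roofline_time_s(m, n, k, peak_flops, mem_bw, bytes_per_elem=2):
    """Lower-bound execution time: the max of the compute-limited
    and the bandwidth-limited time."""
    flops = 2 * m * n * k
    bytes_moved = bytes_per_elem * (m * k + k * n + m * n)
    return max(flops / peak_flops, bytes_moved / mem_bw)

# Hypothetical accelerator: 200 TFLOP/s peak compute, 2 TB/s memory bandwidth.
PEAK, BW = 200e12, 2e12
ridge = PEAK / BW                                # intensity where the binding limit flips

# Decode-phase matrix-vector product (batch 1) vs. prefill-phase GEMM
# (4096 tokens) through a hypothetical 4096 x 4096 weight matrix.
decode_ai = gemm_arithmetic_intensity(1, 4096, 4096)
prefill_ai = gemm_arithmetic_intensity(4096, 4096, 4096)
print(f"ridge point: {ridge:.0f} FLOP/byte")
print(f"decode  AI:  {decode_ai:8.2f} FLOP/byte -> "
      f"{'compute' if decode_ai > ridge else 'memory'}-bound")
print(f"prefill AI:  {prefill_ai:8.2f} FLOP/byte -> "
      f"{'compute' if prefill_ai > ridge else 'memory'}-bound")
```

Even this toy model shows why LLM decode and prefill stress different subsystems: the batch-1 decode step moves nearly one byte per FLOP and is bandwidth-limited, while the large prefill GEMM is compute-limited. A full design space exploration extends such estimates across the interconnect and memory hierarchy.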
For more information and licensing, set up a meeting by clicking the contact button below.