
Imec.kelis: analytical performance modeling tool for AI data centers

The complexity of large language models (LLMs) demands ever-increasing performance from the systems that execute both their training and inference computations.

This AI data center scaling challenge can only be tackled efficiently through a hardware-software-technology co-design approach, one that enables post-exascale performance in HPC and AI systems.

Such an approach hinges on a clear view of the impact of various current and future AI workloads on compute, communication and memory subsystems.

That view is provided by imec.kelis, a performance modeling and design space exploration tool for LLM data centers. It’s built on imec’s expertise in analytical performance modeling for high-performance computing and artificial intelligence.

Some key features:

  • Returns results within seconds, allowing for interactive exploration.
  • Is validated to within a 12% worst-case error against large-scale LLM training and inference on NVIDIA A100 and H100 systems.
  • Comes with an easy-to-use interactive interface exposing key parameters.


imec.kelis analytical model

Use imec.kelis to optimize AI data center performance and TCO

The imec.kelis tool models the compute, communication, and memory subsystems of accelerators, as well as the larger systems they are part of, up to data center scale.

It consists of:

  • LLM task-graph analyzer
  • Parallelism mapper
  • Hierarchical roofline model
  • Topology-aware collective communication library
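To give a feel for the kind of analysis these components perform, here is a minimal, illustrative sketch of two standard building blocks of analytical data center modeling: a roofline performance estimate and a ring all-reduce communication cost. This is not imec.kelis code; all hardware numbers and function names are illustrative assumptions.

```python
# Illustrative sketch of analytical performance modeling (not imec.kelis
# internals). All hardware figures below are assumed example values.

def roofline_time(flops, bytes_moved, peak_flops, mem_bw):
    """Roofline model: a kernel's execution time is bounded either by
    compute throughput or by memory bandwidth, whichever is slower."""
    return max(flops / peak_flops, bytes_moved / mem_bw)

def ring_allreduce_time(message_bytes, num_devices, link_bw):
    """Bandwidth term of a ring all-reduce: each device transfers
    2*(p-1)/p of the message size over its link."""
    p = num_devices
    return 2 * (p - 1) / p * message_bytes / link_bw

# Example: a GEMM of 1e12 FLOPs moving 4e9 bytes on an accelerator with
# 300e12 FLOP/s peak and 2e12 B/s memory bandwidth (illustrative values).
t_compute = roofline_time(1e12, 4e9, 300e12, 2e12)

# Gradient all-reduce of 1 GB across 8 devices at 100 GB/s per link.
t_comm = ring_allreduce_time(1e9, 8, 100e9)
```

Analytical formulas like these evaluate in microseconds, which is what makes interactive, seconds-scale design space exploration possible where cycle-accurate simulation would take hours.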

In short, imec.kelis offers you an end-to-end framework to quickly and accurately evaluate and optimize your design choices.

For more information and licensing, set up a meeting by clicking the contact button below.