Join a forward-thinking team building advanced compiler infrastructure for next-generation AI hardware. As a Senior/Staff AI Graph Compiler Engineer, you will play a central role in developing and refining the compiler stack that bridges high-level AI frameworks and specialized compute architectures. Your work will directly influence how models execute across a range of processing units, ensuring optimal performance and efficient memory use.
What You'll Do
- Design and implement frontend and graph-level compiler components using MLIR to support AI workloads
- Develop and refine graph transformations including operator fusion, constant folding, operator sinking, and graph partitioning
- Extend and maintain MLIR dialects, passes, and supporting infrastructure
- Translate models from PyTorch, ONNX, and TensorFlow into internal representations for efficient execution
- Collaborate with hardware and backend teams to map AI computations effectively across heterogeneous systems
- Profile, debug, and improve compiler correctness, speed, and output quality
- Mentor team members in MLIR adoption and best practices
- Contribute to the long-term design of compiler tooling and architecture
What We're Looking For
Applicants should hold a Master's or PhD in Computer Science or a related field, with 3–5 years of software engineering experience, including at least two years focused on deep learning systems. Strong knowledge of MLIR, especially dialects, passes, and IR design, is essential. Experience with PyTorch, ONNX, or TensorFlow is required, along with proven work in graph-level optimizations such as fusion or partitioning.
Proficiency in C++ and Python is expected, along with a solid foundation in compiler principles. You should have experience working in collaborative engineering environments and be comfortable sharing knowledge and guiding peers.
Preferred Background
- Experience with custom AI accelerators or specialized compute hardware
- Familiarity with heterogeneous architectures involving CPUs, NPUs, GPUs, or dedicated accelerators
- Hands-on work with AI compiler frameworks such as Torch-MLIR, TVM, XLA, or Glow
- Track record of optimizing AI models for speed and efficiency
- Understanding of model ingestion pipelines, graph lowering, and compiler workflows
Work Environment
This role supports flexible arrangements, including remote work across Europe or onsite at locations in Belgium, the Netherlands, Switzerland, Italy, or the UK. Relocation support is available for candidates joining in Italy or the Netherlands. The team values open communication, technical ownership, and a culture of continuous innovation.
Compensation and Benefits
The position offers a competitive salary, pension plan, and comprehensive insurance coverage. Employees may also receive company shares. We foster an inclusive, diverse workplace that celebrates individual contributions and promotes growth through responsibility and collaboration.