NVIDIA is hiring a Senior Software Engineer – Inference Platform Infrastructure

NVIDIA is hiring a Senior Software Engineer – Inference Platform Infrastructure to build and automate the foundational infrastructure for our inference services. You will ensure these services are reliable, scalable, and easy to operate across thousands of GPUs through hands-on coding and automation.

What You'll Do

  • Build automation for inference at scale: provisioning, configuration, upgrades, rollbacks, and routine maintenance optimized for repeatability and safety.
  • Create and evolve deployment patterns for inference workloads on Kubernetes: rollouts, autoscaling, multi-cluster patterns, GPU scheduling/isolation, and safe upgrade strategies.
  • Own platform reliability outcomes through software: define and improve SLIs/SLOs, error budgets, alert quality, and automated remediation for common failure modes.
  • Own and operate a large fleet of NVIDIA GPUs and data center hardware from pre-release through production.

What We're Looking For

  • Strong software engineering skills; ability to build platforms and systems that our teams rely on.
  • 5+ years building and operating production distributed systems with strong ownership and a track record of improving reliability and eliminating toil.
  • Proven expertise in cloud-native platforms: Kubernetes, containers, service networking, configuration management, and modern CI/CD.
  • Deep experience with infrastructure-as-code and automation-first operations (e.g., GitOps workflows, policy enforcement, fleet management patterns).
  • Excellent communication and collaboration skills; ability to lead cross-functional efforts and drive improvements to completion.
  • BS/MS in Computer Science, Computer Engineering, or a related field, or equivalent experience.

Nice to Have

  • Direct experience operating inference serving at scale (Triton, TensorRT-LLM, KServe/Ray Serve, etc.).
  • Built scheduling, placement, or quota systems (priority queues, fairness, admission control, rate limiting) for Kubernetes.
  • Built fleet health systems: telemetry pipelines, automated quarantine/drain, and hardware/software failure triage automation.

Technical Stack

  • Kubernetes, Containers
  • Triton, TensorRT-LLM, KServe/Ray Serve

Benefits & Compensation

  • Compensation: $152,000 – $241,500 USD for Level 3; $184,000 – $287,500 USD for Level 4.
  • Eligible for equity.
  • Benefits (detailed on company benefits page).

NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.

Required Skills
Kubernetes, Containers, Triton, TensorRT-LLM, KServe, Ray Serve, Python, C++, Distributed Systems, GPU Computing, ML Inference, Performance Optimization, Microservices, Cloud Infrastructure
About NVIDIA
NVIDIA builds accelerated computing platforms and AI technologies that power advancements in areas such as generative AI, data centers, robotics, and digital twins.