As the Machine Learning Engineer Lead, you will drive the technical direction of our core AI security systems, shaping how we detect and respond to next-generation threats in generative AI. You'll bridge research and engineering, transforming novel findings into reliable, scalable models that underpin both defensive and offensive security capabilities.
Key Responsibilities
- Partner with security researchers to convert threat insights—such as prompt injection patterns—into deployable detection models, defining the technical path from concept to production.
- Lead the adaptation of language models using parameter-efficient fine-tuning (PEFT) methods such as LoRA, tailored for multilingual support and specific security objectives.
- Advance the platform toward multimodal threat detection by integrating models capable of identifying risks in images, audio, and combined inputs.
- Own and enhance the full ML lifecycle, improving CI/CD/CT (continuous training) pipelines for faster, more efficient training and deployment.
- Enforce strict data and model versioning using tools like DVC to ensure full reproducibility and auditability.
- Monitor models in production for performance degradation and concept drift, maintaining high reliability in live environments.
- Collaborate with platform engineers to embed ML services seamlessly into the broader system architecture.
- Mentor and lead ML engineers, promoting best practices in code quality, testing, and operational discipline.
- Manage GPU allocation and compute spending across training and inference, balancing cost and performance.
- Communicate technical trade-offs—such as latency versus accuracy or quantization impacts—to executives and clients in clear, accessible terms.
- Advocate for resource needs by articulating how infrastructure investments affect model effectiveness and system scalability.
Qualifications
You bring at least five years of hands-on experience in machine learning engineering, with a track record of leading technical initiatives and guiding engineering teams. You excel at turning complex technical realities into actionable insights for decision-makers.
- Proven expertise in streamlining ML workflows using tools such as MLflow, Kubeflow, Airflow, and data versioning systems like DVC.
- Strong software engineering foundation: proficient in Python, Docker, and Kubernetes, treating models as versioned, testable components.
- Deep familiarity with Transformer-based architectures, embedding techniques, and LLM fine-tuning using PyTorch, Hugging Face, and vLLM.
- Experience adapting models for multilingual environments (required).
Preferred Background
- Hands-on work with multimodal models, including vision-language systems like CLIP or LLaVA.
- Knowledge of generative AI security risks, particularly prompt injection and adversarial inputs.
- Experience accelerating inference through quantization, distillation, or optimized serving with vLLM.
- Familiarity with vector databases in retrieval-augmented generation (RAG) contexts.
Technology Stack
Python, Docker, Kubernetes, PyTorch, Hugging Face, vLLM, MLflow, Kubeflow, Airflow, DVC, LoRA, PEFT, Transformer architectures, Vector DBs, RAG
Why This Role Stands Out
You’ll operate at the intersection of AI innovation and cybersecurity, working in a culture that values technical depth, continuous learning, and cross-functional collaboration. The role offers significant autonomy, visibility with both technical and business stakeholders, and the chance to shape the future of AI security. Compensation is competitive and aligned with experience and impact.