Shape the future of safe, scalable content creation through machine learning. In this role, you'll lead the development of AI systems that automate content review and moderation at scale. You'll translate complex safety requirements into robust, production-grade machine learning solutions that protect users while enabling creative expression.
What You’ll Do
- Research, design, and deploy machine learning models to improve the accuracy and efficiency of content moderation systems
- Collaborate with product, design, and engineering leads to define technical roadmaps and deliver high-impact projects end-to-end
- Break down complex initiatives into actionable engineering tasks and drive them to completion
- Champion best practices in code quality, peer review, and system design to strengthen team output
- Mentor machine learning engineers, guiding them through technical challenges and solution design
- Stay current with advancements in AI and content safety to inform innovation and system evolution
- Advocate for engineering priorities with non-technical partners, ensuring alignment across teams
What We’re Looking For
- 5+ years of hands-on experience in machine learning engineering within a product or SaaS environment
- Proven ability to lead technical direction while working across product, design, and business functions
- Experience designing automation systems that integrate human oversight effectively
- Familiarity with multimodal AI, particularly text, image, video, or audio analysis, is a strong plus
Why This Work Matters
You’ll contribute to systems that help millions express themselves creatively while ensuring safety and trust. Your work will directly influence how content is reviewed and managed at scale, combining cutting-edge AI with thoughtful engineering to support a global user base. This role offers deep technical challenges, leadership opportunities, and the chance to shape tools that balance innovation with responsibility.