As an Audio ML Engineer, you will lead the development of machine learning models that enhance how audio systems perceive and adapt to environments and users. Your work will center on building and refining models for audio scene understanding, perceptual quality prediction, artifact detection, and personalization—ensuring outputs are optimized for real-world deployment.
Key Responsibilities
- Design and train deep learning models for audio perception tasks, including context classification, preference modeling, and personalization embeddings
- Develop inference-efficient architectures for on-device deployment, applying quantization, pruning, and distillation techniques
- Build cloud-based training pipelines that support fleet learning and batch evaluation, integrating cleanly with on-device inference
- Collaborate with DSP engineers to ensure model outputs drive adaptive audio parameters—such as EQ and spatial rendering—while preserving system stability and interpretability
- Define robust data strategies, including collection, labeling, augmentation, and bias assessment, to ensure reproducibility and product transferability
- Optimize models for constrained environments using ONNX, TFLite, TensorRT, or NNAPI, balancing accuracy with latency and memory use
- Work closely with cross-functional teams to establish integration standards, reference implementations, and validation metrics for product releases
- Leverage AI-powered development tools—such as coding assistants and automated analysis—to accelerate experimentation while maintaining rigorous validation
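To give a flavor of the on-device optimization work listed above, here is a minimal, framework-free sketch of the building block behind post-training quantization: affine (scale and zero-point) 8-bit quantization of a weight vector. All names and values are illustrative, not a specific framework API.

```python
def quantize_affine(weights, num_bits=8):
    """Affine (asymmetric) quantization: map floats to ints in [0, 2^b - 1]."""
    qmax = (1 << num_bits) - 1
    w_min, w_max = min(weights), max(weights)
    scale = (w_max - w_min) / qmax if w_max > w_min else 1.0
    zero_point = round(-w_min / scale)
    q = [max(0, min(qmax, round(w / scale) + zero_point)) for w in weights]
    return q, scale, zero_point

def dequantize_affine(q, scale, zero_point):
    """Recover approximate float weights from the quantized integers."""
    return [(qi - zero_point) * scale for qi in q]

# Hypothetical weight vector; real models carry millions of such values.
weights = [-0.42, 0.0, 0.13, 0.97, -1.5]
q, scale, zp = quantize_affine(weights)
recovered = dequantize_affine(q, scale, zp)
# Round-trip error stays within one quantization step (the scale).
max_err = max(abs(w - r) for w, r in zip(weights, recovered))
```

The accuracy-versus-memory trade-off in the role comes directly from this step size: fewer bits shrink the model but widen the reconstruction error.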
Qualifications
You bring an advanced degree in Computer Science, Electrical Engineering, Statistics, or Applied Machine Learning, or equivalent practical experience. A minimum of five years in applied ML engineering is required, with a strong preference for experience in audio, speech, or time-series modeling.
Essential skills include proficiency in Python, PyTorch or TensorFlow, and experience with model deployment in embedded or cloud environments. Familiarity with DSP fundamentals and how machine learning influences perceptual outcomes is critical. You have a track record of using AI-assisted tools to improve development speed and code quality.
Preferred candidates will have shipped ML features in consumer or automotive audio products, published research, or hold patents in relevant domains. Experience in speech enhancement, source separation, spatial audio, or privacy-preserving personalization is highly valued.
Work Environment
This is a fully remote role, provided the position does not require presence at a company or customer site. You’ll have access to extensive learning resources through an internal university platform, wellness benefits, tuition reimbursement, and employee recognition programs. The technical environment emphasizes modern AI tooling, MLOps practices, and cross-platform deployment frameworks.
Our Culture
We value innovation, inclusivity, and teamwork. We are committed to fostering professional growth and personal development in an environment where making a difference is part of everyday work. We take pride in building technology that improves lives.
Equal Opportunity
We encourage applications from all qualified individuals regardless of race, religion, color, national origin, gender, sexual orientation, gender identity, age, veteran status, disability, or other legally protected characteristics.
