Shape the future of our data platform as a Senior Data Engineer responsible for building and maintaining robust batch pipelines using PySpark and Python. You'll play a central role in defining how data moves across systems, ensuring it's accurate, observable, and ready for analytics and product use.
What You’ll Do
- Design, develop, and maintain scalable data pipelines with a focus on reliability and performance
- Optimize data architecture on AWS to support evolving business and technical needs
- Work closely with data scientists and engineers to deliver high-quality datasets for modeling and product features
- Help define best practices in CI/CD, testing, and version control using Git workflows
- Contribute to the evolution of the platform by exploring and implementing streaming data solutions
- Ensure data integrity and usability across downstream systems
What We’re Looking For
- Proven experience building and managing batch data pipelines
- Strong skills in PySpark and Python for data processing
- Hands-on experience designing and optimizing data systems on AWS
- Familiarity with engineering practices such as automated testing, CI/CD, and collaborative Git workflows
- Ability to work effectively with cross-functional teams including product, engineering, and data science
Work Environment
This role supports flexible working arrangements: either hybrid with presence in Geneva, Switzerland, or fully remote for candidates within the GMT+1 to GMT+4 time zones. We value trust, clear communication, and collective problem-solving. Our culture emphasizes learning, inclusion, and working with purpose and focus.