Position Overview
We’re looking for a Mid-Level Data Engineer to join our data platform team. In this role, you'll design, build, and maintain the data infrastructure that powers analytics and machine learning workflows across the organization.
Key Responsibilities
- Develop and maintain data pipelines using Python and SQL
- Implement data transformation workflows in dbt within a Databricks environment
- Collaborate with data scientists and analysts to understand requirements and deliver high-quality datasets
- Monitor pipeline performance, troubleshoot issues, and optimize for efficiency and scalability
- Contribute to data modeling efforts and support the evolution of our data architecture
Requirements
- 2–4 years of experience in data engineering or related roles
- Proficiency in Python, SQL, and Git-based workflows
- Hands-on experience with Databricks, dbt, and cloud data platforms (AWS or Azure)
- Familiarity with CI/CD practices and version-controlled development
- Strong problem-solving skills and attention to detail
Preferred Qualifications
- Experience with data orchestration tools such as Apache Airflow
- Knowledge of data governance, testing frameworks, or observability practices
- Exposure to machine learning pipelines or analytics engineering
