We're seeking an experienced Databricks Engineer to support a US-based client through short-term initiatives that often extend into longer engagements. Your work will focus on designing and refining data solutions within the Databricks environment, including platform migrations, real-time data integration, and performance optimization.
What You'll Do
- Plan technical approaches and choose the right tools for data engineering tasks
- Integrate data sources with near real-time capabilities
- Build and maintain scalable ETL workflows
- Migrate databases, platforms, and machine learning models to modern architectures
- Improve efficiency through automation and tuning of data platforms
- Work closely with data engineers, data scientists, and solution architects
What We Need
- At least 8 years in data engineering or a closely related field
- At least 2–3 years of proven experience with Databricks and Apache Spark, especially in ETL, migrations, and integrations
- Advanced Python programming skills
- Hands-on background in cloud data platforms, particularly Microsoft Azure (Data Factory, Synapse, Logic Apps, Data Lake) or AWS (Redshift, Athena, Glue)
- Ability to collaborate effectively and communicate clearly in English
- Self-driven mindset with a track record of taking initiative
Nice to Have
- Experience designing data workflows using tools like DBT, SSIS, or TimeXtender
- Familiarity with big data or NoSQL technologies such as Hadoop, EMR, or Redshift
Work Environment
This is a fully remote role with scheduling flexibility. We expect some overlap with CET business hours (e.g., 10:00–18:00), but are open to candidates across time zones. The recruitment process emphasizes transparency and respect, with clear communication at every stage.
Benefits
- 100% remote work with no mandatory travel
- Private medical coverage through Medicover
- Multisport card for active lifestyle support
- Simple, candidate-focused hiring experience
- Open and honest communication throughout employment
