Data Engineer (Remote in Mexico or Brazil)
Role Overview
In this role, you will design, build, and maintain efficient data systems that power analytics and decision-making. You will develop automated data workflows, ensure reliable integration across sources, and keep datasets clean and accessible in a centralized warehouse environment. Working closely with analytics professionals, you'll help translate complex data needs into scalable technical solutions.
Key Responsibilities
- Build and manage data pipelines using Python and Airflow to support timely and accurate data delivery.
- Collaborate with analytics and engineering teams to model and structure data for downstream use.
- Implement monitoring and alerting systems to maintain pipeline stability and performance.
- Enforce data integrity through validation, testing, and quality checks across ingestion and transformation stages.
- Optimize cloud data platforms for efficiency, reliability, and cost-effectiveness.
- Participate in architectural planning for data storage, orchestration, and infrastructure automation.
- Maintain clear documentation of data flows, models, and system configurations.
Required Qualifications
- Bachelor’s degree in computer science or a related technical field.
- Proven experience in data or software engineering roles with a focus on data infrastructure.
- Strong proficiency in Python and advanced SQL.
- Hands-on experience with Kubernetes and containerized environments.
- Familiarity with workflow orchestration tools such as Airflow.
- Knowledge of modern data formats including Parquet, ORC, and JSON.
- Experience working with enterprise data warehouses like Snowflake, Redshift, or BigQuery.
- Working knowledge of AWS services such as S3, Lambda, and Athena.
- Ability to work both independently and as part of a team.
- Strong problem-solving skills and attention to detail.
- Excellent written and verbal communication abilities.
Preferred Qualifications
- Experience with data transformation tools like dbt.
- Background in NoSQL or graph database technologies.
- Familiarity with data visualization platforms such as Power BI, Tableau, or Apache Superset.
- Exposure to the R programming language.
Technical Environment
Python, Airflow, dbt, SQL, Kubernetes, Snowflake, Redshift, BigQuery, AWS (S3, Lambda, Athena), Parquet, ORC, JSON
Work Environment
This is a remote role available to candidates based in Mexico or Brazil. The team operates in a flexible, collaborative, and globally connected setting. Our culture values speed, innovation, diligence, and ethical practices. We emphasize accuracy, respect, and trust, and welcome diverse perspectives.
Commitment to Inclusion
We are an Equal Opportunity Employer. All qualified applicants will be considered without regard to race, color, religion, sex (including pregnancy, gender identity, or expression), national origin, age, disability, genetic information, veteran status, or other protected characteristics. Our hiring practices reflect our dedication to fairness, merit, and business needs.