Design and maintain high-performance data pipelines that enable efficient ingestion, transformation, and storage of data from varied sources. Work closely with technical teams to build cloud-native solutions that support enterprise analytics and operational reporting.
Key Responsibilities
- Develop and optimize ETL workflows to ensure reliable data movement and transformation across systems
- Architect and implement data models that align with business intelligence and application needs
- Deploy and manage infrastructure on AWS and Azure platforms, including serverless functions, storage services, and message queues
- Build and integrate containerized applications using Docker and Kubernetes in scalable cloud environments
- Establish CI/CD pipelines using Jenkins, Git, and related tools to automate data engineering processes
- Collaborate with database administrators and developers to define data interfaces, APIs, and stored procedures
- Monitor system performance and data integrity using logging and observability platforms such as Splunk, CloudWatch, and Kibana
- Lead data migration efforts from legacy systems to modern architectures, supporting cloud adoption and system modernization
- Diagnose and resolve production issues to maintain system reliability and uptime
- Document technical designs, data flows, and operational procedures for audit and knowledge transfer purposes
Qualifications
Must have a Bachelor’s degree and at least five years of experience in data engineering or software development. Proficiency in Python, Java, SQL, and PySpark is required. Experience with relational and NoSQL databases—including Oracle, PostgreSQL, and MongoDB—is essential. Candidates must demonstrate hands-on experience with cloud technologies (AWS, Azure), container orchestration (Docker, Kubernetes), and DevOps tooling (Jenkins, Git, Maven).
Strong analytical abilities, problem-solving skills, and the capacity to explain technical concepts clearly to non-technical stakeholders are critical. Familiarity with Agile delivery methods and the ability to work independently in fast-moving environments are expected.
Preferred Background
- Experience supporting federal government programs or large-scale data transformation initiatives
- Cloud or data engineering certifications (AWS, Azure)
- Exposure to data visualization platforms such as Tableau or Kibana
- Knowledge of federal security and compliance standards
- Background in microservices, Spring Boot, or integrating AI/ML components
- Consulting experience in technology services