This role serves as a technical anchor for key enterprise accounts, delivering hands-on support for both standard and customized deployment environments. You'll troubleshoot complex technical issues, perform in-depth root cause analysis, and use tools like minikube, minitrino, and docker-compose to replicate and resolve customer-reported problems.
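As an illustration of the reproduction workflow described above, a local test environment might be sketched with a minimal docker-compose file. This is a hypothetical example, not a prescribed setup: the `trinodb/trino` image and its default port 8080 are real, but the service layout and mounted paths are illustrative only.

```yaml
# Hypothetical minimal compose file for reproducing a customer-reported
# issue against a single-node cluster (service layout is illustrative).
version: "3.8"
services:
  trino:
    image: trinodb/trino:latest   # community image; an SEP image would differ
    ports:
      - "8080:8080"               # Trino's default HTTP port
    volumes:
      # Mount a copy of the customer's catalog configuration to mirror
      # their connector setup as closely as possible
      - ./etc/catalog:/etc/trino/catalog
```

In practice, the same idea scales up through minitrino (for module-based Trino environments) or minikube (for Kubernetes-based deployments) when the customer's topology requires it.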
Key Responsibilities
- Respond to technical inquiries—both urgent and strategic—via the SFDC ticketing system
- Reproduce and diagnose issues efficiently, identifying underlying causes and delivering effective solutions
- Log defects in Jira for SEP and Galaxy platforms, and submit feature requests through Aha!
- Support customers during upgrade cycles, ensuring compliance with supported LTS versions
- Escalate unsupported LTS requests to the Account team when professional services are required
- Conduct regular technical reviews with business units, covering open tickets, updates on known issues, and recommended best practices
- Develop and refine internal and customer-facing documentation
- Lead training sessions for peers and support teams to strengthen collective expertise
- Act as a technical resource for content development teams
- Drive your own technical growth and contribute to cross-functional projects that improve team efficiency
- Identify systemic challenges and propose practical solutions across teams
- Provide input to management on training needs, project opportunities, and process improvements
Qualifications
Applicants should bring at least five years of technical support experience, including a minimum of three years focused on Big Data platforms and containerized environments. Proficiency in Hadoop, Spark, data lakes, Docker, and Kubernetes is essential. Experience with cloud infrastructure on AWS, Azure, or GCP is required, along with working knowledge of authentication and authorization systems such as LDAP and OAuth 2.0. Familiarity with SSL/TLS, Linux administration, SQL, and programming or scripting in Java, Python, or Bash is expected.
Technical Environment
The role operates within a modern data stack including Hadoop, Spark, Docker, Kubernetes, AWS, Azure, GCP, LDAP, OAuth 2.0, SSL/TLS, Linux, SQL, Java, Python, Bash, minikube, minitrino, docker-compose, Jira, Aha!, and SFDC.
Compensation and Work Environment
This position offers competitive compensation, attractive stock grants, and flexible paid time off. The role supports a globally distributed workforce with adaptable scheduling and remote collaboration. The organization values personal ownership, inclusive culture, and continuous growth, fostering an environment where diverse perspectives drive innovation. Equal employment opportunities are extended to all individuals, with no tolerance for discrimination or harassment based on protected characteristics.


