Shape the perception systems behind autonomous freight robots
Work on the core technology that allows mobile robots to understand and navigate complex environments such as trailers, docks, and freight facilities. You'll be responsible for building and refining the perception stack that processes data from LiDAR, cameras, and inertial sensors to enable safe and accurate movement around people and cargo.
What You'll Do
- Develop and maintain perception software for object detection, segmentation, localization, and mapping
- Integrate and calibrate sensors including depth cameras and IMUs on physical robot platforms
- Process and filter 3D point cloud data to ensure consistent performance in real-world conditions
- Design and implement sensor fusion solutions that combine multiple data sources for improved environmental awareness
- Assemble and manage datasets used for training, validating, and testing perception models
- Collaborate with navigation, controls, and hardware teams to align perception outputs with system needs
- Diagnose and resolve perception issues using recorded logs, sensor data, and on-site testing
- Support continuous integration and deployment workflows for perception software updates
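To give a flavor of the point-cloud processing work described above, here is a minimal sketch of one common filtering step, voxel-grid downsampling, written with NumPy. The function name and parameters are illustrative, not part of any existing codebase; production stacks typically use optimized libraries for this.

```python
import numpy as np

def voxel_downsample(points: np.ndarray, voxel_size: float) -> np.ndarray:
    """Reduce a point cloud to one representative point per voxel.

    points: (N, 3) array of XYZ coordinates.
    Returns the centroid of the points falling in each occupied voxel.
    """
    # Assign each point an integer voxel index.
    indices = np.floor(points / voxel_size).astype(np.int64)
    # Group points by voxel; `inverse` maps each point to its voxel group.
    _, inverse, counts = np.unique(
        indices, axis=0, return_inverse=True, return_counts=True
    )
    # Sum the points in each voxel, then divide by the count to get centroids.
    sums = np.zeros((counts.size, 3))
    np.add.at(sums, inverse, points)
    return sums / counts[:, None]

# Example: four points that fall into two voxels of size 1.0
cloud = np.array([[0.1, 0.1, 0.1],
                  [0.2, 0.2, 0.2],
                  [2.1, 2.1, 2.1],
                  [2.2, 2.2, 2.2]])
reduced = voxel_downsample(cloud, 1.0)  # two centroid points remain
```

Downsampling like this trades point density for throughput, which is one of the practical tuning decisions behind "consistent performance in real-world conditions."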
What We're Looking For
- Proven experience deploying perception software on robots in real-world customer environments
- Track record of building systems that perform reliably in unstructured or dynamic settings
- Strong debugging skills, particularly in diagnosing sensor-related issues on physical hardware
Technology You'll Use
LiDAR, RGB and depth cameras, IMUs, point cloud processing, sensor fusion, object detection, segmentation, localization, mapping, and CI/CD pipelines.
Work Environment
This is an onsite role based in Atlanta. You'll work directly with robots and hardware in a fast-paced, applied robotics environment focused on real-world deployment.