INTERNSHIP DETAILS

Perception Engineer - Apprentice

Company: Origin
Location: Bengaluru
Work Mode: On-site
Posted: January 24, 2026

Internship Information
Core Responsibilities
The role involves designing and deploying perception pipelines for autonomous drywall-finishing robots. Key tasks include building ROS 2 nodes, training deep-learning models, and collaborating with various teams to create production-ready perception stacks.
Internship Type: Full-time
Company Size: 105
Visa Sponsorship: No
Language: English
Working Hours: 40 hours

About The Company
Origin is building the World's First General Purpose Construction Robot. Nothing defines an era more clearly than the way a society builds. Yet construction remains the only major industry where productivity has declined over the last century. While the rest of the world has become mechanised and lightning-fast, construction is still slow, hazardous, and heavily manual. We are here to change that.

We are on a multi-decade journey to enable humanity to build anything, anywhere, autonomously, without ever compromising on craftsmanship. Origin is working with leading Trade Contractors and General Contractors in New York and will expand across the United States in the future.

If you want to define the future of automation in construction, please get in touch. Join the most talent-dense robotics team in the world. We are looking for passionate engineers who want to shape how the future is built. If you are ready to reinvent how humanity builds, apply here: https://apply.workable.com/10x-3/
About the Role

As a Perception Engineering Intern / Apprentice at Origin (formerly 10xConstruction), you will help our autonomous drywall-finishing robots "see" the job site. You'll design and deploy perception pipelines (camera + LiDAR fusion, deep-learning vision models, and point-cloud geometry) to give the robot the awareness it needs.

Key Responsibilities

* Build ROS 2 nodes for 3-D point-cloud ingestion, filtering, voxelisation and wall-plane extraction (PCL / Open3D).

* Train and integrate CNN / Transformer models for surface-defect detection and semantic segmentation.

* Implement RANSAC-based pose, plane and key-point estimation; refine with ICP or Kalman/EKF loops.

* Fuse LiDAR, depth camera, IMU and wheel odometry data for robust SLAM and obstacle avoidance.

* Optimize and benchmark models on Jetson-class edge devices with TensorRT / ONNX Runtime.

* Collect, label and augment real & synthetic datasets; automate experiment tracking (Weights & Biases, MLflow).

* Collaborate with manipulation, navigation and cloud teams to ship end-to-end, production-ready perception stacks.
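To give candidates a feel for the wall-plane extraction task above, here is a minimal NumPy sketch of RANSAC plane fitting, the core step that libraries like Open3D (`segment_plane`) and PCL wrap. This is illustrative only, not Origin's pipeline: the synthetic cloud, thresholds, and function name are assumptions for the example.

```python
import numpy as np

# Synthetic stand-in for a scan of a wall: points near the plane z = 0
# with small Gaussian noise, plus off-plane clutter.  A real pipeline
# would ingest sensor_msgs/PointCloud2 from a ROS 2 topic instead.
rng = np.random.default_rng(0)
wall = np.column_stack([rng.uniform(0, 3, 1000),
                        rng.uniform(0, 2, 1000),
                        rng.normal(0, 0.005, 1000)])
clutter = rng.uniform([0, 0, 0.2], [3, 2, 1.0], (200, 3))
cloud = np.vstack([wall, clutter])

def ransac_plane(pts, n_iter=200, tol=0.01):
    """Return (normal, d, inlier_mask) of the best-supported plane."""
    best_mask, best_model = None, None
    for _ in range(n_iter):
        # Sample 3 points and fit a candidate plane through them.
        sample = pts[rng.choice(len(pts), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:                # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal @ sample[0]
        # Count points within tol of the candidate plane.
        mask = np.abs(pts @ normal + d) < tol
        if best_mask is None or mask.sum() > best_mask.sum():
            best_mask, best_model = mask, (normal, d)
    return best_model[0], best_model[1], best_mask

normal, d, inliers = ransac_plane(cloud)
print("plane normal:", np.round(normal, 2))
print("inliers:", inliers.sum(), "of", len(cloud))
```

In production one would typically voxel-downsample first (as the bullet notes) so the fitting loop runs on far fewer points.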

Qualifications & Skills

* Solid grasp of linear algebra, probability and geometry; coursework or projects in CV or robotics perception.

* Proficient in **Python 3.x and C++17/20**; comfortable with git and CI workflows.

* Experience with **ROS 2 (rclcpp / rclpy)** and custom message / launch setups.

* Familiarity with **deep-learning vision** (PyTorch or TensorFlow)—classification, detection or segmentation.

* Hands-on work with **point-cloud processing** (PCL, Open3D); know when to apply voxel grids, KD-trees, RANSAC or ICP.

* Bonus: exposure to camera–LiDAR calibration, or real-time optimization libraries (Ceres, GTSAM).
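As a companion to the KD-tree / ICP item above, this is a hedged sketch of point-to-point ICP: a KD-tree supplies nearest-neighbour correspondences and a Kabsch (SVD) step solves the best rigid alignment each iteration. The toy clouds and ground-truth transform are invented for the example; real refinement would start from a RANSAC or odometry pose.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)
target = rng.uniform(-1, 1, (500, 3))

# Source cloud = target rotated 3 deg about z and translated slightly
# (a known ground-truth offset, so convergence is easy to check).
theta = np.deg2rad(3)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
t_true = np.array([0.02, -0.02, 0.01])
source = target @ R_true.T + t_true

def icp(src, dst, n_iter=20):
    """Align src onto dst; return (R, t, aligned_points)."""
    tree = cKDTree(dst)
    R, t = np.eye(3), np.zeros(3)
    cur = src.copy()
    for _ in range(n_iter):
        _, idx = tree.query(cur)          # nearest-neighbour matches
        matched = dst[idx]
        # Kabsch: optimal rotation between the centred point sets.
        mu_s, mu_d = cur.mean(0), matched.mean(0)
        H = (cur - mu_s).T @ (matched - mu_d)
        U, _, Vt = np.linalg.svd(H)
        R_step = Vt.T @ U.T
        if np.linalg.det(R_step) < 0:     # reject reflections
            Vt[-1] *= -1
            R_step = Vt.T @ U.T
        t_step = mu_d - R_step @ mu_s
        cur = cur @ R_step.T + t_step
        R, t = R_step @ R, R_step @ t + t_step
    return R, t, cur

R_est, t_est, aligned = icp(source, target)
rmse = np.sqrt(((aligned - target) ** 2).sum(1).mean())
print(f"post-ICP RMSE: {rmse:.4f}")
```

The "know when" part of the bullet is the real skill: RANSAC handles heavy outliers but gives a coarse model, while ICP assumes a decent initial pose and refines it, which is why the two are chained in that order.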

Why Join Us

* Work side-by-side with founders and senior engineers to redefine robotics in construction.

* Build tech that replaces dangerous, repetitive wall-finishing labor with intelligent autonomous systems.

* Help shape not just a product, but an entire company—and see your code on real robots at active job-sites.

Key Skills
Python 3.x, C++17/20, ROS 2, Deep-Learning Vision, Point-Cloud Processing, Camera-LiDAR Calibration, RANSAC, SLAM, TensorRT, ONNX Runtime, CNN, Transformer Models, IMU, Kalman, ICP, PCL, Open3D
Categories
Technology, Engineering, Construction, Data & Analytics, Software