INTERNSHIP DETAILS

Research Scientist Intern

Company: XPENG
Location: Santa Clara
Work Mode: On Site
Posted: March 18, 2026
Internship Information
Core Responsibilities
Conduct research on designing and implementing large-scale multi-modal architectures for autonomous driving. Collaborate with researchers and engineers to improve model interpretability and action quality.
Internship Type
Full-time
Company Size
2237
Visa Sponsorship
No
Language
English
Working Hours
40 hours
About The Company
XPENG is a leading Chinese Smart EV company that designs, develops, manufactures, and markets Smart EVs appealing to the large and growing base of technology-savvy middle-class consumers. Its mission is to drive Smart EV transformation with technology and data, shaping the mobility experience of the future. To optimize its customers’ mobility experience, XPeng develops its full-stack advanced driver-assistance system technology and in-car intelligent operating system in-house, as well as core vehicle systems including the powertrain and the electrical/electronic architecture. XPeng is headquartered in Guangzhou, China. In 2021, the Company established its European headquarters in Amsterdam, along with dedicated offices in Copenhagen, Munich, Oslo, and Stockholm. The Company’s Smart EVs are mainly manufactured at its plants in Zhaoqing and Guangzhou, Guangdong province. For more information, please visit https://heyxpeng.com.
XPENG is a leading smart technology company at the forefront of innovation, integrating advanced AI and autonomous driving technologies into its vehicles, including electric vehicles (EVs), electric vertical take-off and landing (eVTOL) aircraft, and robotics. With a strong focus on intelligent mobility, XPENG is dedicated to reshaping the future of transportation through cutting-edge R&D in AI, machine learning, and smart connectivity.
 

About the Role

We are actively seeking a full-time Research Scientist Intern to drive the modeling and algorithmic development of XPENG’s next-generation Vision-Language-Action (VLA) Foundation Model — the core brain that powers our end-to-end autonomous driving systems.
 
You will work closely with world-class researchers, perception and planning engineers, and infrastructure experts to design, train, and deploy large-scale multi-modal models that unify vision, language, and control. Your work will directly shape the intelligence that enables XPENG’s future L3/L4 autonomous driving products.

Key Responsibilities

  • Conduct research on designing and implementing large-scale multi-modal architectures (e.g., vision–language–action transformers) for end-to-end autonomous driving.
  • Design and integrate cross-modal alignment techniques (e.g., visual grounding, temporal reasoning, policy distillation, imitation and reinforcement learning) to improve model interpretability and action quality.
  • Closely collaborate with researchers and engineers across the modeling and infrastructure team.
  • Contribute to publications at top-tier AI/CV/ML conferences and present research findings.

Minimum Qualifications

  • Currently enrolled in a Master’s or Ph.D. program in Computer Science, Electrical/Computer Engineering, or a related field, with a specialization in CV, NLP, or ML.
  • Experience in multi-modal modeling (vision, language, or planning), with deep understanding of representation learning, temporal modeling, and reinforcement learning techniques.
  • Strong proficiency in PyTorch and modern transformer-based model design.

Preferred Qualifications

  • Publication record at top-tier AI conferences (CVPR, ICCV, ECCV, NeurIPS, ICLR, ICML, etc.).
  • Prior experience building foundation or end-to-end driving models, or LLM/VLM architectures (e.g., ViT, Flamingo, BEVFormer, RT-2, or GRPO-style policies).
  • Knowledge of RLHF/DPO/GRPO, trajectory prediction, or policy learning for control tasks.
  • Familiarity with distributed training (DDP, FSDP) and large-batch optimization.

What We Offer

  • A collaborative, research-driven environment with access to massive real-world data and industry-scale compute.
  • Opportunity to work with top-tier researchers and engineers advancing the frontier of foundation models for autonomous driving.
  • Direct impact on the next generation of intelligent mobility systems.
  • Comprehensive benefits, meals, and team-building activities.
  • Competitive compensation package.
 
We are an Equal Opportunity Employer. It is our policy to provide equal employment opportunities to all qualified persons without regard to race, age, color, sex, sexual orientation, religion, national origin, disability, veteran status, marital status, or any other protected category set forth in federal or state regulations.
Key Skills
Multi-Modal Modeling, Representation Learning, Temporal Modeling, Reinforcement Learning, PyTorch, Transformer-Based Model Design, Visual Grounding, Temporal Reasoning, Policy Distillation, Imitation Learning, Collaboration, Research Publication, Distributed Training, Large-Batch Optimization
Categories
Technology, Science & Research, Engineering
Benefits
Comprehensive Benefits, Meals, Team-Building Activities