INTERNSHIP DETAILS

Solution Architecture Intern, AI in Industry - 2026

Company: NVIDIA
Location: Beijing
Work Mode: On Site
Posted: March 6, 2026
Internship Information
Core Responsibilities
The intern will provide technical support for the adoption of NVIDIA technologies, applying knowledge in accelerated computing and machine learning to design and implement AI model optimizations. This includes setting up training/inference, identifying bottlenecks, and verifying methods to improve model efficiency.
Internship Type
Full-time
Company Size
46,209
Visa Sponsorship
No
Language
English
Working Hours
40 hours

About The Company
Since its founding in 1993, NVIDIA (NASDAQ: NVDA) has been a pioneer in accelerated computing. The company’s invention of the GPU in 1999 sparked the growth of the PC gaming market, redefined computer graphics, ignited the era of modern AI and is fueling the creation of the metaverse. NVIDIA is now a full-stack computing company with data-center-scale offerings that are reshaping industry.
About the Role

NVIDIA has been transforming computer graphics, PC gaming, and accelerated computing for more than 25 years. It’s a unique legacy of innovation that’s fueled by great technology—and amazing people. Today, we’re tapping into the unlimited potential of AI to define the next era of computing. An era in which our GPU acts as the brains of computers, robots, and self-driving cars that can understand the world. Doing what’s never been done before takes vision, innovation, and the world’s best talent. As an NVIDIAN, you’ll be immersed in a diverse, supportive environment where everyone is inspired to do their best work. Come join the team and see how you can make a lasting impact on the world.

Join NVIDIA, a groundbreaking leader in AI computing and visual technologies, at the forefront of innovation. As an AI in Industry Solution Architecture Intern, you'll be integral to our mission of redefining industries through AI and HPC. Our Solution Architect team builds innovative AI computing platforms, analyzes applications, and delivers outstanding value to our customers. This role offers a remarkable opportunity to harness NVIDIA's newest technologies to optimize large models, develop sophisticated AI workflows, and empower our clients with advanced AI solutions.

What you will be doing:

  • Provide technical support to internal developers and external customers, facilitating the adoption and implementation of NVIDIA technologies and products.

  • Apply your experience and knowledge in accelerated computing and machine learning to design and implement optimizations for various AI models and business scenarios.

  • Set up model training or inference, identify bottlenecks, and verify ways to improve model efficiency. Conduct surveys and experiments on learning models, and consolidate findings into guidelines and relevant papers.
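
The bullet above centers on identifying bottlenecks in a training or inference pipeline. As a minimal illustrative sketch (the stage functions here are hypothetical stand-ins, not NVIDIA tooling), a per-stage timing harness for comparing where time is spent might look like:

```python
import time
from statistics import mean

def profile_stage(fn, *args, warmup=3, iters=10):
    """Time one pipeline stage: warm up first, then average over iters runs."""
    for _ in range(warmup):
        fn(*args)
    times = []
    for _ in range(iters):
        start = time.perf_counter()
        fn(*args)
        times.append(time.perf_counter() - start)
    return mean(times)

# Hypothetical stages standing in for preprocess / forward pass / postprocess.
def preprocess(x):
    return [v * 2 for v in x]

def forward(x):
    return sum(v * v for v in x)

def postprocess(y):
    return y / 2

batch = list(range(1000))
for name, fn, arg in [("preprocess", preprocess, batch),
                      ("forward", forward, batch),
                      ("postprocess", postprocess, 42.0)]:
    print(f"{name}: {profile_stage(fn, arg) * 1e6:.1f} us")
```

In practice a GPU workload would use framework-level profilers rather than wall-clock timing, but the workflow is the same: measure each stage, find the slowest, then verify that a candidate optimization actually moves the number.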

What we need to see:

  • Pursuing a Bachelor's or Master's degree in Computer Science, AI, or a related field; or pursuing a PhD in ML infrastructure or data systems for ML.

  • Able to work in Linux environments, with strong programming skills in Python or C++.

  • Familiarity with AI models, including language models, video models, multi-modality models, or domain-specific models. Proficiency in at least one inference framework (e.g., TensorRT/TRT-LLM, ONNX Runtime, PyTorch, vLLM, SGLang, Dynamo).

  • Excellent problem-solving skills and the ability to troubleshoot complex technical issues.

  • Demonstrated ability to collaborate effectively across diverse, global teams, adapting communication styles while maintaining clear, constructive professional interactions.

Ways to stand out from the crowd:

  • Optimizing critical operators such as GEMM and attention mechanisms tailored to different GPU architectures to improve inference performance.

  • Conducting in-depth research on Speech LLM training and implementing audio classification.

  • Aligning performance with benchmark data to evaluate the accuracy of current modeling, including KV-cache and multi-modality modeling.

  • Familiarity with mainstream inference engines (e.g., vLLM, SGLang), or familiarity with disaggregated LLM Inference.

  • Experience with state-of-the-art RL methods for reasoning models, and the ability to consolidate best practices and relevant papers.
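
The first bullet in this list refers to hand-optimizing kernels such as GEMM and attention for specific GPU architectures. For reference, the unoptimized scaled dot-product attention that such kernels accelerate can be sketched in NumPy (a naive single-head baseline for intuition, not an optimized implementation):

```python
import numpy as np

def sdpa(q, k, v):
    """Naive scaled dot-product attention: softmax(q k^T / sqrt(d)) v."""
    scale = 1.0 / np.sqrt(q.shape[-1])
    scores = q @ k.T * scale
    scores -= scores.max(axis=-1, keepdims=True)  # subtract row max for stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # rows sum to 1
    return weights @ v
```

Optimized kernels (e.g., fused-attention implementations) compute the same result while tiling the score matrix to avoid materializing it in full, which is where architecture-specific tuning comes in.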

Key Skills
AI, HPC, Accelerated Computing, Machine Learning, Model Optimization, Linux, Python, C++, Language Models, Video Models, Multi-modality Models, TensorRT, TRT-LLM, ONNX Runtime, PyTorch, vLLM
Categories
Technology, Engineering, Software, Data & Analytics, Science & Research