INTERNSHIP DETAILS

Research Intern, RL & Post-Training Systems, Turbo (Summer 2026)

Company: Together AI
Location: San Francisco
Work Mode: On Site
Posted: January 6, 2026
Internship Information
Core Responsibilities
As a research intern, you will study reinforcement learning and post-training methods, focusing on their performance and scalability in relation to inference behavior. Projects will involve co-designing algorithms and systems to enhance experimentation capabilities.
Internship Type
Full-time
Salary Range
$58 - $63 per hour
Company Size
304
Visa Sponsorship
No
Language
English
Working Hours
40 hours

About The Company
Together AI is a research-driven AI cloud infrastructure provider. Our purpose-built GPU cloud platform empowers AI engineers and researchers to train, fine-tune, and run frontier-class AI models. Our customers include leading SaaS companies such as Salesforce, Zoom, and Zomato, as well as pioneering AI startups like ElevenLabs, Hedra, and Cartesia. We advocate for open-source AI and believe that transparent AI systems will drive innovation and create the best outcomes for society.
About the Role
About Together AI
Together AI is a research-driven artificial intelligence company. We believe open and transparent AI systems will drive innovation and create the best outcomes for society, and we are on a mission to significantly lower the cost of modern AI systems by co-designing software, algorithms, and models. We have contributed leading open-source research, models, and datasets to advance the frontier of AI, and our team has been behind technological advancements such as FlashAttention, Mamba, FlexGen, SWARM Parallelism, Mixture of Agents, and RedPajama.

Role Overview
The Turbo Research team investigates how to make post-training and reinforcement learning for large language models efficient, scalable, and reliable. Our work sits at the intersection of RL algorithms, inference systems, and large-scale experimentation, where the cost and structure of inference dominate overall training efficiency and shape which learning algorithms are practical.

As a research intern, you will study RL and post-training methods whose performance and scalability are tightly coupled to inference behavior, co-designing algorithms and systems rather than treating them independently. Projects aim to unlock new regimes of experimentation (larger models, longer rollouts, and more complex evaluations) by rethinking how inference, scheduling, and training interact.

Requirements
We're looking for research interns who want to work on foundational questions in RL and post-training, grounded in realistic inference systems.

You might be a strong fit if you:
- Are pursuing a PhD or MS in Computer Science, EE, or a related field (exceptional undergraduates considered).
- Have research experience in one or more of:
  - RL or post-training for large models (e.g., RLHF, RLAIF, GRPO, preference optimization).
  - ML systems (inference engines, runtimes, distributed systems).
  - Large-scale empirical ML research or evaluation.
- Are comfortable with empirical research:
  - Designing controlled experiments and ablations.
  - Interpreting noisy results and drawing principled conclusions.
- Can work across abstraction layers:
  - Strong Python skills for experimentation.
  - Willingness to modify inference or training systems (experience with C++, CUDA, or similar is a plus).
- Care about research insight, not just benchmarks:
  - You ask why methods work or fail under real system constraints.
  - You think about how infrastructure assumptions shape algorithmic outcomes.

Example Research Directions
Intern projects are tailored to your background and interests, and may include:
- Inference-Aware RL & Post-Training
  - Designing RL or preference-optimization objectives that explicitly account for inference cost and structure (e.g., speculative decoding, partial rollouts, controllable sampling).
  - Studying how inference-time approximations affect learning dynamics in GRPO-, RLHF-, RLAIF-, or DPO-style methods.
  - Analyzing the bias, variance, and stability trade-offs introduced by accelerated inference within RL loops.
- RL-Centric Inference Systems
  - Developing inference mechanisms that support deterministic, reproducible RL rollouts at scale.
  - Exploring batching, scheduling, and memory-management strategies optimized for RL and evaluation workloads rather than pure serving.
  - Investigating how KV-cache policies, sampling controls, or runtime abstractions influence learning efficiency.
- Scaling Laws & Cost-Quality Trade-offs
  - Empirically characterizing how reward improvement and generalization scale with rollout cost, latency, and throughput.
  - Quantifying when systems-level optimizations change algorithmic behavior rather than only reducing runtime.
  - Identifying regimes where inference efficiency unlocks qualitatively new learning capabilities.
- Evaluation & Measurement
  - Designing rigorous benchmarks and diagnostics for post-training and RL efficiency.
  - Studying failure modes in long-horizon training and how system constraints shape outcomes.

Preferred Qualifications
- Prior research experience with foundation models or efficient machine learning
- Publications at leading ML and NLP conferences (such as NeurIPS, ICML, ICLR, ACL, or EMNLP)
- Understanding of model optimization techniques and hardware acceleration approaches
- Contributions to open-source machine learning projects

Application Process
Please submit your application with:
1. A resume/CV
2. A cover letter that includes your preferred research areas, an academic transcript (unofficial is acceptable), and links to relevant projects or publications

Internship Program Details
Our summer internship program spans 12 weeks, during which you'll work with industry-leading engineers building a cloud from the ground up and may contribute to influential open-source projects. Internship dates are May 18 to August 7 or June 15 to September 4.

Compensation
We offer competitive compensation, housing stipends, and other competitive benefits. The estimated US hourly rate for this role is $58-$63/hr. Hourly rates are determined by location, level, and role; individual compensation is determined by experience, skills, and job-related knowledge.

Equal Opportunity
Together AI is an Equal Opportunity Employer and is proud to offer equal employment opportunity to everyone regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity, veteran status, and more.

Please see our privacy policy at https://www.together.ai/privacy
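To make the GRPO-style methods mentioned in the research directions concrete: GRPO scores each sampled completion against the other completions drawn for the same prompt, rather than against a learned value baseline. Below is a minimal, hypothetical sketch of that group-relative advantage computation (illustrative only, with made-up names; it is not the Turbo team's implementation):

```python
# Hypothetical sketch: GRPO-style group-relative advantages.
# Each rollout's reward is normalized against the group of rollouts
# sampled for the same prompt: a_i = (r_i - mean) / (std + eps).
import statistics

def group_relative_advantages(rewards: list[float]) -> list[float]:
    """Return one advantage per rollout, normalized within the group."""
    eps = 1e-6  # guards against zero std when all rewards are equal
    mu = statistics.fmean(rewards)
    sigma = statistics.pstdev(rewards)
    return [(r - mu) / (sigma + eps) for r in rewards]

# Example: four rollouts sampled for the same prompt.
advs = group_relative_advantages([1.0, 0.0, 0.5, 0.5])
```

Because advantages are relative within a group, any inference-time approximation that skews the sampled rewards (e.g., accelerated or truncated rollouts) shifts every advantage in the group, which is one reason the bias and variance effects listed above matter.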
Key Skills
Reinforcement Learning, Post-Training, Machine Learning, Python, C++, CUDA, Empirical Research, Controlled Experiments, Inference Systems, Large-Scale ML Research, Algorithm Design, Benchmarking, Data Analysis, Model Optimization, Distributed Systems, Open-Source Contributions
Categories
Technology, Science & Research, Data & Analytics, Software
Benefits
Housing Stipends, Competitive Benefits