INTERNSHIP DETAILS

Associate, Software Engineer - Intern

Company: Pearson
Location: Poznań
Work Mode: On Site
Posted: April 22, 2026
Internship Information
Core Responsibilities
The intern will support internal developer tooling and AI-driven engineering initiatives through prototyping, experimentation, and documentation. They will also assist in benchmarking AI platforms, creating evaluation datasets, and developing onboarding materials for internal teams.
Internship Type: Full-time
Company Size: 12,041
Visa Sponsorship: No
Language: English
Working Hours: 40 hours

About The Company
Our purpose is simple: to help people realize the life they imagine through learning. We believe that every learning opportunity is a chance for a personal breakthrough. That’s why our c. 20,000 Pearson employees are committed to creating vibrant and enriching learning experiences designed for real-life impact. We are the world’s leading learning company, serving customers in nearly 200 countries with digital content, assessments, qualifications, and data. For us, learning isn’t just what we do. It's who we are.
About the Role

Role: Associate, Software Engineer - Intern 

Location: Hybrid / Remote-friendly (team based in Poznań) 
Duration: 2 months 
Team: C4E AI Platform Enablement 
Start: 15th June 


Why join Pearson’s intern program? 

Pearson is the world’s learning company. We help people of all ages acquire the knowledge and skills they need to be successful in their work and careers. We believe that everyone should be able to keep learning, every day and in every way, throughout their lives. Bringing together everything we know about the science of learning and the latest technology, we’re shaping the future of teaching and learning. 

 

We’re looking for the next generation of talented graduates to join our team for an unforgettable internship. An internship at Pearson is an opportunity to bring your own unique perspective as a learner, together with your academic knowledge, technical skills, and enthusiasm, to help create products that address the skills challenges faced by learners, employees, and employers. 

 

As one of our interns, you will gain a comprehensive introduction to our business. You’ll be assigned to a team to support real-life projects that bring our products and services to life and are ready to launch to our learners.  

 

We foster a work environment that’s inclusive as well as diverse, where our people can be themselves. Every idea and perspective is valued so that our products reflect the people we serve, our teachers and students, employers and employees, and consumers and learners.


Introduction: 

We are looking for a motivated Software / AI Engineering Intern to join a team working on internal developer tooling and AI‑driven engineering initiatives. The intern will work closely with experienced engineers on hands‑on technical tasks related to prototyping, experimentation, and documentation of modern AI‑assisted development approaches. 

 

Required qualifications 

  • Strong Python fundamentals (scripts, notebooks, data handling)
  • Understanding of machine learning concepts (training vs inference, metrics)
  • Familiarity with Linux / command line
  • Interest in cloud platforms and AI infrastructure 

 

Nice‑to‑have qualifications 

  • Exposure to AWS, Azure, GCP, and/or Databricks (any one is sufficient)
  • Basic understanding of containers or REST APIs
  • Ability to write clear technical summaries
  • Documented hands-on experience with any of the above (GitHub, Stack Overflow, etc.) 


What you will work on 

Platform research & documentation 

  • Analyze and document capabilities of AI / MLOps platforms (e.g., Databricks and related tooling)
  • Help maintain internal comparison matrices and capability overviews
  • Turn technical notes and PoC outputs into clear internal documentation 

 

Experimentation & evaluation support 

  • Reproduce and extend existing AI platform proof‑of‑concepts
  • Run controlled experiments (training, evaluation, serving) under guidance
  • Create golden datasets for model evaluation
  • Collect and summarize metrics such as runtime, cost, and latency 
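As a rough illustration of the evaluation work described above, the sketch below scores a model's outputs against a small hand-built golden dataset and records simple runtime metrics. Everything here is hypothetical: the `golden` examples and the placeholder `predict` function are stand-ins, not Pearson internals or any specific platform's API.

```python
import time

# Hypothetical golden dataset: hand-checked input/expected-output pairs.
golden = [
    {"input": "2 + 2", "expected": "4"},
    {"input": "capital of France", "expected": "Paris"},
    {"input": "5 * 3", "expected": "15"},
]

def predict(text: str) -> str:
    """Stand-in for a real model call; here just a tiny lookup table."""
    answers = {"2 + 2": "4", "capital of France": "Paris", "5 * 3": "16"}
    return answers.get(text, "")

def evaluate(dataset):
    """Run predictions, score exact-match accuracy, and time each call."""
    correct, latencies = 0, []
    for example in dataset:
        start = time.perf_counter()
        output = predict(example["input"])
        latencies.append(time.perf_counter() - start)
        correct += output == example["expected"]  # bool counts as 0/1
    return {
        "accuracy": correct / len(dataset),
        "mean_latency_s": sum(latencies) / len(latencies),
    }

metrics = evaluate(golden)
print(metrics["accuracy"])  # prints 0.6666666666666666 (one deliberate miss)
```

The value of the golden set is that expected answers are fixed and human-verified, so a regression in the model shows up as a drop in a single, comparable number.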

 

AI observability & benchmarking 

  • Assist in defining simple evaluation and benchmarking scenarios 
  • Support preparation of dashboards or reports for model monitoring and observability 
  • Document gaps between desired and available observability signals 
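One simple benchmarking scenario of the kind mentioned above, sketched under the assumption that the system under test can be wrapped as a plain Python callable, is to repeat a request many times and report latency percentiles alongside the mean, which is the sort of summary a monitoring dashboard would surface. The `dummy_inference` workload is a placeholder for a real model-serving call.

```python
import statistics
import time

def benchmark(fn, runs=200):
    """Call fn repeatedly and summarize per-call latency in milliseconds."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        samples.append((time.perf_counter() - start) * 1000)
    samples.sort()
    return {
        "mean_ms": statistics.fmean(samples),
        "p50_ms": samples[len(samples) // 2],
        "p95_ms": samples[int(len(samples) * 0.95)],
    }

# Placeholder workload standing in for an inference request.
def dummy_inference():
    sum(i * i for i in range(1000))

report = benchmark(dummy_inference)
print(sorted(report))  # prints ['mean_ms', 'p50_ms', 'p95_ms']
```

Reporting percentiles rather than only the mean matters because tail latency (p95, p99) is usually what users of a serving endpoint actually notice.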

 

Enablement & onboarding materials 

  • Contribute to “getting started” guides and reference workflows 
  • Help create examples showing how teams can use the AI platform end‑to‑end 
  • Collect and summarize feedback from early adopters 

 

What you will learn 

  • Practical MLOps concepts: experiment tracking, model serving, monitoring, vibe-coding platforms 
  • How to evaluate AI solutions 
  • How to design developer‑facing platforms and internal tooling 
  • How to communicate technical findings to non‑expert stakeholders 
  • How to use state-of-the-art AI-based tools to streamline development 
  • How large organizations evaluate and adopt AI platforms 


Key Skills
Python, Machine learning, Linux, Command line, Cloud platforms, AI infrastructure, AWS, Azure, GCP, Databricks, Containers, REST APIs, Technical documentation, MLOps, Benchmarking, Data handling
Categories
Software, Technology, Engineering, Data & Analytics, Education