INTERNSHIP DETAILS

Data Engineer Internship

Company: Sky
Location: Milan
Work Mode: On Site
Posted: April 14, 2026
Internship Information
Core Responsibilities
Support the development and maintenance of batch and real-time data processing pipelines using Python, SQL, and cloud technologies. Collaborate with cross-functional teams to assist in delivering meaningful data solutions and improving data quality.
Internship Type
Full time
Company Size
35,322
Visa Sponsorship
No
Language
English
Working Hours
40 hours
About The Company
Sky connects and entertains millions of people across Europe. At the heart of everything we do is a belief that people deserve better. For decades, we've shaken up every category we entered to give people what they love, to make life a little easier and to provide great value. That's how we bring millions of customers the joy of a better experience in TV, broadband and mobile.

In TV, we offer the best sports coverage, unmissable TV and the smartest ways to stream and aggregate the TV you love. In broadband, we power homes and businesses with a fast, reliable connection. In mobile, we bring people closer, with plans at unbeatable value. And now, you can even keep your home connected and protected through our smart insurance. We design our products to fit seamlessly into your life, with service whenever and however you need it. That's how we do better for customers.

And we believe in better for society too. We power the cultural economy in the UK and beyond, making award-winning news, original sport, and entertainment. We contribute billions to UK GDP, creating and sustaining thousands of jobs and sharing both our journalism and our coverage of the arts, free of charge. We are cutting emissions and making recyclable, energy-efficient products, and we give back through free internet access and digital skills for under-served communities and young people. Sky is owned by Comcast Corporation, a global media and technology company.
About the Role

Our Data Engineering team builds highly intuitive data solutions that underpin business functions across Sales, Marketing, Finance and Operations. We are looking for a Data Engineer Intern who is passionate about learning how to build structured, high-quality data pipelines. You will work closely with experienced data engineers, gaining exposure to Python, SQL, and Google Cloud Platform (GCP) tools such as BigQuery and managed Airflow, while learning how large-scale data systems operate in a modern digital environment.

 

What you’ll do 

Support the development and maintenance of batch and real-time data processing pipelines using Python, SQL, and cloud technologies 
Collaborate with data engineers, data scientists, and software engineers to understand data requirements and assist in delivering meaningful data solutions
Participate in improving data quality by assisting with testing, validation, documentation, and basic performance checks 
Help conduct exploratory investigations and root-cause analyses on data issues under the guidance of senior team members 
Learn and apply software engineering best practices, including version control, code reviews, and continuous integration workflows 
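To give a flavour of the kind of work described above, here is a minimal sketch of a batch pipeline step in plain Python. It is purely illustrative and not Sky's actual stack: the table names (`raw_events`, `clean_events`) and the validation rule are hypothetical, and stdlib `sqlite3` stands in for a warehouse such as BigQuery.

```python
# Illustrative batch pipeline: extract raw rows, apply a basic
# data-quality check, and load the clean set. All names are hypothetical.
import sqlite3


def run_batch_pipeline(conn: sqlite3.Connection) -> int:
    """Extract, transform, and load; returns the number of rows kept."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS clean_events (user_id TEXT, amount REAL)"
    )
    # Extract
    rows = conn.execute("SELECT user_id, amount FROM raw_events").fetchall()
    # Transform: drop malformed rows (missing user, negative amount)
    clean = [(u, float(a)) for u, a in rows
             if u is not None and a is not None and float(a) >= 0]
    # Load
    conn.executemany("INSERT INTO clean_events VALUES (?, ?)", clean)
    conn.commit()
    return len(clean)


if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE raw_events (user_id TEXT, amount REAL)")
    conn.executemany(
        "INSERT INTO raw_events VALUES (?, ?)",
        [("u1", 9.99), (None, 5.0), ("u2", -1.0), ("u3", 3.5)],
    )
    print(run_batch_pipeline(conn))  # → 2 (only the valid rows survive)
```

In production such a step would typically run on a schedule as an Airflow task, with the validation rules, tests, and documentation the bullet points above mention.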

 

Who you are 

You have a strong interest in software engineering and data technologies, and you are motivated to learn how modern data infrastructure is designed and operated 
You enjoy problem-solving, working with data, and collaborating with cross-functional teams 
You are curious, adaptable, and eager to learn new tools, frameworks, and engineering practices 
You value reliability, maintainability, and quality in your work 

 

Must have 

Currently pursuing or recently completed a BSc in Computer Science, Mathematics, Engineering, or a related quantitative field 
Foundational knowledge of Python and SQL (coursework or personal projects acceptable) 
Basic understanding of databases and data processing concepts 
Strong analytical and problem-solving skills 
Proficiency in English 

 

Nice to have 

Exposure to cloud platforms such as GCP, AWS, or Azure 
Experience with data processing frameworks (e.g., Apache Beam, Apache Spark) from coursework or personal experimentation 
Interest in large-scale data systems and modern pipeline architectures

Key Skills
Python, SQL, Google Cloud Platform, BigQuery, Airflow, Data Pipelines, Data Engineering, Software Engineering, Version Control, Continuous Integration, Data Quality, Root-cause Analysis, Batch Processing, Real-time Processing, Data Systems
Categories
Data & Analytics, Technology, Software, Engineering