Data Engineering Intern (AI & Automation)

About your role
We are looking for a forward-thinking Data Engineering Intern to join our team for Summer 2026. This isn't just a role about moving data from point A to point B; it’s about rethinking how those pipelines are built. You will sit at the intersection of core infrastructure and AI, building robust ETL processes while leveraging AI technologies to automate the very workflows you create.
Your mission is dual-purpose: you will support our Business Intelligence efforts and directly influence our product roadmap by turning raw data into actionable insights through advanced dashboards and automated workflows.
A Day in the Life
- Build & Orchestrate: Design, develop, and maintain ETL pipelines to ingest data into our Snowflake warehouse using Python, SQL, and Airflow.
- AI-Driven Automation: Implement AI-powered solutions to streamline engineering tasks, including:
  - Automating code generation and documentation.
  - Building AI-driven data quality checks and anomaly detection.
  - Developing "self-healing" pipelines that can identify and alert on ingestion errors.
- Insight Generation: Use Jupyter Notebooks and Streamlit to analyze data and build internal tools that help our product team make data-driven decisions.
- Visualization: Create high-impact dashboards in Tableau that translate complex data into a clear narrative for stakeholders.
- Agile Collaboration: Participate in daily Scrum huddles, manage tasks via Jira, and work closely with product owners and QA to promote code to production.
- Cloud Infrastructure: Interact with cloud services via the command line and manage containerized environments using Docker and Kubernetes.
What you’ll need
- Academic Status: Currently pursuing an undergraduate degree in Computer Science, Data Science, or a related quantitative field, with a targeted graduation date in 2026 or early 2027.
- Programming Mastery: Expertise in Python and SQL.
- Infrastructure Mindset: A strong understanding of Data Warehousing (Snowflake) and ETL orchestration (Airflow).
- DevOps Curiosity: Familiarity with the command line, Docker, and Kubernetes for managing cloud-based environments.
- Analytical Toolkit: Experience with Jupyter Notebooks, Tableau, or Streamlit.
- Problem Solver: A proactive approach to using AI/LLMs to automate repetitive tasks and improve system reliability.
The Summer "Win"
By the end of your 10-week internship, you will have designed and deployed a functional ETL pipeline that feeds into a custom Insights Dashboard. Your work won't just sit in a folder; it will be used to influence our official product roadmap.
Interview Process
- Recruiter Phone Screen
- Technical Assessment
- Hiring Manager Interview
- Team Interview
Duration: June 8, 2026 - August 14, 2026, working 40 hours per week
Hourly pay by location:
- San Francisco Bay Area, CA: $50.00 per hour
- California (outside of San Francisco Bay Area): $46.25 per hour
- Colorado: $42.50 per hour
- Utah, Arizona, and North Carolina: $40.00 per hour
Actual compensation packages are determined by various factors unique to each candidate, including but not limited to skill set, depth of experience, certifications, specific work location, and performance during the interview process. In addition to base salary, this role may include variable compensation and be eligible for an equity grant, depending on the position and level.
By applying for this position, you consent to your data being processed in accordance with the Rocket Lawyer Privacy Policy.