INTERNSHIP DETAILS

AI Engineering and Data Science Intern

Company: Aurora Cooperative Elevator Company
Location: Aurora
Work Mode: On Site
Posted: January 5, 2026
Internship Information
Core Responsibilities
The intern will help design, build, and ship AI-powered solutions, contributing to production-grade systems alongside experienced engineers. Responsibilities include developing AI applications, automating testing, and monitoring system performance.
Internship Type: Other
Company Size: 273
Visa Sponsorship: No
Language: English
Working Hours: 40 hours

About The Company
Aurora Cooperative Elevator Company is a farmer-owned agricultural cooperative serving more than 70 rural communities. It was formed in 1908 by a group of 22 farmers so that, together, they could provide goods and services they could not obtain on their own. The cooperative continues that purpose today across its four core divisions: Grain, Agronomy, Animal Nutrition, and Energy.
About the Role

Job Details

Job Location: Aurora Corporate - Aurora, NE 68818

AI Engineering and Data Science Intern

Location: Omaha, Nebraska (Aurora Cooperative office)

About Aurora Cooperative

Aurora Cooperative is a farmer-owned agricultural cooperative headquartered in Aurora, Nebraska. We provide the inputs and services that power farming operations, including seed, fertilizer, crop protection, animal nutrition, and energy products, and we operate grain elevators across our network. Our mission is to create value for our owners by offering top-quality products, services, and expertise.

Role Overview

Join the Data Science team to help design, build, and ship AI-powered solutions that improve how we serve our members and run the business. You won’t just be running experiments; you’ll contribute to production-grade systems alongside experienced engineers. You’ll help us professionalize our AI operations by contributing to automated testing frameworks, deployment pipelines, and observability tools for applications like chatbots, RAG assistants, and text-to-SQL interfaces.
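
For a sense of what one of these applications involves, here is a minimal text-to-SQL sketch. The complete() stub, the grain_receipts schema, and the prompt are illustrative assumptions, not Aurora Cooperative's actual implementation; a production version would call a real LLM provider and query a warehouse such as Snowflake rather than SQLite.

```python
# Minimal text-to-SQL sketch. complete() is a hypothetical stand-in for a
# real LLM call, and SQLite stands in for a production warehouse such as
# Snowflake; both are assumptions for illustration only.
import sqlite3

SCHEMA = """
CREATE TABLE grain_receipts (
    receipt_id INTEGER PRIMARY KEY,
    elevator   TEXT,
    commodity  TEXT,
    bushels    REAL,
    received   DATE
);
"""

PROMPT = (
    "You are a SQL assistant. Given this schema:\n{schema}\n"
    "Write one read-only SQLite SELECT answering: {question}\n"
    "Return only the SQL."
)

def complete(prompt: str) -> str:
    """Stand-in for a model call; returns a canned query so the sketch runs."""
    return "SELECT commodity, SUM(bushels) FROM grain_receipts GROUP BY commodity;"

def answer(question: str, conn: sqlite3.Connection) -> list[tuple]:
    sql = complete(PROMPT.format(schema=SCHEMA, question=question)).strip()
    # Guardrail: refuse to execute anything other than a plain SELECT.
    if not sql.lower().startswith("select"):
        raise ValueError(f"refusing to run non-SELECT statement: {sql!r}")
    return conn.execute(sql).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.executescript(SCHEMA)
    conn.execute("INSERT INTO grain_receipts VALUES (1, 'Aurora', 'corn', 5200.0, '2026-01-05')")
    print(answer("Total bushels received by commodity?", conn))
```

Note the guardrail that refuses anything but a plain SELECT; checks like this are typical before letting generated SQL touch real data.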


What You’ll Do

  • Build & Deploy AI Applications: Help develop AI applications (chatbots, RAG knowledge assistants, text-to-SQL) and contribute to deployment and upgrade processes.
  • Automate Testing & Evaluation: Contribute to automated testing frameworks that evaluate model performance, retrieval accuracy (RAG), and agent decision-making before production.
  • Develop Retrieval Pipelines: Help engineer retrieval systems, including chunking strategies, metadata management, embedding generation, and vector database indexing (see the sketch after this list).
  • Monitor & Optimize: Assist with monitoring of system performance, latency, errors, and costs, and help build dashboards to track user interactions and model behavior.
  • Curate Knowledge Bases: Help build and maintain domain-specific knowledge bases (agronomy, grain, logistics) using automated pipelines.
  • Data Engineering: Work with internal systems and SQL databases (Snowflake) to feed data into AI models and process their outputs.
  • Collaborate & Communicate: Work with data engineers and domain experts to identify high-impact use cases, and present technical findings to stakeholders.
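
For illustration, the sketch below walks through the retrieval steps named in the third bullet: chunking, embedding generation, indexing, and similarity search. Hashed bag-of-words vectors stand in for a real embedding model and an in-memory list stands in for a vector database; both are assumptions made for this sketch, not the team's actual stack.

```python
# Minimal retrieval-pipeline sketch: chunking, embedding, indexing, search.
# The "embeddings" here are toy hashed bag-of-words vectors and the "vector
# database" is a plain list; illustrative assumptions only.
import math
import re

DIM = 256  # width of the toy embedding space

def chunk(text: str, size: int = 40) -> list[str]:
    """One simple chunking strategy: fixed-size word windows."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def embed(text: str) -> list[float]:
    """Toy embedding: hash each token into a fixed-width, L2-normalized count vector."""
    vec = [0.0] * DIM
    for token in re.findall(r"[a-z0-9]+", text.lower()):
        vec[hash(token) % DIM] += 1.0
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

def top_k(query: str, index: list[tuple[str, list[float]]], k: int = 3) -> list[tuple[float, str]]:
    """Cosine-similarity search over the in-memory 'vector database'."""
    q = embed(query)
    scored = [(sum(a * b for a, b in zip(q, vec)), text) for text, vec in index]
    return sorted(scored, reverse=True)[:k]

# Index a made-up agronomy note, then retrieve chunks for a query.
docs = ["Urea and anhydrous ammonia are common nitrogen fertilizers for corn. "
        "Application timing and soil temperature affect nitrogen loss."]
index = [(c, embed(c)) for d in docs for c in chunk(d)]
print(top_k("nitrogen fertilizer for corn", index))
```

A production pipeline would swap embed() for calls to an embedding model and top_k() for queries against a managed vector index, but the overall shape stays the same.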

Minimum Qualifications

  • Pursuing a degree in Computer Science, Data Science, Engineering, Statistics, or a related field (or equivalent practical experience).
  • Hands-On Build Experience: You have built at least one AI-enabled application (e.g., chatbot, RAG system, agent) and can explain how it works.
  • AI-Native Workflow: You are comfortable using AI coding assistants (Cursor, GitHub Copilot, Claude Code, etc.) to accelerate your development.
  • Core Tech Stack: Proficiency in Python and SQL, plus comfort with REST APIs and Git version control.
  • Problem Solving: Ability to break down complex problems, debug systems, and learn new technologies quickly.

Preferred Qualifications

  • Experience with LLM Evaluation: Familiarity with frameworks or methods for evaluating the quality of LLM outputs (e.g., RAGAS, TruLens, or custom metrics); a custom-metric sketch follows this list.
  • DevOps / CI/CD Exposure: Understanding of continuous integration and deployment concepts, containerization (Docker), or automated testing.
  • Cloud & Data: Experience with Snowflake, Azure, or similar cloud data platforms.
  • Vector Search: Understanding of vector databases, embeddings, and semantic search concepts.
  • Frontend Skills: Ability to build quick prototypes using Streamlit or similar tools.
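
As one example of the custom metrics mentioned in the first bullet, the sketch below scores whether a model's answer recalls required keywords from a reference answer and gates promotion on the worst-scoring case. The test cases and threshold are invented for illustration, not a real regression suite.

```python
# Minimal custom-metric sketch for evaluating LLM outputs. The cases and the
# 0.8 threshold are illustrative assumptions only.
def keyword_recall(answer: str, required: set[str]) -> float:
    """Fraction of required keywords that appear in the model's answer."""
    found = {kw for kw in required if kw.lower() in answer.lower()}
    return len(found) / len(required)

CASES = [
    # (model answer under test, keywords the reference answer requires)
    ("Aurora Cooperative was founded in 1908 by 22 farmers.",
     {"1908", "farmers"}),
    ("The four core divisions are Grain, Agronomy, Animal Nutrition, and Energy.",
     {"grain", "agronomy", "animal nutrition", "energy"}),
]

def run_suite(threshold: float = 0.8) -> bool:
    """Run every case and gate promotion on the worst-scoring one."""
    scores = [keyword_recall(answer, required) for answer, required in CASES]
    print("per-case recall:", scores)
    return min(scores) >= threshold

if __name__ == "__main__":
    assert run_suite(), "evaluation gate failed; do not promote this build"
```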

What You’ll Gain

  • Real-World Impact: Contribute code that real employees use to solve real agricultural problems.
  • End-to-End Exposure: See projects through from the business-case phase to deployment and monitoring, with guidance from senior team members.
  • Mentorship: Work directly with experienced data scientists and engineers who will help you grow your technical and professional skills.
  • Domain Knowledge: Learn how AI is applied in the complex world of modern agriculture and supply-chain logistics.

Work Environment

  • This internship is based in our Omaha office, with some hybrid remote work possible.

Key Skills
Python, SQL, AI Applications, Automated Testing, Data Engineering, Collaboration, Problem Solving, REST APIs, Git, LLM Evaluation, DevOps, Cloud Platforms, Vector Databases, Frontend Skills, Monitoring, Knowledge Bases
Categories
Technology, Data & Analytics, Engineering, Agriculture, Software