Sr. Big Data Engineer

  • General Mills
  • Minneapolis, MN, USA
  • Nov 26, 2019
Full-time | Big Data, Data Analysis, Data Architecture, Data Engineering, Data Science, Data Warehousing, ETL, Hadoop, SQL, Tableau

Job Description

WHAT YOU’LL DO 

As a Data Engineer, you will work closely with a multidisciplinary agile team to build high-quality data pipelines that drive analytic solutions. These solutions will generate insights from our connected data, enabling General Mills to advance the data-driven decision-making capabilities of our enterprise.

This role requires a deep understanding of data architecture, data engineering, data analysis, and reporting, as well as a basic understanding of data science techniques and workflows. 

In this role you will: 

  • Design, develop, optimize, and maintain data architecture and pipelines that adhere to ETL principles and business goals. 
  • Solve complex data problems to deliver insights that help the business achieve its goals 
  • Create data products for analytics and data science team members to improve their productivity 
  • Advise, consult, mentor, and coach other data and analytics professionals on data standards and practices 
  • Foster a culture of sharing, re-use, design for scale, stability, and operational efficiency of data and analytical solutions 
  • Lead the evaluation, implementation, and deployment of emerging tools and processes for analytic data engineering to improve our productivity as a team 
  • Develop and deliver communication & education plans on analytic data engineering capabilities, standards, and processes 
  • Partner with business analysts and solutions architects to develop technical architectures for strategic enterprise projects and initiatives. 
  • Learn about machine learning, data science, computer vision, artificial intelligence, statistics, and/or applied mathematics

WHO YOU ARE 

  • Bachelor’s Degree 
  • 5 years of experience working in a data engineering or architecture role
  • Expertise in SQL and data analysis, and experience with at least one programming language
  • Experience developing and maintaining data warehouses in big data solutions 
  • Big Data development experience using some or all of the following: Hive, BigQuery, Impala, and Spark, along with familiarity with Kafka
  • Experience working with BI tools such as Tableau, Power BI, Looker, or Shiny 
  • Conceptual knowledge of data and analytics, such as dimensional modeling, ETL, reporting tools, data governance, data warehousing, and structured and unstructured data. 
  • Exposure to machine learning, data science, computer vision, artificial intelligence, statistics, and/or applied mathematics 
  • Passion for agile software processes, data-driven development, reliability, and experimentation 
  • Experience working on a collaborative agile product team  
  • Excellent communication, listening, and influencing skills 

WHAT’S NICE TO HAVE 

  • Bachelor’s degree in Computer Science, MIS, or Engineering 
  • 7+ years of applicable work experience
  • Experience developing solutions on cloud computing services and infrastructure in the data and analytics space 
  • Experience in Python or Scala 
  • Big Data development experience using Hive, Impala, and Spark, along with familiarity with Kafka 
  • Familiarity with the Linux operating system  
  • Experience with OLAP tools such as AtScale, SSAS, SAP BW, or Essbase
  • Knowledge of Data Preparation, Data Wrangling, and Feature Engineering