Big Data/Hadoop Developer - Solution Specialist

  • Deloitte
  • Gilbert, AZ, USA
  • Oct 10, 2018
Full-time · Cassandra, Hadoop, HBase, Hive, Kafka, NoSQL, Pig, Spark, SQL, Unix

Job Description

Locations: Gilbert, AZ; Mechanicsburg, PA 

Are you an experienced, passionate pioneer in technology – a solutions builder, a roll-up-your-sleeves technologist who wants to work in a daily collaborative environment with a think-tank feel, sharing new ideas with your colleagues – without the extensive demands of travel? If so, consider an opportunity with our US Delivery Center – we are breaking the mold of a typical Delivery Center.

Our US Delivery Centers have been growing since 2014 with significant, continued growth on the horizon. Interested? Read more about our opportunity below…

The team

In this age of disruption, organizations need to navigate the future with confidence, embracing decision making with clear, data-driven choices that deliver enterprise value in a dynamic business environment. 

The Analytics & Cognitive team leverages the power of data, analytics, robotics, science and cognitive technologies to uncover hidden relationships from vast troves of data, generate insights, and inform decision-making. Together with the Strategy practice, our Strategy & Analytics portfolio helps clients transform their business by architecting organizational intelligence programs and differentiated strategies to win in their chosen markets.

Analytics & Cognitive will work with our clients to:

  • Implement large-scale data ecosystems including data management, governance and the integration of structured and unstructured data to generate insights leveraging cloud-based platforms
  • Leverage automation, cognitive and science-based techniques to manage data, predict scenarios and prescribe actions
  • Drive operational efficiency by maintaining their data ecosystems, sourcing analytics expertise and providing As-a-Service offerings for continuous insights and improvements

Qualifications

Required

  • Strong communication skills, both written and oral
  • Excellent teamwork and interpersonal skills
  • Ability to manage small engagements or work streams within larger engagements
  • Aptitude for troubleshooting and problem-solving
  • Strong technical skills including understanding of software development principles
  • Hands-on programming experience
  • Ability to travel up to 25%

Preferred

  • Ability to deploy and maintain a multi-node Hadoop cluster
  • 3+ years of experience working with the Big Data ecosystem, including tools such as Hadoop, MapReduce, YARN, Hive, Pig, Impala, Spark, Kafka, and Storm, to name a few
  • Knowledgeable in techniques for designing Hadoop-based file layout optimized to meet business needs
  • Understands the tradeoffs between different approaches to Hadoop file design
  • Experience with techniques of performance optimization for both data loading and data retrieval
  • Experience with NoSQL Databases – HBase, Apache Cassandra, Vertica, or MongoDB
  • Able to translate business requirements into logical and physical file structure design
  • Ability to build and test MapReduce code in a rapid, iterative manner
  • Ability to articulate reasons behind the design choices being made
  • Bachelor of Science in Computer Science, Engineering, or MIS, or equivalent experience
  • Strong communication and presentation skills
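As a concrete illustration of the MapReduce skills listed above, the classic word-count job can be sketched in the Hadoop Streaming style (mapper emits key/value pairs, reducer aggregates by key). This is a minimal, self-contained sketch only; in a real cluster the mapper and reducer would run as separate scripts under `hadoop-streaming.jar`, with Hadoop handling the shuffle/sort between phases:

```python
#!/usr/bin/env python3
"""Minimal word count in the MapReduce pattern (Hadoop Streaming style).

Illustrative sketch: mapper emits (word, 1) pairs; the framework
normally sorts them by key between phases; reducer sums per word.
"""
import sys
from itertools import groupby


def mapper(lines):
    # Map phase: emit a (word, 1) pair for each token in the input.
    for line in lines:
        for word in line.split():
            yield word.lower(), 1


def reducer(pairs):
    # Reduce phase: Hadoop delivers mapper output sorted by key, so
    # we sort here to simulate the shuffle, then sum counts per word.
    for word, group in groupby(sorted(pairs), key=lambda kv: kv[0]):
        yield word, sum(count for _, count in group)


if __name__ == "__main__":
    # Local stand-in for a streaming run: read text on stdin,
    # print "word<TAB>count" lines, as a reducer script would.
    for word, total in reducer(mapper(sys.stdin)):
        print(f"{word}\t{total}")
```

In an actual Hadoop Streaming deployment the mapper and reducer would be split into two scripts and wired together on the command line; the in-process pipeline above just demonstrates the map/shuffle/reduce flow.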

Additional Requirements

Strong technical expertise in:

  • Hadoop (Cloudera distribution)
  • Apache Accumulo
  • SQL and NoSQL data stores
  • UNIX, Java
  • Kafka
  • Apache Spark
  • Apache Sentry

Job ID

E19GILCACSLL200-SA