Big Data Engineer Consultant - IoT BigData Jobs

Big Data Engineer Consultant

Accenture

Organization: Digital Growth Platform

Location: Multiple US Locations

Travel: Travel Required

Join Accenture and help transform leading organizations and communities around the world. The sheer scale of our capabilities and client engagements and the way we collaborate, operate and deliver value provides an unparalleled opportunity to grow and advance. Choose Accenture, and make delivering innovative work part of your extraordinary career.

People in our Client & Market career track drive profitable growth by developing market-relevant insights to increase market share or create new markets. They progress through promotion into market-facing roles that have a direct impact on sales.

Analytics professionals create new insights from predictive statistical modeling activities that target and deliver value to our clients.

The Big Data Engineer Consultant empowers clients to turn information into action by gathering, analyzing and modeling client data to enable smarter decision making. The consultant uses a broad set of analytical tools and techniques to develop quantitative and qualitative business insights, and works with partners as necessary to integrate systems and data quickly and effectively, regardless of technical challenges or business environments.

Job Description

Do you have a pulse on new technologies and a desire to change the way business gets done? Do you want to implement emerging solutions for some of the most successful companies around? If you answered yes to these questions and you are passionate about helping clients effectively manage enormous amounts of data to generate knowledge and value, then we want to meet you.

Data Engineers at the Consultant level are responsible for the architecture, design and implementation of full-scale Hadoop and NoSQL solutions that include data acquisition, storage, transformation, security, data management and data analysis. A solid understanding of the infrastructure planning, scaling, design and operational considerations unique to Hadoop, NoSQL and other emerging data technologies is required. We are looking for candidates who have a broad set of technology skills across these areas and who can demonstrate the ability to identify and apply Hadoop and NoSQL solutions to data challenges across industries.

Basic Qualifications

  • Bachelor's degree in Computer Science, Engineering or Technical Science, or 3 years of IT/programming experience
  • Minimum 2 years of building and deploying Java applications in a Linux/Unix environment
  • Minimum 1 year of designing and building large-scale data loading, manipulation, processing, analysis, blending and exploration solutions using Hadoop/NoSQL technologies (e.g. HDFS, Hive, Sqoop, Flume, Spark, Kafka, HBase, Cassandra, MongoDB, etc.)
  • Minimum 1 year of architecting and organizing data at scale for Hadoop/NoSQL data stores
  • Minimum 1 year of coding in MapReduce (Java), Spark, Pig, Hadoop Streaming, HiveQL or Perl/Python/PHP for data analysis of production Hadoop/NoSQL applications

Preferred Skills

  • Minimum 2 years of designing and implementing relational data models and working with RDBMSs
  • Minimum 2 years of working with traditional as well as Big Data ETL tools
  • Minimum 2 years of experience designing and building REST web services
  • Designing and building statistical analysis models, machine learning models and other analytical models on large data sets using these technologies (e.g. R, MLlib, Mahout, Spark, GraphX)
  • Minimum 1 year of experience implementing large-scale cloud data solutions using AWS data services (e.g. EMR, Redshift)
  • 2+ years of hands-on experience designing, implementing and operationalizing production data solutions using emerging technologies such as the Hadoop ecosystem (MapReduce, Hive, HBase, Spark, Sqoop, Flume, Pig, Kafka, etc.), NoSQL stores (e.g. Cassandra, MongoDB), in-memory data technologies and data munging technologies
  • Architecting large scale Hadoop/NoSQL operational environments for production deployments
  • Designing and building different data access patterns from Hadoop/NoSQL data stores
  • Managing and modeling data using Hadoop and NoSQL data stores
  • Metadata management with Hadoop and NoSQL data in a hybrid environment




To apply for this job, please visit tinyurl.com.