Data Engineer – Virtual (U.S.)


Teradata


Join Teradata's Ecosystem Integration COE, the world's leader in designing, building, and deploying highly effective, business-focused analytic solutions that leverage emerging platform-interconnect technologies to bridge traditional data warehousing and Hadoop for our customers in the Americas. You will work closely with a highly motivated team that stays agile in the ever-changing world of gathering data and dispensing information across disparate platforms, and you will help define best practices for moving data among traditional data warehouses, the cloud, and Hadoop. The solutions we develop ensure governance, performance, and data management for analytics and Big Data processing. The Data Engineer will consult with customers both onsite and remotely.

Key Responsibilities:

Candidates need an understanding of Hadoop and open-source Big Data components such as YARN, HDFS, Hive, Presto, Spark, Kafka, Storm, HBase/Cassandra, and Elasticsearch/Solr, as well as systems-management tools such as Ambari, Cloudera Manager, Nagios, and Ganglia.

Candidates must have 1 to 3 years of software development experience (at least 1 year with Java or Python). Ideally, candidates will have database experience in data modeling and analysis, data ingestion, transformation, and data management. They will have some working knowledge of system and data security, governance, and operational and business-level metadata management. They will also have hands-on experience with at least one major Hadoop distribution, such as Cloudera or Hortonworks.

You will grow into the following responsibilities:

• Consult on Data Integration design, installation and optimization with IT organizations

• Design, and develop automated test cases that verify solution feasibility and interoperability, including performance assessments

• Recommend and design integration with third-party systems and network management tools

• Research new technologies and startups in the big data space

• Design and recommend approaches for big data ingestion strategies from any data source or type, including use of leading third party tools and their integration with overall metadata management

• Deliver designs and consultation on data transformation technology choices

• Deliver designs and consultation on data export and synchronization

• Apply knowledge of emerging technologies to define new solutions

• Consult and advise Teradata solution architects and consultants on enterprise-wide analytics solutions involving the movement of data between disparate platforms


Basic Requirements:

1 to 3 years' experience with at least 5 of the following:

• Master's Degree or Bachelor’s Degree in Computer Science or related discipline (or equivalent work experience)

• Experience working in a team environment

• Experience in writing business and technology focused documentation

• Presentation experience

• Experience working with and developing on Linux

• Experience with SQL and at least one major RDBMS

• Experience building and testing integrated solutions with languages like Python, Java, or bash

• Experience with a major Hadoop distribution and Hive

Preferred Requirements:

• Hands-on experience with Cloudera 5.4+, Hortonworks 2.3+, or MapR 5.0+

• Knowledge of system-wide bottleneck analysis, including network analysis and performance tuning

• Experience with ETL solutions on Hadoop

• Experience in data modeling

• Experience with MapReduce and Spark solution design and development

• Experience developing Java code in a big data environment

• Experience with Ambari and/or Cloudera Enterprise Manager and Director

• Experience with Tableau and/or other Business Intelligence tools

• Knowledge and experience with public cloud providers, specifically AWS and Azure

Nice to Have:

• Experience with Teradata IDW implementations

• Experience with Elasticsearch, Solr, or Lucene

• Experience with DevOps or CI/CD technologies

• Experience with information security practices, principles, and tools

Our total compensation approach includes a competitive base salary, 401(k), strong work/family programs, and medical, dental, and disability coverage.

Teradata is an Equal Opportunity/Affirmative Action Employer and commits to hiring returning veterans.

To apply for this job please visit tinyurl.com.