- 4+ years of Hadoop solution architecture experience in Java environment.
- Strong ability to drive complex technical solutions deployed at an enterprise level; ability to drive big data technology adoption and change through education and partnership with stakeholders
- Ability to negotiate, resolve, and prioritize complex issues and provide explanations and information to others on difficult issues
- Ability to estimate and organize one's own work to meet or negotiate deadlines; lead or facilitate the creation of estimates
- Self-starter who can work with minimal guidance
- Strong communication skills

Qualifications
- Demonstrated experience in architecture, engineering and implementation of enterprise-grade production big data use cases
- Extensive knowledge about Hadoop Architecture and HDFS
- Extensive hands-on experience with MapReduce, Hive, Pig, Java, HBase, Solr, and the following Hadoop ecosystem products: Sqoop, Flume, Oozie, Storm, Spark, and/or Kafka
- Hands-on delivery experience with popular Hadoop distribution platforms such as Cloudera, Hortonworks, or MapR
- Hands-on experience in the architectural design and solution implementation of large-scale big data use cases
- Understanding of industry patterns for big data solutions
- Demonstrated experience working with vendors and user communities to research and test new technologies that enhance the technical capabilities of the existing Hadoop cluster
- Demonstrated experience working with Hadoop architects and big data users to implement new Hadoop ecosystem technologies that support a multi-tenant cluster
All your information will be kept confidential according to EEO guidelines.
To apply for this job, please visit tinyurl.com.