· Hadoop development and implementation.
· Loading from disparate data sets.
· Pre-processing using Hive and Pig.
· Designing, building, installing, configuring and supporting Hadoop.
· Translate complex functional and technical requirements into detailed design.
· Perform analysis of vast data stores and uncover insights.
· Maintain security and data privacy.
· Create scalable and high-performance web services for data tracking.
· High-speed querying.
· Managing and deploying HBase.
· Being a part of a POC effort to help build new Hadoop clusters.
· Test prototypes and oversee handover to operational teams.
· Propose best practices/standards.
· Bachelor’s degree in Computer Science/IT or equivalent work experience
· Experience working with, processing and managing large data sets is a must
· 2+ years of proven experience in Big Data Components/Frameworks (Hadoop, HBase, MapReduce, HDFS, Pig, Hive, Sqoop, Flume, Oozie, YARN)
· Knowledge and experience of System Development Life Cycle (SDLC), product development methodologies, database design concepts and system integration strategies.
· Experience with SQL and core Java/Python/Perl is required
· Good knowledge of HBase schema design and optimization
· Experience working on Hadoop projects
· Skilled in requirements gathering and analysis
· Solid SQL experience is a big advantage.
· Familiarity with MapR M7 is highly desirable
· Proficient communication skills, both verbal and written
· MS in Finance, Financial Engineering, Analytics, Mathematics, Computer Science, Statistics, Industrial Engineering, Operations Research, or a related field.
All your information will be kept confidential according to EEO guidelines.
To apply for this job, please visit tinyurl.com.