
Software Engineer – Java/Hadoop

Santa Clara Valley, CA, 2016-10-19

Apple

The Maps Data Augmentation team is chartered to look at various data signals, derive insights regarding map performance, and use those insights to drive improvements in Maps. We are looking for talented engineers who are motivated by challenging problems and are well versed in big data technologies.

Key Qualifications

  • Demonstrated deep proficiency with big data processing technologies (Hadoop, HBase, Cassandra, and other NoSQL solutions)
  • Experience building data pipelines and analysis tools using Java (not limited to J2EE), Scala, or Python
  • Experience building large-scale server-side systems with distributed processing algorithms
  • Aptitude for independently learning new technologies
  • Strong problem-solving skills
  • Experience designing or implementing systems that work with external vendors' interfaces
  • Excellent oral and written English communication skills
  • Strong mathematics background

Description

Combining disparate signals, such as community feedback and other location data, to validate and enhance our maps is an opportunity that spans large-scale data processing, analytics, and visualization.

In this role you'll be responsible for building data processing pipelines and data analysis and visualization tools using the Hadoop/Hive/Cassandra/HBase class of technologies. You will work with internal and external teams, driving system architecture and design, analyzing data, and producing insights.

Successful candidates will have strong engineering and communication skills, as well as a belief that data-driven processes lead to great products.

Education

  • Bachelor's degree or equivalent in computer science or a related field.
  • Advanced degree preferred.

Additional Requirements

  • Familiarity with map data
  • Familiarity with real-time, distributed stream-processing systems
  • Development experience with Linux and iOS
  • The role is for a software engineer with 2-5 years of experience, preferably including 1-2 years on distributed data processing platforms such as Hadoop, Cassandra, or Solr/ElasticSearch.
  • The role will require hands-on requirements gathering, design, and coding tasks.

To apply for this job, please visit tinyurl.com.