Big Data Engineer

Endurance International Group

Small businesses are a critical part of communities and economies around the globe. More than 28 million small business owners operate in the US alone, providing critical services and driving employment. Unfortunately, the software that serves these hard-working professionals doesn't always solve their problems or grow with them. Our mission is to serve them better. Do you have what it takes to help small businesses grow? Are you passionate about customers? Do you love celebrating their success? If you answered yes, then this job is for you. Come be part of a new team within Endurance that is laser-focused on helping the world's small businesses thrive!

Ready to help make small businesses successful? Come join us…

Our Big Data team is responsible for the technical vision, architecture, design, and development of components for the next-generation platform, which will provide meaningful insights to our customers and business partners across the organization.

In this role, you will develop strategic designs and requirements for small systems or modules of large-scale systems. You will also guide and mentor junior developers in writing code and establishing workflow procedures, and conduct code reviews. This engineer will perform general application development activities, including unit testing, code deployment to the development environment, and technical documentation.

Responsibilities:
Participate in collaborative software and system design and development of the new platform.

Explore and evaluate new ideas and technologies.

Ensure conceptual and architectural integrity of the platform.

Work on large-scale, multi-tier big data platform engagements.

Be a mentor and role model to less experienced developers.

Required:
Bachelor's degree and fifteen years of related work experience

Five years' experience working with Scala, Java, Python, or other predictive modeling tools

Five years' experience in advanced math and statistics

Preferred:
Master's Degree in Computer Science, Math, Statistics or related field

Three years' experience working with Hadoop, a NoSQL database, or other big data infrastructure

Three years' experience actively engaged in data science or another research-oriented activity

Three years' experience designing and organizing large-scale data for a Hadoop/NoSQL data store

Three years of MapReduce coding, including Java, Python, Pig, Hadoop Streaming, and Hive, for data analysis of production applications

Three years of industry systems development and implementation experience, or a minimum of two years of data loading, acquisition, storage, transformation, and analysis

One year of experience using Hive, Impala, Sqoop, Kafka, Hue, or Spark

Ability to put the minimal system needed into production.

Experience with the following tasks using the above-mentioned emerging technologies:

Experience with Spark, Kafka, and Cassandra or another NoSQL data store

Polyglot development (5+ years/expert): Capable of developing in Java and Scala, with a good understanding of functional programming, SOLID principles, concurrency models, and modularization.

Development experience with at least one NoSQL database (2 years/skilled).

DevOps: Embraces the CI/CD model and always builds to ease consumption and monitoring of the system. Experience with sbt (or Maven) and Git preferred.


To apply for this job, please visit tinyurl.com.