Senior Data Engineer - IoT BigData Jobs

Senior Data Engineer


Wilmington, DE | Posted 2016-11-03

Capital One

Benjamin Franklin (18052), United States of America, Wilmington, Delaware


Do you want to work for a technology company that writes its own code, develops its own software, and builds its own products? We experiment and innovate, leveraging the latest technologies, engineer breakthrough customer experiences, and bring simplicity and humanity to banking. We make a difference for 65 million customers.

At Capital One, you'll be part of a big group of makers, breakers, doers and disruptors, who love to solve real problems and meet real customer needs. Capital One started as an information strategy company that specialized in credit cards, and we have become one of the most impactful and disruptive players in the industry. We have grown to see ourselves as a technology company in consumer finance, with great opportunities for software engineers who want to build innovative applications to give users smarter ways to save, transact, borrow and invest their money, as we seek to disrupt the industry again.

We are looking for bright, driven, and talented individuals to join our team of passionate and innovative software engineers. In this role, you’ll use your experience with Python/Java/Scala, Fast Data, Big Data, Streaming and Cloud technologies to build our next generation of Data capabilities.

The Job:
Collaborating as part of a cross-functional Agile team to create and enhance software that enables state-of-the-art, next-generation Big Data and Fast Data applications

Utilizing programming languages like Python, Java, and Scala, along with open source and cloud databases such as PostgreSQL, MongoDB, and Amazon Redshift

Developing and deploying distributed-computing Big Data applications using open source frameworks like Apache Spark, Flink, and Kafka on the AWS Cloud

Leveraging DevOps techniques and practices like Continuous Integration, Continuous Deployment, Test Automation, Build Automation, and Test-Driven Development to enable the rapid delivery of working code, utilizing tools like Jenkins, Maven, Nexus, Ansible, Terraform, Git, and Docker

Helping drive cross-team design and development through technical leadership and mentoring

Your interests:
Fearless. Big, undefined problems and petabytes don't frighten you. You can work at a tiny crack until you've broken open the whole nut.

You have a bias toward action, you try things, and sometimes you fail. Expect to tell us what you’ve shipped and what’s flopped.

You are passionate about finding refined solutions to complex DevOps challenges and helping the entire team meet its commitments.

You yearn to be a part of cutting-edge, high-profile projects and are motivated by delivering world-class solutions on an aggressive schedule.

You love learning new technologies and mentoring more junior developers.

Humor and fun are a natural part of your flow.


Basic Qualifications:
Bachelor’s Degree in Computer Science or military experience

At least 3 years of professional work experience programming in Python, Java or Scala

Experience with, or a desire to build skills in, Cassandra, Accumulo, HBase, Spark, Hadoop, HDFS, Avro, MongoDB, Redshift, Lambda, or PostgreSQL

Preferred Qualifications:
Master's Degree in Computer Science, Computer Engineering, Data Science or related discipline

2+ years of experience with the Hadoop Stack

2+ years of experience with distributed computing frameworks such as Apache Spark or Hadoop

2+ years of experience with cloud computing (AWS a plus)

Experience with Elasticsearch, PostgreSQL, Ansible, Flask, Docker, Cassandra, Jenkins, Spark, Git, Stash

Familiarity with Agile engineering practices

Capital One will consider sponsoring a new qualified applicant for employment authorization for this position.

To apply for this job please visit tinyurl.com.