Data Engineer - IoT BigData Jobs

Data Engineer

San Francisco, CA 94103 (South of Market area) - 2016-11-05

Change.org

Change.org is the world's largest technology platform for social change. Our goal is to empower people everywhere to start campaigns around the issues they care about, mobilize others, and work with decision makers to drive solutions. We’re also an innovative business – a “social enterprise” and a certified B Corporation, with a business model designed to support positive social impact (more about B Corps: www.bcorporation.net).

Over 150 million people have started and signed petitions, and our users win nearly one victory per hour, including strengthening hate crime legislation in South Africa; fighting corruption in Indonesia, Italy, and Brazil; ending the ban on gay Boy Scouts in the United States; and big wins for women’s rights in India. And we’re just getting started.

Here’s a small snapshot of some of the victories our users have had: https://youtu.be/h4O81mgK85E

We love serving our incredible users, and we love our staff too. We show it with very competitive salaries, five weeks of vacation, robust maternity and parental leave, an amazing culture, free language training (if you want it), and a high impact, low-ego team that can’t wait to learn from you and teach you what they know.

About the role:
We’re seeking a Data Engineer who will own the vision and execution of projects from start to finish. The ideal candidate is passionate and motivated to have an enormous impact on a company that is quite literally helping to change the world. This individual will be flexible and interested in learning new skills, tools, and technologies as necessary. Given our small team size and the scope of our global mission, we must choose the right tool for each job. At any given time, you may find team members working with one or more of the following: Redshift, Cassandra, AWS (Elastic MapReduce, SimpleWorkflow, EC2, etc.), Spark, Redis, and Dropwizard, driven by Ruby, Python, Java, Go, and JavaScript.

We encourage our team members to attend and speak at conferences; our team has spoken at Strata, AWS re:Invent, and DataWeek. Depending upon your skills and experience, and what you bring to the table overall, we are also open to considering a Senior Data Engineer role.

Here's what you'll do as part of our team:

    • Get in early and help us set technical direction for the Data Science team at a company with tens of millions of users and big ambitions.
    • Build data and computational infrastructure that simultaneously handles large-scale batch analytics and real-time streaming analytics, and performs machine learning training and prediction to serve millions of users.
    • Own the architecture, delivery, and evolution of interrelated big data systems.
    • Follow good engineering practices such as architectural design, unit testing, and test-driven development.
    • Code, write, and converse daily.
    • Work with the infrastructure team to ensure that all the required monitoring, exception handling, and fault tolerance are in place to maximize the robustness of the data architecture.
    • Build fault-tolerant distributed machine learning workflows.
    • Develop a flexible event-tracking and querying pipeline for experiment analysis and analytics.
    • Contribute to moving to a multi-datacenter, resilient service-oriented architecture with autoscaling.
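
To make the second bullet concrete: one common way to serve both batch and streaming analytics from a single codebase is to write the transformation logic once and expose it through both a batch path and a streaming path. The sketch below is purely illustrative and not from the posting; the function names (`parse_event`, `run_batch`, `run_stream`) and the event fields are hypothetical.

```python
# Illustrative sketch only: a single event transform shared by batch and
# streaming paths, so the analytics logic is written exactly once.
import json
from typing import Iterable, Iterator


def parse_event(raw: str) -> dict:
    """Normalize one raw JSON event record into an analytics-ready dict."""
    event = json.loads(raw)
    return {
        "user_id": event.get("user_id"),
        "action": event.get("action", "unknown"),
        "ts": event.get("ts"),
    }


def run_batch(lines: Iterable[str]) -> list:
    """Batch path: materialize a full file or table extract at once."""
    return [parse_event(line) for line in lines]


def run_stream(lines: Iterable[str]) -> Iterator[dict]:
    """Streaming path: yield events one at a time as they arrive."""
    for line in lines:
        yield parse_event(line)
```

In a real deployment the batch path might be an EMR/Spark job over historical data and the streaming path a consumer on a live event feed, but both would call the same transform, which keeps the two analytics views consistent.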

And here are the skills & experience we hope you have:

    • The ability to explain deeply technical concepts, algorithms, and products to colleagues of varying technical levels is a must-have.
    • 3+ years industry experience in developing production software in languages such as Ruby, Java, Python and query languages such as SQL and CQL.
    • 2+ years of industry experience in working independently within a cross-functional engineering team.
    • 1+ years of experience in developing a data pipeline with custom ETL that accommodates batch and streaming analytics.
    • 1+ years of experience in using distributed computing architectures such as AWS products (e.g. EC2, Redshift, EMR), Hadoop, and Spark, and effective use of map-reduce, SQL, and Cassandra to solve big-data problems.
    • 1+ years of experience in understanding and optimizing dimensional warehouse data models.
    • Passion for converting data science innovations and algorithms into data products.

This is a full-time opportunity, based in San Francisco.

All qualified applicants will receive consideration for employment without regard to race, color, national origin, religion, sexual orientation, gender, gender identity, age, physical disability, or length of time spent unemployed.

We are working for a world where no one is powerless, and where creating change is a part of everyday life. We're just getting started, and we hope you'll join us.

Apply for this job

To apply for this job please visit tinyurl.com.