Principal Software Engineer – Big Data (Hadoop)

San Francisco, CA 94107 (South of Market area) – Posted 2016-10-14

Splunk

Principal Software Engineer – Big Data

Responsibilities:
Build high-performance and reliable data transport and processing mechanisms (a brief sketch follows this list)

Code and test complex system components

Own the overall system architecture, scalability, reliability, and performance

Bring an aptitude for simple, elegant design
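As a rough illustration of the data transport work described above, here is a minimal sketch of a reliable send path using the Apache Kafka producer API in Java. The broker address, topic name, and payload are placeholders, and the durability settings shown are one common choice, not a prescription from this posting.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class ReliableEventSender {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker address
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        // Durability settings typical of a reliable transport path:
        // wait for all in-sync replicas, retry transient failures, avoid duplicates.
        props.put("acks", "all");
        props.put("retries", "3");
        props.put("enable.idempotence", "true");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            ProducerRecord<String, String> record =
                new ProducerRecord<>("events", "host-01", "{\"status\":\"ok\"}"); // placeholder topic/payload
            // Send asynchronously; the callback surfaces delivery failures for logging or alerting.
            producer.send(record, (metadata, exception) -> {
                if (exception != null) {
                    System.err.println("Delivery failed: " + exception.getMessage());
                } else {
                    System.out.printf("Delivered to %s-%d@%d%n",
                        metadata.topic(), metadata.partition(), metadata.offset());
                }
            });
            producer.flush();
        }
    }
}
```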

Requirements:
10+ years of relevant experience

Expert in writing large-scale distributed systems in C/C++ and/or Java.

A deep understanding of one or more of the following: Distributed Systems, Messaging Systems, Database Systems, NoSQL system implementation

Big Data engine internals (storage): indexing, access methods, locking, caching, transaction processing, replication, backup/restore, buffer management

Big Data engine internals (query processing): query compilation, optimization, execution, parallel execution

Strong background in file and storage systems, networks, the JVM, and their performance

Experience and knowledge of some of the following open source systems: Apache Hadoop, Apache Kafka, Apache Spark, Apache Flink (see the sketch after this list)

Experience delivering and operating large-scale, highly available Big Data distributed systems (e.g., Splunk, Hadoop, Kafka, Flink, Spark)
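To illustrate the kind of parallel execution experience listed above, here is a minimal word-count job on one of the named systems (Apache Spark), written in Java against the RDD API. The master setting, input path, and output path are placeholders; a real deployment would be submitted to a cluster rather than run locally.

```java
import java.util.Arrays;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import scala.Tuple2;

public class EventCount {
    public static void main(String[] args) {
        // "local[*]" is a placeholder master for local testing only.
        SparkConf conf = new SparkConf().setAppName("EventCount").setMaster("local[*]");
        try (JavaSparkContext sc = new JavaSparkContext(conf)) {
            JavaRDD<String> lines = sc.textFile("hdfs:///data/events/*.log"); // placeholder input path
            // Each stage below is split into tasks that run in parallel across partitions.
            JavaPairRDD<String, Integer> counts = lines
                .flatMap(line -> Arrays.asList(line.split("\\s+")).iterator())
                .mapToPair(word -> new Tuple2<>(word, 1))
                .reduceByKey(Integer::sum);
            counts.saveAsTextFile("hdfs:///out/event-counts"); // placeholder output path
        }
    }
}
```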

Education:
Bachelor's in Computer Science or equivalent experience; Master's or PhD a plus.

To apply for this job, please visit tinyurl.com.