Principal Software Engineer – Big Data (Hadoop)
Build high-performance and reliable data transport mechanisms
Implement and optimize API-level integrations with Hadoop and related technologies for seamless, performant orchestration of data flow and data processing.
Leverage a variety of open source technologies to achieve a flexible architecture, and provide guidance to ensure customer success on best-of-breed deployments
Define and publish best practices
10+ years of relevant experience
Strong enterprise Java and C/C++ knowledge, and a track record of delivering innovative solutions to tough problems
Thrives on big data challenges: large volumes, high velocity, and extreme variability
Active contribution to open source Apache projects is a huge plus
Excellent understanding of clustering and distributed computing issues in enterprise environments
Experience with a variety of traditional data technologies, including RDBMS, ETL, data warehousing, BI, OLAP, and OLTP
Experience with new-generation technologies such as Hadoop, HBase, Hive, Kafka, Storm, Cassandra, and MongoDB
Experience with a range of scripting technologies, including Python and Bash
Experience with deployment, operations, and management issues for complex distributed data systems
Dedication to testing as an essential part of software engineering practice
Passionate about open source and building communities
Ability to implement elegant solutions to complex problems
Bachelor's in Computer Science or equivalent experience; Master's a plus