We are looking for a highly motivated data engineer to join the IT/MES team to develop a highly distributed, scalable pipeline with varying throughput and latency demands. Our team is building the Manufacturing Execution System (MES) software from the ground up using open-source technologies. The ideal candidate is result-oriented, excited to learn new technologies, and interested in solving extremely challenging problems on aggressive timelines. You will work with our manufacturing data and design pipelines for both real-time transactional and analytical needs.
- Design and implement scalable data pipelines using live caching, message queues, and pub/sub methodologies, along with data models that support ingestion of real-time transactional data.
- You will also be responsible for building scalable storage and retrieval for large volumes of data, working with DevOps to optimize and tune the solution.
- Strong programming experience in at least one of the following languages: Java, Go, or Python
- 2+ years of experience designing, building, or operating production data pipelines using message queues such as Apache Kafka
- 1+ years of hands-on experience designing and administering stream processing with Storm, Spark, or Samza
- Hands-on experience with caches or in-memory stores such as Redis or Hazelcast
- Working knowledge of MQTT is a plus
- Experience engineering large-scale data infrastructure.
- Experience contributing to open-source projects and working with the open-source community.
- Evidence of exceptional ability
To apply for this job, please visit tinyurl.com.