Rakuten, Inc. is the largest ecommerce company in Japan and the third largest ecommerce marketplace worldwide. We seek to empower merchants to deliver Omotenashi, a hospitality mindset that helps sellers create lasting relationships with customers. The Japanese word Rakuten means optimism. Along with its global marketplaces, Rakuten supports an ever-expanding list of acquisitions and strategic investments in disruptive industries and growing markets.
At Rakuten, we offer competitive salaries, benefits, annual bonuses, a stocked kitchen (including catered lunch daily), and a dynamic office environment. We love investing in our people, and when it comes down to it, we think our entire team is pretty awesome. As a technology-focused company, we understand the importance of an energizing atmosphere that promotes collaboration and innovation.
Our Big Data team is responsible for delivering a large-scale data platform to Rakuten Business Units globally. This includes the implementation and maintenance of our Hadoop platform. We are looking for someone who can develop a platform strategy by creating architecture blueprints, validating designs, and providing recommendations on the enterprise platform's strategic roadmap. Experience designing and developing high-volume, real-time big data platforms is preferred.
- Administer, manage, and tune the Hadoop environment.
- Collaborate with the infrastructure team to coordinate OS-level patching and to identify and solve hardware-related issues.
- Perform cluster and node maintenance and health checks; automate job monitoring and assist the team in identifying the root cause of slow-performing jobs and queries.
- Capacity planning.
- Develop a strategy to automate management and deployment processes (DevOps).
- Deploy and manage all Hadoop platform components, and conduct platform upgrades.
- Work with development staff to ensure all components are ready for release and deployment.
- Work with Project Managers, Developers, and business staff to develop products, and manage and maintain those products on an ongoing basis.
- Operating system knowledge, including the ability to understand the interaction between applications and the operating system.
- Ability to provide recommendations and suggestions related to troubleshooting and performance tuning.
- Experience administering a reasonably sized Hadoop cluster (100+ nodes).
- Experience running, using, and troubleshooting the Apache big data stack, i.e. HDFS, Hive, HBase, Kafka, Pig, Oozie, YARN, Sqoop, Flume, etc.
- Ability to create infrastructure capacity plans based on quantitative and qualitative data.
- Experience implementing and managing security in a multi-tenant environment.
- Understanding of the networking stack from TCP/IP up.
- Good work ethic with extremely high standards for code quality and system reliability.
- Experience processing large amounts of structured and unstructured data with MapReduce.
- Experience with data movement and transformation technologies.
- Experience tuning and troubleshooting the JVM environment.
- Experience with data virtualization using PrestoDB, JBoss Teiid, or similar tools.
- Experience with SQL and relational databases, developing data extraction applications.
To Recruiting Agencies:
Recruiters and/or agencies must hold a valid contract for service and obtain approval from Rakuten's Global Human Resources Department on an individual-requisition basis in order to submit resumes. Rakuten will NOT bear any costs for any placement resulting from the receipt of an unsolicited resume, and Rakuten reserves the right to pursue and hire the candidate without any financial responsibility to the recruiter or agency.
To apply for this job please visit tinyurl.com.