Big Data Architect with Spark

Cognizant

Cognizant is always looking for top talent. We are searching for candidates to fill future needs within the business. This job posting represents potential future employment opportunities with Cognizant. Although the position is not currently available, we want to provide you with the opportunity to express your interest in future employment opportunities with Cognizant. If a job opportunity that you may be qualified for becomes available in the future, we will notify you. At that time you can determine whether you would like to apply for the specific open position. Thank you for your interest in Cognizant career opportunities.

Our Analytics Information Management Practice creates solutions that cover the entire lifecycle of information utilization, from ideation through implementation. At the outset, we offer consulting and development services to help our clients define their strategy and solution architecture. Then our teams deliver and manage data warehousing, business analytics and reporting applications that provide tangible business benefits.

• Minimum 12 years of solid IT consulting experience in data warehousing, operational data stores, and large-scale implementations

• Using Big Data technology and the customer's business requirements, design and document a comprehensive technical architecture.

• Analyze and document source system data from traditional sources (RDBMS) and newer data sources (web, machine-to-machine, geospatial, etc.)

• Using business SLAs and the technical architecture, calculate performance and volumetric requirements for infrastructure components

• Design an architecture using cloud and/or virtualization technology

• Minimum 3 years’ experience with the Hadoop suite of applications

• Solid understanding of Hadoop architectures

• Plan and execute a technology proof-of-concept (POC) using Big Data technology

• The architecture role will be responsible for developing Spark-based solutions to support near real-time data ingestion, analytics, and reporting (see the sketch after this list).

• Responsibilities include gathering requirements, designing a solution, determining the implementation effort, and possibly leading the team that implements the solution.

• Minimum 3 years’ experience developing analytic algorithms with two of the following languages: Python, Java, or Scala.

• Minimum 1 year designing analytic solutions using Spark and leading the development team to implement these solutions.

• Candidate should have an awareness of best practices for Apache Spark Development

• Apache Spark Development (Spark SQL, Spark Streaming, MLlib, GraphX, Zeppelin, HDFS, YARN, and NoSQL)

• Linux command-line proficiency, Bash shell scripting, Python, Java, Scala, SQL, machine learning, and Spark architecture
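
The sketch below illustrates the kind of near real-time Spark work described above, written in Scala with Structured Streaming. It is a minimal example under stated assumptions: the socket source, host, port, and object name are illustrative placeholders, and an actual pipeline for this role would more likely read from a source such as Kafka and write results to HDFS or a NoSQL store.

```scala
import org.apache.spark.sql.SparkSession

object StreamingIngestSketch {
  def main(args: Array[String]): Unit = {
    // Local session for illustration; a production job would run on YARN or another cluster manager.
    val spark = SparkSession.builder()
      .appName("near-real-time-ingest-sketch")
      .master("local[*]")
      .getOrCreate()

    import spark.implicits._

    // Assumed source: newline-delimited text events on a local socket (placeholder for Kafka, etc.).
    val events = spark.readStream
      .format("socket")
      .option("host", "localhost")
      .option("port", "9999")
      .load()

    // Tokenize each event and keep a running count per token, illustrating Spark SQL on a stream.
    val counts = events.as[String]
      .flatMap(_.split("\\s+"))
      .groupBy($"value")
      .count()

    // Write running counts to the console; a real job would target HDFS or a NoSQL store instead.
    val query = counts.writeStream
      .outputMode("complete")
      .format("console")
      .start()

    query.awaitTermination()
  }
}
```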


Preferred Skills:

• 14+ years of solid IT consulting experience in data warehousing, operational data stores, and large-scale implementations

• Excellent one-on-one communication and presentation skills, with the ability to convey technical information in a clear and unambiguous manner

• Experience working in a client delivery role in an onshore/offshore model

• Previous consulting experience

• Bachelor's degree


To apply for this job, please visit tinyurl.com.