JPMorgan Chase & Co.
(NYSE: JPM) is a leading global financial services firm with assets of $2 trillion and operations in more than 60 countries. The firm is a leader in investment banking, financial services for consumers and small businesses, commercial banking, financial transaction processing, and asset management.
JP Morgan Intelligent Solutions
(JPMIS) is a new group considering ways to transform the firm by leveraging modern technology and turning JPMC proprietary data assets into opportunities. Protecting and managing intellectual property effectively as well as utilizing it to develop solutions that are both customized and scalable will enable JPMC to create additional shareholder value.
JPMIS is currently driving a firm-wide initiative to modernize and standardize the way the bank leverages its data through cutting-edge technology and analytics, and this role will be plugged into this highly visible effort immediately!
The Data Architect will provide expertise and thought leadership in the design and standardization of a modern big data lifecycle for the firm. This includes sourcing into a data reservoir, managing complex access controls and encryption methodologies, developing ETL processes, integrating modern end-user software, and streamlining data consumption by data scientists, reporting teams and operational applications.
The individual should have an exceptional ability to evangelize and educate data scientists, data management teams, and technology leaders across different lines of business on the big data lifecycle design. This candidate will be responsible for developing the roadmap and design, and for driving the implementation of a well-defined, controlled, and unified view of how data is ingested, stored, secured, processed, tracked, monitored, and consumed in a multi-tenant big data environment.
The ideal candidate will have hands-on experience in data architecture and data management principles, as well as hands-on experience across the full information lifecycle for large-scale big data environments, from data origination to data consumption.
Specific responsibilities include:
Take the lead on designing data architecture solutions in compliance with firm-wide data management principles and security controls for a big data environment consisting of Spark, HBase, Kafka, Impala, etc.
Determine the best mix of big data software solutions to streamline data flow for different business use cases and data discovery across the firm while maintaining strict access controls
Collaborate with technologists and data scientists to set up test plans and use cases to evaluate new Hadoop-native technologies for ETL, visualization, data science, discovery, etc.
Design the connection between the big data environments and metadata management/security platform to ensure efficient capture and use of metadata to govern data flow patterns and access controls across the firm
Analyze across multiple data domains and define strategies to reduce data redundancy and improve data availability and accessibility, partnering with technical teams to deliver a maintainable and reusable data architecture
Govern data decisions related to data model design, data access patterns, and data services.
Maintain the end-to-end vision of the data flow diagram and develop logical data models into one or more physical data repositories.
Document logical data integration (ETL) strategies for data flows between disparate source/target systems for structured and unstructured data into a common data reservoir and the enterprise information repositories
Develop and maintain controls on data quality, interoperability and sources to effectively manage corporate risk
Define processes for the effective, integrated introduction of new data and new technology
Ability to think through multiple alternatives and select the best possible solution to meet tactical and strategic business needs.
BS Degree in Computer Science or Engineering.
10+ years of hands-on experience in data management, data architecture, application architecture, and data development.
3+ years of experience working with modern big data systems for data management, data architecture, security and access controls
Experience managing data transformations using Spark and/or NiFi and working with data scientists leveraging the Spark machine learning libraries
Proficiency working within the Hadoop platform, including Kafka, Spark, HBase, Impala, Hive, and HDFS, in multi-tenant environments and integrating with other third-party and custom software solutions
Solid expertise in data technologies, e.g., data warehousing, ETL, MDM, DQ, BI, and analytical tools. Extensive experience in metadata management and data quality processes
Experience in integrating complex, corporate-wide processes and data models
Hands-on experience with dimensional modeling techniques and creation of logical and physical data models (entity relationship modeling, Erwin diagrams, etc.)
Relevant experience implementing data quality and master data management programs
Experience in information-intensive industries or digitally advanced enterprises
Experience in full life cycle architectural guidance
Advanced analytical thinking and problem solving skills
Strong interpersonal skills
JPMorgan Chase is an equal opportunity and affirmative action employer, Disability/Veteran.
US-NY-New York-5 Manhattan West / 03354