Solution Architect - Big Data Space

  • DOE
  • Charlotte, NC, United States
  • Permanent, Full-time
  • Synechron Inc.
  • Oct 15, 2018

Synechron is seeking a Solution Architect - Big Data Space with data management and strategy development experience.

  • We are looking for a Solutions Architect who has a passion for data and the new technology patterns that support business insight and analytics.
  • Needs experience implementing data solutions across a wide variety of tools and technologies.
  • Will be responsible for partnering with leadership to understand and interpret solution requirements.
  • Will ensure our delivery of Big Data solutions is first class, satisfying the demanding client base with whom we win engagements to deliver Big Data solutions.

Job Responsibilities:

  • Work with the business to understand requirements and use cases.
  • Create technical and business solutions architectures (logical and physical).
  • Resolve questions during design and implementation of architecture.
  • Evaluate tools for use case fit, perform vendor/tool comparisons and present recommendations.
  • Contribute to the capability roadmaps for data platforms.
  • Review schemas, data models and data architecture for Hadoop environments.
  • Prototype solutions for specific use cases.
  • Advise peers and business partners on fit-for-use and technical complexities.
  • Partner with other technical leaders for solution alignment with strategy and standards.
  • Proven leadership, including leadership within the Financial Services domain.
  • Gravitas, comfort, and the ability to present and pitch to senior technology managers at major financial institutions.
  • Appreciation of the business domains in Financial Services that use Big Data, including Risk, Regulation, Finance, Compliance, Fraud.
  • Team building, hiring the best of the available talent pool, growing talent from within.
  • Ability to pitch and manage multiple projects.
  • Consulting background highly beneficial.

 

Required Skills:

  • Hands-on experience with Hadoop, Hive, Sqoop, Splunk, Storm, Spark, Kafka, and HBase (at least 2 years).
  • Experience with end-to-end solution architecture for data capabilities, including ETL staging and load for legacy systems.
  • Experience with test-driven development and SCM tools.
  • Fluent understanding of best practices for building Data Lake and analytical architectures on Hadoop.
  • Strong scripting / programming background (Unix, Python preferred)
  • Strong SQL experience with the ability to develop, tune and debug complex SQL applications.
  • Expertise in schema design and data modeling, with a proven ability to work with complex data.
  • Experience in real time and batch data ingestion.
  • Proven experience in working in large environments such as RDBMS, EDW, NoSQL, etc.
  • Understanding of security, encryption, and masking using technologies including Kerberos, MapR tickets, Vormetric, and Voltage.
  • Shared HDFS storage for all Data Marts.
  • S3 for object data stores where applicable, especially for supporting OLTP or CRUD application capabilities.
  • Apache Parquet or ORC for columnar data stores.
  • Apache Spark for data processing – Scala, Python, or Java.
  • Apache Spark for data access through Spark SQL and DataFrames (see the sketch after this list).
  • Dedicated Spark clusters for each Data Mart. The capacity of the cluster is sized based on the usage of the Data Mart.
  • Oracle or SQL Server for legacy Data Marts.
  • Metadata Management, and Data Governance within the Big Data / NoSQL domain.
  • HTML5 for UI, OBIEE for standard reporting, Tableau for self-service reporting.
  • Education: Bachelor's degree in computer science or a related field.
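
As a rough illustration of the Spark SQL / DataFrame access pattern over Parquet data on shared HDFS described in the list above, a minimal PySpark sketch; the application name, HDFS path, view name, and column names are hypothetical:

    from pyspark.sql import SparkSession

    # Start a Spark session; on a dedicated Data Mart cluster this would
    # attach to the cluster's resource manager rather than run locally.
    spark = SparkSession.builder.appName("data-mart-access-sketch").getOrCreate()

    # Read a columnar (Parquet) data set from shared HDFS storage.
    trades = spark.read.parquet("hdfs:///data_marts/risk/trades")

    # DataFrame API: filter and aggregate.
    settled_exposure = (
        trades.filter(trades.status == "SETTLED")
              .groupBy("trade_date", "counterparty")
              .sum("notional")
    )

    # The same access expressed through Spark SQL.
    trades.createOrReplaceTempView("trades")
    settled_exposure_sql = spark.sql("""
        SELECT trade_date, counterparty, SUM(notional) AS total_notional
        FROM trades
        WHERE status = 'SETTLED'
        GROUP BY trade_date, counterparty
    """)

    settled_exposure_sql.show()

The same read could point at an S3 object store path (for example, an s3a:// URI) or an ORC data set where applicable.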