As part of its Data strategy, AXA IM has built a very successful Data Platform, organised as multiple federated squads following a "data mesh" paradigm. About 15 product-oriented squads from different domains (Data Referential, Responsible Investment, Quant, Finance, Digital, Front Office, Operations, Trading, etc.) leverage the datasets produced and maintained by the data domains, and offer or build their own specialised datasets. The platform grows richer with the additional value each data domain contributes. It offers its customers various distribution channels (APIs, Delta Sharing, files, …) as well as a data analytics and data science platform.
The Data Platform is at the heart of all of AXA IM's key strategic projects, from our Responsible Investment programme to our Equity Renaissance and our back-office transformation.
The platform still has many key components to build. It is an exciting time to join us, whether to build the platform itself or to contribute to our ambitious programmes.
As a Senior Data Engineer, you will be responsible for implementing the quantitative tools used by Portfolio Managers and Operations teams to support investment decisions and improve their access to internal and external data. You will be part of the Data Platform tech team responsible for designing, implementing and running cloud-native data solutions. You will build industrialised patterns and core infrastructure components used by your squad and leveraged by others.
The quantitative tools include:
Integrating quant libraries into an industrialised framework
Building front ends for specific tools that help Portfolio Managers in their decision process
Building our Signal framework, used by Quants and Portfolio Managers to construct our next generation of funds
Integrating ML models into our MLOps framework
The squad currently comprises a Data Engineer, three front-end developers and a DevOps Engineer.
As a Senior Data Engineer, you will:
Lead our engineering team across back-end data ingestion, quant library integration and front-end development for our end users. You will also build APIs and tools for extracting data easily.
Ingest external data from suppliers such as Markit, Bloomberg, …
Industrialise our ingestion patterns on our Azure Cloud / Databricks solution, using state-of-the-art technologies (Spark 3, Azure Cognitive Services, Azure Event Hub, Docker, Azure Pipelines, Azure Data Factory, Scala, Python)
Implement business data strategies and build new datasets from internal and external data
Optimise cluster performance, usage and cost (FinOps)
Design, build and maintain common patterns such as CI/CD pipelines, shared libraries (data pipeline development, data quality, data lineage) and shared services (REST APIs, data visualisation, monitoring, scheduling)
Support a community of data engineers and data scientists by understanding their problems, answering their questions and helping them build their solutions on the Data Platform
Contribute to the build of our Data Science platform and our MLOps programme
Contribute to the onboarding of data from third-party providers such as Bloomberg and from internal applications
Design and maintain APIs
Build a research environment for our Quants
Education / Qualifications / Key experiences
Master's degree in Computer Science, Engineering, Mathematics or a related field
Hands-on experience leading large-scale global data warehousing and analytics projects
5+ years of experience implementing and tuning data lake / Hadoop / Spark platforms, BI, ETL, …
Experience in defining, implementing and scaling Data Modelling or API Design practices
Experience delivering data platforms in Azure Cloud
Experience working with senior stakeholders
Experience with quantitative tools
Strong experience in the design and implementation of several of the following:
Master & Reference Data Management
Data Quality Management
Data Analytics and BI
Fluent English, spoken and written
Technical skills for a Data Engineer
Spark (preferably on Databricks)
Scala or Python (preferably both)
Cloud computing practices: Infrastructure as Code, security (preferably on Azure)
Experience working with data in various formats (e.g. Avro, Parquet, JSON, CSV)
Experience in designing and building APIs + API management is a plus
Git + CI/CD
Knowledge of JavaScript frameworks (Angular, React)
Optional technical skills for a Data Engineer
Azure Cloud - Kubernetes, Data Factory, Azure Functions, Cognitive Services, Event Hub, Purview, Web Apps
Experience with bitemporal data
Soft skills and competencies:
Ability to think strategically about business, product, and technical challenges in an enterprise environment
Ability to collaborate effectively across organizations with a wide variety of stakeholders: business, compliance, developers, data managers, project managers
Strong problem-solving skills
Self-motivated, proactive and able to work in a complex organization
Leadership, influence and conflict resolution
Team player in a multi-cultural work environment, able to see the big picture as well as deep dive into details when necessary.