Do you consider yourself one of the most talented technicians in the market?
If so, there's a tech-centric Hedge Fund that would love to have a conversation. They're continuously disrupting the industry through heavy investment in technology. All of their systems are proprietary and built by their Data Engineers, who have developed an integrated platform that underpins the entire business and encourages creative problem-solving.
They’re looking to add a Senior Data Engineer to their high-performing, global team, which is responsible for building out and maintaining thousands of complex data ingestion processes that fuel investment decisions and support critical operations. The Data Engineering team's projects and responsibilities are diverse. The right fit for each Data Engineer, in terms of skillset and aspirations, is assessed throughout the interview process, giving you the flexibility to influence where you’ll be placed and the work you’ll be doing (e.g. working closely with the data itself, or heavy engineering for platform build-outs).
Responsibilities Include:
- Building easily supportable data ingestion pipelines, platforms and systems.
- Digging into and exploring data; identifying features and issues, and communicating them to others clearly and concisely.
- Standardising and developing ingestion methodologies, and promoting those standards to other teams across the firm in the form of tools and libraries.
- Ensuring that systems are highly effective for downstream teams, as well as reliable, scalable and flexible over the long term as business needs grow and change.
- Supporting, monitoring and improving existing systems and ingestion pipelines.
The Ideal Candidate Has:
- Broad technical knowledge, particularly with respect to the processing and exploration of data and an affinity for learning about new technologies.
- A passion for technology, automation, and continual improvement, with a track record of identifying high value automation opportunities.
- Experience with coding in Python / C# / Scala / Java / Go or equivalent.
- Experience working with a variety of data storage and manipulation tools, such as SQL, Pandas, Elasticsearch & Kibana, and Snowflake.
- Experience with containerisation and orchestration technologies like Docker / Kubernetes / Helm / Flux.
- Experience with ETL/ELT technologies such as Airflow / Argo / Dagster / Spark / Hive.