Data Engineer – Cyber Security
First a bit about ANZ
At ANZ, everything we do boils down to ‘why’ - our purpose - to shape a world where people and communities thrive. We're just as focused on seeing our people thrive as we are our customers, and we'll give you every opportunity to develop your career.
We are responding faster to changing customer requirements, focusing on the things that matter the most, energising our people, eliminating waste and reducing bureaucracy.
ANZ has started to move to a new way of working, leveraging agile practices. To understand more about this new way of working and whether this role is right for you, we strongly encourage you to take a look at The ANZ Way Vimeo channel, where you’ll find The ANZ Way animation and the New Ways of Working animation.
The Cyber Data Services squad is responsible for simplifying the delivery and movement of security event data within the Security Domain by:
- Building and streamlining the deployment of data ingestion pipelines using CI/CD and automation toolsets as well as reusable code components;
- Increasing the visibility of data flows and introducing appropriate monitoring and alerting functionality to maintain service availability;
- Presenting data to our internal customers in a unified manner to facilitate the consistency of data sources across downstream systems;
- Providing a platform for our internal customers to perform threat detections on inflight data.
As an Engineer in Cyber Defence, you will help design, build, test and support applications and/or underlying bank infrastructure, working closely with squad members to ensure outcomes meet customer expectations.
What you bring to Cyber Data Services as an Engineer - required skills:
- Solid experience working in both structured and unstructured data environments;
- Skills in the design and development of high-throughput data pipelines (NiFi, Flume, Kafka);
- Proficiency in at least two of the following languages - Java, Scala, SparkSQL, Shell Scripting;
- A demonstrated understanding of good software engineering practices, including CI/CD, automated testing and reliability engineering;
- Exposure to configuration management and automation tools such as Puppet and Ansible;
- A solid understanding of UNIX/Linux (RHEL), including managing and administering software deployments on it;
- Some experience building and maintaining big data platforms using Hadoop software distributions (advantageous);
- Experience using containerisation technologies (Kubernetes, Docker).
At ANZ we aim to create an inclusive environment where employee differences such as gender, age, culture, disability, sexual orientation, family and caring responsibilities and religion are valued and supported. We work flexibly at ANZ - talk to us and let us know how this role can be flexible for you. #GD4.2