About us:
We are proudly part of a privately owned global group of insurance and online comparison businesses, with approximately 50 offices and 10,000 employees. In Australia, the Group includes the trusted and well-known brands Budget Direct, Ozicare, Chasing Cars, iSelect and Compare the Market. In late 2024, iSelect and Compare the Market formed a new Aggregation business within the Group, which now supports millions of Australians in comparing and buying personal finance and household products such as insurance, energy, and loans.
How you fit:
The DataOps Engineer is responsible for automating, monitoring, and optimizing data pipelines and infrastructure to ensure efficient, scalable, and reliable data workflows. This role focuses on DataOps best practices, CI/CD automation, cloud infrastructure, observability, and security, enabling seamless data delivery across the organization.
The ideal candidate has expertise in DevOps, cloud data platforms, infrastructure automation, and data pipeline orchestration. They will work closely with Data Engineers, Data Scientists, Platform Engineers, and Security teams to improve data availability, governance, and operational efficiency.
The primary objectives of the role are:
- Streamlining data pipeline deployment through CI/CD
- Ensuring observability, performance, and security in cloud data environments
- Automating infrastructure management and data workflow orchestration
- Driving best practices for DataOps, governance, and cost optimization
What you do:
Data Pipeline Automation & Orchestration
- Design, implement, and maintain CI/CD pipelines for automated data workflow deployment.
- Develop infrastructure-as-code (IaC) templates for provisioning cloud-based data environments/resources.
- Automate ETL/ELT workflow management using orchestration tools like Apache Airflow, Prefect, or AWS Step Functions.
- Optimize data ingestion, transformation, and processing pipelines for scalability and efficiency.
Cloud Infrastructure & Security
- Manage data infrastructure in AWS, Azure, or GCP, ensuring high availability and security.
- Implement role-based access control (RBAC), encryption, and data security best practices.
- Collaborate with Security teams to ensure compliance with industry regulations (GDPR, HIPAA, SOC2).
- Monitor cloud costs and optimize resource usage for storage, compute, and networking.
Monitoring, Observability & Incident Response
- Develop real-time monitoring dashboards and alerts for data pipeline performance and failures.
- Use observability tools such as Datadog, Prometheus, Grafana, and CloudWatch to track latency, errors, and throughput.
- Establish incident response workflows and automated rollback strategies for failures.
- Work with Site Reliability Engineers (SREs) and DevOps teams to maintain high system uptime and resilience.
DataOps & DevOps Best Practices
- Implement version control, automated testing, and release management for data pipelines.
- Promote DataOps methodologies to improve collaboration between Data Engineering, IT, and Business teams.
- Ensure proper data governance, metadata tracking, and auditability in data workflows.
- Optimize data processing efficiency.
Collaboration & Stakeholder Engagement
- Work closely with Data Engineers, Data Scientists, and Platform Engineers to improve data workflows.
- Act as a DataOps champion, providing guidance on best practices for pipeline reliability and security.
- Support business teams by ensuring data availability, accessibility, and lineage tracking.
What you need:
Qualifications:
- Bachelor’s or Master’s degree in Computer Science, Data Engineering, Cloud Computing, or a related field, or equivalent relevant experience.
Experience & Skills:
- 3+ years of experience in DataOps, Data Engineering, or Cloud Infrastructure roles.
- Experience with CI/CD tools (GitHub Actions, Jenkins, Azure DevOps, etc.).
- Proficiency in orchestration tools (Apache Airflow, Prefect, Dagster, AWS Step Functions, etc.).
- Strong understanding of ETL/ELT pipelines, workflow scheduling, and automation.
- Hands-on experience with cloud data platforms (AWS, Azure, GCP).
- Proficiency in Terraform for infrastructure-as-code (IaC).
- Experience with observability tools (Datadog, Prometheus, Grafana, CloudWatch, Splunk, etc.).
- Strong troubleshooting skills for data pipeline failures, latency, and performance optimization.
- Proficiency in Python, Bash, or PowerShell for automation and scripting.
- Experience with SQL and NoSQL databases (PostgreSQL, Snowflake, BigQuery, DynamoDB, etc.).
- Ability to work with cross-functional teams (Data Engineering, IT, Security, Business Intelligence).
- Strong documentation skills to maintain runbooks, architecture diagrams, and pipeline workflows.
What's in it for you:
Career Opportunities
- A vibrant and social community with annual celebrations, family fun days, and regular events.
- Enjoy flexible work arrangements, including the option of working from home one day each week or a 9-day fortnight.
- Enjoy additional leave days - ‘ME’ leave and ‘Volunteer Day’ leave.
- Employee discounts on Car, Home, and Travel insurance.
- Income protection insurance provided to support you in the event of non-work-related illness or injury.
If you are someone who fosters team spirit, is resilient, and aspires to keep learning and developing your skills to drive commercial outcomes, you will have tremendous success in helping us achieve our growth agenda. High energy and a love of having fun are also important to working in this energetic business.
Interested? We’d love to hear from you!