We are passionate about people making their own decisions about where and when they work. Our aim is to facilitate hybrid working (a mix of in the office and from home) where possible, supporting our people to be effective, empowered, and productive in achieving both their career and personal goals. Because we recognise that working flexibly means different things to different people, flexible working exists in many forms.
Who are we?
Humanforce's vision is to make work easier and life better for frontline and flexible workforces.
Humanforce provides the market-leading, best-in-one human capital management (HCM) suite for frontline and flexible workforces - without compromise. Our employee-centred, intelligent and compliant HCM suite is highly integrated and composable, and consists of Workforce Management (WFM), HR, Talent, Payroll, and Wellbeing.
Humanforce has built strong foundations since its founding in 2002. We serve over 2,300 customers, with almost 1 million employees under management, in 30+ countries and across a wide range of industries, including aged, child and health care; education; hospitality; retail; local government and more. Today, we have offices across Australia, New Zealand, the United Kingdom, North America and the Philippines.
Customers choose Humanforce because we enable them to deliver an exceptional employee experience, build a compelling employee value proposition, and connect the flow of the world's talent with the growth, productivity and efficiency objectives of frontline and flexible workforces.
Who you are
You'll help build and run the data infrastructure for our real-time, customer-facing analytics platform. You'll focus on reliable, performant CDC ingestion, transformation, governance, and operational excellence so our analysts can publish high-quality analytics on the platform.
Day to day, you'll partner closely with our Senior Data Engineer to work across the breadth of our data infrastructure (Streamkap, dbt Core, ClickHouse, and AWS) to continually improve and extend our data platform.
You’ll collaborate with a cross-functional squad (Data Analyst, DBA, and Software Engineers). If you’re a driven, self-motivated problem solver who enjoys shaping an evolving architecture, is comfortable with ambiguous problem spaces, and takes pride in innovative yet pragmatic solutions, you’ll have clear ownership and impact on a critical, customer-visible product suite.
What you will do
- Build and maintain reliable data pipelines that ingest CDC streams (Streamkap) and batch data into ClickHouse, transforming with dbt Core.
- Implement orchestration to schedule, monitor, and recover our pipelines.
- Enhance ELT performance and stability, and troubleshoot issues across staging and production environments.
- Implement and uphold data governance practices: testing, documentation, and service levels (e.g., freshness/completeness SLIs/SLOs) with lineage.
- Operate and observe pipelines using AWS services.
- Collaborate with a cross-functional team (Senior Data Engineer, Data Analyst, DBA, Software Engineers, Product Managers) to deliver trustworthy datasets consumed in ThoughtSpot.
- Ensure integrity, reliability, and security of the data engineering architecture.
Our stack
- Warehouse: ClickHouse
- Ingestion/CDC: Streamkap
- Transformations: dbt Core
- Cloud: AWS (S3, Lambda, ECS, EventBridge)
- Infra/CI: Terraform, GitHub Actions
What you’ll need
- 3+ years of hands-on data engineering.
- Strong SQL and Python (non-negotiable).
- Experience building and maintaining cloud data warehouse infrastructure (ideally ClickHouse, or a background in Snowflake/BigQuery/Redshift and a willingness to learn ClickHouse).
- Experience building and operating production data pipelines (e.g., with dbt Core).
- Practical experience with AWS and Dockerised workloads.
- Understanding of modern ELT practices, testing, and documentation.
- Direct experience using AI tools in your day-to-day work, or a strong openness and eagerness to learn how to apply them to enhance outcomes.
Some ‘nice to haves’
- Experience with CDC/Kafka-based ingestion tools (such as Streamkap) for near-real-time ingestion.
- Terraform and infrastructure-as-code workflows.
- CI/CD workflows with GitHub Actions.
- Security-minded habits (IAM, tenancy isolation, PII handling).
- Agile delivery experience (Jira) and code review via GitHub PRs.
Our values
- We are bold
- We are all in
- We are customer obsessed
- We do what we say
- We are good humans
Our approach to flexibility
Our hybrid model, with a minimum of two days a week in the office, supports flexibility tailored to individual and team needs, empowering people to achieve their career and personal goals.
Benefits
- A truly flexible workplace through our Flex@HF approach
- The opportunity to be part of a fast-growing global tech company
- A focus on learning and development through Humanforce HR
- A generous talent referral program – know great people, be rewarded
- 12 weeks paid parental leave for primary carers, 4 weeks for secondary carers
- 4 extra days leave to focus on your wellbeing
- Contemporary and practical Employee Assistance Program
- A cool reward and recognition program – shout out to your colleagues and earn points to spend
- Access to our own financial wellbeing platform Thrive – including earned wage access, tools to budget and save, perks and cashback across 100s of Australian retailers
- Fun, collaborative culture with passionate people
- A workplace where you can genuinely improve the world of work!
We are a diverse and dispersed organisation, and we are actively looking to grow our team with individuals from all backgrounds. We encourage applicants of all cultures, ages, genders, neurodiversities, religions, sexual orientations, and experiences to apply.
- Published on 26 Sep 2025, 5:53 AM