We're looking for a DataOps Engineer to join the DataOps team in the Grow division of Unity.
The team is in charge of the operational stability, performance and cost of data infrastructures that serve both users (within and outside Unity) and applications (transferring data between applications). The main technologies we work with are Druid, Spark Structured Streaming, Iceberg, Trino, Airflow and Aurora RDS.
As a team we put a lot of focus on reducing the cost and improving the performance of our data services hosted on the cloud.
We work on both AWS and GCP, and our most important data infrastructures run on Kubernetes, on either AWS EKS or GCP GKE.
The role is to make these data infrastructures as stable, cost-efficient and performant as possible.
Short-term success means being able to maintain all of the above services as part of the on-call rotation.
Long-term success means participating in the development of the Druid and Streaming services and improving their stability, cost and performance.
In doing so, you will increase the team's impact on the data infrastructures within Grow.
We're a team that moves fast, works with interesting technologies, and always strives to improve the stability, cost and performance of infrastructures that handle traffic at very high scale. The role suits a DevOps, SRE or DataOps engineer who wants to be part of an excellent DataOps team that manages data transfers at high traffic (millions of events per second) with minimal production issues, while always focusing on cost and performance. On top of that, the team members are cool, professional people who love what they do.
What you'll be doing
Improve the stability of our Druid, Streaming, Iceberg, Trino and MySQL infrastructures
Participate in the on-call rotation for these infrastructures
Focus on monitoring, cost reduction and performance improvements
Develop scripts in Python
Collaborate with the customers of our infrastructure
Requirements: What we're looking for
3+ years of experience on an Ops team (DevOps or DataOps), developing in Python and serving on-call for cloud infrastructures on AWS, GCP or Azure
At least 3 years of combined experience with the AWS and GCP clouds
Experience with k8s (either on AWS EKS or GCP GKE)
Experience with streaming technologies (Spark Streaming, Spark Structured Streaming or Flink)
Experience with databases (Druid, Trino)
You might also have
Good development skills in Scala or Python
Good understanding of Linux OS administration
Background as an SRE
This position is intended for women and men alike.