Similarweb is the leading digital intelligence platform, used by over 3,500 global customers. Our wide range of solutions powers the digital strategies of companies like Google, eBay, and Adidas.
We help our customers succeed in today's digital world by giving them access to data-driven insights, competitive benchmarks, strategic analysis, and more.
In 2021, we went public on the New York Stock Exchange, and we haven't stopped growing since!
We're seeking exceptional individuals to join our R&D team. At Similarweb, you'll innovate fast, collaborate with brilliant minds, solve big problems, work with cutting-edge technologies and data at incredible scale, and make a tangible impact on the world's most innovative companies.
We're looking for a Big Data Engineer to develop and integrate systems that retrieve, process, and analyze data from around the digital world, generating customer-facing data. This role will report to our Team Manager, R&D.
Why is this role so important at Similarweb?
Similarweb is a data-focused company, and data is the heart of our business.
As a big data engineer, you will work at the very core of the company, designing and implementing complex, high-scale systems to retrieve and analyze data from millions of digital users.
Your role as a big data engineer will give you the opportunity to use cutting-edge technologies and best practices to solve complex technical problems while demonstrating technical leadership.
So, what will you be doing all day?
Your role as part of the R&D team means your daily responsibilities may include:
Designing and implementing complex, high-scale systems using a wide variety of technologies.
Working in a data research team alongside other data engineers, data scientists, and data analysts to tackle complex data challenges together and bring new solutions and algorithms to production.
Contributing to and improving the existing infrastructure of code and data pipelines, constantly exploring new technologies and eliminating bottlenecks.
Experimenting with various technologies in the domains of machine learning and big data processing.
Building monitoring infrastructure for our data pipelines to ensure smooth and reliable data ingestion and calculation.
Requirements: This is the perfect job for someone who:
Is passionate about data.
Holds a BSc degree in Computer Science/Engineering or a related technical field of study.
Has at least 4 years of software or data engineering experience in one or more of the following programming languages: Python, Java, or Scala.
Has strong programming skills and knowledge of data structures, design patterns, and object-oriented programming.
Has a good understanding of, and hands-on experience with, CI/CD practices and Git.
Has excellent communication skills and can maintain an ongoing dialogue between and within data teams.
Can easily prioritize tasks and work independently and with others.
Conveys a strong sense of ownership over the team's products.
Is comfortable working in a fast-paced dynamic environment.
Advantage:
Has experience with containerization technologies like Docker and Kubernetes.
Has experience designing and productionizing complex big data pipelines.
Is familiar with a cloud provider (AWS / Azure / GCP).
Has experience with big data technologies and common frameworks such as Spark, Airflow, Kafka, Parquet, Databricks, EMR, and Kubernetes.
This position is open to women and men alike.