Design and deploy scalable, standardized, and maintainable data pipelines that enable efficient logging, error handling, and real-time data enrichment. The role requires strong ownership of both implementation and performance.
Responsibilities
Optimize Splunk queries and search performance using best practices (see the SPL sketch after this list)
Build and manage data ingestion pipelines from sources such as Kafka, APIs, and log streams (see the ingestion sketch below)
Standardize error structures (error codes, severity levels, categories; see the error-envelope sketch below)
Create mappings between identifiers such as session ID, user ID, and service/module components (see the enrichment sketch below)
Implement real-time data enrichment processes using APIs, databases, or lookups (see the enrichment sketch below)
Set up alerting configurations with thresholds, modules, and logic-based routing (see the alerting sketch below)
Collaborate with developers, DevOps, and monitoring teams to unify logging conventions
Document flows and ensure traceability across environments
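To give a concrete flavor of the Splunk work above, here is a minimal sketch of running a tuned SPL search through the splunk-sdk Python package (splunklib). The host, credentials, index, and field names are hypothetical placeholders; the query simply illustrates common SPL performance practices.

```python
# Minimal sketch, assuming the splunk-sdk package (splunklib) is
# installed; host, credentials, index, and field names are hypothetical.
import splunklib.client as client
import splunklib.results as results

service = client.connect(
    host="splunk.example.com", port=8089,
    username="svc_pipeline", password="***",
)

# SPL tuning practices: bound the time range, filter on indexed fields
# (index/sourcetype) as early as possible, and aggregate with `stats`
# instead of pulling raw events back to the client.
query = (
    "search index=app_logs sourcetype=service_json earliest=-15m "
    "error_code=* "
    "| stats count AS errors BY error_code, severity"
)

job = service.jobs.create(query, exec_mode="blocking")
for item in results.JSONResultsReader(job.results(output_mode="json")):
    if isinstance(item, dict):          # skip diagnostic messages
        print(item)
```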
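For the ingestion item, a minimal consumer sketch using the kafka-python package; the topic name and broker address are hypothetical placeholders.

```python
# Minimal ingestion sketch, assuming the kafka-python package; the
# topic name and broker address are hypothetical.
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "service-logs",                          # hypothetical topic
    bootstrap_servers=["broker.example.com:9092"],
    group_id="log-pipeline",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

for message in consumer:
    event = message.value                    # one parsed log event (dict)
    # Hand the event to the enrichment and forwarding stages here.
    print(event.get("error_code"), event.get("message"))
```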
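For error standardization, one possible shape of a shared error envelope; the specific codes, severities, and categories below are illustrative, not an established schema.

```python
# Sketch of a standardized error envelope; codes and categories
# are illustrative placeholders.
from dataclasses import dataclass, asdict
from enum import Enum
import json

class Severity(Enum):
    INFO = "info"
    WARNING = "warning"
    ERROR = "error"
    CRITICAL = "critical"

class Category(Enum):
    VALIDATION = "validation"
    DEPENDENCY = "dependency"
    TIMEOUT = "timeout"

@dataclass
class StandardError:
    code: str          # e.g. "SVC-1042", unique per error type
    severity: Severity
    category: Category
    message: str
    service: str

    def to_log_line(self) -> str:
        record = asdict(self)
        record["severity"] = self.severity.value
        record["category"] = self.category.value
        return json.dumps(record)

print(StandardError("SVC-1042", Severity.ERROR, Category.TIMEOUT,
                    "upstream call timed out", "checkout").to_log_line())
```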
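For identifier mapping and real-time enrichment, a sketch that joins lookup context onto an event before it is indexed; the in-memory dict stands in for whatever lookup store (database, API, KV collection) the pipeline actually uses, and all field values are made up.

```python
# Enrichment sketch: map a session ID to user and service/module
# context. SESSION_LOOKUP is a hypothetical stand-in for a real store.
SESSION_LOOKUP = {
    "sess-9f3a": {"user_id": "u-1182", "service": "checkout",
                  "module": "payments"},
}

def enrich(event: dict) -> dict:
    context = SESSION_LOOKUP.get(event.get("session_id"), {})
    # Merge lookup fields without overwriting what the event already has.
    return {**context, **event}

raw = {"session_id": "sess-9f3a", "error_code": "SVC-1042"}
print(enrich(raw))
# {'user_id': 'u-1182', 'service': 'checkout', 'module': 'payments',
#  'session_id': 'sess-9f3a', 'error_code': 'SVC-1042'}
```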
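For alerting configuration, a sketch of threshold checks with module-based routing; the thresholds, channel names, and notify() stub are placeholders.

```python
# Alerting sketch: per-severity thresholds plus module-based routing.
THRESHOLDS = {"error": 50, "critical": 1}   # events per window (hypothetical)

ROUTES = {
    "payments": "#payments-oncall",
    "checkout": "#checkout-oncall",
}

def notify(channel: str, text: str) -> None:
    print(f"[{channel}] {text}")            # stand-in for a real notifier

def evaluate(counts: dict, module: str) -> None:
    for severity, count in counts.items():
        limit = THRESHOLDS.get(severity)
        if limit is not None and count >= limit:
            channel = ROUTES.get(module, "#ops-default")
            notify(channel, f"{module}: {count} {severity} events "
                            f"(limit {limit})")

evaluate({"error": 73, "warning": 12}, module="payments")
```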
Requirements
Minimum 3 years of hands-on experience with Splunk (mandatory)
Proficiency in SPL, data parsing, dashboards, macros, and performance tuning (mandatory)
Experience working with event-driven systems (e.g., Kafka, REST APIs) (mandatory)
Deep understanding of structured and semi-structured data such as JSON, XML, and logs (mandatory; see the parsing sketch after this list)
Strong scripting ability with Python or Bash
Familiarity with CI/CD processes using tools such as Git and Jenkins
Experience with data modeling, enrichment logic, and system integration
Familiarity with log schema standards (e.g., ECS, CIM) (advantage)
Ability to work independently and deliver production-ready, scalable solutions (mandatory)
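As a small illustration of the structured and semi-structured data handling mentioned above, a sketch that normalizes JSON and XML log records into one dict shape; the field names are illustrative.

```python
# Parsing sketch: normalize JSON and XML log records into a common
# dict shape; field names are hypothetical.
import json
import xml.etree.ElementTree as ET

def parse_json_log(line: str) -> dict:
    record = json.loads(line)
    return {"ts": record.get("timestamp"), "msg": record.get("message")}

def parse_xml_log(blob: str) -> dict:
    root = ET.fromstring(blob)
    return {"ts": root.findtext("timestamp"), "msg": root.findtext("message")}

print(parse_json_log('{"timestamp": "2024-01-01T00:00:00Z", "message": "ok"}'))
print(parse_xml_log("<event><timestamp>2024-01-01T00:00:00Z</timestamp>"
                    "<message>ok</message></event>"))
```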
The position is intended for women and men alike.