We invite you to join our Data Platform team, building the data infrastructure that powers analytics and insights for SOFTSWISS. We process enterprise-scale data and serve multiple internal teams across the company.
In this role, you'll have the opportunity to:
Work with a modern data stack and AI tools to build complex solutions
Design solutions to technical challenges and influence platform architecture
Research, develop and integrate new data processing tools and technologies
Build business-critical, robust and self-service data products using Apache Spark, Kafka, Delta Lake and other big data technologies
Ensure pipeline performance and data quality through scalable architecture and optimization
Refine data solutions as business requirements and technical constraints evolve
Collaborate with teams to understand data needs and support adoption of data products across the company
What we expect:
5+ years of software engineering experience with expertise in development, deployment, and integration
Proficiency in at least one high-level programming language (Scala, Java, Python or equivalent)
Experience with data environments such as Data Lakes, Data Warehouses, and Data Marts, including data transformation concepts and working with large data volumes
Experience in building ETL pipelines to perform feature engineering on large-scale datasets using Apache Spark and Delta Lake
Experience with sourcing and modelling data from application APIs
Experience building stream-processing applications using Spark Structured Streaming, Kafka Streams, or similar
Experience designing and implementing monitoring, alerting, and observability solutions for data pipelines using Prometheus, Grafana, Alertmanager, or similar
Strong analytical and problem-solving skills
Upper-intermediate proficiency in English and strong command of Russian
Nice to have:
Experience with workflow orchestration tools such as Apache Airflow
Experience with Kubernetes and containerization
Familiarity with cloud platforms (AWS, GCP, Azure, OCI)
Familiarity with CDC (Change Data Capture) and Debezium
Familiarity with ClickHouse
Familiarity with data quality validation frameworks for data pipelines
Familiarity with Data Lineage
Familiarity with low-latency NoSQL datastores (such as HBase, Cassandra, MongoDB), relational databases (such as MySQL, PostgreSQL), and search systems (such as Elasticsearch, Solr)
Master's degree, PhD, or equivalent experience in Software Engineering, Mathematics, or Computer Science
What we offer:
Full-time remote work opportunities and flexible working hours
Private insurance
One additional day off per calendar year
Sports program compensation
Comprehensive Mental Health Programme
Free online English lessons with a native speaker
Generous referral program
Training, internal workshops, and participation in international professional conferences and corporate events