Senior Data Engineer – Scala – Spark – Kafka – AWS
This role offers the opportunity to work with the following tech stack: Scala, Spark, Kafka, AWS, and Docker/Kubernetes.
This company is evolving alongside personal mobility: as more vehicles become fleet-owned, insurance must adapt to the changing risk profile, and shifting consumer decisions and preferences are forcing underwriting models to be transformed. The growing use of in-vehicle technology has created a torrent of new data that enhances the predictive power of modern AI-driven applications. Their core product uses streaming telematics data to adjust insurance pricing in real time, responding to each driver's needs, environment, behaviour and usage.
They have built their working environment around people rather than the skills required (although skills are important too). They believe strongly in open source software, idea sharing, and open collaboration across all development teams. They need people who share these values and understand how important they are to the success of the project.
Benefits:
- Competitive salaries
- Options scheme, with the added benefit of being an early joiner
- Work from the office, from home or anywhere!
- Very flexible and generous holiday policy
- Opportunity to learn and develop skills within a modern working environment
Requirements:
- Spark/Flink/Kafka automated testing experience.
- Scala and Java, with functional programming experience; Python experience a plus.
- AWS frameworks.
- Kubernetes/Docker – Microservices.
- Jenkins, Maven or Gradle, Git, Ansible.
- Prometheus, Grafana.
- Experience developing or testing microservices, especially data-centric ones.
- Interest in systems architecture and building distributed systems at scale.
- Strong automation mindset and a passion for root cause analysis.
- Expertise in performance tuning and service monitoring.
Nice to haves:
- Event sourcing / CQRS patterns
- Timeseries data stores (InfluxDB, CrateDB)
- NoSQL databases (Cassandra or HBase)
- Modern data warehousing (Hive, Kylin or Presto)
- Indexing and search engines (Elasticsearch)