The tech stack for this role includes Scala, Spark, Python, and Kafka. The project involves developing, maintaining, testing, and evaluating Big Data solutions for a new and upcoming platform.
You will be expected to add significant value to the Ingestion Framework, which enables data owners to upload data from source systems into the data lake. The Ingestion Framework provides a common, unified mechanism for transferring data from disparate source systems into Hadoop.
The company is in the finance industry and is on track to become one of the biggest banks in Europe. It has several tech hubs and is currently looking for onsite engineers based in either Gdynia or Warsaw, with remote work available for expert-level engineers.
Key responsibilities include:
- Working on the Big Data processing pipeline used to process massive amounts of data
- Using Spark to transform corporate data to enable search, data visualization, and advanced analytics
- Collaborating with the DevOps, QA, and product management teams
Requirements:
- Extensive Scala background (ideally 5+ years)
- Strong knowledge of Spark
- Experience with YARN and Hive
Nice to have:
- Experience with Agile methodologies
- Background in the Finance sector
- Data-driven mentality
- Excellent communication and experience working with distributed global teams
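For a flavour of the day-to-day work, here is a minimal sketch of the kind of normalisation step an ingestion pipeline like this performs before landing data in the lake. Plain Scala collections stand in for Spark DataFrames here purely for illustration (in the real pipeline this would run on Spark), and all record and field names are assumptions, not taken from the posting:

```scala
// Illustrative only: plain Scala collections stand in for what would be
// Spark DataFrames/Datasets in the real pipeline. Record and field names
// are hypothetical.
case class SourceRecord(id: String, amountCents: Long, currency: String)

object IngestionSketch {
  // Normalise raw source records before landing them in the data lake:
  // drop malformed rows, convert minor units to major units, and
  // standardise currency codes.
  def normalise(records: Seq[SourceRecord]): Seq[(String, BigDecimal, String)] =
    records
      .filter(r => r.id.nonEmpty && r.amountCents >= 0)
      .map(r => (r.id, BigDecimal(r.amountCents) / 100, r.currency.toUpperCase))
}
```

The same filter/map shape translates directly to Spark's `Dataset` API, which is one reason the role pairs Scala experience with Spark knowledge.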