Scala Big Data Engineer - Autónomo

Job title: Scala Big Data Engineer - Autónomo
Location: Madrid
Reference: 4950
Contact name: Max Kanter
Contact email:
Published: January 9, 2023 5:08

Job Description

Signify are working with a large communications company in Europe on an exciting new role focused on designing more efficient and valuable data pipelines.


The task:

As a Scala Developer, you will be responsible for designing, developing, and maintaining data pipelines that provide valuable insights for the company. By adopting a DevOps approach, you will keep the system running smoothly by automating deployment tasks, freeing the team to focus on building new features. You will also be responsible for testing and monitoring the system using appropriate methods and tools.


Core requirements:

  • 3+ years of programming in Scala with Apache Spark
  • Strong knowledge of ETL processes
  • DevOps knowledge and experience (2 years minimum)
  • Prior use of Hadoop and HDFS for large file storage (must be able to work with these autonomously)


Your responsibilities will include:

  • Developing data architectures
  • Contributing to the short-, mid-, and long-term vision of the system
  • Extracting, transforming, and loading data from large and complex data sets
  • Ensuring that data is easily accessible and performs well in scalable environments
  • Participating in the planning and architecture of the big data platform to optimize performance
  • Building large data warehouses for further reporting or advanced analytics
  • Collaborating with machine learning engineers to implement and deploy solutions
  • Ensuring robust CI/CD processes are in place


The tech stack at a glance:

Scala, Spark, Hadoop, Databricks, Kafka, AWS, and more


To be successful in this role, you should have strong knowledge of and experience with Scala and Spark. You should also have experience with SQL and NoSQL databases and be familiar with CI/CD concepts.

In addition, you should have technical knowledge in data pipeline management, workflow management (such as Oozie or Airflow), and large file storage (such as HDFS, Data Lake, S3, or Blob storage).

Experience with stream processing technologies (such as Kafka, Kinesis, or Elasticsearch) and a cloud environment (such as Hadoop, Cloudera, EMR, or Databricks) is a plus.


If you think this role sounds interesting, please don't hesitate to apply or reach out to me with any questions at


Please note: This is a senior freelance (Autónomo) contract role. Please only apply if you are happy to work as a freelancer in Spain and have over 5 years of total software engineering experience.