Pere Miquel is part of the European Data Lake in Zurich.
Tell us about your role.
I mostly develop ETL solutions using Azure services (Data Factory, Databricks...) with Spark and Scala. In the European Data Lake we aim to create a central source of truth for the group's data. Lately we have also been getting requests for cloud-native applications, which lets us use other tools such as Azure Functions and Docker / Kubernetes.
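As a rough illustration of the kind of ETL step this describes, here is a minimal Spark sketch in Scala, as it might run on a Databricks cluster. The paths, column names, and aggregation are hypothetical examples, not an actual European Data Lake pipeline.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object EtlSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("etl-sketch")
      .getOrCreate()

    // Extract: read raw records from a landing zone (path is hypothetical)
    val raw = spark.read
      .option("header", "true")
      .csv("/mnt/landing/orders")

    // Transform: normalise types and aggregate per day
    val daily = raw
      .withColumn("amount", col("amount").cast("double"))
      .groupBy(col("order_date"))
      .agg(sum("amount").alias("total_amount"))

    // Load: write the curated output as a Delta table
    // (the Delta format is available out of the box on Databricks)
    daily.write
      .format("delta")
      .mode("overwrite")
      .save("/mnt/curated/daily_orders")

    spark.stop()
  }
}
```

In a real pipeline a tool like Data Factory would typically orchestrate jobs of this shape, triggering them on a schedule or when new data lands.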
What brought you to this position? How did you get to your current field?
I ended up in this role after briefly working as a Data Scientist / Engineer at my former company, a consultancy. There I came into contact with technologies such as Spark, which I liked so much that I went on to do a Master's degree in Big Data.
What would you say is your favourite data tool?
That would be Databricks. It's really convenient to have the flexibility to experiment with your cluster, test in the notebooks, and get all the metrics in just one click, and I like its clean integration with other cloud services. I just want it to keep growing and providing interesting new approaches to Big Data, such as Delta Lake.
What do you think 2020 will bring in the tech/programming world?
Looking at 2020 in my field, I believe Data Engineering will reach a larger audience thanks to tools that ease part of the work and let users focus more on the business and functional cases. However, this is also a good opportunity for developers: we'll be required to dive deeper into the underlying technologies, as more Big Data applications on the market mean more processes that will need fine-tuning!