Job Description
As a Data Engineer, you will be responsible for maintaining and evolving the Solvency II ETL processes, primarily built using Informatica technologies.
In addition, you will support a separate team working on a Big Data platform using PySpark and Databricks within Microsoft Azure.
Key Responsibilities:
Maintain and enhance the Solvency II data processing chain.
Design and build new ETL pipelines using Python, SQL, and PySpark on Databricks within the Azure Cloud environment.
Monitor and ensure the quality of data across ETL pipelines.
Keep documentation up to date.
Actively participate in roadmap discussions and contribute ideas for future improvements.
Provide guidance and act as a sparring partner for continuous improvement across the Solvency II data chain.
Skills and Experience:
What do we offer?
A dynamic, international and challenging work environment
Training and support to reach your full potential, including the opportunity for continuous professional development
Attractive terms and conditions, including competitive salary, pension package and a range of flexible benefits and rewards