Cloud Data Architect - Hybrid
For a European institution in Brussels, Belgium, we are looking for a hybrid Cloud Data Architect. Candidates need to have experience in migrating legacy data systems (SAP Data Services, SAS Data Integration) to a modern cloud-based, open-source data platform solution (preferably a Data Lakehouse).

Candidates need to be based in Belgium or, when coming from abroad, willing to relocate to Brussels. Candidates need to be willing to work 5 days on-site. Candidates need to be fluent in English. A work permit is not provided. This position is open to contractors.

Tasks and Responsibilities:
- Develop and update a data architecture strategy that adapts to evolving needs and accommodates both Business Intelligence and AI workloads;
- Design and implement vendor-agnostic cloud architectures;
- Design a modern, scalable data platform to replace a large legacy data system in a phased approach;
- Align architectural decisions with data governance policies and the department's vision on cloudification;
- Establish and enforce data management policies and processes, including data quality, security and platform health monitoring;
- Ensure regulatory compliance and adherence to audit requirements;
- Provide guidance and mentorship to data analysts and data engineers;
- Facilitate change management by guiding colleagues and users through the migration process;
- Document and maintain the data architecture and data assets in detail;
- Assist with deployment, configuration and testing of the system;
- Participate in meetings with other project teams.

Profile:
- Bachelor's or Master's degree;
- 13+ years of IT experience;
- Experience in migrating legacy data systems (SAP Data Services, SAS Data Integration) to a modern cloud-based, open-source data platform solution (preferably a Data Lakehouse);
- Excellent knowledge of designing scalable, flexible, modern cloud-based and open-source data architectures;
- Experience with AI-powered assistants such as Amazon Q for designing innovative data solutions;
- Strong exposure to Kubernetes;
- Previous experience with relational (PostgreSQL, Oracle) and non-relational (Elasticsearch, MongoDB) database systems;
- Experience with ETL/ELT processes and related data ingestion and transformation tools (such as Spark, dbt, Trino);
- Proficiency in data pipeline orchestration tools (such as Airflow, Dagster, Luigi);
- Knowledge of data governance frameworks and tools (such as DataHub, Open Metadata);
- Familiarity with data quality management, data security, access control and regulatory compliance;
- Proficiency with system-to-system integration via RESTful APIs;
- Experience with DevSecOps practices and tools related to data pipelines, including CI/CD for data infrastructure;
- Good knowledge of modelling tools;
- Fluent in English.

