About the team
The chapter Technology & Engineering - Data & AI Engineering brings together all experts who deliver data and AI engineering capabilities to our product teams. Organizing these experts in one chapter strengthens their functional skillsets, improves E.ON's MLOps & AIOps tech stack and ensures a high degree of automation in data delivery and model deployment.
The products you will be working on belong to our team Digital Solutions | Future Energy Home. This team develops and operates cloud-based software for managing home energy devices such as photovoltaic systems, inverters, batteries, EV charging wall boxes, heat pumps and meters, enabling our central and local units to roll out these solutions to end customers. We integrate devices from numerous vendors and apply centralized insights, analytics and control mechanisms to them.
meaningful & challenging - Your tasks
Integrate data platform pipelines across clouds (AWS & Azure), applying Data Mesh and Data Fabric architecture concepts
Implement data-sharing interfaces and connectors that make business data (e.g. solar telemetry, electric vehicle charging data, …) available to our regional business data consumers
Build robust data pipeline applications with AWS and Azure data services, following software engineering principles such as Clean Code and SOLID (see the sketch after this list)
Work closely with Data Solutions Architects to understand and shape the overarching data architecture for data-sharing interfaces and connectors
Mentor and coach others, conduct pair-programming sessions, review merge requests and actively contribute to our Community of Practice
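To give a flavour of this work, here is a minimal sketch of a data-sharing pipeline step in Python. It is an illustration under assumed conventions, not our actual codebase: the bucket names, the SolarTelemetryReader and RegionalSharePublisher classes and the event shape are all hypothetical.

```python
"""Minimal sketch of a data-sharing pipeline step (all names hypothetical)."""
import json

import boto3


class SolarTelemetryReader:
    """Reads raw telemetry records from an S3 landing zone (single responsibility)."""

    def __init__(self, s3_client, bucket: str):
        self._s3 = s3_client
        self._bucket = bucket

    def read(self, key: str) -> list[dict]:
        obj = self._s3.get_object(Bucket=self._bucket, Key=key)
        # One JSON record per line in the raw object.
        return [json.loads(line) for line in obj["Body"].read().splitlines()]


class RegionalSharePublisher:
    """Publishes curated records to a bucket consumed by a regional unit."""

    def __init__(self, s3_client, bucket: str):
        self._s3 = s3_client
        self._bucket = bucket

    def publish(self, key: str, records: list[dict]) -> None:
        body = "\n".join(json.dumps(r) for r in records).encode("utf-8")
        self._s3.put_object(Bucket=self._bucket, Key=key, Body=body)


def handler(event, context):
    """Lambda entry point: read raw telemetry, keep valid records, publish them."""
    s3 = boto3.client("s3")
    reader = SolarTelemetryReader(s3, bucket="raw-telemetry-landing")     # hypothetical bucket
    publisher = RegionalSharePublisher(s3, bucket="regional-data-share")  # hypothetical bucket
    records = reader.read(event["object_key"])
    publisher.publish(event["object_key"], [r for r in records if "power_kw" in r])
```

Composing the handler from small, injected classes keeps each piece single-purpose and unit-testable in isolation; this is the kind of SOLID-informed structure we look for.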
authentic & ambitious - Your profile
At least 5 years of experience building enterprise-grade Python data pipeline (ETL) applications on AWS / Azure, applying software best practices such as Clean Code and SOLID principles
At least 3 years of experience with AWS services relevant to data engineering (e.g. Athena, Lambda, Glue, IAM & CloudWatch), plus a background in Azure
Profound knowledge of Databricks (e.g. PySpark) and Snowflake as well as the Python ecosystem (pandas, pytest, behave); see the pytest sketch after this list
Experience building DataOps pipelines with GitLab, CloudFormation, Terraform or CDK, and with orchestration tools (e.g. AWS Step Functions)
Preferably, experience with data modelling concepts such as Data Vault 2.0 and dimensional data modelling (see the Data Vault sketch after this list)
Excellent communication skills and the ability to mentor and coach other developers
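As an illustration of the testing style we value, here is a minimal pytest sketch for a pandas transform; the transform, column names and sample values are hypothetical.

```python
"""Minimal pytest sketch for a pandas transform (names and values hypothetical)."""
import pandas as pd
from pandas.testing import assert_frame_equal


def to_hourly_kwh(telemetry: pd.DataFrame) -> pd.DataFrame:
    """Aggregate raw power readings (kW) into hourly energy (kWh) per device.

    Average power in kW over one hour equals the energy in kWh for that hour.
    """
    return (
        telemetry
        .assign(hour=telemetry["timestamp"].dt.floor("h"))
        .groupby(["device_id", "hour"], as_index=False)["power_kw"]
        .mean()
        .rename(columns={"power_kw": "energy_kwh"})
    )


def test_to_hourly_kwh_aggregates_per_device_and_hour():
    raw = pd.DataFrame({
        "device_id": ["inv-1", "inv-1", "inv-2"],
        "timestamp": pd.to_datetime([
            "2024-06-01 10:05", "2024-06-01 10:35", "2024-06-01 10:20",
        ]),
        "power_kw": [2.0, 4.0, 1.5],
    })
    expected = pd.DataFrame({
        "device_id": ["inv-1", "inv-2"],
        "hour": pd.to_datetime(["2024-06-01 10:00", "2024-06-01 10:00"]),
        "energy_kwh": [3.0, 1.5],
    })
    assert_frame_equal(to_hourly_kwh(raw), expected)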
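```

And for the Data Vault 2.0 point above, a minimal sketch of a hub load, assuming a hypothetical HUB_DEVICE table: in Data Vault 2.0 the hash key is a hash of the normalized business key, carried alongside a load date and record source.

```python
"""Minimal sketch of a Data Vault 2.0 hub load (table and column names hypothetical)."""
import hashlib
from datetime import datetime, timezone


def hub_hash_key(business_key: str) -> str:
    """Data Vault 2.0 style hash key: hash of the normalized business key."""
    return hashlib.md5(business_key.strip().upper().encode("utf-8")).hexdigest()


def build_hub_device_rows(device_ids: list[str], record_source: str) -> list[dict]:
    """Build rows for a hypothetical HUB_DEVICE table: one row per business key."""
    load_ts = datetime.now(timezone.utc)
    return [
        {
            "device_hash_key": hub_hash_key(device_id),
            "device_id": device_id,  # the business key itself
            "load_date": load_ts,
            "record_source": record_source,
        }
        for device_id in device_ids
    ]


rows = build_hub_device_rows(["INV-001", "WB-042"], record_source="vendor_api")
```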
smart & useful - Our benefits