Middle Data Engineer

for Smart Metering solutions
LOCATION
Remote; Kyiv, Ukraine; Wrocław, Poland; Bucharest, Romania
SPECIALIZATION
Data
LEVEL
Middle
TECH STACK
Scala, Apache Spark, AWS (S3, Glue, Athena), Apache Hudi, Apache Kafka, Terraform, Docker

ABOUT THE CLIENT

A leading European smart metering manufacturer serving approximately 800 customers through its proprietary IoT platform. The company is undergoing a major digital transformation, migrating from on-premises monolithic infrastructure to a modern cloud-native, multi-tenant architecture on AWS.

ABOUT THE PROJECT

The cloud transformation program focuses on migrating existing infrastructure to AWS, moving from a monolithic single-tenant setup to a scalable, cloud-native platform. The current phase involves building a data lakehouse on AWS: migrating and re-architecting data interpretation and processing pipelines, and working with Scala-based services and Terraform-managed infrastructure.
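To give a flavour of the data-interpretation work described above, here is a minimal plain-Scala sketch. It is illustrative only: the field names (`deviceId`, `scaleFactor`) and the scaling logic are assumptions for the example, not the client's actual schema, and the real services run as Spark jobs over Hudi tables rather than over in-memory collections.

```scala
// Illustrative sketch of a smart-meter "interpretation" step:
// convert raw register values into engineering units, then keep
// the latest reading per device.

final case class RawReading(deviceId: String, timestamp: Long, rawValue: Long, scaleFactor: Double)
final case class InterpretedReading(deviceId: String, timestamp: Long, value: Double)

// Apply the device's scale factor to turn a raw register value
// into a physical measurement.
def interpret(r: RawReading): InterpretedReading =
  InterpretedReading(r.deviceId, r.timestamp, r.rawValue * r.scaleFactor)

// Group interpreted readings by device and keep the most recent one.
def latestPerDevice(rs: Seq[RawReading]): Map[String, InterpretedReading] =
  rs.map(interpret)
    .groupBy(_.deviceId)
    .view
    .mapValues(_.maxBy(_.timestamp))
    .toMap
```

In a production pipeline this transformation would typically run inside a Spark job reading from S3 and upserting into a Hudi table, with the per-device dedup expressed via Hudi's record key and precombine field rather than in application code.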

YOUR TEAM

You will work alongside Brightgrove's Technical Lead and the client's Product Owner and Team Lead.

WHAT'S IN IT FOR YOU

  • Interview process that respects people and their time
  • Professional and open IT community
  • Internal meet-ups and resources for knowledge sharing
  • Time for recovery and relaxation
  • Bright online and offline events
  • Opportunity to become part of our internal volunteer community

YOUR RESPONSIBILITIES

  • Design and implement data pipelines for migrating data interpretation services into the AWS environment
  • Contribute to building and evolving the data lakehouse architecture on AWS
  • Work with Scala-based data processing components, adapting them for cloud-native operation
  • Support infrastructure setup and maintenance using Terraform
  • Collaborate with the client team on data architecture decisions and pipeline design
  • Write supporting scripts and tooling in Python as needed
  • Participate in daily standups and task planning sessions with the client team
  • Take full ownership of deliverables end-to-end - from clarifying requirements through implementation to validation
  • Flag risks, gaps, and blockers early
  • Document technical decisions and migration patterns for knowledge transfer

REQUIRED SKILLS

  • Data engineering experience (seniority level to be confirmed)
  • Strong Scala proficiency - hands-on production experience required
  • Experience with Apache Spark, Apache Hudi, Apache Kafka
  • Familiarity with AWS data services: Athena, Redshift, Glue, S3, or similar
  • Python scripting skills for tooling and automation
  • Working knowledge of Terraform (infrastructure as code)
  • Linux, Docker, GitHub Actions
  • Experience building or contributing to data lakehouse architectures
  • Dedicated and self-organized - able to manage your own workload, set priorities, and deliver consistently
  • Ownership mindset - you follow through without being chased and hold yourself accountable for outcomes
  • Proactive - you push things forward and ask the right questions
  • Strong organizational and communication skills
  • Experience with LLM-assisted development tools (Cursor or similar) is a plus
  • Familiarity with data migration projects and ETL/ELT patterns
  • English proficiency sufficient for daily technical collaboration
Your recruiter: Вікторія Дорош
