Brightgrove
Middle Data Engineer
for Investor Services Solutions
Location: Bucharest, Romania
Area: Data
Tech Level: Middle
Tech Stack: PySpark, Azure, AWS, Python, Snowflake

About the Client

Our customer is a leading investor services group employing 4,750+ people across 25 jurisdictions worldwide. The group brings together a rare combination of global technical expertise and a deep understanding of its clients' needs, helping clients and the wider sector stay compliant. It acts as a guardian and facilitator of its clients' investments.

Project details

This role bridges current capacity for business-as-usual (BAU) data engineering: keeping pipelines healthy, improving reliability and freeing senior engineers to focus on roadmap and strategic work.

The initial engagement is 3 to 6 months, with an option to extend.

Your Team

A small, high-performing group led by the Group Head of Data Transformation and the Head of Data Engineering. Tooling includes Snowflake, dbt, Python and orchestration on Kubernetes, with close collaboration with DevOps and Data Science. The culture values ownership, clarity and calm incident response.

What's in it for you

  • Interview process that respects people and their time
  • Professional and open IT community
  • Internal meet-ups and resources for knowledge sharing
  • Time for recovery and relaxation
  • Bright online and offline events
  • Opportunity to become part of our internal volunteer community

Responsibilities

  • Build, monitor and support data pipelines that land data from varied sources into Snowflake
  • Create and maintain dbt models, tests, documentation and environments
  • Write clean Python for ELT and utilities, including packaging and simple CI/CD steps with Jenkins or similar
  • Investigate and resolve BAU incidents within agreed priorities, leaving clear handover notes
  • Improve observability, data quality tests and runbooks
  • Contribute concise PRs and reviews; follow coding standards and the branching strategy
  • Coordinate with DevOps on resource usage, secrets and deployments on Kubernetes
  • Support knowledge transfer and keep documentation up to date

Skills

Must have

  1. 4+ years in data engineering with production experience in Python
  2. Strong dbt modeling with tests, exposures and environment management
  3. Solid Snowflake skills, including performance basics, roles and warehouses
  4. Comfortable reading logs, tracing failures and fixing pipeline issues
  5. Clear English communication and pragmatic problem solving

Nice to have

  1. PySpark or Snowpark
  2. Dagster or another orchestrator
  3. Jenkins or similar CI/CD
  4. Experience with Azure or AWS storage and integrations

Your personal recruiter: Iulia Oancea
