Job Description
This role is responsible for analyzing business and system requirements and defining an optimal data strategy to fulfill them. The role holder will design and implement data acquisition strategies and will own and extend the business’s data pipeline through the collection, storage, processing, and transformation of large datasets. This role will lead the data architecture in the LatAm region and manage a small team to implement this strategy. To be successful in this role, you will need excellent analytical skills for working with structured, semi-structured, and unstructured datasets, expert SQL knowledge, and experience with both relational and NoSQL database systems. You must have expert-level programming skills in Python/PySpark; experience with Java/Scala programming is beneficial.
Requirements
- University Degree in Computer Science, Information Systems, Statistics, or related field.
- Expertise implementing Data Lake/Big Data projects on cloud (preferably MS Azure) and/or on-premise platforms.
- Experience with the Azure cloud technology stack: ADLS Gen2, Databricks, Event Hubs, Stream Analytics, Synapse Analytics, AKS, Key Vault.
- On-premise: Spark, HDFS, Hive, Hadoop distributions (Cloudera or MapR), Kafka, and Airflow (or another scheduler).
- Experience designing and building lakehouse architectures in Parquet/Delta and Synapse Serverless or Databricks SQL (knowledge of Unity Catalog is a big plus).
- Ability to develop, maintain, and distribute code in a modular fashion.
- Working experience with DevOps frameworks.
- Very good understanding of the software development lifecycle, source code management, code reviews, etc.
- Experience in performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
- Experience in building processes supporting data transformation, data structures, metadata, dependency, and workload management.
- Ability to collaborate across different teams/geographies/stakeholders/levels of seniority.
- Energetic, enthusiastic, and results-oriented personality.
- Customer-focused with an eye on continuous improvement.
- Motivation and ability to perform as a consultant in data engineering projects.
- Ability to work both independently and within a team.
- Leadership skills to define the team’s strategy and develop a team of three people.
- Strong will to overcome the complexities involved in developing and supporting data pipelines.
- Agile mindset.
- Language Requirements: English (fluent, spoken and written); Spanish (nice to have).
Key Responsibilities
- Developing and leading data engineers (3 people).
- Building scalable, performant, supportable, and reliable data pipelines; you must enjoy optimizing data systems as well as building them from the ground up.
- Setting up new metrics and monitoring existing ones, analyzing data, and collaborating with other Data & Analytics team members to identify and implement system and process improvements.
- Supporting the collection of metadata into our data catalog system, ensuring data is maintained and cataloged accurately.
- Defining and implementing a DevOps framework using CI/CD.
- Working closely with big data architects, data analysts, data warehouse engineers, and data scientists.
- Supporting and promoting an Agile way of working using the Scrum framework.

Contact person:
DHL Germany HR Team