At a glance
- Tasks: Design and deploy real-time data pipelines and optimize data models.
- Employer: Valmet is a leading provider of process technologies for the pulp, paper, and energy industries.
- Employee benefits: Join a corporate start-up with opportunities for innovation and growth in a dynamic environment.
- Why this job: Be part of a team revolutionizing manufacturing through IoT and smart solutions.
- Desired qualifications: Bachelor's degree and 5+ years in data engineering with expertise in ETL and cloud technologies.
- Other information: Work in Berlin or Porto and collaborate with diverse teams to drive digital transformation.
The expected salary is between €48,000 and €84,000 per year.
Valmet
Valmet develops and supplies competitive and reliable process technologies, services and automation to the pulp, paper and energy industries. Our automation business covers a wide base of global process industries. We are committed to moving forward.
Do you want to redefine how entire industries work by leveraging IoT, Smart Manufacturing, and Industry 4.0? Would you like to be part of the success of a digital solution that will revolutionize the manufacturing process and improve shop floor performance? If your answer is a big yes, keep reading!
FactoryPal is a corporate start-up headquartered in Berlin, with an additional location in Porto, Portugal. The venture is poised to become the leading end-to-end IoT solution for machine efficiency and equipment effectiveness. The digitally enabled solution is not only reshaping how companies produce and raising their efficiency levels; it is fundamentally changing the way manufacturing employees do their jobs.
We are data scientists, engineers, designers, IIoT experts, product managers, and manufacturing operations consultants. We are a team united by a shared ambition: to revolutionize manufacturing and transform the way it is done to ensure smooth operations.
Become part of an amazing journey and support our customers in their Digital Factory efforts!
(Senior) Data Engineer (m/f/d)
Role and Responsibilities
- Design, develop, and deploy real-time data pipelines using stream processing platforms such as Apache Kafka, Apache Flink, and AWS Glue (see the sketch after this list).
- Build a high-performance, ACID-compliant Data Lake using Apache Iceberg.
- Create, enhance, and optimize data models and implement data warehousing solutions within the Snowflake platform.
- Monitor, identify, and proactively reduce technical debt to maintain system health.
- Develop and improve the current data architecture, emphasizing data lake security, data quality and timeliness, scalability, and extensibility.
- Deploy and use various big data technologies and run pilots to design low-latency data architectures at scale.
- Contribute to automating and monitoring data pipelines, as well as streamlining client onboarding.
- Collaborate with cross-functional teams, including Software Engineers, Product Owners, Data Scientists, Data Analysts, and shop floor consultants, to build and improve our data and analytics solutions.
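To make the first responsibility concrete, here is a minimal sketch of the kind of streaming consumer it describes. This is an illustration only: the topic name, broker address, and message schema are assumptions, not details of FactoryPal's actual stack.

```python
# Minimal real-time pipeline sketch, assuming the kafka-python client,
# a local broker, and a hypothetical "machine-telemetry" topic carrying
# JSON events shaped like {"machine_id": "M1", "oee": 0.72}.
import json

from kafka import KafkaConsumer  # pip install kafka-python

consumer = KafkaConsumer(
    "machine-telemetry",                 # hypothetical topic name
    bootstrap_servers="localhost:9092",  # assumed broker address
    auto_offset_reset="earliest",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)

for message in consumer:
    event = message.value
    # Flag readings below an assumed efficiency threshold.
    if event.get("oee", 1.0) < 0.6:
        print(f"Low OEE on machine {event['machine_id']}: {event['oee']:.2f}")
```

In a production pipeline this consumer loop would typically feed a stream processor or a sink such as the Iceberg-backed data lake mentioned above rather than printing to stdout.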
Qualifications
- Bachelor's degree in Management Information Systems, Statistics, Software Engineering, STEM, or a related technical/quantitative field.
- 5+ years of experience with ETL, data modeling, and data lake approaches.
- 5+ years of experience with processing multi-dimensional datasets from different sources and automating the end-to-end ETL pipeline.
- 3+ years of experience in Python.
- 3+ years of experience in cloud technologies (AWS).
- 3+ years of experience with streaming-based systems (Kafka/Kinesis) and event-driven design.
- 2+ years of experience in distributed computing systems (such as Spark/Flink).
- Experience with continuous delivery and integration.
- Ability to communicate effectively with both business and technical teams.
Nice to have
- Familiarity with IoT data ingestion into any cloud system.
- Basic understanding of Infrastructure as Code principles and experience with Terraform.
- Proficiency in writing dbt models (e.g., sources, transformations, tests).
- Knowledge of building data pipelines and applications that trigger or schedule jobs using Airflow (a minimal DAG sketch follows this list).
- Experience with microservice architecture.
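As a rough illustration of the Airflow item above, a scheduled pipeline might be wired up like this. The DAG id, schedule, and task bodies are hypothetical placeholders, not anything specific to this role.

```python
# Minimal Airflow DAG sketch: two placeholder ETL steps run hourly.
# DAG id, schedule, and callables are illustrative assumptions.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract() -> None:
    print("pull raw telemetry from source systems")  # placeholder step


def load() -> None:
    print("load transformed data into the warehouse")  # placeholder step


with DAG(
    dag_id="telemetry_etl",           # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@hourly",               # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
):
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task  # run extract before load
```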
Are you interested? Join our team and send us your detailed application documents via the "Apply" button.
(Senior) Data Engineer (m/f/d) Employer: Aitopics
Contact person:
Aitopics HR Team
StudySmarter Application Tips 🤫
How to land the job: (Senior) Data Engineer (m/f/d)
✨Tip Number 1
Make sure to showcase your experience with real-time data pipelines and streaming platforms like Apache Kafka and Flink. Highlight any specific projects where you successfully implemented these technologies, as this will demonstrate your hands-on expertise.
✨Tip Number 2
Emphasize your proficiency in cloud technologies, particularly AWS. If you've worked on projects that involved building or optimizing data lakes using AWS services, be sure to mention those experiences to align with our needs.
✨Tip Number 3
Collaboration is key in our team. Share examples of how you've worked with cross-functional teams, especially with software engineers and data scientists, to build data solutions. This will show that you can effectively communicate and work within a diverse team.
✨Tip Number 4
If you have experience with IoT data ingestion or Infrastructure as Code principles, make sure to highlight that. These skills are nice to have and could set you apart from other candidates, showing that you are well-rounded in the field.
These skills make you a top candidate for the position: (Senior) Data Engineer (m/f/d)
Tips for your application 🫡
Understand the Role: Make sure you fully understand the responsibilities and qualifications required for the (Senior) Data Engineer position. Tailor your application to highlight your relevant experience with data pipelines, cloud technologies, and big data systems.
Highlight Relevant Experience: In your CV and cover letter, emphasize your 5+ years of experience with ETL processes, data modeling, and cloud technologies like AWS. Provide specific examples of projects where you've successfully implemented these skills.
Showcase Technical Skills: Clearly list your technical skills related to the job description, such as proficiency in Python, experience with streaming systems like Kafka, and familiarity with distributed computing systems. Mention any relevant certifications or courses you've completed.
Craft a Compelling Cover Letter: Write a personalized cover letter that explains why you're excited about the opportunity at Valmet and how your background aligns with their mission to revolutionize manufacturing through IoT and smart technologies. Be sure to convey your passion for data engineering.
How to prepare for an interview at Aitopics
✨Showcase Your Technical Skills
Be prepared to discuss your experience with ETL processes, data modeling, and cloud technologies like AWS. Highlight specific projects where you've successfully implemented these skills, especially using tools like Apache Kafka or Snowflake.
✨Demonstrate Problem-Solving Abilities
Expect questions that assess your ability to identify and reduce technical debt. Share examples of how you've proactively improved system health and optimized data architectures in previous roles.
✨Emphasize Collaboration
Since the role involves working with cross-functional teams, be ready to discuss your experience collaborating with software engineers, data scientists, and product owners. Provide examples of how you effectively communicated complex technical concepts to non-technical stakeholders.
✨Stay Updated on Industry Trends
Familiarize yourself with the latest trends in IoT, Smart Manufacturing, and Industry 4.0. Being knowledgeable about these topics will show your enthusiasm for the industry and your commitment to contributing to the company's mission.