At a Glance
- Tasks: Build and maintain data pipelines for steel production scheduling.
- Employer: Join a modern team in Berlin focused on optimizing industrial processes.
- Employee benefits: Enjoy professional development, language courses, and a sleek office environment.
- Why this job: Make a real impact on production efficiency while collaborating with diverse experts.
- Desired qualifications: Bachelor's or Master's degree in a technical field; experience with data pipelines and Python required.
- Other information: Fluency in English is essential; German is a plus.
The expected salary is between €43,200 and €72,000 per year.
As a Data Engineer on our team, you will play a crucial role in building software that automates and optimizes scheduling for steel production. Your responsibilities will include designing, building, and maintaining event-driven data pipelines that integrate real-time data from various steel plants, ensuring data cleanliness and performing data transformations. You will monitor, debug, and resolve issues related to faulty data, ensuring accuracy and reliability across our systems. A key part of your role will be understanding and navigating complex data flows in an industrial setting.
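The cleaning and transformation responsibilities described above can be sketched in a few lines of Python. Note that the record schema (`plant_id`, `celsius`) and the plausibility threshold are purely illustrative assumptions, not the company's actual data model:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TemperatureReading:
    """A hypothetical cleaned event from a steel-plant sensor."""
    plant_id: str
    celsius: float

def clean_reading(raw: dict) -> Optional[TemperatureReading]:
    """Validate one raw event; return None for faulty data.

    The field names and the 0-2000 °C plausibility window are
    illustrative assumptions for this sketch.
    """
    try:
        celsius = float(raw["celsius"])
        plant_id = str(raw["plant_id"])
    except (KeyError, TypeError, ValueError):
        return None  # malformed record: drop (and count it for monitoring)
    if not 0.0 <= celsius <= 2000.0:
        return None  # physically implausible: treat as a faulty sensor
    return TemperatureReading(plant_id=plant_id, celsius=celsius)

def transform(events: list[dict]) -> list[TemperatureReading]:
    """Keep only the valid readings from a batch of raw events."""
    return [r for e in events if (r := clean_reading(e)) is not None]
```

In a real event-driven pipeline, a function like `clean_reading` would sit between the message consumer (e.g. a Kafka topic) and the downstream scheduler, with the drop counts feeding the monitoring mentioned in the profile.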
You will work closely with a diverse team of mathematicians, C++ and Python backend developers, as well as our customers, to understand and address the data needs of all stakeholders. Your work will directly contribute to improving the efficiency of production scheduling.
YOUR PROFILE
- Bachelor’s or Master’s degree in a technical field (e.g. Computer Science, Engineering).
- Proven experience managing production data pipelines, including troubleshooting data inconsistencies and pipeline issues; knowledge of monitoring solutions.
- Ability to perform simple data analysis. Interest in digging into the data and understanding underlying processes.
- Strong software engineering skills in Python (C++ knowledge is a plus). Ability to write scalable, maintainable, high-quality code.
- Experience with SQL databases (PostgreSQL, Oracle). Understanding of query optimization, schema design and database migrations.
- Production experience with stream processing systems, particularly Apache Kafka (experience with Kafka Connect is a plus).
- Knowledge of Docker and Docker Swarm container orchestration. Experience implementing CI/CD pipelines is a plus.
- Experience with Git-based workflows and collaborative development practices.
- English is required; fluency in German is a plus.
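The SQL expectations in the list above (schema design, indexing for query optimization) can be illustrated with a minimal sketch. The `plant_events` table and its columns are hypothetical, and SQLite stands in here for the PostgreSQL/Oracle databases the role actually names:

```python
import sqlite3

# In-memory database for illustration; in production this would be
# PostgreSQL or Oracle, as mentioned in the profile.
conn = sqlite3.connect(":memory:")

# Schema design: a hypothetical table of per-plant production events.
conn.execute("""
    CREATE TABLE plant_events (
        id        INTEGER PRIMARY KEY,
        plant_id  TEXT NOT NULL,
        event_ts  TEXT NOT NULL,   -- ISO-8601 timestamp
        payload   TEXT
    )
""")

# Query optimization: a composite index matching the dominant access
# pattern (recent events for one plant, ordered by time).
conn.execute(
    "CREATE INDEX idx_events_plant_ts ON plant_events (plant_id, event_ts)"
)

conn.executemany(
    "INSERT INTO plant_events (plant_id, event_ts, payload) VALUES (?, ?, ?)",
    [("A", "2024-01-01T00:00:00", "heat started"),
     ("B", "2024-01-01T00:05:00", "ladle moved"),
     ("A", "2024-01-01T00:10:00", "heat finished")],
)

# This query can be served by the index above instead of a full scan.
rows = conn.execute(
    "SELECT event_ts, payload FROM plant_events "
    "WHERE plant_id = ? ORDER BY event_ts",
    ("A",),
).fetchall()
```

Schema migrations would typically be managed with a dedicated tool rather than raw DDL; the point of the sketch is only the schema/index pattern.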
BENEFITS
- Transparency of business strategies.
- Autonomy balanced by responsibility.
- Cohesive, multidisciplinary team.
- Modern machine learning techniques.
- Sleek, new office located in Berlin.
- Professional educational opportunities.
- German language courses and public transport included.
Data Engineer (m/f/d) Employer: Smart Steel Technologies GmbH
Contact person:
Smart Steel Technologies GmbH HR Team
StudySmarter Application Tips 🤫
How to get the job: Data Engineer (m/f/d)
✨Tip Number 1
Familiarize yourself with the specific technologies mentioned in the job description, such as Apache Kafka and Docker. Having hands-on experience or projects that showcase your skills in these areas can set you apart from other candidates.
✨Tip Number 2
Engage with the data engineering community online. Participate in forums or groups related to data pipelines and production data management. This not only helps you learn but also allows you to network with professionals who might provide insights or referrals.
✨Tip Number 3
Prepare to discuss real-world scenarios where you've managed data inconsistencies or pipeline issues. Being able to articulate your problem-solving process will demonstrate your practical experience and understanding of the role.
✨Tip Number 4
Show your enthusiasm for the steel production industry and how data engineering can optimize it. Research current trends and challenges in this field, and be ready to share your thoughts during the interview to show your genuine interest.
Tips for Your Application 🫡
Understand the Role: Make sure to thoroughly read the job description for the Data Engineer position. Understand the key responsibilities and required skills, such as experience with data pipelines, Python, and SQL databases.
Tailor Your CV: Customize your CV to highlight relevant experience in managing production data pipelines, troubleshooting data issues, and your software engineering skills in Python. Mention any experience with tools like Apache Kafka and Docker.
Craft a Strong Cover Letter: Write a cover letter that connects your background to the specific needs of the role. Emphasize your interest in data analysis and your ability to work collaboratively with diverse teams, as well as your motivation to improve production efficiency.
Highlight Relevant Projects: If you have worked on projects involving data pipelines or stream processing systems, be sure to include these in your application. Describe your role, the technologies used, and the impact of your work on the project outcomes.
How to Prepare for an Interview at Smart Steel Technologies GmbH
✨Showcase Your Technical Skills
Be prepared to discuss your experience with data pipelines, especially in troubleshooting and managing production data. Highlight specific projects where you utilized Python, SQL, or Apache Kafka, and be ready to explain your approach to solving data inconsistencies.
✨Understand the Industry Context
Familiarize yourself with the steel production industry and the specific challenges it faces regarding data management. This will help you demonstrate your ability to navigate complex data flows and show that you understand the impact of your work on production efficiency.
✨Emphasize Collaboration Skills
Since you'll be working closely with a diverse team, share examples of how you've successfully collaborated with others in past roles. Discuss your experience working with backend developers and mathematicians, and how you addressed the data needs of various stakeholders.
✨Prepare for Problem-Solving Questions
Expect questions that assess your problem-solving abilities, particularly related to data monitoring and debugging. Think of scenarios where you had to resolve issues with faulty data and be ready to walk through your thought process and the steps you took to ensure data accuracy.