Senior DevOps Engineer for Data Engineering team (m/f/x)

Full-time · €56,000 - 84,000 / year (estimated) · No remote work possible
Dynatrace Austria GmbH

At a glance

  • Responsibilities: Join our Data Engineering team to automate and enhance our Data Platform.
  • Employer: Dynatrace is a leading software company focused on perfecting digital experiences.
  • Employee benefits: Enjoy flexible working models, attractive compensation, and a unique career development program.
  • Why this job: Make a tangible impact on product innovation while collaborating with a diverse, international team.
  • Desired qualifications: 3+ years in DevOps or similar roles, with strong cloud and automation skills required.
  • Other information: Relocation support available for candidates moving to new countries.

The expected salary is between €56,000 and €84,000 per year.

Your role at Dynatrace

Data = information = knowledge = power. Do you want to hold the keys to that power? Are you motivated by solving challenging problems, where creativity is as crucial as your ability to write code, deliver solutions, and bring valuable data sets together to answer business questions?

If this sounds like an environment where you will thrive, come and join our Data Engineering team. Interested? Because we are!

About the role

At Dynatrace we are all about automation, self-healing, and a NoOps approach. We preach automation wherever possible, and we live by what we preach.

The Data Engineering team you would join provides the data that drives Dynatrace's world-class Application Intelligence platform.

Why you will love being a Dynatracer

  • Dynatrace is a leader in unified observability and security.
  • We provide a culture of excellence with competitive compensation packages designed to recognize and reward performance.
  • Our employees work with the largest cloud providers, including AWS, Microsoft, and Google Cloud, and other leading partners worldwide to create strategic alliances.
  • The Dynatrace platform uses cutting-edge technologies, including our own Davis hypermodal AI, to help our customers modernize and automate cloud operations, deliver software faster and more securely, and enable flawless digital experiences.
  • Over 50% of the Fortune 100 companies are current customers of Dynatrace.

Responsibilities

As a DevOps Engineer in the Data Engineering team, you will help us automate our Data Platform, both by providing the necessary tooling and by designing processes.

It is quite a unique situation: Dynatrace delivers one of the best tools for DevOps, and in this role you would use your experience to drive that product forward, dogfooding it whenever possible while building a tool for other DevOps engineers as well.

You will build tools to automate installation at scale, accelerating time-to-value and enhancing the reliability of the Data Platform. That includes scripts, but we may also need to integrate with existing mechanisms via APIs or provide means to reconfigure an already deployed product. You will have an impact on how we shape our ETL pipeline and will make sure that deployments of new pipeline builds are automatic, predictable, and transparent. All of this works towards eliminating data downtime and building trust in the data that your fellow Dynatracers, at all levels of seniority, will use to build the product our customers love.

This is an exciting opportunity to make a direct, tangible impact on our product and work on our crucial Digital Business Platform.

As a member of the Data Engineering team, you will be at the center of Dynatrace product innovation.

In a company as agile as Dynatrace, you are always encouraged to explore new areas you find interesting, move into new positions, and build a career with Dynatrace.

We guarantee plenty of challenges and scope to grow.

What will help you succeed

Main responsibilities

  • Creating deployment integrations for cloud platforms, primarily AWS and Azure
  • Automating deployments with Jenkins and Terraform
  • Designing and automating processes for ETL data pipeline(s)
  • Proactively ensuring continuous and smooth execution of data-related processes
  • Collaborating in international cross-lab teams (mostly in the same time zone, across Europe) on the delivery of current objectives
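To give a flavour of the Terraform-based deployment automation this role involves, here is a minimal, hypothetical sketch of an infrastructure-as-code definition. All names (the bucket, tags, and region) are illustrative assumptions for this posting, not Dynatrace's actual configuration:

```hcl
# Hypothetical sketch: provisioning a staging bucket that an ETL
# pipeline stage could read from. Names and region are illustrative.
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region = "eu-central-1"
}

resource "aws_s3_bucket" "etl_staging" {
  bucket = "example-etl-staging-data"

  tags = {
    Team      = "data-engineering"
    ManagedBy = "terraform"
  }
}
```

In day-to-day work, a definition like this would typically be applied from a CI/CD pipeline (for example, a Jenkins job running `terraform plan` and `terraform apply`) rather than by hand, so deployments stay automatic, predictable, and transparent.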

Profile

Priority skills & experience

  • 3+ years of professional experience with process automation, preferably as a DevOps engineer, SRE, or sysadmin
  • 3+ years working with cloud solutions, preferably AWS, on configuration, deployment management, and automation
  • Experience with deployment automation and CI/CD pipelines, preferably using Jenkins
  • Good English communication skills

Desired skills & experience

  • Experience with cloud databases, preferably Snowflake
  • Experience administering database services (PostgreSQL, AWS RDS, Aurora, Snowflake) and practical knowledge of SQL
  • Practical knowledge of IaC tools: CloudFormation, Terraform, and similar
  • A mindset focused on monitoring and observability

Nice-to-haves

  • Experience in CI/CD support for MS Power BI development
  • Experience automating data pipelines (ETL/ELT) and supporting Data Engineering and Data Science team(s)
  • Experience with multiple cloud platforms (AWS, GCP, Azure)
  • Good command of scripting languages: Python, shell script, PowerShell
  • Practical knowledge of configuration-management and IaC tools: Ansible, Chef, Puppet, PowerShell DSC, SaltStack, CloudFormation, Terraform, and similar
  • Java literacy; experience with other programming languages
  • Familiarity with Docker and Kubernetes

What we offer

  • We offer attractive compensation packages and stock purchase options with numerous benefits and advantages.
  • For legal reasons, we are obliged to disclose the minimum salary for this position, which is €56,000 gross per year based on full-time employment. We offer a higher salary in line with qualifications and experience.


Senior DevOps Engineer for Data Engineering team (m/f/x), Employer: Dynatrace Austria GmbH

At Dynatrace, we pride ourselves on being an exceptional employer, offering a dynamic work culture that fosters innovation and creativity. Our Data Engineering team is at the forefront of technology, providing ample opportunities for professional growth and collaboration across international teams. With attractive compensation packages, a supportive relocation program, and a commitment to employee development, we empower our Dynatracers to thrive in their careers while making a tangible impact on our cutting-edge Application Intelligence platform.

Contact person:

Dynatrace Austria GmbH HR Team

StudySmarter application tips 🤫

How to get the job: Senior DevOps Engineer for Data Engineering team (m/f/x)

✨Tip Number 1

Familiarize yourself with the specific tools and technologies mentioned in the job description, such as Jenkins, Terraform, and AWS. Having hands-on experience or projects that showcase your skills with these tools can set you apart from other candidates.

✨Tip Number 2

Highlight any experience you have with automation and CI/CD pipelines. Be prepared to discuss specific challenges you've faced in automating processes and how you overcame them during the interview.

✨Tip Number 3

Since collaboration is key in this role, think of examples where you've successfully worked in cross-functional teams. Being able to communicate your teamwork experiences will demonstrate your fit for the international and collaborative culture at Dynatrace.

✨Tip Number 4

Stay updated on the latest trends in data engineering and cloud solutions. Showing your passion for continuous learning and innovation can resonate well with the company's values and mission.

These skills make you a top candidate for the position: Senior DevOps Engineer for Data Engineering team (m/f/x)

Process Automation
Cloud Solutions (AWS, Azure)
Deployment Automation
CI/CD Pipelines (Jenkins)
ETL Data Pipeline Design
SQL Knowledge
Cloud Database Administration (PostgreSQL, AWS RDS, Aurora, Snowflake)
Infrastructure as Code (IaC) Tools (Terraform, CloudFormation)
Scripting Languages (Python, Shell Script, PowerShell)
Monitoring and Observability Mindset
Collaboration in Cross-Lab Teams
Agile Methodologies
Problem-Solving Skills
Good English Communication Skills

Tips for your application 🫡

Tailor Your CV: Make sure to customize your CV to highlight your experience with process automation, cloud solutions, and CI/CD pipelines. Use keywords from the job description to ensure your application stands out.

Craft a Compelling Cover Letter: In your cover letter, express your passion for automation and data engineering. Mention specific projects or experiences that demonstrate your skills in AWS, Jenkins, and Terraform, and how they align with Dynatrace's goals.

Showcase Relevant Skills: Clearly list your technical skills related to the job, such as scripting languages (Python, Shell), IaC tools (Terraform, CloudFormation), and experience with ETL processes. This will help the hiring team see your fit for the role.

Highlight Collaboration Experience: Since the role involves working in international cross-lab teams, emphasize any previous experience you have in collaborative environments. Discuss how you effectively communicated and worked with diverse teams to achieve common goals.

How to prepare for an interview at Dynatrace Austria GmbH

✨Showcase Your Automation Skills

Since the role emphasizes automation, be prepared to discuss your experience with tools like Jenkins and Terraform. Share specific examples of how you've automated processes in previous roles, particularly in cloud environments.

✨Demonstrate Cloud Expertise

Highlight your experience with cloud platforms, especially AWS and Azure. Be ready to explain how you've managed deployments and configurations in these environments, as well as any challenges you've faced and overcome.

✨Discuss ETL Pipeline Experience

The position involves shaping ETL pipelines, so come prepared to talk about your experience with data pipelines. Discuss any relevant projects where you designed or automated ETL processes, focusing on the impact of your work.

✨Emphasize Collaboration Skills

Collaboration in cross-lab teams is key for this role. Share examples of how you've worked effectively in diverse teams, particularly in international settings. Highlight your communication skills and how they contributed to successful project outcomes.
