AgileEngine is a privately held company established in 2010 and headquartered in the Washington, DC area. We rank among the fastest-growing US companies on the Inc. 5000 list and the top 3 software developers in DC on Clutch.
August 22, 2025

Data Engineer (Senior) ID40199 (vacancy inactive)

Lviv, Buenos Aires (Argentina), Kraków (Poland), São Paulo (Brazil), remote

Hi there! AgileEngine is an Inc. 5000 company that creates award-winning software for Fortune 500 brands and trailblazing startups across 17+ industries. We rank among the leaders in areas like application development and AI/ML, and our people-first culture has earned us multiple Best Place to Work awards.

Why join us
If you’re looking for a place to grow, make an impact, and work with people who care, we’d love to meet you! :)

About the role
We are looking for a Senior Data Engineer to take ownership of our data infrastructure, designing and optimizing high-performance, scalable solutions. You’ll work with AWS and big data frameworks like Hadoop and Spark to drive impactful data initiatives across the company.

What you will do
● Design, build, and maintain large-scale data pipelines and data processing systems in AWS;
● Develop and optimize distributed data workflows using Hadoop, Spark, and related technologies;
● Collaborate with data scientists, analysts, and product teams to deliver reliable and efficient data solutions;
● Implement best practices for data governance, security, and compliance;
● Monitor, troubleshoot, and improve the performance of data systems and pipelines;
● Mentor junior engineers and contribute to building a culture of technical excellence;
● Evaluate and recommend new tools, frameworks, and approaches for data engineering.

Must haves
● Bachelor’s or Master’s degree in Computer Science, Engineering, or related field;
● 5+ years of experience in data engineering, software engineering, or related roles;
● Strong hands-on expertise with AWS services (S3, EMR, Glue, Lambda, Redshift, etc.);
● Deep knowledge of big data ecosystems, including Hadoop (HDFS, Hive, MapReduce) and Apache Spark (PySpark, Spark SQL, streaming);
● Strong SQL skills and experience with relational and NoSQL databases;
● Proficiency in Python, Java, or Scala for data processing and automation;
● Experience with workflow orchestration tools (Airflow, Step Functions, etc.);
● Solid understanding of data modeling, ETL/ELT processes, and data warehousing concepts;
● Excellent problem-solving skills and ability to work in fast-paced environments;
● Ability to work in the German time zone;
● Upper-Intermediate English level.

Nice to haves
● Experience with real-time data streaming platforms (Kafka, Kinesis, Flink);
● Knowledge of containerization and orchestration (Docker, Kubernetes);
● Familiarity with data governance, lineage, and catalog tools;
● Previous leadership or mentoring experience.

Perks and benefits
Professional growth: Accelerate your professional journey with mentorship, TechTalks, and personalized growth roadmaps
Competitive compensation: We match your ever-growing skills, talent, and contributions with competitive USD-based compensation and budgets for education, fitness, and team activities
A selection of exciting projects: Join projects involving modern solution development for top-tier clients, including Fortune 500 enterprises and leading product brands
Flextime: Tailor your schedule for an optimal work-life balance, with the option of working from home or at the office, whatever makes you the happiest and most productive

Meet Our Recruitment Process
Asynchronous stage — An automated, self-paced track that helps us move faster and give you quicker feedback:
● Short online form to confirm basic requirements
● 30–60 minute skills assessment
● 5-minute introduction video

Synchronous stage — Live interviews
● Technical interview with our engineering team (scheduled at your convenience)
● Final interview with your future teammates

If it’s a match, you’ll get an offer!