For 20 years, GlobalLogic has been consulting on, designing, and building outstanding digital products and software in the very heart of Silicon Valley. We guarantee you a friendly and inclusive environment with plenty of challenges, where you will learn and grow every day.
August 12, 2022

Senior Data Infrastructure Engineer (IRC160893) (vacancy inactive)

Koszalin (Poland), Kraków (Poland), Szczecin (Poland), Wrocław (Poland)

GlobalLogic is inviting an experienced Senior Data Infrastructure Engineer to join our engineering team.

We are partnering with a company that develops scientifically validated software for performing rapid analyses that generate real-world evidence at scale. Its solutions are used worldwide to derive meaning from real-world data and to produce actionable results for life sciences companies.

You will work closely with our Product and Science teams to develop custom transformation logic for longitudinal data, written in Java, Python, Scala, and/or R and executed over a Spark cluster. You will also play an integral role in developing and enhancing our platform and its connections to Spark and the surrounding big data infrastructure.
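To give a flavor of this kind of work, here is a minimal sketch in Scala of transformation logic for longitudinal data running on Spark. The dataset, column names, and category scheme are illustrative assumptions for this posting, not the client's actual schema or code.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{col, udf}

// Minimal sketch: custom transformation logic over hypothetical
// longitudinal data (one row per patient event), executed on Spark.
object LongitudinalTransformSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("longitudinal-transform-sketch")
      .master("local[*]") // local master for illustration only
      .getOrCreate()
    import spark.implicits._

    // Hypothetical longitudinal dataset.
    val events = Seq(
      ("p1", "2021-01-03", "A10"),
      ("p1", "2021-02-10", "B20"),
      ("p2", "2021-01-15", "A10")
    ).toDF("patient_id", "event_date", "code")

    // Custom transformation expressed as a UDF: map raw event codes
    // into a made-up target category scheme.
    val toCategory = udf { code: String =>
      if (code.startsWith("A")) "category_a" else "category_other"
    }

    events
      .withColumn("category", toCategory(col("code")))
      .groupBy("patient_id", "category")
      .count()
      .show()

    spark.stop()
  }
}
```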

Requirements:

  • Bachelor’s degree or equivalent in Computer Science, Computer Engineering, Information Systems, or a related field.
  • 4+ years of experience (or equivalent) in the position offered or a related position, including 2 years designing, developing, and maintaining large-scale ETL data pipelines in Java/Scala on AWS, Hadoop, Spark, and Databricks, managing Apache Spark infrastructure.
  • Experience with programming languages such as Java, Python, SQL, and Scala.
  • Experience with, or knowledge of, building and optimizing ETL pipelines.
  • Experience building systems with large data sets.
  • Experience with, or working knowledge of, distributed systems.
  • Experience translating requirements from Product and DevOps teams into technology solutions using the SDLC.

Preferences:

  • Scala, Python, Hadoop, Apache Spark, Databricks

Responsibilities:

  • Develop transformation logic to convert disparate datasets into the Client’s proprietary format.
  • Work with the Science team to develop transformations in Spark SQL and UDFs executed over a Spark cluster (see the sketch after this list).
  • Assess, develop, troubleshoot, and enhance our measure system, which utilizes a combination of Java, Scala, and Python.
  • Work on a full-stack rapid-cycle analytic application.
  • Develop highly effective, performant, and scalable components capable of handling large amounts of data for over 100 million patients.
  • Work with the Science and Product teams to understand and assess client needs, and to ensure optimal system efficiency.
  • Take ownership of software development and prototyping through implementation.
  • Build proprietary cloud-based big data analytics for healthcare and improve core back-end and cloud-based data services.
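As one illustration of the Spark SQL and UDF work mentioned above, the sketch below registers a Scala function as a Spark SQL UDF and calls it from a query. The table, columns, and bucketing rule are assumptions for demonstration only, not the client's measure system.

```scala
import org.apache.spark.sql.SparkSession

// Sketch: register a Scala function as a Spark SQL UDF and call it
// by name from SQL. All names here are hypothetical.
object MeasureUdfSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("measure-udf-sketch")
      .master("local[*]") // local master for illustration only
      .getOrCreate()
    import spark.implicits._

    // Hypothetical patient table exposed to Spark SQL as a temp view.
    Seq(("p1", 12), ("p2", 44), ("p3", 71))
      .toDF("patient_id", "age")
      .createOrReplaceTempView("patients")

    // Register the UDF so SQL queries can call it by name.
    spark.udf.register("age_bucket", (age: Int) =>
      if (age < 18) "pediatric" else if (age < 65) "adult" else "senior")

    spark.sql(
      """SELECT age_bucket(age) AS bucket, COUNT(*) AS n
        |FROM patients
        |GROUP BY age_bucket(age)""".stripMargin
    ).show()

    spark.stop()
  }
}
```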

We offer:

  • Interesting and challenging work in a large and dynamically developing company
  • Exciting projects involving the newest technologies
  • Professional development opportunities
  • Excellent compensation and benefits package
