
Scientific Data Platform Engineer

Evotec
Full-time
Remote friendly (Seattle, WA)
United States
IT


Role Summary

The Scientific Data Platform Engineer architects, builds, and maintains scalable data platforms and pipelines that support operational and analytical data needs across scientific and business domains. The position involves collaborating with software engineers and scientists, leading data modeling and governance efforts, and staying current with cloud computing and data engineering best practices. It requires strong intellectual curiosity and the ability to communicate complex concepts to both technical and non-technical stakeholders, along with some on-site presence in Seattle.

Responsibilities

  • Architect, build, and maintain scalable data platforms and pipelines to support operational and analytical data needs across scientific and business domains.
  • Design and implement data orchestration workflows using modern tools and frameworks.
  • Integrate data from diverse scientific and laboratory sources into centralized data systems and platforms.
  • Collaborate with software engineers and scientists to ensure data systems meet performance, reliability, and usability standards.
  • Lead efforts in data modeling, metadata management, and data governance.
  • Stay current with emerging technologies and best practices in cloud computing, data engineering, and biotherapeutic data systems.
  • Communicate technical concepts clearly and effectively to both technical and non-technical stakeholders.
  • Draft technical documentation, architecture diagrams, and operational procedures.
  • Work on-site at our Seattle location at least two days per week.

Qualifications

  • Ph.D. in computer science, data engineering, physical sciences, or a related field with a strong data engineering focus and 4+ years of experience.
  • Proven experience building and maintaining data pipelines and orchestration systems.
  • Proficiency in Python and experience with version control systems and DevOps.
  • Hands-on experience with AWS cloud services, including S3, Lambda, Fargate, and ECS.
  • Familiarity with modern data architectures for operational and analytical systems (e.g., data lakes, lakehouses, event-driven architectures).
  • Strong understanding of relational databases, SQL, and data exchange formats.

Additional Qualifications

  • Experience working with Laboratory Information Management Systems (LIMS) in a commercial setting.
  • Experience in physical science domains (e.g., chemistry, biology).
  • Familiarity with data science toolsets (e.g., scikit-learn, TensorFlow, PyTorch).
  • Experience with scientific data visualization and dashboarding tools.
  • Proficiency with additional programming languages such as Java or R.
  • Excellent communication skills and attention to detail, with strong organizational and time management abilities.