
Senior Data Engineer

Takeda
Full-time
Remote friendly (Thousand Oaks, CA)
United States
$111,800 - $175,670 USD yearly
IT


Role Summary

Senior Data Engineer responsible for designing and operating enterprise-grade data pipelines that power analytics and AI for manufacturing and quality. Based in Thousand Oaks, CA, reporting to the Head of Data, Digital & Technology (DD&T), and partnering with site teams and global DD&T colleagues. Leads cross-functional initiatives to standardize data ingestion and modeling, ensures data quality and governance, and deploys secure, scalable solutions while mentoring engineers and driving globally aligned practices.

Responsibilities

  • Design and optimize data pipelines (ETL/ELT, batch and streaming) using Databricks and Dataiku, with strong observability and reliability.
  • Develop reusable ingestion and transformation patterns to standardize processes and minimize redundant work across sites.
  • Create and maintain modular data models (e.g., lakehouse/medallion) and semantic layers aligned with global architecture standards.
  • Ensure data quality and compliance by implementing validation controls, governance frameworks, and audit-ready practices for regulated environments.
  • Deploy and operate data services in cloud environments (AWS/Azure) using CI/CD, IaC, and DataOps best practices for performance and scalability.
  • Collaborate with stakeholders—including product owners, data scientists, and global teams—to translate requirements into robust engineering solutions.
  • Document and share technical designs and best practices, contributing to engineering standards and promoting reusability across programs.
  • Provide technical leadership and mentorship, guiding engineers, reviewing designs/code, and influencing global best practices.

Qualifications

  • Education: Bachelor’s or Master’s degree in Computer Science, Statistics, Mathematics, or related field, or equivalent experience in data engineering/digital product development.
  • Technical Skills:
    • In-depth programming experience in Python, Java, or Scala.
    • Experience with data modeling, big data technologies (e.g., Hadoop, Spark), and database systems (SQL/NoSQL).
    • Familiarity with data warehousing and cloud platforms (AWS/Azure).
  • Certifications: AWS data engineering or cloud certification preferred (e.g., AWS Certified Data Engineer – Associate or Solutions Architect – Associate; Databricks Certified Data Engineer Professional).
  • Tools & Frameworks: Hands-on experience with Dataiku and/or Databricks; knowledge of CI/CD and Agile/SAFe practices.
  • Other: Strong analytical/problem-solving skills, proficiency in data visualization, and ability to work in multi-stakeholder, agile environments with a collaborative, professional attitude.

Skills

  • Data pipeline design and optimization
  • Data modeling and semantic layers
  • Data quality and governance
  • Cloud platforms (AWS/Azure) and DataOps
  • CI/CD, IaC, Dataiku, Databricks
  • Collaboration with cross-functional teams
  • Technical leadership and mentorship
