Role Summary
Responsible AI Manager within Amgen's AI & Data Science organization. Leads the design and operationalization of Amgen's Responsible AI governance framework, ensuring AI is developed and used responsibly and in compliance with regulatory and ethical expectations across the enterprise. The role combines strong technical fluency with hands-on process design, translating complex governance requirements into clear, scalable workflows that teams can apply in practice. Reports to the Director of Responsible AI.
Responsibilities
- Own Amgen’s AI systems inventory as a foundational governance asset and single source of truth, providing guidance to support inventory completeness, accuracy, and consistent characterization of AI systems.
- Guide AI systems as they progress through required Responsible AI assessments, ensuring outcomes are captured, routed to the appropriate process flow, and followed through via relevant governance actions and reviews.
- Monitor governance signals across the AI inventory, including assessment status, outcomes, and exceptions, to maintain continuous oversight of Responsible AI compliance.
- Use inventory and governance data to provide clear visibility, traceability, and audit-ready evidence of Responsible AI oversight across the enterprise.
- Design, implement, and operate Responsible AI governance processes across the AI lifecycle by integrating Responsible AI requirements into existing enterprise processes, in close partnership with AI & Data, Legal, Quality, Compliance, Cybersecurity, and Digital Trust teams.
- Translate Responsible AI governance objectives and requirements into clear, intuitive workflows, decision points, and documentation that are easy for teams to understand and apply in practice.
- Partner with cross-functional teams to rethink, test, and refine Responsible AI governance processes.
- Contribute to the development of practical guidance and guardrails related to AI platforms, tools, and use cases.
- Define and monitor governance-relevant metrics and signals across the AI inventory to maintain visibility into Responsible AI compliance and emerging risks.
- Use assessment outcomes, governance signals, and exceptions to support risk-based decision-making and prioritization within Responsible AI governance processes.
Qualifications
- Basic Qualifications (one of the following):
  - Doctorate degree; OR
  - Master’s degree and 2 years of experience in computer science, AI/ML, data science, IT, or a related field; OR
  - Bachelor’s degree and 4 years of experience in computer science, AI/ML, data science, IT, or a related field; OR
  - Associate’s degree and 8 years of experience in computer science, AI/ML, data science, IT, or a related field; OR
  - High school diploma / GED and 10 years of experience in computer science, AI/ML, data science, IT, or a related field.
- Must-Have (one of the following):
  - Bachelor's Degree in Computer Science, AI/ML, Data Science, IT, or a related field, with 5–7+ years of experience in AI, data or technology governance, risk management, or related roles; OR
  - Master’s Degree in Computer Science, AI/ML, Data Science, IT, or a related field, with 3–5+ years of experience in AI, data or technology governance, risk management, or related roles.
- Must-Have: Strong understanding of how modern AI systems and platforms are designed, deployed, and operated in enterprise environments, including lifecycle management, platform integrations, data flows, and operational controls.
- Must-Have: Working knowledge of the software development lifecycle (SDLC), with experience writing SQL queries to analyze AI system or inventory data in support of governance, risk, and compliance activities.
- Must-Have: Knowledge of applicable AI regulatory and ethical frameworks, such as the EU AI Act, the NIST AI Risk Management Framework, and FDA or EMA guidance.
- Must-Have: Strong analytical, organizational, and problem-solving skills with high attention to detail; excellent communication and stakeholder management skills, with the ability to work effectively across functions and seniority levels; team-oriented, with initiative and self-motivation.
- Nice-to-Have: Experience creating and operationalizing AI governance or similar frameworks in a large enterprise environment.
- Nice-to-Have: Experience in pharma, biotech, life sciences, or another regulated industry.
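The SQL fluency called for above (writing queries against AI system or inventory data in support of governance activities) might look like the following minimal sketch. The `ai_inventory` table, its columns, and the sample rows are hypothetical illustrations, not Amgen's actual inventory schema; the query simply counts systems per risk tier that have not yet completed their Responsible AI assessment.

```python
import sqlite3

# Hypothetical inventory schema for illustration only -- no real
# Amgen table structure is implied.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE ai_inventory (
    system_id         TEXT PRIMARY KEY,
    system_name       TEXT,
    risk_tier         TEXT,   -- e.g. 'high', 'limited', 'minimal'
    assessment_status TEXT    -- e.g. 'complete', 'in_progress', 'overdue'
);
INSERT INTO ai_inventory VALUES
    ('sys-001', 'Trial-matching model', 'high',    'complete'),
    ('sys-002', 'Document summarizer',  'limited', 'overdue'),
    ('sys-003', 'Chat assistant',       'limited', 'in_progress'),
    ('sys-004', 'Forecasting model',    'high',    'overdue');
""")

# Governance signal: open (not-yet-complete) assessments per risk tier.
rows = conn.execute("""
    SELECT risk_tier, COUNT(*) AS open_assessments
    FROM ai_inventory
    WHERE assessment_status != 'complete'
    GROUP BY risk_tier
    ORDER BY risk_tier
""").fetchall()

for tier, n in rows:
    print(f"{tier}: {n} open assessment(s)")
# prints:
# high: 1 open assessment(s)
# limited: 2 open assessment(s)
```

A query like this could back the monitoring and metrics responsibilities described earlier, surfacing which risk tiers carry the most outstanding assessments.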