Job Title: Assistant Manager – AI Security & Governance (AI security, risk & control metrics management)
Practice: Cyber Strategy & Transformation
Location: Gurugram, India
Function: Deloitte Technology & Transformation
Work You’ll Do:
- AI Security Risk Assessments: Conduct comprehensive security and governance assessments of AI systems, ML models, and data pipelines to identify risks related to model integrity, bias, explainability, data leakage, and adversarial attacks.
- AI Governance Framework Development: Contribute to the design and implementation of AI governance frameworks aligned with leading standards (ISO/IEC 42001, NIST AI Risk Management Framework, EU AI Act, OECD AI Principles).
- Security & Compliance Evaluations: Evaluate the design and effectiveness of controls across AI lifecycle stages — data ingestion, model training, deployment, and monitoring — in alignment with ISMS, NIST CSF, and emerging AI-specific standards.
- Policy & Control Design: Support the development of organizational AI security policies, Responsible AI guidelines, and model risk management procedures that integrate cybersecurity and data ethics principles.
- Technical AI Risk Testing: Perform security configuration reviews, vulnerability assessments, and testing of AI platforms, APIs, and ML pipelines to identify gaps in data protection, access control, and auditability.
- Client Engagement: Assist in client discussions, workshops, and reporting activities; clearly articulate findings and recommendations in both technical and business language.
- Collaboration: Work closely with cross-functional teams (Data Science, Risk, Compliance, and IT) to embed security-by-design principles within AI initiatives.
Skills Required:
- AI & ML Understanding: Working knowledge of AI/ML system architectures, model lifecycle, and associated security challenges (e.g., model poisoning, data drift, privacy leakage).
- Cybersecurity Fundamentals: Strong foundation in information security principles, including IAM, data protection, encryption, network security, and secure coding practices.
- Frameworks & Standards: Exposure to ISO/IEC 42001, NIST AI RMF, ISO 27001, NIST CSF, and Responsible AI practices.
- Risk Management: Experience in identifying and assessing AI and cybersecurity risks, and developing mitigation strategies.
- Communication: Excellent analytical, documentation, and presentation skills.
- Teamwork: Ability to work collaboratively in a fast-paced consulting environment.
Qualifications & Experience:
- Education: Bachelor’s degree in Computer Science, Information Security, Engineering, or related field.
- Experience: 3–6 years of relevant experience in Cybersecurity or Risk Consulting, with exposure to AI/ML systems preferred.
- Certifications (Preferred):
  - ISO/IEC 42001 Lead Implementer or Auditor
  - NIST AI RMF Practitioner (or equivalent training)
  - ISO 27001 Lead Auditor/Implementer
  - General cybersecurity certifications such as CCSP, CISM, or CISSP (beneficial)
Preferred Skills:
- Familiarity with secure AI system design and model validation processes
- Understanding of AI model transparency, fairness, and ethics considerations
- Exposure to the architecture of cloud-native AI platforms (Amazon SageMaker, Azure AI, Google Vertex AI)