WA3410
AI Security, Compliance, and Explainability Training
In this AI course, attendees master AI auditing processes and learn to make AI transparent through explainability techniques. Students also explore AI's role in various sectors, best practices for system security, and the intricacies of AI design and deployment.
Course Details
Duration
2 days
Prerequisites
- Foundational Knowledge in AI and Machine Learning
- Familiarity with Data Management
- Basic Cybersecurity Concepts
Target Audience
- AI and Machine Learning Practitioners
- IT Regulatory and Compliance Officers
- Cybersecurity Professionals
- Decision Makers and Executives
Skills Gained
- Evaluate the ethical implications of AI systems, recognizing potential biases, discrimination, and privacy risks
- Navigate the complex landscape of AI regulations and compliance requirements across industries, ensuring responsible AI development and deployment
- Implement robust security measures to protect AI systems from cyber threats, adversarial attacks, and data breaches
- Design and deploy secure AI systems, incorporating privacy-preserving techniques and mitigating vulnerabilities
- Apply explainable AI (XAI) techniques to understand and interpret AI model decisions, enhancing transparency and accountability
- Conduct comprehensive AI audits, assessing compliance with ethical guidelines and regulatory standards
- Analyze real-world case studies to learn from ethical and regulatory challenges in AI applications
- Advocate for responsible AI development and deployment, prioritizing fairness, transparency, and accountability in AI systems
- Contribute to developing ethical AI guidelines and policies within your organization or industry
- Collaborate with diverse stakeholders to build trust and ensure the ethical use of AI for the benefit of society
Course Outline
- Ethics and Regulation
- What is an AI System?
- View of AI System
- AI System Classifications
- Branches of AI Today
- AI by the numbers
- AI - the Good
- AI - the Bad
- Principles of AI Ethics
- Fairness
- Accountability
- Transparency
- Explainability
- Privacy and autonomy
- Reliability
- AI Ethics in Practice
- Regulatory Compliance in AI Systems
- What are the benefits of AI regulation?
- What are the disadvantages of regulating AI?
- Regulations and standards in AI
- GDPR and data protection
- AI in healthcare (HIPAA and other relevant laws)
- AI in healthcare examples
- AI in finance and regulatory compliance
- US FINRA AI Deployment
- AI in US finance examples
- AI in global finance examples
- Case studies of AI non-compliance
- Addressing Regulatory and Compliance Concerns
- Dangers of Discrimination and Bias
- Data Security and Data Privacy
- Control and Security Concerns of AI
- Cooperative Corporate Compliance
- Security and Privacy
- What is AI Cybersecurity?
- Threats and challenges in AI security
- Implementing AI in cybersecurity
- Adversarial attacks
- Model inversion and extraction
- Data poisoning
- Best practices for securing AI systems
- Robustness techniques
- Differential privacy
- Federated learning
- Homomorphic encryption
- Secure AI Design and Deployment
- Secure Software Development
- Connectivity
- Exploitation of AI Systems (Jailbreaks)
- Infrastructure Concerns
- System Vulnerabilities
- Data Privacy
- Data Leaks via Generating Text
- Azure OpenAI
- Adversarial Attacks
- Malicious Use of AI
- Bias and Discrimination
- Regulatory and Ethical Considerations
- Security and Privacy in Chatbots
- Ensuring Security and Privacy
- Data Protection
- Enforcing Data Protection
- Anonymization Techniques
- Best Practices for Security with Generative AI
- Sources of Bias in AI
- Tackling AI Bias
- Real-world Case Studies
- Autonomous Vehicles and the Trolley Problem
- AI in Warfare and Weaponization
- AI in Criminal Justice
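Among the privacy-preserving techniques named in this module, differential privacy is compact enough to illustrate. Below is a minimal, framework-free sketch of the Laplace mechanism applied to a counting query; the function names are illustrative only and do not come from any particular library or from the course materials:

```python
import math
import random

def laplace_noise(scale: float) -> float:
    # Sample Laplace(0, scale) noise via inverse-CDF sampling.
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_count(records, epsilon: float) -> float:
    # A counting query has sensitivity 1, so epsilon-differential
    # privacy calls for Laplace noise with scale 1 / epsilon.
    # Smaller epsilon = more noise = stronger privacy.
    return len(records) + laplace_noise(1.0 / epsilon)
```

The key design point is the privacy/utility trade-off visible in the `epsilon` parameter: a tight privacy budget injects enough noise to blur individual contributions, at the cost of a less accurate count.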
- AI Auditing and Certification
- Introduction
- Organizational Roles in AI Ethics and Compliance
- Implementing AI Ethics Guidelines and Checklists
- Key Components of an AI Audit
- Steps in the AI Auditing Process
- Post-Deployment Monitoring and Feedback Loops
- Reporting and Recommendations
- AI Certification Process
- Explainable AI (XAI)
- Introduction to Machine Learning Interpretability
- Importance of ML interpretability
- Different types of ML interpretability models
- Model-agnostic interpretability methods
- Model-specific interpretability methods
- Limitations of model-specific interpretability
- Limitations of model-agnostic interpretability
- Global vs. Local interpretability
- Interpretability in Deep Learning
- Techniques and Methods for Explainability
- Layer-wise relevance propagation (LRP)
- Sensitivity analysis
- Gradient-weighted class activation mapping (Grad-CAM)
- Evaluating Interpretability
- Techniques for evaluating interpretability
- Overview of existing evaluation frameworks
- Model-Agnostic Visual Analytics (MAVA)
- Human-AI Collaborated Evaluation (HACE)
- Interpretability in Large Language Models
- Interpretability in Generative LLMs
- Common evaluation metrics for generative AI models
- Common evaluation metrics - Diversity metrics
- Common evaluation metrics - Likelihood
- Common evaluation metrics - Perplexity
- Common evaluation metrics - Inception Score
- Common evaluation metrics - FID
- Common evaluation metrics - BLEU
- Common evaluation metrics - ROUGE
- Common evaluation metrics - Human evaluation
- Techniques for Interpreting Large Language Models
- Importance of XAI in various sectors
- XAI in Healthcare: Enhancing Care and Transparency
- XAI in Finance: Driving Decisions and Building Trust
- XAI in Legal Systems: Fairness and Accountability
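Of the evaluation metrics covered in this module, perplexity is simple enough to show in a few lines: it is the exponential of the average negative log-likelihood the model assigns to the observed tokens. A toy computation, not tied to any particular framework or to the course labs:

```python
import math

def perplexity(token_probs):
    # token_probs: the probability the model assigned to each
    # observed token. Perplexity = exp(mean negative log-likelihood);
    # lower is better, and 1.0 means the model was certain every time.
    nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(nll)
```

For example, a model that assigns probability 0.25 to every token has a perplexity of 4, as if it were choosing uniformly among four options at each step.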
- Lab Exercises
- Lab 1. AI Ethics and Regulation
- Lab 2. Understanding security and privacy
- Lab 3. Learning the Colab Jupyter Notebook Environment
- Lab 4. Guardrails with template manual
- Lab 5. Guardrails with system prompt
- Lab 6. Optional - Implementing NeMo Guardrails for LLM Response Restriction
- Lab 8. AstraZeneca Ethics-Based AI Audit Framework Design
- Lab 9. Designing a Gender Bias Test for a Large Language Model (LLM)
- Lab 10. Exploring Machine Learning Interpretability (MLI) with H2O's Driverless AI
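As a flavor of the kind of bias test designed in Lab 9, one common approach is a template probe: score minimally different prompts that vary only in a gendered term and compare the results. A toy sketch, where `score_fn` and the template are placeholders standing in for the model under test, not part of the lab materials:

```python
def bias_gap(score_fn, template, terms=("he", "she")):
    # score_fn: any callable mapping a prompt string to a number,
    # e.g. a sentiment score or next-token probability from the
    # model under test. A gap far from zero flags potential bias
    # between the two terms for this template.
    first, second = (score_fn(template.format(t)) for t in terms)
    return first - second
```

In practice, such probes are run over many templates and term pairs, and the distribution of gaps (rather than any single value) is what gets reported.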