The ISACA Advanced in AI Security Management (AAISM) is a specialized certification launched in late 2025. It is the first advanced credential of its kind, designed for experienced security leaders who must bridge traditional security governance with the unique challenges of artificial intelligence.
---------- Question 1
Which metric would be most effective for an AI security manager to monitor the effectiveness of a 'threat and vulnerability management' program for AI systems?
- Number of users with administrative access
- Mean time to detect model poisoning attempts
- Total number of GPU hours utilized
- Percentage of employees who completed basic IT training
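The strongest option is mean time to detect (MTTD), because it measures the program's outcome rather than activity. As an illustration only, here is a minimal sketch of computing MTTD from paired timestamps; the function name and the red-team figures are hypothetical:

```python
from datetime import datetime

def mean_time_to_detect(incidents):
    """Average delay, in hours, between when a poisoning attempt
    started and when monitoring flagged it."""
    deltas = [
        (detected - started).total_seconds() / 3600
        for started, detected in incidents
    ]
    return sum(deltas) / len(deltas)

# Hypothetical red-team exercise results: (attempt started, attempt detected)
incidents = [
    (datetime(2025, 3, 1, 9, 0), datetime(2025, 3, 1, 15, 0)),   # 6 h to detect
    (datetime(2025, 3, 8, 10, 0), datetime(2025, 3, 8, 12, 0)),  # 2 h to detect
]
print(mean_time_to_detect(incidents))  # 4.0
```

Tracking this number over successive exercises shows whether the threat and vulnerability management program is actually improving.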
---------- Question 2
A security manager is collaborating on an AI governance charter. Which of the following best describes a 'Supporting Task' related to establishing AI-specific policies?
- Buying new chairs for the data science office to improve ergonomics.
- Defining clear roles and responsibilities for AI system accountability.
- Writing the raw CSS code for the AI application's login page.
- Selecting the brand of coffee for the AI development team's breakroom.
---------- Question 3
An AI security program is being evaluated for maturity. Which element demonstrates a proactive approach to Stakeholder Considerations in Domain 1?
- Providing quarterly reports on model performance only to the IT team
- Implementing a transparent communication channel for users to report AI bias
- Hard-coding all security settings to prevent any user modifications
- Limiting the AI program documentation to a single, locked physical safe
---------- Question 4
To ensure Business Continuity for an AI-dependent customer service platform, the security manager should prioritize which of the following as part of the disaster recovery strategy?
- Redundant ISP connections for the main corporate headquarters
- Version-controlled backups of the model weights and training pipelines
- Daily physical security audits of the primary data center facility
- Standardizing all AI development on a single cloud service provider
---------- Question 5
A multinational corporation must comply with the EU AI Act while deploying a high-risk AI system for recruitment. What is the most critical first step for the AI security manager to ensure regulatory alignment within the AI security program?
- Conducting a fundamental rights impact assessment
- Increasing the frequency of penetration testing
- Updating the disaster recovery plan for AI servers
- Encrypting all training datasets at rest
---------- Question 6
What is the primary function of 'Explainable AI' (XAI) in the context of security and risk management?
- To provide a user-friendly interface for the model
- To justify the use of expensive AI hardware to the CFO
- To enable human oversight and detect logic flaws
- To automatically patch software vulnerabilities
---------- Question 7
A security manager is collaborating on a charter for AI Governance. What is the primary purpose of defining 'Roles and Responsibilities' in this document?
- To ensure clear accountability for AI risk ownership and to align security tasks with business objectives.
- To determine which employee gets to choose the music played in the office during the team meetings.
- To provide a list of every employee's home address to the human resources department for the annual directory.
- To limit the number of hours that any single person is allowed to use the company's AI tools each week.
---------- Question 8
Which security control is specifically designed to prevent 'Prompt Injection' attacks from manipulating the behavior of an LLM-based application?
- Full-disk encryption on the database server
- Input sanitization and robust guardrail models
- Changing the administrative password every 90 days
- Restricting physical access to the server room
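The correct control is input sanitization combined with guardrail models, because prompt injection attacks the application layer, not the disk or the server room. A minimal sketch of the sanitization side follows; the deny-list patterns and function names are illustrative assumptions, not a production guardrail:

```python
import re

# Illustrative deny-list of phrases commonly seen in prompt-injection attempts.
# A real deployment would pair pattern checks with a dedicated guardrail model.
INJECTION_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"you are now",
    r"reveal your system prompt",
]

def sanitize_prompt(user_input: str) -> str:
    """Reject input matching known injection patterns; otherwise strip
    control characters before the text reaches the LLM."""
    lowered = user_input.lower()
    for pattern in INJECTION_PATTERNS:
        if re.search(pattern, lowered):
            raise ValueError("Possible prompt injection blocked")
    return re.sub(r"[\x00-\x1f]", " ", user_input).strip()

print(sanitize_prompt("What is our refund policy?"))
```

Static patterns alone are easy to evade, which is why the option pairs sanitization with a guardrail model that classifies intent rather than matching strings.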
---------- Question 9
An organization wants to implement ethical AI controls. Which practice best ensures that an AI system's decisions are explainable to a non-technical stakeholder?
- Providing the full source code of the neural network to the stakeholder.
- Using Local Interpretable Model-agnostic Explanations (LIME) to describe model outputs.
- Requiring the stakeholder to take a three-month course on machine learning.
- Publishing the raw mathematical formulas used in the model's activation functions.
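LIME's core idea is to perturb an input and observe how the black-box prediction shifts locally. The sketch below illustrates that idea in miniature using only the standard library; the toy "model", its weights, and the feature names are invented for illustration and are not the LIME library's API:

```python
def toy_credit_model(features):
    """Hypothetical black-box scorer: income helps, debt hurts."""
    income, debt, age = features
    return 0.6 * income - 0.3 * debt + 0.1 * age

def local_feature_influence(model, instance, delta=1.0):
    """LIME-style idea in miniature: bump one feature at a time and
    measure how much the prediction moves near this instance."""
    base = model(instance)
    names = ["income", "debt", "age"]
    influence = {}
    for i, name in enumerate(names):
        bumped = list(instance)
        bumped[i] += delta
        influence[name] = round(model(bumped) - base, 3)
    return influence

print(local_feature_influence(toy_credit_model, [50.0, 20.0, 35.0]))
# {'income': 0.6, 'debt': -0.3, 'age': 0.1}
```

The resulting per-feature influences are exactly the kind of plain-language summary ("income raised the score, debt lowered it") that a non-technical stakeholder can act on, which is why LIME is the best answer.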
---------- Question 10
An AI security program includes monitoring for Model Inversion. What evidence would a security analyst look for in the logs to identify a potential membership inference or inversion attempt?
- A sudden spike in the number of concurrent users accessing the application.
- A high volume of repeated, slightly varied queries aimed at probing the model's decision boundaries.
- An increase in the latency of API responses due to high server CPU utilization.
- Unauthorized login attempts to the model training servers' administrative console.
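The telltale evidence is the stream of near-duplicate queries probing decision boundaries. A minimal log-analysis sketch of that detection follows; the log format, similarity cutoff, and threshold are assumptions chosen for illustration:

```python
from difflib import SequenceMatcher

def flag_probing_clients(log_entries, similarity=0.9, threshold=5):
    """Flag client IDs that send many highly similar consecutive queries --
    the repeated, slightly varied probes typical of inversion attempts."""
    per_client = {}
    for client_id, query in log_entries:
        per_client.setdefault(client_id, []).append(query)
    flagged = []
    for client_id, queries in per_client.items():
        similar_pairs = sum(
            1
            for a, b in zip(queries, queries[1:])
            if SequenceMatcher(None, a, b).ratio() >= similarity
        )
        if similar_pairs >= threshold:
            flagged.append(client_id)
    return flagged

# Hypothetical API logs: one client probing a boundary, one behaving normally.
logs = [("c1", f"score age=41 income={x}") for x in range(40, 47)]
logs += [("c2", "score age=30 income=55"), ("c2", "what is your uptime?")]
print(flag_probing_clients(logs))  # ['c1']
```

Spikes in traffic, latency, or failed logins (the other options) can have many benign causes; only the query-pattern signal is specific to membership inference and inversion.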
Are these questions useful?
Click here to get 540 more questions to pass this certification on the first try! An explanation for each option is included!
Follow the LinkedIn channel below to stay updated on 89+ exams!
