The process for evaluating risks when using or developing AI is similar to any other risk management process.
First, define which groups of people or organizations (called "interested parties") could be affected by your company's actions. For example, both your employees and your customers may be affected, but each group faces different risks with different impacts.
The main parts of risk evaluation include:
- Risk category (e.g., personal data breach)
- Risk description (e.g., personal data entered into an AI platform could leak to a third party)
Note: a single risk category can contain many distinct risks, broken down into subcategories or separate descriptions.
Each risk has a likelihood (occurrence) and an impact. The final RPN (Risk Priority Number) is the product of occurrence and impact.
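The calculation can be sketched as follows. The 1-5 rating scales and the example risk are illustrative assumptions, not part of any particular standard:

```python
def rpn(occurrence: int, impact: int) -> int:
    """Risk Priority Number: occurrence multiplied by impact (1-5 scales assumed)."""
    if not (1 <= occurrence <= 5 and 1 <= impact <= 5):
        raise ValueError("occurrence and impact must be on a 1-5 scale")
    return occurrence * impact

# Example: a personal-data-leak risk rated occurrence=2, impact=5
print(rpn(2, 5))  # 10
```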
Every risk assessment should include a risk matrix. A risk matrix helps decide how to handle different levels of risk. It should have at least three risk levels:
- Low risk: Usually acceptable and doesn't require immediate action.
- Medium risk: Requires an action plan and investigation, but the timeline can be longer.
- High risk: Needs immediate action.
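The three levels above can be expressed as a simple mapping from RPN to risk level. The thresholds (below 6, 6-14, 15 and above) are illustrative assumptions that your own policy would define:

```python
def risk_level(rpn: int) -> str:
    """Map an RPN to one of three risk levels (thresholds are illustrative)."""
    if rpn >= 15:
        return "high"    # needs immediate action
    if rpn >= 6:
        return "medium"  # action plan required, longer timeline allowed
    return "low"         # usually acceptable, no immediate action

print(risk_level(2 * 5))  # medium
```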
If you have limited resources, you can expand the matrix to five risk levels and set a different response time for each level. Limit the highest-priority risks to 3-4; too many makes it difficult to prioritize actions.
It's good practice to maintain clear traceability in risk management: track the RPN before and after mitigation steps, and keep a separate risk evaluation table for each period so you can review historical data rather than overwriting it in a single risk assessment.
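One way to keep that traceability is to store one record per risk per period, each with pre- and post-mitigation RPN values. This is a minimal sketch using a plain dataclass; the periods, category, and RPN figures are invented for illustration, not taken from any real assessment:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class RiskRecord:
    period: str        # e.g. "2024-Q1" (illustrative)
    category: str      # risk category, e.g. "personal data breach"
    rpn_before: int    # RPN before mitigation steps
    rpn_after: int     # RPN after mitigation steps

# Keep a growing history instead of overwriting a single table.
history = [
    RiskRecord("2024-Q1", "personal data breach", 15, 15),
    RiskRecord("2024-Q2", "personal data breach", 15, 6),
]

# Traceability: compare RPN before and after mitigation across periods.
for record in history:
    print(record.period, record.rpn_before, "->", record.rpn_after)
```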