Statistical parity difference evaluation metric
Last updated: Mar 14, 2025
The statistical parity difference metric compares the percentage of favorable outcomes for monitored groups with the percentage of favorable outcomes for reference groups.
Metric details
Statistical parity difference is a fairness evaluation metric that can help determine whether your asset produces biased outcomes.
Scope
The statistical parity difference metric evaluates generative AI assets and machine learning models.
- Types of AI assets:
- Prompt templates
- Machine learning models
- Generative AI tasks: Text classification
- Machine learning problem type: Binary classification
Scores and values
The statistical parity difference score indicates the difference between the rates of favorable outcomes in the monitored and reference groups.
- Range of values: -1.0 to 1.0
- Best possible score: 0.0
- Interpretation (a worked example follows this list):
- Under 0: Higher benefits for the monitored group
- At 0: Both groups have equal benefits
- Over 0: Higher benefits for the reference group
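For example, if 80% of the reference group but only 60% of the monitored group receive the favorable outcome, the score is 0.8 - 0.6 = 0.2, which indicates higher benefits for the reference group.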
Do the math
The following formula is used for calculating statistical parity difference:
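Expressed in terms of favorable-outcome rates, and following the sign convention in the Scores and values section (a positive score means higher benefits for the reference group):

$$\text{statistical parity difference} = \frac{n^{\text{reference}}_{\text{favorable}}}{n^{\text{reference}}} - \frac{n^{\text{monitored}}_{\text{favorable}}}{n^{\text{monitored}}}$$

where $n^{g}_{\text{favorable}}$ is the number of members of group $g$ that receive the favorable outcome and $n^{g}$ is the total number of members of group $g$.

The following Python sketch shows one way to compute the score from binary predictions and a group-membership flag. It is a minimal illustration under the assumptions above, not part of any IBM product API; the function name, parameter names, and the encoding of the favorable outcome as 1 are choices made for this example.

```python
# Minimal sketch of statistical parity difference, assuming binary
# predictions in which 1 encodes the favorable outcome. All names are
# illustrative, not part of any IBM API.
from typing import Sequence


def statistical_parity_difference(
    predictions: Sequence[int], is_monitored: Sequence[bool]
) -> float:
    """Favorable-outcome rate of the reference group minus that of the
    monitored group; a positive score favors the reference group."""
    monitored = [p for p, m in zip(predictions, is_monitored) if m]
    reference = [p for p, m in zip(predictions, is_monitored) if not m]
    return sum(reference) / len(reference) - sum(monitored) / len(monitored)


# 80% favorable outcomes in the reference group versus 60% in the
# monitored group yields a score of about 0.2: the reference group
# receives higher benefits.
predictions = [1, 1, 1, 1, 0, 1, 1, 0, 0, 1]
is_monitored = [False] * 5 + [True] * 5
print(round(statistical_parity_difference(predictions, is_monitored), 2))  # 0.2
```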
Parent topic: Evaluation metrics