Fairness module in dalex

We live in a world that is getting more divided each day. In some parts of the world, differences and inequalities between races, ethnicities, and sometimes sexes are growing. The data we use for modeling is, for the most part, a reflection of the world it comes from, and the world can be biased, so the data, and therefore the model, will likely reflect that bias. A good introduction to this topic is the book Fairness and machine learning.

We propose a way in which ML engineers can easily check whether their model is biased.

The fairness module is still a work in progress and new features will be added over time.

In [1]:
import dalex as dx
import numpy as np

import plotly
plotly.offline.init_notebook_mode()
In [2]:
dx.__version__
Out[2]:
'1.7.0'

Case study - German Credit data

To showcase the abilities of the module, we will use the German Credit dataset and assign a risk level to each credit applicant.

For this simple task, an interpretable decision tree classifier should suffice.

In [3]:
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder
from sklearn.tree import DecisionTreeClassifier

# credit data
data = dx.datasets.load_german()

# risk is the target
X = data.drop(columns='risk')
y = data.risk


categorical_features = ['sex', 'job', 'housing', 'saving_accounts', "checking_account", 'purpose']
numerical_features = ['age', 'duration', 'credit_amount']
categorical_transformer = Pipeline(steps=[
    ('onehot', OneHotEncoder(handle_unknown='ignore'))
])

preprocessor = ColumnTransformer(transformers=[
        ('cat', categorical_transformer, categorical_features),
        ('num', 'passthrough', numerical_features)
])

clf = Pipeline(steps=[
    ('preprocessor', preprocessor),
    ('classifier', DecisionTreeClassifier(max_depth=7, random_state=123))
])

clf.fit(X, y)
Out[3]:
Pipeline(steps=[('preprocessor',
                 ColumnTransformer(transformers=[('cat',
                                                  Pipeline(steps=[('onehot',
                                                                   OneHotEncoder(handle_unknown='ignore'))]),
                                                  ['sex', 'job', 'housing',
                                                   'saving_accounts',
                                                   'checking_account',
                                                   'purpose']),
                                                 ('num', 'passthrough',
                                                  ['age', 'duration',
                                                   'credit_amount'])])),
                ('classifier',
                 DecisionTreeClassifier(max_depth=7, random_state=123))])

We create an Explainer object to access dalex functionalities.

In [4]:
exp = dx.Explainer(clf, X, y)
Preparation of a new explainer is initiated

  -> data              : 1000 rows 9 cols
  -> target variable   : Parameter 'y' was a pandas.Series. Converted to a numpy.ndarray.
  -> target variable   : 1000 values
  -> model_class       : sklearn.tree._classes.DecisionTreeClassifier (default)
  -> label             : Not specified, model's class short name will be used. (default)
  -> predict function  : <function yhat_proba_default at 0x1303344a0> will be used (default)
  -> predict function  : Accepts only pandas.DataFrame, numpy.ndarray causes problems.
  -> predicted values  : min = 0.0, mean = 0.7, max = 1.0
  -> model type        : classification will be used (default)
  -> residual function : difference between y and yhat (default)
  -> residuals         : min = -0.944, mean = 0.0, max = 0.9
  -> model_info        : package sklearn

A new explainer has been created!
In [5]:
exp.model_performance().result
Out[5]:
                        recall  precision     f1  accuracy       auc
DecisionTreeClassifier    0.96       0.84  0.896     0.844  0.914793

Let's say this performance is satisfactory. To check whether the model is biased, we will use the fairness module of dalex. Checking if the model is fair is straightforward. Apart from the dx.Explainer, we will need two parameters:

  • protected - an array-like with subgroup values that denote a sensitive attribute (protected variable) such as sex, nationality, etc. The fairness metrics will be calculated for each of these subgroups and compared.
  • privileged - a string representing one of the subgroups. It should be the subgroup suspected of having the most privilege.
In [6]:
# array with values like male_old, female_young, etc.
protected = data.sex + '_' + np.where(data.age < 25, 'young', 'old')

privileged = 'male_old'

Now it is time to check fairness!

We use a unified dalex interface to create a fairness explanation object. Use the model_fairness() method:

In [7]:
fobject = exp.model_fairness(protected = protected, privileged=privileged)

The idea here is that the ratios between the metric scores of the unprivileged subgroups and the privileged subgroup should be close to 1; the closer they are, the fairer the model. To relax this criterion a bit, it can be written more formally as:

$$\forall_{i \in \{a, b, ..., z\}} \quad \epsilon < \frac{metric_i}{metric_{privileged}} < \frac{1}{\epsilon}$$

Here i ranges over the unprivileged subgroups, and epsilon is a value between 0 and 1 denoting the minimum acceptable ratio. By default it is 0.8, which adheres to the four-fifths rule (80% rule) often applied in hiring, for example.

In [8]:
fobject.fairness_check(epsilon = 0.8) # default epsilon
No bias was detected!

Conclusion: your model is fair in terms of checked fairness criteria.

Ratios of metrics, based on 'male_old'. Parameter 'epsilon' was set to 0.8 and therefore metrics should be within (0.8, 1.25)
                   TPR       ACC       PPV       FPR       STP
female_old    1.018828  1.000000  0.971963  0.921525  0.964677
female_young  0.981172  0.938824  0.891355  0.872197  0.855055
male_young    1.019874  0.977647  0.929907  0.896861  0.918392

So can this model be called fair? Generally, each metric ratio should fall within the (epsilon, 1/epsilon) range. Metrics are calculated for each subgroup and then divided by the score of the privileged subgroup, which is why male_old is omitted in this output. When at least two metric ratios fall outside the acceptable range, the model is declared unfair; when exactly one does, the decision cannot be made automatically. In our case no ratio exceeds the limits, but the FPR (False Positive Rate) and STP ratios for female_young come close to the lower bound, and FPR is pretty important when assigning risk, so the model still deserves a closer look.
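If a stricter criterion is desired, a larger epsilon can be passed. As a quick illustration, with epsilon = 0.9 the acceptable range narrows to roughly (0.9, 1.11), so several of the ratios above would no longer pass:

# stricter check: ratios must now fall within (0.9, ~1.11)
fobject.fairness_check(epsilon = 0.9)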

Useful attributes

The result attribute contains the metric_scores values with each row divided by the row of the privileged subgroup (in this case male_old).

In [9]:
# to see all scaled metric values you can run
fobject.result
Out[9]:
                   TPR       TNR       PPV       NPV       FNR       FPR       FDR       FOR       ACC       STP
female_old    1.018828  1.063177  0.971963  1.114495  0.590909  0.921525  1.166667  0.474860  1.000000  0.964677
female_young  0.981172  1.102888  0.891355  1.071864  1.409091  0.872197  1.645833  0.670391  0.938824  0.855055
male_old      1.000000  1.000000  1.000000  1.000000  1.000000  1.000000  1.000000  1.000000  1.000000  1.000000
male_young    1.019874  1.083032  0.929907  1.142509  0.568182  0.896861  1.416667  0.346369  0.977647  0.918392
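With the result attribute at hand, the decision rule used in fairness_check can be sketched by hand. This is only an illustration of the logic described earlier, not dalex's internal implementation:

# flag a metric when any subgroup's ratio falls outside (epsilon, 1/epsilon),
# then count flagged metrics: 0 -> fair, 1 -> undecided, 2 or more -> unfair
epsilon = 0.8
ratios = fobject.result.drop(index='male_old')[['TPR', 'ACC', 'PPV', 'FPR', 'STP']]
flagged = ((ratios <= epsilon) | (ratios >= 1/epsilon)).any(axis=0)
flagged.sum()  # 0 here, hence "No bias was detected!"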
In [10]:
# or unscaled ones via
fobject.metric_scores
Out[10]:
                TPR    TNR    PPV    NPV    FNR    FPR    FDR    FOR    ACC    STP
female_old    0.974  0.589  0.832  0.915  0.026  0.411  0.168  0.085  0.850  0.792
female_young  0.938  0.611  0.763  0.880  0.062  0.389  0.237  0.120  0.798  0.702
male_old      0.956  0.554  0.856  0.821  0.044  0.446  0.144  0.179  0.850  0.821
male_young    0.975  0.600  0.796  0.938  0.025  0.400  0.204  0.062  0.831  0.754
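To confirm the relationship described above, the scaled result table can be reproduced by dividing each row of metric_scores by the privileged subgroup's row (assuming metric_scores is a pandas.DataFrame, as displayed above):

# dividing every row by the male_old row reproduces fobject.result
fobject.metric_scores / fobject.metric_scores.loc['male_old']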

Bias Detection Plots

The fairness explanation object includes plots that allow bias visualization from different perspectives:

  1. fairness_check plot

  2. metric_scores plot

Fairness Check plot

This is a visualization of the fairness_check result.

In [11]:
fobject.plot()
Found NaN's or 0's for models: {'DecisionTreeClassifier'}
It is advisable to check 'metric_ratios'

If a bar reaches the red field, it means that the model exceeds the (epsilon, 1/epsilon) range for that metric. In this case the DecisionTreeClassifier has one NaN, so an appropriate message is printed (it can be disabled with verbose=False).

One can also plot the metric scores.

Metric Scores plot

This is a visualization of the metric_scores attribute.

In [12]:
fobject.plot(type = 'metric_scores')

Vertical lines showcase the score of the privileged subgroup. Points closer to the line indicate less bias in the model.

The two plot types are complementary in terms of the metrics. The Metric Scores plot is an excellent way to make sure that the interpretation of the Fairness Check plot is on point, because small raw metric values can produce extreme ratios (for example, raw rates of 0.02 and 0.04 differ by only 0.02 yet give a ratio of 2).

Multiple models

Supporting visualization of multiple models is a key functionality. One can train several models and compare them in terms of fairness metrics.

In [13]:
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
In [14]:
# create models
numeric_features = ['credit_amount', 'duration', 'age']

numeric_transformer = Pipeline(steps=[
    ('scaler', StandardScaler())])

categorical_transformer = Pipeline(steps=[
    ('onehot', OneHotEncoder(handle_unknown='ignore'))])


preprocessor = ColumnTransformer(
    transformers=[
        ('cat', categorical_transformer, categorical_features),
        ('num', numeric_transformer, numeric_features)])

clf_forest = Pipeline(steps=[('preprocessor', preprocessor),
                      ('classifier', RandomForestClassifier(random_state=123, max_depth=5))]).fit(X,y)

clf_logreg = Pipeline(steps=[('preprocessor', preprocessor),
                      ('classifier', LogisticRegression(random_state=123))]).fit(X,y)
In [15]:
# create Explainer objects 
exp_forest  = dx.Explainer(clf_forest, X,y, verbose = False)
exp_logreg  = dx.Explainer(clf_logreg, X,y, verbose = False)

# create fairness explanations
fobject_forest = exp_forest.model_fairness(protected, privileged)
fobject_logreg = exp_logreg.model_fairness(protected, privileged)
In [16]:
fobject.plot(objects=[fobject_forest, fobject_logreg])
Found NaN's or 0's for models: {'DecisionTreeClassifier'}
It is advisable to check 'metric_ratios'

When some bars are missing from the plot, a console message advises checking the metric ratios. We can examine why these bars are missing:

In [17]:
fobject.plot(objects=[fobject_forest, fobject_logreg], type = "metric_scores")

Parity loss plots

Parity loss plots are another kind of plot, based on the parity_loss attribute. But what is parity_loss?

In [18]:
# attribute of object
fobject.parity_loss
Out[18]:
TPR    0.057341
TNR    0.238959
PPV    0.216121
NPV    0.311027
FNR    1.434352
FPR    0.327320
FDR    1.000704
FOR    2.204880
ACC    0.085734
STP    0.277682
dtype: float64

Apart from being an attribute, parity loss is a way of summarizing bias across subgroups. We needed a function that is symmetric with respect to division (f(a/b) = f(b/a)), which is why we use the following formula (summing over the unprivileged subgroups):

$$metric_{parity\_loss} = \sum_{i \in \{a, b, ..., z\}} \left| \ln\left( \frac{metric_i}{metric_{privileged}} \right) \right|$$

The intuition behind this formula is simple: the bigger the differences in a metric across subgroups, the higher its parity loss. It is a good entry point for visualization because we get a single value for each metric.
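As a sanity check, the TPR parity loss can be recomputed by hand from the unscaled metric scores using the formula above (an illustration, not dalex's internal code):

import numpy as np

# sum of |ln(ratio)| over the unprivileged subgroups for the TPR metric
tpr = fobject.metric_scores['TPR']
ratios = tpr.drop('male_old') / tpr['male_old']
np.abs(np.log(ratios)).sum()  # ~0.0573, matching fobject.parity_loss['TPR']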

With this knowledge, we are now ready for some plots! Note that in all of the following plots the set of metrics can be changed; just pass metrics = ["TPR", "FPR", ... ].

Radar plot

A fairly simple radar plot: it shows the parity loss of each metric as a point in a polar coordinate system.

In [19]:
fobject.plot(objects=[fobject_forest, fobject_logreg], type = "radar")

Judging by the size of the covered area, the LogisticRegression classifier is the most biased.
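As mentioned earlier, the set of metrics can be narrowed down; for example, the comparison can be restricted to the metrics most relevant for risk assignment:

# restrict the radar plot to a chosen subset of parity-loss metrics
fobject.plot(objects=[fobject_forest, fobject_logreg],
             type = "radar",
             metrics = ["TPR", "FPR", "STP"])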

Heatmap

A heatmap is another simple way to inspect parity loss.

In [20]:
fobject.plot(objects=[fobject_forest, fobject_logreg], type = "heatmap")
Found NaNs in following models: {'RandomForestClassifier'}

When NaNs are present, the corresponding fields are left without color.

Stacked

The stacked plot is a way to look at cumulative parity loss. The bars for each metric are stacked on top of each other, and their widths depict the amount of bias; the smaller, the better.

In [21]:
fobject.plot(objects=[fobject_forest, fobject_logreg], type = "stacked")

Performance and Fairness

Sometimes it is a good idea to look at performance and fairness metrics together, and this is where this plot comes in handy. It does not have a metrics parameter; instead it takes fairness_metric and performance_metric parameters, whose defaults are TPR and accuracy, respectively.

In [22]:
fobject.plot(objects=[fobject_forest, fobject_logreg], type = "performance_and_fairness")

Note that the y axis is reversed; this way, the best models end up in the top right corner.
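The fairness_metric and performance_metric parameters mentioned above can also be set explicitly; for example, FPR can be used as the fairness metric while keeping the default performance metric:

# compare models using FPR as the fairness metric
# (the performance metric stays at its default, accuracy)
fobject.plot(objects=[fobject_forest, fobject_logreg],
             type = "performance_and_fairness",
             fairness_metric = "FPR")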

Ceteris Paribus Cutoff

Last but not least is the Ceteris Paribus Cutoff plot. It shows what would happen if the cutoff were changed for only one subgroup, and it marks where the minimum of the summed parity loss of the metrics lies.

In [23]:
fobject.plot(objects=[fobject_logreg], # for better visibility only one additional model
             type = "ceteris_paribus_cutoff", 
             subgroup="female_young") # necessary argument

To achieve minimal parity loss, the cutoff for female_young should be set to the value indicated by the minimum. Please note that manipulating the cutoff in this way can itself be considered unfair, because we would be artificially lowering the standard for a particular subgroup.
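One could act on this insight by recomputing the fairness explanation with a subgroup-specific cutoff. The sketch below assumes that model_fairness accepts a dict-valued cutoff parameter (mapping subgroup names to thresholds); the value 0.4 is purely illustrative and not read off the plot:

# hypothetical follow-up: custom cutoff for the female_young subgroup only
fobject_cutoff = exp.model_fairness(protected, privileged,
                                    cutoff = {'female_young': 0.4})
fobject_cutoff.fairness_check()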

Mitigation

There are a few possible ways to overcome bias in classification models. In dalex, there are 3 mitigation techniques:

  1. resample - returns indices that may be used to pick relevant samples of data

  2. reweight - returns sample (case) weights for model training

  3. roc-pivot - returns the Explainer with changed y_hat

In [24]:
from dalex.fairness import resample, reweight, roc_pivot
from copy import copy

First, let's prepare the models by making copies of the classifier pipeline.

In [25]:
# copying
clf_u = copy(clf)
clf_p = copy(clf)

Resampling

In [26]:
# resample
indices_uniform = resample(protected, y, verbose = False)
indices_preferential = resample(protected,
                                y, 
                                type = 'preferential', # different type 
                                probs = exp.y_hat, # requires probabilities
                                verbose = False)


clf_u.fit(X.iloc[indices_uniform, :], y[indices_uniform])
clf_p.fit(X.iloc[indices_preferential, :], y[indices_preferential])
Out[26]:
Pipeline(steps=[('preprocessor',
                 ColumnTransformer(transformers=[('cat',
                                                  Pipeline(steps=[('onehot',
                                                                   OneHotEncoder(handle_unknown='ignore'))]),
                                                  ['sex', 'job', 'housing',
                                                   'saving_accounts',
                                                   'checking_account',
                                                   'purpose']),
                                                 ('num', 'passthrough',
                                                  ['age', 'duration',
                                                   'credit_amount'])])),
                ('classifier',
                 DecisionTreeClassifier(max_depth=7, random_state=123))])

Reweight

In [27]:
weights = reweight(protected, y, verbose = False)

clf_weighted = Pipeline(steps=[('preprocessor', preprocessor),
                      ('classifier', DecisionTreeClassifier(random_state=123))])

kwargs = {clf_weighted.steps[-1][0] + '__sample_weight': weights}

clf_weighted.fit(X,y, **kwargs)
Out[27]:
Pipeline(steps=[('preprocessor',
                 ColumnTransformer(transformers=[('cat',
                                                  Pipeline(steps=[('onehot',
                                                                   OneHotEncoder(handle_unknown='ignore'))]),
                                                  ['sex', 'job', 'housing',
                                                   'saving_accounts',
                                                   'checking_account',
                                                   'purpose']),
                                                 ('num',
                                                  Pipeline(steps=[('scaler',
                                                                   StandardScaler())]),
                                                  ['credit_amount', 'duration',
                                                   'age'])])),
                ('classifier', DecisionTreeClassifier(random_state=123))])

ROC pivot

In [28]:
exp5 = copy(exp)

# roc pivot
exp5 = roc_pivot(exp5, protected, privileged, theta = 0.02, verbose = False)

This function returns an Explainer directly. For the other models created above we still need to create Explainers ourselves; then we make the fairness objects.

In [29]:
exp2 = dx.Explainer(clf_weighted, X, y, verbose = False)
exp3 = dx.Explainer(clf_u, X, y, verbose = False)
exp4 = dx.Explainer(clf_p, X, y, verbose = False)


fobject1 = exp.model_fairness(protected, privileged, label='base')
fobject2 = exp2.model_fairness(protected, privileged, label='weighted')
fobject3 = exp3.model_fairness(protected, privileged, label='res_unif')
fobject4 = exp4.model_fairness(protected, privileged, label='res_pref')
fobject5 = exp5.model_fairness(protected, privileged, label='roc')

# plotting
fobject1.plot([fobject2, fobject5, fobject4, fobject3])
Found NaN's or 0's for models: {'base', 'weighted', 'roc'}
It is advisable to check 'metric_ratios'

We can see that the bias is reduced after mitigation. We can also observe this in fairness_check; for example, let's investigate the roc_pivot model.

In [30]:
fobject5.fairness_check()
No bias was detected!

Conclusion: your model is fair in terms of checked fairness criteria.

Ratios of metrics, based on 'male_old'. Parameter 'epsilon' was set to 0.8 and therefore metrics should be within (0.8, 1.25)
                   TPR       ACC       PPV       FPR       STP
female_old    1.018828  1.000000  0.971963  0.921525  0.964677
female_young  0.981172  0.938824  0.891355  0.872197  0.855055
male_young    1.019874  0.977647  0.929907  0.896861  0.918392

The main remaining concern is the FPR metric.

Summary

The fairness module in dalex is a unified and accessible way to check whether models are fair. New features will be added in the next versions of the module, and there is a long-term plan to add support for individual fairness.

Plots

This package uses plotly to render the plots.

Resources - https://dalex.drwhy.ai/python