CFRL Feature condition autoencoder #573

Open
HeyItsBethany3 opened this issue Jan 14, 2022 · 42 comments
Labels: Type: Question (User questions)


@HeyItsBethany3

Hi Team,

Firstly I want to say thank you so much for implementing a truly model-agnostic method for counterfactuals! I've been searching for many months now for a counterfactual tool I can easily use for GBMs and it made my day when I found this method.

I have implemented the CFRL method on a simple model with 12 features, but the counterfactuals are not very sparse; the sparsity is usually around 9 or 10. I tried specifying more features as immutable to enforce sparsity, which is not ideal. Is there a way to add feature conditioning into the autoencoder? I am concerned that if I train the autoencoder on 12 variables and then fix 6 variables, for instance, the autoencoder may have found important patterns in the variables that I then remove at a later stage.

Do you have any other advice on how to improve sparsity?
I have changed the loss function coefficients but the results didn't vary much. I don't have much experience with tuning autoencoders either. Do you have any advice on the best parameters to start with or focus on, that would make the most difference?

Thank you so much for your help!
Bethany

@RobertSamoilescu
Collaborator

RobertSamoilescu commented Jan 14, 2022

Hi @HeyItsBethany3. It would be ideal if you could share some code/a notebook and the data, so we have more context and can identify the problem. Thank you!

@HeyItsBethany3
Author

HeyItsBethany3 commented Jan 14, 2022

Hi @RobertSamoilescu. I can't show the code explicitly as it's confidential data. I'm predicting credit risk using a simple GBM model. The code is essentially the same as the alibi CFRL example for the Adult dataset: https://docs.seldon.io/projects/alibi/en/latest/examples/cfrl_adult.html
I have restricted age to [0, 5] and I am varying the immutable features. Currently my HIDDEN_DIM is 10 and LATENT_DIM is 8. I have 2000 elements in my data (500 of which are test data). There are 70 instances predicted as bad risk and 430 predicted as good risk, so it is a relatively unbalanced dataset.
Let me know if there's any other specific information I can give to help. Thanks so much for your help!

@RobertSamoilescu
Collaborator

One reason for poor sparsity would be a bad autoencoder. If the reconstruction is bad, I would expect the counterfactuals to be far from the input.

Some things to look for:

  • make sure that the numerical features are standardized and the categorical ones are one-hot encoded.
  • plot the mean reconstruction error over all numerical features and the mean accuracy over all categorical features for the train and validation dataset (make sure that you don't overfit on the training dataset).
  • make sure that the reconstruction loss is small (low value for numerical features and high accuracy for categorical features)
  • I would keep the hidden_dim=128 and play around first with the latent_dim (some small values 4, 5, 6, 7 ... ). If you overfit (see the validation loss going up) then try reducing the hidden_dim to 64, 32, ... .
  • If you still overfit, you can drop the hidden_dim.

Let me know if this works and maybe you can share some plots with the training & validation error.
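For example, a minimal sketch of such a check (the names heae, trainset_input and trainset_outputs follow the Adult CFRL example and are assumptions here, not something from this thread); it holds out a validation split while fitting the already compiled autoencoder and compares train vs. validation reconstruction loss:

import matplotlib.pyplot as plt

# fit on numpy inputs/targets so Keras can hold out a validation split
history = heae.fit(trainset_input, trainset_outputs,
                   validation_split=0.2, batch_size=128, epochs=100)

plt.plot(history.history["loss"], label="train")
plt.plot(history.history["val_loss"], label="validation")
plt.xlabel("epoch")
plt.ylabel("reconstruction loss")
plt.legend()
plt.show()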

@jklaise jklaise added the Type: Question User questions label Jan 18, 2022
@HeyItsBethany3
Author

HeyItsBethany3 commented Jan 18, 2022

@RobertSamoilescu Thanks so much, this was very helpful.

Eventually I reduced the dimensions to hidden_dim=8 and latent_dim=4. This reduced the sparsity from around 9 or 10 to around 6 or 7. It also reduced the distance between the counterfactual and the original instance. My dataset is unbalanced - there is more data for good credit risk than for bad credit risk, so the counterfactuals are better for the good risk instances.

A couple more questions:

  • When I use the autoencoder, I pass in x_one_hot_encoded, with size (1,23). The autoencoder returns a reconstructed x of the same size (1,23). How do I then calculate the mean accuracy over all categorical features? How do you convert the reconstructed x back to meaningful categorical variables?
    So far I have just computed the mean reconstruction error over all of the features, regardless of whether they were meant to be numerical or not.
  • What do you mean by drop hidden_dim? I tried to create the autoencoder without it but it said it was a necessary parameter.
  • Do you know if it's possible to specify in the autoencoder itself which features are immutable/restricted?

Thanks for your help!

@RobertSamoilescu
Collaborator

  1. When using a heterogeneous dataset (i.e., a dataset having both numerical and categorical variables), you need to use the heterogeneous autoencoder as in the adult example. The output of the heterogeneous autoencoder (HeAE) has a special format, as it returns a list where:
    • the first element in the list corresponds to a concatenation of the reconstruction for all numerical features
    • the rest of the elements correspond to the categorical features (one element for each categorical feature).

For example, if we have 3 numerical features and 2 categorical features, where the categorical features can take 3 and 4 values respectively, then the output of the HeAE would look like: [[num1 num2 num3], [cat11, cat12, cat13], [cat21, cat22, cat23, cat24]] (remember that categorical variables are one-hot encoded, hence this representation). To check how good the reconstruction is for a categorical variable, you can take the argmax of the corresponding head (i.e., for categorical variable 1, take argmax([cat11, cat12, cat13]) and check whether it matches the input value); see the short sketch at the end of this comment. Note that using a heterogeneous autoencoder is essential when you have a mixture of numerical and categorical features, as the reconstruction of the numerical features uses the MeanSquaredError loss while the categorical ones use the SparseCategoricalCrossentropy loss. From what I understand, you are dealing with a heterogeneous dataset, but somehow the output of the autoencoder you are using does not correspond to what I described above, so I believe there might be something wrong with your implementation.

  2. The mapping of the reconstruction should now be clear:
    • Numerical variables should stay the same if you are using the standardized representation (which I recommend when checking the reconstruction error). Otherwise, you have to multiply by the standard deviation and add the mean if you want to work in the original input space.
    • For the categorical variables, you should just take the argmax to recover the original values (as described above).
    • As an extra step, you may want to rearrange the columns into the original order.

Note that everything I am describing here can be done with the heae_preprocessor and the heae_inv_preprocessor (including rearranging the columns). The heae_preprocessor should standardize numerical features and transform the categorical ones into a one-hot representation, while the heae_inv_preprocessor should map them back to the original input space.

  3. If your dataset is unbalanced, I recommend you try to balance it.

  4. It is true that the current implementation in alibi does not support dropping the hidden layer. However, you can define your own encoder/decoder modules to fit your problem, as done here.

  5. Note that there is no need to specify immutable features for the autoencoder. The autoencoder is just a tool that the main CFRL algorithm uses. Immutability should only be specified for the CFRL algorithm.
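As a rough sketch of points 1 and 2 (the names follow the Adult CFRL example and are assumptions here, including the assumption that the categorical heads come in the order of categorical_ids):

import numpy as np

# X: raw data with label-encoded categoricals; heae / heae_preprocessor as in the Adult example
X_ohe = heae_preprocessor(X).astype(np.float32)
X_hat = heae(X_ohe)  # [numerical head, categorical head 1, categorical head 2, ...]

# numerical features: reconstruction error in the standardized space
num_mse = np.mean((X_ohe[:, :len(numerical_ids)] - X_hat[0].numpy()) ** 2)

# categorical features: accuracy of the argmax of each head vs. the label-encoded input
cat_acc = [
    np.mean(np.argmax(X_hat[i + 1].numpy(), axis=1) == X[:, cat_id].astype(int))
    for i, cat_id in enumerate(categorical_ids)
]
print(num_mse, np.mean(cat_acc))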

@HeyItsBethany3
Author

Thank you for this, this was so helpful. A couple more questions:

  • When you use diverse=True to generate many counterfactuals, e.g. 100 per instance, a different set of counterfactuals is returned each time. Is there a way to set a seed for this randomness, and what is determining this randomness?
  • I've also been using anchors for the credit model with tabular data. Is there a way to generate multiple anchors for one instance? Is there a way to exclude features we wouldn't like to include in the anchor?

Thank you so much for your help, it's been invaluable.

@RobertSamoilescu
Collaborator

  1. To make the diversity run deterministic, you can simply set the seed through numpy as follows:
np.random.seed(0)
explanation = explainer.explain(X=X, Y_t=Y_t, C=C, diversity=True, num_samples=100, batch_size=10)

As mentioned in the paper, at its core CFRL is not really designed to generate diverse counterfactuals, due to the deterministic nature of DDPG. However, we can enforce some diversity by playing a bit with the conditional vector (please see the paper for a detailed explanation). The randomness comes from the construction of the conditional vector:

if diverse:
    # Note that this is still a feasible counterfactual
    X_low_ohe[:, i] *= np.random.rand(*X_low_ohe[:, i].shape)
    X_high_ohe[:, i] *= np.random.rand(*X_high_ohe[:, i].shape)

if diverse:
    # Note that by masking random entries we still have a feasible counterfactual
    mask *= np.random.randint(low=0, high=2, size=mask.shape)

  2. Unfortunately, alibi supports neither the generation of multiple anchors nor excluding some features from the anchor. However, I believe that with a bit of engineering you can do both. I will try to provide some guidance on how to achieve this if you are interested.

2.a. The implementation in alibi returns the anchor with the maximum coverage, as can be seen here:

if better_anchors.size > 0:
    best_anchor_idx = better_anchors[np.argmax(coverages[better_anchors])]
    best_coverage = coverages[best_anchor_idx]
    best_anchor = anchors[candidate_anchors[best_anchor_idx]]
    if best_coverage == 1. or stop_on_first:
        break

I believe you can play around with this and retain all the anchors that might be of interest.
Furthermore, it is worth looking at how these variables evolve during the run:
best_of_size = {0: []} # type: Dict[int, list]
best_anchor = ()

2.b. The generation of potential anchors is performed here:

anchors = self.propose_anchors(best_of_size[current_size - 1])

If you go inside the function, you can see that each anchor is constructed by adding one feature at a time:

tuples = [(x,) for x in all_features]
for x in tuples:
    ...

for f in all_features:
    for t in previous_best:
        new_t = self._sort(t + (f,), allow_duplicates=False)
        if len(new_t) != len(t) + 1:  # Avoid repeating the same feature ...
            continue

I think you can exclude some features by removing them from the all_features variable defined here:
all_features = range(state['n_features'])

You will have to check that those changes do not break something else. For example, reducing the number of features will reduce the size of the maximum anchor. Thus, you may need to change the following line:

max_anchor_size = self.state['n_features']

to something like:

max_anchor_size = self.state['n_features'] - number_excluded_features

I haven't tested any of the suggested changes related to anchors, so there might be some potential issues that I did not consider.
It would be great if you could open another issue related to those anchor extensions and provide a use case where you might need multiple anchors or need to exclude some features from the anchor.

@HeyItsBethany3
Author

Thank you so much for this - I will implement it and let you know how it goes.

@HeyItsBethany3
Author

HeyItsBethany3 commented Feb 25, 2022

Hi @RobertSamoilescu, I hope you are well!
I am implementing this method for a different model. The categorical columns are ordinal encoded but not one hot encoded.
Do I need to one hot encode them for the method to work?

I am getting an error trying to fit the autoencoder:

  File ".../cf_explainer.py", line 96, in autoencoder
    heae.fit(trainset, epochs=config.EPOCHS)
  File ".../python3.8/site-packages/keras/utils/traceback_utils.py", line 67, in error_handler
    raise e.with_traceback(filtered_tb) from None
  File ".../python3.8/site-packages/tensorflow/python/framework/func_graph.py", line 1129, in autograph_handler
    raise e.ag_error_metadata.to_exception(e)
ValueError: in user code:

    File ".../python3.8/site-packages/keras/engine/training.py", line 878, in train_function  *
        return step_function(self, iterator)
    File ".../python3.8/site-packages/keras/engine/training.py", line 867, in step_function  **
        outputs = model.distribute_strategy.run(run_step, args=(data,))
    File ".../python3.8/site-packages/keras/engine/training.py", line 860, in run_step  **
        outputs = model.train_step(data)
    File ".../python3.8/site-packages/keras/engine/training.py", line 809, in train_step
        loss = self.compiled_loss(
    File ".../python3.8/site-packages/keras/engine/compile_utils.py", line 201, in __call__
        loss_value = loss_obj(y_t, y_p, sample_weight=sw)
    File ".../python3.8/site-packages/keras/losses.py", line 141, in __call__
        losses = call_fn(y_true, y_pred)
    File ".../python3.8/site-packages/keras/losses.py", line 245, in call  **
        return ag_fn(y_true, y_pred, **self._fn_kwargs)
    File ".../python3.8/site-packages/keras/losses.py", line 1204, in mean_squared_error
        return backend.mean(tf.math.squared_difference(y_pred, y_true), axis=-1)

    ValueError: Dimensions must be equal, but are 110 and 108 for '{{node mean_squared_error/SquaredDifference}} = SquaredDifference[T=DT_FLOAT](he_ae/adult_decoder/dense_3/BiasAdd, Cast)' with input shapes: [128,110], [128,108].

Do you have any suggestions why this might be erroring?
Thank you

@RobertSamoilescu
Collaborator

@HeyItsBethany3, I believe that you have two options:

  1. treat ordinal features as categorical and use one-hot-encoding -- same as you did before.
  2. since the features are ordinal, there exists an implicit ordering of their values. Thus, you can treat them as numerical features and the autoencoder will regress on them using MSE loss. Furthermore, the autoencoder allows you to specify the type of the numerical features (i.e., int or float). In your case, since the ordinal features have discrete values, you have to set them as int. Just do not forget to remove them from the list/dictionary of categorical variables (i.e., category_map) - that's where the error may come from.

For the second case, you should not worry about the range when applying the CFRL (i.e., the autoencoder may output a value of 8 for an ordinal feature, but the maximum is 7). It is guaranteed that the CFRL won't go outside the range and will consider only values from 0 to 7, where 0 is the minimum value.
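A minimal sketch of the second option, using hypothetical feature names (with education_level as the ordinal feature and X_train as the raw training data):

from alibi.explainers.backends.cfrl_tabular import get_he_preprocessor

feature_names = ['age', 'education_level', 'income']

# 'education_level' is ordinal, so it is *not* listed in category_map ...
category_map = {}  # only genuinely unordered categorical features go here

# ... and is instead declared as an integer-valued numerical feature
feature_types = {'age': int, 'education_level': int}

heae_preprocessor, heae_inv_preprocessor = get_he_preprocessor(
    X=X_train,
    feature_names=feature_names,
    category_map=category_map,
    feature_types=feature_types,
)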

@HeyItsBethany3
Author

@RobertSamoilescu Thanks so much, this is really helpful.
One more quick question - do missing values (NaN) need to be handled for use with Alibi packages, particularly with anchor_tabular?

@RobertSamoilescu
Collaborator

@HeyItsBethany3, please check issues #391 and #516.

As you see in issue #391, AnchorTabular does not work with NaN values. You can either drop the rows or use your preferred imputation method (e.g., mean, median, etc) to replace the NaN values.

Categorical variables are expected to be label encoded or one-hot encoded. In this situation, you can consider NaN as a possible value. For example, let's say that you have a categorical feature that can take 2 string values: ['a', 'b'], but you have some instances for which you have NaN. Instead of using an imputation method (e.g., mode), you can replace all NaNs with '?'. Thus your categorical feature will have three values: ['a', 'b', '?']. Then you can label encode it and pass it to AnchorTabular. Just keep in mind that your predictive model should be able to deal with this workaround, as it will be queried many times by the AnchorTabular algorithm.
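A minimal sketch of that workaround, using a hypothetical pandas DataFrame df and column name:

import pandas as pd

# replace missing values with an explicit '?' category before label encoding
df['employment_type'] = df['employment_type'].fillna('?')

# label encode: '?' simply becomes one more category value
values = sorted(df['employment_type'].astype(str).unique())
mapping = {v: i for i, v in enumerate(values)}
df['employment_type'] = df['employment_type'].map(mapping)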

@HeyItsBethany3
Author

HeyItsBethany3 commented Feb 28, 2022

@RobertSamoilescu Thanks so much for your help. I decided to implement both approaches and compare them.
At the moment I am one-hot encoding my categorical variables before passing them through the model, but I am still getting a similar error to the one above:
ValueError: Dimensions must be equal, but are 110 and 147 for '{{node mean_squared_error/SquaredDifference}} = SquaredDifference[T=DT_FLOAT](he_ae/adult_decoder/dense_3/BiasAdd, Cast)' with input shapes: [128,110], [128,147].
The number of categorical variables is 9 (so there are now 48 columns once they are one-hot encoded).

I am a bit confused as to how to create the autoencoder given that this example encodes the categorical variable as part of the prediction function.
This is my code below:

        original_data = self.data.xTrain.reset_index()
        encoded_xTrain = self.encoder.encode(original_data)

        heae_preprocessor, heae_inv_preprocessor = get_he_preprocessor(X=encoded_xTrain,
                                                               feature_names=self.encoder.transformed_columns,
                                                               category_map=self.category_map,
                                                               feature_types=self.feature_types)

        trainset_input = heae_preprocessor(encoded_xTrain).astype(np.float32)
        trainset_outputs = {
            "output_1": encoded_xTrain[:, len(self.categorical_ids):] # Numerical data
        }
        for i, cat_id in enumerate(self.categorical_ids):
            trainset_outputs.update({
                f"output_{i+2}": encoded_xTrain[:, cat_id]
            })

        trainset = tf.data.Dataset.from_tensor_slices((trainset_input, trainset_outputs))
        trainset = trainset.shuffle(1024).batch(128, drop_remainder=True)

        # Define autoencoder path and create dir if it doesn't exist.
        heae_path = os.path.join("tensorflow", "credit_autoencoder")
        if not os.path.exists(heae_path):
            os.makedirs(heae_path)

        # Define the heterogeneous auto-encoder
        heae = HeAE(encoder=ADULTEncoder(hidden_dim=config.HIDDEN_DIM, latent_dim=config.LATENT_DIM),
                decoder=ADULTDecoder(hidden_dim=config.HIDDEN_DIM, output_dims=self.OUTPUT_DIMS))

        # Define loss functions
        he_loss = [keras.losses.MeanSquaredError()]
        he_loss_weights = [1.]

        # Add categorical losses
        for i in range(len(self.data.categorical_columns)):
            he_loss.append(keras.losses.SparseCategoricalCrossentropy(from_logits=True))
            he_loss_weights.append(1./len(self.data.categorical_columns))

        # Define metrics
        metrics = {}
        for i, cat_name in enumerate(self.data.categorical_columns):
            metrics.update({f"output_{i+2}": keras.metrics.SparseCategoricalAccuracy()})

        # Compile model.
        heae.compile(optimizer=keras.optimizers.Adam(learning_rate=1e-3),
                     loss=he_loss,
                     loss_weights=he_loss_weights,
                     metrics=metrics)

        if len(os.listdir(heae_path)) == 0:
            # Fit and save autoencoder.
            heae.fit(trainset, epochs=config.EPOCHS)
            heae.save(heae_path, save_format="tf")
        else:
            heae = keras.models.load_model(heae_path, compile=False)

I have also tried using the original data to generate the trainset_outputs, but this gives a TypeError.
Thanks so much for your help.

@RobertSamoilescu
Collaborator

RobertSamoilescu commented Mar 1, 2022

@HeyItsBethany3, I don't understand exactly what you are trying to do with encoded_xTrain = self.encoder.encode(original_data) ... I would say that this is not correct, and it is probably where the errors come from, since the data is not in the correct format. On the other hand, maybe I am missing something ... Anyway, I wrote an example with lots of comments that should clarify everything:

import os
import numpy as np
import pandas as pd
import tensorflow as tf
import tensorflow.keras as keras

from sklearn.preprocessing import LabelEncoder
from alibi.models.tensorflow.autoencoder import HeAE
from alibi.explainers.backends.cfrl_tabular import get_he_preprocessor
from alibi.models.tensorflow.cfrl_models import ADULTEncoder, ADULTDecoder

# some fake dataset
size = 500
age = np.random.randint(low=20, high=80, size=size)
height = 1.4 + 0.6 * np.random.rand(size)
weight = (100 * (height - 1) + 20 * (np.random.rand(size) - 0.5)).astype(np.int32)
gender = np.random.choice(['male', 'female'], size=size)
education = np.random.choice(['high-school', 'bachelors',  'masters'], size=size)

# construct dataframe
dataset = pd.DataFrame({
    'age': age,
    'gender': gender,
    'height': height,
    'education': education,
    'weight': weight,    
})


# define meta-data
feature_names = ['age', 'gender', 'height', 'education', 'weight']
category_map = {1: ['male', 'female'], 3: ['high-school', 'bachelors',  'masters']}
feature_types = {'age': int, 'weight': int}

numerical_ids = [i for i in range(len(feature_names)) if i not in category_map]
categorical_ids = list(category_map.keys())

# take a look at the dataset
dataset.head()

# make sure that the categorical variables are label encoded, otherwise the he-preprocessor does not work well
# don't have to do this if the categorical features are already label-encoded.
for cat_id in categorical_ids:
    mapping = {val: i for (i, val) in enumerate(category_map[cat_id])}
    dataset[feature_names[cat_id]].replace(mapping, inplace=True)

# make sure that the dataset is numpy array and not pandas
original_data = dataset.to_numpy()

# construct preprocessor and inv_preprocessor
heae_preprocessor, heae_inv_preprocessor = get_he_preprocessor(X=original_data,
                                                               feature_names=feature_names,
                                                               category_map=category_map,
                                                               feature_types=feature_types)
# we can now preprocess the data
#  - numerical features are standardized and placed at the very beginning
#  - categorical features are one-hot encoded and placed at the very end
trainset_input = heae_preprocessor(original_data).astype(np.float32)


# construct autoencoder targets for numerical features
trainset_outputs = {
    # use the preprocessed trainset because the numerical features are standardized (mean 0, std 1) - this is important
    # note that this is not the case with the original dataset.
    'output_1': trainset_input[:, :len(numerical_ids)]
}

# construct autoencoder targets for categorical features.
for i, cat_id in enumerate(categorical_ids):
    trainset_outputs.update({
        # note that we use the label encoded format of the categorical variables.
        f"output_{i+2}": original_data[:, cat_id]
    })

trainset = tf.data.Dataset.from_tensor_slices((trainset_input, trainset_outputs))
trainset = trainset.shuffle(1024).batch(128, drop_remainder=True)

# Define autoencoder path and create dir if it doesn't exist.
heae_path = os.path.join("tensorflow", "credit_autoencoder")
if not os.path.exists(heae_path):
    os.makedirs(heae_path)

# Define the heterogeneous auto-encoder
HIDDEN_DIM = 32
LATENT_DIM = 2
OUTPUT_DIMS = [len(numerical_ids)]
OUTPUT_DIMS += [len(category_map[cat_id]) for cat_id in categorical_ids]

heae = HeAE(encoder=ADULTEncoder(hidden_dim=HIDDEN_DIM, latent_dim=LATENT_DIM),
            decoder=ADULTDecoder(hidden_dim=HIDDEN_DIM, output_dims=OUTPUT_DIMS))

# Define loss functions
he_loss = [keras.losses.MeanSquaredError()]
he_loss_weights = [1.]

# Add categorical losses
for cat_id in categorical_ids:
    he_loss.append(keras.losses.SparseCategoricalCrossentropy(from_logits=True))
    he_loss_weights.append(1./len(categorical_ids))

# Define metrics
metrics = {}
for i, _ in enumerate(categorical_ids):
    metrics.update({f"output_{i+2}": keras.metrics.SparseCategoricalAccuracy()})

# Compile model.
heae.compile(optimizer=keras.optimizers.Adam(learning_rate=1e-3),
             loss=he_loss,
             loss_weights=he_loss_weights,
             metrics=metrics)

if len(os.listdir(heae_path)) == 0:
    # Fit and save autoencoder.
    EPOCHS = 100
    heae.fit(trainset, epochs=EPOCHS)
    heae.save(heae_path, save_format="tf")
else:
    heae = keras.models.load_model(heae_path, compile=False)

Also, some other errors might come from the usage of pandas DataFrame instead of numpy array. Please consider using only numpy arrays with the utils functions.

@HeyItsBethany3
Author

Hi @RobertSamoilescu
With the encoded_xTrain I am one hot encoding all the data (so increasing the number of columns in the data too). I don't ordinally or label encode the data before one hot encoding.
I was trying to follow this example.

Thank you for your example. Why are you label encoding before one hot encoding? And where does the one hot encoding come in?
Does the heae_preprocessor handle all the one hot encoding and standardisation? If so, when the explainer is called, does it preprocess the data first in this way using the heae_preprocessor? Ideally I'd like the data not to be standardised, as my machine learning model needs unstandardised data.

At the moment my 9 categorical variables are at the start and numerical variables at the end. My category map looks like this:

{0: ['a','b', 'c' , 'd', 'e'], 1: ['a', 'b', 'c'], ........, 8: ['x', 'y', 'z']}

Is this the right format or should it be like:

{0: ['a','b', 'c' , 'd', 'e'], 5: ['a', 'b', 'c'], ........, 50: ['x', 'y', 'z']}

If the data is already one hot encoded, do the categories in the map need to be in the correct order that the columns correspond to? I'm not too sure how to create this.

Thanks so much for your help, this is really mind-boggling to me.

@RobertSamoilescu
Collaborator

The dataset is expected to be in a raw format as in the example you linked to:

  • numerical features are int or float
  • categorical variables are label encoded - int

The get_he_preprocessor expects the dataset to be in this format. Do not one-hot encode the categorical variables, as it does not know how to handle that. Just make sure that the categorical features are label encoded.

The get_he_preprocessor returns two objects:

  • heae_preprocessor - if you call this on your original data you will see that it standardizes the numerical features and one-hot encodes the categorical variables. This is required to train the autoencoder properly. You will see that the features are reordered, namely, the numerical features will be placed at the beginning and the categorical ones at the end. Do not worry about the ordering as it will be handled properly as I will explain.
  • heae_inv_preprocessor - decodes the output of the autoencoder back into the original input space. This means that the numerical features will be destandardized (back to their original range) and cast back to their original type (e.g. age will be an integer if it was an integer in your original dataset), categorical variables will be transformed back from one-hot encoding to label encoding. Moreover, the columns will be permuted to their original order (so let's say that you originally had gender (categorical), age (numerical), education (categorical), then the heae_preprocessor orders them like age, gender, education, but the heae_inv_preprocessor will put them back into the original order gender, age, education).

Note that the training of the autoencoder has to be done in this format. Thus, the numerical features will be standardized and the categorical features will be one-hot encoded. All the standardization and one-hot encoding is performed by heae_preprocessor, as in the example above. Do not apply any transformation to your dataset. Keep it as raw as possible (just ensure that the categorical features are label encoded).
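A minimal sketch of that round trip, reusing the names from the example above (the round trip should recover the original data up to floating point error):

import numpy as np

# raw data (categoricals label encoded) -> standardized + one-hot, columns reordered
X_ohe = heae_preprocessor(original_data)

# back to the raw representation, original dtypes and original column order
X_rec = heae_inv_preprocessor(X_ohe)

assert np.allclose(X_rec.astype(np.float32), original_data.astype(np.float32))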

Now, the answer to your question regarding the preprocessing in the explainer. The answer is no. The dataset is not preprocessed and passed to the model. Internally, what is happening is the following:

  • when we need to use the autoencoder, we preprocess the dataset - standardize and one-hot encode. That's the format that the autoencoder expects. This is done through this argument which is in fact heae_preprocessor.
  • The output of the autoencoder is post-processed and projected back to the original input space through this argument, which is in fact heae_inv_preprocessor. Now, with a decoded dataset in the raw format, we query the model. So your model will receive the data in a raw format. Note that we don't make any assumptions regarding the data format your model expects. We just give the data in the raw format we received.

Another important note is that you can perform any kind of preprocessing in the predictor itself.
You can define something like:

def my_predictor(X):
    X = model_preprocessor(X)  # do whatever preprocessing your model requires
    return model(X)

@HeyItsBethany3
Author

HeyItsBethany3 commented Mar 3, 2022

Ahh okay thank you for this, this makes sense!
I'm happy to only use label encoding, as the variables are properly treated as categorical variables in the process :)

Do the categories in the category map need to be in the correct order as they are encoded eg. if sex is encoded as 0 male and 1 female, would the category map need to be {0: ['male', 'female']} or would {0: ['female', 'male']} suffice?

Why did you suggest that I needed to use one hot encoding in this thread? Do I still need to do this?
#573 (comment)

Thank you

@RobertSamoilescu
Collaborator

@HeyItsBethany3, yes, the order of the values should match the label encoding, thus the correct option is {0: ['male', 'female']}. Otherwise, although the performance of the autoencoder and explainer will not be affected, you will get into trouble when you want to map the categorical labels back to an interpretable representation (e.g. strings), as the mapping might not be correct. I added this correction to the example here.

To answer your second question, I suggested using one-hot encoding (OHE) because that's what's happening under the hood. I didn't mean that you should transform the categorical variables to OHE explicitly. Apologies for the confusion.
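A minimal sketch (with hypothetical data) of keeping the category_map consistent with the label encoding, e.g. by building it from a fitted LabelEncoder:

from sklearn.preprocessing import LabelEncoder

le = LabelEncoder().fit(['male', 'female', 'male'])
print(le.classes_)  # ['female' 'male'], i.e. 'female' -> 0, 'male' -> 1

# the position of each value in the list matches its integer label
category_map = {0: list(le.classes_)}  # {0: ['female', 'male']}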

@HeyItsBethany3
Author

HeyItsBethany3 commented Mar 7, 2022

@RobertSamoilescu
Thank you! I've implemented the autoencoder & it is working. The category map is working too!
My code is erroring on fitting the explainer:

self.explainer = CounterfactualRLTabular(predictor=self.predict_fn,
                                         encoder=self.heae.encoder,
                                         decoder=self.heae.decoder,
                                         latent_dim=config.LATENT_DIM,
                                         encoder_preprocessor=self.heae_preprocessor,
                                         decoder_inv_preprocessor=self.heae_inv_preprocessor,
                                         coeff_sparsity=config.COEFF_SPARSITY,
                                         coeff_consistency=config.COEFF_CONSISTENCY,
                                         category_map=self.category_map,
                                         feature_names=self.features,
                                         train_steps=config.TRAIN_STEPS,
                                         batch_size=config.BATCH_SIZE,
                                         backend="tensorflow")

self.explainer.fit(X=self.xTrain)
Traceback (most recent call last):
  File "pipeline.py", line 25, in <module>
    explainer.setup_autoencoder()
  File "cf_explainer.py", line 104, in setup_autoencoder
    self.explainer.fit(X=self.xTrain)
  File "../python3.8/site-packages/alibi/explainers/cfrl_tabular.py", line 278, in fit
    return super().fit(X)
  File ".../python3.8/site-packages/alibi/explainers/cfrl_base.py", line 645, in fit
    data_generator = self.backend.data_generator(X=X, **self.params)
  File "../python3.8/site-packages/alibi/explainers/backends/tensorflow/cfrl_base.py", line 234, in data_generator
    return TfCounterfactualRLDataset(X=X, preprocessor=encoder_preprocessor, predictor=predictor,
  File ".../python3.8/site-packages/alibi/explainers/backends/tensorflow/cfrl_base.py", line 68, in __init__
    if self.Y_m.shape[1] > 1:
IndexError: tuple index out of range

I've tried to follow all the code and understand the issue but I have no idea. Thank you

@HeyItsBethany3
Author

@RobertSamoilescu I have tried to implement the same thing using your example above and am getting the same error. Here is the code:

import os
import numpy as np
import pandas as pd
import tensorflow as tf
import tensorflow.keras as keras

from sklearn.preprocessing import LabelEncoder
from alibi.models.tensorflow.autoencoder import HeAE
from alibi.explainers.backends.cfrl_tabular import get_he_preprocessor
from alibi.models.tensorflow.cfrl_models import ADULTEncoder, ADULTDecoder
from sklearn.linear_model import LogisticRegression
from alibi.explainers import CounterfactualRLTabular, CounterfactualRL

# some fake dataset
size = 500
age = np.random.randint(low=20, high=80, size=size)
height = 1.4 + 0.6 * np.random.rand(size)
weight = (100 * (height - 1) + 20 * (np.random.rand(size) - 0.5)).astype(np.int32)
gender = np.random.choice(['male', 'female'], size=size)
education = np.random.choice(['high-school', 'bachelors',  'masters'], size=size)

# construct dataframe
dataset = pd.DataFrame({
    'age': age,
    'gender': gender,
    'height': height,
    'education': education,
    'weight': weight,    
})


# define meta-data
feature_names = ['age', 'gender', 'height', 'education', 'weight']
category_map = {1: ['male', 'female'], 3: ['high-school', 'bachelors',  'masters']}
feature_types = {'age': int, 'weight': int}

numerical_ids = [i for i in range(len(feature_names)) if i not in category_map]
categorical_ids = list(category_map.keys())

# take a look at the dataset
#print(dataset.head())

# make sure that the categorical variables are label encoded, otherwise the he-preprocessor does not work well
# don't have to do this if the categorical features are already label-encoded.
for cat_id in categorical_ids:
    mapping = {val: i for (i, val) in enumerate(category_map[cat_id])}
    dataset[feature_names[cat_id]].replace(mapping, inplace=True)

#print(dataset.head())

# make sure that the dataset is numpy array and not pandas
original_data = dataset.to_numpy()


# construct preprocessor and inv_preprocessor
heae_preprocessor, heae_inv_preprocessor = get_he_preprocessor(X=original_data,
                                                               feature_names=feature_names,
                                                               category_map=category_map,
                                                               feature_types=feature_types)
# we can preprocessdata
#  - numerical features are standardized and place at the very begining
#  - categorical feature are one-hot encoded and placed at the very end 
trainset_input = heae_preprocessor(original_data).astype(np.float32)


# construct autoencoder targets for numerical features
trainset_outputs = {
    # use the trainset because the numerical are standardized (mean 0, std 1) - this is important
    # note that this is not the case with the original dataset.
    'output_1': trainset_input[:, :len(numerical_ids)]
}

# construct autoencode targets for categorical features.
for i, cat_id in enumerate(categorical_ids):
    trainset_outputs.update({
        # note that we use the label encoded format of the categorical variables.
        f"output_{i+2}": original_data[:, cat_id]
    })


#print(trainset_input)
#print(trainset_outputs)
#print(heae_inv_preprocessor(trainset_input))

trainset = tf.data.Dataset.from_tensor_slices((trainset_input, trainset_outputs))
trainset = trainset.shuffle(1024).batch(128, drop_remainder=True)

# Define autoencoder path and create dir if it doesn't exist.
heae_path = os.path.join("tensorflow", "credit_autoencoder")
if not os.path.exists(heae_path):
    os.makedirs(heae_path)

# Define the heterogeneous auto-encoder
HIDDEN_DIM = 32
LATENT_DIM = 2
OUTPUT_DIMS = [len(numerical_ids)]
OUTPUT_DIMS += [len(category_map[cat_id]) for cat_id in categorical_ids]

heae = HeAE(encoder=ADULTEncoder(hidden_dim=HIDDEN_DIM, latent_dim=LATENT_DIM),
            decoder=ADULTDecoder(hidden_dim=HIDDEN_DIM, output_dims=OUTPUT_DIMS))

# Define loss functions
he_loss = [keras.losses.MeanSquaredError()]
he_loss_weights = [1.]

# Add categorical losses
for cat_id in categorical_ids:
    he_loss.append(keras.losses.SparseCategoricalCrossentropy(from_logits=True))
    he_loss_weights.append(1./len(categorical_ids))

# Define metrics
metrics = {}
for i, _ in enumerate(categorical_ids):
    metrics.update({f"output_{i+2}": keras.metrics.SparseCategoricalAccuracy()})

# Compile model.
heae.compile(optimizer=keras.optimizers.Adam(learning_rate=1e-3),
             loss=he_loss,
             loss_weights=he_loss_weights,
             metrics=metrics)

if len(os.listdir(heae_path)) == 0:
    # Fit and save autoencoder.
    EPOCHS = 100
    heae.fit(trainset, epochs=EPOCHS)
    heae.save(heae_path, save_format="tf")
else:
    heae = keras.models.load_model(heae_path, compile=False)
    

    
COEFF_SPARSITY = 0.5      # Sparsity coefficient
COEFF_CONSISTENCY = 0.5   # Consistency coefficient
TRAIN_STEPS = 10000       # Number of training steps
BATCH_SIZE = 100          # Batch size

# Create a model
predictions = np.random.choice([0, 1], size=size)
clf = LogisticRegression(random_state=0).fit(original_data, predictions)
predict_fn = lambda x: clf.predict(x)


explainer = CounterfactualRLTabular(predictor=predict_fn,
                                    encoder=heae.encoder,
                                    decoder=heae.decoder,
                                    latent_dim=LATENT_DIM,
                                    encoder_preprocessor=heae_preprocessor,
                                    decoder_inv_preprocessor=heae_inv_preprocessor,
                                    coeff_sparsity=COEFF_SPARSITY,
                                    coeff_consistency=COEFF_CONSISTENCY,
                                    category_map=category_map,
                                    feature_names=feature_names,
                                    train_steps=TRAIN_STEPS,
                                    batch_size=BATCH_SIZE,
                                    backend="tensorflow")

explainer.fit(X=original_data)

Thank you!

@RobertSamoilescu
Collaborator

@HeyItsBethany3, the output of the model should be 2-dimensional, (N, C), where N is the number of instances and C is the number of classes. An easy fix would be to replace predict_fn = lambda x: clf.predict(x) with predict_fn = lambda x: clf.predict_proba(x). If your classifier does not support predict_proba but only hard label predictions, you can replace the output label with its one-hot encoded representation. For example, consider that you have called the predictor on 3 instances and you have 3 possible classes/labels: y = [2, 0, 1]. You can transform it into

y = [
   [0, 0, 1],
   [1, 0, 0],
   [0, 1, 0]
]

Note that the result of the explainer would not be affected if you replace probabilities with one-hot encodings, since the explainer actually takes an argmax along the columns.
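A minimal sketch of both options, assuming a fitted sklearn-style classifier clf:

import numpy as np

# preferred: return an (N, C) array of class probabilities
predict_fn = lambda x: clf.predict_proba(x)

# alternative when only hard labels are available: one-hot encode the predictions
n_classes = 2
predict_fn = lambda x: np.eye(n_classes)[clf.predict(x).astype(int)]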

@HeyItsBethany3
Author

Thanks for this! This worked for me, I used the format

y = [
   [0, 1],
   [1, 0],
   [0, 1]
]

I'm stuck on one error explaining the instance. It feels so close to working! I've tried every combination I can think of but I keep getting this condition error.

ranges = {'Variable1': [0.0, 1.0]}
self.explainer = CounterfactualRLTabular(predictor=predict_fn,
                                                 encoder=self.heae.encoder,
                                                 decoder=self.heae.decoder,
                                                 latent_dim=config.LATENT_DIM,
                                                 encoder_preprocessor=self.heae_preprocessor,
                                                 decoder_inv_preprocessor=self.heae_inv_preprocessor,
                                                 coeff_sparsity=config.COEFF_SPARSITY,
                                                 coeff_consistency=config.COEFF_CONSISTENCY,
                                                 category_map=self.category_map,
                                                 feature_names=self.features,
                                                 train_steps=config.TRAIN_STEPS,
                                                 batch_size=config.BATCH_SIZE,
                                                 ranges=ranges,
                                                 immutable_features=[],
                                                 backend="tensorflow")

self.explainer.fit(X=self.encoded_xTrain)
self.target_class = np.array([1, 0])
explanation = self.explainer.explain(instances, self.target_class, C=[{"Variable1":[0, 0.5]}])

This is the error:

Traceback (most recent call last):
  File "counterfactuals/scripts/pipeline.py", line 26, in <module>
    explainer.explain(instance)
  File ".../counterfactuals/cf_explainer.py", line 128, in explain
    explanation = self.explainer.explain(instances, self.target_class, C=[{"Variable1":[0, 0.5]}])
  File "...lib/python3.8/site-packages/alibi/explainers/cfrl_tabular.py", line 369, in explain
    C_vec = self.params["conditional_vector"](X=X,
  File ".../lib/python3.8/site-packages/alibi/explainers/backends/cfrl_tabular.py", line 835, in get_conditional_vector
    C_num = get_numerical_conditional_vector(X=X,
  File ".../lib/python3.8/site-packages/alibi/explainers/backends/cfrl_tabular.py", line 646, in get_numerical_conditional_vector
    X_low_ohe = preprocessor(X_low)
  File "..../lib/python3.8/site-packages/sklearn/compose/_column_transformer.py", line 748, in transform
    Xs = self._fit_transform(
  File ".../lib/python3.8/site-packages/sklearn/compose/_column_transformer.py", line 606, in _fit_transform
    return Parallel(n_jobs=self.n_jobs)(
  File "..../lib/python3.8/site-packages/joblib/parallel.py", line 1043, in __call__
    if self.dispatch_one_batch(iterator):
  File "..../lib/python3.8/site-packages/joblib/parallel.py", line 861, in dispatch_one_batch
    self._dispatch(tasks)
  File "..../lib/python3.8/site-packages/joblib/parallel.py", line 779, in _dispatch
    job = self._backend.apply_async(batch, callback=cb)
  File "..../lib/python3.8/site-packages/joblib/_parallel_backends.py", line 208, in apply_async
    result = ImmediateResult(func)
  File ".../lib/python3.8/site-packages/joblib/_parallel_backends.py", line 572, in __init__
    self.results = batch()
  File "..../python3.8/site-packages/joblib/parallel.py", line 262, in __call__
    return [func(*args, **kwargs)
  File "...v/lib/python3.8/site-packages/joblib/parallel.py", line 262, in <listcomp>
    return [func(*args, **kwargs)
  File "..../lib/python3.8/site-packages/sklearn/utils/fixes.py", line 216, in __call__
    return self.function(*args, **kwargs)
  File "..../lib/python3.8/site-packages/sklearn/pipeline.py", line 876, in _transform_one
    res = transformer.transform(X)
  File ".../lib/python3.8/site-packages/sklearn/preprocessing/_data.py", line 973, in transform
    X = self._validate_data(
  File "..../lib/python3.8/site-packages/sklearn/base.py", line 566, in _validate_data
    X = check_array(X, **check_params)
  File "...../lib/python3.8/site-packages/sklearn/utils/validation.py", line 805, in check_array
    raise ValueError(
ValueError: Found array with 0 sample(s) (shape=(0, 108)) while a minimum of 1 is required by StandardScaler.

Thank you

@RobertSamoilescu
Collaborator

@HeyItsBethany3,
I am not entirely sure where the error comes from. It would be great if you can reproduce the error on the toy example above.

I would try to call self.heae_preprocessor on the instances to see if the data is in the right format. If you get an error, it is probably the case that the instances variable is not in the same format as self.encoded_xTrain.

Also, if you are explaining a single instance, make sure that you have the batch dimension too, so the dimension would be (1, F) where F is the number of features. For example if you have an instance x=np.array([f1, f2, f3]), make sure you pass it as x=np.array([[f1, f2, f3]]).

To avoid another potential issue, make sure that self.target_class is label-encoded and not one-hot.
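A minimal sketch of those two points (all values below are hypothetical, and explainer is assumed to be already fitted):

import numpy as np

x = np.array([25, 1, 1.75, 2, 70])   # hypothetical single instance, shape (F,)
x = x.reshape(1, -1)                 # add the batch dimension -> shape (1, F)

Y_t = np.array([0])                  # label-encoded target class, not one-hot like [1, 0]

explanation = explainer.explain(X=x, Y_t=Y_t, C=[{"Variable1": [0, 0.5]}])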

@HeyItsBethany3
Author

HeyItsBethany3 commented Mar 21, 2022

Hi @RobertSamoilescu, Thank you so much for your help! I managed to fix this by debugging using all your suggestions.
I am now optimising the autoencoder as before, but I'm a bit confused about computing the categorical accuracy.
Your detailed description was so helpful. I am using the line below to compute the output of the autoencoder, and then I take the argmax of each categorical head as you described.

output = autoencoder.predict(preprocessor(instance))

However, I am not sure how to find the original one-hot encodings. When I print the output of preprocessor(instance), I get the numerical values and then, at the end, only zeros. For instance,

[[-0.953  0.6321 -0.7952  1.995  1.238 -0.4763
 ... -0.55692 -1.28525 -0.562
   0.          0.          0.          0.          0.          0.
   0.          0.          0.          0.          0.          0.
   0.          0.          0.          0.          0.          0.
   0.          0.          0.          0.          0.          0.
   0.          0.          0.          0.          0.          0.
   0.          0.          0.          0.          0.          0.
   0.          0.          0.          0.          0.          0.
   0.          0.          0.          0.          0.          0.        ]]

It's almost like the last columns should be the one-hot encoded versions of the categorical variables, but there are no 1s anywhere.
Thanks for your help!

@RobertSamoilescu
Copy link
Collaborator

@HeyItsBethany3,

I believe I know what's happening, but I cannot be entirely sure without access to the data, preprocessor and autoencoder. I will try to explain it with a simple example.

Consider that we generate the following dataset containing only categorical features:

X1 = np.random.randint(0, 5, (10, 4))
category_map = {i: list(range(5)) for i in range(X1.shape[1])}

In this case, X1 is a matrix of 10 rows and 4 columns containing only elements from the set {0, 1, 2, 3, 4}. We also define the corresponding category_map for this dataset.

Now, behind the scenes, the heae_preprocessor defines a categorical transformation that takes label encoded inputs and spits out one-hot encodings. This is done as follows:

from sklearn.preprocessing import OneHotEncoder
cat_transf = OneHotEncoder(
    categories=[range(len(x)) for x in category_map.values()],
    handle_unknown="ignore" 
)

Now we can fit the cat_transf on the X1 dataset:

cat_transf = cat_transf.fit(X1)

Intentionally I will define a data instance for which the values of the categorical features are not in the category_map. For example, we can do this as:

X2 = np.random.randint(5, 10, (1, 4))

In this case, the values of the categorical features will be sampled from the set {5, 6, 7, 8, 9}, which are definitely not in the category_map (remember, the category map knows only about the values {0, 1, 2, 3, 4}).
Because the values are unknown to the cat_transf, each one-hot encoding will be replaced by an array of zeros.

X2 = np.random.randint(5, 10, (1, 4))
X2_enc = cat_transf.transform(X2).todense()
print(X2_enc)  # this prints [[0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]]

This behavior is due to the handle_unknown="ignore" argument that is passed when constructing cat_transf. For more information on this topic please refer to the official sklearn documentation here.

In alibi, the categorical transformation is defined here.

I believe that in your case this happened because you defined the category map based only on the training dataset. For high-dimensional inputs, it might be the case that the training dataset does not cover all the values seen at test time, which will result in the edge case I described above. For example, if your training dataset is:

X1 = np.array([
  [1, 0, 2, 3],
  [0, 1, 3, 2],
  [1, 1, 2, 2]
])

and you defined the category_map something like:

category_map = {i: np.unique(X1[:, i]) for i in range(X1.shape[1])}

this will result in category_map = {0: [0, 1], 1: [0, 1], 2: [2, 3], 3: [2, 3]}.
Now, if your test instance is x = np.array([[2, 2, 1, 0]]), then this will be encoded as: [0 0 0 0 0 0 0 0].
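A minimal sketch of one way to avoid this edge case (the names X_train, X_test and categorical_ids are assumptions): build the category_map from all the values a feature can take, e.g. over train and test together, rather than from the training split alone.

import numpy as np

X_all = np.concatenate([X_train, X_test], axis=0)
category_map = {i: np.unique(X_all[:, i]).tolist() for i in categorical_ids}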

Hope this helps!

@HeyItsBethany3
Author

@RobertSamoilescu Thank you so much! This worked! I have the counterfactual method fully working now.

I am fitting the autoencoder as you suggested above and the error is very high. Previously with the small model (12 features) my numerical reconstruction error was around 5. Now with the large model (of 120 features), the error is 2381.
Do you have any advice on why this is so high?
Or advice on which dimensions (Latent_Dim and Hidden_dim) to vary etc. I've increased epochs from 50 to 500 but it didn't seem to improve the error.
Does the autoencoder standardise the variables as part of the process?

The code to calculate the reconstruction error for one instance is below. Then I take the average over the whole test data set.

label_encoded_instance = self.encoder.encode(instance.reset_index())
one_hot_instance = heae_preprocessor(label_encoded_instance).astype(np.float32) 
output = autoencoder.predict(one_hot_instance) # Calls autoencoder
numerical_output = output[0][0]

for i, col in enumerate(numerical_cols):
    original_value = instance[col]
    original_value = original_value.to_numpy()[0]
    autoencoded_value = numerical_output[i+1]

    rmse += np.square(original_value-autoencoded_value)

rmse = np.sqrt(rmse)/len(numerical_cols)

[Two screenshots of results attached.]

Thank you so much for your help! Feeling very out of my depth.

@RobertSamoilescu
Collaborator

RobertSamoilescu commented Apr 5, 2022

@HeyItsBethany3,
From the code you posted, it seems that you are not computing the error properly. As we've previously discussed, the autoencoder expects standardized numerical features and one-hot encoded categorical features. The output of the autoencoder is also in the same format, namely the numerical features are standardized and the output for each categorical variable describes a categorical distribution, from which you can take the most likely class or randomly sample a label according to the output probability distribution.

It seems that when you compute the error, you take the difference between the non-standardized numerical feature (i.e., in the original input format) and the standardized output of the autoencoder. This is most likely the source of your large error. It is very likely, judging from your results, that the original numerical features have large ranges (e.g., [0, 10000]), while the output of the autoencoder has standardized numerical features (mean 0 and standard deviation of 1).

If you want to compute the error in the original input format, you need to apply a post-processing operation to the output of the autoencoder. Note that this can be easily done using the heae_inv_preprocessor. Although I don't recommend computing the error in the original input space as it becomes very hard to understand how well the autoencoder performs. For example, you can have a numerical feature in a range [0, 10000] and another one in [0, 10]. Then an error of 100 might not be very suggestive as the ranges are not comparable.

A better way, which I recommend, is to compute the error in the standardized format. So instead of taking as a reference original_value = instance[col], you use one_hot_instance. This is much better because all numerical features are standardized and you can have a better feeling of how well the autoencoder performs.
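A rough sketch of that computation in the standardized space (the names follow the earlier examples and are assumptions here):

import numpy as np

X_ohe = heae_preprocessor(X_test).astype(np.float32)   # standardized + one-hot
X_hat = autoencoder.predict(X_ohe)                      # list: [numerical head, categorical heads ...]

num_true = X_ohe[:, :len(numerical_ids)]                # standardized numerical features
num_pred = X_hat[0]

rmse = np.sqrt(np.mean((num_true - num_pred) ** 2))
print(rmse)  # with standardized features, values well below 1 indicate a useful reconstruction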

Now, if after you've done all this correction you still get a large error then you might want to tune the autoencoder a bit. I think a decent way to get a feeling of what encoding size you need is to play a bit with a dimensionality reduction algorithm like PCA. Just see how many components you need to have a decent reconstruction, and that should be a decent value for you to start with. Eventually, you will try to reduce the size of the encoding (if possible) because an autoencoder should be a bit stronger than a naive PCA.

If you still see that the error is high, it might be the case that there is no correlation between your features, so you cannot reduce the dimensionality. In this case, you can avoid training the autoencoder. Please see the experiments section in the paper, in which we also present the results when no autoencoder is used.

If you decide to go for no autoencoder, you will still need to simulate that you have one. In this case, the encoder should be an invertible function f which limits the output range to a symmetric interval, as needed by DDPG, and the decoder should be the inverse function. A simple way to do this is to take the encoding function to be tanh and the decoding function to be the inverse tanh. Note that tanh does the trick for us as it limits the output range to [-1, 1] and is an invertible mapping between the real line and [-1, 1]. It also works well if the numerical features are standardized. Alternatively, you can directly scale each feature into [-1, 1] using something like a + (b - a) (f - f_min) / (f_max - f_min) with a=-1, b=1. For this latter method, at test time you may need to clamp the values between [f_min, f_max], as you may encounter values that are outside the range. In this case, the reconstruction won't be perfect and the error will depend on how far the test values go outside the range.
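A conceptual sketch of that second option, scaling each feature into [-1, 1] (the names are assumptions); this only illustrates the invertible mapping itself, and wiring it into CounterfactualRLTabular would still require the decoder to return the per-feature heads, as for the HeAE:

import numpy as np

# feature-wise min/max computed on the (preprocessed) training data
f_min, f_max = X_train.min(axis=0), X_train.max(axis=0)

def encode(x):
    # a + (b - a) * (f - f_min) / (f_max - f_min) with a = -1, b = 1
    return -1.0 + 2.0 * (x - f_min) / (f_max - f_min)

def decode(z):
    # clamp at test time: values outside [-1, 1] cannot be inverted exactly
    z = np.clip(z, -1.0, 1.0)
    return f_min + (z + 1.0) / 2.0 * (f_max - f_min)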

Hope this helps!

@HeyItsBethany3
Author

Thank you for this! I fixed that issue and the error reduced significantly. It's now around 130. What kind of reconstruction error/categorical accuracy would you say is reasonable?

Looking at PCA is a great suggestion, I'll do this. It would seem straightforward this way if the autoencoder had only one dimension to tune, but there are two (HIDDEN_DIM and LATENT_DIM). How would you set both of them / test the relationship between the two?

Thank you!

@RobertSamoilescu
Collaborator

Btw, I just realized that you probably want to include that N under the square root. Check here. Anyway, your error still seems quite large. Note that if the numerical features are standardized (i.e. mean 0 and var 1), even if you output a reconstruction equal to the mean value (i.e. 0 for each feature), your error should be around 1. So in principle, you want to do way better than that.

For the categorical reconstruction, try to do your best. In the end, the reconstruction loss is what influences the sparsity of the counterfactuals. A better reconstruction is more likely to produce counterfactuals that are closer to the input instance. I would go for something >0.9.

If you continue to struggle with the autoencoder, I suggest you try the version without an autoencoder. I've mentioned two ways in which you can do this in the previous comment.

@HeyItsBethany3
Author

Thanks so much for all of your advice. I've used the PCA method and found some good dimensions.
For the same dimensions, my PCA loss (around 0.3) is much better than the autoencoder loss (0.9).
I'm fairly certain the implementation is correct and there is no problem in my code.

Why would you think the autoencoder would not be performing as well? Is it possible to use PCA instead?

At the moment I'm removing any outliers (data points with high autoencoder reconstruction loss) to see if this improves the autoencoder. Can you think of any other factor which might be hindering the autoencoder?

Thank you

@HeyItsBethany3
Author

HeyItsBethany3 commented Apr 20, 2022

@RobertSamoilescu
I've tried implementing the tanh method above but I'm getting some errors. Have I implemented it correctly?

Here is the source code:

        self.heae_preprocessor, self.heae_inv_preprocessor = get_he_preprocessor(X=self.encoded_xTrain,
                                                               feature_names=self.features,
                                                               category_map=self.category_map,
                                                               feature_types=self.feature_types)


        trainset_input = self.heae_preprocessor(self.encoded_xTrain).astype(np.float32)
        # construct autoencoder targets for numerical features
        trainset_outputs = {
            "output_1": trainset_input[:, len(self.numerical_ids)] # Numerical data
        }

        # construct autoencoded targets for categorical features.
        for i, cat_id in enumerate(self.categorical_ids):
            trainset_outputs.update({
                # note that we use the label encoded format of the categorical variables.
                f"output_{i+2}": self.encoded_xTrain[:, cat_id]
            })


        trainset = tf.data.Dataset.from_tensor_slices((trainset_input, trainset_outputs))
        trainset = trainset.shuffle(1024).batch(128, drop_remainder=True)

        cf_encoder = lambda x, **kwargs: np.tanh(x)
        cf_decoder = lambda x, **kwargs: self.cf_decode(x, **kwargs)

        wrapper_predict_fn = lambda x: self.counterfactual_predict_fn(x)


        self.explainer = CounterfactualRLTabular(predictor=wrapper_predict_fn,
                                                 encoder=cf_encoder,
                                                 decoder=cf_decoder,
                                                 latent_dim=config.LATENT_DIM,
                                                 encoder_preprocessor=self.heae_preprocessor,
                                                 decoder_inv_preprocessor=self.heae_inv_preprocessor,
                                                 coeff_sparsity=config.COEFF_SPARSITY,
                                                 coeff_consistency=config.COEFF_CONSISTENCY,
                                                 category_map=self.category_map,
                                                 feature_names=self.features,
                                                 train_steps=10,
                                                 batch_size=config.BATCH_SIZE,
                                                 backend="tensorflow")

        print("Explainer initialised")
        self.explainer.fit(X=self.encoded_xTrain)
        print("Explainer fitted")


def cf_decode(self, x, **kwargs):
        decoded = np.arctanh(x)
        numerical_len = len(self.data.numerical_columns)+1

        output1 = decoded[:,0:numerical_len]

        tensor_list = []
        tensor_list.append(tf.convert_to_tensor(output1))

        previous_idx = numerical_len
        i = 0
        for key, value in self.category_map.items():
            next_idx = previous_idx + len(value)
            output_value = decoded[:, previous_idx:next_idx]
            tensor_list.append(tf.convert_to_tensor(output_value))
            previous_idx = next_idx
            i += 1
        return tensor_list
File ".../cf_explainer.py", line 180, in use_tanh_as_autoencoder
    self.explainer.fit(X=self.encoded_xTrain)
  File ".../python3.8/site-packages/alibi/explainers/cfrl_tabular.py", line 278, in fit
    return super().fit(X)
  File ".../python3.8/site-packages/alibi/explainers/cfrl_base.py", line 681, in fit
    X_cf_tilde = pp_func(self.backend.to_numpy(X_cf_tilde),
  File ".../python3.8/site-packages/alibi/explainers/cfrl_tabular.py", line 71, in __call__
    return sample(X_hat_split=X_cf,
  File ".../python3.8/site-packages/alibi/explainers/backends/cfrl_tabular.py", line 419, in sample
    X_ohe_hat_split += sample_categorical(X_hat_cat_split=X_hat_split[-cat_feat:],
  File ".../python3.8/site-packages/alibi/explainers/backends/cfrl_tabular.py", line 360, in sample_categorical
    proba = softmax(X_hat_cat_split[i], axis=1)
  File ".../python3.8/site-packages/scipy/special/_logsumexp.py", line 214, in softmax
    return np.exp(x - logsumexp(x, axis=axis, keepdims=True))
  File "..../python3.8/site-packages/scipy/special/_logsumexp.py", line 99, in logsumexp
    a_max = np.amax(a, axis=axis, keepdims=True)
  File "<__array_function__ internals>", line 5, in amax
  File ".../python3.8/site-packages/numpy/core/fromnumeric.py", line 2754, in amax
    return _wrapreduction(a, np.maximum, 'max', axis, None, out,
  File "..../python3.8/site-packages/numpy/core/fromnumeric.py", line 86, in _wrapreduction
    return ufunc.reduce(obj, axis, dtype, out, **passkwargs)
ValueError: zero-size array to reduction operation maximum which has no identity

Thank you!

@HeyItsBethany3
Copy link
Author

@RobertSamoilescu Would really appreciate your help on this :) Thank you!

@RobertSamoilescu
Copy link
Collaborator

RobertSamoilescu commented Apr 26, 2022

@HeyItsBethany3 ,
You can use any encoding/decoding procedure as long as the decoder is differentiable; the decoder must be differentiable to induce sparsity.

Whether you use PCA or no dimensionality reduction at all (i.e. tanh / tanh^-1), I recommend wrapping the encoder/decoder in a tf.keras.Model / torch.nn.Module to avoid any errors. Also note the expected types of the encoder/decoder in the docstrings.

For PCA you can use any implementation and extract the principal components / eigenvectors. In that case the actor will output the coefficients corresponding to each principal component, and all you have to do is multiply them accordingly to get the reconstruction (see the sketch below).

Regarding the poor AE performance: if you are entirely sure that the AE training pipeline is correct, it is possible that you are overfitting the training data. Have you plotted the error on the train & validation sets? If the error is large on validation but not on training, then you need to apply some regularization or reduce the size of the networks.
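
A rough sketch of the PCA route (hypothetical code, not part of alibi; components and mean would come from a PCA fitted on the preprocessed training data, e.g. with sklearn):

import numpy as np
import tensorflow as tf
import tensorflow.keras as keras


class PCAEncoder(keras.Model):

    def __init__(self, components: np.ndarray, mean: np.ndarray, **kwargs):
        super().__init__(**kwargs)
        # components: (latent_dim, num_preprocessed_features), mean: (num_preprocessed_features,)
        self.components = tf.constant(components, dtype=tf.float32)
        self.mean = tf.constant(mean, dtype=tf.float32)

    def call(self, x: tf.Tensor, **kwargs) -> tf.Tensor:
        # project the centred input onto the principal components
        return tf.matmul(x - self.mean, self.components, transpose_b=True)


class PCADecoder(keras.Model):

    def __init__(self, components: np.ndarray, mean: np.ndarray,
                 num_numerical: int, category_map: dict, **kwargs):
        super().__init__(**kwargs)
        self.components = tf.constant(components, dtype=tf.float32)
        self.mean = tf.constant(mean, dtype=tf.float32)
        self.num_numerical = num_numerical      # number of numerical columns
        self.category_map = category_map        # feature index -> list of category values

    def call(self, z: tf.Tensor, **kwargs):
        # linear (hence differentiable) reconstruction in the preprocessed space
        x_hat = tf.matmul(z, self.components) + self.mean
        # split into the output heads CFRL expects: the numerical block first,
        # then one block of logits per categorical feature
        outputs = [x_hat[:, :self.num_numerical]]
        idx = self.num_numerical
        for _, values in self.category_map.items():
            outputs.append(x_hat[:, idx:idx + len(values)])
            idx += len(values)
        return outputs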

@HeyItsBethany3
Copy link
Author

@RobertSamoilescu I've compared the AE errors: the training reconstruction loss is around 0.92 and the test error is 0.96, so I don't think it's overfitting. What do you think? What do you mean by reducing the size of the networks - is that lowering the number of dimensions (e.g. the latent dim)?

I've implemented the encoder & decoder using keras models but I'm still getting the same error as above. I'll include some more code that I've used.

Thank you

Option 1:

self.heae_preprocessor, self.heae_inv_preprocessor = get_he_preprocessor(X=self.encoded_xTrain,
                                                               feature_names=self.features,
                                                               category_map=self.category_map,
                                                               feature_types=self.feature_types)


        trainset_input = self.heae_preprocessor(self.encoded_xTrain).astype(np.float32)
        # construct autoencoder targets for numerical features
        trainset_outputs = {
            "output_1": trainset_input[:, len(self.numerical_ids)] # Numerical data
        }

        # construct autoencoded targets for categorical features.
        for i, cat_id in enumerate(self.categorical_ids):
            trainset_outputs.update({
                # note that we use the label encoded format of the categorical variables.
                f"output_{i+2}": self.encoded_xTrain[:, cat_id]
            })

        trainset = tf.data.Dataset.from_tensor_slices((trainset_input, trainset_outputs))
        trainset = trainset.shuffle(1024).batch(128, drop_remainder=True)

        wrapper_predict_fn = lambda x: self.counterfactual_predict_fn(x)

        cf_encoder = TanhEncoder()
        cf_decoder = TanhDecoder(self.data, self.category_map)

        # Define the heterogeneous autoencoder
        self.heae = HeAE(encoder=cf_encoder, decoder=cf_decoder)

        # Define loss functions
        he_loss = [keras.losses.MeanSquaredError()]
        he_loss_weights = [1.]

        # Add categorical losses
        for cat_id in self.categorical_ids:
            he_loss.append(keras.losses.SparseCategoricalCrossentropy(from_logits=True))
            he_loss_weights.append(1./len(self.categorical_ids))

        # Define metrics
        metrics = {}
        for i, _ in enumerate(self.categorical_ids):
            metrics.update({f"output_{i+2}": keras.metrics.SparseCategoricalAccuracy()})

        # Compile model
        self.heae.compile(optimizer=keras.optimizers.Adam(learning_rate=1e-3),
             loss=he_loss,
             loss_weights=he_loss_weights,
             metrics=metrics)

        self.heae.fit(trainset, epochs=config.EPOCHS)

        self.explainer = CounterfactualRLTabular(predictor=wrapper_predict_fn,
                                                 encoder=self.heae.encoder,
                                                 decoder=self.heae.decoder,
                                                 latent_dim=config.LATENT_DIM,
                                                 encoder_preprocessor=self.heae_preprocessor,
                                                 decoder_inv_preprocessor=self.heae_inv_preprocessor,
                                                 coeff_sparsity=config.COEFF_SPARSITY,
                                                 coeff_consistency=config.COEFF_CONSISTENCY,
                                                 category_map=self.category_map,
                                                 feature_names=self.features,
                                                 train_steps=2,
                                                 batch_size=config.BATCH_SIZE,
                                                 backend="tensorflow")

        print("Explainer initialised")
        self.explainer.fit(X=self.encoded_xTrain)
        print("Explainer fitted")

Option 2:

self.heae_preprocessor, self.heae_inv_preprocessor = get_he_preprocessor(X=self.encoded_xTrain,
                                                               feature_names=self.features,
                                                               category_map=self.category_map,
                                                               feature_types=self.feature_types)


        trainset_input = self.heae_preprocessor(self.encoded_xTrain).astype(np.float32)
        # construct autoencoder targets for numerical features
        trainset_outputs = {
            "output_1": trainset_input[:, len(self.numerical_ids)] # Numerical data
        }

        # construct autoencoded targets for categorical features.
        for i, cat_id in enumerate(self.categorical_ids):
            trainset_outputs.update({
                # note that we use the label encoded format of the categorical variables.
                f"output_{i+2}": self.encoded_xTrain[:, cat_id]
            })


        trainset = tf.data.Dataset.from_tensor_slices((trainset_input, trainset_outputs))
        trainset = trainset.shuffle(1024).batch(128, drop_remainder=True)

        wrapper_predict_fn = lambda x: self.counterfactual_predict_fn(x)

        cf_encoder = TanhEncoder()
        cf_decoder = TanhDecoder(self.data, self.category_map)

        self.explainer = CounterfactualRLTabular(predictor=wrapper_predict_fn,
                                                 encoder=cf_encoder,
                                                 decoder=cf_decoder,
                                                 latent_dim=config.LATENT_DIM,
                                                 encoder_preprocessor=self.heae_preprocessor,
                                                 decoder_inv_preprocessor=self.heae_inv_preprocessor,
                                                 coeff_sparsity=config.COEFF_SPARSITY,
                                                 coeff_consistency=config.COEFF_CONSISTENCY,
                                                 category_map=self.category_map,
                                                 feature_names=self.features,
                                                 train_steps=2,
                                                 batch_size=config.BATCH_SIZE,
                                                 backend="tensorflow")
       
        print("Explainer initialised")
        self.explainer.fit(X=self.encoded_xTrain)
        print("Explainer fitted")

TanhEncoder & TanhDecoder:

from typing import List
import tensorflow as tf
import tensorflow.keras as keras
import numpy as np
import math


class TanhEncoder(keras.Model):

    def __init__(self, **kwargs):
        super().__init__(**kwargs)

    def call(self, x: tf.Tensor, **kwargs) -> tf.Tensor:
        encoded = tf.tanh(x)

        return encoded


class TanhDecoder(keras.Model):

    def __init__(self, data, category_map, **kwargs):
        self.data = data
        self.category_map = category_map
        super().__init__(**kwargs)


    def call(self, x: tf.Tensor, **kwargs) -> List[tf.Tensor]:
        tensor_list = [] # List of tensors

        decoded = tf.atan(x)

        numerical_len = len(self.data.numerical_columns)+1
        decoded_numerical_output = decoded[:,0:numerical_len]
        tensor_list.append(tf.convert_to_tensor(decoded_numerical_output))

        previous_idx = numerical_len
        for key, value in self.category_map.items():
            next_idx = previous_idx + len(value)
            output_value = decoded[:, previous_idx:next_idx]
            tensor_list.append(tf.convert_to_tensor(output_value))
            previous_idx = next_idx

        return tensor_list

@RobertSamoilescu
Copy link
Collaborator

RobertSamoilescu commented May 3, 2022

@HeyItsBethany3,
There are a few mistakes in your implementation.
I wrote a working example of CFRL without AE using a fake dataset. People younger than 50 are classified as 1 and people older than 50 are classified as 0.

And I got the following results:

  • orig_pd.head()

     age  gender   height    education    weight  Label
    25.0  female  1.532881   bachelors      49.0      1
    38.0  female  1.729462   bachelors      81.0      1
    36.0  female  1.788416   masters        70.0      1
    48.0  female  1.522529   high-school    56.0      1
    21.0  female  1.860446   masters        88.0      1

  • cf_pd.head()

     age  gender   height    education    weight  Label
      51  female  1.53328    bachelors        49      0
      52  female  1.733367   bachelors        81      0
      51  female  1.78725    bachelors        70      0
      51  female  1.524484   high-school      56      0
      52  female  1.86018    bachelors        88      0

Unfortunately, for the AE case, I cannot help you if you don't share your code and data.

@RobertSamoilescu
Copy link
Collaborator

There has been a minor update of the gist.

@RobertSamoilescu
Copy link
Collaborator

RobertSamoilescu commented May 5, 2022

@HeyItsBethany3,
I noticed there was an error in the CFRL example for the tabular dataset that might be related to your issue with training the autoencoder. Basically, the numerical targets used during training were wrong. See PR #651 for more details.
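
For reference, a sketch of what the corrected numerical targets look like (based on the updated example; see the PR for the exact change, and note the variable names follow the example rather than your snippet) - the numerical block is a slice over all numerical columns, not a single column:

# numerical targets: all standardized numerical columns of the preprocessed input
trainset_outputs = {
    "output_1": trainset_input[:, :len(numerical_ids)]
}

# categorical targets: label-encoded columns, one output head per categorical feature
for i, cat_id in enumerate(categorical_ids):
    trainset_outputs[f"output_{i + 2}"] = X_train[:, cat_id]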

@HeyItsBethany3
Copy link
Author

@RobertSamoilescu Thank you, this helped a lot! My error is much better now, and I realised I needed many more epochs.

Are the target variables for the categorical parts still correct? i.e. should we still be using encoded_xTrain instead of trainset_input?

I also noticed that for the loss we have a combination of mean squared error (for the numerical variables) and cross-entropy (for the categorical ones), but in the metrics we only pass something for the categorical outputs (sparse categorical accuracy) and nothing for the numerical ones. Is this correct?
Thank you

@RobertSamoilescu
Copy link
Collaborator

@HeyItsBethany3 ,

The targets for the categorical variables are correct, since the order given by the keys in the category_map is preserved both by get_he_preprocessor and when we set the reconstruction targets for the AE. For SparseCategoricalCrossentropy the targets are expected to be labels and not one-hot encodings. You can check the documentation here.

The metrics are used just to provide a more interpretable evaluation of the training procedure. It would be a bit hard to say how good the reconstruction of the categorical features is just by looking at the cross-entropy; accuracy gives a better understanding of how good the reconstruction is for the categorical variables.
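
As a tiny standalone illustration of the label-encoded targets (not code from this thread):

import numpy as np
import tensorflow.keras as keras

# SparseCategoricalCrossentropy expects integer class labels, not one-hot vectors.
loss = keras.losses.SparseCategoricalCrossentropy(from_logits=True)
labels = np.array([2, 0])                               # label-encoded targets
logits = np.array([[0.1, 0.2, 3.0], [2.5, 0.3, 0.1]])   # one row of logits per instance
print(float(loss(labels, logits)))                      # small loss: logits agree with labels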

@HeyItsBethany3
Copy link
Author

@RobertSamoilescu Thank you!
What metrics would you recommend using to get a good picture of both numerical and categorical variables?
I'm calculating RMSE and categorical accuracy afterwards as you suggested but I would like to be able to plot the loss vs epochs over the course of training the model.

@RobertSamoilescu
Copy link
Collaborator

Well, I guess you can always combine the two losses (MSE + CE), but personally I would prefer to keep them separate: one plot for all numerical features and another one for the categorical features. That way you can better understand where the AE might struggle.
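
One possible way to do that from the Keras History object (illustrative sketch; the exact history keys depend on the output names and Keras version, so print history.history.keys() first; heae, trainset and EPOCHS refer to the earlier snippet):

import matplotlib.pyplot as plt

history = heae.fit(trainset, epochs=EPOCHS)

fig, (ax_num, ax_cat) = plt.subplots(1, 2, figsize=(10, 4))

# numerical features: MSE loss of the first output head
ax_num.plot(history.history["output_1_loss"])
ax_num.set_title("numerical reconstruction (MSE)")
ax_num.set_xlabel("epoch")

# categorical features: mean sparse categorical accuracy over the remaining heads
acc_keys = [k for k in history.history if k.endswith("sparse_categorical_accuracy")]
mean_acc = [sum(v) / len(v) for v in zip(*(history.history[k] for k in acc_keys))]
ax_cat.plot(mean_acc)
ax_cat.set_title("categorical reconstruction (accuracy)")
ax_cat.set_xlabel("epoch")

plt.tight_layout()
plt.show()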

@HeyItsBethany3
Copy link
Author

HeyItsBethany3 commented May 18, 2022

Thanks. If I added a metrics dictionary like this (with the above code):

metrics = {
    "output_1": keras.metrics.MeanSquaredError(),
    "output_2": keras.metrics.SparseCategoricalAccuracy(),
    "output_3": keras.metrics.SparseCategoricalAccuracy(),
    "output_4": keras.metrics.SparseCategoricalAccuracy(),
    "output_5": keras.metrics.SparseCategoricalAccuracy(),
}

would this calculate MSE for the numerical output and sparse categorical accuracy for each categorical output, which I could then combine into the two plots?
