TabNet : Attentive Interpretable Tabular Learning

This is a PyTorch implementation of TabNet (Arik, S. O., & Pfister, T. (2019). TabNet: Attentive Interpretable Tabular Learning. arXiv preprint arXiv:1908.07442.) https://arxiv.org/pdf/1908.07442.pdf. Please note that some design choices have been made over time to improve the library, so it can differ from the original paper.

Any questions? Want to contribute? Want to talk with us? You can join us on Slack.

Installation

Easy installation

You can install using pip or conda as follows.

with pip

pip install pytorch-tabnet

with conda

conda install -c conda-forge pytorch-tabnet

Source code

If you want to use it locally within a Docker container:

  • git clone git@github.com:dreamquark-ai/tabnet.git

  • cd tabnet to get inside the repository


CPU only

  • make start to build and get inside the container

GPU

  • make start-gpu to build and get inside the GPU container


  • poetry install to install all the dependencies, including Jupyter

  • make notebook inside the same terminal. You can then follow the link to a Jupyter notebook with TabNet installed.

What is new?

  • from version > 4.0 attention is now embedding-aware. This aims to maintain a good attention mechanism even with a large number of embeddings. It is also now possible to specify attention groups (using grouped_features). Attention is now done at the group level and not at the feature level. This is especially useful if a dataset has a lot of columns coming from one single source of data (example: a text column transformed using TF-IDF), as shown in the sketch below.
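
For example, a minimal sketch (the column indices are purely illustrative, assuming columns 5 to 24 hold the TF-IDF output of a single text column):

from pytorch_tabnet.tab_model import TabNetClassifier

# columns 5 to 24 come from the same TF-IDF transformation,
# so they are treated as a single attention group
tfidf_group = list(range(5, 25))

clf = TabNetClassifier(grouped_features=[tfidf_group])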

Contributing

When contributing to the TabNet repository, please make sure to first discuss the change you wish to make via a new or already existing issue.

Our commits follow the rules presented here.

What problems does pytorch-tabnet handle?

  • TabNetClassifier: binary classification and multi-class classification problems

  • TabNetRegressor: simple and multi-task regression problems (targets must be 2D; see the sketch after this list)

  • TabNetMultiTaskClassifier: multi-task multi-class classification problems
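
For instance, a minimal sketch for regression targets (variable names are illustrative; the reshape is the important part):

from pytorch_tabnet.tab_model import TabNetRegressor

reg = TabNetRegressor()
reg.fit(
  X_train, y_train.reshape(-1, 1),  # single-output regression: targets shaped (n_samples, 1)
  eval_set=[(X_valid, y_valid.reshape(-1, 1))]
)
preds = reg.predict(X_test)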

How to use it?

TabNet is now scikit-learn compatible; training a TabNetClassifier or TabNetRegressor is really easy.

from pytorch_tabnet.tab_model import TabNetClassifier, TabNetRegressor

clf = TabNetClassifier()  #TabNetRegressor()
clf.fit(
  X_train, y_train,
  eval_set=[(X_valid, y_valid)]
)
preds = clf.predict(X_test)

or for TabNetMultiTaskClassifier:

from pytorch_tabnet.multitask import TabNetMultiTaskClassifier
clf = TabNetMultiTaskClassifier()
clf.fit(
  X_train, y_train,
  eval_set=[(X_valid, y_valid)]
)
preds = clf.predict(X_test)

The targets in y_train/y_valid should contain a single type (e.g. they must all be strings or all integers).

Default eval_metric

A few classic evaluation metrics are implemented (see further below for custom ones):

  • binary classification metrics: 'auc', 'accuracy', 'balanced_accuracy', 'logloss'

  • multiclass classification: 'accuracy', 'balanced_accuracy', 'logloss'

  • regression: 'mse', 'mae', 'rmse', 'rmsle'

Important Note: 'rmsle' will automatically clip negative predictions to 0, because the model can predict negative values. In order to match the given scores, you need to use np.clip(clf.predict(X_predict), a_min=0, a_max=None) when doing predictions.
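
For example, built-in metrics are passed to fit by name (a minimal sketch; the last metric of the list is the one used for early stopping):

import numpy as np

reg = TabNetRegressor()
reg.fit(
  X_train, y_train,
  eval_set=[(X_valid, y_valid)],
  eval_metric=['mae', 'rmsle']  # the last metric drives early stopping
)
# clip negative predictions to match the reported 'rmsle' score
preds = np.clip(reg.predict(X_test), a_min=0, a_max=None)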

Custom evaluation metrics

You can create a metric for your specific needs. Here is an example for the gini score (note that you need to specify whether this metric should be maximized or not):

from pytorch_tabnet.metrics import Metric
from sklearn.metrics import roc_auc_score

class Gini(Metric):
    def __init__(self):
        self._name = "gini"    # name displayed in the training logs
        self._maximize = True  # higher gini is better

    def __call__(self, y_true, y_score):
        # gini = 2 * AUC - 1, computed from the positive-class scores
        auc = roc_auc_score(y_true, y_score[:, 1])
        return max(2 * auc - 1, 0.)

clf = TabNetClassifier()
clf.fit(
  X_train, Y_train,
  eval_set=[(X_valid, y_valid)],
  eval_metric=[Gini]
)

A specific customization example notebook is available here: https://github.com/dreamquark-ai/tabnet/blob/develop/customizing_example.ipynb

Semi-supervised pre-training

Added in a later revision of TabNet’s original paper, semi-supervised pre-training is now available via the class TabNetPretrainer:

import torch
from pytorch_tabnet.pretraining import TabNetPretrainer

# TabNetPretrainer
unsupervised_model = TabNetPretrainer(
    optimizer_fn=torch.optim.Adam,
    optimizer_params=dict(lr=2e-2),
    mask_type='entmax'  # "sparsemax"
)

unsupervised_model.fit(
    X_train=X_train,
    eval_set=[X_valid],
    pretraining_ratio=0.8,
)

clf = TabNetClassifier(
    optimizer_fn=torch.optim.Adam,
    optimizer_params=dict(lr=2e-2),
    scheduler_params={"step_size":10, # how to use learning rate scheduler
                      "gamma":0.9},
    scheduler_fn=torch.optim.lr_scheduler.StepLR,
    mask_type='sparsemax' # This will be overwritten if using a pretrained model
)

clf.fit(
    X_train=X_train, y_train=y_train,
    eval_set=[(X_train, y_train), (X_valid, y_valid)],
    eval_name=['train', 'valid'],
    eval_metric=['auc'],
    from_unsupervised=unsupervised_model
)

The loss function has been normalized to be independent of pretraining_ratio, batch_size and the number of features in the problem. A self-supervised loss greater than 1 means that your model is reconstructing worse than predicting the mean for each feature; a loss below 1 means that the model is doing better than predicting the mean.

A complete example can be found within the notebook pretraining_example.ipynb.

/!\ : the current implementation tries to reconstruct the original inputs, but Batch Normalization applies a random transformation that can’t be deduced from a single row, making the reconstruction harder. Lowering the batch_size might make the pretraining easier.

Data augmentation on the fly

It is now possible to apply a custom data augmentation pipeline during training. Templates for ClassificationSMOTE and RegressionSMOTE have been added in pytorch_tabnet/augmentations.py and can be used as is.
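
For example, a minimal sketch (the p value is illustrative):

from pytorch_tabnet.augmentations import ClassificationSMOTE

aug = ClassificationSMOTE(p=0.2)  # probability of applying the augmentation (illustrative value)

clf = TabNetClassifier()
clf.fit(
  X_train, y_train,
  eval_set=[(X_valid, y_valid)],
  augmentations=aug  # applied on the fly, batch by batch
)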

Easy saving and loading

It’s really easy to save and re-load a trained model; this makes TabNet production ready.

# save tabnet model
saving_path_name = "./tabnet_model_test_1"
saved_filepath = clf.save_model(saving_path_name)

# define new model with basic parameters and load state dict weights
loaded_clf = TabNetClassifier()
loaded_clf.load_model(saved_filepath)
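
The loaded model can then be used exactly like the original one; as a quick sanity check:

import numpy as np

# predictions from the re-loaded model should match the original model
assert np.array_equal(clf.predict(X_test), loaded_clf.predict(X_test))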