diff --git a/use-cases/credit_risk/risk_bucketing.ipynb b/use-cases/credit_risk/risk_bucketing.ipynb new file mode 100644 index 0000000000..614fc5362f --- /dev/null +++ b/use-cases/credit_risk/risk_bucketing.ipynb @@ -0,0 +1,477 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Risk Bucketing\n", + "One of the most common use cases for machine learning in financial services is estimating the probability of default on a loan.\n", + "\n", + "Risk bucketing refers to the process of grouping borrowers with similar creditworthiness. Treating all borrowers as a single population generally results in poor predictions, as one model cannot capture the very different characteristics of each group at once. By dividing borrowers into groups based on risk characteristics, risk bucketing enables us to make more accurate predictions.\n", + "\n", + "Risk bucketing is a good example of an unsupervised clustering problem, and the K-means algorithm (which we use here) is one way to perform it.\n", + "\n", + "However, there is one major issue: how do we know the optimal number of risk buckets (clusters) for a given set of data/borrowers? This notebook demonstrates three techniques for estimating the optimal number of clusters:\n", + "- The Elbow Method\n", + "- Silhouette Scores\n", + "- Gap Analysis" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "import pandas as pd\n", + "import boto3" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Our data source is the well-known UCI Statlog German Credit dataset." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "s3 = boto3.resource(\"s3\")\n", + "s3_sample = s3.Object(\n", + " \"sagemaker-sample-files\",\n", + " \"datasets/tabular/uci_statlog_german_credit_data/german_credit_data.csv\",\n", + ").get()[\"Body\"]\n", + "credit = pd.read_csv(s3_sample)\n", + "credit.head()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "We drop the numeric columns *dependents* and *existingcredits*, as we are not going to use them." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "credit.drop([\"dependents\", \"existingcredits\"], inplace=True, axis=1)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Convert the categorical *job* column into a numeric one containing ordinal values (0=unemployed through 3=management / highly skilled)."
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "credit[\"job\"] = credit[\"job\"].replace(\n", + " {\n", + " \"unemployed\": 0,\n", + " \"unskilled\": 1,\n", + " \"skilled employee / official\": 2,\n", + " \"management / highly skilled\": 3,\n", + " }\n", + ")" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "import matplotlib.pyplot as plt\n", + "import seaborn as sns\n", + "\n", + "sns.set()\n", + "plt.rcParams[\"figure.figsize\"] = (10, 6)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Drop all columns except for the following (numeric) ones:\n", + "- Age\n", + "- Job (ordinal, ranging from 0=unemployed to 3=management / highly skilled)\n", + "- Credit amount\n", + "- Duration (the duration of the loan in months)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "numerical_credit = credit.select_dtypes(exclude=\"O\")\n", + "numerical_credit.head()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Plotting histograms of these four features, we can see that they are positively skewed." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "plt.figure(figsize=(10, 8))\n", + "# One histogram per feature, arranged in a 2x2 grid\n", + "for k, col in enumerate(numerical_credit.columns, start=1):\n", + " plt.subplot(2, 2, k)\n", + " plt.hist(numerical_credit[col])\n", + " plt.title(col)\n", + "plt.show()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## The Elbow Method\n", + "Our first method for estimating the optimal number of clusters is the Elbow Method. This involves calculating *inertia*, which is the sum of the squared distances of observations from their closest centroid.\n", + "\n", + "If we plot inertia against the number of clusters ($k$), we can see an *elbow* (i.e. the curve starts to flatten out) around $k=4$. This is an indication that further increasing the number of clusters is undesirable when traded off against the increased complexity.\n", + "\n", + "Here we use the KMeans implementation from scikit-learn, as it provides inertia as part of its output." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from sklearn.preprocessing import StandardScaler\n", + "from sklearn.cluster import KMeans\n", + "import numpy as np" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "scaler = StandardScaler()\n", + "scaled_credit = scaler.fit_transform(numerical_credit)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "inertia = []\n", + "for k in range(1, 10):\n", + " kmeans = KMeans(n_clusters=k)\n", + " kmeans.fit(scaled_credit)\n", + " inertia.append(kmeans.inertia_)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "plt.plot(range(1, 10), inertia, \"bx-\")\n", + "plt.xlabel(\"Number of Clusters (k)\")\n", + "plt.ylabel(\"Inertia\")\n", + "plt.title(\"The Elbow Method\")\n", + "plt.show()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Silhouette Scores\n", + "Silhouette scores take a value between -1 and 1. A value close to 1 indicates that an observation lies well within its own cluster; a value close to -1 indicates that it has likely been assigned to the wrong cluster.\n", + "\n", + "The strength of the silhouette score is that it takes into account both the intra-cluster distance (how close observations are to their own cluster) and the inter-cluster distance (how far apart the clusters are). The formula for the silhouette score is as follows:\n", + "\\begin{equation}\n", + "Silhouette = \\frac{x - y}{\\max(x, y)}\n", + "\\end{equation}\n", + "where $x$ is the mean distance to the observations in the nearest neighbouring cluster (inter-cluster distance), and $y$ is the mean distance to the other observations in the same cluster (intra-cluster distance). For example, an observation with $y = 0.5$ and $x = 1.0$ scores $(1.0 - 0.5)/1.0 = 0.5$.\n", + "\n", + "In this case, we can see that the peak silhouette score occurs when the number of clusters ($k$) is 2. This implies that it is not worth using more than 2 clusters." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from sklearn.metrics import silhouette_score" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "silhouette_scores = []\n", + "for n_clusters in range(2, 10):\n", + " clusterer = KMeans(n_clusters=n_clusters)\n", + " preds = clusterer.fit_predict(scaled_credit)\n", + " silhouette_scores.append(silhouette_score(scaled_credit, preds))" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "plt.plot(range(2, 10), silhouette_scores, \"bx-\")\n", + "plt.xlabel(\"Number of Clusters (k)\")\n", + "plt.ylabel(\"Silhouette Score\")\n", + "plt.title(\"Silhouette Scores\")\n", + "plt.show()" + ] + },
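+ { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "The score above is an average over all observations. To see how individual observations fare, scikit-learn also provides per-observation values via silhouette_samples. The following cell is an illustrative sketch (the variable names and the fixed random_state are our own additions, not part of the original analysis): it refits K-means with $k=2$, the peak above, and reports the mean silhouette value and the share of negative values (i.e. likely misassigned observations) in each cluster." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from sklearn.metrics import silhouette_samples\n", + "\n", + "# Refit with k=2 (the peak above); fix random_state for reproducibility\n", + "km2 = KMeans(n_clusters=2, random_state=0)\n", + "labels2 = km2.fit_predict(scaled_credit)\n", + "sample_values = silhouette_samples(scaled_credit, labels2)\n", + "for c in range(2):\n", + " vals = sample_values[labels2 == c]\n", + " print(f\"cluster {c}: mean silhouette={vals.mean():.3f}, negative share={(vals < 0).mean():.2%}\")" + ] + },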
+ { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Gap Analysis\n", + "Our final approach is *gap analysis*. This is based on the work of Tibshirani et al. (2001), which proposes finding the optimal number of clusters by comparison with a null reference distribution.\n", + "\n", + "Our data consists of $n$ independent observations of $p$ features. The squared Euclidean distance between observations $i$ and $i'$ is:\n", + "\\begin{equation}\n", + "d_{ii'} = \\sum_{j} (x_{ij} - x_{i'j})^2\n", + "\\end{equation}\n", + "\n", + "And the sum of all pairwise distances for points in cluster $r$ is:\n", + "\\begin{equation}\n", + "D_{r} = \\sum_{i,i' \\in C_{r}} d_{ii'}\n", + "\\end{equation}\n", + "\n", + "Then the pooled, within-cluster sum of squares around the cluster means is:\n", + "\\begin{equation}\n", + "W_{k} = \\sum_{r=1}^k \\frac{1}{2n_{r}}D_{r}\n", + "\\end{equation}\n", + "\n", + "The idea of this approach is to standardize the graph of $\\log(W_{k})$ by comparing it with its expectation under an appropriate null reference distribution of the data. The expectation of $\\log(W_{k})$ is approximately\n", + "\\begin{equation}\n", + "\\log(pn/12) - (2/p)\\log(k) + constant\n", + "\\end{equation}\n", + "The gap statistic is the amount by which the observed $\\log(W_{k})$ falls below this reference expectation:\n", + "\\begin{equation}\n", + "Gap_{n}(k) = E_{n}^{*}\\{\\log(W_{k})\\} - \\log(W_{k})\n", + "\\end{equation}\n", + "where $E_{n}^{*}$ is estimated by sampling from the reference distribution. Larger gap values indicate a clustering structure that is far from random." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "!pip install gap-stat" + ] + },
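+ { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Before reaching for a package, it may help to see the computation spelled out. The following cell is a rough, illustrative sketch (the helper names w_k and gap and the choice of five reference sets are our own): scikit-learn's inertia_ equals $W_{k}$ exactly under the squared-Euclidean definition above, and we approximate the reference expectation with uniform reference sets drawn over the range of each scaled feature." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "rng = np.random.default_rng(0)\n", + "\n", + "\n", + "def w_k(data, k):\n", + " # Pooled within-cluster sum of squares (equals KMeans inertia)\n", + " return KMeans(n_clusters=k, random_state=0).fit(data).inertia_\n", + "\n", + "\n", + "def gap(data, k, n_refs=5):\n", + " # Mean log(W_k) over uniform reference sets, minus observed log(W_k)\n", + " ref_logs = [\n", + " np.log(w_k(rng.uniform(data.min(axis=0), data.max(axis=0), data.shape), k))\n", + " for _ in range(n_refs)\n", + " ]\n", + " return np.mean(ref_logs) - np.log(w_k(data, k))\n", + "\n", + "\n", + "{k: round(gap(scaled_credit, k), 3) for k in range(1, 6)}" + ] + },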
+ { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "In practice, we use the gap-stat package installed above. Import the OptimalK class for calculating the gap statistic." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from gap_statistic.optimalK import OptimalK" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Calculate the gap statistic for various values of $k$ using parallelization." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "optimalK = OptimalK(n_jobs=8, parallel_backend=\"joblib\")\n", + "n_clusters = optimalK(scaled_credit, cluster_array=np.arange(1, 10))" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "gap_result = optimalK.gap_df\n", + "gap_result.head()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "If we plot the resulting gap values, we observe a sharp increase up to the point where the gap value reaches its peak, which in this case corresponds to 5 clusters. The analysis therefore suggests that 5 is the optimal number of clusters." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "plt.plot(gap_result.n_clusters, gap_result.gap_value)\n", + "plt.axhline(np.max(gap_result.gap_value), color=\"r\", linestyle=\"dashed\", linewidth=2)\n", + "plt.title(\"Gap Analysis\")\n", + "plt.xlabel(\"Number of Clusters\")\n", + "plt.ylabel(\"Gap Value\")\n", + "plt.show()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## K-Means Clustering\n", + "Due to their differing approaches, the three analyses above each suggest a different value for the optimal number of clusters, and some trial and error is required to determine which is indeed optimal for a given application.\n", + "\n", + "For example, let us proceed on the basis that the optimal number of clusters is two (as suggested by the *Silhouette Scores*).\n", + "\n", + "We perform K-means clustering to separate our observations into two clusters." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "kmeans = KMeans(n_clusters=2)\n", + "clusters = kmeans.fit_predict(scaled_credit)" + ] + },
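+ { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Each cluster label now defines a risk bucket. As a quick, purely illustrative check on what the buckets look like (the variable name bucketed is our own), we can attach the labels to the unscaled features, count the observations per bucket, and compare per-bucket feature means." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Attach bucket labels to the unscaled features and compare buckets\n", + "bucketed = numerical_credit.assign(bucket=clusters)\n", + "print(bucketed[\"bucket\"].value_counts())\n", + "bucketed.groupby(\"bucket\").mean()" + ] + },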
+ { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Finally, we generate some plots to visualize the clusters in two dimensions. Each plot shows the observations (with color indicating the assigned cluster), and black crosses mark the positions of the two centroids.\n", + "\n", + "The first plot shows the relationship between the *age* and *credit* features. Here we can see that *age* is the more dispersed feature, with the two centroids vertically aligned.\n", + "\n", + "The second plot considers the *credit* and *duration* features. Here we observe two clearly separated clusters, suggesting that *duration* varies more across the clusters than *credit* does.\n", + "\n", + "Finally, the third plot examines the relationship between *age* and *duration*. It turns out that there are many overlapping observations across these two features." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "plt.figure(figsize=(10, 12))\n", + "plt.subplot(311)\n", + "plt.scatter(scaled_credit[:, 2], scaled_credit[:, 1], c=kmeans.labels_, cmap=\"viridis\")\n", + "plt.scatter(\n", + " kmeans.cluster_centers_[:, 2],\n", + " kmeans.cluster_centers_[:, 1],\n", + " s=80,\n", + " marker=\"x\",\n", + " color=\"k\",\n", + ")\n", + "plt.title(\"Age vs Credit\")\n", + "plt.subplot(312)\n", + "plt.scatter(scaled_credit[:, 1], scaled_credit[:, 0], c=kmeans.labels_, cmap=\"viridis\")\n", + "plt.scatter(\n", + " kmeans.cluster_centers_[:, 1],\n", + " kmeans.cluster_centers_[:, 0],\n", + " s=80,\n", + " marker=\"x\",\n", + " color=\"k\",\n", + ")\n", + "plt.title(\"Credit vs Duration\")\n", + "plt.subplot(313)\n", + "plt.scatter(scaled_credit[:, 2], scaled_credit[:, 0], c=kmeans.labels_, cmap=\"viridis\")\n", + "plt.scatter(\n", + " kmeans.cluster_centers_[:, 2],\n", + " kmeans.cluster_centers_[:, 0],\n", + " s=80,\n", + " marker=\"x\",\n", + " color=\"k\",\n", + ")\n", + "plt.title(\"Age vs Duration\")\n", + "plt.show()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Conclusion\n", + "In this notebook we discussed why risk bucketing is necessary, and considered three different approaches to estimating the optimal number of risk buckets.\n", + "\n", + "Having estimated the optimal number of risk buckets, we used K-means clustering to split our observations between the target number of risk buckets.\n", + "\n", + "The follow-on activity is to create a default-risk model for each risk bucket, with each model trained separately on the data belonging to its bucket." + ] + } + ], + "metadata": { + "instance_type": "ml.t3.medium", + "kernelspec": { + "display_name": "Python 3 (Data Science)", + "language": "python", + "name": "python3__SAGEMAKER_INTERNAL__arn:aws:sagemaker:eu-west-1:470317259841:image/datascience-1.0" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.7.10" + } + }, + "nbformat": 4, + "nbformat_minor": 4 +} diff --git a/use-cases/index.rst b/use-cases/index.rst index 9ecbaea207..a0732c2aa8 100644 --- a/use-cases/index.rst +++ b/use-cases/index.rst @@ -46,3 +46,12 @@ Pipelines with NLP for Product Rating Prediction :maxdepth: 1 product_ratings_with_pipelines/pipelines_product_ratings + + +Credit Risk +----------- + +.. toctree:: + :maxdepth: 1 + + credit_risk/risk_bucketing \ No newline at end of file