
Mastering Customer Segmentation with LLM

By Damian Gil, in Towards Data Science

Contents: Intro · Data · Method 1: Kmeans · Method 2: K-Prototype · Method 3: LLM + Kmeans · Conclusion

Intro

A customer segmentation project can be approached in multiple ways. In this article I will teach you advanced techniques, not only to define the clusters, but also to analyze the results. This post is intended for data scientists who want several tools for addressing clustering problems and who want to be one step closer to being a senior DS.

What will we see in this article? We will look at three ways to approach this type of project:

- Method 1: Kmeans
- Method 2: K-Prototype
- Method 3: LLM + Kmeans

As a small preview, I will show a comparison of the 2D representations (PCA) of the different models created. You will also learn dimensionality reduction techniques such as PCA, t-SNE, and MCA.

You can find the project with the notebooks here, and you can also take a look at my GitHub: github.com

A very important clarification: this is not an end-to-end project, because we have skipped one of the most important parts of this type of work, the exploratory data analysis (EDA) phase and the selection of variables.

Data

The original data used in this project comes from a public Kaggle dataset: Banking Dataset — Marketing Targets. Each row in this dataset contains information about a company's customers. Some fields are numerical and others are categorical; we will see that this expands the possible ways to approach the problem.

We keep only the first 8 columns (age, job, marital, education, default, balance, housing, and loan).

For the project, I used the training dataset from Kaggle. In the project repository you can find the "data" folder, where a compressed file with the dataset used in the project is stored. Inside the compressed file there are two CSV files: the training dataset provided by Kaggle (train.csv), and the dataset obtained after performing an embedding (embedding_train.csv), which we will explain later on. To further clarify how everything is organized, the project tree is shown in the original post.

Method 1: Kmeans

This is the most common method and the one you will surely already know. Even so, we are going to study it, because I will show advanced analysis techniques for these cases. The Jupyter notebook with the complete procedure is called kmeans.ipynb.

First, a preprocessing of the variables is carried out.

It is crucial that there are as few outliers as possible in our data, since Kmeans is very sensitive to them. We could apply the typical approach of flagging outliers with the z-score, but in this post I will show you a more advanced method: the Python Outlier Detection (PyOD) library. This library is focused on detecting outliers in different settings; to be more specific, we will use the ECOD method ("empirical cumulative distribution functions for outlier detection"). This method estimates the distribution of the data in order to find the values where the probability density is lowest (the outliers). Take a look at its GitHub if you want.

One of the disadvantages of the Kmeans algorithm is that you must choose the number of clusters. To obtain that number, we will use the Elbow Method. It consists of calculating the distortion between the points of each cluster and its centroid; the objective is clear, to obtain the least possible distortion.
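The notebook's exact code is not reproduced in this extract, so here is a minimal sketch of how the preprocessing, the ECOD filtering, and the elbow loop might look. The one-hot encoding and scaling choices are my own assumptions; only the ECOD calls follow the PyOD API directly.

```python
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from pyod.models.ecod import ECOD

# Assumed starting point: the first 8 columns of train.csv (separator may differ)
df = pd.read_csv("train.csv", sep=";").iloc[:, :8]

# Preprocessing (assumption): one-hot encode categoricals, then scale everything
X = pd.get_dummies(df, drop_first=True)
X = pd.DataFrame(StandardScaler().fit_transform(X), columns=X.columns)

# ECOD outlier detection: predict() returns 1 for outliers, 0 for inliers
clf = ECOD()
clf.fit(X)
X_clean = X[clf.predict(X) == 0]

# Elbow method: plot the inertia (distortion) for a range of k values
ks = range(2, 11)
inertias = [KMeans(n_clusters=k, n_init=10, random_state=0).fit(X_clean).inertia_ for k in ks]

plt.plot(ks, inertias, marker="o")
plt.xlabel("k")
plt.ylabel("inertia")
plt.show()
```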
Running the elbow analysis, we see that from k=5 the distortion no longer varies drastically. Ideally the curve would be almost flat from k=5 onwards; this rarely happens, so other methods can be applied to be more confident about the optimal number of clusters. To be more sure, we can perform a silhouette analysis. The highest silhouette score is obtained with n_clusters=9, but the variation in the score is quite small compared to the other values, so by itself this does not tell us much. The silhouette visualization produced by the same code gives us more information.

Since understanding these representations in depth is not the goal of this post, I will just say that there is no single clear-cut choice. Looking at the plots, we can choose K=5 or K=6: for those values the silhouette score of each cluster is above the average value and there is no strong imbalance in cluster size. Furthermore, in some situations the marketing department may prefer the smallest possible number of clusters/types of customers (this may or may not be the case).

Finally, we can create our Kmeans model with K=5.

The way of evaluating Kmeans models is somewhat more open than for supervised models; we can use internal metrics such as the silhouette, Davies-Bouldin, and Calinski-Harabasz scores. Looking at these metrics, we do not have an excessively good model: the Davies-Bouldin score tells us that the distance between clusters is quite small. This may be due to several factors, but keep in mind that the fuel of a model is the data; if the data does not have sufficient predictive power, you cannot expect exceptional results.

For visualization, we can reduce dimensionality with PCA. For this we use the Prince library, which is focused on exploratory analysis and dimensionality reduction; if you prefer, you can use Sklearn's PCA, the results are identical. First we compute the principal components in 3D and then we plot them (two helper functions in the notebook perform these steps; don't worry too much about their internals).

In the resulting plot the clusters show almost no separation between them and there is no clear division, which is consistent with what the metrics told us.

Something to keep in mind, which very few people do, is the variability captured by the PCA components. Each field contributes its bit of information; if the accumulated variance of the 3 main components adds up to around 80%, we can say that it is acceptable and the representations are trustworthy. If the value is lower, we have to take the visualizations with a grain of salt, since a lot of information is left in the remaining components. The obvious next question is: what is the explained variance of the PCA we just ran? The answer: 48.37% with the first 3 components, which is insufficient to draw informed conclusions.
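Continuing the sketch above, the evaluation metrics and the 3D projection might look roughly like the following. I use sklearn's PCA here because the post notes it is interchangeable with the Prince implementation; the rest is a standard pattern, not the author's exact code.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.metrics import silhouette_score, davies_bouldin_score, calinski_harabasz_score

# X_clean: the preprocessed, outlier-free feature matrix from the previous sketch
km = KMeans(n_clusters=5, n_init=10, random_state=0)
labels = km.fit_predict(X_clean)

print("Silhouette:        ", silhouette_score(X_clean, labels))
print("Davies-Bouldin:    ", davies_bouldin_score(X_clean, labels))   # lower is better
print("Calinski-Harabasz: ", calinski_harabasz_score(X_clean, labels))

# 3D PCA projection for plotting, plus the explained-variance check discussed above
pca = PCA(n_components=3)
coords = pca.fit_transform(X_clean)
print("Cumulative explained variance:", np.cumsum(pca.explained_variance_ratio_))
```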
It turns out that when PCA is run, the spatial structure of the data is not preserved. Luckily, there is a lesser-known method, called t-SNE, that reduces dimensionality while also maintaining the spatial structure. This can help us visualize, since with PCA we have not had much success. If you try it on your own machine, keep in mind that it has a higher computational cost; for this reason I sampled my original dataset, and it still took about 5 minutes to get the result.

The resulting t-SNE plot shows a much greater separation between clusters and lets us draw conclusions more clearly. In fact, comparing the 2D reductions given by PCA and by t-SNE, the improvement with the second method is clear.

Finally, let's explore how the model works: which features are the most important and what the main characteristics of the clusters are.

To see the importance of each variable we will use a typical "trick" for this type of situation: we create a classification model where the "X" is the input of the Kmeans model and the "y" is the clusters predicted by the Kmeans model. The chosen model is an LGBMClassifier; it is quite powerful and works well with both categorical and numerical variables. With this new model trained, we can use the SHAP library to obtain the importance of each feature in the prediction.

From the SHAP summary it can be seen that the housing feature has the greatest predictive power, and that cluster number 4 (green) is mainly differentiated by the loan variable.

Finally, we must analyze the characteristics of the clusters; this part of the study is what is decisive for the business. For that, we obtain the mean (for the numerical variables) and the most frequent value (for the categorical variables) of each feature, per cluster.

We see that the clusters where job=blue-collar do not differ much in their other characteristics. This is not desirable, since it is hard to tell the clients of those clusters apart. In the job=management case we obtain better differentiation. After carrying out the analysis in different ways, the conclusions converge: "We need to improve the results."

Method 2: K-Prototype

If we go back to our original dataset, we see that we have both categorical and numerical variables. Unfortunately, the Kmeans algorithm provided by Sklearn does not accept categorical variables, which forces the original dataset to be modified and drastically altered.

Luckily you are here reading this, and above all, thanks to Zhexue Huang and his article "Extensions to the k-Means Algorithm for Clustering Large Data Sets with Categorical Values", there is a clustering algorithm that accepts categorical variables: K-Prototype. The library that provides it is kmodes.

The procedure is the same as in the previous case. In order not to make this article eternal, let's go to the most interesting parts; remember that you can access the Jupyter notebook here.

Because we have numerical variables, we must make certain modifications to them: it is always recommended that all numerical variables be on similar scales, with distributions as close to Gaussian as possible. Also, because the outlier-detection method I presented (ECOD) only accepts numerical variables, the same transformation must be performed as for the Kmeans method.
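A minimal sketch of these steps and of the K-Prototype fit described next, assuming the kmodes package's KPrototypes implementation. The PowerTransformer is my own illustrative choice for the "similar scales, near-Gaussian" recommendation, and K=5 anticipates the elbow result below.

```python
import pandas as pd
from sklearn.preprocessing import PowerTransformer
from pyod.models.ecod import ECOD
from kmodes.kprototypes import KPrototypes

df = pd.read_csv("train.csv", sep=";").iloc[:, :8]   # separator may differ

num_cols = df.select_dtypes("number").columns
cat_cols = [c for c in df.columns if c not in num_cols]

# Bring numerical variables to similar scales / near-Gaussian shape (illustrative choice)
df[num_cols] = PowerTransformer().fit_transform(df[num_cols])

# ECOD only handles numerical data, so detect outliers on the numeric columns only
clf = ECOD()
clf.fit(df[num_cols])
df_clean = df[clf.predict(df[num_cols]) == 0].reset_index(drop=True)

# K-Prototype needs the positional indices of the categorical columns
cat_idx = [df_clean.columns.get_loc(c) for c in cat_cols]
kproto = KPrototypes(n_clusters=5, init="Huang", random_state=0, n_jobs=-1)
clusters = kproto.fit_predict(df_clean.to_numpy(), categorical=cat_idx)
print("cost:", kproto.cost_)
```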
We then apply the outlier detection model, which tells us which rows to eliminate, finally leaving the dataset we will use as input for the K-Prototype model.

To create the model we first need to obtain the optimal k, again with the Elbow Method. The resulting curve shows that the best option is K=5. Be careful: this algorithm takes considerably longer than the ones normally used; the elbow graph alone needed 86 minutes, something to keep in mind. Now that we are clear about the number of clusters, we just have to create the model.

We already have our model and its predictions; we just need to evaluate it. As we saw before, we can apply several visualizations to get an intuitive idea of how good the model is. Unfortunately, PCA and t-SNE do not admit categorical variables. But don't worry: the Prince library contains the MCA (Multiple Correspondence Analysis) method, which does accept a mixed dataset. In fact, I encourage you to visit the GitHub of this library; it has several very useful methods for different situations.

So the plan is to apply an MCA to reduce the dimensionality and be able to make graphical representations. (Remember that if you want to follow each step 100%, you can take a look at the Jupyter notebook.) The dataframe named mca_3d_df contains the reduced components, and we plot them.

Wow, it doesn't look very good... It is not possible to differentiate the clusters from each other. Should we conclude that the model is simply not good enough? I hope you said something like: "Hey Damian, don't go so fast!! Have you looked at the variability of the 3 components provided by the MCA?"

Indeed, we must check whether the variability of the first 3 components is sufficient to draw conclusions, and the MCA method lets us obtain these values in a very simple way. And here is the interesting part: with our data we obtain basically zero variability. In other words, we cannot draw clear conclusions about our model from the dimensionality reduction provided by MCA. By showing these results I am trying to give an example of what happens in real data projects: good results are not always obtained, but a good data scientist knows how to recognize the causes.

We have one last option to visually determine whether the model created by the K-Prototype method is suitable or not, and the path is simple: apply the same numerical preprocessing used for Kmeans and reduce with PCA. Note that the components provided by the PCA will be the same as for Method 1 (Kmeans), since it is the same dataframe. The resulting plot doesn't look bad; in fact, it bears a certain resemblance to what was obtained with Kmeans.

Finally, we obtain the average value of each cluster and the importance of each of the variables. The variables with the greatest weight are the numerical ones, and the combination of these two features is almost sufficient to differentiate each cluster. In short, it can be said that the results are similar to those of Kmeans.

Method 3: LLM + Kmeans

This combination can be quite powerful and improve the results obtained. Let's get to the point!

LLMs cannot understand written text directly; we need to transform the input for this type of model. For this, word embedding is carried out: it consists of transforming the text into numerical vectors. This encoding is done intelligently, so that phrases with a similar meaning end up with more similar vectors.
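To make the "similar meaning, similar vector" idea concrete, here is a toy sketch using the sentence-transformers library. Both the model name and the example sentences are my own illustrative choices, not taken from the post's embedding_creation.py.

```python
from sentence_transformers import SentenceTransformer
from sklearn.metrics.pairwise import cosine_similarity

# Small general-purpose encoder (illustrative choice)
model = SentenceTransformer("all-MiniLM-L6-v2")

# Each customer row could be unified into one descriptive sentence, e.g.:
sentences = [
    "Customer is 35 years old, works in management, is married and has a housing loan.",
    "Customer is 37 years old, works in management, is married and has a housing loan.",
    "Customer is 21 years old, is a student, is single and has no loans.",
]

vectors = model.encode(sentences)          # one dense vector per sentence
sims = cosine_similarity(vectors)

print(sims[0, 1])  # similar customers -> similarity close to 1
print(sims[0, 2])  # dissimilar customer -> noticeably lower similarity
```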
Word embedding is carried out by so-called transformers, algorithms specialized in this encoding. Typically, you can choose the size of the numerical vector produced by the encoding, and here lies one of the key points: thanks to the large dimension of the vector created by the embedding, small variations in the data can be captured with greater precision. Therefore, if we feed our Kmeans model this information-rich input, it will return better predictions. That is the idea we are pursuing, and these are its steps.

The first step is to encode the information through word embedding. The intention is to take the information of each client and unify it into a text that contains all of their characteristics. This part takes a lot of computing time, which is why I created a script to do this job, called embedding_creation.py. The script collects the values contained in the training dataset and creates a new dataset produced by the embedding. It is quite important that this step is understood. In the end we obtain the dataframe from the embedding, which will be the input of our Kmeans model. This step is one of the most interesting and important ones, since we have created the input for the Kmeans model we are about to build.

The creation and evaluation procedure is similar to the one shown above. In order not to make the post excessively long, only the results of each point are shown; all the code is contained in the Jupyter notebook called embedding, so you can reproduce the results for yourself. In addition, the dataset resulting from the word embedding has been saved in a CSV file called embedding_train.csv; in the Jupyter notebook we load that dataset and create our model from it. We could consider the embedding itself as preprocessing.

We apply the method already presented to detect outliers, ECOD, and create a dataset that does not contain these points. Then we must find out the optimal number of clusters; using the Elbow Method again, we choose k=5. Next we create our Kmeans model with k=5 and obtain the evaluation metrics, which turn out to be very similar to those obtained in the previous cases.

Let's study the representations obtained with PCA. The clusters are much better differentiated than with the traditional method, which is good news. Remember, though, that it is important to take into account the variability contained in the first 3 components of the PCA; from experience, I can say that when it is around 50% (for a 3D PCA), more or less clear conclusions can be drawn. Here the cumulative variability is 40.44%, which is acceptable but not ideal.

One way to visually see how compact the clusters are is by lowering the opacity of the points in the 3D representation, so that where points of one cluster are agglomerated in a certain region, a dark spot can be observed. In the original post this is shown as a gif: there are several regions of space where the points of the same cluster pile up together, which indicates that they are well differentiated from the other points and that the model recognizes them quite well.
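A quick sketch of that opacity trick, assuming plotly for the interactive 3D figure and a dataframe holding the 3 PCA components plus the cluster label (all names are illustrative).

```python
import plotly.express as px

# pca_df: columns "comp1", "comp2", "comp3" with the 3D PCA coordinates,
# plus a "cluster" column with the Kmeans labels (illustrative names)
fig = px.scatter_3d(
    pca_df,
    x="comp1", y="comp2", z="comp3",
    color="cluster",
    opacity=0.3,   # low opacity: dense regions of one cluster show up as dark spots
)
fig.update_traces(marker_size=3)
fig.show()
```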
Even so, it can be seen that some clusters are still not well differentiated (e.g., clusters 1 and 3). For this reason we again carry out a t-SNE analysis, which, remember, reduces dimensionality while maintaining the spatial structure. A noticeable improvement is seen: the clusters do not overlap and there is a clear differentiation between points. Looking at a 2D comparison, the clusters in the t-SNE are again more separated and better differentiated than with the PCA; moreover, the quality gap between the two methods is smaller than it was with the traditional Kmeans.

To understand which variables our Kmeans model relies on, we make the same move as before: we train a classification model (LGBMClassifier) on the cluster labels and analyze the importance of the features. This model is based above all on the "marital" and "job" variables. We also see that there are variables that do not provide much information; in a real project, a new version of the model should be created without these low-information variables. The Kmeans + embedding model is more efficient, since it needs fewer variables to give good predictions. Good news!

We finish with the most revealing and important part. Managers and the business are not interested in PCA, t-SNE or embeddings; what they want is to know the main traits of, in this case, their clients. To do this, we create a table with the predominant profiles found in each of the clusters.

Something very curious happens: there are 3 clusters in which the most frequent job is "management". In them we find a very peculiar behavior: the single managers are younger, the married ones are older, and the divorced ones are the oldest. The balance, on the other hand, behaves differently: single customers have a higher average balance than divorced ones, and married customers a higher one still. This revelation is in line with reality and social patterns, and it reveals very specific customer profiles. This is the magic of data science.

Conclusion

The conclusion is clear: you have to have different tools, because in a real project not all strategies work and you must have resources to add value. It is clearly seen that the model created with the help of the LLM stands out.

Damian Gil — Passionate about data, I transitioned from physics to data science. Worked at Telefonica and HP, and now CTO at Seniority.AI since 2022.


