A need for unsupervised learning or clustering procedures crops up regularly for problems such as customer behavior segmentation, clustering patients with similar symptoms for diagnosis, or anomaly detection. Unsupervised models are always more challenging since interpreting the clusters comes back to strong subject matter knowledge and knowing your data. Profiling the clusters is arguably the most challenging aspect of the work. In a business context it can take weeks of iterating on the model and socialising the results with the business areas before there is broad agreement on their interpretation and how they can be used.
Kmeans, partitioning around medoids (PAM) and Gaussian mixture models are my go-to methods for clustering, and I have had success with all of them. A technique I have not used before but am interested in is unsupervised random forests (URFs). This post will go into some of the detail and intuition behind URFs, with an example on the iris data set comparing the different methods.
# libraries
suppressPackageStartupMessages(library(randomForest))
suppressPackageStartupMessages(library(caret))
suppressPackageStartupMessages(library(cluster))
suppressPackageStartupMessages(library(ggplot2))
suppressPackageStartupMessages(library(RColorBrewer))
# data
data(iris)
set.seed(3984)
# set colours
myColRamp <- colorRampPalette(colors = c("#25591f", "#818c3c", "#72601b"))
Supervised Random Forest
Everyone loves the random forest algorithm. It’s fast, it’s robust and surprisingly accurate for many complex problems. To start off with we’ll fit a normal supervised random forest model. I’ll preface this with the point that a random forest isn’t really the best model for this data. A random forest takes a random sample of features at each split and builds a set of weak learners. Given there are only 4 features in this data set, there are at most 6 distinct pairs of features to choose from when mtry = 2, which limits how different the trees can be. But let’s put that aside and push on, because we all know the iris data set and it makes learning the methods easier.
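As a quick check on that count, choose(4, 2) in base R gives the number of distinct two-feature subsets available at any split:
# number of distinct 2-feature subsets when mtry = 2 and there are 4 predictors
choose(4, 2)
## [1] 6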
# split
holdout <- sample(1:150, 50)
# random forest model
rf <- randomForest(x = iris[-holdout,-5], y = iris$Species[-holdout], mtry = 2, ntree = 2000, proximity = TRUE)
rf
##
## Call:
##  randomForest(x = iris[-holdout, -5], y = iris$Species[-holdout], ntree = 2000, mtry = 2, proximity = TRUE)
##                Type of random forest: classification
##                      Number of trees: 2000
## No. of variables tried at each split: 2
##
##         OOB estimate of  error rate: 6%
## Confusion matrix:
##            setosa versicolor virginica class.error
## setosa         31          0         0  0.00000000
## versicolor      0         34         2  0.05555556
## virginica       0          4        29  0.12121212
# predict on test set
y_predicted <- predict(rf, iris[holdout, -5])
df1 <- data.frame(Orig = iris$Species[holdout], Pred = y_predicted)
confusionMatrix(table(df1$Orig, df1$Pred))
## Confusion Matrix and Statistics
##
##              setosa versicolor virginica
##   setosa         19          0         0
##   versicolor      0         13         1
##   virginica       0          2        15
##
## Overall Statistics
##
##                Accuracy : 0.94
##                  95% CI : (0.8345, 0.9875)
##     No Information Rate : 0.38
##     P-Value [Acc > NIR] : < 2.2e-16
##
##                   Kappa : 0.9095
##  Mcnemar's Test P-Value : NA
##
## Statistics by Class:
##
##                      Class: setosa Class: versicolor Class: virginica
## Sensitivity                   1.00            0.8667           0.9375
## Specificity                   1.00            0.9714           0.9412
## Pos Pred Value                1.00            0.9286           0.8824
## Neg Pred Value                1.00            0.9444           0.9697
## Prevalence                    0.38            0.3000           0.3200
## Detection Rate                0.38            0.2600           0.3000
## Detection Prevalence          0.38            0.2800           0.3400
## Balanced Accuracy             1.00            0.9190           0.9393
As expected, it does a pretty good job on the hold-out sample.
Unsupervised Random Forest
In the unsupervised case we don’t have labels to train on. Instead, like other clustering procedures, we need to find the underlying structure in the data. For an unsupervised random forest the setup is as follows.
- A joint distribution of the explanatory variables is constructed and draws are taken from this distribution to create synthetic data. In most cases the same number of draws as in the real data set will be taken.
- The real and synthetic data are combined. A label is then created, say 1 for the real data and 0 for the synthetic data.
- The random forest model then works in the same way, building a set of weak learners and determining whether each observation is real or synthetic (a manual sketch of this setup is shown after the list).
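To make the setup concrete, here is a minimal sketch of how those steps could be done by hand. The randomForest() function does an equivalent job internally when y is omitted; make_synthetic is just a hypothetical helper that samples each column independently from its empirical distribution, which breaks the dependence structure between the variables.
# hypothetical helper: sample each column independently with replacement
make_synthetic <- function(x) {
  as.data.frame(lapply(x, function(col) sample(col, length(col), replace = TRUE)))
}
real <- iris[, -5]
synth <- make_synthetic(real)
# combine real and synthetic data and label them: 1 = real, 0 = synthetic
combined <- rbind(real, synth)
labels <- factor(c(rep(1, nrow(real)), rep(0, nrow(synth))))
# a supervised forest on the labelled data approximates the unsupervised call used below
rf_manual <- randomForest(x = combined, y = labels, ntree = 500, proximity = TRUE)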
The key output we want is the proximity (or similarity/dissimilarity) matrix. This is an n x n matrix where each entry is the proportion of trees in which observations i and j ended up in the same terminal node. For example, if 100 trees were fit and the (i, j) entry is 0.9, observations i and j were in the same terminal node 90 times out of 100. With this matrix we can then perform a normal clustering procedure such as kmeans or PAM (a number of cool things can be done once the proximity matrix is created).
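Continuing the manual sketch above, one common way to use the proximity matrix is to convert it to a dissimilarity (1 minus the proximity) and cluster on that directly. This is only an illustration; the code below simply passes the proximity matrix returned by randomForest() straight to pam().
# keep only the proximities among the real observations (the first 150 rows/columns)
prox_manual <- rf_manual$proximity[1:nrow(real), 1:nrow(real)]
diss <- as.dist(1 - prox_manual)
# PAM on the dissimilarity, plus an optional hierarchical clustering for comparison
pam_diss <- pam(diss, k = 3, diss = TRUE)
hc <- hclust(diss, method = "ward.D2")
table(iris$Species, pam_diss$clustering)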
rf2 <- randomForest(x = iris[,-5], mtry = 2, ntree = 2000, proximity = TRUE)
rf2
##
## Call:
##  randomForest(x = iris[, -5], ntree = 2000, mtry = 2, proximity = TRUE)
##                Type of random forest: unsupervised
##                      Number of trees: 2000
## No. of variables tried at each split: 2
Now that the unsupervised random forest model is fit, we’ll extract the proximity matrix and use it as input to a PAM procedure.
prox <- rf2$proximity
pam.rf <- pam(prox, 3)
pred <- cbind(pam.rf$clustering, iris$Species)
table(pred[,2], pred[,1])
##
##      1  2  3
##   1 47  3  0
##   2  0 49  1
##   3  0 15 35
Clusters <- as.factor(pam.rf$cluster)
Species <- iris$Species
ggplot(iris, aes(x = Petal.Length, y = Petal.Width, col = Clusters, pch = Species)) + geom_point(size = 3) +
scale_color_manual(values = myColRamp(3))
Only 19 observations were misclassified, which isn’t too bad for an unsupervised procedure. Strangely, there is one point in the Setosa bunch that was classified as Versicolor; ordinarily this would not occur.
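To see exactly which points were misallocated, a quick (purely illustrative) check is to map each cluster to its majority species, as the table above suggests, and filter the mismatches:
# assume cluster 1 ~ setosa, 2 ~ versicolor, 3 ~ virginica, as in the table above
mapped <- factor(pam.rf$clustering, levels = 1:3, labels = levels(iris$Species))
# rows where the cluster assignment disagrees with the true species
iris[mapped != iris$Species, ]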
Comparison with straight kmeans and PAM
A standard kmeans and PAM procedure will be fit for comparison.
# straight kmeans
xiris <- scale(iris[,-5])
km <- kmeans(xiris, 3, nstart = 200)
pred.km <- cbind(km$cluster, iris$Species)
table(pred.km[,2], pred.km[,1])
##
##      1  2  3
##   1 50  0  0
##   2  0 39 11
##   3  0 14 36
Clusters <- as.factor(km$cluster)
Species <- iris$Species
ggplot(iris, aes(x = Petal.Length, y = Petal.Width, col = Clusters, pch = Species)) + geom_point(size = 3) +
scale_color_manual(values = myColRamp(3))
pm <- pam(xiris, 3)
pred.pm <- cbind(pm$clustering, iris$Species)
table(pred.pm[,2], pred.pm[,1])
##
##      1  2  3
##   1 50  0  0
##   2  0 41  9
##   3  0 14 36
Clusters <- as.factor(pm$cluster)
Species <- iris$Species
ggplot(iris, aes(x = Petal.Length, y = Petal.Width, col = Clusters, pch = Species)) + geom_point(size = 3) +
scale_color_manual(values = myColRamp(3))
It’s fair to assume the clustering procedures do reasonably well, so the largest count in each row corresponds to the correctly allocated observations (a small helper for this count is sketched after the list).
In these examples:
- kmeans incorrectly allocates 25 observations,
- PAM incorrectly allocates 23 observations, and
- the unsupervised random forest incorrectly allocates 19.
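These counts can be read straight off the tables above; as a rough cross-check, a small hypothetical helper that assumes each species is matched to its majority cluster gives the same numbers:
# misallocated = total observations minus the sum of per-species majority counts
# (assumes each species maps to a distinct majority cluster, which holds here)
misallocated <- function(tab) sum(tab) - sum(apply(tab, 1, max))
misallocated(table(iris$Species, km$cluster))          # kmeans
misallocated(table(iris$Species, pm$clustering))       # PAM
misallocated(table(iris$Species, pam.rf$clustering))   # unsupervised RF + PAM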
Inspecting the plots, the random forest model tends to do a little better at clustering the fringe Versicolor/Virginica points around a petal length of 5. Even though the random forest procedure probably isn’t the best suited to this data set with only 4 independent variables, it still does well. With a more complex data set with many independent variables I expect it to work very well.