# How to Classify a Product Catalogue Using Ensembles


## The Problem Statement

The Otto Product Classification Challenge was a competition hosted on Kaggle, a website dedicated to solving complex data science problems. The goal of the challenge was to classify products into the correct category based on their recorded features.

## Data

The organizers provided a training data set containing 61878 entries and a test data set containing 144368 entries. The data contained 93 features, based on which the products had to be classified. The target column in the training data set indicated the category of the product. The training and test data sets are available for download here. A sample of the training data set can be seen in Figure 1.

### Solution Approach

The features in the training data set had a large variance. Applying the Anscombe transform to the features reduced this variance; in the process, it also transformed the features from an approximately Poisson distribution into an approximately normal distribution. The rest of the data was fairly clean, so we could use it directly as input to the classification algorithm. For classification, we tried two approaches: one using the xgboost algorithm, and the other using the deep learning algorithm provided by h2o. xgboost is an implementation of the extreme gradient boosting algorithm, which can be used effectively for classification.
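The Anscombe transform maps a count x to 2·sqrt(x + 3/8), which approximately stabilizes the variance of Poisson-distributed data. A minimal sketch in Python (the simulated Poisson features below are an illustration, not the Otto data):

```python
import numpy as np

def anscombe(x):
    """Anscombe variance-stabilizing transform for (approximately)
    Poisson-distributed counts: y = 2 * sqrt(x + 3/8)."""
    return 2.0 * np.sqrt(np.asarray(x, dtype=float) + 3.0 / 8.0)

# Two simulated Poisson features: for Poisson data, the variance
# grows with the mean, so the raw variances differ widely.
rng = np.random.default_rng(0)
low = rng.poisson(lam=2.0, size=100_000)
high = rng.poisson(lam=50.0, size=100_000)

print(low.var(), high.var())                      # roughly 2 vs 50
print(anscombe(low).var(), anscombe(high).var())  # both near 1
```

After the transform, both features have a variance near 1 regardless of their original mean, which puts them on a comparable scale for the downstream classifiers.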

As discussed in the TFI blog, the gradient boosting algorithm is an ensemble method based on decision trees. For classification, at every branch in a tree it tries to eliminate a category, so that each leaf node is finally left with only one category. To do this, it needs to build trees for each category separately; but since we had only 9 categories, we decided to use it. The deep learning algorithm provided by h2o is based on a multi-layer neural network, trained using a variation of the gradient descent method. We used multi-class logarithmic loss as the error metric for selecting a good model, as this was also the metric used for ranking by Kaggle.
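Multi-class logarithmic loss is the mean negative log of the probability a model assigns to each sample's true class. A small Python sketch (the toy labels and probabilities are illustrative, not Otto data):

```python
import numpy as np

def multiclass_logloss(y_true, probs, eps=1e-15):
    """Multi-class logarithmic loss: the mean negative log-probability
    assigned to each sample's true class.
    y_true: integer labels, shape (n,); probs: shape (n, n_classes)."""
    probs = np.clip(np.asarray(probs, dtype=float), eps, 1.0)
    probs = probs / probs.sum(axis=1, keepdims=True)  # renormalize rows
    rows = np.arange(len(y_true))
    return -np.log(probs[rows, np.asarray(y_true)]).mean()

# toy example: 3 samples, 3 classes
y_true = [0, 2, 1]
probs = [[0.7, 0.2, 0.1],
         [0.1, 0.1, 0.8],
         [0.2, 0.6, 0.2]]
loss = multiclass_logloss(y_true, probs)
print(round(loss, 4))  # 0.3635
```

The clipping guards against taking the log of an exact zero, which would make the loss infinite; Kaggle's scoring applies the same safeguard.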

### Building the Classifier

We created a classifier using xgboost with the following configuration:

```r
param <- list("objective"        = "multi:softprob",
              "eval_metric"      = "mlogloss",
              "num_class"        = 9,
              "nthread"          = 8,
              "eta"              = 0.1,
              "max_depth"        = 27,
              "gamma"            = 2,
              "min_child_weight" = 3,
              "subsample"        = 0.75,
              "colsample_bytree" = 0.85)
nround <- 5000
classifier <- xgboost(param = param, data = x, label = y, nrounds = nround)
```

We also created another classifier using the deep-learning algorithm provided by h2o. The configuration for this algorithm was as follows:

```r
classification = TRUE,
activation = "RectifierWithDropout",
hidden = c(1024, 512, 256),
hidden_dropout_ratios = c(0.5, 0.5, 0.5),
input_dropout_ratio = 0.05,
epochs = 50,
l1 = 1e-5,
l2 = 1e-5,
rho = 0.99,
epsilon = 1e-8,
train_samples_per_iteration = 4000,
max_w2 = 10,
seed = 1
```
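The post's title mentions an ensemble, though it does not spell out how the two models' predictions were combined. A common approach is to blend the class-probability matrices with a weighted average; the sketch below is hypothetical (the names, weights, and toy numbers are assumptions, not the competition submission):

```python
import numpy as np

def blend(prob_a, prob_b, w=0.5):
    """Weighted average of two models' class-probability matrices,
    renormalized so every row sums to 1."""
    blended = w * prob_a + (1.0 - w) * prob_b
    return blended / blended.sum(axis=1, keepdims=True)

# toy probabilities from two hypothetical classifiers (2 samples, 3 classes)
xgb_probs = np.array([[0.6, 0.3, 0.1],
                      [0.2, 0.5, 0.3]])
dl_probs = np.array([[0.8, 0.1, 0.1],
                     [0.1, 0.7, 0.2]])
ensemble = blend(xgb_probs, dl_probs, w=0.5)
print(ensemble)  # first row -> [0.7, 0.2, 0.1]
```

Because multi-class log loss penalizes confidently wrong predictions, averaging the probabilities of two reasonably uncorrelated models often lowers the loss relative to either model alone; the weight w can be tuned on a validation split.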