Gauging How the Support Vector Machine Algorithm Behaves with Hyperparameter Tuning
The “juice.csv” data contains purchase information for Citrus Hill or Minute Maid orange juice. A description of the variables follows.
- Purchase: A factor with levels CH and MM indicating whether the customer purchased Citrus Hill or Minute Maid Orange Juice
- WeekofPurchase: Week of purchase
- StoreID: Store ID
- PriceCH: Price charged for CH
- PriceMM: Price charged for MM
- DiscCH: Discount offered for CH
- DiscMM: Discount offered for MM
- SpecialCH: Indicator of special on CH
- SpecialMM: Indicator of special on MM
- LoyalCH: Customer brand loyalty for CH
- SalePriceMM: Sale price for MM
- SalePriceCH: Sale price for CH
- PriceDiff: Sale price of MM less sale price of CH
- Store7: A factor with levels No and Yes indicating whether the sale is at Store 7
- PctDiscMM: Percentage discount for MM
- PctDiscCH: Percentage discount for CH
- ListPriceDiff: List price of MM less list price of CH
- STORE: Which of 5 possible stores the sale occurred at
The price difference between the two brands is a more informative attribute than their individual prices. Since StoreID already identifies the store, STORE and Store7 need not be included in the dataset. ListPriceDiff is likewise redundant, as the data already contains the sale price difference (PriceDiff).
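The pruning step above can be sketched with pandas. The tiny DataFrame below is a hypothetical stand-in for a few rows of juice.csv (the values are illustrative, not from the real data); only the column names match the description above.

```python
import pandas as pd

# Hypothetical stand-in for a few rows of juice.csv (values are made up).
df = pd.DataFrame({
    "Purchase":      ["CH", "MM", "CH"],
    "PriceCH":       [1.75, 1.79, 1.69],
    "PriceMM":       [1.99, 2.09, 1.99],
    "PriceDiff":     [0.24, 0.30, 0.30],
    "ListPriceDiff": [0.24, 0.30, 0.30],
    "StoreID":       [1, 7, 2],
    "Store7":        ["No", "Yes", "No"],
    "STORE":         [1, 0, 2],
})

# Drop the redundant columns: StoreID already encodes the store, so STORE and
# Store7 add nothing, and ListPriceDiff duplicates the information in PriceDiff.
pruned = df.drop(columns=["STORE", "Store7", "ListPriceDiff"])
print(list(pruned.columns))
```

The same `drop` call would apply unchanged to the full dataset once it is loaded with `pd.read_csv("juice.csv")`.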
After comparing the Train and Test Scores for all the models:
Basic Models
Among the basic models, the SVM with a linear kernel performs best, with the lowest error scores on both the train and test datasets.
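A minimal sketch of this basic-model comparison, fitting one untuned SVM per kernel and recording train and test error rates. It uses synthetic two-class data as a stand-in (an assumption; the real workflow would use the pruned juice.csv features with Purchase as the target).

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic two-class data as a stand-in for the juice features.
X, y = make_classification(n_samples=500, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# Fit one untuned ("basic") SVM per kernel; error rate = 1 - accuracy.
errors = {}
for kernel in ("linear", "rbf", "poly"):
    clf = SVC(kernel=kernel).fit(X_train, y_train)
    errors[kernel] = (1 - clf.score(X_train, y_train),
                      1 - clf.score(X_test, y_test))

for kernel, (train_err, test_err) in errors.items():
    print(f"{kernel:>6}: train error {train_err:.3f}, test error {test_err:.3f}")
```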
Tuned Models
For both the RBF and linear kernels, the cost parameter of the best model is 0.31 and their scores are almost equal. For the polynomial kernel, the best cost parameter is 9.61, but the model does not perform as well.
Taking the accuracy rate and error rate into consideration, both tuned models (SVM with an RBF kernel and SVM with a linear kernel) are good, but the RBF model is slightly better because it is less prone to overfitting.
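The tuning step can be sketched with a cross-validated grid search over the cost parameter `C` for each kernel. The data and the log-spaced grid are assumptions for illustration; the fitted values of 0.31 and 9.61 reported above would come from searching the real juice.csv split.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Stand-in data again; the real tuning would run on the juice.csv features.
X, y = make_classification(n_samples=500, n_features=8, random_state=0)

# Search the cost parameter C on a log-spaced grid for each kernel, scoring
# by 5-fold cross-validated accuracy.
results = {}
for kernel in ("linear", "rbf", "poly"):
    grid = GridSearchCV(SVC(kernel=kernel),
                        {"C": np.logspace(-2, 2, 9)},
                        cv=5)
    grid.fit(X, y)
    results[kernel] = (grid.best_params_["C"], grid.best_score_)

for kernel, (C, score) in results.items():
    print(f"{kernel:>6}: best C = {C:.2f}, CV accuracy = {score:.3f}")
```

Comparing `best_score_` across kernels, alongside the train/test error of each refitted best model, is one way to arrive at the kind of conclusion drawn above.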
Removing such redundant variables from the dataset and keeping only the more relevant attributes helps us avoid overfitting the model.