vignettes/introduction.Rmd
The SuperML R package is designed to unify the model training process in R, much like Python does. People often spend a lot of time searching for packages and figuring out the syntax for training machine learning models in R. This behaviour is especially apparent in users who frequently switch between R and Python. This package provides a scikit-learn-like interface (fit, predict) to train models faster.
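Every trainer in superml follows the same fit/predict pattern. Here is a minimal sketch of the workflow (the choice of trainer and the train_data / test_data names are purely illustrative):
# the generic superml workflow: instantiate a trainer, fit, predict
library(superml)
model <- RFTrainer$new(n_estimators = 100)  # any trainer class works the same way
model$fit(X = train_data, y = "target")     # train_data: a data.frame or data.table
preds <- model$predict(df = test_data)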
In addition to building machine learning models, superml provides handy functionality for feature engineering.
This ambitious package is my ongoing effort to help the R community build ML models more easily and quickly.
You can install the latest CRAN version using (recommended):
install.packages("superml")
You can install the development version directly from GitHub using:
devtools::install_github("saraswatmks/superml")
For machine learning, superml builds on existing R packages. Hence, installing superml does not install all of its dependencies up front. Instead, while training a model, superml will automatically install the required package if it is not found. Still, if you want to install all dependencies at once, you can simply do:
install.packages("superml", dependencies=TRUE)
This package uses existing R packages to build machine learning models. In this tutorial, we’ll use the data.table package for all data manipulation tasks.
We’ll quickly prepare the data set to be served for model training.
load("../data/reg_train.rda")
# if the above doesn't work, you can try: load("reg_train.rda")
library(data.table)
library(caret)
#> Loading required package: ggplot2
#> Loading required package: lattice
library(superml)
#> Loading required package: R6
library(Metrics)
#>
#> Attaching package: 'Metrics'
#> The following objects are masked from 'package:caret':
#>
#> precision, recall
head(reg_train)
#> Id MSSubClass MSZoning LotFrontage LotArea Street Alley LotShape LandContour
#> 1: 1 60 RL 65 8450 Pave <NA> Reg Lvl
#> 2: 2 20 RL 80 9600 Pave <NA> Reg Lvl
#> 3: 3 60 RL 68 11250 Pave <NA> IR1 Lvl
#> 4: 4 70 RL 60 9550 Pave <NA> IR1 Lvl
#> 5: 5 60 RL 84 14260 Pave <NA> IR1 Lvl
#> 6: 6 50 RL 85 14115 Pave <NA> IR1 Lvl
#> Utilities LotConfig LandSlope Neighborhood Condition1 Condition2 BldgType
#> 1: AllPub Inside Gtl CollgCr Norm Norm 1Fam
#> 2: AllPub FR2 Gtl Veenker Feedr Norm 1Fam
#> 3: AllPub Inside Gtl CollgCr Norm Norm 1Fam
#> 4: AllPub Corner Gtl Crawfor Norm Norm 1Fam
#> 5: AllPub FR2 Gtl NoRidge Norm Norm 1Fam
#> 6: AllPub Inside Gtl Mitchel Norm Norm 1Fam
#> HouseStyle OverallQual OverallCond YearBuilt YearRemodAdd RoofStyle RoofMatl
#> 1: 2Story 7 5 2003 2003 Gable CompShg
#> 2: 1Story 6 8 1976 1976 Gable CompShg
#> 3: 2Story 7 5 2001 2002 Gable CompShg
#> 4: 2Story 7 5 1915 1970 Gable CompShg
#> 5: 2Story 8 5 2000 2000 Gable CompShg
#> 6: 1.5Fin 5 5 1993 1995 Gable CompShg
#> Exterior1st Exterior2nd MasVnrType MasVnrArea ExterQual ExterCond Foundation
#> 1: VinylSd VinylSd BrkFace 196 Gd TA PConc
#> 2: MetalSd MetalSd None 0 TA TA CBlock
#> 3: VinylSd VinylSd BrkFace 162 Gd TA PConc
#> 4: Wd Sdng Wd Shng None 0 TA TA BrkTil
#> 5: VinylSd VinylSd BrkFace 350 Gd TA PConc
#> 6: VinylSd VinylSd None 0 TA TA Wood
#> BsmtQual BsmtCond BsmtExposure BsmtFinType1 BsmtFinSF1 BsmtFinType2
#> 1: Gd TA No GLQ 706 Unf
#> 2: Gd TA Gd ALQ 978 Unf
#> 3: Gd TA Mn GLQ 486 Unf
#> 4: TA Gd No ALQ 216 Unf
#> 5: Gd TA Av GLQ 655 Unf
#> 6: Gd TA No GLQ 732 Unf
#> BsmtFinSF2 BsmtUnfSF TotalBsmtSF Heating HeatingQC CentralAir Electrical
#> 1: 0 150 856 GasA Ex Y SBrkr
#> 2: 0 284 1262 GasA Ex Y SBrkr
#> 3: 0 434 920 GasA Ex Y SBrkr
#> 4: 0 540 756 GasA Gd Y SBrkr
#> 5: 0 490 1145 GasA Ex Y SBrkr
#> 6: 0 64 796 GasA Ex Y SBrkr
#> 1stFlrSF 2ndFlrSF LowQualFinSF GrLivArea BsmtFullBath BsmtHalfBath FullBath
#> 1: 856 854 0 1710 1 0 2
#> 2: 1262 0 0 1262 0 1 2
#> 3: 920 866 0 1786 1 0 2
#> 4: 961 756 0 1717 1 0 1
#> 5: 1145 1053 0 2198 1 0 2
#> 6: 796 566 0 1362 1 0 1
#> HalfBath BedroomAbvGr KitchenAbvGr KitchenQual TotRmsAbvGrd Functional
#> 1: 1 3 1 Gd 8 Typ
#> 2: 0 3 1 TA 6 Typ
#> 3: 1 3 1 Gd 6 Typ
#> 4: 0 3 1 Gd 7 Typ
#> 5: 1 4 1 Gd 9 Typ
#> 6: 1 1 1 TA 5 Typ
#> Fireplaces FireplaceQu GarageType GarageYrBlt GarageFinish GarageCars
#> 1: 0 <NA> Attchd 2003 RFn 2
#> 2: 1 TA Attchd 1976 RFn 2
#> 3: 1 TA Attchd 2001 RFn 2
#> 4: 1 Gd Detchd 1998 Unf 3
#> 5: 1 TA Attchd 2000 RFn 3
#> 6: 0 <NA> Attchd 1993 Unf 2
#> GarageArea GarageQual GarageCond PavedDrive WoodDeckSF OpenPorchSF
#> 1: 548 TA TA Y 0 61
#> 2: 460 TA TA Y 298 0
#> 3: 608 TA TA Y 0 42
#> 4: 642 TA TA Y 0 35
#> 5: 836 TA TA Y 192 84
#> 6: 480 TA TA Y 40 30
#> EnclosedPorch 3SsnPorch ScreenPorch PoolArea PoolQC Fence MiscFeature
#> 1: 0 0 0 0 <NA> <NA> <NA>
#> 2: 0 0 0 0 <NA> <NA> <NA>
#> 3: 0 0 0 0 <NA> <NA> <NA>
#> 4: 272 0 0 0 <NA> <NA> <NA>
#> 5: 0 0 0 0 <NA> <NA> <NA>
#> 6: 0 320 0 0 <NA> MnPrv Shed
#> MiscVal MoSold YrSold SaleType SaleCondition SalePrice
#> 1: 0 2 2008 WD Normal 208500
#> 2: 0 5 2007 WD Normal 181500
#> 3: 0 9 2008 WD Normal 223500
#> 4: 0 2 2006 WD Abnorml 140000
#> 5: 0 12 2008 WD Normal 250000
#> 6: 700 10 2009 WD Normal 143000
split <- createDataPartition(y = reg_train$SalePrice, p = 0.7)
xtrain <- reg_train[split$Resample1]
xtest <- reg_train[!split$Resample1]
# remove features with 90% or more missing values
# we will also remove the Id column because it doesn't contain
# any useful information
na_cols <- colSums(is.na(xtrain)) / nrow(xtrain)
na_cols <- names(na_cols[which(na_cols > 0.9)])
xtrain[, c(na_cols, "Id") := NULL]
xtest[, c(na_cols, "Id") := NULL]
# encode categorical variables
cat_cols <- names(xtrain)[sapply(xtrain, is.character)]
for (c in cat_cols) {
    lbl <- LabelEncoder$new()
    # fit on the combined train + test values so both splits share one mapping
    lbl$fit(c(xtrain[[c]], xtest[[c]]))
    xtrain[[c]] <- lbl$transform(xtrain[[c]])
    xtest[[c]] <- lbl$transform(xtest[[c]])
}
#> The data contains NA values. Imputing NA with 'NA'
#> ... (this message repeats once for each of the remaining categorical columns)
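If LabelEncoder is new to you, here is a standalone toy illustration of the API used above (the exact integer codes assigned are an implementation detail):
# toy illustration: fit learns the label -> integer mapping,
# transform applies that mapping to any vector of labels
enc <- LabelEncoder$new()
enc$fit(c("red", "green", "blue", "red"))
enc$transform(c("blue", "red"))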
# remove noisy columns
noise <- c('GrLivArea','TotalBsmtSF')
xtrain[, c(noise) := NULL]
xtest[, c(noise) := NULL]
# fill missing values with -1
xtrain[is.na(xtrain)] <- -1
xtest[is.na(xtest)] <- -1
KNN Regression
knn <- KNNTrainer$new(k = 2, prob = TRUE, type = 'reg')
knn$fit(train = xtrain, test = xtest, y = 'SalePrice')
probs <- knn$predict(type = 'prob')
labels <- knn$predict(type = 'raw')
rmse(actual = xtest$SalePrice, predicted = labels)
#> [1] 43343.59
SVM Regression
svm <- SVMTrainer$new()
svm$fit(xtrain, 'SalePrice')
pred <- svm$predict(xtest)
rmse(actual = xtest$SalePrice, predicted = pred)
Simple Regression
lf <- LMTrainer$new(family="gaussian")
lf$fit(X = xtrain, y = "SalePrice")
summary(lf$model)
#>
#> Call:
#> stats::glm(formula = f, family = self$family, data = X, weights = self$weights)
#>
#> Deviance Residuals:
#> Min 1Q Median 3Q Max
#> -364148 -14978 -1563 14119 275341
#>
#> Coefficients:
#> Estimate Std. Error t value Pr(>|t|)
#> (Intercept) 1.888e+05 1.699e+06 0.111 0.911565
#> MSSubClass -2.600e+02 6.904e+01 -3.766 0.000176 ***
#> MSZoning -5.623e+02 1.882e+03 -0.299 0.765141
#> LotFrontage -7.255e+00 3.602e+01 -0.201 0.840407
#> LotArea 2.612e-01 1.285e-01 2.033 0.042336 *
#> Street -3.948e+04 1.876e+04 -2.105 0.035595 *
#> LotShape 2.578e+03 2.241e+03 1.150 0.250245
#> LandContour 5.433e+02 2.353e+03 0.231 0.817477
#> Utilities -7.064e+04 3.647e+04 -1.937 0.053066 .
#> LotConfig 1.401e+03 1.174e+03 1.194 0.232780
#> LandSlope 1.204e+04 5.486e+03 2.196 0.028366 *
#> Neighborhood 1.061e+02 2.059e+02 0.516 0.606262
#> Condition1 -4.813e+03 1.063e+03 -4.526 6.76e-06 ***
#> Condition2 -2.350e+04 6.590e+03 -3.566 0.000380 ***
#> BldgType 3.006e+03 2.849e+03 1.055 0.291571
#> HouseStyle 2.096e+03 9.981e+02 2.100 0.036013 *
#> OverallQual 1.586e+04 1.495e+03 10.609 < 2e-16 ***
#> OverallCond 6.405e+03 1.313e+03 4.879 1.25e-06 ***
#> YearBuilt 2.703e+02 8.903e+01 3.036 0.002466 **
#> YearRemodAdd 2.021e+02 8.744e+01 2.312 0.021005 *
#> RoofStyle 4.510e+03 2.229e+03 2.023 0.043348 *
#> RoofMatl -1.136e+04 2.377e+03 -4.777 2.06e-06 ***
#> Exterior1st -8.005e+02 5.848e+02 -1.369 0.171387
#> Exterior2nd 6.632e+02 5.115e+02 1.297 0.195078
#> MasVnrType 2.882e+03 1.752e+03 1.645 0.100265
#> MasVnrArea 3.275e+01 7.942e+00 4.123 4.07e-05 ***
#> ExterQual 5.747e+02 2.551e+03 0.225 0.821784
#> ExterCond 1.672e+03 2.677e+03 0.624 0.532508
#> Foundation -3.433e+03 2.011e+03 -1.707 0.088133 .
#> BsmtQual 6.071e+03 1.568e+03 3.871 0.000116 ***
#> BsmtCond -3.050e+03 2.109e+03 -1.446 0.148473
#> BsmtExposure 1.218e+03 9.786e+02 1.244 0.213654
#> BsmtFinType1 -9.457e+02 7.540e+02 -1.254 0.210109
#> BsmtFinSF1 8.958e+00 6.150e+00 1.457 0.145543
#> BsmtFinType2 -1.720e+03 1.404e+03 -1.225 0.220811
#> BsmtFinSF2 2.037e+01 1.194e+01 1.706 0.088243 .
#> BsmtUnfSF 2.187e+00 5.858e+00 0.373 0.708977
#> Heating 6.077e+03 4.809e+03 1.264 0.206639
#> HeatingQC -2.847e+03 1.558e+03 -1.827 0.068024 .
#> CentralAir 2.633e+03 5.683e+03 0.463 0.643300
#> Electrical 2.720e+03 1.577e+03 1.725 0.084836 .
#> `1stFlrSF` 5.067e+01 7.396e+00 6.852 1.31e-11 ***
#> `2ndFlrSF` 5.679e+01 6.419e+00 8.848 < 2e-16 ***
#> LowQualFinSF 5.411e+01 2.526e+01 2.142 0.032449 *
#> BsmtFullBath 1.143e+04 3.328e+03 3.435 0.000619 ***
#> BsmtHalfBath 4.301e+03 4.830e+03 0.891 0.373401
#> FullBath 9.536e+03 3.595e+03 2.652 0.008127 **
#> HalfBath -7.362e+01 3.264e+03 -0.023 0.982008
#> BedroomAbvGr -6.066e+03 2.114e+03 -2.869 0.004208 **
#> KitchenAbvGr -1.718e+04 6.530e+03 -2.631 0.008642 **
#> KitchenQual 9.137e+03 2.012e+03 4.542 6.28e-06 ***
#> TotRmsAbvGrd 2.167e+03 1.525e+03 1.421 0.155661
#> Functional -4.019e+03 1.537e+03 -2.615 0.009062 **
#> Fireplaces -2.614e+03 2.784e+03 -0.939 0.347976
#> FireplaceQu 4.065e+03 1.444e+03 2.815 0.004971 **
#> GarageType -1.152e+03 1.219e+03 -0.945 0.344867
#> GarageYrBlt -7.813e+00 4.635e+00 -1.686 0.092172 .
#> GarageFinish 1.228e+03 1.571e+03 0.781 0.434888
#> GarageCars 1.522e+04 3.679e+03 4.136 3.85e-05 ***
#> GarageArea 2.705e+00 1.211e+01 0.223 0.823273
#> GarageQual 5.214e+03 2.510e+03 2.077 0.038066 *
#> GarageCond -4.059e+03 2.514e+03 -1.615 0.106692
#> PavedDrive -2.392e+03 3.531e+03 -0.677 0.498373
#> WoodDeckSF 2.770e+01 9.862e+00 2.809 0.005076 **
#> OpenPorchSF 4.027e-01 1.821e+01 0.022 0.982365
#> EnclosedPorch 1.718e+01 2.012e+01 0.854 0.393527
#> `3SsnPorch` 2.146e+01 3.481e+01 0.617 0.537626
#> ScreenPorch 5.179e+01 2.055e+01 2.520 0.011906 *
#> PoolArea -4.865e+01 3.316e+01 -1.467 0.142635
#> Fence -2.237e+03 1.461e+03 -1.532 0.125958
#> MiscVal -5.616e-01 2.129e+00 -0.264 0.791964
#> MoSold 1.543e+01 4.139e+02 0.037 0.970262
#> YrSold -5.887e+02 8.479e+02 -0.694 0.487643
#> SaleType 2.424e+03 1.372e+03 1.767 0.077557 .
#> SaleCondition 1.885e+02 1.417e+03 0.133 0.894166
#> ---
#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#>
#> (Dispersion parameter for gaussian family taken to be 1159889712)
#>
#> Null deviance: 6.7241e+12 on 1023 degrees of freedom
#> Residual deviance: 1.1007e+12 on 949 degrees of freedom
#> AIC: 24353
#>
#> Number of Fisher Scoring iterations: 2
predictions <- lf$predict(df = xtest)
rmse(actual = xtest$SalePrice, predicted = predictions)
#> [1] 32614.51
Lasso Regression
lf <- LMTrainer$new(family = "gaussian", alpha = 1, lambda = 1000)
lf$fit(X = xtrain, y = "SalePrice")
predictions <- lf$predict(df = xtest)
rmse(actual = xtest$SalePrice, predicted = predictions)
#> [1] 34648.74
Ridge Regression
lf <- LMTrainer$new(family = "gaussian", alpha = 0)
lf$fit(X = xtrain, y = "SalePrice")
predictions <- lf$predict(df = xtest)
rmse(actual = xtest$SalePrice, predicted = predictions)
#> [1] 34613.02
Linear Regression with CV
lf <- LMTrainer$new(family = "gaussian")
lf$cv_model(X = xtrain, y = 'SalePrice', nfolds = 5, parallel = FALSE)
predictions <- lf$cv_predict(df = xtest)
coefs <- lf$get_importance()
rmse(actual = xtest$SalePrice, predicted = predictions)
Random Forest
rf <- RFTrainer$new(n_estimators = 500, classification = 0)
rf$fit(X = xtrain, y = "SalePrice")
pred <- rf$predict(df = xtest)
rf$get_importance()
#> tmp.order.tmp..decreasing...TRUE..
#> OverallQual 844737289071
#> GarageCars 519341354161
#> GarageArea 469350341639
#> 1stFlrSF 464828647849
#> YearBuilt 350171916777
#> BsmtFinSF1 349347376479
#> FullBath 278331177914
#> 2ndFlrSF 270448721224
#> GarageYrBlt 258897020738
#> TotRmsAbvGrd 210969968426
#> LotArea 195978890087
#> ExterQual 195576344093
#> YearRemodAdd 183031820680
#> MasVnrArea 180322788352
#> KitchenQual 133568555304
#> FireplaceQu 124804980078
#> Fireplaces 123094397261
#> Foundation 96858243861
#> OpenPorchSF 95731351066
#> BsmtQual 90675977573
#> LotFrontage 87866509903
#> Neighborhood 74956170518
#> WoodDeckSF 74632794732
#> BsmtUnfSF 73012722415
#> BsmtFinType1 56543231903
#> Exterior2nd 52642308152
#> HeatingQC 49350276110
#> BedroomAbvGr 43892140856
#> GarageType 43585164217
#> HalfBath 38717126668
#> MoSold 38711422395
#> OverallCond 37162748231
#> MSSubClass 36665510934
#> BsmtFullBath 34874984280
#> HouseStyle 29848424083
#> RoofStyle 28956911465
#> YrSold 26846018256
#> LotShape 26371507898
#> GarageFinish 26023355141
#> Exterior1st 24326657737
#> PoolArea 24147348890
#> BsmtExposure 19732168164
#> SaleCondition 18951670570
#> MSZoning 18503791949
#> MasVnrType 17644070801
#> RoofMatl 17617779523
#> LotConfig 16496716294
#> Condition1 15808322329
#> LandContour 15129046563
#> GarageQual 15026531772
#> SaleType 13751102319
#> BldgType 12597027423
#> CentralAir 12113387004
#> ScreenPorch 12074760453
#> BsmtHalfBath 11804212877
#> BsmtCond 11740155587
#> Fence 11714793223
#> EnclosedPorch 10988408338
#> GarageCond 10480563104
#> LandSlope 10132278734
#> ExterCond 8304403921
#> BsmtFinSF2 8200377445
#> KitchenAbvGr 7184142301
#> Functional 6695275114
#> BsmtFinType2 5872905374
#> PavedDrive 4906483821
#> Heating 3766850640
#> Electrical 3559343728
#> LowQualFinSF 3193993875
#> Condition2 3112224580
#> 3SsnPorch 2831952962
#> MiscVal 1748256797
#> Street 426528686
#> Utilities 8546882
rmse(actual = xtest$SalePrice, predicted = pred)
#> [1] 26422.94
Xgboost
xgb <- XGBTrainer$new(objective = "reg:linear",
                      n_estimators = 500,
                      eval_metric = "rmse",
                      maximize = FALSE,
                      learning_rate = 0.1,
                      max_depth = 6)
xgb$fit(X = xtrain, y = "SalePrice", valid = xtest)
pred <- xgb$predict(xtest)
rmse(actual = xtest$SalePrice, predicted = pred)
Grid Search
xgb <- XGBTrainer$new(objective = "reg:linear")
gst <- GridSearchCV$new(trainer = xgb,
parameters = list(n_estimators = c(10,50), max_depth = c(5,2)),
n_folds = 3,
scoring = c('accuracy','auc'))
gst$fit(xtrain, "SalePrice")
gst$best_iteration()
Random Search
rf <- RFTrainer$new()
rst <- RandomSearchCV$new(trainer = rf,
parameters = list(n_estimators = c(5,10),
max_depth = c(5,2)),
n_folds = 3,
scoring = c('accuracy','auc'),
n_iter = 3)
rst$fit(xtrain, "SalePrice")
#> [1] "In total, 3 models will be trained"
rst$best_iteration()
#> $n_estimators
#> [1] 10
#>
#> $max_depth
#> [1] 5
#>
#> $accuracy_avg
#> [1] 0.01463718
#>
#> $accuracy_sd
#> [1] 0.005035033
#>
#> $auc_avg
#> [1] NaN
#>
#> $auc_sd
#> [1] NA
Here, we will solve a simple binary classification problem: predicting which passengers survived the sinking of the Titanic. The idea is to demonstrate how to use this package to solve classification problems.
Data Preparation
# load class
load('../data/cla_train.rda')
# if the above doesn't work, you can try: load("cla_train.rda")
head(cla_train)
#> PassengerId Survived Pclass
#> 1: 1 0 3
#> 2: 2 1 1
#> 3: 3 1 3
#> 4: 4 1 1
#> 5: 5 0 3
#> 6: 6 0 3
#> Name Sex Age SibSp Parch
#> 1: Braund, Mr. Owen Harris male 22 1 0
#> 2: Cumings, Mrs. John Bradley (Florence Briggs Thayer) female 38 1 0
#> 3: Heikkinen, Miss. Laina female 26 0 0
#> 4: Futrelle, Mrs. Jacques Heath (Lily May Peel) female 35 1 0
#> 5: Allen, Mr. William Henry male 35 0 0
#> 6: Moran, Mr. James male NA 0 0
#> Ticket Fare Cabin Embarked
#> 1: A/5 21171 7.2500 S
#> 2: PC 17599 71.2833 C85 C
#> 3: STON/O2. 3101282 7.9250 S
#> 4: 113803 53.1000 C123 S
#> 5: 373450 8.0500 S
#> 6: 330877 8.4583 Q
# split the data
split <- createDataPartition(y = cla_train$Survived, p = 0.7)
xtrain <- cla_train[split$Resample1]
xtest <- cla_train[!split$Resample1]
# encode categorical variables - shorter way
for(c in c('Embarked','Sex','Cabin')) {
lbl <- LabelEncoder$new()
lbl$fit(c(xtrain[[c]], xtest[[c]]))
xtrain[[c]] <- lbl$transform(xtrain[[c]])
xtest[[c]] <- lbl$transform(xtest[[c]])
}
#> The data contains blank values. Imputing them with 'NA'
#> The data contains blank values. Imputing them with 'NA'
#> The data contains blank values. Imputing them with 'NA'
#> The data contains blank values. Imputing them with 'NA'
#> The data contains blank values. Imputing them with 'NA'
# impute missing Age values with the median
xtrain[, Age := replace(Age, is.na(Age), median(Age, na.rm = TRUE))]
xtest[, Age := replace(Age, is.na(Age), median(Age, na.rm = TRUE))]
# drop identifier-like features
to_drop <- c('PassengerId','Ticket','Name')
xtrain <- xtrain[, -c(to_drop), with = FALSE]
xtest <- xtest[, -c(to_drop), with = FALSE]
Now, our data is ready to be served for model training. Let’s do it.
KNN Classification
knn <- KNNTrainer$new(k = 2, prob = TRUE, type = 'class')
knn$fit(train = xtrain, test = xtest, y = 'Survived')
probs <- knn$predict(type = 'prob')
labels <- knn$predict(type = 'raw')
auc(actual = xtest$Survived, predicted = labels)
#> [1] 0.6385027
Naive Bayes Classification
nb <- NBTrainer$new()
nb$fit(xtrain, 'Survived')
pred <- nb$predict(xtest)
#> Warning: predict.naive_bayes(): more features in the newdata are provided as
#> there are probability tables in the object. Calculation is performed based on
#> features to be found in the tables.
auc(actual = xtest$Survived, predicted = pred)
#> [1] 0.7771836
SVM Classification
# predicts labels
svm <- SVMTrainer$new()
svm$fit(xtrain, 'Survived')
pred <- svm$predict(xtest)
auc(actual = xtest$Survived, predicted=pred)
Logistic Regression
lf <- LMTrainer$new(family = "binomial")
lf$fit(X = xtrain, y = "Survived")
summary(lf$model)
#>
#> Call:
#> stats::glm(formula = f, family = self$family, data = X, weights = self$weights)
#>
#> Deviance Residuals:
#> Min 1Q Median 3Q Max
#> -2.6102 -0.6018 -0.4367 0.7038 2.4493
#>
#> Coefficients:
#> Estimate Std. Error z value Pr(>|z|)
#> (Intercept) 1.830070 0.616894 2.967 0.00301 **
#> Pclass -0.980785 0.192493 -5.095 3.48e-07 ***
#> Sex 2.508241 0.230374 10.888 < 2e-16 ***
#> Age -0.041034 0.009309 -4.408 1.04e-05 ***
#> SibSp -0.235520 0.117715 -2.001 0.04542 *
#> Parch -0.098742 0.137791 -0.717 0.47361
#> Fare 0.001281 0.002842 0.451 0.65230
#> Cabin 0.008408 0.004786 1.757 0.07899 .
#> Embarked 0.248088 0.166616 1.489 0.13649
#> ---
#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#>
#> (Dispersion parameter for binomial family taken to be 1)
#>
#> Null deviance: 831.52 on 623 degrees of freedom
#> Residual deviance: 564.76 on 615 degrees of freedom
#> AIC: 582.76
#>
#> Number of Fisher Scoring iterations: 5
predictions <- lf$predict(df = xtest)
auc(actual = xtest$Survived, predicted = predictions)
#> [1] 0.8832145
Lasso Logistic Regression
lf <- LMTrainer$new(family = "binomial", alpha = 1)
lf$cv_model(X = xtrain, y = "Survived", nfolds = 5, parallel = FALSE)
pred <- lf$cv_predict(df = xtest)
auc(actual = xtest$Survived, predicted = pred)
Ridge Logistic Regression
lf <- LMTrainer$new(family = "binomial", alpha = 0)
lf$cv_model(X = xtrain, y = "Survived", nfolds = 5, parallel = FALSE)
pred <- lf$cv_predict(df = xtest)
auc(actual = xtest$Survived, predicted = pred)
Random Forest
rf <- RFTrainer$new(n_estimators = 500, classification = 1, max_features = 3)
rf$fit(X = xtrain, y = "Survived")
pred <- rf$predict(df = xtest)
rf$get_importance()
#> tmp.order.tmp..decreasing...TRUE..
#> Sex 67.80128
#> Fare 57.97193
#> Age 48.37045
#> Pclass 24.64915
#> Cabin 21.45972
#> SibSp 13.51637
#> Parch 10.45743
#> Embarked 10.23844
auc(actual = xtest$Survived, predicted = pred)
#> [1] 0.7976827
Xgboost
xgb <- XGBTrainer$new(objective = "binary:logistic",
                      n_estimators = 500,
                      eval_metric = "auc",
                      maximize = TRUE,
                      learning_rate = 0.1,
                      max_depth = 6)
xgb$fit(X = xtrain, y = "Survived", valid = xtest)
pred <- xgb$predict(xtest)
auc(actual = xtest$Survived, predicted = pred)
Grid Search
xgb <- XGBTrainer$new(objective = "binary:logistic")
gst <- GridSearchCV$new(trainer = xgb,
                        parameters = list(n_estimators = c(10, 50),
                                          max_depth = c(5, 2)),
                        n_folds = 3,
                        scoring = c('accuracy','auc'))
gst$fit(xtrain, "Survived")
gst$best_iteration()
Random Search
rf <- RFTrainer$new()
rst <- RandomSearchCV$new(trainer = rf,
parameters = list(n_estimators = c(10,50), max_depth = c(5,2)),
n_folds = 3,
scoring = c('accuracy','auc'),
n_iter = 3)
rst$fit(xtrain, "Survived")
#> [1] "In total, 3 models will be trained"
rst$best_iteration()
#> $n_estimators
#> [1] 50
#>
#> $max_depth
#> [1] 5
#>
#> $accuracy_avg
#> [1] 0.7964744
#>
#> $accuracy_sd
#> [1] 0.03090914
#>
#> $auc_avg
#> [1] 0.7729436
#>
#> $auc_sd
#> [1] 0.04283084
Finally, let’s create a new feature from the target variable using target encoding, and test a model with it.
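Target (smoothed mean) encoding replaces each level of a categorical column with a target mean shrunk toward the global mean, so rare levels don’t receive extreme values. As a rough conceptual sketch only, using additive smoothing with a hypothetical weight w (the exact weighting inside smoothMean may differ):
# illustrative only: additive smoothing of per-level target means
w <- 10  # hypothetical smoothing weight
global_mean <- mean(xtrain$Survived)
enc <- xtrain[, .(n = .N, level_mean = mean(Survived)), by = Embarked]
enc[, smooth_mean := (n * level_mean + w * global_mean) / (n + w)]
enc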
# add target encoding features; compute the encoding once and
# assign the train/test parts to the respective splits
tmp <- smoothMean(train_df = xtrain,
                  test_df = xtest,
                  colname = "Embarked",
                  target = "Survived")
xtrain[, feat_01 := tmp$train[[2]]]
xtest[, feat_01 := tmp$test[[2]]]
# train a random forest
rf <- RFTrainer$new(n_estimators = 500, classification = 1, max_features = 4)
rf$fit(X = xtrain, y = "Survived")
pred <- rf$predict(df = xtest)
rf$get_importance()
#> tmp.order.tmp..decreasing...TRUE..
#> Sex 69.787235
#> Fare 60.832089
#> Age 52.982604
#> Pclass 24.419818
#> Cabin 21.419274
#> SibSp 13.112177
#> Parch 10.175269
#> feat_01 6.675399
#> Embarked 6.450819
auc(actual = xtest$Survived, predicted = pred)
#> [1] 0.8018717