## Predictive modeling: Kaggle Titanic competition (part 2)


In a previous post, I documented the data preprocessing, feature engineering, data visualization, and model building that culminated in a submission to the Kaggle Titanic competition. I used a linear logistic regression model with a set of features that included some engineered features and some 'out-of-the-box' features to classify passengers into those who survived and those who perished. This model achieved about 78% classification accuracy, placing me within the top 40% of submissions… not bad, but not great. Let's see if we can do better here by delving into classification trees, ensemble methods like random forests, and the versatile `caret` package.

# Decision tree learning

We've all seen a decision tree before. They often appear as flowcharts where, for example, a series of yes/no decisions leads to a particular action. In predictive analysis, the goal of decision tree analysis is to learn or “grow” a tree that effectively predicts a particular outcome as a function of input variables (features). Trees are typically learned by a process called recursive partitioning, in which the observations are repeatedly split into subsets of similar or identical responses based on a “test” derived from a particular feature (e.g., Age > 30?). Each branch of the tree represents the outcome of a test (e.g., Yes/No), and each leaf/terminal node is one level of the outcome variable (e.g., survived/perished).

From my background designing and analyzing experiments, one particular feature of decision trees is especially attractive: the ability to include one or several two-way or higher-order interactions between variables/features. A hypothetical example of an interaction in a sinking-ship disaster is that older males are more likely to perish than younger males, but younger females are more likely to perish than older females. That is, gender could interact with age. None of the models in the previous post included any interaction terms, whether two-way, three-way, or higher; nor did I even examine any interactions. When there are many possible features and not so many observations (small n, large p problems), but even in the present case, including (and inspecting) all possible combinations of higher-order interactions is not feasible. Additionally, the parameters of substantially more complex models with several and/or higher-order interactions often cannot be reliably estimated with ordinary least squares or maximum likelihood estimation.

## Preprocessing

I'll load the preprocessed training and test data I created in the previous post. I'll include some of the original features that I removed before model fitting in the previous post. This time around, I'll rely less on my domain knowledge / hunches about which variables to include, and more on the models. I did however remove a few of the original variables that wouldn't make good features, as well as my 'CabinAlpha', 'Good' and 'Bad' cabin features that I'm not happy with. As we did before, I'll perform the identical operations on both the training data and test data provided by Kaggle.

```
knitr::opts_chunk$set(echo=TRUE, warning=FALSE, message=FALSE, collapse=TRUE, error=TRUE)

train <- read.csv('preprocessed_train_data.csv', header = TRUE)
test <- read.csv('preprocessed_test_data.csv', header = TRUE)

CleanDF = function(df){
  delete <- c("honorific", "Name", "Ticket", "Cabin", "CabinAlpha", "BadCabin", "GoodCabin")
  return(df[, !(names(df) %in% delete)])
}

train <- CleanDF(train)
test <- CleanDF(test)
```

Upon inspecting the variable types in the new dataframe I found a few variable types that need to be changed. This function will do the trick.

```
#function to change some variable types
changeVars = function(df){
  #change ordinal variables coded as numeric to factors
  df$Pclass <- as.factor(df$Pclass)
  #the outcome variable will only exist in the training data
  if("Survived" %in% names(df)){
    df$Survived <- as.factor(df$Survived)
  }
  #return new data frame
  return(df)
}
train <- changeVars(train)
test <- changeVars(test)
```

## Classification trees

Although several different algorithms are available for decision tree learning, I'll use only one type here: conditional inference trees. In writing this post I relied heavily on an excellent introduction to classification trees and recursive partitioning more generally by Carolin Strobl and colleagues. Let's start by learning some conditional inference trees. The R functions `ctree` (which implements conditional inference trees for classification and regression) and `cforest` (which implements random forests of such trees) are both available in the `partykit` package. As explained in the documentation, the `ctree` algorithm works roughly like this:

1) Test the global null hypothesis of independence between any of the input variables and the response. Stop if this hypothesis cannot be rejected. Otherwise select the input variable with the strongest association to the response. This association is measured by a p-value corresponding to a test for the partial null hypothesis of a single input variable and the response.
2) Implement a binary split in the selected input variable.
3) Recursively repeat steps 1) and 2).
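To make the algorithm concrete, here is a toy sketch of steps 1 and 2 in base R. This is my own simplified illustration, not the actual `ctree` implementation, which uses permutation tests and multiplicity-adjusted p-values:

```r
# Toy version of split selection: test each categorical feature's
# association with the response and pick the smallest p-value.
select_split_var <- function(response, predictors, alpha = 0.05) {
  pvals <- sapply(predictors, function(x)
    suppressWarnings(chisq.test(table(x, response))$p.value))
  if (min(pvals) > alpha) return(NULL)  # stop: global null not rejected
  names(which.min(pvals))               # strongest association wins the split
}

# toy data: 'sex' is strongly associated with the outcome, 'noise' is not
set.seed(1)
outcome <- factor(rep(c("survived", "perished"), each = 50))
toy <- data.frame(
  sex   = factor(c(rep("f", 40), rep("m", 10), rep("f", 10), rep("m", 40))),
  noise = factor(sample(c("a", "b"), 100, replace = TRUE)))
select_split_var(outcome, toy)  # "sex"
```

Step 3 then recurses into each resulting subset until no test rejects the null.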

```
# set a random seed to ensure repeatability
set.seed(99)
require(partykit)
```

The `ctree` function takes an argument for a control object returned by the function `ctree_control`, containing various parameters for tree learning. Let's start simple, with a tree that is forced to have only 1 split (i.e., only 1 input variable will be selected to partition the outcome variable). I'll call it m5 (model 5), continuing the numbering from the logistic regression models in the previous post.

```
tc_d1 <- ctree_control(maxdepth = 1)
m5 <- ctree(Survived ~ ., data = train, control = tc_d1)
print(m5)
##
## Model formula:
## Survived ~ Pclass + PassengerId + Sex + Age + SibSp + Parch +
##     Fare + Embarked + median.hon.age + median.class.fare + honorificType +
##     CabinAlphaUno + FamSize + Fare.pp + wcFirst
##
## Fitted party:
## [1] root
## |   [2] CabinAlphaUno in A, T, U: 0 (n = 703, err = 30.3%)
## |   [3] CabinAlphaUno in B, C, D, E, F, G: 1 (n = 188, err = 31.4%)
##
## Number of inner nodes:    1
## Number of terminal nodes: 2
```

The printed output shows that the single split was on the variable 'CabinAlphaUno', meaning it exhibited the strongest association with Survival. I crudely created this feature from the existing 'Cabin' variable, by simply taking the first letter of the string. So this tells me that there's probably better feature engineering to be done with the Cabin variable. I'm going to bypass 'CabinAlphaUno' for a moment and grow a tree that doesn't have access to it.

```
m6 <- ctree(Survived ~ . -CabinAlphaUno, data = train, control = tc_d1)
print(m6)
##
## Model formula:
## Survived ~ Pclass + PassengerId + Sex + Age + SibSp + Parch +
##     Fare + Embarked + median.hon.age + median.class.fare + honorificType +
##     FamSize + Fare.pp + wcFirst
##
## Fitted party:
## [1] root
## |   [2] honorificType in Master, Miss, Mrs: 1 (n = 351, err = 27.9%)
## |   [3] honorificType in Mr, Noble: 0 (n = 540, err = 16.5%)
##
## Number of inner nodes:    1
## Number of terminal nodes: 2
```

Now the strongest association with the outcome variable was the honorifics I extracted from the passenger names. This feature was used to split the outcomes into 2 terminal nodes, with what appears to be mostly men (save a few female noblepersons) on one side of the split, and women plus boys on the other. Let's plot a simple version of this classification tree.

```
plot(m6, type="simple")
```

The grey terminal nodes provide the number of observations per node, plus the classification error. So 28% of the women and boys did not survive, whereas only 17% of the men and “nobles” survived. The default plot helps visualize these ratios, with color representing the outcome.

```
plot(m6)
```

It looks to me like the main effect of 'honorificType' is actually capturing an Age by Sex interaction. Let's grow a tree that is forced to include just those two variables.

```
m7 <- ctree(Survived ~ Sex + Age, data = train)
plot(m7)
```

Yep. This tree represents a statistically significant Age x Sex interaction. Females generally survived regardless of age, but for males age mattered. Even so, the boys (males under 12) fared worse than the females as a group; just slightly more than half of the boys survived. So the “women and children first” strategy didn't work completely. Still, my hunch is that if we don't let the tree select either the CabinAlphaUno or the honorific variables, then “wcFirst” will be selected first among the remaining candidates. Let's find out.

```
m8 <- ctree(Survived ~ .,
            data = train[, !(names(train) %in% c('CabinAlphaUno', 'honorificType'))],
            control = tc_d1)
plot(m8)
```

Indeed, with those two features excluded, the “women and children first” feature wins the initial split, making it the third-best candidate overall. Ok, curiosity satisfied. Now let's stop restricting the tree, and let the algorithm decide on its own which features to choose, and when to stop growing the tree.

```
m9 <- ctree(Survived ~ ., data = train)
plot(m9)
```

Now that's a tree. There are several higher-order interactions here, including something that you don't see in linear models with interaction terms (including the experimenter's choice: analysis of variance). That is, 'CabinAlphaUno' is split once at the first level, and then again two levels deeper on a branch that stems from itself. This is entirely possible in decision trees, because at each split every variable is tested once again.

Does this tree reveal anything interpretable over and above the Age x Sex interaction we already saw (and that is driving the second-level splits here)? For one thing, cabin has a particularly strong association with survival, winning the spot for the first split, despite the fact that over ¾ of the passengers' cabin assignments were unknown ('U'). The other striking relationship I notice is that among the women and children (leftmost branch) holding a third-class ticket, family size matters a lot. For example, among those 45 people belonging to larger families (4+ people), almost none survived.

So how well do this tree's predicted/fitted outcomes capture the observed outcomes? I'll call the `ClassPerform` function defined in the previous post, which takes a confusion matrix as its argument and returns two measures of binary classification performance: observed accuracy and kappa. Also note that the tree's predicted responses (which we need to build the confusion matrix) are obtained in the usual way with R's `predict` function.
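For readers who skipped the previous post: the exact `ClassPerform` definition lives there, but a minimal version consistent with its output (observed accuracy plus Cohen's kappa) might look like this:

```r
# Sketch of ClassPerform: rows of 'cm' are observed classes, columns predicted
ClassPerform <- function(cm) {
  n  <- sum(cm)
  po <- sum(diag(cm)) / n                      # observed accuracy
  pe <- sum(rowSums(cm) * colSums(cm)) / n^2   # agreement expected by chance
  data.frame(ClassAcc = po, Kappa = (po - pe) / (1 - pe))
}

# small made-up confusion matrix to show the mechanics
cm <- matrix(c(50, 10, 5, 35), nrow = 2)
ClassPerform(cm)
##   ClassAcc     Kappa
## 1     0.85 0.6938776
```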

```
# Confusion matrix
m9.cm <- as.matrix(table(train$Survived, predict(m9, train, type = "response")))
ClassPerform(m9.cm)
##    ClassAcc     Kappa
## 1 0.8338945 0.6421279
```

This tree fits the observed data no better than the much simpler logistic regression model in the previous post. But does this necessarily mean that its predictive accuracy is no better? No, because a model's fit to the observed data is a measure of its explanatory power (assuming an underlying causal theory, but that's another issue). At the very least, these measures can help sort out which model best describes known data. Prediction and explanation are often conflated or poorly specified, including in the professional scientific literature (e.g., probably in one of my papers). I like this description of each by Shmueli:

In explanatory modeling the focus is on minimizing bias to obtain the most accurate representation of the underlying theory. In contrast, predictive modeling seeks to minimize the combination of bias and estimation variance, occasionally sacrificing theoretical accuracy for improved empirical precision… the “wrong” model can sometimes predict better than the correct one.

We could derive predicted outcomes for a new dataset, like the Kaggle test data here, but there's an issue with single classification trees that decreases their utility as classifiers. What's the problem? Here are Strobl and colleagues:

The main flaw of simple tree models is their instability to small changes in the learning data: In recursive partitioning, the exact position of each cutpoint in the partition, as well as the decision about which variable to split in, determines how the observations are split up in new nodes, in which splitting continues recursively. However, the exact position of the cutpoint and the selection of the splitting variable strongly depend on the particular distribution of observations in the learning sample. Thus, as an undesired side effect of the recursive partitioning approach, the entire tree structure could be altered if the first splitting variable, or only the first cutpoint, was chosen differently because of a small change in the learning data. Because of this instability, the predictions of single trees show a high variability.

## Bagging and random forests

So single trees are prone to high-variance predictions (i.e., overfitting). Did anybody come up with a better idea? Yes. A better approach involves repeatedly growing trees from randomly selected bootstrapped samples of the training data (samples of the same size, drawn with replacement), and making predictions based on the whole ensemble of trees. This bootstrap aggregating, or bagging, is a popular ensemble approach. There is another method that aggregates even more diverse trees. In addition to growing several trees from bootstrapped samples, each tree can be made to choose from a randomly restricted subset of all available features. This ensemble approach is called a random forest. Among the parameters of random forests are the number of trees to grow, and the number of randomly selected features to choose from. Theoretical and empirical evidence suggests that individual classification trees are unbiased (but high-variance) predictors, so these ensemble methods mitigate the high variance by growing many trees.
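The bagging idea can be demonstrated with a self-contained toy in base R. Here one-split "stumps" stand in for full trees; this is a sketch of the principle, not how `cforest` is implemented:

```r
# Bagging sketch: grow B one-split stumps on bootstrap samples, then
# classify by majority vote across the ensemble.
set.seed(42)
n <- 200
x <- runif(n)
y <- factor(ifelse(x > 0.5, "yes", "no"))   # true rule the stumps must learn

fit_stump <- function(x, y) {
  cut <- median(x)  # crude split point
  list(cut   = cut,
       left  = names(which.max(table(y[x <= cut]))),
       right = names(which.max(table(y[x >  cut]))))
}
predict_stump <- function(s, x) ifelse(x <= s$cut, s$left, s$right)

B <- 25
stumps <- lapply(seq_len(B), function(b) {
  idx <- sample(n, replace = TRUE)            # bootstrap sample
  fit_stump(x[idx], y[idx])
})
votes <- sapply(stumps, predict_stump, x = x)  # n x B matrix of votes
pred  <- apply(votes, 1, function(v) names(which.max(table(v))))
mean(pred == y)  # ensemble accuracy on the training data
```

A random forest additionally restricts each tree to a random subset of the features, decorrelating the trees further.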

Let's build a bagged ensemble, which we can implement in `cforest` by setting `mtry` to Inf (i.e., the full set of available features).
We'll also build a random forest, using the default for 'mtry'. We'll grow 500 trees (the default) for each model.

```
m10 <- cforest(Survived ~ ., data = train, ntree = 500, mtry = Inf)
m11 <- cforest(Survived ~ ., data = train, ntree = 500)
```

So where's the tree? Well, while there are ways to plot one of the 500 trees that make up the ensemble, doing so doesn't make sense like it did with the single trees we grew earlier. Ensemble classifiers use “votes” from hundreds of independently sampled trees, each of which has only seen a subset of all the training data, and a subset of all available features, to settle on a predicted outcome. So no single tree in the ensemble will necessarily give us a sense of the forest. The main goal here though is prediction, so let's do that. We don't have access to a labeled test set that the models haven't seen before, but we can use the imperfect alternative of out-of-bag (OOB) error estimation. Basically, the method is to repeatedly evaluate predictions from a given tree on held-out data that wasn't available for growing that particular tree (i.e., data that was not in the bootstrap sample used to grow that tree). Here I'll obtain the confusion matrix for each model (incorporating the OOB prediction error), and call my `ClassPerform` function to get the accuracy measures.

```
# OOB classification of true vs. predicted classes
m10.cf <- table(predict(m10, OOB=TRUE), train$Survived)
m11.cf <- table(predict(m11, OOB=TRUE), train$Survived)

# Bagging
ClassPerform(m10.cf)
##    ClassAcc     Kappa
## 1 0.8451178 0.6640437

# Random forest
ClassPerform(m11.cf)
##    ClassAcc     Kappa
## 1 0.8361392 0.6465669
```
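To make the OOB logic concrete, here's a small base-R sketch (again stumps instead of conditional inference trees; the setup is my own toy, not how `cforest` computes it). Each observation is predicted only by the trees whose bootstrap sample excluded it:

```r
# OOB error sketch: track which observations were in each bootstrap sample
set.seed(7)
n <- 150
x <- runif(n)
y <- factor(ifelse(x > 0.5, "yes", "no"))
B <- 50
inbag <- matrix(FALSE, n, B)
preds <- matrix(NA_character_, n, B)
for (b in seq_len(B)) {
  idx <- sample(n, replace = TRUE)
  inbag[unique(idx), b] <- TRUE
  cut   <- median(x[idx])                       # stump fit on the bootstrap sample
  left  <- names(which.max(table(y[idx][x[idx] <= cut])))
  right <- names(which.max(table(y[idx][x[idx] >  cut])))
  preds[, b] <- ifelse(x <= cut, left, right)   # this stump's vote for everyone
}
# for each observation, majority vote over the trees that did NOT see it
oob <- sapply(seq_len(n), function(i)
  names(which.max(table(preds[i, !inbag[i, ]]))))
mean(oob != y)  # OOB error estimate
```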

## Kaggle submission

In comparison with the repeated 10-fold cross-validated prediction accuracy of my best logistic regression model in the previous post, these aren't any better, and are maybe a bit worse. But I can't help but submit one of these models' predictions to Kaggle, to see how the model fares on a truly new dataset. I'll submit the predictions from the ensemble of bagged trees, but this time I'll make predictions from the model trained on all of the available data (by setting OOB to FALSE).

```
Survived <- as.factor(predict(m10, test, OOB=FALSE))
submission <- data.frame(PassengerId = test$PassengerId, Survived=Survived)

#Write to .csv
write.csv(submission, 'bag.csv', row.names = FALSE)
```

How did I do?

It's an improvement on the logistic regression model. Less than 0.5% improvement, but I nonetheless moved up 343 spots on the leaderboard. The margins of success are increasingly small as prediction accuracy gets higher. There is a lot more we can do with ensemble methods in `partykit`, but there are so many other kinds of predictive models out there with their own R packages. If only there was a way to access them all, and their associated methods, in an integrated framework… oh wait there is a way.

# Modeling with the caret package

The caret (Classification And Regression Training) package provides a standardized interface to hundreds of different R functions useful for predictive modeling. The author of `caret` also co-authored the book Applied Predictive Modeling, in which `caret` features heavily.

At the heart of `caret` is the `train` function, which can 1) estimate model performance from a training set; 2) use resampling approaches to evaluate the effects of model tuning parameters on model performance; 3) choose the optimal model across these parameters. So far we've done some of the former, but neither of the latter. Here is pseudocode for the general `train` algorithm:

1. Define model parameters to evaluate.
2. FOR each parameter:
3.     FOR each resampling iteration:
4.         Hold out 'test' data
5.         Fit model on remainder of 'training' data
6.         Predict the outcome variable on the test data
7.     END
8.     Compute mean accuracy metrics across test sets
9. END
10. Determine optimal parameters.
11. Fit the best model to the full training dataset.
12. Optionally derive predictions for new test dataset.
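As a concrete toy instance of this pseudocode in base R, here's 10-fold cross-validation over a one-parameter grid, tuning the classification threshold of a logistic regression (my own example for illustration, not caret internals):

```r
# Manual parameter tuning via 10-fold cross-validation
set.seed(123)
n <- 300
x <- rnorm(n)
y <- rbinom(n, 1, plogis(2 * x))            # outcome depends on x
folds <- sample(rep(1:10, length.out = n))  # fold assignments
grid  <- seq(0.3, 0.7, by = 0.1)            # parameter values to evaluate

cv_acc <- sapply(grid, function(th) {       # FOR each parameter
  mean(sapply(1:10, function(k) {           # FOR each resampling iteration
    fit <- glm(y ~ x, family = binomial, subset = folds != k)  # fit on training folds
    p   <- predict(fit, data.frame(x = x[folds == k]), type = "response")
    mean((p > th) == y[folds == k])         # accuracy on the held-out fold
  }))
})
best <- grid[which.max(cv_acc)]  # optimal parameter; refit on all data using it
```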

Ok so let's continue with the ensemble approach from above. We'll build a random forest of conditional inference trees (i.e., `cforest` function), but this time we'll use `train` to systematically evaluate the influence of the parameter 'mtry' (# of randomly selected predictors).

First, we choose a resampling method and define its parameters. To maintain consistency with what I've done so far, I'll use repeated 10-fold cross-validation (5 repeats).

```
require(caret)

RfoldCV <- trainControl(method="repeatedcv", number=10, repeats=5,
                        verboseIter = FALSE, allowParallel = TRUE)
```

Now we call `train`, which takes the model formula, data, estimation method, and the train control object as arguments. How does `train` select which parameter values to evaluate? By default, if p is the number of tuning parameters, the grid has 3^p candidate combinations. The user can also input a custom grid, though, which is what I elected to do here to sample more of the parameter space.

```
customGrid <- expand.grid(mtry=seq(3, 27, by = 2))
t1 <- train(Survived ~ ., data = train, method="cforest",
            trControl = RfoldCV, tuneGrid=customGrid)
```

Let's examine some of the contents of the returned object, starting with the classification accuracy at different levels of the 'mtry' parameter.

```
print(t1, details=T)
## Conditional Inference Random Forest
##
## 891 samples
##  15 predictor
##   2 classes: '0', '1'
##
## No pre-processing
## Resampling: Cross-Validated (10 fold, repeated 5 times)
## Summary of sample sizes: 803, 802, 802, 802, 802, 801, ...
## Resampling results across tuning parameters:
##
##   mtry  Accuracy   Kappa      Accuracy SD  Kappa SD
##    3    0.8103584  0.5748504  0.04065178   0.09170079
##    5    0.8202288  0.6095693  0.04218561   0.09216546
##    7    0.8258343  0.6222412  0.04172612   0.09090105
##    9    0.8247132  0.6179529  0.04252162   0.09278510
##   11    0.8262812  0.6202889  0.04386115   0.09686357
##   13    0.8262787  0.6194249  0.04294836   0.09550746
##   15    0.8267382  0.6199570  0.04319338   0.09579875
##   17    0.8271826  0.6202590  0.04398649   0.09809832
##   19    0.8262812  0.6183850  0.04236671   0.09445320
##   21    0.8265033  0.6185649  0.04280261   0.09552688
##   23    0.8240339  0.6134755  0.04264691   0.09466191
##   25    0.8240162  0.6133688  0.04448782   0.09864004
##   27    0.8244782  0.6142466  0.04353985   0.09642757
##
## Accuracy was used to select the optimal model using  the largest value.
## The final value used for the model was mtry = 17.
##
## ----------------------------------------------------------
##
## The final model:
##
##
##   Random Forest using Conditional Inference Trees
##
## Number of trees:  500
##
## Response:  .outcome
## Inputs:  Pclass2, Pclass3, PassengerId, Sexmale, Age, SibSp, Parch, Fare, EmbarkedQ, EmbarkedS, median.hon.age, median.class.fare, honorificTypeMiss, honorificTypeMr, honorificTypeMrs, honorificTypeNoble, CabinAlphaUnoB, CabinAlphaUnoC, CabinAlphaUnoD, CabinAlphaUnoE, CabinAlphaUnoF, CabinAlphaUnoG, CabinAlphaUnoT, CabinAlphaUnoU, FamSize, Fare.pp, wcFirstyes
## Number of observations:  891
```

The optimal forest emerged when several but not all predictors were made available. In addition to the mean accuracy metrics, the `train` object also contains the individual metrics from each prediction (i.e., number of folds x repetitions) for the optimal model. We can get a sense of the variability (and sampling distribution) of each measure by plotting histograms.

```
hist(t1$resample$Accuracy, col="grey")
```

```
hist(t1$resample$Kappa, col="green")
```

We can also inspect performance across the parameter space (in this case, just one parameter) with a ggplot method.

```
ggplot(t1)
```

## Kaggle submission

Our best Kaggle submission so far was a bagged ensemble of trees, but the above plot suggests that with the current set of predictors, a random forest offered most, but not all, of the randomly selected predictors is more accurate than one in which all predictors are available. Let's make a prediction.

The `predict.train` function takes a `train` object as its first argument, which contains only the optimal model. That is, you should not supply `train$finalModel` to this argument.

```
Survived <- as.factor(predict.train(t1, test))
submission <- data.frame(PassengerId = test$PassengerId, Survived=Survived)

#Write to .csv
write.csv(submission, 'rand_forest.csv', row.names = FALSE)
```

How did we do?

Actually, a significant improvement. We used the same set of candidate features that were available in my previous submissions, but this time `caret` helped us zero in on a better ensemble (random forest) model. This is a substantial jump on the leaderboard, and we're now within the top 25% of submissions. So how to improve? We could perform more comprehensive comparisons of this model with several other models available in R via the `caret` interface. But actually I'm not convinced that we've extracted the best set of features from the out-of-the-box training data provided by Kaggle. In the next and final post I'll start from scratch with the Kaggle data, engineer a few more features, fit a wider variety of models, and hopefully end with my best submission.


## Predictive modeling: Kaggle Titanic competition (part 1)

The Kaggle Titanic competition is a great way to learn more about predictive modeling, and to try out new methods. So what is Kaggle? It is “the world's largest community of data scientists. They compete with each other to solve complex data science problems, and the top competitors are invited to work on the most interesting and sensitive business problems from some of the world’s biggest companies…” Kaggle provides a few “Getting Started” competitions with highly structured data, including this one. The goal here is to predict who survived the Titanic disaster and who did not based on available information.

In this post and a subsequent post I'll share my journey through data preprocessing, feature engineering, data visualization, feature selection, cross-validation, model selection, and a few submissions to the competition. Along the way I relied many times, for ideas and sometimes code, on a few excellent existing R tutorials for the Titanic prediction challenge, including this one and this one.

# Data Preprocessing

Before any modeling or predictive analysis, we need the data in a suitable format. Preprocessing includes assessing and potentially changing variable types, dealing with missing data, and generally getting to know your data. Whenever possible I try to write generic functions that take a dataframe argument, because whatever preprocessing I do on the training data, I'll eventually do on the testing data as well.

```
knitr::opts_chunk$set(echo=TRUE, warning=FALSE, message=FALSE, collapse=TRUE, error=TRUE)

#Read training data that I downloaded from Kaggle website
#the 'NA' and blank arguments to 'na.strings' ensure proper treatment of missing data
train <- read.csv('train.csv', header=TRUE, na.strings = c("NA",""))

#function to change certain variable types
changeVars = function(df){

  #change ordinal variables coded as numeric to factors
  df$Pclass <- as.factor(df$Pclass)

  #change text strings (coded as factors) to character type
  df$Ticket <- as.character(df$Ticket)
  df$Cabin <- as.character(df$Cabin)
  df$Name <- as.character(df$Name)

  #return new data frame
  return(df)
}

#Update training dataset
train <- changeVars(train)
```

Most datasets have missing data. We'll need to identify any missing data, and then decide how to deal with them. Let's check for missing values with the Amelia package 'missmap' function.

```
require(Amelia)
missmap(train, col=c("green", "black"), legend=F)
```
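If you don't have Amelia installed, a quick numeric alternative in base R (a simple sketch) is to count the NAs per column:

```r
# Missing values per column, most-missing first (base-R alternative to missmap)
na_counts <- function(df) sort(colSums(is.na(df)), decreasing = TRUE)

# tiny made-up data frame to show the output format
toy <- data.frame(Age   = c(22, NA, 30, NA),
                  Cabin = c(NA, NA, NA, "C85"),
                  Fare  = c(7.25, 71.3, 8.05, 53.1))
na_counts(toy)
## Cabin   Age  Fare
##     3     2     0
```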

```
#Cabin is mostly missing, and Age contains 177 missing values
summary(train$Age)
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max.    NA's
##    0.42   20.12   28.00   29.70   38.00   80.00     177
```

Let's impute the missing age values using the honorific for each passenger, which is embedded between a comma and a period in the Name column. I'll write a function that imputes the ages and also returns the original df with a new column of honorifics attached.

```
require(plyr)
Honorifics = function(df) {
  #first remove everything from the period onward
  lastnames <- sub('\\..*', '', df$Name)

  #then remove everything up to and including the comma
  honorific <- sub('.*\\,', '', lastnames)

  #finally return char. vector w/o leading or trailing whitespace
  #and attach back to data. This might actually be a useful feature later.
  df$honorific <- as.factor(gsub("^\\s+|\\s+$", "", honorific))

  #get the median age for each group
  tmp <- ddply(df, .(honorific),
               summarize, median.hon.age = round(median(Age, na.rm = T), 0))

  #if there are sole honorifics, and that person has no age value,
  #we're out of luck so let's just assume the average age of the
  #age column for that dataset.
  tmp$median.hon.age[which(is.na(tmp$median.hon.age))] <- round(mean(df$Age, na.rm = TRUE), 0)

  #merge data frames
  tmp2 <- merge(df, tmp)

  #replace NAs with median 'honorific' age
  tmp2$Age <- ifelse(is.na(tmp2$Age), tmp2$median.hon.age, tmp2$Age)

  #return new dataframe
  return(tmp2)
}

#update training data and inspect results of new Age variable
train <- Honorifics(train)
summary(train$Age)
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
##    0.42   21.00   30.00   29.39   35.00   80.00
```

The 'Embarked' variable also has a few missing entries, but since there are so few let's just replace them with the most common embarked location: 'S'.

```
aggregate(Survived ~ Embarked, data=train, FUN=sum)
##   Embarked Survived
## 1        C       93
## 2        Q       30
## 3        S      217
aggregate(Survived ~ Embarked, data=train, FUN=mean)
##   Embarked  Survived
## 1        C 0.5535714
## 2        Q 0.3896104
## 3        S 0.3369565
summary(train$Embarked)
##    C    Q    S NA's
##  168   77  644    2

ImputeEmbark = function(df) {
  df$Embarked[which(is.na(df$Embarked))] <- 'S'
  return(df)
}
train <- ImputeEmbark(train)
```

Fare also needs a bit of work due to fares of $0.00. I'll replace $0.00 fares with the median fare for that passenger's class.

```
summary(train$Fare)
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
##    0.00    7.91   14.45   32.20   31.00  512.30

ImputeFare = function(df){
  tmp <- ddply(df, .(Pclass),
               summarize, median.class.fare = round(median(Fare, na.rm = T), 0))

  #attach to data and return new df
  tmp2 <- merge(df, tmp)

  #replace zero fares with the median fare for that passenger's class
  tmp2$Fare <- ifelse(tmp2$Fare == 0, tmp2$median.class.fare, tmp2$Fare)
  return(tmp2)
}

#update training data and inspect results
train <- ImputeFare(train)
summary(train$Fare)
```

## A touch of feature engineering

Now let's take a closer look at these honorifics, because this might be a useful feature. In other words there is potential here for feature engineering: creating new features/variables/predictors from the raw data. This is a crucial aspect of predictive modeling, and data analysis generally. No matter how fancy your model, garbage in, garbage out. Here I'll create a new feature containing honorific classes.

```
boxplot(train$Age ~ train$honorific)
```

```
sort(table(train$honorific))
##
##         Capt          Don     Jonkheer         Lady          Mme
##            1            1            1            1            1
##           Ms          Sir the Countess          Col        Major
##            1            1            1            2            2
##         Mlle          Rev           Dr       Master          Mrs
##            2            6            7           40          125
##         Miss           Mr
##          182          517

#We don't want a feature with single instances so let's subsume the single
#honorifics into sensible existing categories with a function
HonorificFeatureBuild = function(df){

  #create a new column instead of replacing the
  #existing 'honorific' column (for sake of completeness)
  df$honorificType <- df$honorific

  #Function to replace old honorific(s) with new one
  replace.string <- function(df, original, replacement) {
    for (x in original) {
      levels(df$honorificType)[levels(df$honorificType)==x] <- replacement
    }
    return(df$honorificType)
  }

  #Combine 'Ms' and 'Dona', which I noticed occurs in the test set, with 'Mrs'
  df$honorificType <- replace.string(df, c('Dona', 'Ms'), 'Mrs')

  #Combine Madam, Mademoiselle, to equivalent: Miss
  df$honorificType <- replace.string(df, c('Mme','Mlle'), 'Miss')

  #Combine several titles of upper class people into 'Noble'
  df$honorificType <- replace.string(df,
                                     c('Sir','Capt','Rev','Dr','Col','Don',
                                       'Major','the Countess', 'Jonkheer', 'Lady'), 'Noble')

  #Remove dropped levels
  df$honorificType <- droplevels(df$honorificType)

  #Return new data frame
  return(df)
}

#Return new dataframe and inspect new variable
train <- HonorificFeatureBuild(train)
sort(table(train$honorificType))
##
##  Noble Master    Mrs   Miss     Mr
##     23     40    126    185    517
boxplot(train$Age ~ train$honorificType)
```

Mosaic plots are a good way to visualize categorical data, with the height of the bars representing respective proportions and the width representing the size of the group/factor.

```
require(vcd)
mosaicplot(train$honorificType ~ train$Survived, shade=F, color=T)
```

It looks like this new feature might be useful; we'll see soon enough. But first a little more work in the preprocessing stage. Let's look at the 'Cabin' variable and see about creating another feature there.

```
CabinFeatures = function(df){
  df$Cabin[which(is.na(df$Cabin))] <- 'Unknown'
  df$CabinAlpha <- gsub('[[:digit:]]+', "", df$Cabin)

  #some have multiple entries; let's just keep the first one
  df$CabinAlphaUno <- sub("^(\\w{1}).*$", "\\1", df$CabinAlpha)

  #cabins B, D, E look good and U looks pretty bad,
  #but do these provide any more information than Pclass?
  print(table(df$Pclass, df$CabinAlphaUno))

  #yes, so let's make GoodCabin and BadCabin features
  df$GoodCabin <- as.factor(
    ifelse((df$CabinAlphaUno=='B' |
            df$CabinAlphaUno=='D' |
            df$CabinAlphaUno=='E'),
           'yes', 'no'))
  df$BadCabin <- as.factor(ifelse(df$CabinAlphaUno=='U', 'yes', 'no'))

  #return new data frame
  return(df)
}

#Update df with new Cabin information
train <- CabinFeatures(train)
```

Finally, there are a few more sensible features and preprocessing steps to add before we're ready to build models. (Of course we might end up back at this stage later on, for example if we think of a great new feature to extract from the raw training data.)

```r
AddFinalFeatures <- function(df) {

  #Consolidate #siblings (SibSp) and #parents/children (Parch)
  #into a new family size (FamSize) feature
  df$FamSize <- df$SibSp + df$Parch

  #Adjust fare by size of family (per person)
  df$Fare.pp <- df$Fare / (df$FamSize + 1)

  #'wcFirst' is for the "women and children first" policy for lifeboats
  #(https://en.wikipedia.org/wiki/Women_and_children_first).
  #I'll assume that teenage boys were treated as adults
  df$wcFirst <- 'no'
  df$wcFirst[which(df$Sex == "female" | df$Age < 13)] <- 'yes'
  df$wcFirst <- as.factor(df$wcFirst)
  return(df)
}

#Update the df
train <- AddFinalFeatures(train)
```

Last, let's remove any unnecessary columns, which will make it easier to define models that include all remaining variables in the data frame.

```r
CleanDF = function(df){
  delete <- c("honorific", "Name", "SibSp", "Parch", "Ticket", "Fare", "Cabin",
              "median.hon.age", "median.class.fare", "CabinAlpha", "CabinAlphaUno")
  return(df[, !(names(df) %in% delete)])
}

#Here is our preprocessed training data
train <- CleanDF(train)

#Write to disk for later use
write.csv(train, "curated_train_data.csv", row.names = F)
```

## Prepare the test data

Now we're ready to fit models to the training data. Eventually, though, to submit predictions to Kaggle we'll need the test data in the same format as our new training data. So here I read in the test data and then call all the preprocessing functions (as usual, generic functions are the way to go to avoid repetition).

```r
test <- read.csv('test.csv', header=TRUE, na.strings = c("NA",""))
test <- changeVars(test)
test <- Honorifics(test)
test <- ImputeEmbark(test)
test <- ImputeFare(test)
test <- HonorificFeatureBuild(test)
test <- CabinFeatures(test)
test <- AddFinalFeatures(test)
test <- CleanDF(test)

#Write to disk for later use
write.csv(test, "curated_test_data.csv", row.names = F)
```

Before we continue, let's make sure there were no issues with the outputs of any of these functions. When I eventually submit to Kaggle, I'll apply my favored model to the test set, which means the test set can't contain any variables, or levels of variables, that the model hasn't seen in the training data. There shouldn't be any missing data either.

```r
require(Amelia) # missmap lives in the Amelia package
missmap(test, col=c("green", "black"), legend=F)
```
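Missingness isn't the only hazard: a factor level that appears only in the test set would also trip up `predict` later on. Here's a minimal sketch of such a check; the helper name `CheckLevels` is my own invention, demonstrated on toy factors rather than the actual data frames:

```r
#Report any levels present in 'new' but absent from 'ref';
#a model fit on 'ref' cannot predict for rows with such levels
CheckLevels <- function(ref, new) setdiff(levels(new), levels(ref))

#Toy demonstration: deck 'T' appears only in the hypothetical test factor
trainCabin <- factor(c('B', 'C', 'E'))
testCabin  <- factor(c('B', 'T'))
CheckLevels(trainCabin, testCabin)
## [1] "T"
```

Run against each factor column shared by train and test, an empty result (`character(0)`) means that factor is safe.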

Looks good. Now we're ready to build the first model.

# Model Building

What's the goal of predictive modeling? To settle on a statistical model of the outcome variable as some function of our available features. In this case the outcome variable is binary and categorical: whether a person survived the disaster or not. This is a binary classification problem, and there are several ways to approach it.

## Model 1: Logistic regression (simple model)

A common approach to binary classification is the logistic regression model. Logistic regression is not a binary classification method per se, because it estimates the probability of a binary response, which ranges continuously from 0 to 1. But by simply rounding the estimated probability to the nearest integer (0 or 1), we can use logistic regression as a binary classifier. To start, let's fit a logistic regression model with 3 features I think will be important.
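As a minimal illustration of that rounding step (on the built-in `mtcars` data rather than the Titanic data):

```r
#Logistic regression of transmission type (am) on car weight
toy <- glm(am ~ wt, data = mtcars, family = 'binomial')

#Fitted values are probabilities between 0 and 1...
range(fitted(toy))

#...and rounding at 0.5 turns the model into a binary classifier
classes <- round(fitted(toy))
table(classes)
```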

```r
m1.glm <- glm(Survived ~ FamSize + Sex + Pclass, data=train, family='binomial')
summary(m1.glm)
##
## Call:
## glm(formula = Survived ~ FamSize + Sex + Pclass, family = "binomial",
##     data = train)
##
## Deviance Residuals:
##     Min       1Q   Median       3Q      Max
## -2.2738  -0.7282  -0.4709   0.5903   2.4869
##
## Coefficients:
##             Estimate Std. Error z value Pr(>|z|)
## (Intercept)  2.50663    0.23843  10.513  < 2e-16 ***
## FamSize     -0.15038    0.05983  -2.514 0.011953 *
## Sexmale     -2.77670    0.19498 -14.241  < 2e-16 ***
## Pclass2     -0.84771    0.24661  -3.437 0.000587 ***
## Pclass3     -1.87347    0.21522  -8.705  < 2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## (Dispersion parameter for binomial family taken to be 1)
##
##     Null deviance: 1186.66  on 890  degrees of freedom
## Residual deviance:  820.12  on 886  degrees of freedom
## AIC: 830.12
##
## Number of Fisher Scoring iterations: 4
```

Consistent with my hunch about these features, they're all statistically significant. But to get a better sense of the model and its predictions, let's inspect the predicted/fitted values at all levels of the model's features. We apply the `predict` function to the model along with the chosen levels of the predictor variables in 'newdata'. The default for 'type' is the scale of the linear predictors (log-odds: probabilities on the logit scale). Here we'll choose the alternative, "response" (the scale of the response variable), which gives the predicted probabilities, a more transparent unit to interpret (at least for me).

```r
newdata <- data.frame(
  FamSize=rep(0:7, 6),
  Sex=rep(sort(rep(c('male','female'), 8)), 6),
  Pclass=as.factor(sort(rep(c('1','2','3'), 16))))

#Add predicted probabilities
newdata$PredProb <- predict(m1.glm, newdata, type = "response")

#Plot them at all feature levels
require(ggplot2)
ggplot(newdata, aes(x=FamSize, y=PredProb, color=Pclass)) +
  geom_line(aes(group=Pclass)) +
  geom_point() +
  geom_hline(yintercept = .5) +
  facet_grid(. ~ Sex)
```
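A quick aside on the 'type' argument: predictions on the default link (log-odds) scale, pushed through the inverse logit (`plogis`), reproduce `type = "response"` exactly. A self-contained check on `mtcars`, not the model above:

```r
toy <- glm(am ~ wt, data = mtcars, family = 'binomial')

#Predictions on the link (log-odds) scale...
lp <- predict(toy, type = "link")

#...become probabilities after the inverse-logit transform
all.equal(plogis(lp), predict(toy, type = "response"))
## [1] TRUE
```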

The model predicts survivors to be female, in smaller families, and in higher classes. It captures trends in the data, but it wrongly predicts no male survivors when we know that over 100 males did survive. So how well does this model predict the actual outcomes?

First we create a confusion matrix, from which we can compute various measures of classification performance. Two commonly used measures are “Classification (Observed) Accuracy” and Cohen's Kappa (originally conceived for inter-rater agreement).

Accuracy is simply how often the classifier is correct. Kappa measures how well the classifier performed compared to how well it would have performed by chance. A model with a high Kappa reflects a large discrepancy between the accuracy (how often the classifier correctly predicts the outcome) and the null error rate (how often the model would be wrong if it always predicted the majority class).

```r
#Attach the model's predicted outcomes to the existing data frame
train$predictedSurvival <- round(m1.glm$fitted.values)

#First we make the confusion matrix of counts, where actual outcomes
#are rows and predicted outcomes are columns
m1.cm <- as.matrix(table(train$Survived, train$predictedSurvival))

#Function that returns our accuracy measures;
#it takes a confusion matrix as its argument
ClassPerform = function(cm){

  #Classification (observed) accuracy
  #In signal detection terminology:
  #[h=hits; cr=correct rejections; fa=false alarms; m=misses]
  #(h + cr) / (h + cr + fa + m)
  #Or ...
  #(TruePositive + TrueNegative) / Total
  obs.acc <- (cm[1,1] + cm[2,2]) / sum(cm)

  #k = (observed accuracy - random accuracy) / (1 - random accuracy)
  #We already have observed accuracy, but we need random accuracy:
  #the probability of agreement expected by chance from the observed
  #outcomes and the model's predictions
  ObservedNegative  <- (cm[1,1] + cm[1,2]) / sum(cm)
  PredictedNegative <- (cm[1,1] + cm[2,1]) / sum(cm)
  ObservedPositive  <- (cm[2,1] + cm[2,2]) / sum(cm)
  PredictedPositive <- (cm[1,2] + cm[2,2]) / sum(cm)
  RandomACC <- (ObservedNegative*PredictedNegative) + (ObservedPositive*PredictedPositive)
  kappa <- (obs.acc - RandomACC) / (1 - RandomACC)

  #Return both measures
  return(data.frame(ClassAcc = obs.acc, Kappa = kappa))
}
```

So how well does model 1 predict the actual outcomes in the training data?

```r
m1.performance <- ClassPerform(m1.cm)
print(m1.performance)
##    ClassAcc    Kappa
## 1 0.8002245 0.566665
```

Not too bad. Kappa ranges between -1 and 1, where scores of zero or lower signify no agreement (a random relationship) between classifier and reality. Interpretation depends on the field and problem type, but one rule of thumb that has been offered is: above 0.75 excellent, 0.40 to 0.75 fair to good, below 0.40 poor. Let's keep going; we can do better.
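To make the kappa arithmetic concrete, here's a toy confusion matrix (invented counts, not results from this data) worked through by hand, following the same steps as `ClassPerform`:

```r
#Toy confusion matrix: actual outcomes in rows, predicted in columns
cm <- matrix(c(50, 5, 10, 35), nrow = 2,
             dimnames = list(actual = c('0','1'), predicted = c('0','1')))

#Observed accuracy: correct predictions over total
obs.acc <- (cm[1,1] + cm[2,2]) / sum(cm)  # (50 + 35) / 100 = 0.85

#Chance agreement: sum over classes of observed proportion
#times predicted proportion
rand.acc <- sum((rowSums(cm) / sum(cm)) * (colSums(cm) / sum(cm)))  # 0.51

#Kappa: how far accuracy exceeds chance, rescaled to a maximum of 1
kappa <- (obs.acc - rand.acc) / (1 - rand.acc)
round(kappa, 2)
## [1] 0.69
```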

## Model 2: Logistic regression (saturated model)

Let's use the same type of model but assume that all our features are good ones that should be included in the model.

```r
#Remove the predicted outcomes from before
train$predictedSurvival <- NULL

#Fit a logistic regression model with all the features (excluding passenger ID)
m2.glm <- glm(Survived ~ . - PassengerId, data=train, family='binomial')
summary(m2.glm)
##
## Call:
## glm(formula = Survived ~ . - PassengerId, family = "binomial",
##     data = train)
##
## Deviance Residuals:
##     Min       1Q   Median       3Q      Max
## -2.4023  -0.5598  -0.3756   0.5518   2.5263
##
## Coefficients:
##                       Estimate Std. Error z value Pr(>|z|)
## (Intercept)          2.906e+01  1.014e+03   0.029  0.97713
## Pclass2             -5.774e-01  4.103e-01  -1.407  0.15937
## Pclass3             -1.678e+00  4.136e-01  -4.057 4.97e-05 ***
## Sexmale             -2.790e+01  1.014e+03  -0.028  0.97804
## Age                 -3.198e-02  9.882e-03  -3.236  0.00121 **
## EmbarkedQ           -2.172e-01  3.946e-01  -0.550  0.58203
## EmbarkedS           -5.316e-01  2.493e-01  -2.132  0.03300 *
## honorificTypeMaster  1.622e+01  8.827e+02   0.018  0.98534
## honorificTypeMiss   -1.201e+01  4.983e+02  -0.024  0.98077
## honorificTypeMr     -1.320e-01  6.015e-01  -0.219  0.82626
## honorificTypeMrs    -1.107e+01  4.983e+02  -0.022  0.98228
## GoodCabinyes         8.652e-01  3.850e-01   2.247  0.02463 *
## BadCabinyes         -4.789e-01  3.943e-01  -1.215  0.22454
## FamSize             -4.299e-01  9.330e-02  -4.608 4.06e-06 ***
## Fare.pp              1.364e-03  9.228e-03   0.148  0.88249
## wcFirstyes          -1.314e+01  8.827e+02  -0.015  0.98813
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## (Dispersion parameter for binomial family taken to be 1)
##
##     Null deviance: 1186.66  on 890  degrees of freedom
## Residual deviance:  715.58  on 875  degrees of freedom
## AIC: 747.58
##
## Number of Fisher Scoring iterations: 13
```

This model's AIC is substantially lower than the simpler model's, despite it having several more terms (AIC penalizes complexity, i.e., the number of parameters). How well does this model predict the outcomes in the training data?
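As a reminder of what's behind that number: AIC = 2k - 2 log-likelihood, where k is the number of estimated parameters. A quick self-contained check against R's `AIC` (on `mtcars`, not the Titanic models):

```r
toy <- glm(am ~ wt, data = mtcars, family = 'binomial')

#Recompute AIC by hand from the log-likelihood and parameter count
ll <- logLik(toy)
manual.aic <- 2 * attr(ll, "df") - 2 * as.numeric(ll)
all.equal(manual.aic, AIC(toy))
## [1] TRUE
```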

```r
train$predictedSurvival <- round(m2.glm$fitted.values)
m2.cm <- as.matrix(table(train$Survived, train$predictedSurvival))
m2.performance <- ClassPerform(m2.cm)
print(m2.performance)
##    ClassAcc    Kappa
## 1 0.8372615 0.654238
```

Better than before. But inspection of the coefficients suggests there may be some uninformative features. One way to get a sense of which these might be is to sequentially compare more and more complex models with likelihood ratio tests. This is easy to do with the 'anova' function. Note, though, that the terms are added in the same order as they're specified in the model formula, so I refit the model placing the terms in what I think is their order of importance, based on my domain knowledge and the exploratory analyses I've conducted so far.

```r
m2.glm.ordered <- glm(Survived ~ Sex + Pclass + Age + FamSize + honorificType +
  GoodCabin + BadCabin + Embarked + Fare.pp + wcFirst,
  data=train, family="binomial")

#Analysis of deviance
deviance.table <- anova(m2.glm.ordered, test="Chisq")
print(deviance.table)
## Analysis of Deviance Table
##
## Model: binomial, link: logit
##
## Response: Survived
##
## Terms added sequentially (first to last)
##
##
##               Df Deviance Resid. Df Resid. Dev  Pr(>Chi)
## NULL                            890    1186.66
## Sex            1  268.851       889     917.80 < 2.2e-16 ***
## Pclass         2   90.916       887     826.89 < 2.2e-16 ***
## Age            1   24.176       886     802.71 8.791e-07 ***
## FamSize        1   14.113       885     788.60 0.0001721 ***
## honorificType  4   56.882       881     731.72 1.310e-11 ***
## GoodCabin      1    9.643       880     722.07 0.0019008 **
## BadCabin       1    1.290       879     720.78 0.2559915
## Embarked       2    4.758       877     716.02 0.0926273 .
## Fare.pp        1    0.017       876     716.01 0.8968401
## wcFirst        1    0.426       875     715.58 0.5139383
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#Visualize the reduction in the residual deviance
plot(deviance.table$`Resid. Dev`, type="o")
```

From the deviances and significance levels of each feature, it looks like we're justified in dropping 'BadCabin', 'Fare.pp', and 'wcFirst'.

## Model 3: Logistic regression (middle ground model)

Let's fit a simpler 3rd model without these features, with the hope that it represents a middle ground between complexity and predictive power.

```r
m3.glm <- glm(Survived ~ Sex + Pclass + Age + FamSize + honorificType +
  GoodCabin + Embarked, data=train, family="binomial")
summary(m3.glm)
##
## Call:
## glm(formula = Survived ~ Sex + Pclass + Age + FamSize + honorificType +
##     GoodCabin + Embarked, family = "binomial", data = train)
##
## Deviance Residuals:
##     Min       1Q   Median       3Q      Max
## -2.3017  -0.5664  -0.3772   0.5486   2.5202
##
## Coefficients:
##                       Estimate Std. Error z value Pr(>|z|)
## (Intercept)          15.837211 492.786269   0.032  0.97436
## Sexmale             -14.886862 492.786370  -0.030  0.97590
## Pclass2              -0.845554   0.324104  -2.609  0.00908 **
## Pclass3              -1.957946   0.306748  -6.383 1.74e-10 ***
## Age                  -0.031807   0.009785  -3.251  0.00115 **
## FamSize              -0.432291   0.080284  -5.385 7.26e-08 ***
## honorificTypeMaster   3.124046   0.797886   3.915 9.03e-05 ***
## honorificTypeMiss   -12.096458 492.786161  -0.025  0.98042
## honorificTypeMr      -0.104168   0.586068  -0.178  0.85893
## honorificTypeMrs    -11.162869 492.786169  -0.023  0.98193
## GoodCabinyes          1.069362   0.346830   3.083  0.00205 **
## EmbarkedQ            -0.202543   0.395174  -0.513  0.60827
## EmbarkedS            -0.509562   0.248663  -2.049  0.04044 *
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## (Dispersion parameter for binomial family taken to be 1)
##
##     Null deviance: 1186.66  on 890  degrees of freedom
## Residual deviance:  717.53  on 878  degrees of freedom
## AIC: 743.53
##
## Number of Fisher Scoring iterations: 13
```

As I suspected, the AIC is lower still: the dropped terms weren't worth their parameter penalty. How well does this 3rd model predict the outcomes in the training data?

```r
train$predictedSurvival <- round(m3.glm$fitted.values)
m3.cm <- as.matrix(table(train$Survived, train$predictedSurvival))
m3.performance <- ClassPerform(m3.cm)
print(m3.performance)
##    ClassAcc    Kappa
## 1 0.8383838 0.656051
```

Very close to the saturated model, but a touch more accurate. Given its reduced complexity, I favor this model substantially over the saturated one.

# Out-of-sample Prediction

So far we've assessed each model's explanatory power by inspecting in-sample model fit. But ultimately we're after out-of-sample prediction! We need to settle on a good model that can best predict something it has never seen before… hence the word 'predict'. Let's use cross validation to estimate how well each model performs on data it wasn't trained on and therefore can't have overfit. I suspect the AIC values we saw earlier hint at out-of-sample predictive performance, because minimizing the AIC is asymptotically equivalent to minimizing at least one type of cross validation value (Stone 1977). Although there are fast and efficient ways to implement what I do below (which I'll rely on later), it helped me understand the process by coding it (mostly) from scratch.

## k-fold cross validation

I'll create a function that performs k-fold CV and returns both performance measures we saw earlier: accuracy and kappa. The function takes three arguments: a data frame, the number of folds, and a model object. The basic idea is to split the data into k chunks, and use each chunk once as the validation data. The model trained on the remaining k-1 chunks tries to predict the outcomes in the validation set, and the resulting performance is aggregated across the k chunks.

```r
set.seed(99) # make the results repeatable from here on out
kFoldCV <- function(k, mod){

  #Get the data frame the model was fit to
  df <- mod$model

  #Randomly partition the data into k folds.
  #Use of sample here means we get a different random
  #set of folds each time the function is called
  folds <- cut(sample(seq_len(nrow(df))), breaks=k, labels=F)
  accuracy.mat <- matrix(nrow = k, ncol=2)

  for (i in 1:k) {
    #Create the training set from the k-1 subsamples
    trainFold <- subset(df, folds != i)

    #Create the validation set from the remaining subsample
    validationFold <- subset(df, folds == i)

    #Refit the model to the training folds only, so the validation
    #fold is truly unseen. The model frame holds just the outcome
    #(first column) and the predictors, so 'outcome ~ .' works here
    modFold <- glm(as.formula(paste(names(df)[1], "~ .")),
                   data = trainFold, family = mod$family)

    #Predict the outcomes in the validation set
    predValues <- predict(modFold, newdata = validationFold, type="response")

    #Classify predicted values into binary outcomes
    classifier <- ifelse(predValues < 0.5, 0, 1)

    #Make the confusion matrix
    #(get the name of the outcome variable from the model object)
    y <- validationFold[, names(mod$model)[1]]
    confusion.mat <- as.matrix(table(y, classifier))

    #Store the classification accuracy measures for that fold using
    #the 'ClassPerform' function created earlier
    accuracy.mat[i, ] <- t(ClassPerform(confusion.mat))
  }

  #Return both measures in long format
  return(data.frame(Measure=c('ClassAcc','Kappa'),
                    value=c(mean(accuracy.mat[,1]), mean(accuracy.mat[,2]))))
}
```
```

Now let's compute 10-fold CV for each model and inspect the results.

```r
m1.ACC <- kFoldCV(k = 10, m1.glm)
m2.ACC <- kFoldCV(k = 10, m2.glm)
m3.ACC <- kFoldCV(k = 10, m3.glm)

print(m1.ACC)
##    Measure     value
## 1 ClassAcc 0.8002747
## 2    Kappa 0.5673504
print(m2.ACC)
##    Measure     value
## 1 ClassAcc 0.8372534
## 2    Kappa 0.6537724
print(m3.ACC)
##    Measure    value
## 1 ClassAcc 0.838377
## 2    Kappa 0.653931
```

It looks like model 3 has a slight advantage in both cross-validated measures. Before continuing, I was curious about the relationship between the number of folds and predictive accuracy, so I ran several CVs at fold sizes from 2 to 60 for model 3 and plotted the relationships.

```r
numFolds <- data.frame(matrix(nrow=59, ncol=2))
for (f in 2:60){
  numFolds[f-1, ] <- kFoldCV(f, m3.glm)[, 2]
}
```

Does number of folds influence observed accuracy? Doesn't look like it, but the variance of observed accuracy seems to increase with k.

```r
x <- 2:60
y1 <- numFolds[,1]
y2 <- numFolds[,2]
plot(x, y1, xlab='No. folds', ylab='Accuracy')
lo <- loess(y1 ~ x)
lines(x, predict(lo), col='red', lwd=2)
```

Does number of folds influence Kappa? Actually yes, it looks like the more folds, the lower the kappa, and the higher the variance. Neither relationship suggests using more than 10 folds–at least for these data (of these dimensions).

```r
plot(x, y2, xlab='No. folds', ylab='Kappa')
lo <- loess(y2 ~ x)
lines(x, predict(lo), col='red', lwd=2)
```

## Repeated k-fold cross validation

In k-fold cross validation on a dataset of this size, there are many, many ways to cut the training data into k folds, but we implemented just one of them, randomly selected. To account for the random variability introduced by this selection, statisticians recommend repeating the k-fold CV process several times and averaging the accuracy measures across replicates. Let's repeat the validation 1000 times.

```r
reps = 1000 # number of times to replicate the random k-fold selection process
m1.reps <- do.call(rbind, lapply(X=1:reps, FUN=function(x) kFoldCV(10, m1.glm)))
m2.reps <- do.call(rbind, lapply(X=1:reps, FUN=function(x) kFoldCV(10, m2.glm)))
m3.reps <- do.call(rbind, lapply(X=1:reps, FUN=function(x) kFoldCV(10, m3.glm)))

repeatedPerform <- rbind(m1.reps, m2.reps, m3.reps)
repeatedPerform$mod <- sort(rep(c('m1','m2','m3'), reps*2))

#Create data frame with means of repeated CV for each measure and model
require(plyr)
repCVperformance <- ddply(repeatedPerform, .(mod, Measure), summarize, value = mean(value))

#Arrange columns in the same order as usual
repCVperformance <- repCVperformance[,c(2,3,1)]
print(repCVperformance)
##    Measure     value mod
## 1 ClassAcc 0.8002219  m1
## 2    Kappa 0.5638397  m1
## 3 ClassAcc 0.8372607  m2
## 4    Kappa 0.6515769  m2
## 5 ClassAcc 0.8383829  m3
## 6    Kappa 0.6534456  m3
```

I placed all the performance measures, from the original in-sample model fits and from the single and repeated cross validations, in a data frame that we'll use for plotting below.

```r
require(reshape)
inSample <- melt(rbind(m1.performance, m2.performance, m3.performance), variable_name = "Measure")
inSample$mod <- rep(c('m1','m2','m3'), 2)
inSample$Method <- "InSample"

singleCV <- rbind(m1.ACC, m2.ACC, m3.ACC)
singleCV$mod <- sort(rep(c('m1','m2','m3'), 2))
singleCV$Method <- "SingleCV"

repCVperformance$Method <- "RepeatedCV"

#Combine all into one data frame
allMeasures <- rbind(inSample, singleCV, repCVperformance)
allMeasures$mod <- as.factor(allMeasures$mod)
allMeasures$Method <- as.factor(allMeasures$Method)
```

Let's visualize the variability, across repeated k-fold validations, of both classification performance measures. I'll add vertical lines to each histogram marking the accuracy measures from the single cross validation run (dashed line), the mean accuracy measures from the repeated cross validation runs (solid line), and the original accuracy measures from the in-sample model fit (red line).

```r
histo = function(measure){
  CVreps <- subset(repeatedPerform, Measure==measure)
  Values <- subset(allMeasures, Measure==measure)

  ggplot(CVreps, aes(x = value)) +
    geom_histogram(colour = "grey", fill = "grey", binwidth = 0.0001) +
    facet_grid(. ~ mod, scales = "free") +
    geom_vline(data=Values[Values$Method=='SingleCV', ], aes(xintercept=value), linetype="dashed") +
    geom_vline(data=Values[Values$Method=='RepeatedCV', ], aes(xintercept=value)) +
    geom_vline(data=Values[Values$Method=='InSample', ], aes(xintercept=value), color="red") +
    xlab(measure) +
    theme_bw(base_size = 11)
}

#Histogram of mean observed accuracy after repeated k-fold cross validation
histo("ClassAcc")
```

```r
#Histogram of mean Kappa after repeated k-fold cross validation
histo("Kappa")
```

There is sampling variability in both measures, but more so in Kappa; judging by the x-axis, the variability in accuracy is quite small. Still, the vertical lines don't coincide, which shows that the in-sample fit and the single cross validation run each differ from the repeated-CV mean. Also notice that for Kappa, the in-sample fit overstated the estimate relative to the repeated CV estimate… for every model.

The main point of the single and repeated cross validation, however, was to compare different models so we can choose the best model. Here's a simple plot of the performance measures from the candidate models (2 and 3), but on the same scale this time to facilitate cross-model comparison.

```r
betterModels <- subset(allMeasures, mod!="m1")

#Put the Method factor levels in a sensible order for the legend
betterModels$Method <- factor(betterModels$Method,
                              levels = c("InSample", "SingleCV", "RepeatedCV"))

ggplot(betterModels, aes(x = mod, y=value, group=Method, color=Method)) +
  geom_point() +
  geom_line() +
  facet_grid(Measure ~ ., scales = "free_y") +
  theme_bw(base_size = 11)
```

Again we see the negligible sampling variability of observed accuracy. Cross validation didn't provide any new information. Kappa varies more, and again we see the inflated estimates from the in-sample model fit. But in this particular case, it looks like every measure points to the same conclusion: model 3 is the best model we have so far. I'll stop this post here then, but not before we see how this model does on the actual Kaggle test data. Time for a submission.

# Kaggle Submission

Earlier we prepared the Kaggle test data. So all that's required now is to add a column to 'test' that contains the predicted outcomes according to model 3, remove the other columns (except the index column: PassengerId), write to a .csv file, and drag it onto the Kaggle submission box.

```r
#Predict binary outcomes and attach
test$Survived <- round(predict(m3.glm, test, type = "response"), 0)
submission <- data.frame(PassengerId = test$PassengerId, Survived = test$Survived)

#Write to .csv
write.csv(submission, 'model3.csv', row.names = FALSE)
```

I went to the Kaggle website and dragged my csv file to the submission box. Here's how model #3 fared in predicting the outcomes in the previously unseen test data.

What's the performance metric? The score in the Titanic challenge “is the percentage of passengers you correctly predict.” So despite cross validating during model selection, it looks like my model overfit the training data, as signified by the substantial decrease in accuracy on the Kaggle test set. But the eventual winner of a Kaggle competition is judged on the model's performance on a brand new test set that's held out until the end of the competition. So we don't actually know whether this decrease from training to test will translate into a decrease from training to the real test data.

In the next post I'll try to improve upon this score by delving into the world of recursive partitioning (classification trees, random forests) and by taking advantage of the versatile `caret` package.