
Question


The file Accidents.csv below contains information on 42,183 actual automobile accidents in 2001 in the United States that involved one of three levels of injury:

NO INJURY

INJURY, or

FATALITY.

For each accident, additional information is recorded, such as day of the week, weather conditions, and road type. A firm might be interested in developing a system for quickly classifying the severity of an accident based on initial reports and associated data in the system (some of which rely on GPS-assisted reporting). You will use it to practice data mining in R.

Partition the data into training (60%) and validation (40%).

Assuming that no information or initial reports about the accident itself are available at the time of prediction (only location characteristics, weather conditions, etc.), which predictors can we include in the analysis? (Use the Data_Codes sheet.)

Run a naive Bayes classifier on the complete training set with the relevant predictors (and INJURY as the response). Note that all predictors are categorical. Show the confusion matrix. Then:

What is the overall error for the validation set? What is the percent improvement relative to the naive rule (using the validation set)? Examine the conditional probabilities output.

Why do we get a probability of zero for P(INJURY = No | SPD_LIM = 5)?


Dataset: https://github.com/MyGitHub2120/Accidentsdataset

Here is my code:

installed.packages("prob")

installed.packages("data.table")

installed.packages("e1071")

installed.packages("caret")

installed.packages("naivebayes")

installed.packages("caTools")

# (1) if an accident has just been reported and no further information is available,

# what should the prediction be? (INJURY = Yes or No?) Why?

# load the data (select AccidentsFull.csv when prompted)
AccidentsFull <- read.csv(file.choose())

accidents.df <- AccidentsFull

accidents.df$INJURY <- ifelse(accidents.df$MAX_SEV_IR>0, "yes", "no")

head(accidents.df[,c("INJURY","WEATHER_R", "TRAF_CON_R")], 12)

inj.tbl<-table(accidents.df$INJURY_CRASH)

prob.inj<-(inj.tbl['1'])/(inj.tbl['1'] + inj.tbl['0'])

prob.inj
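# Optional cross-check (assumption: INJURY_CRASH in the raw file is the 0/1 indicator of
# MAX_SEV_IR > 0, so the proportion below should match prob.inj)
prop.table(table(accidents.df$INJURY))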

# If an accident has just been reported and nothing else is known, predict INJURY = no,

# because the overall proportion of injury accidents is below 50% (49.77%).

# (2)(a) Select the first 12 records in the dataset and look only at the response (INJURY) and the two predictors WEATHER_R and TRAF_CON_R. Then:

#Create a pivot table that examines INJURY as a function of the two predictors for these 12 records. Use all three variables in the pivot table as rows/columns.

ftable(accidents.df[1:12, c("INJURY","WEATHER_R","TRAF_CON_R")])

# (2)(b) Compute the exact Bayes conditional probabilities of an injury (INJURY = Yes)

# given the six possible combinations of the predictors.

numerator1<-2/3 * 3/12

denominator1<- 3/12

prob1<-numerator1 / denominator1

prob1

# P(Injury=yes|WEATHER_R = 1, TRAF_CON_R =0) = 0.667

numerator2<- 0/3 * 3/12

denominator2<- 1/12

prob2<- numerator2/denominator2

prob2

# P(Injury=yes|WEATHER_R = 1, TRAF_CON_R =1) = 0

numerator3<- 0/3 * 3/12

denominator3<- 1/12

prob3<-numerator3/denominator3

prob3

# P(Injury=yes|WEATHER_R = 1, TRAF_CON_R =2) = 0

numerator4<- 1/3 * 3/12

denominator4<- 6/12

prob4<- numerator4/denominator4

prob4

# P(Injury=yes|WEATHER_R = 2, TRAF_CON_R =0) = 0.167

numerator5<- 0/3 * 3/12

denominator5<- 1/12

prob5<- numerator5/denominator5

prob5

# P(Injury=yes|WEATHER_R = 2, TRAF_CON_R =1) = 0
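# Optional sketch (not part of the original answer): compute the same exact conditional
# probabilities directly from the 12 records instead of hard-coding the counts.
df12 <- accidents.df[1:12, c("INJURY", "WEATHER_R", "TRAF_CON_R")]
tab12 <- table(df12$WEATHER_R, df12$TRAF_CON_R, df12$INJURY)
# P(INJURY = yes | WEATHER_R, TRAF_CON_R); NaN marks predictor combinations with no records
prop.table(tab12, margin = c(1, 2))[, , "yes"]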

#(2)(c) Classify the 12 accidents using these probabilities and a cutoff of 0.5.

accidents<-c(0.667,0.167,0,0,0.667,0.167,0.667,0.167,0.167,0.167,0)

accidents$prob.inj<- prob.inj

accidents

accidents$pred.prob<- ifelse(accidents$prob.inj>0.5, "yes", "no")

accidents

#(2)(d) Compute manually the naive Bayes conditional probability of an injury given WEATHER_R = 1 and TRAF_CON_R = 1.

prob<- 2/3 * 0/3 * 3/12

prob

# P(Injury=yes|WEATHER_R = 1, TRAF_CON_R =1) = 0
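# Optional sketch: the full naive Bayes probability normalizes the class numerators,
# P(yes | W=1, T=1) = num.yes / (num.yes + num.no), computed here from the 12 records
# (df12 from the sketch above) rather than from hard-coded fractions.
p.yes  <- mean(df12$INJURY == "yes")
p.no   <- 1 - p.yes
p.w1.y <- mean(df12$WEATHER_R[df12$INJURY == "yes"] == 1)
p.t1.y <- mean(df12$TRAF_CON_R[df12$INJURY == "yes"] == 1)
p.w1.n <- mean(df12$WEATHER_R[df12$INJURY == "no"] == 1)
p.t1.n <- mean(df12$TRAF_CON_R[df12$INJURY == "no"] == 1)
num.yes <- p.w1.y * p.t1.y * p.yes
num.no  <- p.w1.n * p.t1.n * p.no
num.yes / (num.yes + num.no)
# This should come out to 0: as in the manual calculation above, no injury record has
# TRAF_CON_R = 1, so P(TRAF_CON_R = 1 | INJURY = yes) = 0 and the numerator collapses.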

#(2)(e) Run a naive Bayes classifier on the 12 records and two predictors using R.

# Check the model output to obtain probabilities and classifications for all 12 records. Compare this to the exact Bayes classification.

# Are the resulting classifications equivalent? Is the ranking (= ordering) of observations equivalent?

library(e1071)

# make the response and the two predictors categorical (factors) before fitting
for (v in c("INJURY", "WEATHER_R", "TRAF_CON_R")) accidents.df[, v] <- as.factor(accidents.df[, v])

nb <- naiveBayes(INJURY ~ TRAF_CON_R + WEATHER_R, data = accidents.df[1:12, ])

predict(nb, newdata = accidents.df[1:12, c("INJURY", "WEATHER_R", "TRAF_CON_R")], type = "raw")

# The classifications are not identical. However, if we rank the observations by probability of injury,

# exact Bayes and naive Bayes produce the same ordering, so the classifications could be made equal

# by adjusting the cutoff threshold for success.
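# Optional sketch: line up the naive Bayes probabilities and their ranking so they can be
# compared record by record with the exact Bayes probabilities from 2(b).
nb.prob <- predict(nb, newdata = accidents.df[1:12, c("WEATHER_R", "TRAF_CON_R")], type = "raw")[, "yes"]
data.frame(record = 1:12, nb.prob = round(nb.prob, 3), nb.rank = rank(-nb.prob, ties.method = "min"))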

#(3)(a) Partition the data into training (60%) and validation (40%).

# alternative partition approach (kept commented out):
#spec = c(train = .6, validate = .4)

# accidents.df <- read.csv('AccidentsFull.csv')

#df=data.frame(accidents.df)

#g = sample(cut(

# seq(nrow(df)),

#nrow(df)*cumsum(c(0,spec)),

#labels = names(spec)

#))

#res=split(df,g)

#train=res$train

#validate=res$validate

#convert your variables to categorical type

for (i in c(1:dim(accidents.df)[2])){

accidents.df[,i] <- as.factor(accidents.df[,i])

}

# Splitting the data set into training (60%) and validation (40%) sets

library(caTools)

set.seed(123)

split = sample.split(accidents.df$INJURY,SplitRatio = 0.6)

training_set = subset(accidents.df, split == TRUE)

valid_set = subset(accidents.df, split == FALSE)
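# quick check of the 60/40 partition and that the injury rate is similar in both partitions
prop.table(table(split))
prop.table(table(training_set$INJURY))
prop.table(table(valid_set$INJURY))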

#(3)(b) Assuming that no information or initial reports about the accident itself are available at the time of prediction (only location characteristics, weather conditions, etc.), which predictors can we include in the analysis? (Use the Data_Codes sheet.)

# Predictors that can be used: WRK_ZONE, RushHour, WKDY, INT_HWY, LGTCON_day, SPD_LIM,

# TRAF_two_way, SUR_COND_dry, WEATHER_adverse (INJURY is kept in the data frame as the response)

#(3)(c) Run a naive Bayes classifier on the complete training set with the relevant predictors (and INJURY as the response). Note that all predictors are categorical. Show the confusion matrix.

# define which variables will be used (columns selected by position)

variable_train <- training_set[c(1,2,3,4,5,7,8,9,10,13)]

variable_valid <- valid_set[c(1,2,3,4,5,7,8,9,10,13)]

nbTotal <- naiveBayes(INJURY ~ ., data = variable_train)

library(caret)

confusionMatrix(predict(nbTotal, variable_train), training_set$INJURY, positive = "yes")
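# Validation-set confusion matrix, overall error, and comparison with the naive rule.
# Sketch, assuming the column selection above keeps SPD_LIM among the predictors:
valid.pred <- predict(nbTotal, variable_valid)
cm.valid <- confusionMatrix(valid.pred, valid_set$INJURY, positive = "yes")
cm.valid
overall.error <- 1 - as.numeric(cm.valid$overall["Accuracy"])
overall.error
# naive rule: classify every validation record as the majority class of the training set
majority.class <- names(which.max(table(training_set$INJURY)))
naive.error <- mean(valid_set$INJURY != majority.class)
(naive.error - overall.error) / naive.error   # fractional improvement over the naive rule
# conditional probabilities P(SPD_LIM | INJURY): a zero in the "no" row for SPD_LIM = 5 means
# no training record with SPD_LIM = 5 had INJURY = no, so naive Bayes assigns that cell probability 0
nbTotal$tables$SPD_LIM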
