
Decision Tree Case Study 1

Data Set Information

Source:

[Moro et al., 2014] S. Moro, P. Cortez and P. Rita. A Data-Driven Approach to Predict the Success of Bank Telemarketing. Decision Support Systems, Elsevier, 62:22-31, June 2014

Description:

The data is from the direct marketing campaigns of a Portuguese banking institution. The marketing campaigns were based on phone calls. Often, more than one contact with the same client was required in order to assess whether the client would ('yes') or would not ('no') subscribe to the product (a bank term deposit).

The data set, bank-additional-full.csv, contains all 41,188 examples and 20 input variables, ordered by date (from May 2008 to November 2010), and is very close to the data analyzed in [Moro et al., 2014]. You can download the data set from the following link:

https://s3.amazonaws.com/acadgildsite/wordpress_images/datasets/bank/bank-additional-full.csv
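If you prefer not to download the file manually, pandas can read it directly from that URL (the file is semicolon-separated); a minimal sketch, assuming a working internet connection:

import pandas as pd

# read the semicolon-separated file straight from the link above
bank = pd.read_csv('https://s3.amazonaws.com/acadgildsite/wordpress_images/datasets/bank/bank-additional-full.csv', sep=';')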

Attribute Information:

Input variables:

# bank client data:

1 – age (numeric)

2 – job: type of job (categorical: 'admin.','blue-collar','entrepreneur','housemaid','management','retired','self-employed','services','student','technician','unemployed','unknown')

3 – marital: marital status (categorical: 'divorced','married','single','unknown'; note: 'divorced' means divorced or widowed)

4 – education (categorical: 'basic.4y','basic.6y','basic.9y','high.school','illiterate','professional.course','university.degree','unknown')

5 – default: has credit in default? (categorical: 'no','yes','unknown')

6 – housing: has housing loan? (categorical: 'no','yes','unknown')

7 – loan: has personal loan? (categorical: 'no','yes','unknown')

# related with the last contact of the current campaign:

8 – contact: contact communication type (categorical: 'cellular','telephone')

9 – month: last contact month of year (categorical: 'jan', 'feb', 'mar', …, 'nov', 'dec')

10 – day_of_week: last contact day of the week (categorical: 'mon','tue','wed','thu','fri')

11 – duration: last contact duration, in seconds (numeric). Important note: this attribute highly affects the output target (e.g., if duration=0 then y='no'). Yet, the duration is not known before a call is performed, and after the end of the call y is obviously known. Thus, this input should only be included for benchmarking purposes and should be discarded if the intention is to have a realistic predictive model (see the preprocessing sketch after this list).

# other attributes:

12 – campaign: number of contacts performed during this campaign and for this client (numeric, includes last contact)

13 – pdays: number of days that passed by after the client was last contacted from a previous campaign (numeric; 999 means client was not previously contacted)

14 – previous: number of contacts performed before this campaign and for this client (numeric)

15 – poutcome: outcome of the previous marketing campaign (categorical: 'failure','nonexistent','success')

# social and economic context attributes

16 – emp.var.rate: employment variation rate – quarterly indicator (numeric)

17 – cons.price.idx: consumer price index – monthly indicator (numeric)

18 – cons.conf.idx: consumer confidence index – monthly indicator (numeric)

19 – euribor3m: euribor 3 month rate – daily indicator (numeric)

20 – nr.employed: number of employees – quarterly indicator (numeric)

Output variable (desired target):

21 – y – has the client subscribed to a term deposit? (binary: 'yes','no')
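Two of the notes above translate directly into preprocessing steps: 'duration' (attribute 11) should be left out of a realistic model, and the sentinel value 999 in 'pdays' (attribute 13) can be recoded as an explicit "not previously contacted" flag. A minimal sketch of both, once the data is loaded into a DataFrame named bank as below (the walkthrough in this post instead keeps the raw columns and simply omits 'duration' from the feature list; the flag column name here is just illustrative):

# drop 'duration' so the model only sees information available before the call is made
bank = bank.drop(columns=['duration'])

# replace the 999 sentinel in 'pdays' with an explicit indicator column
bank['not_previously_contacted'] = (bank.pdays == 999).astype(int)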

Problem Statement: The data comes from the direct marketing campaigns (phone calls) of a Portuguese banking institution. The classification goal is to predict whether a client will subscribe to a term deposit.

Import libraries and tools

import pandas as pd

# read the csv file and store it in the 'bank' DataFrame

bank = pd.read_csv('datasets/bank-additional/bank-additional/bank-additional-full.csv', sep=';')

bank.head()

Output:

# list all columns (for reference)

bank.columns

Output:

Index(['age', 'job', 'marital', 'education', 'default', 'housing', 'loan',
       'contact', 'month', 'day_of_week', 'duration', 'campaign', 'pdays',
       'previous', 'poutcome', 'emp.var.rate', 'cons.price.idx',
       'cons.conf.idx', 'euribor3m', 'nr.employed', 'y'],
      dtype='object')

#  y (response)

# convert the response to numeric values and store as a new column

bank['outcome'] = bank.y.map({'no':0, 'yes':1})

Comment on Features

## 1. age

%matplotlib inline

# probably not a great feature, since there are a lot of outliers

bank.boxplot(column='age', by='outcome')

## 2. job

# looks like a useful feature: the subscription rate varies noticeably across job categories

bank.groupby('job').outcome.mean()

Output:

job

admin.           0.129726

blue-collar      0.068943

entrepreneur     0.085165

housemaid        0.100000

management       0.112175

retired          0.252326

self-employed    0.104856

services         0.081381

student          0.314286

technician       0.108260

unemployed       0.142012

unknown          0.112121

Name: outcome, dtype: float64

# create job_dummies (we will add them to the bank DataFrame later)

job_dummies = pd.get_dummies(bank.job, prefix='job')

# drop the first dummy column, since it is implied by the remaining ones
job_dummies.drop(job_dummies.columns[0], axis=1, inplace=True)

## 3. default

# looks like a useful feature

bank.groupby('default').outcome.mean()

Output:

default

no         0.12879

unknown    0.05153

yes        0.00000

Name: outcome, dtype: float64

# but only three clients in the dataset have a status of 'yes'

bank.default.value_counts()

Output:

no         32588
unknown     8597
yes            3

Name: default, dtype: int64

# so, let's treat this as a 2-class feature rather than a 3-class feature

bank['default'] = bank.default.map({'no':0, 'unknown':1, 'yes':1})

## 4. contact

# convert the feature to numeric values

bank['contact'] = bank.contact.map({'cellular':0, 'telephone':1})

## 5. month

# looks like a useful feature at first glance

bank.groupby('month').outcome.mean()

Output:

month

apr    0.204787

aug    0.106021

dec    0.489011

jul    0.090466

jun    0.105115

mar    0.505495

may    0.064347

nov    0.101439

oct    0.438719

sep    0.449123

Name: outcome, dtype: float64

# but the months with high success rates are also the months with very few calls,

# so the month feature is unlikely to generalize

bank.groupby('month').outcome.agg(['count', 'mean']).sort_values('count')

Output: (call count and mean success rate for each month, sorted by count)
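To quantify that relationship, one could correlate the monthly call volume with the monthly success rate; a minimal sketch:

# Spearman rank correlation between call volume and success rate per month
month_stats = bank.groupby('month').outcome.agg(['count', 'mean'])
print(month_stats['count'].corr(month_stats['mean'], method='spearman'))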

## 6. duration

# looks like an excellent feature, but you can't know the duration of a call beforehand, thus it can't be used in your model

bank.boxplot(column='duration', by='outcome')

## 7.1. previous

# looks like a useful feature

bank.groupby('previous').outcome.mean()

Output:

previous

0    0.088322

1    0.212015

2    0.464191

3    0.592593

4    0.542857

5    0.722222

6    0.600000

7    0.000000

Name: outcome, dtype: float64

## 7.2. poutcome

# looks like a useful feature

bank.groupby('poutcome').outcome.mean()

Output:

poutcome

failure        0.142286

nonexistent    0.088322

success        0.651129

Name: outcome, dtype: float64

# create poutcome_dummies

poutcome_dummies = pd.get_dummies(bank.poutcome, prefix='poutcome')

poutcome_dummies.drop(poutcome_dummies.columns[0], axis=1, inplace=True)

# concatenate bank DataFrame with job_dummies and poutcome_dummies

bank = pd.concat([bank, job_dummies, poutcome_dummies], axis=1)

## 8. euribor3m

# prepare a boxplot of euribor3m by outcome, and comment on the 'euribor3m' feature

# looks like an excellent feature

bank.boxplot(column='euribor3m', by='outcome')

Model building

# create an X DataFrame containing 'default', 'contact', 'previous', 'euribor3m' and the 13 dummy columns

feature_cols = ['default', 'contact', 'previous', 'euribor3m'] + list(bank.columns[-13:])

X = bank[feature_cols]

# create y

y = bank.outcome

X.head()

Output:

# evaluate the model by splitting into train and test sets

from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=12)
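Because 'yes' outcomes are a small minority of the data, a stratified split keeps the class balance identical in the train and test sets; a minimal variant of the call above:

# same split, but stratified on the target so both sets have the same 'yes' rate
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=12, stratify=y)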

# fit a decision tree classifier with a maximum depth of 6 (a shallow tree is easier to visualize and less prone to overfitting)

from sklearn.tree import DecisionTreeClassifier

model = DecisionTreeClassifier(max_depth=6)

model.fit(X_train, y_train)

Output:

DecisionTreeClassifier(class_weight=None, criterion='gini', max_depth=6,
                       max_features=None, max_leaf_nodes=None,
                       min_impurity_decrease=0.0, min_impurity_split=None,
                       min_samples_leaf=1, min_samples_split=2,
                       min_weight_fraction_leaf=0.0, presort=False, random_state=None,
                       splitter='best')

# store the predictions in the 'predicted' array

predicted = model.predict(X_test)

# Import metrics

from sklearn import metrics

# generate evaluation metrics

print(metrics.accuracy_score(y_test, predicted))

Output: 0.8943918426802622

# Print out the confusion matrix

print(metrics.confusion_matrix(y_test, predicted))

Output:

[[10749   161]
 [ 1144   303]]

# Print out the classification report, and check the f1 score

print(metrics.classification_report(y_test, predicted))

Output:

              precision    recall  f1-score   support

           0       0.90      0.99      0.94     10910
           1       0.65      0.21      0.32      1447

 avg / total       0.87      0.89      0.87     12357
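Accuracy alone is misleading here: always predicting 'no' would already score 10910/12357 ≈ 0.88 on this test set, and the recall for the 'yes' class is only 0.21. A threshold-independent complement is a cross-validated AUC; a minimal sketch using the same model settings:

from sklearn.model_selection import cross_val_score

# 10-fold cross-validated area under the ROC curve on the full feature matrix
auc_scores = cross_val_score(DecisionTreeClassifier(max_depth=6), X, y, cv=10, scoring='roc_auc')
print(auc_scores.mean())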

Model Visualisation

import numpy as np, pandas as pd, matplotlib.pyplot as plt, pydotplus

from sklearn import tree, metrics, model_selection, preprocessing

from IPython.display import Image, display

dot_data = tree.export_graphviz(model,
                                out_file=None,
                                filled=True,
                                rounded=True)

graph = pydotplus.graph_from_dot_data(dot_data)

display(Image(graph.create_png()))

graph.write_png('decTreeOutput.png')
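If Graphviz/pydotplus is not installed, scikit-learn (version 0.21 and later) includes a built-in plotting helper that produces a similar figure without external dependencies; a minimal sketch that also passes the feature names so the split conditions are readable:

from sklearn.tree import plot_tree

plt.figure(figsize=(20, 10))
plot_tree(model, feature_names=feature_cols, filled=True, rounded=True)
plt.show()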
