@Perfect-Demo
2018-12-09
Machine Learning / Deep Learning
This is an analysis of the Titanic dataset on Kaggle. Using the various features in the data, we predict whether each passenger survived, compare the predictions with the actual survival labels, and obtain an accuracy. Below we cover prediction with linear regression, with a random forest, and with a combined (Boosting) ensemble.
First, import the numerical libraries and load the data.
import pandas as pd
import numpy as np

titanic = pd.read_csv("train.csv")
Next comes data preprocessing: fill in the missing (NaN) values in Age, use .map() to replace the strings in the Sex column with 0 and 1, and likewise map the Embarked column to the three values 0-2.
# Data preprocessing: fill the NaN values in Age with the median
titanic["Age"] = titanic["Age"].fillna(titanic["Age"].median())
titanic.describe()

print(titanic["Sex"].unique())
# Map male and female in the Sex column to 0 and 1
Sex_map = {'male': 0, 'female': 1}
titanic['Sex'] = titanic['Sex'].map(Sex_map)
titanic.head()

# Likewise, map the Embarked column
print(titanic["Embarked"].unique())
Embarked_map = {'S': 0, 'C': 1, 'Q': 2}
titanic["Embarked"] = titanic["Embarked"].map(Embarked_map)
titanic["Embarked"] = titanic["Embarked"].fillna(0)   # fill with the integer 0, not the string '0'
titanic.head()
At this point the data preprocessing is done.
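As a quick sanity check (a small sketch on the same titanic dataframe; the column list is just the columns we plan to use), we can confirm that no missing values remain:

# Remaining missing values per column after preprocessing; every count should be 0
cols = ["Pclass", "Sex", "Age", "SibSp", "Parch", "Fare", "Embarked", "Survived"]
print(titanic[cols].isnull().sum())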
Linear Regression
First import the libraries; the functions we use come from the machine-learning library sklearn.
from sklearn.linear_model import LinearRegression
from sklearn.cross_validation import KFold   # removed in sklearn 0.20; newer versions use sklearn.model_selection
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
Then we start the prediction analysis.
predictors = ["Pclass", "Sex", "Age", "SibSp", "Parch", "Fare", "Embarked"]
alg = LinearRegression()

# Split into training and test sets by percentage
x_train, x_test, y_train, y_test = train_test_split(
    titanic[predictors], titanic["Survived"],
    test_size=0.2, train_size=0.8, random_state=1)

# Standardize the features
sc_x = StandardScaler()
x_train = sc_x.fit_transform(x_train)
x_test = sc_x.transform(x_test)

# LinearRegression is deterministic, so every iteration gives the same result
for i in range(1, 101):
    alg.fit(x_train, y_train)
    # print("Training round", i)
    test_hat = alg.predict(x_test)
    test_hat[test_hat > 0.5] = 1
    test_hat[test_hat <= 0.5] = 0
    # Note: this sums the predicted labels at positions where prediction == truth,
    # so it only counts the correctly predicted survivors (label 1)
    rate = sum(test_hat[test_hat == y_test]) / len(test_hat)
    print(rate)
Key point: this prediction accuracy is, frankly, dismal, and at first I had no idea why. For a binary classification problem even random guessing lands around 50%, yet this comes out at about 27%. The real culprit is the accuracy formula itself: sum(test_hat[test_hat == y_test]) adds up the predicted labels at the positions where prediction and truth agree, so it only counts the correctly predicted survivors (label 1) and ignores every correctly predicted 0.
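For reference, a minimal sketch of a corrected accuracy computation (same x_train/x_test split and scaler as above; one fit is enough since LinearRegression is deterministic):

alg = LinearRegression()
alg.fit(x_train, y_train)

# Threshold the regression output at 0.5 to get class labels
test_hat = (alg.predict(x_test) > 0.5).astype(int)

# Count every correct prediction (both 0s and 1s), not only the matched 1s
accuracy = np.mean(test_hat == y_test.values)
print(accuracy)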
Random Forest
That said, the random forest prediction that follows gives results that are much easier on the eyes. Using the same setup as above, we build a random forest predictor with sklearn.
# First import the libraries
from sklearn import cross_validation
from sklearn.ensemble import RandomForestClassifier

predictors = ["Pclass", "Sex", "Age", "SibSp", "Parch", "Fare", "Embarked"]
alg = RandomForestClassifier(random_state=1, n_estimators=10,
                             min_samples_split=2, min_samples_leaf=1)
kf = cross_validation.KFold(titanic.shape[0], n_folds=3, random_state=1)
scores = cross_validation.cross_val_score(alg, titanic[predictors], titanic["Survived"], cv=kf)
print(scores.mean())
In the end we get an accuracy of:
0.7856341189674523
For a binary classification problem this accuracy is only so-so; after all, with a CNN on an image binary-classification task we can usually reach 93% or better. So let's try tuning the hyperparameters:
predictors = ["Pclass", "Sex", "Age", "SibSp", "Parch", "Fare", "Embarked"]
alg = RandomForestClassifier(random_state=1, n_estimators=100,
                             min_samples_split=6, min_samples_leaf=2)
kf = cross_validation.KFold(titanic.shape[0], n_folds=3, random_state=1)
scores = cross_validation.cross_val_score(alg, titanic[predictors], titanic["Survived"], cv=kf)
print(scores.mean())
0.8305274971941637
We finally broke the 80% mark, but the result is still only so-so.
Note that I initially used 50 trees, a split threshold of 4, and 2 samples per leaf, which gave roughly 81% accuracy; tuning the parameters step by step led to the values above.
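Instead of tuning by hand, one could let sklearn search the grid; a small sketch with GridSearchCV (the parameter grid here is just an illustrative guess, and the import path assumes a reasonably recent sklearn):

from sklearn.model_selection import GridSearchCV

# Candidate hyperparameters to try; every combination is cross-validated
param_grid = {
    "n_estimators": [50, 100, 200],
    "min_samples_split": [2, 4, 6, 8],
    "min_samples_leaf": [1, 2, 4],
}
grid = GridSearchCV(RandomForestClassifier(random_state=1), param_grid, cv=3)
grid.fit(titanic[predictors], titanic["Survived"])
print(grid.best_params_, grid.best_score_)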
This raises a question worth thinking about: in real data analysis, we often have to construct features ourselves from the raw data. Below we derive some new features from the original columns.
For example, by inspecting the data we notice:
1. If a family travels together, would helping each other raise their chance of survival?
2. Name lengths also vary a lot; being a bit superstitious, could name length affect survival?
3. Looking more closely, names contain titles such as Mr, Miss, Sir and so on, which to some extent indicate social status; if we extract them, we can analyze how status affects the survival rate.
# For example, look at how the number of people travelling together relates to survival.
# Build a new feature: the size of the family on board
titanic["FamilySize"] = titanic["SibSp"] + titanic["Parch"]

# Going a bit more mystical: does the length of the name relate to survival?
titanic["NameSize"] = titanic["Name"].apply(lambda x: len(x))
titanic.head()
Next we extract the title (status) information hidden in the names.
import re

# Looking at the names, we see title prefixes such as Mr, Miss, Dr, etc.
# Let's see how these features affect the result.
def get_title(name):
    title_search = re.search(r'([A-Za-z]+)\.', name)
    if title_search:
        return title_search.group(1)
    return ""

titles = titanic["Name"].apply(get_title)
# print(titles)
print(pd.value_counts(titles))

title_map = {"Mr": 1, "Miss": 2, "Mrs": 3, "Master": 4, "Dr": 5, "Rev": 6,
             "Mlle": 7, "Col": 8, "Major": 9, "Countess": 10, "Lady": 11,
             "Sir": 12, "Capt": 13, "Mme": 14, "Jonkheer": 15, "Ms": 16, "Don": 17}
titles = titles.map(title_map)
print(titles)
titanic["Title"] = titles
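A quick check (small sketch) that title_map covered every name in the training data; any title missing from the map would become NaN in the Title column and break the later model fits:

# Titles that did not match any key in title_map show up as NaN; the count should be 0
print(titanic["Title"].isnull().sum())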
Above we constructed three new features. Next, let's look at how to measure whether a feature has a decisive influence on the outcome.
How to measure the importance of a feature
from sklearn.feature_selection import SelectKBest, f_classif
import matplotlib.pyplot as plt

predictors = ["Pclass", "Sex", "Age", "SibSp", "Parch", "Fare", "Embarked",
              "FamilySize", "NameSize", "Title"]

# Univariate feature selection: F-test between each feature and Survived
selector = SelectKBest(f_classif, k=5)
selector.fit(titanic[predictors], titanic["Survived"])

# Turn the p-values into scores: the smaller the p-value, the larger (more important) the score
scores = -np.log10(selector.pvalues_)
plt.bar(range(len(predictors)), scores)
plt.xticks(range(len(predictors)), predictors, rotation="vertical")
plt.show()
The code above computes an importance score for each of the 10 features and shows them in a bar chart (the larger the value, the more important the feature). The figure is shown below:
We can see that five features stand out, with Sex by far the most important (perhaps this is the famous "women and children first" at work).
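Rather than reading the bar chart by eye, we can also ask the selector directly which k = 5 features it kept; a small sketch using the selector fitted above:

# Boolean mask over predictors: True where the feature is among the k best
mask = selector.get_support()
selected = [name for name, keep in zip(predictors, mask) if keep]
print(selected)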
Now that we have picked out several important features, let's run the prediction again using just these selected features.
predictors = ["Pclass", "Sex", "Age", "Fare", "Embarked", "NameSize", "Title"]
alg = RandomForestClassifier(random_state=1, n_estimators=100,
                             min_samples_split=6, min_samples_leaf=2)
kf = cross_validation.KFold(titanic.shape[0], n_folds=3, random_state=1)
scores = cross_validation.cross_val_score(alg, titanic[predictors], titanic["Survived"], cv=kf)
print(scores.mean())
# Well, there is no obvious improvement... and when I used only the top 4 selected
# features, the accuracy actually dropped a few points, to about 79.9%.
The final accuracy:
0.830527497194164
As mentioned in the comment, using only the 4 selected features actually lowered the accuracy, so keeping a certain number of features apparently still matters.
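To test that hunch more systematically, one could cross-validate the random forest on the top-k features for several values of k; a sketch (re-ranking the 10 features by their SelectKBest F-scores, with the same forest parameters and the old cross_validation API used above):

all_feats = ["Pclass", "Sex", "Age", "SibSp", "Parch", "Fare", "Embarked",
             "FamilySize", "NameSize", "Title"]
rf = RandomForestClassifier(random_state=1, n_estimators=100,
                            min_samples_split=6, min_samples_leaf=2)

# Rank all features once by their univariate F-test score
ranker = SelectKBest(f_classif, k="all").fit(titanic[all_feats], titanic["Survived"])
order = np.argsort(ranker.scores_)[::-1]          # feature indices, most important first

for k in (4, 5, 7, 10):
    top_k = [all_feats[i] for i in order[:k]]
    kf = cross_validation.KFold(titanic.shape[0], n_folds=3, random_state=1)
    cv = cross_validation.cross_val_score(rf, titanic[top_k], titanic["Survived"], cv=kf)
    print(k, round(cv.mean(), 4), top_k)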
Combined Boosting prediction
This simply combines the two previous approaches. Because the tree-based model performs better, it gets a heavier weight in the final "voting" stage, i.e. (random forest * 3 + linear regression) / 4; in the code below the two components are actually a GradientBoostingClassifier and a LogisticRegression. The code is as follows:
# We previously made predictions with linear regression and random forest separately;
# now we combine the two models
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression

algorithms = [
    # [GradientBoostingClassifier(random_state=1, n_estimators=25, max_depth=3),
    #  ["Pclass", "Sex", "Age", "Fare", "Embarked", "NameSize", "Title"]],
    [LogisticRegression(random_state=1),
     ["Pclass", "Sex", "Age", "Fare", "Embarked", "NameSize", "Title"]]
]

kf = KFold(titanic.shape[0], n_folds=3, random_state=1)
predictions = []

# for train, test in kf:
#     train_y = titanic["Survived"].iloc[train]
#     full_test_predictions = []
#     for alg, predictors in algorithms:
#         alg.fit(titanic[predictors].iloc[train, :], train_y)
#         test_predictions = alg.predict_proba(titanic[predictors].iloc[test, :].astype(float))[:, 1]
#         full_test_predictions.append(test_predictions)
#     test_predictions = (full_test_predictions[0] + full_test_predictions[1]) / 2
#     test_predictions[test_predictions >= 0.5] = 1
#     test_predictions[test_predictions < 0.5] = 0
#     predictions.append(test_predictions)
# predictions = np.concatenate(predictions, axis=0)

full_predictions = []
for alg, predictors in algorithms:
    alg.fit(titanic[predictors], titanic["Survived"])
    predictions = alg.predict_proba(titanic[predictors])[:, 1]
    full_predictions.append(predictions)

# predictions = (full_predictions[0] * 3 + full_predictions[1]) / 4
# print(predictions)

predictions[predictions >= 0.5] = 1
predictions[predictions < 0.5] = 0
# Note: as with the linear regression earlier, this formula only counts the correctly
# predicted 1s, so the printed "accuracy" understates the true accuracy
accuracy = sum(predictions[predictions == titanic["Survived"]]) / len(predictions)
print(accuracy)
print(predictions)
# print(len(predictions))
# print(predictions)
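For comparison, here is a minimal sketch of what the un-commented weighted ensemble could look like, evaluated out-of-fold; it uses the newer sklearn.model_selection KFold (whose signature differs from the old cross_validation one above), and the 3:1 weighting follows the rule described earlier:

from sklearn.model_selection import KFold
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression

features = ["Pclass", "Sex", "Age", "Fare", "Embarked", "NameSize", "Title"]
models = [
    GradientBoostingClassifier(random_state=1, n_estimators=25, max_depth=3),
    LogisticRegression(random_state=1),
]

kf = KFold(n_splits=3, shuffle=True, random_state=1)
all_preds, all_truth = [], []
for train_idx, test_idx in kf.split(titanic):
    fold_probs = []
    for model in models:
        model.fit(titanic[features].iloc[train_idx], titanic["Survived"].iloc[train_idx])
        fold_probs.append(model.predict_proba(titanic[features].iloc[test_idx])[:, 1])
    # Weighted vote: the boosting model gets three times the weight of logistic regression
    combined = (fold_probs[0] * 3 + fold_probs[1]) / 4
    all_preds.append((combined >= 0.5).astype(int))
    all_truth.append(titanic["Survived"].iloc[test_idx].values)

all_preds = np.concatenate(all_preds)
all_truth = np.concatenate(all_truth)
print((all_preds == all_truth).mean())   # out-of-fold accuracy of the weighted ensemble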