This example shows that using shrinkage in discriminant analysis can improve classification accuracy. Here, "shrinkage" means regularizing the estimate of the covariance matrix, which helps when the number of features is large relative to the number of training samples. The dataset used here is simulated; you can also verify the effect of shrinkage discriminant analysis on real datasets.
Walkthrough
First, import the required libraries.
from __future__ import division
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_blobs
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
Setup
n_train = 20 # samples for training
n_test = 200 # samples for testing
n_averages = 50 # how often to repeat classification
n_features_max = 75 # maximum number of features
step = 4 # step size for the calculation
The generate_data() function
The generate_data() function generates the simulated dataset. It takes two parameters, n_samples and n_features, specifying the number of samples and the number of features. It returns an array of shape (n_samples, n_features) and an array of target class labels.
def generate_data(n_samples, n_features):
    """Generate random blob-ish data with noisy features.

    This returns an array of input data with shape `(n_samples, n_features)`
    and an array of `n_samples` target labels.

    Only one feature contains discriminative information, the other features
    contain only noise.
    """
    X, y = make_blobs(n_samples=n_samples, n_features=1, centers=[[-2], [2]])

    # add non-discriminative features
    if n_features > 1:
        X = np.hstack([X, np.random.randn(n_samples, n_features - 1)])
    return X, y
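As a quick sanity check, the body of generate_data() can be run inline to confirm the shapes it produces (a minimal sketch; the sample and feature counts here are arbitrary):

```python
import numpy as np
from sklearn.datasets import make_blobs

# Same construction as generate_data(20, 5): one discriminative
# feature from make_blobs plus four pure-noise features.
n_samples, n_features = 20, 5
X, y = make_blobs(n_samples=n_samples, n_features=1, centers=[[-2], [2]])
X = np.hstack([X, np.random.randn(n_samples, n_features - 1)])
print(X.shape)        # (20, 5)
print(np.unique(y))   # [0 1]
```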
Ordinary discriminant analysis vs. shrinkage discriminant analysis
scikit-learn's LinearDiscriminantAnalysis is a classifier with a linear decision boundary, obtained by fitting class-conditional densities to the data and applying Bayes' rule. The model fits a Gaussian density to each class, assuming all classes share the same covariance matrix. Its main parameter, solver, is a string specifying which solver to use. Here it is set to 'lsqr', the least-squares solution, which can be combined with shrinkage. The shrinkage parameter defaults to None, meaning no shrinkage; here it is set to 'auto', which applies automatic shrinkage using the Ledoit-Wolf lemma to estimate the shrunk covariance matrix.
acc_clf1, acc_clf2 = [], []
n_features_range = range(1, n_features_max + 1, step)
for n_features in n_features_range:
    score_clf1, score_clf2 = 0, 0
    for _ in range(n_averages):
        X, y = generate_data(n_train, n_features)

        clf1 = LinearDiscriminantAnalysis(solver='lsqr', shrinkage='auto').fit(X, y)
        clf2 = LinearDiscriminantAnalysis(solver='lsqr', shrinkage=None).fit(X, y)

        X, y = generate_data(n_test, n_features)
        score_clf1 += clf1.score(X, y)
        score_clf2 += clf2.score(X, y)

    acc_clf1.append(score_clf1 / n_averages)
    acc_clf2.append(score_clf2 / n_averages)
features_samples_ratio = np.array(n_features_range) / n_train
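To see what shrinkage='auto' does under the hood, the Ledoit-Wolf estimator can be run in isolation via sklearn.covariance.LedoitWolf, whose fitted shrinkage coefficient is exposed as the shrinkage_ attribute (a minimal sketch; the sample and feature counts mirror the extremes of the experiment above):

```python
import numpy as np
from sklearn.covariance import LedoitWolf

rng = np.random.RandomState(0)
# 20 samples, 75 features: the empirical covariance is singular,
# so Ledoit-Wolf shrinks it toward a scaled identity matrix.
X = rng.randn(20, 75)
lw = LedoitWolf().fit(X)
print(lw.covariance_.shape)         # (75, 75)
print(0.0 <= lw.shrinkage_ <= 1.0)  # True
```

The estimated coefficient interpolates between the raw sample covariance (0) and the scaled identity target (1); with far more features than samples it is pushed well above zero.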
Visualizing the classification results
plt.plot(features_samples_ratio, acc_clf1, linewidth=2,
         label="Linear Discriminant Analysis with shrinkage", color='navy')
plt.plot(features_samples_ratio, acc_clf2, linewidth=2,
         label="Linear Discriminant Analysis", color='gold')
plt.xlabel('n_features / n_samples')
plt.ylabel('Classification accuracy')
plt.legend(loc=1, prop={'size': 12})
plt.suptitle('Linear Discriminant Analysis vs. \
shrinkage Linear Discriminant Analysis (1 discriminative feature)')
plt.show()
---------------------
[Repost]
Author: Goodsta
Original: https://blog.csdn.net/wong2016/article/details/83897057