目录
- KNN
- 随机森林
- XGBoost
- K-Means
- 后记
- reference
KNN
简介:K近邻(K-Nearest Neighbors, KNN)算法是一种非常简单且直观的监督学习算法,既可以用于分类 (Classification) 问题,也可以用于回归 (Regression) 问题。
核心思想:对于一个待预测的新样本,通过计算它与训练数据中所有样本的距离,找到距离最近的 K 个邻居,然后根据这 K 个邻居的类别进行投票表决,决定新样本的类别。
K 值的选择:
- K 是算法的关键超参数,表示选取最近邻居的数量。
- K 值较小:模型对噪声敏感,容易受到异常值的影响;决策边界非常不规则,容易导致过拟合 (Overfitting)。
- K 值较大:模型趋于平滑,决策边界变得过于简单,容易导致欠拟合 (Underfitting)。
- 通常通过交叉验证(如网格搜索)确定最优 K 值。
投票决定类别:
- 分类问题:在这 K 个最近邻中统计每个类别的出现次数,将新样本归为出现次数最多的那个类别。例如 K=5 时,最近的 5 个邻居中有 3 个属于类别 A、2 个属于类别 B,新样本就会被归为类别 A。
- 回归问题:计算这 K 个最近邻标签值的平均值(或加权平均值),作为新样本的预测值。

为了直观理解上述流程,下面先给出一个最小的示意实现,再用 scikit-learn 完成完整流程。
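这是一个基于 NumPy 的最小示意(假设特征均为数值型,使用欧氏距离和简单多数投票;knn_predict 函数与玩具数据均为演示用的假设,不是生产实现):
import numpy as np
from collections import Counter
def knn_predict(X_train, y_train, x_new, k=3):
    # 计算新样本到所有训练样本的欧氏距离
    dists = np.linalg.norm(X_train - x_new, axis=1)
    # 取距离最近的 k 个邻居的标签
    nearest_labels = y_train[np.argsort(dists)[:k]]
    # 多数投票:出现次数最多的类别作为预测结果
    return Counter(nearest_labels).most_common(1)[0][0]
# 玩具数据:二维特征、两个类别
X_toy = np.array([[1.0, 1.1], [1.2, 0.9], [3.0, 3.2], [3.1, 2.9]])
y_toy = np.array([0, 0, 1, 1])
print(knn_predict(X_toy, y_toy, np.array([1.1, 1.0]), k=3))  # 预期输出 0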
代码实现:
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score, classification_report, confusion_matrix
import joblib  # 用于后面模型的保存与加载
# 指定列名
feature_names = [
"sepal length (cm)",
"sepal width (cm)",
"petal length (cm)",
"petal width (cm)",
]
# 加载数据
data = pd.read_csv('iris.data', header=None, names=feature_names + ['class'])
#可视化
sns.pairplot(data, hue='class', markers=["o", "s", "^"])
plt.show()
# 检查是否有缺失值
print(data.isnull().sum())
# 标准化特征值
scaler = StandardScaler()
data[feature_names] = scaler.fit_transform(data[feature_names])
# 查看标准化后的数据
print(data.head()) # 打印标准化后的数据集的前五行,查看标准化效果
# 分割数据集
X = data[feature_names]
y = data['class']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,random_state=42) # 将数据集按 70% 训练集和 30% 测试集进行分割,设置随机种子为 42 以确保可重复性
# 查看分割后的数据集大小
print(f"训练集大小: {X_train.shape[0]}, 测试集大小: {X_test.shape[0]}") # 打印训练集和测试集的大小
# 初始化 KNN 分类器
knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(X_train, y_train)
y_pred = knn.predict(X_test) # 使用训练好的模型对测试数据进行预测,得到预测标签
# 评估模型
print(f"Accuracy: {accuracy_score(y_test, y_pred)}") # 计算并打印模型在测试集上的准确率
print('-')
print("Classification Report:n", classification_report(y_test, y_pred)) # 打印分类报告,包括精确率、召回率、F1 分数等
print('-')
print("Confusion Matrix:n", confusion_matrix(y_test, y_pred)) # 打印混淆矩阵,显示真实标签和预测标签之间的关系
print('-')
网格搜索优化 KNN 模型的参数
# 定义参数范围
param_grid = {'n_neighbors': range(1, 20)} # 创建一个参数字典,设置 'n_neighbors' 参数的取值范围为 1 到 19
# 网格搜索
grid_search = GridSearchCV(KNeighborsClassifier(), param_grid,cv=10) # 创建一个 GridSearchCV 对象,使用 KNeighborsClassifier 和定义的参数范围
grid_search.fit(X_train, y_train) # 使用训练数据进行网格搜索,以找到最佳参数组合
# 最优参数
print(f"Best parameters: {grid_search.best_params_}") # 打印通过网格搜索找到的最优参数
print('-')
# 使用最优参数训练模型
knn_best = grid_search.best_estimator_ # 获取使用最优参数训练的最佳模型
y_pred_best = knn_best.predict(X_test) # 使用最佳模型对测试数据进行预测
# 评估模型
print(f"Accuracy(best): {accuracy_score(y_test, y_pred_best)}") # (优化后)计算并打印模型在测试集上的准确率
print('-')
print("Classification Report(best):n", classification_report(y_test, y_pred_best)) # (优化后)打印分类报告,包括精确率、召回率、F1 分数等
print('-')
print("Confusion Matrix(best):n", confusion_matrix(y_test, y_pred_best)) # (优化后)打印混淆矩阵,显示真实标签和预测标签之间的关系
print('-')
保存/加载
model_filename = 'knn_iris_model.pkl'
scaler_filename = 'scaler_iris.pkl'
joblib.dump(knn_best, model_filename)
joblib.dump(scaler, scaler_filename) # 也保存标准化器,因为预测新数据时需要用到
print(f"模型已保存到 {model_filename}")
print(f"标准化器已保存到 {scaler_filename}")
new_samples = [[5.1, 3.5, 1.4, 0.2], [6.7, 3.0, 5.2, 2.3]]
new_samples_df = pd.DataFrame(new_samples, columns=feature_names)
print("--- 重新加载模型进行预测 ---")
loaded_knn_model = joblib.load(model_filename)
loaded_scaler = joblib.load(scaler_filename)
# 使用加载的标准化器标准化新样本
new_samples_scaled = loaded_scaler.transform(new_samples_df)
# 使用加载的模型预测新样本的类别
predictions_loaded_model = loaded_knn_model.predict(new_samples_scaled)
print(f"重新加载模型后新样本的预测类别: {predictions_loaded_model}")
随机森林
简介:随机森林是由多棵决策树组成的“森林”,是一种集成学习算法。
核心思想:决策树通过“选择最能区分数据的特征”一步步把样本分开,随机森林通过引入随机性,生成多棵决策树,并让这些树之间尽可能地不相关,然后综合所有树的预测结果
随机森林通过两种主要的随机性来保证每棵决策树的多样性:
- 样本随机性 (Bootstrap Aggregating / Bagging):
  - 从原始数据集中有放回地随机抽取(Bootstrap Sampling)N 个样本,作为训练每棵决策树的数据集。
  - 这意味着有些样本可能被抽取多次,有些样本可能一次也没被抽到。这些未被抽到的样本称为袋外样本(Out-Of-Bag, OOB),可以用于评估模型的泛化能力,而无需独立的验证集。
  - 通过这种方式,每棵树的训练数据集都不相同,从而使每棵树看到的“数据”有所差异。
- 特征随机性 (Feature Randomness):
  - 在每棵决策树的每个节点进行分裂时,不是从所有特征中选择最佳特征,而是从随机选出的特征子集中选择最佳特征。
  - 假设原始数据集有 M 个特征,每次分裂时只随机选择 m 个特征(通常 m ≪ M),然后在这 m 个特征中选择信息增益最大或基尼不纯度最小的那个特征进行分裂。
  - 作用:这种随机性进一步增加了树之间的差异,避免某些“很强”的特征在所有树的顶端都占据主导地位,从而降低树之间的相关性,使集成模型的性能更好。

这两类随机性在代码中如何体现,见下方示意。
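在 scikit-learn 中,样本随机性对应 bootstrap / oob_score 参数,特征随机性对应 max_features 参数。下面是一个最小示意(make_classification 构造的玩具数据和各参数取值均为演示假设,实际取值应结合具体数据调参):
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
# 构造一个玩具二分类数据集,仅作演示
X_demo, y_demo = make_classification(n_samples=500, n_features=10, random_state=42)
rf = RandomForestClassifier(
    n_estimators=200,     # 森林中树的数量
    bootstrap=True,       # 样本随机性:对每棵树做有放回抽样 (Bagging)
    oob_score=True,       # 用袋外样本 (OOB) 估计泛化能力,无需独立验证集
    max_features='sqrt',  # 特征随机性:每次分裂只考虑约 sqrt(M) 个特征
    random_state=42,
)
rf.fit(X_demo, y_demo)
print(f"OOB score: {rf.oob_score_:.3f}")  # 袋外样本上的准确率估计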
代码实现:
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
import seaborn as sns
# 导入数据
df = pd.read_csv('data/train.csv')
# print(df.shape)
# pd.set_option('display.max_columns', None) # 显示所有列
# pd.set_option('display.max_rows', None) # 显示所有行
# pd.set_option('display.width', 1000) # 设置打印宽度
# print(df.describe())
# print(df.isnull().sum())
# print(df['Dependents'].value_counts(normalize=True)) # 单列数据分布占比
# print(pd.crosstab(df['Property_Area'], df['Loan_Status'], normalize=True))
df = df.drop('Loan_ID', axis=1)
df.dropna(inplace=True)
df['Dependents'] = df['Dependents'].replace('3+', 3).astype(int)
categorical_cols = ['Gender', 'Married', 'Education', 'Self_Employed', 'Property_Area', 'Loan_Status']
df_encoded = pd.get_dummies(df, columns=categorical_cols, drop_first=True)
correlation_matrix = df_encoded.corr()
plt.figure(figsize=(28, 18)) # Adjust figure size as needed
sns.heatmap(correlation_matrix, annot=True, cmap='coolwarm', fmt=".2f", linewidths=.5)
plt.title('Correlation Heatmap of Loan Data')
plt.show()
从热力图可以看出,相关性最强的变量对是 ApplicantIncome 与 LoanAmount,以及 Credit_History 与 Loan_Status。
逻辑回归
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, classification_report, confusion_matrix
# 导入数据
df = pd.read_csv('data/train.csv')
# 数据预处理
df = df.drop('Loan_ID', axis=1)
df['Dependents'] = df['Dependents'].replace('3+',3).astype(float)
df['Gender'] = df['Gender'].fillna(df['Gender'].value_counts().idxmax())
df['Married'] = df['Married'].fillna(df['Married'].value_counts().idxmax())
df['Dependents'] = df['Dependents'].fillna(df['Dependents'].value_counts().idxmax())
df['Self_Employed'] = df['Self_Employed'].fillna(df['Self_Employed'].value_counts().idxmax())
df['LoanAmount'] = df['LoanAmount'].fillna(df['LoanAmount'].mean(skipna=True))
df['Loan_Amount_Term'] = df['Loan_Amount_Term'].fillna(df['Loan_Amount_Term'].value_counts().idxmax())
df['Credit_History'] = df['Credit_History'].fillna(df['Credit_History'].value_counts().idxmax())
# 对分类变量进行独热编码
categorical_cols = ['Gender', 'Married', 'Education', 'Self_Employed', 'Property_Area', 'Loan_Status']
df_encoded = pd.get_dummies(df, columns=categorical_cols, drop_first=True)
# 分离特征和目标变量
X = df_encoded.drop('Loan_Status_Y', axis=1)
y = df_encoded['Loan_Status_Y']
# 划分训练集和测试集
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
# 初始化并训练逻辑回归模型
model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)
# 在测试集上预测
y_pred = model.predict(X_test)
# 评估模型
print(f"Accuracy(best): {accuracy_score(y_test, y_pred)}") # (优化后)计算并打印模型在测试集上的准确率
print('-')
print("Classification Report(best):n", classification_report(y_test, y_pred)) # (优化后)打印分类报告,包括精确率、召回率、F1 分数等
print('-')
print("Confusion Matrix(best):n", confusion_matrix(y_test, y_pred)) # (优化后)打印混淆矩阵,显示真实标签和预测标签之间的关系
print('-')
随机森林
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split, GridSearchCV, StratifiedKFold, ParameterGrid
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, classification_report, confusion_matrix
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import Pipeline
import matplotlib.pyplot as plt
import seaborn as sns
# 导入数据
df = pd.read_csv('data/train.csv')
# 数据预处理增强
df = df.drop('Loan_ID', axis=1)
# 处理特殊值
df['Dependents'] = df['Dependents'].replace('3+', 3).astype(float)
# 更智能地填充缺失值
df['Gender'] = df['Gender'].fillna(df['Gender'].value_counts().idxmax())
df['Married'] = df['Married'].fillna(df['Married'].value_counts().idxmax())
df['Dependents'] = df['Dependents'].fillna(df.groupby('Married')['Dependents'].transform('median'))
df['Self_Employed'] = df['Self_Employed'].fillna(df['Self_Employed'].value_counts().idxmax())
df['Loan_Amount_Term'] = df['Loan_Amount_Term'].fillna(df['Loan_Amount_Term'].value_counts().idxmax())
df['Credit_History'] = df['Credit_History'].fillna(df['Credit_History'].value_counts().idxmax())
# 处理贷款金额异常值
df['LoanAmount'] = df['LoanAmount'].fillna(df['LoanAmount'].mean(skipna=True))
df['LoanAmount'] = np.log(df['LoanAmount']) # 对数转换,减少偏态
# 创建新特征
df['Total_Income'] = df['ApplicantIncome'] + df['CoapplicantIncome']
df['Total_Income'] = np.log(df['Total_Income']) # 对数转换
df['Loan_Income_Ratio'] = df['LoanAmount'] / df['Total_Income']
# 对分类变量进行独热编码
categorical_cols = ['Gender', 'Married', 'Education', 'Self_Employed', 'Property_Area', 'Loan_Status']
df_encoded = pd.get_dummies(df, columns=categorical_cols, drop_first=True)
# 分离特征和目标变量
X = df_encoded.drop('Loan_Status_Y', axis=1)
y = df_encoded['Loan_Status_Y']
# 划分训练集和测试集
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42, stratify=y)
# 创建包含预处理和模型的管道
pipeline = Pipeline([
('scaler', StandardScaler()),
('model', RandomForestClassifier(random_state=42))
])
# 定义要搜索的参数网格
# Adjusted parameters based on correlation insights and general RF best practices
param_grid = {
'model__n_estimators': [30,50,70],
'model__max_depth': [6,7,8],
'model__min_samples_split': [4,6,8],
'model__min_samples_leaf': [1,2,3],
'model__max_features': ['sqrt', 'log2']
}
# 使用StratifiedKFold进行交叉验证
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
# 执行网格搜索
grid_search = GridSearchCV(
pipeline,
param_grid,
cv=cv,
scoring='accuracy',
n_jobs=-1,
verbose=1
)
# 训练模型
grid_search.fit(X_train, y_train)
# 获取最佳模型
best_model = grid_search.best_estimator_
# 在测试集上预测
y_pred = best_model.predict(X_test)
# 评估模型
print(f"Accuracy: {accuracy_score(y_test, y_pred):.4f}")
print('-')
print("Classification Report:n", classification_report(y_test, y_pred))
print('-')
print("Confusion Matrix:n", confusion_matrix(y_test, y_pred))
print('-')
print("Best Parameters:", grid_search.best_params_)
# 特征重要性分析
if hasattr(best_model.named_steps['model'], 'feature_importances_'):
    feature_importances = best_model.named_steps['model'].feature_importances_
    features = X.columns
    plt.figure(figsize=(12, 8))
    sns.barplot(x=feature_importances, y=features)
    plt.title('Feature Importance')
    plt.tight_layout()
    plt.savefig('feature_importance.png')
    plt.close()
XGBoost
简介:XGBoost(eXtreme Gradient Boosting)是对梯度提升(Gradient Boosting)框架的高效优化实现。它通过不断地训练新的决策树来纠正前一轮模型的错误,从而提升整体预测性能。
核心思想:XGBoost是基于梯度提升框架的。梯度提升是一种集成学习方法,通过迭代地训练一系列弱学习器(通常是决策树),并将它们的结果加权组合起来,以构建一个强大的预测模型。在每次迭代中,模型会尝试纠正前一个模型的错误,通过在残差(或负梯度)上训练新的树来实现。
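下面用一段示意代码演示平方损失下梯度提升的基本流程:每一轮用一棵浅层回归树去拟合当前残差(即负梯度),再按学习率累加到整体预测中。这只是梯度提升思想的最小示意(玩具数据为演示假设),XGBoost 在此基础上还利用了二阶梯度信息、正则化和更高效的分裂算法:
import numpy as np
from sklearn.tree import DecisionTreeRegressor
# 玩具回归数据,仅作演示
rng = np.random.RandomState(42)
X_demo = rng.uniform(-3, 3, size=(200, 1))
y_demo = np.sin(X_demo).ravel() + rng.normal(0, 0.1, size=200)
n_rounds, learning_rate = 50, 0.1
pred = np.full_like(y_demo, y_demo.mean())  # 初始预测为目标均值
for _ in range(n_rounds):
    residual = y_demo - pred                      # 平方损失下,残差就是负梯度
    tree = DecisionTreeRegressor(max_depth=2).fit(X_demo, residual)
    pred += learning_rate * tree.predict(X_demo)  # 按学习率累加本轮树的修正
print(f"训练集 MSE: {np.mean((y_demo - pred) ** 2):.4f}")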
代码实现:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
data = pd.read_csv('data/BostonHousing.csv')
# 分离特征和目标变量
X = data.drop('medv', axis=1)
y = data['medv']
correlation_matrix = data.corr()
plt.figure(figsize=(28, 18)) # Adjust figure size as needed
sns.heatmap(correlation_matrix, annot=True, cmap='coolwarm', fmt=".2f", linewidths=.5)
plt.title('Correlation Heatmap of Boston Housing Data')
plt.show()
#文字版
data = pd.read_csv('data/BostonHousing.csv')
# 设置 Pandas 显示选项,确保完整输出相关系数矩阵
pd.set_option('display.max_rows', None) # 显示所有行
pd.set_option('display.max_columns', None) # 显示所有列
pd.set_option('display.width', 1000) # 设置打印宽度
pd.set_option('display.float_format', lambda x: '%.2f' % x) # 保留两位小数
correlation_matrix = data.corr()
print(correlation_matrix)
XGBoost
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split, RandomizedSearchCV, KFold
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import mean_squared_error, r2_score
import xgboost as xgb
# Load data
data = pd.read_csv('data/BostonHousing.csv')
# Features and target
X = data.drop('medv', axis=1) # Use all features except target
y = data['medv']
# Split data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
# Scale features
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)
X_test_scaled = scaler.transform(X_test)
# Initialize XGBoost
xgb_model = xgb.XGBRegressor(random_state=42)
# Define hyperparameter grid
param_dist = {
'n_estimators': [100, 200, 300],
'max_depth': [3, 5, 7],
'learning_rate': [0.01, 0.05, 0.1],
'subsample': [0.7, 0.8, 1.0],
'colsample_bytree': [0.7, 0.8, 1.0]
}
cv = KFold(n_splits=5, shuffle=True, random_state=42)
# Perform random search
random_search = RandomizedSearchCV(
xgb_model, param_distributions=param_dist, n_iter=20, cv=cv,
scoring='neg_mean_squared_error', random_state=42, n_jobs=-1
)
random_search.fit(X_train_scaled, y_train)
# Best model
best_model = random_search.best_estimator_
# Predict and evaluate
y_pred = best_model.predict(X_test_scaled)
rmse = np.sqrt(mean_squared_error(y_test, y_pred))
r2 = r2_score(y_test, y_pred)
print(f'Best Parameters: {random_search.best_params_}')
print(f'RMSE: {rmse:.2f}')
print(f'R²: {r2:.2f}')
# Feature importance
feature_importance = pd.DataFrame({
'Feature': X.columns,
'Importance': best_model.feature_importances_
}).sort_values(by='Importance', ascending=False)
print('\nFeature Importance:')
print(feature_importance)
保存/加载
import pandas as pd
import numpy as np
import joblib # 用于模型保存与加载
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import mean_squared_error, r2_score
import xgboost as xgb
# 加载数据
data = pd.read_csv('data/BostonHousing.csv')
# 特征与目标
X = data.drop('medv', axis=1)
y = data['medv']
# 划分训练集和测试集
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
# 特征缩放
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)
X_test_scaled = scaler.transform(X_test)
# 使用最佳超参数初始化模型
best_model = xgb.XGBRegressor(
subsample=0.8,
n_estimators=300,
max_depth=5,
learning_rate=0.1,
colsample_bytree=1.0,
random_state=42
)
# 拟合模型
best_model.fit(X_train_scaled, y_train)
# 预测与评估
y_pred = best_model.predict(X_test_scaled)
rmse = np.sqrt(mean_squared_error(y_test, y_pred))
r2 = r2_score(y_test, y_pred)
print(f'RMSE: {rmse:.2f}')
print(f'R²: {r2:.2f}')
# 特征重要性
feature_importance = pd.DataFrame({
'Feature': X.columns,
'Importance': best_model.feature_importances_
}).sort_values(by='Importance', ascending=False)
print('\nFeature Importance:')
print(feature_importance)
# 保存模型与缩放器
joblib.dump(best_model, 'best_xgb_model.pkl')
joblib.dump(scaler, 'scaler.pkl')
# 模拟加载模型并进行预测(这里直接复用原始数据集 X 作演示,其中包含训练样本,评估结果会偏乐观)
scaler_loaded = joblib.load('scaler.pkl')
X_new_scaled = scaler_loaded.transform(X)
loaded_model = joblib.load('best_xgb_model.pkl')
# 对新数据进行预测
y_new_pred = loaded_model.predict(X_new_scaled)
print('\nPredictions on new test data:')
print(y_new_pred)
print(y)
rmse = np.sqrt(mean_squared_error(y, y_new_pred))
r2 = r2_score(y, y_new_pred)
print(f'RMSE: {rmse:.2f}')
print(f'R²: {r2:.2f}')
K-Means
简介:K-Means 是一种无监督学习算法,常用于聚类分析(Clustering),目的是将数据划分为若干个“簇”(Clusters),使得同一簇内的样本相似度高,不同簇之间的样本相似度低。
核心思想:通过迭代地更新簇的中心点(质心),最小化所有样本到其所属簇质心的欧几里得距离平方和 SSE = Σ_k Σ_{x∈C_k} ||x − μ_k||²。
设数据集为 X = {x1, x2, ..., xn},要聚成 K 个簇:
- 随机选择 K 个点作为初始质心(cluster centers);
- 将每个样本分配给离它最近的质心;
- 对每个簇,重新计算其质心(即簇中所有点的均值);
- 重复“分配样本”和“更新质心”两步,直到质心不再变化或达到最大迭代次数。
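下面给出上述迭代过程的一个最小 NumPy 示意实现(kmeans 函数与玩具数据均为演示假设,未处理空簇等边界情况,不是生产实现):
import numpy as np
def kmeans(X, k=3, n_iter=100, seed=42):
    rng = np.random.RandomState(seed)
    centers = X[rng.choice(len(X), k, replace=False)]  # 随机选 k 个样本作为初始质心
    for _ in range(n_iter):
        # 分配:每个样本划给距离最近的质心
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # 更新:用每个簇内样本的均值作为新质心
        new_centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        if np.allclose(new_centers, centers):  # 质心不再变化即收敛
            break
        centers = new_centers
    return labels, centers
# 玩具数据:两个明显分开的点团
rng2 = np.random.RandomState(0)
X_demo = np.vstack([rng2.randn(50, 2), rng2.randn(50, 2) + 5])
labels, centers = kmeans(X_demo, k=2)
print(centers)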
实现代码:
import pandas as pd
from matplotlib import pyplot as plt
import seaborn as sns
# 读取数据
full_data = pd.read_csv('data/Mall_Customers.csv')
# print(full_data['Gender'].value_counts(normalize=True)) #单列数据分布占比
# print(full_data.shape)
# pd.set_option('display.max_columns', None) # 显示所有列
# pd.set_option('display.max_rows', None) # 显示所有行
# pd.set_option('display.width', 1000) # 设置打印宽度
# print(full_data.describe())
fig, axes = plt.subplots(1,3, figsize=(18, 10))
sns.histplot(full_data['Age'], color='green', ax=axes[0], kde=True)
sns.histplot(full_data['Annual Income (k$)'], color='red', ax=axes[1], kde=True)
sns.histplot(full_data['Spending Score (1-100)'], color='blue', ax=axes[2], kde=True)
plt.tight_layout()
plt.show()
K-Means
import os
os.environ["OMP_NUM_THREADS"] = "1"
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.cluster import KMeans
import joblib # 用于保存模型
# 设置中文显示
plt.rcParams['font.sans-serif'] = ['Microsoft YaHei']
plt.rcParams['axes.unicode_minus'] = False
# 读取数据
df = pd.read_csv('data/Mall_Customers.csv')
# 选择特征
features = ['Annual Income (k$)', 'Spending Score (1-100)']
X = df[features]
# 用肘部法找最佳聚类数(可选)
def plot_elbow(X):
    wcss = []
    for i in range(1, 11):
        kmeans = KMeans(n_clusters=i, random_state=42)
        kmeans.fit(X)
        wcss.append(kmeans.inertia_)
    plt.figure(figsize=(8, 5))
    plt.plot(range(1, 11), wcss, marker='o')
    plt.title('肘部法确定最佳聚类数')
    plt.xlabel('聚类数 K')
    plt.ylabel('组内误差平方和 (WCSS)')
    plt.grid(True)
    plt.show()
plot_elbow(X) # 只用一次即可
# 聚类函数 + 模型保存
def train_and_save_model(X, k=5, model_path='kmeans_model.pkl'):
    kmeans = KMeans(n_clusters=k, random_state=42)
    labels = kmeans.fit_predict(X)
    joblib.dump(kmeans, model_path)
    return labels, kmeans
# 训练模型并保存
labels, kmeans_model = train_and_save_model(X, k=5)
df['Cluster'] = labels
# 可视化聚类结果
def plot_clusters(df, model, features):
    plt.figure(figsize=(8, 6))
    sns.scatterplot(
        x=features[0],
        y=features[1],
        hue='Cluster',
        palette='Set1',
        data=df,
        s=100
    )
    centers = model.cluster_centers_
    plt.scatter(centers[:, 0], centers[:, 1], c='black', s=300, marker='X', label='中心')
    plt.title('客户聚类结果')
    plt.xlabel(features[0])
    plt.ylabel(features[1])
    plt.legend()
    plt.show()
plot_clusters(df, kmeans_model, features)
# 输入新数据,进行聚类预测
def predict_new_data(new_data, model_path='kmeans_model.pkl'):
    model = joblib.load(model_path)
    prediction = model.predict(new_data)
    return prediction
# 示例:对部分新客户数据进行分类
new_customers = pd.DataFrame({
'Annual Income (k$)': [15, 85, 60],
'Spending Score (1-100)': [39, 77, 40]
})
predicted_clusters = predict_new_data(new_customers)
new_customers['Predicted Cluster'] = predicted_clusters
# 显示预测结果
print("n新客户分类结果:")
print(new_customers)
后记
归一化/标准化
| | 归一化 (Min-Max Scaling) | 标准化 (Standardization, Z-score) |
|---|---|---|
| 转换范围 | 缩放到固定区间,通常是 [0, 1] | 无固定范围,转换后均值为 0、标准差为 1 |
| 处理异常值 | 对异常值敏感,极端值会把其余数据压缩到很小的区间 | 相对更稳健,但仍会受异常值影响 |
| 分布形状 | 不改变原始分布的形状 | 不改变原始分布的形状,只做平移和缩放 |
| 适用场景 | 需要有界输入的场景,如神经网络、图像像素 | 基于距离或假设近似正态分布的算法,如 KNN、K-Means、SVM |
| 计算方式 | x' = (x - min) / (max - min) | x' = (x - μ) / σ |
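下面是一个最小示意,直观对比两种缩放在含异常值数据上的效果(玩具数据为演示假设):
import numpy as np
from sklearn.preprocessing import MinMaxScaler, StandardScaler
x = np.array([[1.0], [2.0], [3.0], [4.0], [100.0]])  # 含一个异常值的玩具数据
print(MinMaxScaler().fit_transform(x).ravel())    # 归一化到 [0, 1],异常值把其余值压到很小的区间
print(StandardScaler().fit_transform(x).ravel())  # 标准化为均值 0、标准差 1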
指标
- Accuracy:模型预测正确的样本数占总样本数的比例。
  注:当数据集类别分布极度不平衡时(比如 95% 的样本都是一类),准确率可能会误导你,因为只预测最多的那一类就能得到很高的准确率。
- Precision:在所有被预测为正类的样本中,实际真正是正类的比例。
  注:精确率高,说明预测结果很“干净”,很少误报。
- Recall:在所有实际为正类的样本中,被预测正确的比例。
  注:召回率高,说明漏掉的正类很少。
- F1-score:精确率和召回率的调和平均数。
  注:需要在精确率和召回率之间取得平衡时,F1-score 是一个综合指标。
- Support:每个类别在测试集中实际出现的样本数量。
- 混淆矩阵:一个 N × N 方阵,第 i 行第 j 列表示实际是类别 i、被预测成类别 j 的样本数量。
  注:混淆矩阵能直观看出哪一类被预测错得最多、是否存在明显偏差。

下面用一个简单的二分类例子演示这些指标的计算。
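这是一个最小示意,y_true、y_pred 为演示假设的标签:
import numpy as np
from sklearn.metrics import confusion_matrix
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()  # 二分类混淆矩阵的四个格子
accuracy = (tp + tn) / (tp + tn + fp + fn)          # 预测正确的比例
precision = tp / (tp + fp)                          # 预测为正的样本中真正为正的比例
recall = tp / (tp + fn)                             # 实际为正的样本中被找出来的比例
f1 = 2 * precision * recall / (precision + recall)  # 精确率与召回率的调和平均
print(accuracy, precision, recall, f1)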
常见算法对比
| 回归算法 | 分类算法 |
|---|---|
| 线性回归 | 逻辑回归 |
| 决策树回归 | KNN |
| 随机森林回归 | 决策树 / 随机森林 |
| XGBoost 回归 | XGBoost |
| 支持向量回归 (SVR) | 支持向量机 (SVM) |
GridSearchCV VS RandomizedSearchCV
| | GridSearchCV | RandomizedSearchCV |
|---|---|---|
| 搜索方式 | 穷举所有参数组合 | 从参数空间中随机采样部分组合 |
| 参数传入 | param_grid={} | param_distributions={} |
| 试验次数 | 由参数组合总数决定 | 由 n_iter 控制试几组 |
| 计算开销 | 参数组合多时开销很大 | 开销可控,适合较大的参数空间 |
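下面是一个最小示意,在同一个模型上对比两种搜索方式的用法(玩具数据和参数范围均为演示假设):
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV
X_demo, y_demo = make_classification(n_samples=300, random_state=42)
# GridSearchCV:穷举 param_grid 中所有组合,这里是 3 × 3 = 9 组
grid = GridSearchCV(
    RandomForestClassifier(random_state=42),
    param_grid={'n_estimators': [50, 100, 200], 'max_depth': [3, 5, None]},
    cv=5,
).fit(X_demo, y_demo)
# RandomizedSearchCV:从 param_distributions 中随机采样,n_iter 控制试几组
rand = RandomizedSearchCV(
    RandomForestClassifier(random_state=42),
    param_distributions={'n_estimators': range(50, 300), 'max_depth': [3, 5, 7, None]},
    n_iter=5, cv=5, random_state=42,
).fit(X_demo, y_demo)
print(grid.best_params_)
print(rand.best_params_)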
reference
- https://blog.csdn.net/weixin_45187434/article/details/139667247
- https://blog.csdn.net/m0_59596937/article/details/128508570
- https://blog.csdn.net/weixin_42363541/article/details/134692160
原文始发于微信公众号(渗透测试安全攻防):机器学习常见算法【上】