@EtoDemerzel · 2017-11-29

Machine Learning Week 9 (ex8) Review

Machine Learning · Andrew Ng

This week covers anomaly detection. In the first part we detect failing servers in a network; in the second part we build a movie recommender system using collaborative filtering.

1 Anomaly Detection

The metrics used to monitor each server's status are throughput and latency.
We have an unlabeled dataset $\{x^{(1)}, x^{(2)}, \dots, x^{(m)}\}$; the vast majority of the examples are assumed to be normally operating servers, while a small number are anomalous.
We first get an intuitive sense of the data from a scatter plot.
*(Figure: scatter plot of the server dataset, latency and throughput.)*

1.1 Gaussian distribution

We choose a model for the distribution of the data.
The Gaussian distribution is given by
$$p(x;\mu,\sigma^2) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\!\left(-\frac{(x-\mu)^2}{2\sigma^2}\right)$$
where $\mu$ is the mean and $\sigma$ is the standard deviation.
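
As a quick way to get a feel for the formula, the density can be evaluated directly in Octave. This sketch is not part of the assignment; the values of `mu`, `sigma2`, and `x` below are made up:

    % Minimal sketch: evaluating the univariate Gaussian density for a vector x.
    mu = 14; sigma2 = 2;                  % assumed example parameters
    x  = [12; 14; 16];                    % assumed example inputs
    p  = (1 ./ sqrt(2 * pi * sigma2)) .* exp(-(x - mu).^2 ./ (2 * sigma2));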

1.2 Estimating parameters for Gaussian distribution

The parameters of the Gaussian for each feature $i$ are estimated by
$$\mu_i = \frac{1}{m}\sum_{j=1}^{m} x_i^{(j)}, \qquad \sigma_i^2 = \frac{1}{m}\sum_{j=1}^{m}\left(x_i^{(j)} - \mu_i\right)^2$$
estimateGaussian.m is completed as follows:

    function [mu sigma2] = estimateGaussian(X)
    %ESTIMATEGAUSSIAN This function estimates the parameters of a
    %Gaussian distribution using the data in X
    %   [mu sigma2] = estimateGaussian(X),
    %   The input X is the dataset with each n-dimensional data point in one row
    %   The output is an n-dimensional vector mu, the mean of the data set
    %   and the variances sigma^2, an n x 1 vector
    %

    % Useful variables
    [m, n] = size(X);

    % You should return these values correctly
    mu = zeros(n, 1);
    sigma2 = zeros(n, 1);

    % ====================== YOUR CODE HERE ======================
    % Instructions: Compute the mean of the data and the variances
    %               In particular, mu(i) should contain the mean of
    %               the data for the i-th feature and sigma2(i)
    %               should contain variance of the i-th feature.
    %

    mu = mean(X);
    sigma2 = var(X, 1);   % divide by N rather than N-1

    % =============================================================

    end
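
For context, this is roughly how the script then uses the function; multivariateGaussian is a helper supplied with the exercise, and I am quoting its calling convention from memory:

    % Fit one Gaussian per feature, then evaluate the density of every training example.
    [mu, sigma2] = estimateGaussian(X);
    p = multivariateGaussian(X, mu, sigma2);   % provided helper; with a vector sigma2
                                               % it treats the features as independent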

Once this is done, the script plots contour lines of the fitted Gaussian, producing the following figure:
*(Figure: contours of the fitted Gaussian overlaid on the dataset.)*

1.3 Selecting the threshold

$\varepsilon$ is the threshold: an example $x$ with $p(x) < \varepsilon$ is flagged as an anomaly.
We select this $\varepsilon$ using a cross-validation set, whose examples are labeled, and evaluate each candidate threshold with the $F_1$ score introduced earlier:
$$F_1 = \frac{2 \cdot \mathit{prec} \cdot \mathit{rec}}{\mathit{prec} + \mathit{rec}}, \qquad \mathit{prec} = \frac{tp}{tp + fp}, \qquad \mathit{rec} = \frac{tp}{tp + fn}$$
where $tp$, $fp$, and $fn$ are the numbers of true positives, false positives, and false negatives, respectively.

    function [bestEpsilon bestF1] = selectThreshold(yval, pval)
    %SELECTTHRESHOLD Find the best threshold (epsilon) to use for selecting
    %outliers
    %   [bestEpsilon bestF1] = SELECTTHRESHOLD(yval, pval) finds the best
    %   threshold to use for selecting outliers based on the results from a
    %   validation set (pval) and the ground truth (yval).
    %

    bestEpsilon = 0;
    bestF1 = 0;
    F1 = 0;

    stepsize = (max(pval) - min(pval)) / 1000;
    for epsilon = min(pval):stepsize:max(pval)

        % ====================== YOUR CODE HERE ======================
        % Instructions: Compute the F1 score of choosing epsilon as the
        %               threshold and place the value in F1. The code at the
        %               end of the loop will compare the F1 score for this
        %               choice of epsilon and set it to be the best epsilon if
        %               it is better than the current choice of epsilon.
        %
        % Note: You can use predictions = (pval < epsilon) to get a binary vector
        %       of 0's and 1's of the outlier predictions

        prediction = (pval < epsilon);
        tp = sum((prediction == 1) & (yval == 1));   % true positives
        fp = sum((prediction == 1) & (yval == 0));   % false positives
        fn = sum((prediction == 0) & (yval == 1));   % false negatives
        prec = tp / (tp + fp);                       % precision
        rec  = tp / (tp + fn);                       % recall
        F1 = 2 * prec * rec / (prec + rec);          % F1 score

        % =============================================================

        if F1 > bestF1
            bestF1 = F1;
            bestEpsilon = epsilon;
        end
    end

    end
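
Threshold selection then plugs in the densities computed on the cross-validation set (Xval, yval). A minimal sketch, reusing mu and sigma2 from the training set and the multivariateGaussian helper mentioned above:

    pval = multivariateGaussian(Xval, mu, sigma2);   % densities on the CV set
    [epsilon, F1] = selectThreshold(yval, pval);     % pick the best threshold
    outliers = find(p < epsilon);                    % indices of flagged anomalies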

With the selected $\varepsilon$, the detected anomalies are marked in the figure below:
*(Figure: anomalies in the dataset circled according to the chosen threshold.)*

1.4 High dimensional Dataset

We then run the same functions on a higher-dimensional dataset (11 features).
The procedure is no different from the 2-dimensional case; a sketch of the full pipeline is given below.
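
A sketch of the whole pipeline on the high-dimensional dataset; the file name ex8data2.mat is what I recall the exercise shipping, so treat it as an assumption:

    load('ex8data2.mat');                  % should provide X, Xval, yval
    [mu, sigma2] = estimateGaussian(X);
    p    = multivariateGaussian(X, mu, sigma2);
    pval = multivariateGaussian(Xval, mu, sigma2);
    [epsilon, F1] = selectThreshold(yval, pval);
    fprintf('epsilon = %e, anomalies found = %d\n', epsilon, sum(p < epsilon));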


2 Recommender system

We apply the collaborative filtering algorithm to a movie-ratings dataset to build a recommender system.
Dataset source: the MovieLens 100k dataset.
Visualizing the ratings matrix $Y$:
*(Figure: image plot of the ratings matrix, movies on one axis and users on the other.)*
For comparison, a $4 \times 4$ identity matrix visualized the same way looks like this:
*(Figure: image plot of the 4x4 identity matrix.)*
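
Plots like these can be reproduced with Octave's imagesc; a minimal sketch, assuming $Y$ has already been loaded from the dataset:

    imagesc(Y);                       % ratings matrix: rows are movies, columns are users
    ylabel('Movies'); xlabel('Users');
    figure; imagesc(eye(4));          % 4x4 identity matrix, for comparison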

2.1 Movie rating dataset

The matrix $Y$ (of size num_movies $\times$ num_users) holds the ratings;
the matrix $R$ is a binary indicator of the same size, where $R(i,j) = 1$ means movie $i$ has been rated by user $j$.
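
For example, the average rating of a movie should only count the users who actually rated it, which is what $R$ is for. A minimal sketch (movie index 1 is arbitrary):

    % Average rating of movie 1, counting only users with R(1, j) == 1.
    avg_rating = mean(Y(1, R(1, :) == 1));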

2.2 Collaborative filtering learning algorithm

All of section 2.2 is about completing cofiCostFunc.m.
The starter code provided in the file is as follows:

    function [J, grad] = cofiCostFunc(params, Y, R, num_users, num_movies, ...
                                      num_features, lambda)
    %COFICOSTFUNC Collaborative filtering cost function
    %   [J, grad] = COFICOSTFUNC(params, Y, R, num_users, num_movies, ...
    %   num_features, lambda) returns the cost and gradient for the
    %   collaborative filtering problem.
    %

    % Unfold the U and W matrices from params
    X = reshape(params(1:num_movies*num_features), num_movies, num_features);
    Theta = reshape(params(num_movies*num_features+1:end), ...
                    num_users, num_features);

    % You need to return the following values correctly
    J = 0;
    X_grad = zeros(size(X));
    Theta_grad = zeros(size(Theta));

    % ====================== YOUR CODE HERE ======================
    % Instructions: Compute the cost function and gradient for collaborative
    %               filtering. Concretely, you should first implement the cost
    %               function (without regularization) and make sure it
    %               matches our costs. After that, you should implement the
    %               gradient and use the checkCostFunction routine to check
    %               that the gradient is correct. Finally, you should implement
    %               regularization.
    %
    % Notes: X - num_movies x num_features matrix of movie features
    %        Theta - num_users x num_features matrix of user features
    %        Y - num_movies x num_users matrix of user ratings of movies
    %        R - num_movies x num_users matrix, where R(i, j) = 1 if the
    %            i-th movie was rated by the j-th user
    %
    % You should set the following variables correctly:
    %
    %        X_grad - num_movies x num_features matrix, containing the
    %                 partial derivatives w.r.t. to each element of X
    %        Theta_grad - num_users x num_features matrix, containing the
    %                     partial derivatives w.r.t. to each element of Theta
    %
    % =============================================================

    grad = [X_grad(:); Theta_grad(:)];

    end

2.2.1 Collaborative filtering cost function

The cost function without regularization is:
$$J = \frac{1}{2}\sum_{(i,j):\,r(i,j)=1}\left((\theta^{(j)})^T x^{(i)} - y^{(i,j)}\right)^2$$
So the following code is added:

    diff = (X * Theta' - Y);       % predicted minus actual ratings, num_movies x num_users
    vari = diff.^2;
    J = 1/2 * sum(vari(R == 1));   % sum only over entries that were actually rated

2.2.2 Collaborative filtering gradient

The gradient formulas are:

$$\frac{\partial J}{\partial x_k^{(i)}} = \sum_{j:\,r(i,j)=1}\left((\theta^{(j)})^T x^{(i)} - y^{(i,j)}\right)\theta_k^{(j)}, \qquad \frac{\partial J}{\partial \theta_k^{(j)}} = \sum_{i:\,r(i,j)=1}\left((\theta^{(j)})^T x^{(i)} - y^{(i,j)}\right)x_k^{(i)}$$

Following the vectorization tips in the exercise handout, we add the following code:

    for i = 1:num_movies,
        X_grad(i,:) = sum((diff(i,:) .* R(i,:))' .* Theta);
    end;
    for j = 1:num_users,
        Theta_grad(j,:) = sum((diff(:,j) .* R(:,j)) .* X);
    end;

After thinking about it for a while, I realized the gradients can be vectorized even further:

    X_grad = (diff .* R) * Theta;
    Theta_grad = (diff .* R)' * X;
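
Either version of the gradient can be verified numerically with the checkCostFunction routine mentioned in the starter comments; as far as I remember it is called with no arguments for the unregularized case:

    % Numerical gradient check for the unregularized implementation
    % (checkCostFunction is a helper supplied with the exercise).
    checkCostFunction();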

2.2.3 Regularized cost function

2.2.4 Regularized gradient

The regularized cost and gradients are:

$$J = \frac{1}{2}\sum_{(i,j):\,r(i,j)=1}\left((\theta^{(j)})^T x^{(i)} - y^{(i,j)}\right)^2 + \frac{\lambda}{2}\sum_{j=1}^{n_u}\sum_{k=1}^{n}\left(\theta_k^{(j)}\right)^2 + \frac{\lambda}{2}\sum_{i=1}^{n_m}\sum_{k=1}^{n}\left(x_k^{(i)}\right)^2$$

$$\frac{\partial J}{\partial x_k^{(i)}} = \sum_{j:\,r(i,j)=1}\left((\theta^{(j)})^T x^{(i)} - y^{(i,j)}\right)\theta_k^{(j)} + \lambda x_k^{(i)}, \qquad \frac{\partial J}{\partial \theta_k^{(j)}} = \sum_{i:\,r(i,j)=1}\left((\theta^{(j)})^T x^{(i)} - y^{(i,j)}\right)x_k^{(i)} + \lambda \theta_k^{(j)}$$

We only need to add the regularization terms to the code above, as follows:

    J = 1/2 * sum(vari(R == 1)) + lambda/2 * (sum((Theta.^2)(:)) + sum((X.^2)(:)));
    X_grad = (diff .* R) * Theta + lambda * X;
    Theta_grad = (diff .* R)' * X + lambda * Theta;
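
The regularized gradient can be checked the same way; 1.5 is the lambda value I recall the exercise script using for this check, so treat it as an assumption:

    % Numerical gradient check with regularization turned on (assumed lambda value).
    checkCostFunction(1.5);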

2.3 Learning movie recommendations

2.3.1 Recommendations

In the script, we fill in our own ratings for some of the movies listed in movie_list.txt; a rough sketch of what that looks like is below.
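
The indices and scores in this sketch are placeholders, not my actual choices:

    my_ratings = zeros(1682, 1);    % MovieLens 100k lists 1682 movies
    my_ratings(1)  = 4;             % rate the movie at index 1 in movie_list.txt as 4
    my_ratings(98) = 2;             % rate the movie at index 98 as 2
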
The movies on offer all seem to have been released before 2000, so I haven't seen many of them. I picked a few to rate:
*(Figure: my ratings for a handful of movies.)*
The recommender system then suggested the following movies for me:
*(Figure: list of top recommended movies.)*

I have no way to judge how accurate the recommendations are, since I haven't seen any of them. After casually looking up a few, though, I suspect I wouldn't enjoy them.
Maybe the sample of ratings I provided was too small, or maybe this recommender system is just too simplistic.
