Machine Learning
Assignment 2
Deadline: May 11, 2022 at 23:55h.
Upload your report (report.pdf), your implementation (lsq_regression.py) and your figure file (figures.pdf) to the TeachCenter. Please do not zip your files. Please use the provided framework file for your implementation.
The goal of this assignment is to learn about linear least squares regression and the double descent phenomenon, shown in Figure 1. In the classical learning setting, the U-shaped risk curve can be observed: the test error becomes large while the training error stays very low, i.e. the model does not generalize well to new data. However, a highly over-parameterized model with a large capacity allows the test error to go down again in a second descent (“double descent”), which can sometimes be observed in over-parameterized deep learning settings.
Figure 1: The double descent risk curve, which shows, in addition to the classical U-shaped risk curve, the decreasing test error for high-capacity function classes. Figure taken from [1].
We want to reproduce this phenomenon in the context of linear least squares regression with random features. Assume we have input data x = {x1, ..., xN}, with xn ∈ R^d d-dimensional samples uniformly distributed on the unit sphere such that ‖xn‖₂ = 1. Furthermore, consider associated targets y = {y1, ..., yN} with yn ∈ R, generated according to eq. (1).
Our goal is to model the underlying relation between training input data x and targets y and apply this to new, unseen test data by minimizing the regularized least squares error function
w∗ = arg min_{w ∈ R^M}  (1/2) Σ_{n=1}^{N} ( wᵀφ(xn) − yn )² + (λ/2) ‖w‖₂² .    (2)
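For reference, setting the gradient of eq. (2) with respect to w to zero yields the regularized normal equations

(ΦᵀΦ + λI) w∗ = Φᵀ y ,

where Φ ∈ R^{N×M} denotes the matrix whose n-th row is φ(xn)ᵀ. The QR-based procedure described in the tasks below is a numerically more stable way of obtaining the same w∗.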
The (possibly) non-linear transform φ(·) lifts the input data xn to a higher-dimensional space with
M features. Here, we are going to use random ReLU features v = {v1, ..., vM}, with each vm drawn uniformly on the unit sphere (‖vm‖₂ = 1), and

φ(xn) = (1/√M) ( max(0, vmᵀ xn) )_{m=1,...,M} .    (3)
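To make eq. (3) concrete, here is a minimal numpy sketch (the helper names are illustrative and not part of the provided framework) that draws the directions vm uniformly on the unit sphere by normalizing Gaussian samples and builds the feature matrix Φ with a ReLU non-linearity:

```python
import numpy as np

def sample_unit_sphere(num, dim, rng):
    """Draw `num` direction vectors uniformly from the unit sphere in R^dim."""
    v = rng.standard_normal((num, dim))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def relu_features(X, V):
    """Random ReLU features of eq. (3): Phi[n, m] = max(0, v_m^T x_n) / sqrt(M)."""
    M = V.shape[0]
    return np.maximum(0.0, X @ V.T) / np.sqrt(M)

# usage sketch
rng = np.random.default_rng(0)
X = sample_unit_sphere(200, 5, rng)   # inputs x_n on the unit sphere (N = 200, d = 5)
V = sample_unit_sphere(50, 5, rng)    # random directions v_m (here M = 50)
Phi = relu_features(X, V)             # Phi has shape (N, M)
```

Normalizing i.i.d. Gaussian vectors to unit length yields directions that are uniformly distributed on the sphere, which matches the requirement ‖vm‖₂ = 1.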
Model performance on the training and test data is evaluated with the mean squared error

L(y, ŷ) = (1/N) Σ_{i=1}^{N} ( yi − ŷi )² .    (4)
Tasks
Create a training dataset of input data x = {x1, ..., xN} and targets y = {y1, ..., yN} with N = 200, d = 5 and σ = 2 according to eq. (1).
In the same manner, create a test dataset with Nt = 50 for both test input data and test targets.
For different numbers of random features M, compute the primal solution of eq. (2) and record the train and test loss in each run. For each M, do the experiment r = 5 times to obtain averaged scores.
To solve the arising linear system in a numerically stable way, use a QR decomposition:
A = QR
z = Qᵀ b
R x = z → x = R⁻¹ z (solve this with numpy.linalg.solve)
A minimal code sketch of this procedure is given after this task list.
Implement the double descent phenomenon in task12 of lsq_regression.py. Do not use any other libraries than numpy and matplotlib.
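A minimal sketch of the QR-based solve, assuming the penalty term of eq. (2) is folded into an augmented system A = [Φ; √λ I], b = [y; 0] (one possible formulation; fit_primal_qr and mse are illustrative helpers, not functions of the provided framework):

```python
import numpy as np

def fit_primal_qr(Phi, y, lam):
    """Minimize eq. (2) via a QR decomposition of the ridge-augmented system."""
    N, M = Phi.shape
    A = np.vstack([Phi, np.sqrt(lam) * np.eye(M)])   # A = [Phi; sqrt(lam) I]
    b = np.concatenate([y, np.zeros(M)])             # b = [y; 0]
    Q, R = np.linalg.qr(A)                           # A = Q R
    z = Q.T @ b                                      # z = Q^T b
    return np.linalg.solve(R, z)                     # solve R w = z

def mse(y, y_hat):
    """Squared-error loss, cf. eq. (4)."""
    return np.mean((y - y_hat) ** 2)

# usage sketch:
# w = fit_primal_qr(Phi_train, y_train, lam)
# train_loss = mse(y_train, Phi_train @ w)
# test_loss = mse(y_test, Phi_test @ w)
```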
Task 2
The linear least squares problem from Task 1 can be reformulated in its dual representation, where an equivalent solution can be obtained.
The dual problem is given by
a∗ = arg min_{a ∈ R^N}  (1/2) aᵀ K a + (λ/2) ‖a + y‖₂² ,    (5)
introducing the kernel matrix K = ΦΦᵀ ∈ R^{N×N}.
Having knowledge of either the feature transform φ(x) or the corresponding kernel function k(x, x′) = φ(x)ᵀ φ(x′) allows us to operate flexibly in either the primal or the dual domain.
Our random ReLU features approximate an ArcCos kernel of the following form
k(x, x′) = 1/(2πd) ‖x‖ ‖x′‖ ( sin(θ) + (π − θ) cos(θ) ) ,    (6)

with θ = cos⁻¹(xᵀ x′). Bear in mind that we designed the input data such that ‖x‖₂ = 1. Also
note that the corresponding feature vectors in their explicit form do not have to be known.
Similar to Task 1, the dual solution can be obtained in closed form and subsequently used to make predictions for unseen test data. The relation between the primal solution w, which is required for making new predictions, and the dual variable a is as follows:
w = (1/λ) Φᵀ a .    (7)
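As a sketch of the dual route: sign and scaling conventions for the dual variable a differ between texts, so the version below uses the common kernel ridge parameterization a = (K + λI)⁻¹ y with predictions ŷ = K(X_test, X_train) a, which minimizes the same primal objective as eq. (2); fit_dual and predict_dual are illustrative names, not part of the provided framework:

```python
import numpy as np

def fit_dual(K, y, lam):
    """Dual (kernel ridge) coefficients: solve (K + lam * I) a = y."""
    return np.linalg.solve(K + lam * np.eye(K.shape[0]), y)

def predict_dual(K_test_train, a):
    """Predictions for test inputs: y_hat = K(X_test, X_train) a."""
    return K_test_train @ a

# usage sketch, either with the explicit features ...
# a = fit_dual(Phi_train @ Phi_train.T, y_train, lam)
# y_hat_test = predict_dual(Phi_test @ Phi_train.T, a)
# ... or directly with the ArcCos kernel of eq. (6), without forming Phi.
```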
Tasks
Compute the dual solution of eq. (5) in closed form and use it, via eq. (7), to make predictions for the train and test data.
Use exactly the same datasets as in Task 1. For the train data x, compare the kernel K and ΦΦᵀ. For different numbers of features M ∈ {10, 200, 800}, evaluate both terms and plot the row n = 10 from both resulting N × N matrices in one plot. Describe the influence of M. Compute for each M the mean absolute error between both 1D arrays (see the code sketch after this task list), i.e.
MAE(Kn, (ΦΦᵀ)n) = (1/N) Σ_{i=1}^{N} |(Kn)i − ((ΦΦᵀ)n)i| .
For each setting of M, compare the train and test errors obtained with the primal solution to those obtained with the dual solution.
Implement the dual problem also in task12 of lsq_regression.py. Again, do not use any other libraries than numpy and matplotlib.
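A possible sketch for the K versus ΦΦᵀ comparison, reusing the feature helpers from the earlier sketch; the 1/(2πd) normalization in arccos_kernel is an assumption for directions vm drawn uniformly on the unit sphere and should match the constant in eq. (6):

```python
import numpy as np

def arccos_kernel(X1, X2):
    """ArcCos kernel of eq. (6) for unit-norm inputs.
    The 1/(2*pi*d) factor assumes v_m uniform on the unit sphere."""
    d = X1.shape[1]
    cos_theta = np.clip(X1 @ X2.T, -1.0, 1.0)   # ||x|| = ||x'|| = 1 by construction
    theta = np.arccos(cos_theta)
    return (np.sin(theta) + (np.pi - theta) * np.cos(theta)) / (2.0 * np.pi * d)

def row_mae(K, G, n=10):
    """Mean absolute error between row n of K and row n of G = Phi Phi^T."""
    return np.mean(np.abs(K[n] - G[n]))

# usage sketch for the comparison (illustrative):
# K = arccos_kernel(X_train, X_train)
# for M in (10, 200, 800):
#     V = sample_unit_sphere(M, X_train.shape[1], rng)
#     Phi = relu_features(X_train, V)
#     print(M, row_mae(K, Phi @ Phi.T, n=10))
```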
References
[1] Mikhail Belkin, Daniel Hsu, Siyuan Ma, and Soumik Mandal. Reconciling modern machine-learning practice and the classical bias–variance trade-off. Proceedings of the National Academy of Sciences, 116(32):15849–15854, 2019.