
Exercise 7

Part of the course Machine Learning for Materials and Chemistry.

Kernel methods can be implemented quite straightforwardly for small datasets. Use the Morse potential with all constants set to 1 as the target function.
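For reference, a minimal sketch of this target function, assuming the standard Morse form V(r) = D (1 - exp(-a (r - r_e)))^2 with D = a = r_e = 1; the sampling interval below is an arbitrary choice:

```python
import numpy as np

def morse(r, D=1.0, a=1.0, r_e=1.0):
    """Morse potential V(r) = D * (1 - exp(-a * (r - r_e)))**2, all constants set to 1 by default."""
    return D * (1.0 - np.exp(-a * (r - r_e))) ** 2

# sample the potential on an interval around its minimum at r_e = 1
r = np.linspace(0.5, 4.0, 200)
energies = morse(r)
```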

Task 7.1: Hyperparameter optimisation

Start from the solution code of exercise 6 and include the optimisation of the sole hyperparameter gamma for each training set size. It is recommended to simply try a grid of values for the hyperparameter. Optimize using the correct loss function for KRR (which one is that?). Which problem do you face if you run this optimization repeatedly?
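One possible way to structure the grid search is sketched below; it is not the reference solution. The Gaussian kernel k(x, x') = exp(-gamma (x - x')^2), the regularisation strength `lam`, the sampling interval, and the gamma grid are all assumptions, and the validation loss is written as the squared error, which you should check against the KRR objective yourself:

```python
import numpy as np

def morse(r):
    """Target from the sketch above, repeated here for a self-contained example."""
    return (1.0 - np.exp(-(r - 1.0))) ** 2

def gaussian_kernel(x1, x2, gamma):
    """Pairwise Gaussian kernel matrix K_ij = exp(-gamma * (x1_i - x2_j)**2) for 1-D inputs."""
    d = x1[:, None] - x2[None, :]
    return np.exp(-gamma * d ** 2)

def krr_fit(x_train, y_train, gamma, lam=1e-10):
    """Solve (K + lam * I) alpha = y for the KRR coefficients alpha."""
    K = gaussian_kernel(x_train, x_train, gamma)
    return np.linalg.solve(K + lam * np.eye(len(x_train)), y_train)

def krr_predict(x_train, alpha, x_test, gamma):
    """Predict with the trained coefficients alpha."""
    return gaussian_kernel(x_test, x_train, gamma) @ alpha

def grid_search_gamma(x_train, y_train, x_val, y_val, gammas):
    """Return the gamma from the grid with the lowest validation loss."""
    best_gamma, best_loss = None, np.inf
    for gamma in gammas:
        alpha = krr_fit(x_train, y_train, gamma)
        pred = krr_predict(x_train, alpha, x_val, gamma)
        loss = np.mean((pred - y_val) ** 2)  # assumption: squared error; verify against the KRR loss
        if loss < best_loss:
            best_gamma, best_loss = gamma, loss
    return best_gamma, best_loss

# example usage with randomly drawn Morse data
rng = np.random.default_rng(0)
x_train, x_val = rng.uniform(0.5, 4.0, size=16), rng.uniform(0.5, 4.0, size=64)
gammas = np.logspace(-3, 3, 25)
best_gamma, best_loss = grid_search_gamma(x_train, morse(x_train), x_val, morse(x_val), gammas)
print(best_gamma, best_loss)
```

Rerunning this with a different random seed hints at the problem raised in the task: the selected gamma depends on which points were drawn.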

Task 7.2: Average performance

Using the loss function of KRR, plot the average performance over 10 models as a function of training set size, showing both the average and the variance as error bars. Do the same for the MAE and the RMSE. What do you observe?
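A sketch of how the averaging could look, reusing morse(), krr_fit(), krr_predict() and grid_search_gamma() from the Task 7.1 sketch above; the training set sizes and the validation set are arbitrary choices:

```python
import numpy as np
import matplotlib.pyplot as plt

# assumes morse(), krr_fit(), krr_predict() and grid_search_gamma() from the Task 7.1 sketch
rng = np.random.default_rng(1)
train_sizes = [4, 8, 16, 32, 64]
n_repeats = 10
x_val = rng.uniform(0.5, 4.0, size=200)
y_val = morse(x_val)
gammas = np.logspace(-3, 3, 25)

means = {"squared error": [], "MAE": [], "RMSE": []}
variances = {"squared error": [], "MAE": [], "RMSE": []}
for n in train_sizes:
    errors = {key: [] for key in means}
    for _ in range(n_repeats):
        x_train = rng.uniform(0.5, 4.0, size=n)
        y_train = morse(x_train)
        gamma, _ = grid_search_gamma(x_train, y_train, x_val, y_val, gammas)
        alpha = krr_fit(x_train, y_train, gamma)
        residual = krr_predict(x_train, alpha, x_val, gamma) - y_val
        errors["squared error"].append(np.mean(residual ** 2))
        errors["MAE"].append(np.mean(np.abs(residual)))
        errors["RMSE"].append(np.sqrt(np.mean(residual ** 2)))
    for key in errors:
        means[key].append(np.mean(errors[key]))
        variances[key].append(np.var(errors[key]))

# plot the average with the variance as error bars, one curve per error measure
for key in means:
    plt.errorbar(train_sizes, means[key], yerr=variances[key], marker="o", capsize=3, label=key)
plt.xscale("log")
plt.yscale("log")
plt.xlabel("training set size")
plt.ylabel("error on the validation set")
plt.legend()
plt.show()
```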

Task 7.3: Optimize the kernel

Implicitly, the kernel is another hyperparameter. For simplicity, we will consider only one family of kernel functions: choose different exponents for the distance in the Gaussian kernel expression and plot the model performance as a function of that exponent.
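One way to parameterise this family is k(x, x') = exp(-gamma |x - x'|^p) and to scan the exponent p (p = 2 recovers the Gaussian kernel, p = 1 the Laplacian). This generalised form, the range of exponents, and the reuse of a simple gamma grid are assumptions of the sketch below:

```python
import numpy as np
import matplotlib.pyplot as plt

def general_kernel(x1, x2, gamma, p):
    """k(x, x') = exp(-gamma * |x - x'|**p); p = 2 is the Gaussian kernel, p = 1 the Laplacian."""
    d = np.abs(x1[:, None] - x2[None, :])
    return np.exp(-gamma * d ** p)

def krr_fit_predict(x_train, y_train, x_test, gamma, p, lam=1e-10):
    """Train KRR with the generalised kernel and predict on x_test."""
    K = general_kernel(x_train, x_train, gamma, p)
    alpha = np.linalg.solve(K + lam * np.eye(len(x_train)), y_train)
    return general_kernel(x_test, x_train, gamma, p) @ alpha

def morse(r):
    """Morse potential with all constants set to 1."""
    return (1.0 - np.exp(-(r - 1.0))) ** 2

rng = np.random.default_rng(2)
x_train, x_val = rng.uniform(0.5, 4.0, size=32), rng.uniform(0.5, 4.0, size=200)
y_train, y_val = morse(x_train), morse(x_val)

exponents = np.linspace(0.5, 3.0, 11)
gammas = np.logspace(-3, 3, 25)
losses = []
for p in exponents:
    # re-optimise gamma for every exponent so the comparison between kernels is fair
    best = min(np.mean((krr_fit_predict(x_train, y_train, x_val, g, p) - y_val) ** 2)
               for g in gammas)
    losses.append(best)

plt.plot(exponents, losses, marker="o")
plt.yscale("log")
plt.xlabel("distance exponent p")
plt.ylabel("validation squared error")
plt.show()
```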