
Machine Learning for Materials and Chemistry

Aims to be an elective Master module, but can also be followed by Bachelor students. Developed at the University of Kassel.

Slides

Exercises

Meant both to deepen understanding of the lecture content and to boost Python skills.

Exercise 1: Getting Started

Check out the Python course (Basics and NumPy until Array Generators) and do any three exercises which do not have solutions in the course.

Exercise 2: Linear regression and model stability

Linear regression is a common technique used in statistics and machine learning to model the relationship between two variables. However, when using linear regression with a small number of data points, the model may be subject to noise and may not accurately capture the underlying pattern in the data. This can result in overfitting, underfitting, or inaccurate predictions.

As more data points are added, the model becomes more accurate and better able to capture the underlying pattern in the data. This is because the addition of more data points provides more information about the relationship between the variables and helps to reduce the impact of noise in the data.

The Theil-Sen regressor is a robust alternative to linear regression that is designed to overcome the issue of noise in small datasets. It estimates the slope of the best-fit line as the median of the slopes over pairs of data points, which makes it far less sensitive to outliers than the least-squares estimate used in linear regression. However, as the number of data points increases, the difference between the Theil-Sen regressor and linear regression tends to become smaller, and linear regression can provide accurate results when enough data is available.

We have used this particular regressor to improve Hammett models for reaction barriers.

Task 2.1: Implementing Linear Regression

  1. Write a Python function randomized_points(n: int) -> tuple[np.ndarray, np.ndarray] that generates n random points x uniformly distributed over the interval (0, 1) and the corresponding y = a·x + b + ε, where a is the slope (a = 2), b is the intercept (b = 1), and ε is an error term drawn from a normal distribution with mean 0 and standard deviation 0.2. The function should return two arrays xs and ys, where xs contains the x-coordinates of the n data points and ys contains the corresponding y-coordinates.

  2. Write a Python function linear_regression(xs: np.ndarray, ys: np.ndarray) -> tuple[float, float] that takes in the arrays xs and ys from the previous step and returns the slope and intercept of a linear regression line that fits the data points.
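
A minimal sketch of both functions, assuming np.polyfit is acceptable for the least-squares fit (the slope a = 2, intercept b = 1, and noise level 0.2 are hard-coded as given in the task):

```python
import numpy as np

def randomized_points(n: int) -> tuple[np.ndarray, np.ndarray]:
    # x uniform on (0, 1); y = 2x + 1 plus Gaussian noise with standard deviation 0.2
    xs = np.random.uniform(0, 1, n)
    ys = 2 * xs + 1 + np.random.normal(0, 0.2, n)
    return xs, ys

def linear_regression(xs: np.ndarray, ys: np.ndarray) -> tuple[float, float]:
    # least-squares fit of a degree-1 polynomial; np.polyfit returns (slope, intercept)
    slope, intercept = np.polyfit(xs, ys, 1)
    return float(slope), float(intercept)
```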

Task 2.2: Using the Theil-Sen regressor from SciPy

  1. Write a Python function theil_sen(xs: np.ndarray, ys: np.ndarray) -> tuple[float, float] that performs linear regression using the Theil-Sen regressor from SciPy. The function should take in the arrays xs and ys from Task 2.1 and return the slope and intercept of the regression line.
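
One possible implementation, assuming scipy.stats.theilslopes (the first two entries of its result are the median slope and the corresponding intercept):

```python
import numpy as np
from scipy import stats

def theil_sen(xs: np.ndarray, ys: np.ndarray) -> tuple[float, float]:
    # theilslopes expects the y-values first
    result = stats.theilslopes(ys, xs)
    return float(result[0]), float(result[1])
```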

Task 2.3: Assessing the Models

  1. Write a Python function predict(xs: np.ndarray, slope: float, intercept: float) -> np.ndarray that takes in the array xs and the slope and intercept of a regression line and returns an array of predicted y-values for the corresponding x-values in xs.

  2. Write a Python function calculate_prediction_error(n: int) -> tuple[float, float] that generates n random points in the interval (0, 1) and uses the predict function to estimate the corresponding y-values for each x-value, using the slope and intercept of a regression line fitted to data from Task 2.1. The function should then calculate the prediction error by subtracting the true noise-free y-values from the estimated y-values. Finally, the function should return the mean and standard deviation of the prediction error.
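
A sketch of both functions; it reuses randomized_points and linear_regression from Task 2.1 (theil_sen could be substituted) and compares against the noise-free line y = 2x + 1:

```python
import numpy as np

def predict(xs: np.ndarray, slope: float, intercept: float) -> np.ndarray:
    return slope * xs + intercept

def calculate_prediction_error(n: int) -> tuple[float, float]:
    # fit on n noisy points, then evaluate on n fresh x-values
    xs, ys = randomized_points(n)
    slope, intercept = linear_regression(xs, ys)
    xs_test = np.random.uniform(0, 1, n)
    errors = predict(xs_test, slope, intercept) - (2 * xs_test + 1)
    return float(np.mean(errors)), float(np.std(errors))
```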

Exercise 3: Turning a molecule into a mathematical object

When dealing with molecules in machine learning contexts, we need mathematical objects describing such molecules. The straightforward use of Cartesian coordinates and nuclear charges (which enter the Hamiltonian) turns out to be disadvantageous. In this exercise, we explore two alternatives.

Normally, representations tailored to molecules, such as the Coulomb matrix, perform better than one-hot encoding. However, sometimes a meaningful categorical representation captures more of the physics involved, such that one-hot encoding outperforms more advanced representations, e.g. in this work on learning reaction barriers.

Task 3.1: Implement the Coulomb Matrix representation

Write a Python function coulomb_matrix(positions: np.ndarray, charges: np.ndarray) -> np.ndarray that calculates the Coulomb matrix representation from the Cartesian coordinates (xs, ys, zs) and the nuclear charges.

$$M_{ij} = \begin{cases} \tfrac{1}{2} Z_i^{2.4} & \text{if } i = j \\ \dfrac{Z_i Z_j}{\lvert \mathbf{R}_i - \mathbf{R}_j \rvert} & \text{if } i \neq j \end{cases}$$
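
A straightforward sketch with an explicit double loop; positions is expected as an (n_atoms, 3) array and charges as an (n_atoms,) array:

```python
import numpy as np

def coulomb_matrix(positions: np.ndarray, charges: np.ndarray) -> np.ndarray:
    n = len(charges)
    M = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i == j:
                # diagonal: 0.5 * Z_i^2.4
                M[i, j] = 0.5 * charges[i] ** 2.4
            else:
                # off-diagonal: Z_i * Z_j / |R_i - R_j|
                M[i, j] = charges[i] * charges[j] / np.linalg.norm(positions[i] - positions[j])
    return M
```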

You may use the following molecular geometries to test your function:

Water:

O 0.0 0.0 0.0
H -0.77 -0.573 0.0
H 0.775 -0.566 0.0

Caffeine:

C 3.184 1.228 0.0
N 2.239 0.145 0.001
C 2.488 -1.176 0.004
N 1.401 -1.918 0.003
C 0.383 -1.03 -0.002
C 0.869 0.27 -0.003
C 0.012 1.416 -0.002
O 0.357 2.585 -0.003
N -1.33 1.037 0.002
C -1.851 -0.253 0.001
O -3.047 -0.443 0.005
N -0.95 -1.296 0.003
C -1.43 -2.661 -0.009
C -2.327 2.096 0.0
H 3.838 1.161 0.871
H 3.794 1.2 -0.905
H 2.628 2.164 0.033
H 3.48 -1.583 0.007
H -0.746 -3.272 0.576
H -2.432 -2.674 0.417
H -1.468 -3.041 -1.031
H -2.942 2.026 0.897
H -2.971 1.993 -0.873
H -1.797 3.045 -0.025

Task 3.2: One-hot encoding

To learn categorical data, we need a representation that allows regression to be done. One-hot encoding is such a representation; for the sake of simplicity of this exercise, we will consider a simplified, slightly overcomplete version of it. In one-hot encoding, each category in the data is represented by a binary vector with a length equal to the number of categories. Each vector has a value of 1 in the position corresponding to the category it represents, and a value of 0 in all other positions. For example, if we have three categories, alkanes, amines, and acids, then the one-hot encoding for alkanes would be [1,0,0], for amines [0,1,0], and for acids [0,0,1]. If there are multiple such categories, they are appended: if we take the categories (alkanes, amines), then the one-hot encoding would be [1,0,0,0,1,0].

Write a function one_hot(categories: list[str], possible_categories: list[str]) -> list[int] that converts a list of categories (such as letters) chosen from a set of possible categories into a one-hot encoded vector.

Examples:

one_hot(["A"], ["A", "B"]) -> [1,0]
one_hot(["A", "C"], ["A", "B", "C"]) -> [1,0,0,0,0,1]

Task 3.3: Invertible representations

  1. Is the Coulomb matrix representation invertible, i.e. can you recover the molecule from the representation? Explain your answer.

  2. Write a function sum_formula(coulomb_matrix: list[list[float]]) -> str that obtains the sum formula from the Coulomb matrix representation.
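
For part 2, the diagonal elements 0.5·Z^2.4 can be inverted to recover the nuclear charges. A sketch, with a hand-made element lookup that only covers the test molecules of Task 3.1 (H, C, N, O):

```python
import numpy as np
from collections import Counter

SYMBOLS = {1: "H", 6: "C", 7: "N", 8: "O"}  # extend as needed

def sum_formula(coulomb_matrix: list[list[float]]) -> str:
    diagonal = np.diagonal(np.asarray(coulomb_matrix))
    # invert M_ii = 0.5 * Z^2.4 and round to the nearest integer charge
    charges = np.round((2 * diagonal) ** (1 / 2.4)).astype(int)
    counts = Counter(SYMBOLS[z] for z in charges)
    return "".join(f"{element}{count}" for element, count in sorted(counts.items()))
```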

Exercise 4: Loss functions and their impact

Loss functions constitute the key metric that is minimized in order to improve a model. There are many possible choices, and they impact the predictions of a model. This means that the loss function needs to be stated to fully specify a model.

Different loss functions can yield highly disparate predictive power, exemplified in this work estimating bond dissociation energies.

Task 4.1: Loss functions

Let's fit a function of the form

$$y = a \exp(b x)$$

which can be linearized by taking the logarithm:

$$\log(y) = \log(a) + b x$$

Write loss functions for the squared error of the residuals in the original form and in the logarithmic form. Use the following signature: original_loss(xs: np.ndarray, ys: np.ndarray, params: tuple[float, float]) -> float.
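
A sketch of the two loss functions; the name log_loss and the two-parameter tuple (a, b) are my choices, and the logarithmic form of course requires positive ys and a:

```python
import numpy as np

def original_loss(xs: np.ndarray, ys: np.ndarray, params: tuple[float, float]) -> float:
    a, b = params
    residuals = ys - a * np.exp(b * xs)
    return float(np.sum(residuals ** 2))

def log_loss(xs: np.ndarray, ys: np.ndarray, params: tuple[float, float]) -> float:
    a, b = params
    # squared error after taking the logarithm of both the data and the model
    residuals = np.log(ys) - (np.log(a) + b * xs)
    return float(np.sum(residuals ** 2))
```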

Task 4.2: Fit models

Write a function random_data(a: float, b: float, c: float) -> tuple[np.ndarray, np.ndarray] that returns the values 0-9 for xs and some exponential of the form

$$y = a \exp(b x) + c \epsilon$$

where ε is element-wise normally distributed random noise. Fit the data using the loss functions of Task 4.1 and scipy.optimize.minimize, once for c = 0 and once for c being some non-zero value. Discuss the results.
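
A possible starting point, reusing the (hypothetical) original_loss and log_loss from the previous sketch; the parameter values a = 1 and b = 0.5 are arbitrary choices:

```python
import numpy as np
from scipy.optimize import minimize

def random_data(a: float, b: float, c: float) -> tuple[np.ndarray, np.ndarray]:
    xs = np.arange(10)  # the values 0-9
    ys = a * np.exp(b * xs) + c * np.random.normal(size=xs.shape)
    return xs, ys

xs, ys = random_data(1.0, 0.5, 0.0)  # repeat with a non-zero c and compare
fit_original = minimize(lambda p: original_loss(xs, ys, p), x0=(1.0, 0.5))
fit_log = minimize(lambda p: log_loss(xs, ys, p), x0=(1.0, 0.5))
print(fit_original.x, fit_log.x)
```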

Task 4.3: Motivation for loss functions

Consider the effects you saw in Task 4.2. What could be possible motivations to choose one loss function over another? Is there a best loss function? Which problem do you see for loss functions for multivariate models?

Exercise 5: Simple models

Most models for materials/chemistry applications build upon some description of a local environment. One way of doing so is to assign the same property to each similar environment. Models of this kind can be refined by making the environment larger, e.g. by considering more neighbors. If the whole molecule is segmented into complete yet disjoint parts, this family of models is called a group contribution model/method. Often, models with increasingly large local environments yield acceptable results but fail to converge, since many-body and long-range effects and interactions typically contribute significantly, especially for charged systems.

When considering the full environment, i.e. the whole molecule for small molecules, it is possible to learn the 3D geometry from the graph.

Task 5.1: Dressed atom from a database

The file bondorder.txt contains one line per molecule with three space-separated groups:

  • comma-separated elements
  • comma-separated bond order matrix as upper triangle (there are relevant numpy functions; note that bond orders of 1.5 are used to denote aromatic bonds)
  • energy in Hartrees

For water, the row would be

O,H,H 1,1,0 -74.9621236632301

Write a function that reads this file and estimates the energies using the dressed atom model, i.e. approximating the total energy as a sum of atom-wise contributions which only depend on the elements in question. For example, water has the sum formula H2O, so the total energy of water can be approximated as twice the energy of a hydrogen plus once the energy of an oxygen. The atomic energies in this model are fitting parameters, which can be obtained via linear regression. What relative error do you get for the energies?
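
A sketch of the dressed atom model under some assumptions: the file name is as given above, and the relative error is taken to be the mean of |prediction − energy| / |energy|:

```python
import numpy as np

def dressed_atom_error(filename: str = "bondorder.txt") -> float:
    molecules, energies = [], []
    with open(filename) as handle:
        for line in handle:
            elements, _, energy = line.split()
            molecules.append(elements.split(","))
            energies.append(float(energy))
    energies = np.array(energies)

    # feature matrix: how often each element occurs in each molecule
    unique_elements = sorted({element for mol in molecules for element in mol})
    counts = np.array([[mol.count(element) for element in unique_elements] for mol in molecules])

    # atomic energies via linear regression (least squares)
    atomic_energies, *_ = np.linalg.lstsq(counts, energies, rcond=None)
    predictions = counts @ atomic_energies
    return float(np.mean(np.abs((predictions - energies) / energies)))
```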

Task 5.2: Bond counting

Now follow the same strategy as in Task 5.1, but also consider all unique bonds, i.e. a CO bond, a CH bond, ... The total energy is now assumed to have one contribution from every element and another contribution from every bond. What relative error do you get now?

Task 5.3: Generalisation

Give three different reasons (with examples) why this method must fail beyond the initial success you showed in Tasks 5.1 and 5.2. Are these methods useful then? Discuss in the context of data standardisation.

Exercise 6: Graph representations

The interpretation of molecular bond topology as a graph opens up a wide array of concepts and methods. While the interpretation has many edge cases where it fails, it is widely applicable and a common starting point for building representations which are independent of any given molecular configuration. Formally, this is a well-defined mapping, since the graph represents all thermally accessible minimum-energy configurations of molecules sharing that bond topology. This exploits the fact that barriers for bond reorganisation are often comparably high and insurmountable at room temperature.

Task 6.1: Trust, but verify

Not all bond order matrices are valid single molecules. Write a function is_valid(bom: np.ndarray) -> bool which checks as many aspects about a bond order matrix as you can think of.

Task 6.2: It's all connected

Write a function is_connected(bom: np.ndarray) -> bool which takes a bond order matrix and determines whether the molecular graph is connected. First, describe (but do not implement) how this can be done in plain Python without any special graph tricks. Explain why this is costly for large graphs.

Then, realise that non-zero entries in the matrix powers A^k of the adjacency matrix A denote all pairs connected by a walk of length k. Use this to check whether a molecular graph is connected. Explain why this is costly for large graphs.
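
A sketch of the matrix-power approach: the graph is connected exactly if, for every pair of atoms, some walk of length below n exists, i.e. if the accumulated non-zero pattern of A^0 through A^(n-1) has no zero entry:

```python
import numpy as np

def is_connected(bom: np.ndarray) -> bool:
    # adjacency matrix: any non-zero bond order (including aromatic 1.5) counts as an edge
    A = (np.asarray(bom) > 0).astype(int)
    n = len(A)
    power = np.eye(n, dtype=int)
    reach = np.eye(n, dtype=bool)
    for _ in range(n - 1):
        # non-zero pattern of A^k; binarising avoids overflowing walk counts
        power = ((power @ A) > 0).astype(int)
        reach |= power.astype(bool)
    return bool(reach.all())
```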

Optionally, use the hint that the Fiedler value tells you about graph connectivity without matrix powers. Implement this in numpy and explain why this is costly for large graphs.

If you want to double-check your results, you may use networkx.is_connected.

Task 6.3: Tell me y

Describe how to guess which elements are present from just a bond order matrix. Then think of three physical or chemical properties and decide for each whether it is more likely to be learnable by regression using a graph-based or a geometry-based molecular representation. Justify your categorisation for each.

Exercise 7: Decision trees

This time, no code allowed.

Task 7.1: Managing expectations

You have two dice A (which can show 1, 2, or 3) and B (which can show 1 or 3) and a coin C (which can show 0 or 2). If the quantity

$$Q = (A + B) \cdot C$$

is to be determined: which item, evaluated first, offers the largest insight into the result? Think about the expectation value of the standard deviation.

Task 7.2: Making choices

Create the smallest possible decision tree which estimates the total energy of a molecule (any bonds obeying the octet rule) of at most 4 atoms of the elements C, H, and F. Why is it a) inefficient and b) futile to generalize to many atoms (or even 5)?

Task 7.3: Less is more

What do you expect if columns with random entries are added to a small dataset that is the input to a decision tree model? How would that be in the case of very large datasets?

Exercise 8: Sklearn

Scikit-learn (also known as sklearn) is a popular software package for machine learning applications. Please read the quickstart documentation.

Task 8.1: Reading data

In Moodle, you will find a CSV file with training data. You can load the data into sklearn by first reading it into a pandas dataframe. The labels are all columns except id, SMILES, and _ft_*. Plot a histogram of their distributions. Can you already tell which ones are easier to learn?
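
A possible starting point; the file name training.csv is a placeholder for the file from Moodle:

```python
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("training.csv")  # placeholder file name

# labels: all columns except the identifier, the SMILES string and the _ft_* features
label_columns = [column for column in df.columns
                 if column not in ("id", "SMILES") and not column.startswith("_ft_")]
df[label_columns].hist(bins=50, figsize=(12, 8))
plt.tight_layout()
plt.show()
```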

Task 8.2: Trees

Fit a regression model using a decision tree with different values of max_depth. What do you observe?
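
A sketch of such a scan, building on the dataframe and label_columns from the previous sketch; using the _ft_* columns as features and the first label is an arbitrary choice:

```python
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error
from sklearn.tree import DecisionTreeRegressor

X = df[[column for column in df.columns if column.startswith("_ft_")]]
y = df[label_columns[0]]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for depth in (2, 5, 10, 20, None):
    model = DecisionTreeRegressor(max_depth=depth, random_state=0).fit(X_train, y_train)
    print(depth,
          mean_absolute_error(y_train, model.predict(X_train)),
          mean_absolute_error(y_test, model.predict(X_test)))
```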

Task 8.3: Pipelines

One of the core features of sklearn is to combine estimators and data preprocessing into pipelines, which not only simplify the code you write but also help to ensure that no data leakage happens. Read the documentation and write a pipeline which scales all the features. Explain why this will only make a difference for methods other than decision trees.
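
A minimal pipeline sketch, reusing X_train, X_test, y_train and y_test from the previous sketch; StandardScaler is one way to scale all features, and the tree at the end could be swapped for any other estimator:

```python
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeRegressor

# the scaler is fit on the training data only when the pipeline is fit,
# so no information from the test set leaks into the preprocessing
pipeline = make_pipeline(StandardScaler(), DecisionTreeRegressor(max_depth=5))
pipeline.fit(X_train, y_train)
print(pipeline.score(X_test, y_test))
```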

Exercise 9: KRR

Task 9.1: Generate training data

Write a function training_data(npoints: int) which generates a training set of npoints points over the interval (0.2, 5), including their y-values, which here should be given by the Morse potential with unit parameters.
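
A sketch, assuming "unit parameters" means D_e = a = r_e = 1 in the Morse potential V(r) = D_e (1 − exp(−a (r − r_e)))², and that the training points may be drawn uniformly at random:

```python
import numpy as np

def morse(r: np.ndarray) -> np.ndarray:
    # Morse potential with unit well depth, width and equilibrium distance
    return (1 - np.exp(-(r - 1))) ** 2

def training_data(npoints: int) -> tuple[np.ndarray, np.ndarray]:
    xs = np.random.uniform(0.2, 5, npoints)
    return xs, morse(xs)
```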

Task 9.2: Implement Kernel Ridge Regression

Do not use external libraries for this task, but rather implement KRR from the equations in the slides using scipy/numpy only. Implement a function build_model(points: np.ndarray) which takes the data points from task 1 and returns the model coefficients (alpha on the slides) for a Gaussian kernel. Write another function evaluate(points: np.ndarray, alphas: np.ndarray, testset: np.ndarray) which returns the mean absolute error on a number of points in the testset of the same length as the training data.
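
A compact sketch of KRR with a Gaussian kernel, reusing the morse function from the Task 9.1 sketch. The kernel width and regularisation strength are hyperparameters I picked arbitrarily, and, deviating slightly from the signatures above, the training labels are passed explicitly:

```python
import numpy as np

SIGMA = 0.5     # kernel width (hyperparameter)
LAMBDA = 1e-10  # regularisation strength (hyperparameter)

def kernel(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    # Gaussian kernel matrix between two sets of 1D points
    return np.exp(-((a[:, None] - b[None, :]) ** 2) / (2 * SIGMA ** 2))

def build_model(xs: np.ndarray, ys: np.ndarray) -> np.ndarray:
    # solve (K + lambda * I) alpha = y for the model coefficients alpha
    K = kernel(xs, xs)
    return np.linalg.solve(K + LAMBDA * np.eye(len(xs)), ys)

def evaluate(xs: np.ndarray, alphas: np.ndarray, testset: np.ndarray) -> float:
    # mean absolute error against the exact Morse potential on the test points
    predictions = kernel(testset, xs) @ alphas
    return float(np.mean(np.abs(predictions - morse(testset))))
```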

Task 9.3: Learning curves

Run your code for different sizes of the training set and plot the error as a function of the number of training points. What do you observe? Do you have ideas for how the learning curve could become steeper or lie lower? Hint: what if the domain was not (0.2, 5) but rather (0.2, 100)?