Exercise 01 - Tensors¶

In Chapter 1 of the lecture, we recapitulated tensor notation and tensor analysis. In this exercise, we will learn how to code these operations with torch in Python.

Task 1: Installation and Python Basics¶

There are many ways to install Python and many editors to write Python code. If you already have a version of Python or Anaconda installed, you can keep that version. However, if you are installing Python for the first time, I recommend the following procedure:

Install a minimal Python¶

  • Go to https://github.com/conda-forge/miniforge and download the "Miniforge3" installer for your operating system
  • Start the installer
  • Install for "Just Me" and add to PATH variable

Install Visual Studio Code as your code editor¶

  • Go to https://code.visualstudio.com and download the "Visual Studio Code" installer for your operating system
  • Start the installer
  • Install using your personal preferences (desktop icon etc.)
  • Customize to your liking

Install required packages¶

To solve the tasks with code, we will use a package called torch-fem, a differentiable finite element solver built on the PyTorch framework. PyTorch is a powerful Python package for operating on tensors. In contrast to NumPy, it can store gradients together with tensors and thus enables automatic differentiation. The package is widely used for machine learning and optimization.

For installation it is best to create a new conda environment via

    conda create -n "struct_opt" python

and activate that environment via

    conda activate struct_opt

to get a fresh, independent virtual Python environment in which to install the packages required for this course. It is highly recommended to use such an environment to prevent potential conflicts with other Python projects.

In the activated environment, install the package torch-fem via

    pip install torch-fem

which also installs the required dependencies, including PyTorch itself. After that, you should be able to import the torch package in this Jupyter Notebook:

In [ ]:
import matplotlib.pyplot as plt
import torch
from torchfem.utils import plot_contours

# Use double precision (float64) as the default dtype for all new tensors
torch.set_default_dtype(torch.double)
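
As a quick sanity check of the automatic differentiation mentioned above, consider the following minimal sketch (the variable names are just examples): it computes the derivative of $t^2$ at $t=3$.

    # Create a scalar tensor that tracks gradients
    t = torch.tensor(3.0, requires_grad=True)
    u = t**2
    # Backpropagate and read off the derivative du/dt = 2t
    u.backward()
    print(t.grad)  # tensor(6.)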

a) Define the variables $a=2.0$ and $b=3.0$.

In [ ]:
# Implement your solution here
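
A minimal sketch of one possible solution:

    a = 2.0
    b = 3.0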

b) Compute the sum $c=a+b$ and print the result using the built-in print() function.

In [ ]:
# Implement your solution here
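
One possible solution, reusing a and b from above:

    c = a + b
    print(c)  # 5.0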

c) Use an f-string to print a formatted statement as "The sum of a=<a> and b=<b> is <c>."

In [ ]:
# Implement your solution here
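
A possible solution: inside an f-string, expressions in curly braces are evaluated and inserted into the string.

    print(f"The sum of a={a} and b={b} is {c}.")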

d) Here is a function multiply(x,y). Call it with variables $a$ and $b$ and assign the result to a variable $d$. Print $d$.

In [ ]:
def multiply(x, y):
    return x * y


# Implement your solution here
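
A minimal sketch of one possible solution:

    d = multiply(a, b)
    print(d)  # 6.0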

e) Write a function that divides two numbers and adds a third number to the result. Test it by computing $e$ with inputs $a$, $b$, $4.0$. Print $e$.

In [ ]:
# Implement your solution here
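
A minimal sketch; the function name divide_add is just a suggestion:

    def divide_add(x, y, z):
        # Divide x by y, then add z to the result
        return x / y + z

    e = divide_add(a, b, 4.0)
    print(e)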

f) You get a list of integers called numbers. Iterate over the items in the list with a for loop and print for each item whether it is odd or even.

Hint: The modulo operator i%j returns the remainder of the division $i/j$. For example, 7%2 returns 1, so even numbers satisfy i%2 == 0.

In [ ]:
numbers = [33, 43, 9, 7, 38, 25, 17, 19, 29]

# Implement your solution here
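
A minimal sketch of one possible solution:

    for n in numbers:
        # n % 2 is 0 for even numbers and 1 for odd numbers
        if n % 2 == 0:
            print(f"{n} is even")
        else:
            print(f"{n} is odd")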

g) Implement a function that takes a list as an argument and finds the maximum value in that list. It should return both the maximum value and its position in the list.

In [ ]:
# Implement your solution here

value, position = find_max(numbers)
print(f"Resulting value: {value}, resulting position: {position}")

Task 2: Vectors and vector products in torch¶

Two vectors $\mathbf{x}, \mathbf{y} \in \mathcal{R}^3$ are given by their components $$ x_i = \begin{pmatrix}2\\1\\3\end{pmatrix} \quad y_i = \begin{pmatrix}5\\0\\1\end{pmatrix} $$

a) Define the vectors in torch using torch.tensor(data). The data is an array-like object, e.g. a list or tuple.

In [ ]:
# Implement your solution here
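
A minimal sketch of one possible solution:

    x = torch.tensor([2.0, 1.0, 3.0])
    y = torch.tensor([5.0, 0.0, 1.0])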

b) Compute the inner product (=scalar product, dot product) between those two tensors using torch.inner(x,y) and print the result.

In [ ]:
# Implement your solution here
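
One possible solution, reusing x and y from part a):

    print(torch.inner(x, y))  # 2*5 + 1*0 + 3*1 = 13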

c) Compute the outer product (=dyadic product) between those two tensors using torch.outer(x,y) and print the result.

In [ ]:
# Implement your solution here
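
One possible solution; the result is a 3x3 matrix with entries $x_i y_j$:

    print(torch.outer(x, y))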

Task 3: Tensors and tensor products in torch¶

Given are the tensors $\mathbf{n} \in \mathcal{R}^{3}$, $\pmb{\varepsilon} \in \mathcal{R}^{3 \times 3}$, and $\mathbb{C} \in \mathcal{R}^{3 \times 3 \times 3 \times 3}$ by their components $$ n_i = \begin{pmatrix}1.0\\0.0\\0.0\end{pmatrix} $$ $$ \varepsilon_{ij} = \begin{pmatrix} 5.0 & 0.1 & 0.0\\ 0.1 & 2.0 & 0.0\\ 0.0 & 0.0 & 1.0 \end{pmatrix} $$ $$ C_{ijkl} = \lambda \delta_{ij} \delta_{kl} + \mu \left(\delta_{ik}\delta_{jl} + \delta_{il}\delta_{jk}\right) $$ with $\lambda=1.0$ and $\mu=2.0$.

a) Define the tensors $\mathbf{n}$ and $\pmb{\varepsilon}$ using torch.tensor() and print them.

In [ ]:
# Implement your solution here
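
A minimal sketch of one possible solution (the variable name eps is just a suggestion):

    n = torch.tensor([1.0, 0.0, 0.0])
    eps = torch.tensor([
        [5.0, 0.1, 0.0],
        [0.1, 2.0, 0.0],
        [0.0, 0.0, 1.0],
    ])
    print(n)
    print(eps)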

b) Define the tensor $\mathbb{C}$ using torch.einsum(). See https://pytorch.org/docs/stable/generated/torch.einsum.html

In [ ]:
# Implement your solution here
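
A minimal sketch using torch.eye(3) for the Kronecker delta $\delta_{ij}$ (lmbda is spelled that way to avoid the reserved keyword lambda):

    lmbda = 1.0
    mu = 2.0
    delta = torch.eye(3)
    # C_ijkl = lambda * d_ij d_kl + mu * (d_ik d_jl + d_il d_jk)
    C = lmbda * torch.einsum("ij,kl->ijkl", delta, delta) + mu * (
        torch.einsum("ik,jl->ijkl", delta, delta)
        + torch.einsum("il,jk->ijkl", delta, delta)
    )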

c) Compute the stress tensor $$\pmb{\sigma} = \mathbb{C} : \pmb{\varepsilon}$$ using torch.tensordot() and print the result.

In [ ]:
# Implement your solution here
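
One possible solution, reusing C and eps from the sketches above: dims=2 contracts the last two indices of C with the two indices of eps, i.e. $\sigma_{ij} = C_{ijkl}\,\varepsilon_{kl}$.

    sigma = torch.tensordot(C, eps, dims=2)
    print(sigma)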

d) Compute the traction vector $$\mathbf{t} = \pmb{\sigma} \cdot \mathbf{n}$$ using torch.matmul() or its short form @ and print the result.

In [ ]:
# Implement your solution here
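
One possible solution, reusing sigma and n from above:

    t = sigma @ n
    print(t)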

e) Compute the strain energy density $$E = \frac{1}{2}\pmb{\sigma} : \pmb{\varepsilon}$$ using torch.tensordot() and print the result.

In [ ]:
# Implement your solution here
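
One possible solution: for two matrices, torch.tensordot with dims=2 computes the double contraction $\sigma_{ij}\,\varepsilon_{ij}$.

    E = 0.5 * torch.tensordot(sigma, eps, dims=2)
    print(E)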

Task 4: Gradients in 1D¶

We are given a function $g: \mathcal{R} \rightarrow \mathcal{R}$ defined as

$$ g(x) = x^2+x+1 $$

Define the function, compute its gradient, and plot both on $x \in [-5, 5]$.

a) Define the function.

In [ ]:
# Implement your solution here
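
A minimal sketch of one possible solution:

    def g(x):
        return x**2 + x + 1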

b) Use torch.linspace(start, end, steps, requires_grad=True) to create a variable $x \in [-5, 5]$. Setting requires_grad=True enables the computation of gradients using automatic differentiation. We treat this powerful feature as a black box in this lecture.

In [ ]:
# Implement your solution here

# Compute the gradient
dgdx = torch.autograd.grad(g(x).sum(), x)[0]
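
For reference, a minimal sketch of the definition of x that belongs in the placeholder above, before the gradient computation (steps=100 is an arbitrary choice):

    x = torch.linspace(-5, 5, steps=100, requires_grad=True)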

c) Verify by hand calculation that the plotted results are correct.

In [ ]:
# Plot function and gradient
with torch.no_grad():
    fig, ax = plt.subplots(1, 2, figsize=(10, 5))
    ax[0].plot(x, g(x), color="black")
    ax[0].set_xlabel("x")
    ax[0].set_ylabel("g(x)")
    ax[0].set_title("Function g(x)")
    ax[0].grid()
    ax[1].plot(x, dgdx, color="deeppink")
    ax[1].set_xlabel("x")
    ax[1].set_ylabel("dg/dx")
    ax[1].set_title("Gradient dg/dx")
    ax[1].grid()
    plt.tight_layout()
    plt.show()

Task 5: Gradients in 2D¶

Given is a scalar field $f: \mathcal{R}^2 \rightarrow \mathcal{R}$ defined as

$$ f(\mathbf{x}) = (\mathbf{x} - \tilde{\mathbf{x}}) \cdot \mathbf{Q} \cdot (\mathbf{x} - \tilde{\mathbf{x}}) $$ with $$ \mathbf{Q} = \begin{pmatrix} 2 & 1 \\ 1 & 1 \end{pmatrix} \quad \text{and} \quad \tilde{\mathbf{x}} = \begin{pmatrix} -1\\ 1 \end{pmatrix} $$

a) Compute the gradient analytically.

Doing these computations by hand takes a while. Therefore, we take a look at how to compute gradients using PyTorch. To do so, we start by defining $\mathbf{Q}$, $\tilde{\mathbf{x}}$, and the function $f(\mathbf{x})$.

b) Implement the function $f(\mathbf{x})$.

In [ ]:
# Implement your solution here
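
A minimal sketch for a single point $\mathbf{x}$ (the names Q and x_tilde are just suggestions):

    Q = torch.tensor([[2.0, 1.0], [1.0, 1.0]])
    x_tilde = torch.tensor([-1.0, 1.0])

    def f(x):
        dx = x - x_tilde
        # Quadratic form (x - x_tilde) . Q . (x - x_tilde)
        return dx @ Q @ dx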

We would like to be able to evaluate the function for many values of $\mathbf{x}$ at the same time. This corresponds to passing a tensor of shape $\mathcal{R}^{... \times 2}$, i.e. with arbitrary leading dimensions and a last axis of length 2.

c) Reimplement the function $f(\mathbf{x})$ employing an ellipsis ... in torch.einsum().

In [ ]:
# Implement your solution here
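
A minimal sketch reusing Q and x_tilde from above: the ellipsis stands for any number of leading batch dimensions, so the same function works for a single point of shape (2,) and for grids of shape (..., 2).

    def f(x):
        dx = x - x_tilde
        return torch.einsum("...i,ij,...j->...", dx, Q, dx)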

If your function is defined correctly, the following cell should plot the function values as a contour plot.

In [ ]:
# Define x grid
x0 = torch.linspace(-3, 3, steps=100, requires_grad=True)
x1 = torch.linspace(-3, 3, steps=100, requires_grad=True)
x_grid = torch.stack(torch.meshgrid(x0, x1, indexing="xy"), dim=2)

plot_contours(x_grid, f(x_grid), title="f(x)", figsize=(5, 5))

Note that the requires_grad=True argument marks these specific tensors for use in gradient computations: they reserve storage for the tensor data as well as for the gradients. Now, let's compute the actual gradients with automatic differentiation.

d) Check if the gradients are computed correctly by comparing the result with the analytical derivation and your visual understanding of the gradient.

In [ ]:
# Compute gradient
dfdx = torch.autograd.grad(f(x_grid).sum(), x_grid)[0]

# Reproduce basic plot
plot_contours(x_grid, f(x_grid), title="f(x)", figsize=(5, 5))

# Plot gradient vectors as arrows on top of previous plot
with torch.no_grad():
    stride = 5
    plt.quiver(
        x_grid[::stride, ::stride, 0],
        x_grid[::stride, ::stride, 1],
        dfdx[::stride, ::stride, 0],
        dfdx[::stride, ::stride, 1],
    )