Tiziano Zito 2024-08-05 12:42:05 +02:00
commit 57a8ab8eba
2 changed files with 211 additions and 0 deletions

README.md (new file, 43 lines)
# What every scientist should know about computer architecture
**Important**: these are instructor notes, remove this file before showing the materials to the students. The notes can be added after the lecture, of course.
## Introduction
- [Puzzle](puzzle.ipynb) (how swapping two nested for-loops makes for a >27× slowdown)
- Let students play around with the notebook and try to find the "bug"
- A more thorough benchmark using the same code is [here](benchmark_python/)
## A digression in CPU architecture and the memory hierarchy
- Go to [A Primer in CPU architecture](architecture)
- The need for a hierarchical access to data for the CPU should be clear now ➔ the "starving" CPU problem
- Have a look at the historical evolution of [speeds](speed/) of different components in a computer:
- the CPU clock rate
- the memory (RAM) bandwidth, latency, and clock rate
- the storage media access rates
- Measure size and timings for the memory hierarchy on my machine with a low level [C benchmark](benchmark_low_level)
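The low-level C benchmark measures the hierarchy directly; a rough Python analogue can already be shown in class (a sketch, not the benchmark linked above — the buffer sizes and the `copy_bandwidth` helper are made up here): throughput drops once the working set outgrows the caches.

```python
import time
import numpy as np

def copy_bandwidth(n_bytes, repeats=5):
    """Estimate copy bandwidth in GB/s for a buffer of n_bytes."""
    src = np.zeros(n_bytes // 8, dtype='float64')
    dst = np.empty_like(src)
    best = float('inf')
    for _ in range(repeats):
        t0 = time.perf_counter()
        dst[:] = src                      # sequential read + write
        best = min(best, time.perf_counter() - t0)
    return 2 * n_bytes / best / 1e9       # count both read and write traffic

# 16 KB fits in L1 on most machines; 64 MB spills to RAM
for size in (2**14, 2**18, 2**22, 2**26):
    print(f'{size / 2**20:8.3f} MB: {copy_bandwidth(size):7.1f} GB/s')
```

Absolute numbers vary wildly between machines, and the smallest sizes are inflated by timer and interpreter overhead; the qualitative drop for the largest buffer is the point.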
## Back to the Python benchmark (second try)
- can we explain what is happening?
- it must have to do with the good (or bad) use of cache properties
- but how are numpy arrays laid out in memory?
## Anatomy of a numpy array
- [memory layout of numpy arrays](numpy)
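A minimal illustration of what "memory layout" means in practice (plain numpy, nothing assumed beyond the linked material): the `strides` attribute gives the number of bytes between consecutive elements along each axis, and it is the only difference between C and Fortran order.

```python
import numpy as np

a = np.arange(6, dtype='float64').reshape(2, 3)  # C order (numpy default)
f = np.asfortranarray(a)                         # same values, Fortran order

# strides = bytes to jump to the next element along each axis
print(a.strides)  # (24, 8): elements of one row are adjacent in memory
print(f.strides)  # (8, 16): elements of one column are adjacent in memory
```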
## Back to the Python benchmark (third try)
- can we explain what is happening now? Yes, more or less ;-)
- quick fix for the [puzzle](puzzle.ipynb): try and add `order='F'` in the "bad" snippet and see that it "fixes" the bug ➔ why?
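Why `order='F'` helps, in miniature (sizes shrunk from the puzzle's `2**21` so this sketch runs instantly): in a Fortran-ordered array one column is a contiguous block of memory, so `x[:, column] = ts` becomes a sequential write again.

```python
import numpy as np

n_series, len_one_series = 30, 2**15   # the puzzle uses 2**21
ts = np.zeros(len_one_series, dtype='float64')

x = np.zeros((len_one_series, n_series), dtype='float64', order='F')
x[:, 0] = ts                           # writes one contiguous block

assert x.flags['F_CONTIGUOUS']
# a single column of an F-ordered array is itself contiguous:
assert x[:, 0].flags['C_CONTIGUOUS']
```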
Notes on the [Python benchmark](benchmark_python/):
- while running it attached to the P-core (`cpu0`), the P-core was running under a constant load of 100% (almost completely user-time) and at a fixed frequency of 3.8 GHz, where the theoretical max would be 5.2 GHz
- while running it attached to the E-core (`cpu10`), the E-core was running under a constant load of 100% (almost completely user-time) and at a fixed frequency of 2.5 GHz, where the theoretical max would be 3.9 GHz
- ... ➔ the CPU does not "starve" because it scales its speed down to match the memory throughput? Or am I misinterpreting this? A problem that at first sight should be perfectly memory-bound becomes CPU-bound, or actually exactly balanced? ;-)
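The per-core numbers above require pinning the benchmark to one core; on Linux this can be done from Python itself (a sketch: `pin_to_core` is a made-up helper, and the core ids 0/10 refer only to the machine described in these notes).

```python
import os

def pin_to_core(core):
    """Pin the current process to a single CPU core (Linux only).

    Returns the resulting affinity set, or None where unsupported.
    """
    if not hasattr(os, 'sched_setaffinity'):
        return None                    # e.g. macOS: no affinity API in os
    os.sched_setaffinity(0, {core})    # 0 = the current process
    return os.sched_getaffinity(0)

# e.g. pin_to_core(0) before the benchmark to stay on cpu0 (a P-core here)
```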
## Excerpts of parallel Python
- [The dangers and joys of automatic parallelization](parallel) (like in numpy linear algebra routines) and the use of clusters/schedulers (but also on your laptop)
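One concrete danger worth demonstrating: numpy's BLAS backend may parallelize `@`/`dot` behind your back and oversubscribe a shared cluster node. The environment variables below are honored by the common BLAS builds (OpenMP-based ones, OpenBLAS, MKL), but which one applies depends on how numpy was built, so treat this as a sketch; they must be set before numpy is first imported.

```python
import os

# set *before* importing numpy, otherwise the thread pools already exist
os.environ['OMP_NUM_THREADS'] = '1'       # OpenMP-based BLAS builds
os.environ['OPENBLAS_NUM_THREADS'] = '1'  # OpenBLAS
os.environ['MKL_NUM_THREADS'] = '1'       # Intel MKL

import numpy as np

a = np.random.rand(500, 500)
b = a @ a                                 # now runs on a single thread
```

On a cluster, the scheduler (e.g. via a job script) is usually the right place to export these variables rather than the Python code itself.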
## Concluding remarks
- how is all of this relevant for the users of a computing cluster?

puzzle.ipynb (new file, 168 lines)
{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"execution": {
"iopub.execute_input": "2024-03-04T09:40:28.904Z",
"iopub.status.busy": "2024-03-04T09:40:28.896Z",
"iopub.status.idle": "2024-03-04T09:40:28.978Z",
"shell.execute_reply": "2024-03-04T09:40:28.967Z"
}
},
"outputs": [],
"source": [
"import numpy as np"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"execution": {
"iopub.execute_input": "2024-03-04T10:02:39.062Z",
"iopub.status.busy": "2024-03-04T10:02:39.057Z",
"iopub.status.idle": "2024-03-04T10:02:39.068Z",
"shell.execute_reply": "2024-03-04T10:02:39.071Z"
}
},
"outputs": [],
"source": [
"# create a collection of time series\n",
"# in real life, this data comes from an experiment/simulation\n",
"n_series = 30\n",
"len_one_series = 2**21 # ➔ 2^21 ≈ 2 million (8 bytes x 2^21 / 2^20 = 16 MB)\n",
"time_series = []\n",
"for idx in range(n_series):\n",
" time_series.append(np.zeros((len_one_series,1), dtype='float64'))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"execution": {
"iopub.execute_input": "2024-03-04T10:02:41.027Z",
"iopub.status.busy": "2024-03-04T10:02:41.020Z",
"iopub.status.idle": "2024-03-04T10:02:41.036Z",
"shell.execute_reply": "2024-03-04T10:02:41.040Z"
}
},
"outputs": [],
"source": [
"# how much memory does one time series need?\n",
"ts_size = time_series[0].nbytes/2**20 # -> 2^20 bytes is 1 MB\n",
"print('Size of one time series (MB):', ts_size)\n",
"print('Size of collection (MB):', n_series*ts_size)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"execution": {
"iopub.execute_input": "2024-03-04T10:06:08.461Z",
"iopub.status.busy": "2024-03-04T10:06:08.459Z",
"iopub.status.idle": "2024-03-04T10:06:08.466Z",
"shell.execute_reply": "2024-03-04T10:06:08.468Z"
}
},
"outputs": [],
"source": [
"# let's load the collection in one big array\n",
"def load_data_row(x, time_series):\n",
" \"\"\"Store one time series per row\"\"\"\n",
" for row, ts in enumerate(time_series):\n",
" x[row,:] = ts\n",
" return x"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"execution": {
"iopub.execute_input": "2024-03-04T10:06:10.280Z",
"iopub.status.busy": "2024-03-04T10:06:10.277Z",
"iopub.status.idle": "2024-03-04T10:06:10.284Z",
"shell.execute_reply": "2024-03-04T10:06:10.288Z"
}
},
"outputs": [],
"source": [
"# let's load the collection in one big array\n",
"def load_data_column(x, time_series):\n",
" \"\"\"Store one time series per column\"\"\"\n",
" for column, ts in enumerate(time_series):\n",
" x[:,column] = ts\n",
" return x"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"execution": {
"iopub.execute_input": "2024-03-04T10:06:14.959Z",
"iopub.status.busy": "2024-03-04T10:06:14.956Z",
"iopub.status.idle": "2024-03-04T10:06:17.437Z",
"shell.execute_reply": "2024-03-04T10:06:17.443Z"
}
},
"outputs": [],
"source": [
"x = np.zeros((n_series, len_one_series, 1), dtype='float64')\n",
"%timeit load_data_row(x, time_series)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"execution": {
"iopub.execute_input": "2024-03-04T10:06:20.056Z",
"iopub.status.busy": "2024-03-04T10:06:20.053Z",
"iopub.status.idle": "2024-03-04T10:06:21.695Z",
"shell.execute_reply": "2024-03-04T10:06:21.700Z"
}
},
"outputs": [],
"source": [
"x = np.zeros((len_one_series, n_series, 1), dtype='float64')\n",
"%timeit load_data_column(x, time_series)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.4"
},
"nteract": {
"version": "0.28.0"
}
},
"nbformat": 4,
"nbformat_minor": 2
}