**Important**: these are instructor notes, remove this file before showing the materials to the students. The notes can be added after the lecture, of course.

## Introduction
- [Puzzle](/puzzle.ipynb) ➔ [read-only rendered notebook](https://nbviewer.org/urls/git.aspp.school/ASPP/2024-heraklion-comp-arch/raw/branch/main/puzzle.ipynb)
- Question: how come swapping the dimensions in a for-loop causes such a huge slowdown? (a sketch of the mechanism follows this list)
- Let students play around with the notebook and try to find the "bug"
- A more thorough [benchmark](benchmark_python/)
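A minimal sketch of the mechanism (not the notebook's actual snippet; sizes are illustrative): on a C-ordered array a row is one contiguous block, while a column view jumps a whole row ahead for every element, so the swapped loop order thrashes the caches.

```python
import timeit
import numpy as np

a = np.random.rand(5_000, 5_000)   # C order: each row is one contiguous block

def by_rows():
    # the inner reduction walks memory linearly, cache line after cache line
    return sum(a[i].sum() for i in range(a.shape[0]))

def by_cols():
    # each column view has a 40 kB stride: almost every access is a cache miss
    return sum(a[:, j].sum() for j in range(a.shape[1]))

print(timeit.timeit(by_rows, number=3))   # fast
print(timeit.timeit(by_cols, number=3))   # much slower on typical hardware
```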
## A digression in CPU architecture and the memory hierarchy

- Go to [A Primer in CPU architecture](architecture/)
- Measure size and timings for the memory hierarchy on my machine with a low-level [C benchmark](benchmark_low_level/) (a rough Python stand-in is sketched below)
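A rough Python stand-in for the idea behind that C benchmark (a sketch; the cache sizes in the tuple are assumptions, and the C version is far less noisy): time random accesses inside working sets of growing size and watch the cost per access jump each time the set outgrows a cache level.

```python
import time
import numpy as np

rng = np.random.default_rng(0)

for size_kb in (16, 256, 4096, 65_536):       # roughly L1 / L2 / L3 / RAM (assumed)
    n = size_kb * 1024 // 8                   # number of float64 elements
    data = np.zeros(n)
    idx = rng.integers(0, n, size=1_000_000)  # random addresses in the working set
    t0 = time.perf_counter()
    data[idx] += 1.0                          # scattered reads and writes
    dt = time.perf_counter() - t0
    print(f"{size_kb:>6} kB working set: {dt / len(idx) * 1e9:6.1f} ns per access")
```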
## Analog programming

Two exercises to activate the body and the mind.

The common goal of both exercises is to sort a deck of tarot cards by value.
### First experiment: human sorting

Setup:

- 1 volunteer to keep the time spent sorting
- each person picks up a tarot card from the randomly shuffled deck on the table
- moving around and speaking is allowed until the tarot cards are displayed sorted on the table
### Second experiment: machine sorting

Setup:

- 2 volunteers to keep the time:
  - one volunteer keeps the time spent *programming*
  - one volunteer keeps the time spent *executing* the program
- 2 volunteers to be the *programmers*:
  - can use the whiteboard
  - can and should speak and think out loud and ask for help
- 2 volunteers to be two CPUs:
  - only understand the following instructions (a sample program written with them is sketched after this list):
    - **fetch** a value from a memory address into register `N` ➔ returns `0` if succeeded else `1`
    - **push** the value from register `N` to a memory address ➔ returns `0` if succeeded else `1`
    - **compare** `var0` and `var1` ➔ returns `0` if `var0 ≥ var1` else `1`
- 4 volunteers to be CPU registers:
  - each register has a tag: `R1`, `R2`, `R3`, `R4`
  - a value fetched from memory is kept in short-term memory by the registers
  - the result value of an operation is stored in one register
- everyone else sits in their seats and represents RAM:
  - they own a *value*, i.e. they hold a tarot card
  - they have an address based on their seating order: 0th seat, 1st seat, 2nd seat, 3rd seat, 4th seat, etc…
  - when *fetched*, they walk to the corresponding register and hand in their *value* (card)
  - when *pushed*, they walk to the corresponding register and fetch their new *value* (card)
  - each RAM address comes and picks up a random tarot card as an initialization step
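If the programmers need a nudge, a bubble sort expressed purely in the three allowed instructions could look like the sketch below (the Python helpers just emulate what the register and RAM volunteers do; the card values are made up):

```python
# The seats are RAM, R1..R4 are the register volunteers; the program itself
# may only ever call fetch, push and compare.
ram = [7, 3, 21, 1, 14]                      # tarot card values, randomly dealt
regs = {"R1": None, "R2": None, "R3": None, "R4": None}

def fetch(addr, reg):                        # memory -> register
    if not 0 <= addr < len(ram):
        return 1
    regs[reg] = ram[addr]
    return 0

def push(reg, addr):                         # register -> memory
    if not 0 <= addr < len(ram):
        return 1
    ram[addr] = regs[reg]
    return 0

def compare(reg_a, reg_b):                   # 0 if reg_a >= reg_b else 1
    return 0 if regs[reg_a] >= regs[reg_b] else 1

# Bubble sort, ascending: keep swapping neighbouring cards that are out of
# order, using nothing but the instruction set above.
for _ in range(len(ram) - 1):
    for addr in range(len(ram) - 1):
        fetch(addr, "R1")
        fetch(addr + 1, "R2")
        if compare("R1", "R2") == 0:         # R1 >= R2 ➔ swap via the registers
            push("R2", addr)
            push("R1", addr + 1)

print(ram)                                   # [1, 3, 7, 14, 21]
```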
## Back to the Python benchmark (second try)

[…]
## Anatomy of a numpy array

- [memory layout of numpy arrays](numpy/) (see the strides example below)
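A few lines that make the layout tangible (illustrative values):

```python
import numpy as np

a = np.arange(12, dtype=np.int64).reshape(3, 4)   # C order by default
print(a.strides)          # (32, 8): next row is 32 bytes away, next column 8

b = np.asfortranarray(a)  # same values, column-major (Fortran) layout
print(b.strides)          # (8, 24): now the column is the contiguous axis

col = a[:, 1]             # a column of `a` is a strided view, not a copy
print(col.strides)        # (32,): one full row between consecutive elements
```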
## Back to the Python benchmark (third try)

- can we explain what is happening now? Yes, more or less ;-)
- quick fix for the [puzzle](/puzzle.ipynb): try and add `order='F'` in the "bad" snippet and see that it "fixes" the bug ➔ why? (a short demo follows this list)
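A short demo of what `order='F'` changes (a sketch, not the notebook's snippet): it makes columns rather than rows contiguous, so the column-wise "bad" loop suddenly walks memory linearly.

```python
import numpy as np

c = np.zeros((1_000, 1_000))              # default order='C': rows contiguous
f = np.zeros((1_000, 1_000), order='F')   # order='F': columns contiguous

print(c[:, 0].flags['C_CONTIGUOUS'])      # False: strided view, cache-hostile
print(f[:, 0].flags['C_CONTIGUOUS'])      # True: contiguous view, cache-friendly
```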
Notes on the [Python benchmark](benchmark_python/):

- while running it attached to the P-core (`cpu0`), the P-core was running under a constant load of 100% (almost completely user-time) and at a fixed frequency of 3.8 GHz, where the theoretical max would be 5.2 GHz
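For reproducing this: pinning can be done with `taskset` or, from Python on Linux, with the affinity call below (a sketch; which core ids are P- or E-cores is machine-specific):

```python
import os

os.sched_setaffinity(0, {0})     # pin this process (pid 0 = self) to cpu0
print(os.sched_getaffinity(0))   # {0}: the benchmark now runs only on that core
```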
➔ the CPU does not "starve" because it scales its speed down to match the memory throughput? Or am I misinterpreting this? This problem, which at first sight should be perfectly memory-bound, becomes CPU-bound, or actually, exactly balanced? From the [Intel documentation](https://lenovopress.lenovo.com/lp1836-tuning-uefi-settings-4th-gen-intel-xeon-scalable-processor):

> **Energy Efficient Turbo**
>
> When `Energy Efficient Turbo` is enabled, the CPU’s optimal turbo
> frequency will be tuned dynamically based on CPU utilization. The actual
> turbo frequency the CPU is set to is proportionally adjusted based on the
> duration of the turbo request. Memory usage of the OS is also monitored.
> If the OS is using memory heavily and the CPU core performance is limited
> by the available memory resources, the turbo frequency will be reduced
> until more memory load dissipates, and more memory resources become
> available. The power/performance bias setting also influences energy
> efficient turbo. `Energy Efficient Turbo` is best used when attempting to
> maximize power consumption over performance.
## Concluding remarks

- how is all of this relevant for the users of a computing cluster?