# Exercise 2a: multithreading with NumPy

Objective: investigate speed-up of numpy code with multiple threads.

```HINT``` Use `htop` in your terminal to track what the CPUs are doing.

## First

The script `heavy_computation.py` performs some matrix calculations with numpy.

You can change the number of threads that numpy uses for the calculation
using the `OMP_NUM_THREADS` environment variable like this:

```
OMP_NUM_THREADS=7 python heavy_computation.py
```

The script will also measure the time to run the calculation and will save
the timing results into the `timings/` folder as a `.txt` file.

**TASK**: Execute the script `heavy_computation.py`, varying the number of threads.
You will plot the resulting calculation times in the second part below.
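
One way to run the sweep is a small shell loop. This is only a sketch of the mechanics: each child process sees its own value of the variable, and the `sh -c 'echo …'` stand-in should be replaced with `python heavy_computation.py` for the real measurements.

```shell
# Sweep over a few thread counts; swap the echo stand-in for the real
# command: OMP_NUM_THREADS="$n" python heavy_computation.py
for n in 1 2 4 8; do
  OMP_NUM_THREADS="$n" sh -c 'echo "child sees OMP_NUM_THREADS=$OMP_NUM_THREADS"'
done
```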

**QUESTION**
> What happens if `OMP_NUM_THREADS` is not set? How many threads are there? Why?
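
While investigating, it can help to compare the thread activity you see in `htop` with what Python reports about your machine (a quick standalone check, not part of the exercise scripts):

```python
import os

# Number of CPU cores the OS reports; OpenMP-backed BLAS libraries
# typically size their default thread pool from a value like this.
print(os.cpu_count())
```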

## Second

In `plot.py`, we have given code that will load all of the timing data in `timings/`.

**TASK**: Add code to plot the execution duration vs. the number of threads.

Open a PR with your plotting code and post your plots in the conversation; don't upload binaries to the Git remote!

**OPTIONAL TASK**: Add code to calculate and plot the speed-up compared
to single-threaded execution. Include your code and plot in the PR.
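
The usual definition of speed-up is S(n) = T(1) / T(n), the single-threaded duration divided by the n-threaded one. A small sketch with hypothetical numbers:

```python
# Hypothetical durations in seconds, keyed by thread count;
# in the exercise these come from the files in timings/.
timings = {1: 12.0, 2: 6.5, 4: 3.8, 8: 3.5}

# Speed-up relative to the single-threaded run: S(n) = T(1) / T(n)
baseline = timings[1]
speedup = {n: baseline / t for n, t in sorted(timings.items())}
print(speedup)  # S(1) is 1.0 by construction
```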

**QUESTIONS**

> What does the result tell us about the optimum number of threads? Why?

> Does the script take the same time to run for your colleagues? Why?

## Optional tasks

Investigate the runtime variability. Systematically run multiple instances with the same number of threads by modifying `heavy_computation.py`.
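
The kind of repeat loop you might add can be sketched like this; it is standalone here, with a small matrix product standing in for whatever calculation `heavy_computation.py` actually performs:

```python
import statistics
import time

import numpy as np

# Stand-in workload; in the exercise this would be the matrix
# calculation already inside heavy_computation.py.
rng = np.random.default_rng(0)
a = rng.random((300, 300))

durations = []
for _ in range(5):  # several runs with identical settings
    start = time.perf_counter()
    a @ a
    durations.append(time.perf_counter() - start)

print(f"mean {statistics.mean(durations):.4f} s, "
      f"stdev {statistics.stdev(durations):.4f} s")
```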

How is the runtime affected when the problem becomes bigger? Is the optimum number of threads always the same?

How is the runtime affected when the memory is almost full? You can fill it up by creating a separate (unused) large numpy array.
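
One way to build such a ballast array: a float64 element takes 8 bytes, so the element count fixes the memory footprint. The size below is deliberately small for illustration; scale it up carefully on your own machine.

```python
import numpy as np

# Target footprint for this sketch: 64 MiB. np.ones() (unlike np.zeros())
# writes to every element, so the memory is actually committed.
n_bytes = 64 * 2**20
ballast = np.ones(n_bytes // 8)  # 8 bytes per float64 element

print(ballast.nbytes)  # 67108864
```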

How about running on battery vs. having your laptop plugged in?