# E. Scientific coding exercises

See the scientific coding material here.

## Modular code development

Use this script: `pi.py`
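
The contents of `pi.py` are not shown here, but the idea of modular development can be sketched as below: a hypothetical Monte Carlo estimate of π split into small, single-purpose functions. All names and the algorithm are assumptions for illustration, not the actual exercise code.

```python
import random


def sample_point():
    """Draw one uniformly random point in the unit square."""
    return random.random(), random.random()


def is_inside_circle(x, y):
    """Check whether (x, y) falls inside the unit quarter circle."""
    return x * x + y * y <= 1.0


def estimate_pi(n_samples):
    """Estimate pi as 4 * (fraction of samples inside the quarter circle)."""
    hits = sum(1 for _ in range(n_samples)
               if is_inside_circle(*sample_point()))
    return 4.0 * hits / n_samples


if __name__ == "__main__":
    print(estimate_pi(100_000))
```

Splitting the script this way makes each piece testable and reusable on its own, which is the point of the exercise.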

A+ presents the exercise submission form here.

## Software testing

Use these scripts: `test_fizzbuzz.py` and `fizzbuzz.py`.
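
The scripts themselves are not reproduced here, but a test module for a fizzbuzz function typically looks something like the sketch below. This is a hedged illustration with pytest-style test functions; the actual interface of `fizzbuzz.py` may differ.

```python
def fizzbuzz(n):
    """Return 'Fizz', 'Buzz', 'FizzBuzz', or the number itself as a string."""
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)


# pytest discovers and runs every function whose name starts with "test_".
def test_multiple_of_three():
    assert fizzbuzz(9) == "Fizz"


def test_multiple_of_five():
    assert fizzbuzz(10) == "Buzz"


def test_multiple_of_fifteen():
    assert fizzbuzz(15) == "FizzBuzz"


def test_plain_number():
    assert fizzbuzz(7) == "7"
```

Running `pytest test_fizzbuzz.py` would then execute each `test_*` function and report failures individually.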

A+ presents the exercise submission form here.

## Profiling

Use this script: `sorting.py`.

### Profiling exercises

In the script `sorting.py` you can find an implementation of the insertion sort algorithm and two different implementations of the quicksort algorithm.

Our goal in this exercise is to profile this script to get an idea of the running time and computational complexity of the algorithms.

#### Usage and profile

The `sorting.py` script is run as follows.

> **Note:** Running the script should take less than 20 seconds.

```
$ ./sorting.py
Now sorting using quicksort_lumoto
Now sorting using quicksort
Now sorting using insertion_sort
$
```

Profile the three sorting algorithms and answer the following questions.

##### Hint

There are numerous ways to answer such questions, but we encourage the use of Python's profiler, `cProfile`. You can import the `cProfile` module and take advantage of its `run` function. Please refer to Hands-on Scientific Computing for a tutorial and/or read through Python's documentation of the profilers for more information.

1. In terms of total running time, which of the three algorithms is best?
2. In the function `quicksort`, which takes more time: the `copy` function or the list comprehensions at lines 23 and 24? Write down "copy" or "list comprehension".
3. Which of the two quicksort implementations makes a constant number of calls to the `copy` function, that is, a number of calls that does not depend on the input array size?
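
The hint above can be sketched as follows. The `insertion_sort` body below is a generic stand-in rather than the code from `sorting.py`; only the profiling mechanics (`cProfile.Profile`, `pstats.Stats`) matter here.

```python
import cProfile
import pstats
import random


def insertion_sort(arr):
    """Generic insertion sort, standing in for a function from sorting.py."""
    arr = list(arr)
    for i in range(1, len(arr)):
        key = arr[i]
        j = i - 1
        while j >= 0 and arr[j] > key:
            arr[j + 1] = arr[j]
            j -= 1
        arr[j + 1] = key
    return arr


if __name__ == "__main__":
    data = [random.randint(0, 10_000) for _ in range(2_000)]

    # Profile the call explicitly so we can post-process the statistics.
    profiler = cProfile.Profile()
    profiler.enable()
    insertion_sort(data)
    profiler.disable()

    # Sort by cumulative time and show the five most expensive entries;
    # the ncalls column answers "how many times was copy called?" style questions.
    stats = pstats.Stats(profiler)
    stats.sort_stats("cumulative").print_stats(5)
```

Alternatively, `cProfile.run("insertion_sort(data)")` prints the same kind of report in one line. Comparing the per-function `ncalls` and `cumtime` columns across the three sorting functions answers the questions above.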

## Debugging

Use these scripts: `graph.edges`, `graph.py`, `main.py` and `test_graph.py`.
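
The debugging workflow itself is generic, and can be sketched as below: pause a running script with `breakpoint()` and inspect state interactively in `pdb`. The function and data here are invented for illustration and are unrelated to the actual `graph.py`.

```python
def shortest_edge(edges):
    """Return the (u, v, weight) edge with the smallest weight."""
    # Uncomment the next line to drop into pdb here and inspect `edges`
    # interactively (commands: p edges, n for next line, c to continue).
    # breakpoint()
    return min(edges, key=lambda edge: edge[2])


if __name__ == "__main__":
    edges = [("a", "b", 3), ("b", "c", 1), ("a", "c", 7)]
    print(shortest_edge(edges))
```

You can also start a whole script under the debugger without editing it, e.g. `python -m pdb main.py`.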

A+ presents the exercise submission form here.

## Software licensing

Decide whether each claim is true or false.

1. Publishing your code online in an open repository already makes it open-source software.
2. A DOI (digital object identifier) is the right tool to use when your material is used in multiple locations and you want to confirm that the objects are truly the same.
3. There is always a risk of somebody stealing your ideas when sharing your work, so it is better to keep important matters private.

## Documentation

Complete exercises 1 and 2 and return the URL of your finished repository. You can earn a maximum of 4 points for this exercise after the course staff has evaluated your submission. Please make your repository public on GitHub so that your reviewer is able to assess this exercise. Note: your submission will be marked unaccepted by default, so don't worry. When you apply for credits, we will manually evaluate this exercise and award the points.

## Reproducible research

Below are different scenarios; for each, figure out which area of reproducible research would help.

1. You have found an interesting project directory whose results you would like to reproduce. You can't find a description of how the work should be replicated. What is the project missing?
2. You have now managed to find a `requirements.txt` file. The dependencies in the file are not quite complete. What are they missing?
3. Finally, you have been able to reproduce the project and you have the results. You are searching the directory again because you don't understand the results. What is the directory missing?
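
For the second scenario, "complete" dependencies usually means versions pinned exactly, so the environment can be recreated later. A hypothetical `requirements.txt` might look like this (package names and version numbers are only illustrative):

```
# requirements.txt: pin exact, known-good versions
numpy==1.26.4
matplotlib==3.8.4
pandas==2.2.2
```

Without the `==version` pins, `pip install -r requirements.txt` may pull newer releases that behave differently from the ones the original results were produced with.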

## Feedback
