Hello Worlds
This chapter includes three exercises on one page. Questionnaires and submission forms can be placed anywhere, and a single page can contain as many of them as required. The automatic assessment of a submission is defined in the referenced YAML configuration file.
This is the configuration file docker-compose.yml:
version: '3'

volumes:
  data:
  rubyricdb:

services:
  grader:
    image: apluslms/run-mooc-grader:1.19
    volumes:
      - data:/data
      - /var/run/docker.sock:/var/run/docker.sock
      - /tmp/aplus:/tmp/aplus
      - .:/srv/courses/default:ro
    ports:
      - "8080:8080"
  plus:
    image: apluslms/run-aplus-front:1.19
    environment:
      APLUS_ENABLE_DJANGO_DEBUG_TOOLBAR: 'false'
    volumes:
      - data:/data
    ports:
      - "8000:8000"
    depends_on:
      - grader
      - acos
      - rubyric
  jutut:
    image: apluslms/run-mooc-jutut:2.5
    volumes:
      - data:/data
    ports:
      - "8082:8082"
  acos:
    image: apluslms/run-acos-server
    user: $USER_ID:$USER_GID
    ports:
      - "3000:3000"
    #volumes:
    #  - ./_data/acos/:/var/log/acos
  radar:
    image: apluslms/run-radar
    ports:
      - "8001:8001"
      # Django debug
      - "5678:5678"
      # Celery debug
      - "5679:5679"
      # Flower (celery dashboard)
      - "5555:5555"
    depends_on:
      - plus
      - grader
    volumes:
      - data:/data
  rubyric:
    image: apluslms/run-rubyric
    volumes:
      - data:/data
    ports:
      - "8091:8091"
    depends_on:
      - rubyricdb
  rubyricdb:
    image: postgres:13-bullseye
    restart: always
    environment:
      POSTGRES_USER: rubyric
      POSTGRES_PASSWORD: rubyric
      POSTGRES_DB: rubyric
    volumes:
      - rubyricdb:/var/lib/postgresql/data
Note: acos is an optional component used for interactive exercises.
(A+ presents the three exercise submission forms here, one for each exercise on this page.)
Be careful with the RST and YAML syntaxes. Both are easy to break with stray blank spaces and incorrect indentation.
Hello Python: grading steps in detail
The following description is targeted at people who develop automatically assessed exercises. Details of how A+ and mooc-grader work internally are omitted.

The sequence begins on the client side.
The student submits the file functions.py to A+.
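For orientation, a passing functions.py might look roughly like the following. This is a sketch, not the model solution: the grader checks that a hello function exists and inspects its return value, but the exact expected string here is an assumption (the authoritative checks are in exercises/hello_python/grader_tests.py).

# functions.py - a sketch of a submission for the Hello Python exercise.
def hello():
    # The expected return value is an assumption made for this sketch.
    return "Hello Python!"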
The following occurs on the server side.
A+ requests mooc-grader to grade a Hello Python exercise with the submitted file. Information about the student is not sent to the mooc-grader.
Mooc-grader knows from exercises/hello_python/config.yaml that it must start a new instance of Docker container apluslms/grade-python:3.9-4.4-4.0.
Docker reads the grade-python container image and starts it. A minimal Debian GNU/Linux operating system starts inside it. Because the aforementioned config.yaml has a container: mount: subsection, the directory exercises/hello_python in the aplus-manual directory is made visible inside the grade-python container at /exercise (read-only). The file functions.py is copied inside the grade-python container to /submission/user/functions.py.
The directory structure inside the grade-python container is now essentially this:
/
├── bin
|   ├── _prewrap
|   ├── bash
|   ├── capture
|   ├── err-to-out
|   ├── grade
|   ├── sh
├── exercise
|   ├── config.yaml
|   ├── grader_tests.py
|   ├── model.py
|   ├── run.sh
|   └── solution_wrong.py
├── feedback
|   └── grading-script-errors
├── gw
├── submission
|   └── user
|       └── functions.py
└── usr
    ├── bin
    |   └── python3
    └── local
        └── lib
            └── python3.9
                └── dist-packages
                    └── graderutils
                        └── graderunittest.py
The following steps are executed inside the grade-python container.
Docker runs the command /gw inside the grade-python container. This is the main grading script, the "grade wrapper", and it comes from the grading-base container. The script is run in Dash (/bin/sh). The wrapper script takes care of redirecting output (stdout and stderr; see standard streams on Wikipedia) from the programs it calls to the file /feedback/grading-script-errors. In addition, the gw script makes sure that the working directory is set correctly: gw changes the current working directory to /submission/user. Because the config.yaml file has the subsection container: cmd, the command /exercise/run.sh is given to the gw script as a command line parameter. In this case, gw executes the run.sh file. (A rough sketch of this behaviour follows below.)
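The real /gw is a shell script shipped in the grading-base image; the following Python sketch only illustrates the behaviour described above and is not the actual implementation:

import os
import subprocess

def grade_wrapper(cmd="/exercise/run.sh"):
    """Hypothetical sketch of what /gw is described to do."""
    os.chdir("/submission/user")  # gw sets the working directory
    with open("/feedback/grading-script-errors", "w") as log:
        # Run the command named in config.yaml (container: cmd) and
        # redirect its stdout and stderr to the grading-script-errors file.
        subprocess.run([cmd], stdout=log, stderr=log)
    return 0  # the grade wrapper always exits with code 0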
The first line of /exercise/run.sh is #!/bin/bash, therefore a Bash shell is invoked to interpret the script.

The run.sh line export PYTHONPATH=/submission/user is executed; it sets the environment variable PYTHONPATH.

The run.sh line capture python3 /exercise/grader_tests.py is executed. Because the Dockerfile of the container apluslms/grade-python has FROM apluslms/grading-base:$BASE_TAG, it is based on another container, apluslms/grading-base.

Bash finds the capture command at /bin/capture. It is another shell script coming from the grading-base container (contents of the script). The first line of /bin/capture is the hashbang #!/bin/sh, therefore a Dash shell is invoked to interpret the script. This script defines two variables:

out = /feedback/out
err = /feedback/err
The capture script sets up redirection of the output from the Python interpreter (command python3): the standard output stream goes to the file /feedback/out and the standard error stream to the file /feedback/err. Both streams must be saved, because when Python runs unit tests, it prints to both the standard output and standard error streams. Now the Python interpreter is called with the parameter /exercise/grader_tests.py. The current working directory is still /submission/user.
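The real capture script is a POSIX shell script from the grading-base image; as an illustration only, the redirection it performs could be sketched in Python like this:

import subprocess

def capture(command):
    """Illustrative sketch; the real /bin/capture is a shell script."""
    # Redirect the graded program's stdout to /feedback/out and its
    # stderr to /feedback/err, as described above.
    with open("/feedback/out", "w") as out, open("/feedback/err", "w") as err:
        return subprocess.run(command, stdout=out, stderr=err).returncode

# e.g. capture(["python3", "/exercise/grader_tests.py"])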
Inside grader_tests.py, the main-level script is executed:
if __name__ == '__main__':
    # Run tests from the test case and get result
    loader = unittest.defaultTestLoader
    suite = loader.loadTestsFromTestCase(TestHelloPython)
    runner = graderunittest.PointsTestRunner(verbosity=2)
    result = runner.run(suite)
    # Points are read from stdout and saved
    print("TotalPoints: {}\nMaxPoints: {}".format(result.points, result.max_points))
The Python unit testing framework is invoked. However, the test runner is set to graderunittest.PointsTestRunner. This comes from Python-grader-utils (graderutils), which is included in the apluslms/grade-python container. The class PointsTestRunner is in the file graderunittest.py; graderutils is installed inside the grade-python container at /usr/local/lib/python3.9/dist-packages/graderutils.

The graderutils module runs a PointsTestRunner instance. It uses the Python unit test framework to run the methods test_function(), test_import(), and test_return() from the class TestHelloPython in grader_tests.py. These methods are recognized as unit tests and run in alphabetical order, because their names begin with test. (Reference: Python unittest library.)

The Python unit testing framework prints a typical unit test output into the standard error stream. In the following snippet the solution was correct.
test_function (__main__.TestHelloPython)
Check hello function exists (1p) ... ok
test_import (__main__.TestHelloPython)
Import the functions module (1p) ... ok
test_return (__main__.TestHelloPython)
Check hello function return value (3p) ... ok

----------------------------------------------------------------------
Ran 3 tests in 0.001s

OK
Because test_function has the docstring """Check hello function exists""", graderutils shows this as the title of the test. The decorator @points(1) grants one point if the test passes. Similarly, test_import also yields one point if it passes, and test_return yields three points if it passes. (A sketch of such a test class is shown after the output below.) The points data is printed into the standard output stream:
TotalPoints: 5
MaxPoints: 5
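For illustration, a grader test class along these lines could look roughly as follows. This is a hedged sketch, not the actual grader_tests.py of this course: the import location of the points decorator and the expected return value "Hello Python!" are assumptions.

import unittest

from graderutils.graderunittest import points  # assumed location of the decorator


class TestHelloPython(unittest.TestCase):

    @points(1)
    def test_import(self):
        """Import the functions module"""
        import functions  # found via PYTHONPATH=/submission/user

    @points(1)
    def test_function(self):
        """Check hello function exists"""
        import functions
        self.assertTrue(hasattr(functions, "hello"))

    @points(3)
    def test_return(self):
        """Check hello function return value"""
        import functions
        # The expected string is an assumption made for this sketch.
        self.assertEqual(functions.hello(), "Hello Python!")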
The capture script redirects the output. This results in two files, as promised.

/feedback/err:
test_function (__main__.TestHelloPython)
Check hello function exists (1p) ... ok
test_import (__main__.TestHelloPython)
Import the functions module (1p) ... ok
test_return (__main__.TestHelloPython)
Check hello function return value (3p) ... ok

----------------------------------------------------------------------
Ran 3 tests in 0.001s

OK
/feedback/out:
TotalPoints: 5
MaxPoints: 5
Now capture exits. The execution continues in run.sh, which next calls err-to-out (in /bin/err-to-out; source here). This uses _prewrap to wrap the standard error output in HTML <pre> tags and append it after the standard output. Result:

/feedback/out:
TotalPoints: 5
MaxPoints: 5
<pre>
test_function (__main__.TestHelloPython)
Check hello function exists (1p) ... ok
test_import (__main__.TestHelloPython)
Import the functions module (1p) ... ok
test_return (__main__.TestHelloPython)
Check hello function return value (3p) ... ok

----------------------------------------------------------------------
Ran 3 tests in 0.001s

OK
</pre>
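The real err-to-out and _prewrap are shell scripts in the grading-base image; a rough Python equivalent of the concatenation described above would be:

def err_to_out(out_path="/feedback/out", err_path="/feedback/err"):
    """Sketch only; the real err-to-out and _prewrap are shell scripts."""
    with open(err_path) as err:
        wrapped = "<pre>\n" + err.read() + "</pre>\n"  # wrap stderr in <pre> tags
    with open(out_path, "a") as out:
        out.write(wrapped)  # append after the existing stdout content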
After the grading script has been executed, the grade wrapper script /gw executes the script grade (in /bin/grade). This parses the data in /feedback/out: first the points, then the text feedback. (A sketch of such parsing follows below.)

The exit code from the grade wrapper script /gw is stored in /feedback/grading-script-errors. The grade wrapper always exits with code 0.
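The real /bin/grade is a shell script in the grading-base image. As an illustration only, the parsing it is described to do, extracting the points and treating the rest as HTML feedback, could be sketched in Python like this:

import re

def parse_feedback_out(path="/feedback/out"):
    """Illustrative sketch; the real parsing is done by /bin/grade."""
    with open(path) as f:
        text = f.read()
    # First the points lines...
    total = int(re.search(r"TotalPoints:\s*(\d+)", text).group(1))
    maximum = int(re.search(r"MaxPoints:\s*(\d+)", text).group(1))
    # ...then everything after them is the text feedback shown to the student.
    feedback = text.split("MaxPoints:", 1)[1].split("\n", 1)[1]
    return total, maximum, feedback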
The rest is executed outside the grade-python container.
The container grade-python shuts down. Points and feedback are sent to mooc-grader.
Mooc-grader forwards the data to A+.
A+ records the points and feedback as a new exercise submission, which is unique to this particular user, course, exercise, and submission attempt. A+ scales the points according to the maximum points setting in the exercise's config.yaml (if the maximum score given by the grader differs from it).
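Assuming simple linear scaling (the exact rounding rules are not covered here), the computation amounts to something like:

def scale_points(grader_points, grader_max, exercise_max):
    # e.g. the grader reports 5/5 and the exercise is configured
    # for 10 points, so A+ records 10 points.
    return round(grader_points / grader_max * exercise_max)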
The rest is executed on the client side.

The JavaScript in the web browser gets a response from A+ that the grading is finished, and it then shows the points and the feedback. The feedback is interpreted as HTML, which is why it had to be wrapped in HTML <pre> tags.