How do you design a programming exercise that teaches a particular
subject? What do you need to know besides the A+ related technicalities?
This chapter offers practical tips on designing new programming exercises,
covering both pedagogical and software development aspects:
exercise instructions, the code template, the model solution, and grading.
This chapter is meant for course assistants and lecturers who develop programming
exercises. It presents generic knowledge on creating exercises in addition to the
A+ related technicalities.
Best practice, not an authoritative guide
The knowledge here is a compilation of best practices used in the years
2017-2021 on the courses Data Structures and Algorithms Y and Computing
Applications. It is not grounded in rigorous pedagogical or software engineering
research. Programming exercise design also depends on pedagogical techniques,
teacher preference, and the subject to be taught. Therefore, feel free to
apply the ideas below at your own discretion.
Developing a solution to a programming problem is creative work. Creating a
high-quality programming exercise is challenging for the same reason, among
others. Below is a short step-by-step guide, a sort of design and
development process, for programming exercises.
New programming exercises are needed yearly for two reasons. First, there are
new courses that need to teach new technologies. Second, it is known that if
the same programming exercise is used on the same course for many consecutive
years, it becomes a tempting target for plagiarism.
1. Plan the learning objectives.
2. Plan the context: what data, problem, and background story is there?
3. Plan the cognitive tasks: what is the student supposed to do (or figure out)?
4. Write instructions for the exercise.
5. Write a model solution.
6. Write a code template based on the model solution.
7. Write the student's unit tests to guide the student in the development work.
8. Write grader unit tests: preferably one test per cognitive task. Use
   pseudorandom test data to prevent gaming.
9. Check that the exercise instructions correspond to the code template.
10. Test the model solution against the student's and grader unit tests.
11. Write incorrect solutions: each one designed to fail one particular unit test.
12. Test with the incorrect solutions.
13. Let someone else do the exercise as if they were a student on your course.
14. Repeat some steps (iterate) when needed.
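The grader-test and incorrect-solution steps above can be sketched as follows. The exercise task and all function names here are hypothetical, chosen only to illustrate the pairing of one grader test with one deliberately broken solution:

```python
import unittest

# Hypothetical exercise task: return the second-to-last element
# of the input list in sorted order (the "second largest" value).
def model_solution(numbers):
    # Correct: sort a copy of the list and pick the second-to-last element.
    return sorted(numbers)[-2]

def incorrect_solution(numbers):
    # Deliberately wrong: ignores the first element, so it fails
    # whenever the maximum happens to appear first in the input.
    return max(numbers[1:])

class GraderTest(unittest.TestCase):
    # One grader test targeting one cognitive task.
    def test_second_largest(self):
        self.assertEqual(model_solution([3, 1, 4, 1, 5]), 4)
```

Running the grader test against `incorrect_solution` should fail, which confirms that the test actually measures the cognitive task it was written for.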
With automatic assessment and feedback, some students try to
game the system: they try to get
easy exercise points without learning.
One gaming behaviour is to guess the input-output pairs of the grader unit tests.
This is easy if the test input of each grader unit test is constant, and even
easier if the test input and output are shown exactly. Then one can write a
program that gives a hardcoded, correct answer to the expected question without
actually solving the problem.
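For illustration (the function name and values here are hypothetical): if the grader always asks the same question and the feedback reveals the expected output, a student can pass the test without implementing anything.

```python
# Suppose the grader always evaluates sum_of_squares(10) and its
# feedback reveals that the expected answer is 385.
def sum_of_squares(n):
    # Gamed "solution": hardcode the one answer the grader checks.
    if n == 10:
        return 385
    raise NotImplementedError

# The constant grader test passes even though nothing was solved:
assert sum_of_squares(10) == 385
```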
Another gaming behaviour relies on the verbose feedback that the intelligent
tutoring system gives. If the automatic feedback is too helpful, the student
can implement their solution gradually based on the exact hints they are given,
and receive a reasonable correctness score without actually learning.
To prevent this kind of gaming, one can use pseudorandom unit test data with
varying size and contents. Depending on the exercise, just generating some
pseudorandom data with the standard library of the required programming language
is enough. In case you need something fancier, there are libraries for test data
generation, such as Hypothesis
for Python. However, beware that the APIs of such libraries may change, which
then complicates the maintenance of the automatic grader for your programming
exercise.
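A minimal sketch using only Python's standard library (all function names here are hypothetical): generate input of varying size and contents, and compare the student's answer against a trusted reference implementation instead of a hardcoded expected value.

```python
import random

def make_test_data(seed):
    # Vary both the size and the contents of the input between test rounds.
    # Seeding makes a failing test reproducible in the feedback.
    rng = random.Random(seed)
    size = rng.randint(1, 100)
    return [rng.randint(-1000, 1000) for _ in range(size)]

def grade(student_fn, reference_fn, rounds=10):
    # Run several rounds with different pseudorandom inputs; a hardcoded
    # answer cannot match the reference on all of them.
    for seed in range(rounds):
        data = make_test_data(seed)
        if student_fn(data) != reference_fn(data):
            return False
    return True
```

Seeding with the round number keeps the data pseudorandom from the student's point of view while letting the grader reproduce any failure exactly.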
Nifty assignments: a collection of
well-designed programming assignments presented yearly at the SIGCSE conference.