
Question Engine 2:Overview


This page outlines how I think the question engine should work in Moodle 2.0 or 2.1.

Previous section: Rationale


Normalise the database structure

At the moment, some parts of the question_sessions and question_states tables and their relations are not normalised. Normalisation is a technical term for describing database designs. Basically, if the tables are normalised, then each 'fact' is stored in exactly one place. A normalised database is much less likely to get into an inconsistent state. Changing to a normalised structure should therefore increase robustness.

In addition, I wish to change the tables so that the responses received from the student are stored in a much more 'raw' form. That will mean that the responses can be saved much earlier in the sequence of processing, which will again increase robustness. It will also allow the sequence of responses from the student to be replayed more easily, making it easier to continue after a server crash, to regrade, and to write automatic test scripts.
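
To illustrate what 'raw' storage might mean, here is a minimal sketch in PHP. All the names here are hypothetical, not the final schema or API: the idea is simply that each step of an attempt records the submitted data verbatim, as name/value pairs, before any question-type processing runs.

  <?php
  // Hypothetical sketch: one step of a question attempt, storing the
  // student's submitted data verbatim as name/value pairs.
  class question_attempt_step_sketch {
      /** @var array Raw submitted data, e.g. array('answer' => '42'). */
      private $data;
      /** @var int Timestamp recording when this step happened. */
      private $timecreated;

      public function __construct(array $submitteddata) {
          $this->data = $submitteddata;
          $this->timecreated = time();
      }

      // Because the data is stored raw, it can be written to the
      // database immediately, and replayed later for regrading,
      // crash recovery, or automated tests.
      public function get_all_data() {
          return $this->data;
      }
  }

  // Usage: the data comes straight from the submitted form.
  $step = new question_attempt_step_sketch(array('answer' => '42'));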

The detailed design describes the new database structure.

(+++Robustness, +Correctness)

New concept: Question behaviours

At the moment, the various sequences of states that a question can move through in response to student input are hard-coded. They are controlled by a combination of the compare_responses and grade_responses methods of the question types, and tangled logic in the question_process_responses function. This makes it difficult to add new ways of interacting with questions, for example certainty based marking. It also makes the current code tricky to keep working.

I would like to separate out the control of how a question moves through different states into what I will call question behaviours. Currently, Moodle has three or four of these:

Deferred feedback

This is how the quiz currently works when adaptive mode is off. The student enters a response to each question, then does Submit all and finish at the end of their attempt. Only then do they get feedback and/or grades on each question, depending on the review settings.

Adaptive

In this mode there is a separate submit button beside each question, so the student can submit each question individually during the attempt, and if they are wrong, try to improve their answer, although for a reduced grade.

Currently, the way adaptive mode works from the student's point of view is not very good. I propose to replace it with a new Interactive mode. See below.

Manually graded

Essay questions need to be manually graded by the teacher, so you cannot really use them in adaptive mode (although currently there is nothing in the Quiz to stop you, which leads to confusing results). And it is not quite the same as Deferred feedback mode, because the student must wait for the teacher to grade their response after clicking submit all and finish.

Each attempt builds on last

This is very similar to the deferred feedback model, except that in subsequent attempts, each question does not start blank, but instead starts with the student's last response from the previous attempt.

Currently in Moodle there is nothing to stop you trying to combine Each attempt builds on last with Adaptive mode, although that combination does not make any sense to me. I think it simplifies things to treat this as a separate mode, although, of course, it will share code with Deferred feedback mode.


There are also some new modes that I propose to add, either immediately, or shortly after the main part of the work:

Interactive

This will replace adaptive mode. This is the model that has been used successfully for several years in the OU's OpenMark system. The OU has also modified Moodle to work like this, but because of the way the quiz core currently works, I was not happy just merging the OU changes into Moodle core. In a sense, this whole document grew out of my thoughts about how to implement the OU changes in Moodle core properly.

In the existing adaptive mode, after the student has clicked Submit, the question shows the feedback for the last answer the student submitted, while also letting the student change their answer. This can lead to weird results. For example, create a numerical question 2 + 2 = 4. Attempt it. Enter 4. Click Submit. Enter 5. Click Save. You will see a page that seems to say that your answer is 5 and it is correct!

In Interactive mode, the question is either in a state for the student to enter their answer, with a Submit button; or it is showing feedback for the student's previous attempt, with a Try again button to get back to the first state.

The other difference is that students are only allowed a limited number of tries at each question (typically three). When they submit their third try, or when they submit a correct answer, that question is finished and they must go on to the next one.

(This is quite difficult to explain. Try this example and you should see how it works.)
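
The core of the try-counting rule can also be stated in a few lines of code. This is just a sketch; the function name and signature are made up for illustration.

  <?php
  // Hypothetical sketch of the Interactive mode rule described above:
  // a question finishes when the answer is correct, or when the
  // student has used up their allowed tries (typically three).
  function interactive_is_finished($iscorrect, $triesused, $maxtries = 3) {
      return $iscorrect || $triesused >= $maxtries;
  }

  var_dump(interactive_is_finished(false, 2)); // bool(false): may try again.
  var_dump(interactive_is_finished(true, 2));  // bool(true): correct answer.
  var_dump(interactive_is_finished(false, 3)); // bool(true): tries used up.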

Immediate feedback

This would be like a cross between Deferred feedback and Interactive. Each question has a submit button beside it, like in interactive mode, so the student can get the feedback immediately, while they still remember their thought processes from answering the question. However, unlike the interactive model, there is no Try again button. You only get one chance at each question.

This is a necessary prerequisite for implementing MDL-11047, which is fairly frequently asked for in the quiz forum.

Certainty based marking with deferred feedback

This takes any question that can use the deferred feedback model, and adds three radio buttons to the UI (high / medium / low) for the student to indicate how certain they are that their answer is correct. If they say they are more certain, they get more marks if they are right, but lose marks if they are wrong. This encourages students to reflect on their level of knowledge.

Certainty based marking with immediate feedback

This is like immediate feedback mode with the certainty based marking feature.

Delegate to a remote system

The Opaque question type from contrib, which is used to run both OpenMark and STACK questions inside a Moodle quiz, was very difficult to implement within the current code. Since the remote system controls the flow of the question, it makes sense to use a custom behaviour for this question type, and that will be possible in the new code.

Information item

Used for descriptions. There is no grade. The item gets marked as complete after the student has seen it once, and it displays its general feedback after the attempt is over.

Missing

This is like the Missing question type. This behaviour is used in the following situation:

  1. Admin installs some behaviour.
  2. Student attempts a quiz using that behaviour.
  3. Admin uninstalls that behaviour.
  4. Teacher tries to review that student's attempt.

The Missing question behaviour does the best job it can to allow the review of the attempt with the missing behaviour, while displaying an on-screen warning.


So, what exactly does the behaviour control? Given the current state of the question, it determines the range of actions that are possible. For example, in adaptive mode, there is a submit button next to each question; in non-adaptive mode, the only option is to enter an answer and have it saved, until the Submit all and finish button is pressed. That is, there will be a class in the PHP code for each behaviour, and it will have methods that replace the old question_extract_responses, question_process_responses, question_process_comment and save_question_state functions in the current code.
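
To make this concrete, here is a rough sketch of what such classes might look like. Every name here is hypothetical; the real API would be settled during implementation.

  <?php
  // Hypothetical sketch of a behaviour base class. Each behaviour
  // decides which actions are possible given the current state.
  abstract class question_behaviour_sketch {
      /** Process one action from the student (save, submit, ...). */
      abstract public function process_action(array $submitteddata);

      /** Is the question finished, or can the student still interact? */
      abstract public function is_finished();
  }

  class deferred_feedback_behaviour_sketch extends question_behaviour_sketch {
      private $finished = false;
      private $lastresponse = array();

      public function process_action(array $submitteddata) {
          // In deferred feedback mode, responses are only saved;
          // grading happens once, at Submit all and finish.
          if (!empty($submitteddata['-finish'])) {
              $this->finished = true; // Grading would be triggered here.
          } else {
              $this->lastresponse = $submitteddata;
          }
      }

      public function is_finished() {
          return $this->finished;
      }
  }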

(+++Richness, ++Correctness, +Robustness)

Brief digression: who controls the behaviour, the quiz or the question?

There is one design decision that took me a long time to resolve, and I want to mention it here.

The question is: Is the behaviour a property of the question, or the quiz?

At the moment, there is a setting in the quiz that lets you change all the questions in the quiz from adaptive mode to non-adaptive mode at the flick of a switch. Or, at least, it allows you to re-use the same questions in both adaptive and non-adaptive mode. This suggests that the behaviour is a property of the quiz.

On the other hand, suppose that you wanted to introduce the feature that, in adaptive mode, the student gets different amounts of feedback after each try at getting the question right. For example, the first time they get it wrong they only get a brief hint; if they are wrong the second time, they get a more detailed comment, and so on. To do this, you need extra data (the various hints) to define an adaptive question, data that is irrelevant to a non-adaptive question. Also, certain question types (e.g. Opaque, from contrib) have to, of necessity, ignore the adaptive/non-adaptive setting. And I suggested above that manually graded question types like Essay should really be considered to have a separate behaviour. This suggests that the behaviour is a property of the individual question. (Although usability considerations suggest that a single quiz should probably be constructed from questions with similar behaviours.)

I eventually concluded that both answers are right. That is, it is a good idea for the quiz to have a setting like adaptive/non-adaptive that sets out the teacher's intention of how each question in this quiz should behave. However, the exact choice of behaviour to use is up to the different question types. When a quiz attempt is started, each question type is asked something like "A quiz attempt is starting using 'adaptive mode'. Exactly which behaviour should be used for questions of this type in this attempt?" This gives questions that can only work in a certain way (for example essay questions) a chance to override the quiz setting.
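
In code, that negotiation might look something like the following sketch. The method name and the behaviour names are made up for illustration.

  <?php
  // Hypothetical sketch: the quiz asks each question type which
  // behaviour to actually use, passing the quiz's preferred setting.
  class default_qtype_sketch {
      /** Most question types just go along with the quiz's preference. */
      public function choose_behaviour($preferredbehaviour) {
          return $preferredbehaviour;
      }
  }

  class essay_qtype_sketch extends default_qtype_sketch {
      /** Essays can only be manually graded, whatever the quiz says. */
      public function choose_behaviour($preferredbehaviour) {
          return 'manualgraded';
      }
  }

  $qtype = new essay_qtype_sketch();
  echo $qtype->choose_behaviour('adaptive'); // Prints 'manualgraded'.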

Clarifying question states

Although there is a database table called question_states, the column there that stores the type of state is called event, and the values stored in it are active verbs like open, save and grade. The question_process_responses function is written in terms of these actions too. One nasty part is that, during the processing, the event type can be changed several times before it is finally stored in the database as a state.

I would like to clearly separate the concept of the current state that a question is in, and the various actions that lead to a change of state. The actions will be handled by the behaviours, and the state will be stored in the database as a state.

The list of available states will also be changed slightly, to match the following diagram. This came from thinking about what information it is important to display in the quiz navigation block.

[Image: Question state diagram.png]
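
The exact set of states will be settled during implementation, but to illustrate the separation of states (nouns) from actions (verbs), they might be expressed as constants like these. The list below is purely illustrative.

  <?php
  // Hypothetical sketch: states describe where a question currently
  // is, quite separately from the actions (save, submit, grade, ...)
  // that move it between states.
  class question_state_sketch {
      const NOT_STARTED   = 'notstarted';
      const IN_PROGRESS   = 'inprogress';   // Student is still answering.
      const COMPLETE      = 'complete';     // Answered, not yet graded.
      const NEEDS_GRADING = 'needsgrading'; // Waiting for manual grading.
      const GRADED        = 'graded';       // A grade has been assigned.
  }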

(+Correctness, +Richness).


What are the parts of a question?

When we come to think of outputting a question on-screen, we can now see that there are several things that affect what it will look like:

  1. The question type (multichoice, shortanswer, essay, ...)
  2. The behaviour (deferred feedback, interactive, CBM, ...)
  3. What state the question is in (has the student answered yet, has it been graded, ...)
  4. The options from the particular activity (whether any marks are available for this question, are students allowed to see their grades, how many decimal places to use to display grades, does this user have permission to make a manual comment ...)

Also, we want different question types and behaviours to have the freedom to display whatever they like to the students; but we also want all the questions in a quiz to look and behave consistently, so that the experience is not confusing.

In the sketch below, I try to summarise the bits that may be present in a typical question. If all questions can have these bits in this order, then that is probably enough consistency. The flexibility comes from allowing the question type or behaviour to put whatever it likes inside those bits.

We can divide what you see when looking at a question into three main parts:

  1. Information (meta-data): Which question number this is, a summary of what state it is currently in, what grade you have got for it (or how many marks are available, if any), whether you have flagged it. This has a grey background in the sketch below.
  2. The question and the response to it. This has a blue background in the sketch.
  3. The outcome from submitting that response. This has a yellow background.

[Image: Parts of a question2.png]

The full list of options that control which of those bits are visible at any time is:

correctness
(hidden / visible) whether the student is told whether their answer was correct, partially correct or incorrect in the status summary under the question number, or is instead told something vague like 'Finished'.
marks
(hidden / max only / actual mark and max / actual mark and max with explanation) whether the student can see the number of marks available, and how many marks they got, and how much detail of the marking to display.
marks d.p.
(0 .. 7) how many decimal places marks are displayed to.
flags
(hidden / visible / visible and editable) this is the feature that lets students bookmark or flag a question in an attempt for later reference.
read-only
Whether the question just shows the response that was already entered, or whether it gives the user controls to enter or change their response.
specific feedback
(hidden / visible) feedback that relates to the particular response the student entered.
general feedback
(hidden / visible) whether the general feedback (same for all students) is visible.
correct response
(hidden / visible) whether the automatically generated message or other indication of the correct answer is visible.
manual comment
(hidden / visible / visible and editable) whether the comment manually added by the teacher is visible and editable by the current user.
response history
(hidden / visible) whether the list of steps the student went through to answer the question is displayed.

These options are initially set by whatever it is that is using the question (for example, the quiz will initialise them from the quiz settings). Then these options are modified by the behaviour. For example, it should ensure that no feedback is displayed until the student has actually submitted an answer, and that after they have submitted their final answer, the question only appears in read-only mode.
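
As a sketch, the options listed above might be gathered into one class, which the behaviour is then given a chance to adjust. All names below are hypothetical.

  <?php
  // Hypothetical sketch: the display options gathered into one object.
  class question_display_options_sketch {
      public $marks = true;
      public $specificfeedback = true;
      public $generalfeedback = true;
      public $correctresponse = true;
      public $readonly = false;
  }

  // The behaviour adjusts the options for the question's current state,
  // e.g. hiding all feedback until the student has submitted an answer.
  function behaviour_adjust_options_sketch(
          question_display_options_sketch $options, $hassubmitted) {
      if (!$hassubmitted) {
          $options->specificfeedback = false;
          $options->generalfeedback = false;
          $options->correctresponse = false;
      } else {
          // Final answer submitted: show the question read-only.
          $options->readonly = true;
      }
  }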

Reorganise the code

There will, of course, be a new class for each behaviour, all inheriting from the same base class.

At the moment, question type classes have multiple responsibilities:

  1. There is information about the question type, and loading and saving, importing and exporting, backing up and restoring questions of that type.
  2. Then there is the processing of student responses for a particular instance of that question type.
  3. And then there is displaying the question in its various states.

(Fortunately, 4. Displaying an editing form, is already in a separate class.)

Moodle 2.0 introduces the renderer concept. Introducing qtype renderers will move the output code (3) into separate classes. This change to the question engine is the appropriate time to introduce qtype renderers.
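
A qtype renderer might be sketched like this. The class and method names are made up for illustration; the real ones would follow whatever conventions the Moodle 2.0 renderer work establishes.

  <?php
  // Hypothetical sketch of a question type renderer: all the output
  // code moves here, leaving the qtype class free of HTML generation.
  abstract class qtype_renderer_sketch {
      /** Render the question text and the controls the student uses
       *  to enter a response, respecting the given display options. */
      abstract public function formulation_and_controls($question, $options);

      /** Render feedback specific to the student's response. */
      public function specific_feedback($question, $options) {
          return ''; // Many question types have nothing extra to show.
      }
  }

  class qtype_truefalse_renderer_sketch extends qtype_renderer_sketch {
      public function formulation_and_controls($question, $options) {
          $disabled = empty($options->readonly) ? '' : ' disabled="disabled"';
          return '<p>' . $question->questiontext . '</p>'
                  . '<label><input type="radio" name="answer" value="1"'
                  . $disabled . '> True</label> '
                  . '<label><input type="radio" name="answer" value="0"'
                  . $disabled . '> False</label>';
      }
  }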

I will also introduce a new set of classes to store the state of a question within a quiz attempt. That is basically the combination of the $question and $state objects that are passed around at the moment, but with a specific subclass for each question type. This moves all the processing logic (2) into a separate class. That leaves the question type class itself just responsible for (1).

Of course the behaviour classes and the question state classes need to work together, and this is clearly a situation for the strategy pattern. However, I have not yet decided whether the question type is the context and the behaviour is the strategy, or vice versa. I will have to see which comes out better when I do the implementation.

Finally, there will be a new class in questionlib.php for managing the set of questions in an attempt. Already, in the Moodle 2.0 quiz developments, I have changed the quiz code to create a quiz_attempt class. This keeps track of all the questions in the quiz attempt. However, this really needs to be split into two bits. The job of tracking a set of questions and what state they are in is exactly the job of the question engine. Therefore we should have a question_set_attempt class that does that job. That class will take over the job of a number of functions in questionlib.php, for example question_load_states. Then the quiz_attempt class can just focus on the quiz-specific things, and use the question_set_attempt class. This new class should make it easier to use questions in other modules.

It should be possible to organise the code so that only the question_set_attempt class has to load or save data to or from the database. It will then pass that data on to wherever it is needed. That should be good for efficiency.
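
Here is a usage sketch of that arrangement. The class names question_set_attempt and quiz_attempt come from the text above; everything else is hypothetical.

  <?php
  // Hypothetical sketch: one class owns the set of questions in an
  // attempt, and is the only place that touches the database.
  class question_set_attempt_sketch {
      /** @var array Question state data, keyed by question id. */
      private $questions = array();

      /** Load everything for this attempt, ideally in one batch of
       *  database queries. Replaces e.g. question_load_states. */
      public function load($attemptid) {
          // ... database reads would go here ...
      }

      public function get_question($questionid) {
          return $this->questions[$questionid];
      }

      /** Save any changed state, again all in one place. */
      public function save() {
          // ... database writes would go here ...
      }
  }

  // The quiz_attempt class focuses on quiz-specific concerns and
  // simply delegates the tracking of questions.
  class quiz_attempt_sketch {
      private $questionset;

      public function __construct($attemptid) {
          $this->questionset = new question_set_attempt_sketch();
          $this->questionset->load($attemptid);
      }
  }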

(+Robustness, +Efficiency)


Simplified API for question types

In the summary of how the quiz currently works, I said, "The student enters an answer to each question which is saved. Then when they submit the quiz attempt, all the questions are graded." In fact, that was a lie. Whenever a response is saved, the grade_responses method of the question type is called, even if the state is only being saved. This is confusing, to say the least, and very bad for performance in the case of a question type like JUnit (in contrib), which takes code submitted by the student and compiles it in order to grade it.

So some of the API will only change to the extent that certain functions will in future only be called when one would expect them to be. I think this can be done in a backwards-compatible way.

Another change will be that, at the moment, question types have to implement tricky load/save_question_state methods that, basically, have to unserialise/serialise some of the state data in a custom way, so it can be stored in the answer column. This is silly design, and leads to extra, unnecessary database queries. The changes to the database structure will eliminate the need for these methods.
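
For illustration, a question type might then simply hand its extra state data to the engine as plain name/value pairs, and the engine stores them with the rest of the attempt data. This is a sketch; the class and method names are made up.

  <?php
  // Hypothetical sketch: a question type exposes its extra state as
  // plain name/value pairs, instead of implementing custom
  // save_question_state/load_question_state serialisation.
  class qtype_match_state_sketch {
      private $stemorder = array(3, 1, 2);
      private $choiceorder = array(2, 3, 1);

      /** The engine stores these pairs directly, with no custom
       *  serialisation and no extra database queries. */
      public function get_state_data() {
          return array(
              '_stemorder'   => implode(',', $this->stemorder),
              '_choiceorder' => implode(',', $this->choiceorder),
          );
      }
  }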

Hopefully the above changes to how the code is organised will make it much easier to write new question types.

(+Richness, +Correctness, +Efficiency)


Introduce more automated testing

Since the question_set_attempt class keeps track of all the data that is needed when processing questions, it should be very easy to write automatic tests (unit tests) for all the other parts of the question engine. That should greatly help in eliminating bugs.

I am intending to take a test-driven approach to implementing this proposal.
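
For example, because processing will not require database access, a test can construct a question, replay a sequence of responses, and check the outcome directly. The sketch below shows the flavour; the function names are made up, and a real test would use Moodle's unit test framework.

  <?php
  // Hypothetical sketch of the kind of unit test this design enables:
  // replay a student's saved responses in memory, with no database.
  function grade_shortanswer_sketch($rightanswer, array $steps) {
      // In deferred feedback mode, only the last saved response counts.
      $last = end($steps);
      return ($last['answer'] === $rightanswer) ? 1.0 : 0.0;
  }

  function test_replay_sketch() {
      $steps = array(
          array('answer' => 'cat'),  // First save.
          array('answer' => 'frog'), // The student changed their mind.
      );
      assert(grade_shortanswer_sketch('frog', $steps) === 1.0);
      echo "test_replay_sketch passed\n";
  }

  test_replay_sketch();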

(+++Correctness, ++Robustness)


See also

The next section, Design, gives the detailed design of the above solution.