Question Engine 2: Design
- Goals
- Rationale
- How it currently works
- New system overview
- Detailed design
- Question Engine 2 Developer docs:
- Implementation plan
- Testing
This page explains how I think the question engine should work in Moodle 2.0 or 2.1.
Previous section: Overview
Note: This page is a work-in-progress. Feedback and suggested improvements are welcome. Please join the discussion on moodle.org or use the page comments.
Within the quiz/question engine system we have to deal with three different types of grade/score/mark/whatever.
Any activity, say a quiz, will eventually calculate a grade that is passed to the gradebook. For example, the quiz grade may be out of 100.
Within the quiz, there will be a number of questions. Let us suppose there are 6 questions each worth 3 marks, and 1 question worth 2 marks. Therefore, the quiz is out of 20 marks, and the student's mark is multiplied by 5 to get a grade out of 100 that is sent to the gradebook.
Finally, at the lowest level of the quiz, grades are stored on a scale of 0..1. We call this a fraction. So the student's mark for a question is their fraction, multiplied by the maxmark for the question. We do this so that it is easy to do things like change how many marks a question is worth within a quiz.
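The relationship between the three quantities can be sketched in a few lines of PHP. This is illustrative only, using the numbers from the example above; none of these variable names are part of the proposed API.

```php
<?php
// Sketch of the fraction -> mark -> grade scaling described above.
// (Illustrative variable names only, not proposed API.)

// A question attempt stores the student's result as a fraction (0..1)
// together with the maxmark the question is worth in this quiz.
$fraction = 0.5;   // stored in question_attempt_steps.fraction
$maxmark  = 3.0;   // stored in question_attempts.maxmark

// The student's mark for the question is fraction * maxmark.
$mark = $fraction * $maxmark; // 1.5 marks out of 3

// The example quiz is out of 20 marks but graded out of 100, so the total
// mark is scaled by 100/20 = 5 to get the grade sent to the gradebook.
$sumofmarks = 12.5;                   // total marks over all questions
$quizgrade  = $sumofmarks * 100 / 20; // 62.5
```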
Therefore, we have three words, fraction, mark and grade, that should be used consistently throughout the question engine code. Fractions are rarely shown in the user interface*, while I believe the quiz UI already uses the marks/grades terminology consistently.
When used as a verb - to assign a grade/mark/fraction - as in grade_response, regrade or manual_grade, the word grade is always used.
* the one place where fraction is displayed in the UI is on the question editing screens, where you set the 'grade' for a particular answer on a scale of 0 to 100%.
New database structure
question_usages
This is a rename of question_attempts.
Column | Type | Comment |
---|---|---|
id | INT(10) NOT NULL AUTO INCREMENT | Unique id used to link attempt question data to other things, for example a quiz_attempt. |
contextid | INT(10) NOT NULL | The context that this usage is associated with. For example the quiz context. |
owningplugin | VARCHAR(255) NOT NULL | The plugin this usage belongs to, e.g. 'mod_quiz', 'block_questionoftheday', 'filter_embedquestion'. |
preferredbehaviour | VARCHAR(255) NOT NULL | The archetypal behaviour that should be used for new questions added to this usage. |
question_attempts
This replaces question_sessions. Question sessions was not a great name, because 'session' has other connotations in the context of web applications. I think it is right to use the question_attempts name here, because this table has one row for each attempt at each question.
There is now no requirement for (attemptid, questionid) to be unique.
Column | Type | Comment |
---|---|---|
id | INT(10) NOT NULL AUTO INCREMENT | Unique id. Linked to from question_attempt_steps.questionattemptid. |
questionusageid | INT(10) NOT NULL REFERENCES question_usages.id | Which attempt this data belongs to. |
slot | INT(10) NOT NULL | As questions are added to a usage, they are numbered sequentially. |
behaviour | VARCHAR(32) NOT NULL | The question behaviour that is managing this question attempt. |
questionid | INT(10) NOT NULL REFERENCES question.id | Which question this is the attempt data for. |
maxmark | NUMBER(12,7) NOT NULL | The grade this question is marked out of in this attempt. |
minfraction | NUMBER(12,7) NOT NULL DEFAULT 0 | Some questions can award negative marks. This indicates the most negative mark that can be awarded, on the fraction scale where the maximum positive mark is 1. |
flagged | INT(1) NOT NULL DEFAULT 0 | Whether this question has been flagged within the attempt. |
questionsummary | TEXT | If this question uses randomisation, it should set this field to summarise which random variant the student actually saw. This is a human-readable textual summary of the question which might, for example, be used in a report. |
rightanswer | TEXT | This is a human-readable textual summary of the right answer to this question. Might be used, for example, on the quiz preview to help people who are testing the question, or in reports. |
responsesummary | TEXT | This is a textual summary of the student's response (basically what you would expect to see in the Quiz responses report). |
timemodified | INT(10) NOT NULL | The time this record was last changed. |
- Need to store maxmark because it could come from anywhere, (e.g. quiz_question_instances, question.defaultgrade, ...). We need it available at various times (e.g. when displaying a question) so it is better to store it explicitly here.
question_attempt_steps
Same purpose as the old question_states table, but simplified.
Column | Type | Comment |
---|---|---|
id | INT(10) NOT NULL AUTO INCREMENT | Unique id. Linked to from question_attempt_step_data.attemptstepid. |
questionattemptid | INT(10) NOT NULL REFERENCES question_attempts.id | Which question attempt this data belongs to. |
sequencenumber | INT(4) NOT NULL | Numbers the steps in a question attempt sequentially. |
state | INT(4) NOT NULL | The type of state this is. One of the constants defined by the question_state class. |
fraction | NUMBER(12,7) | The fraction the student has earned for this question, on a scale of 0..1. Needs to be multiplied by question_attempts.maxmark to get the actual mark. |
timecreated | INT(10) NOT NULL | Time-stamp of the event that led to this step. |
userid | INT(10) NOT NULL | The user who created this step. For steps created during the attempt, this would be the student id. For a step adding a comment or a manual grade, this would be the teacher id. |
- We store the mark unscaled (as a fraction between 0.0 and 1.0) because that makes regrading easier. (You might think that you could adjust scaled marks later, and that is almost true, but if maxmark used to be 0, then you cannot change it to anything else.)
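The point about regrading can be shown with a tiny arithmetic sketch (illustrative only, not real API): because the stored fraction never changes, changing a question's maxmark never requires touching the stored attempt data.

```php
<?php
// Why storing the unscaled fraction makes changing maxmark trivial:
// the stored data never changes; only the multiplier does.
$storedfraction = 0.5; // question_attempt_steps.fraction, never rescaled

$maxmark = 3.0;
$mark = $storedfraction * $maxmark; // 1.5 marks

// The teacher later decides the question should be worth 2 marks instead.
$maxmark = 2.0;
$mark = $storedfraction * $maxmark; // 1.0 marks, no regrade of stored data

// Storing scaled marks instead would break when maxmark was originally 0:
// every stored mark would be 0, so the fraction could not be recovered.
```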
question_attempt_step_data
This stores the data submitted by the student (a list of name => value pairs) that led to the step attemptstepid. This replaces the old question_states.answer field.
There will be a convention for the names:
- Ordinary names like 'myvariable' should be used for submitted data belonging to the question type.
- Names prefixed with a colon, like ':myaction', should be used for data belonging to the question behaviour.
- Names prefixed with an underscore can be used for internal things. For example, the random question might store '_realquestionid' attached to the 'open' state, or a question type that does a lot of expensive processing might store a '_cachedresult' value, so the expensive calculation does not need to be repeated when reviewing the attempt.
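The naming convention could be applied by a small helper like the following. This is a hypothetical function, not part of the proposed API; it just shows how the prefixes partition the name => value pairs stored in question_attempt_step_data.

```php
<?php
// Hypothetical helper (not proposed API) that sorts submitted step data
// into the three categories implied by the naming convention.
function classify_step_data(array $data) {
    $sorted = array('qtype' => array(), 'behaviour' => array(), 'internal' => array());
    foreach ($data as $name => $value) {
        if ($name[0] === ':') {
            $sorted['behaviour'][$name] = $value; // e.g. ':myaction'
        } else if ($name[0] === '_') {
            $sorted['internal'][$name] = $value;  // e.g. '_realquestionid'
        } else {
            $sorted['qtype'][$name] = $value;     // e.g. 'myvariable'
        }
    }
    return $sorted;
}

$example = classify_step_data(array(
    'answer' => 'true',
    ':submit' => 1,
    '_realquestionid' => 42,
));
// $example['qtype'] holds only the question-type data, and so on.
```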
Note that the old question_states.answer field used to save a lot of repetitive information from one state to the next, for example the real questionid for random questions, and the order of the choices for multiple-choice questions with shuffle-answers on. In future, this sort of repetitive information will not be saved. Instead, during question processing, the question types will be given access to the full step history.
Column | Type | Comment |
---|---|---|
id | INT(10) NOT NULL AUTO INCREMENT | Unique id. Not used much. |
attemptstepid | INT(10) NOT NULL REFERENCES question_attempt_steps.id | Which step the submission of this data led to. |
name | VARCHAR(32) NOT NULL | The name of the parameter received from the student. |
value | TEXT | The value of the parameter. |
Upgrading the database
TODO
New list of states that a question may be in
The aim here is to have as few states as necessary. What is necessary? Enough states to make it clear what is going on, for example in the quiz navigation, although that is only one case to consider.
- Incomplete
- This is the state that questions start in. They stay in this state as long as the student still needs to give this question attention. In deferred feedback (non-adaptive) mode, that is until the student has entered an answer. (For a short-answer question, any answer in the input box moves you out of this state; for a matching question, you only move out of this state when you have answered all the sub-questions.) In adaptive mode, the question stays in this state until either you have got it right, or you have run out of tries.
- In this state, the student can enter or change their answer.
- Complete
- This state is for questions where the student has done enough, but the attempt is still open, so they could change their answer if they wanted to. For example, this happens in deferred feedback mode when the student has entered a complete answer, but before they click submit all and finish. A Description is also in this state after the student has seen it.
- In this state, the student can enter or change their answer.
- Graded(Correct/PartiallyCorrect/Incorrect)
- For computer-graded questions, once the student can no longer interact with the question, it goes to one of the sub-states of the graded state.
- Finished
- Questions that do not have a grade, for example Descriptions, go into this state after the attempt is over.
- GaveUp
- This state is used for questions where it is impossible to assign a grade because the student clicked submit all and finish while the question was still in the incomplete state. However, giving up does not necessarily happen then; for example, we may choose to grade an incomplete matching question if the student has completed at least one sub-question.
- ManuallyGraded(Correct/PartiallyCorrect/Incorrect)
- Commented
- GaveUpCommented
- These three states correspond to the previous three states after the teacher has added a comment and/or manually graded.
API for modules using the question engine
Here is some proposed code from an integration test method. It creates an attempt containing one true/false question and walks through a student getting it right, and then a teacher overriding the grade.
public function test_delayed_feedback_truefalse() {
    // Create a true-false question with correct answer true.
    $tf = $this->make_a_truefalse_question();
    $displayoptions = new question_display_options();

    // Start a delayed feedback attempt and add the question to it.
    $tf->maxmark = 2;
    $quba = question_engine::make_questions_usage_by_activity('unit_test');
    $quba->set_preferred_behaviour('delayedfeedback');
    $qnumber = $quba->add_question($tf);
    // $qnumber is different from $tf->id since the same question may be
    // used twice in the same attempt.

    // Verify.
    $this->assertEqual($qnumber, 1);
    $this->assertEqual($quba->question_count(), 1);
    $this->assertEqual($quba->get_question_state($qnumber), question_state::NOT_STARTED);

    // Begin the attempt. Creates an initial state for each question.
    $quba->start_all_questions();

    // Output the question in the initial state.
    $html = $quba->render_question($qnumber, $displayoptions);

    // Verify.
    $this->assertEqual($quba->get_question_state($qnumber), question_state::INCOMPLETE);
    $this->assertNull($quba->get_question_grade($qnumber));
    $this->assertPattern('/' . preg_quote($tf->questiontext) . '/', $html);

    // Simulate some data submitted by the student.
    $prefix = $quba->get_field_prefix($qnumber);
    $answername = $prefix . 'true';
    $getdata = array(
        $answername => 1,
        'irrelevant' => 'should be ignored',
    );
    $submitteddata = $quba->extract_responses($qnumber, $getdata);

    // Verify.
    $this->assertEqual(array('true' => 1), $submitteddata);

    // Process the data extracted for this question.
    $quba->process_action($qnumber, $submitteddata);
    $html = $quba->render_question($qnumber, $displayoptions);

    // Verify.
    $this->assertEqual($quba->get_question_state($qnumber), question_state::COMPLETE);
    $this->assertNull($quba->get_question_grade($qnumber));
    $this->assert(new ContainsTagWithAttributes('input',
            array('name' => $answername, 'value' => 1)), $html);
    $this->assertNoPattern('/class=\"correctness/', $html);

    // Finish the attempt.
    $quba->finish_all_questions();
    $html = $quba->render_question($qnumber, $displayoptions);

    // Verify.
    $this->assertEqual($quba->get_question_state($qnumber), question_state::GRADED_CORRECT);
    $this->assertEqual($quba->get_question_grade($qnumber), 2);
    $this->assertPattern(
            '/' . preg_quote(get_string('correct', 'question')) . '/',
            $html);

    // Process a manual comment.
    $quba->manual_grade($qnumber, 1, 'Not good enough!');
    $html = $quba->render_question($qnumber, $displayoptions);

    // Verify.
    $this->assertEqual($quba->get_question_state($qnumber), question_state::MANUALLY_GRADED_PARTCORRECT);
    $this->assertEqual($quba->get_question_grade($qnumber), 1);
    $this->assertPattern('/' . preg_quote('Not good enough!') . '/', $html);
}
Note that this code does not interact with the database at all. Data is only stored to or loaded from the database if you call the $quba->load_... or $quba->save_... methods.
New classes
question_engine
This is a static factory class that provides an entry point to all the other question engine classes.
question_state
An enumeration that defines constants for the various states a question can be in, with some helper methods:
abstract class question_state {
    const NOT_STARTED = -1;
    const UNPROCESSED = 0;
    const INCOMPLETE = 1;
    const COMPLETE = 2;
    const NEEDS_GRADING = 16;
    const FINISHED = 17;
    const GAVE_UP = 18;
    const GRADED_INCORRECT = 24;
    const GRADED_PARTCORRECT = 25;
    const GRADED_CORRECT = 26;
    const FINISHED_COMMENTED = 49;
    const GAVE_UP_COMMENTED = 50;
    const MANUALLY_GRADED_INCORRECT = 56;
    const MANUALLY_GRADED_PARTCORRECT = 57;
    const MANUALLY_GRADED_CORRECT = 58;

    public static function is_active($state) { ... }
    public static function is_finished($state) { ... }
    // ...
}
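One plausible reading of the numbering scheme is that the helper methods can work on ranges: values below 16 are 'active' (the student can still interact with the question), values of 16 and above are 'finished'. The following self-contained sketch implements that guess with a subset of the constants; it is an assumption about how the helpers might work, not the final implementation.

```php
<?php
// Sketch only: a guessed implementation of the helpers, exploiting the
// apparent numbering scheme of the constants (active < 16 <= finished).
abstract class question_state {
    const NOT_STARTED = -1;
    const INCOMPLETE = 1;
    const COMPLETE = 2;
    const NEEDS_GRADING = 16;
    const FINISHED = 17;
    const GRADED_CORRECT = 26;

    public static function is_active($state) {
        // The question has been started but the student can still interact.
        return $state >= self::INCOMPLETE && $state < self::NEEDS_GRADING;
    }

    public static function is_finished($state) {
        // The student can no longer interact with the question.
        return $state >= self::NEEDS_GRADING;
    }
}
```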
question_display_options
This class contains all the options for what may, or may not, be visible when a question is rendered.
class question_display_options {
    public $flags = QUESTION_FLAGSSHOWN;
    public $readonly = false;
    public $feedback = false;
    // ...
}
question_definition
This class encapsulates the question definition. This used to be passed round in $question stdClass objects. Now we have a real class.
There will be subclasses like
- question_truefalse
- question_multichoice
- ...
I think some behaviour (e.g. grade_responses, get_renderer) will be in this class.
question_usage_by_activity
Related to the question_usages table in the DB.
This is the main class that activity modules will use. For example, there might be a question_usage_by_activity for a quiz attempt or a lesson attempt.
There are methods to add questions to the attempt, start and finish the attempt, and submit data to a particular question.
question_attempt
Related to the question_attempts table in the DB.
Stores all the information about the student's attempt at one particular question as part of a question_usage_by_activity.
question_attempt_step
Related to the question_attempt_steps and question_attempt_step_data tables in the DB.
A question_attempt comprises a sequence of steps. Each step has an associative array of submission data, that is, principally the data that was submitted in the HTTP request that created the new step.
Each step also has a state. That is, one of the question_state constants, and optionally a grade.
There are two helper classes question_attempt_step_iterator and question_attempt_reverse_step_iterator which let you write code like
foreach ($qa->get_iterator() as $stepindex => $step) {
    // Do something with each step of the question attempt in order.
}
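The iterator idea can be shown with a minimal self-contained sketch. The stub classes and method names below are illustrative stand-ins for the real question_attempt and question_attempt_step classes, not the proposed API; the point is just that a question_attempt holds an ordered list of steps which can be walked forwards or backwards.

```php
<?php
// Minimal sketch of the step iterator idea (stub classes, not real API).
class question_attempt_step_stub {
    public $state;
    public function __construct($state) {
        $this->state = $state;
    }
}

class question_attempt_stub {
    private $steps = array();

    public function add_step(question_attempt_step_stub $step) {
        $this->steps[] = $step;
    }

    // Walk the steps from first to last.
    public function get_iterator() {
        return new ArrayIterator($this->steps);
    }

    // Walk the steps from last to first (cf. the reverse step iterator).
    public function get_reverse_iterator() {
        return new ArrayIterator(array_reverse($this->steps));
    }
}

$qa = new question_attempt_stub();
$qa->add_step(new question_attempt_step_stub('incomplete'));
$qa->add_step(new question_attempt_step_stub('complete'));

$visited = array();
foreach ($qa->get_iterator() as $stepindex => $step) {
    $visited[] = $step->state;
}
// $visited now lists the step states in chronological order.
```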
question_behaviour
Will have subclasses like
- qbehaviour_delayedfeedback
- qbehaviour_interactive
- ...
Question behaviours control exactly what happens as a question_attempt is started, a submission is processed, the attempt is finished, and so on.
core_question_renderer
Renderers are responsible for generating the HTML to display a question in a particular state. The core_question_renderer is responsible for all the bits that do not depend on the particular question type or behaviour.
qtype_renderer
Base class for
- qtype_truefalse_renderer
- qtype_multichoice_renderer - and possibly also qtype_multichoice_horizontal_renderer
- ...
Responsible for generating the bits of HTML that depend on the question type. For example the question text, and input elements.
qbehaviour_renderer
Base class for
- qbehaviour_delayedfeedback_renderer
- qbehaviour_interactive_renderer
- ...
Responsible for generating the bits of HTML that depend on the behaviour. For example the submit button in adaptive mode.
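How the three renderer layers might divide the work can be sketched as follows. All class names, methods and CSS classes here are illustrative stand-ins, not confirmed API; the point is only the composition: the core renderer provides the outer structure that is the same for every question, and plugs in the qtype part (question text, inputs) and the behaviour part (e.g. the submit button).

```php
<?php
// Sketch of the renderer split (stub classes, not real API).
class qtype_renderer_stub {
    // The question-type-specific part: question text and input elements.
    public function formulation($questiontext) {
        return '<div class="qtext">' . $questiontext . '</div>';
    }
}

class qbehaviour_renderer_stub {
    // The behaviour-specific part, e.g. a submit button in adaptive mode.
    public function controls() {
        return '<input type="submit" value="Check">';
    }
}

class core_question_renderer_stub {
    // The core renderer composes the other two inside the common structure.
    public function question(qtype_renderer_stub $qtout,
            qbehaviour_renderer_stub $bout, $questiontext) {
        return '<div class="que">' . $qtout->formulation($questiontext)
                . $bout->controls() . '</div>';
    }
}

$core = new core_question_renderer_stub();
$html = $core->question(new qtype_renderer_stub(),
        new qbehaviour_renderer_stub(), 'Is the sky blue?');
// $html now contains the question text from the qtype renderer and the
// submit button from the behaviour renderer, wrapped by the core renderer.
```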
Changes to the question type API
Can this be backwards compatible? It is looking like it will be better to break backwards compatibility - or at least to introduce new API methods. It may prove possible to keep old question types mostly working by providing implementations of the new API in terms of the old API methods.
Proposed robustness and performance testing system
A major change to the question engine should really only be contemplated in combination with the introduction of a test harness that makes it easy to run correctness, performance and reliability tests.
One advantage of the way data will be stored in the new system is that everything originally submitted by the user will be stored in the database in a format very close to the one in which it was originally received by the web server. Therefore, it should be easy to write a script that replays saved quiz attempts. This is the basis of a test harness. I will create such a test script as part of this work.
Huge class diagram
This diagram shows all the classes in the question code (not including specific plugins), with most of the core of my new proposal implemented.
See also
In the next section, Overview_of_the_Moodle_question_engine summarises the new system. It is intended to be the developer documentation for the new system once it is finished.
- Back to Question Engine 2