__TOC__


The Question Engine is the part of the question code in Moodle that deals with what happens when a question is attempted by a student, or later reviewed by the teacher or the student themselves, or included in reports. Roughly speaking this is half the question code, the other part being the Question Bank, which allows people to create, edit, organise, import, export and share questions. The Question Engine is the part that has to perform when you have a large class of students taking a quiz simultaneously.

''Note: the content of this page has since been moved to [[Question Engine 2]] and split into more manageable chunks.''
 
[[Image:Sydney_harbour.jpg|frame|An irrelevant but pretty picture of Sydney, taken from the hotel room where most of this was written.]]This document is a by-product of three years working on the Moodle quiz and question code. During that time I have fixed hundreds of bugs and feature requests. Some of the bugs with the Question Engine have required careful fixing in order to avoid regressions, and there are other issues that are hard to do anything about at the moment.
 
As well as bug fixes, I have also overseen major developments in the quiz/question system: for example, Jamie's changes to the Question Bank, Ollie's new quiz editing interface, and Mahmoud's new quiz navigation, which I added to core Moodle while refactoring the quiz attempt code.
 
Over the three years, I have also followed the Moodle quiz forum and seen what it is that people struggle with, complain about, want to change (and also what they like!). For example, having coached some people writing new question types, I realise that creating a new question type is more difficult than it needs to be.
 
The changes proposed here fix parts of the question engine code that I have been dissatisfied with since about the first time I looked at it. There are several reasons why I have not addressed them before now.
 
# To fix them properly, we must change how the data is stored in some of the key question tables. In some sites, these tables contain millions of rows, and the processing required on update is non-trivial and is different for different question types. This is not something to undertake lightly.
# It took me about two and a half years to work out this solution. Some parts of this proposal were clear from the start, but the exact notion of question interaction models, and how they would relate to question types, only crystallised recently in my mind. Because of point 1, I did not want to change anything until I was sure what I was proposing was right, not just in general, but in all the details.
# The mundane reason that there have been plenty of other things to work on, and I have not had time to address this until now.
# These are not critical problems. There have been no major changes to the question engine since the 1.6 release, so clearly there are no fundamental flaws in the current system. It is just not as good as it could be.
 
As this is a major change, I will spend some time justifying it, before describing the change itself. My estimate is that doing what I propose here is a couple of months' solid work.
 
==Goals of an online assessment system==
 
Before going further, I think it is worth being clear about what properties we require of a question engine. In my view, we want:
 
# Correctness
# Robustness
# Richness
# Efficiency
 
Let me explain in more detail, and summarise how I think Moodle currently performs.
 
===1. Correctness===
 
This says that, assuming nothing untoward happens, given certain inputs (question definitions, student responses, ...); and an understanding (specification) of what is supposed to happen; then the quiz module will always produce the right outputs (feedback, grades, ...). This is most important, because without this, you can't rely on anything.
 
I think Moodle does quite well in this regard, mainly due to many committed users who report bugs as they are encountered, and a smaller group of developers who then fix those bugs.
 
===2. Robustness===
 
The previous section started, "assuming nothing untoward happens." Robustness is about what happens if something does go wrong. What if the server crashes in the middle of a test? What if the connection to the database is lost in the middle of processing one student's submission? What if the PHP process runs out of memory in the middle of a re-grade? What if an impatient user clicks submit repeatedly or closes their browser window? And so on.
 
In some of these cases, it is inevitable that some data will be lost. But robustness says that no matter what happens, the student's quiz attempt must be left in a consistent state (presumably either just before, or just after the last submission) and no more than the last submission's worth of data is lost.
 
I think that this is currently a weakness for Moodle. If the server goes down at the end of a timed test, bad things happen. See the list of bugs below.
 
===3. Richness===
 
This is about supporting a rich range of educational experiences. One of Martin's 'five laws' is that we learn when creating something for another person to see. An online assessment system can sometimes take the place of the other person. The question is, how effective a substitute can it be in how many situations?
 
Moodle does quite well here, with adaptive mode, plug-in-able question types and so on. However, we could do better. At the moment, writing a new question type is harder than it needs to be. Also, the interaction model (like adaptive or non-adaptive mode) is hard-coded into the question engine, which makes it harder to support new interactions like certainty-based marking.
 
===4. Efficiency===
 
Given all the above, how many simultaneous users can be supported with a particular amount of hardware?
 
Moodle is neither flagrantly inefficient, nor as efficient as it could be. We currently lack easy-to-perform performance measurements. We should establish some, so we can estimate current hardware requirements, and observe the effect of other developments on efficiency.
 
 
==Why things need to change==
 
I have just summarised how well Moodle meets my four goals. I will now present a different picture, by listing bugs and feature requests in the Moodle tracker that I think can best be resolved by the changes I propose.
 
===Correctness issues===
 
{| class="nicetable"
! Key !! Votes !! Watchers !! Summary
|-
| MDL-3030 || 13 || 7 || attempts should be auto-closed
|-
| MDL-3936 || 2 || 1 || Essay Question used in score before grading
|-
| MDL-9303 || 2 || 5 || General Feedback not shown for essay questions on review screen
|-
| MDL-9327 || 4 || 3 || Text in Essay type questions is not saved in second or more attempts  when the teacher has not graded previous attempts
|-
| MDL-13289 || 0 || 0 || Essay question with adaptive mode not show feedback
|-
| MDL-17681 || 0 || 0 || Manually assigned grades cannot easily by re-scaled when the marks for a question is changed
|}
 
===Robustness issues===
 
{| class="nicetable"
! Key !! Votes !! Watchers !! Summary
|-
| MDL-11852 || 0 || 1 || Quiz does not score properly
|-
| MDL-12344 || 0 || 0 || Student takes quiz, upon completion of quiz receives the following error: Must Specify course id, shorname, or idnumber
|-
| MDL-12665 || 0 || 0 || quiz attempts sometimes show zero despite correct answers
|-
| MDL-13073 || 0 || 0 || Regrading (really) messes up grades
|-
| MDL-13246 || 0 || 1 || Quiz does not save score. Shows 0/20
|-
| MDL-14873 || 0 || 2 || quiz answers submit freezes (occationally)
|-
| MDL-15305 || 1 || 0 || Quiz doesn't work after submit all and finish button is selected - get a "No quiz with id=0" or "No attempt with id=0" error message
|-
| MDL-16451 || 0 || 1 || Quiz scores 0 with secure window; Grades do not transfer
|-
| MDL-16490 || 0 || 1 || timed quiz generates bad 2nd attempt
|-
| MDL-16663 || 0 || 1 || errors and missing work
|}
 
(Many of these are intermittent and only affect some people. Sometimes bugs like this turn out to be the result of a corrupt table in MySQL. Still, we really should not be getting problems like this with a system that is sometimes relied on as much as the quiz is.)
 
===Richness issues===
 
{| class="nicetable"
! Key !! Votes !! Watchers !! Summary
|-
| MDL-1647 || 4 || 6 || Allowing negative marks for questions
|-
| MDL-4860 || 0 || 0 || Questiontype giving hints
|-
| MDL-12821 || 0 || 1 || Essay quiz question redundant submit button
|-
| MDL-17894 || 0 || 0 || Support certainty/Confidence based marking
|-
| MDL-17895 || 0 || 0 || Allow question types to validate responses before grading
|-
| MDL-17896 || 0 || 0 || Question types like Opaque should not require nasty hacks in question_process_responses
|}
 
===Issues I don't propose to fix with this change===
 
... but which would become easier to fix as a result:
 
{| class="nicetable"
! Key !! Votes !! Watchers !! Summary
|-
| MDL-3452 || 0 || 0 || Use saved answers when marking late submissions
|-
| MDL-4309 || 0 || 3 || Ability for late quizzes (with penalty)
|-
| MDL-15596 || 0 || 0 || Allow a question to be used more than once in an attempt
|}
 
 
==How this part of the quiz currently works==
 
Before saying what I would like to change, I think it is best to summarise how the question engine currently works.
 
===Database tables===
 
There are three database tables that store data about students' attempts at questions.
 
====question_attempts====
 
This table is basically used to generate unique ids for the other tables relating to question attempts, and allow one to link the data in those tables to the other parts of Moodle that use the question bank, for example from a quiz attempt.
 
{| class="nicetable"
! Column !! Type !! Comment
|-
| id || INT(10) NOT NULL AUTO INCREMENT || Unique id used to link attempt question data to other things, e.g. quiz_attempts.uniqueid
|-
| modulename || VARCHAR(20) NOT NULL || e.g. 'quiz', the type of thing linked to.
|}
 
====question_sessions====
 
There is one row in this table for each question in an attempt.
 
{| class="nicetable"
! Column !! Type !! Comment
|-
| id || INT(10) NOT NULL AUTO INCREMENT || Unique id. Not used much.
|-
| attemptid || INT(10) NOT NULL REFERENCES question_attempts.id || Which attempt this data belongs to.
|-
| questionid || INT(10) NOT NULL REFERENCES question.id || Which question this is the attempt data for.
|-
| newest || INT(10) NOT NULL REFERENCES question_states.id || The latest state of this question in this attempt.
|-
| newgraded || INT(10) NOT NULL REFERENCES question_states.id || The latest graded state of this question in this attempt.
|-
| sumpenalty || NUMBER(12,7) NOT NULL || Used for adaptive mode scoring.
|-
| manualcomment || TEXT || A teacher's manual comment.
|}
 
(attemptid, questionid) is an alternate key, but that makes MDL-15596 impossible to satisfy.
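To make the role of the newest pointer concrete, here is a sketch of how the latest state of every question in an attempt is fetched under this schema; the attempt id (123) is purely illustrative.

```sql
-- Sketch: fetch the latest state of each question in one attempt,
-- following the denormalised 'newest' pointer in question_sessions.
SELECT sess.questionid, qs.*
  FROM question_sessions sess
  JOIN question_states qs ON qs.id = sess.newest
 WHERE sess.attemptid = 123;
```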
 
====question_states====
 
There are several rows here for each question session, recording the different states that the question went through as the student attempted it. For example open, save, submit, manual_grade.
 
{| class="nicetable"
! Column !! Type !! Comment
|-
| id || INT(10) NOT NULL AUTO INCREMENT || Unique id. Not used much.
|-
| attempt || INT(10) NOT NULL REFERENCES question_attempts.id || Which attempt this data belongs to.
|-
| question || INT(10) NOT NULL REFERENCES question.id || Which question this is the attempt data for.
|-
| seq_number || INT(10) NOT NULL || Numbers the states within this question attempt.
|-
| answer || TEXT NOT NULL || A representation of the student's response.
|-
| timestamp || INT(10) NOT NULL || Timestamp of the event that led to this state.
|-
| event || INT(4) NOT NULL || The type of state this is. One of the QUESTION_EVENTXXX constants from the top of questionlib.php.
|-
| grade || NUMBER(12,7) NOT NULL || The grade that had been earned by this point, after penalties had been taken into account.
|-
| raw_grade || NUMBER(12,7) NOT NULL || The grade that had been earned by this point, before penalties had been taken into account.
|-
| penalty || NUMBER(12,7) NOT NULL || Used by the adaptive mode scoring.
|}
 
(attempt, question, seq_number) is an alternate key (see the remark in the previous sub-section).
 
The student's response is stored in the answer column. It is up to each question type to convert the data received from the student into a single string in whatever format seems best. This is despite the fact that the response comes from an HTTP POST, and so is just an array of key => value pairs, which could easily be stored directly in the database without each question type having to write nasty string concatenation and parsing code.
 
===Code flow===
 
A part of Moodle, for example the Quiz module, wishes to use questions. First, mod/quiz/attempt.php will render a page of questions by calling
# load_questions,
# load_question_states, and
# print_question one or more times.
All three of these functions delegate part of the work to the appropriate question type.
 
This outputs each question in its current state. Any form controls that are output have a name="" attribute that is a combination of a prefix determined by the question engine (e.g. q123_) and a main part determined by the question type.
 
When this page is submitted, the data goes to whichever script the question-using code designates. In the case of the quiz it is mod/quiz/processattempt.php. This calls
# load_questions,
# load_question_states,
# question_extract_responses,
# question_process_responses, and
# save_question_state.
# In the case of the quiz, it then saves the quiz_attempts row, which has been updated by some of the above methods.
Again, steps 1., 2., 4. and 5. delegate part of the work to the question types.
 
Processing manual grading by the teacher is similar. The sequence of calls is:
# load_questions,
# load_question_states,
# question_process_comment, and
# save_question_state.
# In the case of the quiz, it then saves the quiz_attempts row, which has been updated by some of the above methods.
 
One thing to note here is that a lot of processing is done before any data can be stored in the database.
 
Another point to note is that the question type classes are singletons. That is, only one instance of each question type is created. All the methods used by the question engine are passed a $question, and possibly a $state object, to tell the question type which question instance they should be processing. This is good design from several points of view. It works well with batch database queries (efficiency), and the question types do not hold state that might become inconsistent (robustness). However, there is a slight price here: very rich question types cannot cache state in memory, so they may have to repeat processing (a richness <-> efficiency trade-off). They can, however, store state in the database, since they can put whatever they like in the question_states.answer field.
 
 
==Overview of the changes==
 
===Clarifying question states===
 
Although there is a database table called question_states, the column there that stores the type of state is called event, and the values stored there are active verbs like open, save, grade. The question_process_responses function is written in terms of these actions too. One nasty part is that during the processing, the event type can be changed several times before it is finally stored in the database as a state.
 
I would like to clearly separate the concept of the current state that a question is in, and the various actions that lead to a change of state.
 
(+Correctness, +Richness).
 
===Question interaction models===
 
At the moment, the various sequences of states that a question can move through are controlled by a combination of the compare_responses and grade_responses methods of the question types, and hard-coded logic in the question_process_responses function.
 
I would like to separate out the control of how a question moves through different states, into what I will call question interaction models. Currently there are two models supported by Moodle, and the choice is controlled by the adaptive mode setting in the quiz.
 
There is the deferred feedback model, which is what you get when adaptive mode is off. The student enters an answer to each question which is saved. Then, when they submit the quiz attempt, all the questions are graded. Then all the marks and feedback are available for review.
 
Alternatively, there is the adaptive model, where the student can submit one question at a time, get some feedback, and if they were wrong, immediately change their answer, with a reduced grade if they get it right the second time.
 
Currently, manually graded questions use the same adaptive or non-adaptive models, with some slight changes. However, this does not work very well, as the number of essay-question correctness bugs listed above shows. Really, manual grading should be treated as a separate interaction model.
 
So, what exactly does the interaction model control? Given the current state of the question, it limits the range of actions that are possible. For example, in adaptive mode, there is a submit button next to each question. In non-adaptive mode, the only option is to enter an answer and have it saved, until the submit all and finish button is pressed. That is, it roughly replaces question_extract_responses, question_process_responses, question_process_comment and save_question_state in the current code.
 
(+++Richness, ++Correctness, +Robustness)
 
===Simplified API for question types===
 
Above, I said "The student enters an answer to each question, which is saved. Then, when they submit the quiz attempt, all the questions are graded." In fact, that was a lie. Whenever a response is saved, the grade_responses method of the question type is called, even if the state is only being saved. This is confusing, to say the least, and very bad for performance in the case of a question type like JUnit (in contrib), which takes code submitted by the student and compiles it in order to grade it.
 
So some of the API will only change to the extent that certain functions will in future only be called when one would expect them to be. I think this can be done in a backwards-compatible way.
 
Another change will be that, at the moment, question types have to implement tricky load/save_question_state methods that, basically, have to unserialise/serialise some of the state data in a custom way, so it can be stored in the answer column. This is silly design, and leads to extra, unnecessary database queries. By changing the database structure so that the students' responses are stored directly in a new table (after all, an HTTP POST request is really just an array of string key-value pairs, and there is a natural way to store that in a database) it should be possible to eliminate the need for these methods.
 
(+Richness, +Correctness, +Efficiency)
 
===Changed database structure===
 
At the moment, some parts of the question_sessions and question_states tables and their relations are not normalised. This is a technical term for describing database designs. Basically, if the tables are normalised, then each 'fact' is stored in exactly one place. This makes it much less likely that the database can get into an inconsistent state. Changing to a normalised structure should therefore increase robustness.
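To illustrate the kind of inconsistency that normalisation rules out: under the current schema, question_sessions.newest duplicates a fact that is already derivable from question_states, so the two copies can disagree. A consistency check might look something like this sketch (table and column names as described earlier):

```sql
-- Sketch: find sessions whose 'newest' pointer disagrees with the
-- actual latest state recorded in question_states.
SELECT sess.id
  FROM question_sessions sess
  JOIN question_states qs ON qs.id = sess.newest
 WHERE qs.seq_number <> (SELECT MAX(s2.seq_number)
                           FROM question_states s2
                          WHERE s2.attempt = sess.attemptid
                            AND s2.question = sess.questionid);
```

In a fully normalised design there is nothing to check: the latest state is simply the state with the highest sequence number for that attempt.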
 
In addition, I wish to change the tables so that the responses received from the student are stored in a much more 'raw' form. That will mean that the responses can be saved much earlier in the sequence of processing, which will again increase robustness. It will also allow the sequence of responses from the student to be replayed more easily, making it easier to continue after a server crash, to regrade, and to write automatic test scripts.
 
(+++Robustness, +Correctness)
 
 
==Brief digression: who controls the model, the quiz or the question?==
 
Before proceeding to describe in detail the design for how I will implement these changes, I want to digress and discuss one of the design decisions that took a long time for me to resolve.
 
The question is, is the interaction model a property of the question, or the quiz?
 
At the moment, there is a setting in the quiz that lets you change all the questions in the quiz from adaptive mode to non-adaptive mode at the flick of a switch. Or, at least, it allows you to re-use the same questions in both adaptive and non-adaptive mode. This suggests that the interaction model is a property of the quiz.
 
On the other hand, suppose that you wanted to introduce the feature that, in adaptive mode, the student gets different amounts of feedback after each try at getting the question right. For example, the first time they get it wrong they only get a brief hint; if they are wrong the second time they get a more detailed comment, and so on. To define such an adaptive question, you need extra data (the various hints) that is irrelevant to a non-adaptive question. Also, certain question types (e.g. Opaque, from contrib) have to, of necessity, ignore the adaptive/non-adaptive setting. And I suggested above that manually graded question types like Essay should really be considered to have a separate interaction model. This suggests that the interaction model is a property of the individual question. (Although usability considerations suggest that a single quiz should probably be constructed from questions with similar interaction models.)
 
I eventually concluded that both answers are right, and more importantly, worked out how a question engine could allow that to happen.
 
 
-----
 
{{Work in progress}}
 
''TODO finish the rest of this document.''
 
==Detailed design==
 
===New database structure===
 
====question_attempt_batches====
 
This is just a rename of question_attempts.
 
{| class="nicetable"
! Column !! Type !! Comment
|-
| id || INT(10) NOT NULL AUTO INCREMENT || Unique id used to link attempt question data to other things, e.g. quiz_attempts.uniqueid and question_attempts.batchid
|-
| modulename || VARCHAR(255) NOT NULL || e.g. 'quiz', the type of thing linked to.
|}
 
====question_attempts====
 
This replaces question_sessions. Question sessions is not a great name, because 'session' has other connotations in the context of web applications. I think it is right to use the question_attempts name here, because this table has one row for each attempt at each question.
 
There is now no requirement for (attemptid, questionid) to be unique.
 
{| class="nicetable"
! Column !! Type !! Comment
|-
| id || INT(10) NOT NULL AUTO INCREMENT || Unique id. Linked to from question_states.attemptid.
|-
| batchid || INT(10) NOT NULL REFERENCES question_attempt_batches.id || Which attempt this data belongs to.
|-
| interactionmodel || VARCHAR(32) NOT NULL || The question interaction model that is managing this question attempt.
|-
| questionid || INT(10) NOT NULL REFERENCES question.id || Which question this is the attempt data for.
|-
| maxgrade || NUMBER(12,7) NOT NULL || The grade this question is marked out of in this attempt.
|-
| responsesummary || TEXT || This is a textual summary of the student's response (basically what you would expect to see in the quiz responses report).
|}
 
* We need to store maxgrade because it could come from anywhere (e.g. quiz_question_instances, question.defaultgrade, ...). We need it available at various times (e.g. when displaying a question), so it is better to store it explicitly here.
 
====question_states====
 
Same purpose as the old question_states table, but simplified.
 
{| class="nicetable"
! Column !! Type !! Comment
|-
| id || INT(10) NOT NULL AUTO INCREMENT || Unique id. Linked to from question_responses.stateid.
|-
| attemptid || INT(10) NOT NULL REFERENCES question_attempts.id || Which question attempt this data belongs to.
|-
| timestamp || INT(10) NOT NULL || Timestamp of the event that led to this state.
|-
| state || INT(4) NOT NULL || The type of state this is. One of the QUESTION_STATEXXX constants from the top of questionlib.php.
|-
| grade || NUMBER(12,7) || The grade the student has earned for this question, on a scale of 0..1. Needs to be multiplied by question_attempts.maxgrade to get the true grade.
|}
 
* We store grade unscaled (as a value between 0.0 and 1.0) because that makes regrading easier. (You might think that you can adjust scaled grades later, and that is almost true, but if maxgrade used to be 0, then you can't change it to anything else.)
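As a sketch of how the stored fraction would be turned back into a real grade, using the proposed column names above:

```sql
-- Sketch: reconstruct the displayed grade from the stored 0..1 fraction.
-- Regrading a question out of a new maximum only has to update maxgrade;
-- the stored fractions are untouched.
SELECT qa.id,
       qs.grade * qa.maxgrade AS truegrade
  FROM question_attempts qa
  JOIN question_states qs ON qs.attemptid = qa.id;
```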
 
 
====question_responses====
 
This stores the data submitted by the student (a list of name => value pairs) that led to the state stateid. This replaces the old question_states.answer.
 
There will be a convention that ordinary names like 'myvariable' should be used for submitted data belonging to the question type; names prefixed with a !, like '!myaction', should be used for data belonging to the question interaction model; and names prefixed with a _ can be used for internal things. For example, the random question might store '_realquestionid' attached to the 'open' state, or a question type that does a lot of expensive processing might store a '_cachedresult' value, so the expensive calculation does not need to be repeated when reviewing the attempt.
 
Note that the old question_states.answer field used to save a lot of repetitive information from one state to the next, for example the chosen questionid for random questions, and the choice order for multiple-choice questions with shuffle answers on. In future, this sort of repetitive information will not be saved. Instead, during question processing, the question types will be given access to the full state history.
 
{| class="nicetable"
! Column !! Type !! Comment
|-
| id || INT(10) NOT NULL AUTO INCREMENT || Unique id. Not used much.
|-
| stateid || INT(10) NOT NULL REFERENCES question_states.id || Which state the submission of this data led to.
|-
| name || VARCHAR(20) NOT NULL || The name of the parameter received from the student.
|-
| value || TEXT || The value of the parameter.
|}
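To make the naming convention concrete, a single saved submission to a multiple-choice question might produce rows like the following; the names 'answer', '!submit' and '_order', and the stateid, are invented for illustration:

```sql
-- Hypothetical question_responses rows for one state (stateid 4567):
-- an ordinary question-type value, an interaction-model action,
-- and an internal value that avoids re-shuffling the choices on review.
INSERT INTO question_responses (stateid, name, value) VALUES (4567, 'answer', '3');
INSERT INTO question_responses (stateid, name, value) VALUES (4567, '!submit', '1');
INSERT INTO question_responses (stateid, name, value) VALUES (4567, '_order', '3,1,4,2');
```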
 
===Upgrading the database===
 
===New list of states that a question may be in===
 
===Interaction models to implement initially===
 
''I intend to draw state diagrams for each of these in due course.''
 
Note that the main blocker to supporting negative scoring in the quiz is that it is not compatible with adaptive mode. So separating the models will allow MDL-1647 to finally be resolved.
 
====Deferred feedback====
 
This is how the quiz currently works when adaptive mode is off. The student enters a response to each question, then does Submit all and finish at the end of their attempt. Only then do they get feedback and/or grades, depending on the review settings.
 
====Legacy adaptive mode====
 
This re-implements how adaptive mode currently works, so old attempts can still be reviewed. In future, adaptive mode will be replaced with ...
 
====Interactive====
 
This will replace adaptive mode. This is the model that has been used successfully for several years in the OU's OpenMark system. The OU has also modified Moodle to work like this, but because of the way the quiz core currently works, I was not happy just merging the OU changes into Moodle core. In a sense, this whole document grew out of my thoughts about how to implement the OU changes in Moodle core properly.
 
====Manually graded====
 
This is a new mode, specifically tailored to manually graded questions. It should make it possible to fix a lot of the outstanding manual-grading correctness bugs listed above.
 
====Each attempt builds on last====
 
This is for the benefit of the quiz feature of the same name. It is just like the deferred feedback model, but subsequent attempts are seeded with the student's last response from the previous attempt. I think it leads to slightly cleaner code to do this as a separate interaction model. For example, it does not make any sense to use each attempt builds on last in combination with adaptive mode, although Moodle currently lets you do that.
 
====Immediate feedback====
 
This would be like a cross between deferred feedback and interactive. Each question would have a submit button beside it, like in interactive mode, so the student can submit it immediately and get the feedback while they still remember their thought processes while answering the question. However, unlike the interactive model, they would not then be allowed to change their answer.
 
====Certainty based marking with deferred feedback====
 
This takes any question that can use the deferred feedback model, and adds three radio buttons to the UI (high, medium, low) for the student to use to indicate how certain they are that their answer is correct. Their choice is used to alter the grade they receive for the question.
 
====Certainty based marking with immediate feedback====
 
This is like immediate feedback mode with the certainty based marking feature.
 
===Changes to the question type API===
 
Can this be backwards compatible?
 
===The interaction model API===
 
 
==Proposed robustness and performance testing system==
 
A major change to the question engine should really only be contemplated in combination with the introduction of a test harness that makes it easy to run correctness, performance and reliability tests.
 
One advantage of the way data will be stored in the new system is that everything originally submitted by the user will be stored in the database in a format very close to the one in which it was originally received by the web server. Therefore, it should be easy to write a script that replays saved quiz attempts. This is the basis of a test harness. I will create such a test script as part of this work.
 
 
==Time estimates==
 
* Completing the above design: a few days.
* ...
 
===Time scales===
 
It seems highly unlikely that this could be done before Moodle 2.0 branches. Now that I have worked out the details of this proposal, I would really like to do it as soon as possible, so that would be for Moodle 2.1. However, that is dependent on my employer (either moodle.com or The Open University). This change is sufficiently big that I would not want to undertake it on my own time.
 
 
==Summary==
 
This change will cost about X months of development. In exchange, the question engine will be more reliable, and will support a richer range of teaching and learning activities, by extending the range of interaction models supported and making it easier to write new question types.
 
 
==See also==
 
* The place to discuss this is the [http://moodle.org/mod/forum/view.php?f=121 quiz forum].
* There is also a [http://moodle.org/mod/forum/discuss.php?d=114527#p502692 thread in the Lesson forum].
* See the lists of [[#Why_things_need_to_change|related tracker issues]] above.
 
 
[[Category:Quiz]]
