Workshop 2.0 specification
| | |
|---|---|
| Project state | Implemented |
| Tracker issue | MDL-17827 |
| Discussion | here |
| Assignee | David Mudrak |
| Moodle version | 2.0 |
This page tracks and summarizes the progress of my attempt to rewrite the Workshop module for Moodle 2.0 (yes, yet another attempt).
NOTE: This is the original Workshop 2.0 specification. See Workshop for the current documentation.
Introduction
The Workshop module is an advanced Moodle activity designed for peer assessment within a structured review/feedback/grading framework. It is generally agreed that the Workshop module has huge pedagogical potential, and there is some demand for such a module from the community (including some Moodle Partners). It was originally written by Ray Kingdon and was the very first third-party module written for Moodle. For a long time it has been largely unmaintained, except for emergency fixes by various developers to keep it operational. In recent years, several unsuccessful attempts were made by various volunteers to rewrite the module or to replace it with an alternative.
Key concepts
There are two key features making the Workshop module unique among other Moodle activities:
- Advanced grading methods - in contrast to the standard Assignment module, Workshop supports more structured forms of assessment/evaluation/grading. The teacher defines several aspects of the work; these are assessed separately and then aggregated. Examples include multi-criterion evaluation and rubrics.
- Peer-assessment - according to some theories of psychology and education (for example Bloom's taxonomy), evaluation is one of the highest levels of cognitive operation. It implicitly requires paying attention, understanding the concepts, thinking critically, applying knowledge and analysing the subject being evaluated. Peer-assessment fits very well into the social constructionist model of learning. In Workshop, students not only create (which is the other non-trivial cognitive operation) and submit their own work; they also participate in the assessment of others' submissions, give them feedback and suggest a grade for them. The Workshop module provides some (semi-)automatic ways to measure the quality of the peer-assessment and calculates a "grading grade", i.e. the grade for how well a student assessed her peers. Peer-assessment gives students the opportunity to see others' work and learn from it, to formulate quality feedback which will enhance learning, and to learn from the feedback of more than one person (i.e. not just the teacher).
Basic scenario of the module usage
The following scenario includes all supported workshop features and phases. Some of them are optional, e.g. the teacher may decide to disable peer-assessment completely so that the Workshop behaves similarly to the Assignment module, with the benefit of multi-criteria evaluation.
- Workshop setup - the teacher prepares the workshop assignment (e.g. an essay, research paper etc.) and sets up various aspects of the activity instance. In particular, the grading strategy (see below) has to be chosen and the assessment dimensions (criteria) defined.
- Examples from teacher (optional)
- teacher uploads example submissions - e.g. an example of good work and an example of poor work
- teacher assesses example submissions. For every example submission, there is one and only one assessment. Students never see assessments of the example submissions.
- students try out and practise the assessment process on the example submissions. The grade for assessment is automatically calculated (the teacher's assessment of the example is considered the "best" assessment - see below) and displayed immediately.
- students can re-assess as many times as they want
- teacher can set whether the examples are to be assessed before (default) or after the submission phase (a community request)
- Students work on their submissions - the typical result is one or more files that can be submitted into the workshop together with a comment. Students may upload a draft version of their work, download it later, modify it and upload it again. Only the last uploaded version is stored. Other types of work are possible using the new Repository API - e.g. students can prepare a view in Mahara or a GoogleDoc, publish a video at YouTube etc.
- Self-assessment (optional) - after the work is submitted, the student is asked to assess her own work using the evaluation form of the selected grading strategy. Self-assessment is part of the peer-assessment phase - one's own work is just yet another work to review.
- Peer-assessment (optional)
- the module randomly selects a given number of submissions to be reviewed/commented/evaluated/assessed by each student.
- grade for assessment is automatically calculated
- each grade for assessment can be overridden by teacher
- Assessment by teacher (optional)
- teacher evaluates submissions using the selected grading strategy evaluation form
- teacher evaluates the quality of peer-assessments
- Final grade aggregation
- generally, the final grade consists of the grade for submission and the grade for assessment
- every grading strategy defines how the final grade for submission is computed
- grade for submission can be overridden by teacher
Notes about overriding:
- Teacher can manually override the grade for submission _after_ the aggregation. So even if - according to the peers' suggestions - the grade would be 60/100 (for example after receiving 40, 60, 60, 80), the teacher can override it with, say, 70/100.
- In the case of grades for assessment, the teacher can't override the total value, only the individual ones _before_ the aggregation. So if a student is given grades for assessment of 100, 100, 80 and 66, the teacher can only manually override each of them, not the result. For example, she can override the last one and change the value from 66 to 80; the student then gets 90/100 as the total grade for assessment. A minimal sketch of both rules follows below.
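The following sketch is illustrative only - the field names follow the DB structures described later on this page, and the aggregation code itself is not the final implementation:

```php
// 1. Grade for submission: the teacher's override replaces the aggregated value.
$gradeforsubmission = is_null($submission->gradeover)
        ? $submission->grade : $submission->gradeover;

// 2. Grade for assessment: overrides apply to the individual assessments, and
//    the total is aggregated from the (possibly overridden) individual values.
$total = 0;
foreach ($assessments as $assessment) {
    $total += is_null($assessment->gradinggradeover)
            ? $assessment->gradinggrade : $assessment->gradinggradeover;
}
$gradeforassessment = $total / count($assessments); // eg (100 + 100 + 80 + 80) / 4 = 90
```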
Pedagogical use cases
Simple assignment with advanced grading strategy
- Students submit just one file, as they do in the Assignment module
- No peer grading
- Several dimensions (criteria) of the assessment
- Weighted mean aggregation
- if only one dimension is used, the Workshop behaves like an ordinary Assignment
Teacher does not have time to evaluate submissions
- Dozens/hundreds of students in the class, everyone submits an essay
- Every student is given a set of, say, five submissions from peers to review and evaluate
- The review process is bi-anonymous - the author does not know who the reviewers are, and the reviewers do not know who the author is
- The teacher randomly picks some peer-assessments and grades their quality
- The grading grade is automatically calculated according to the level of reliability. If the reviewers did not reach the required level of consensus (i.e. the peer-assessments vary a lot), the teacher is notified and asked to assess the submission. The teacher's assessment is given more weight, so it (hopefully) helps to decide and to calculate the final grade for the author and the grading grades for the reviewers.
- When the teacher is happy with the results, she pushes the final grades into the course Gradebook.
Presentations and performance
- Students submit their slides and give a presentation in class
- Peer feedback on the submitted materials and the live presentation
- Random assignment of assessments motivates students to pay attention and take good notes
Activity focused on work quality
- Initially, the teacher uses the No grading strategy to collect comments from peers
- Students submit their drafts during the submission period and get some feedback during the assessment period
- Then the teacher reverts the Workshop back into the submission phase, allows re-submissions and changes the grading strategy to a final one (e.g. Accumulative)
- Students submit their final versions, taking the received comments into account
- Peers re-assess the final versions and give them grades
- Final grades are computed using the data from the last (final) round of the assessment period.
- The reverting may happen several times as needed. The allocation of reviewers does not change.
Current implementation problems
- Very old and unmaintained code - the vast majority of the code is the same as it was in Moodle 1.1. mforms are not used (an accessibility issue). Missing modularity. No unit testing.
- Does not talk to the Gradebook - there are some patches around in forums fixing partial problems.
- User interface and usability
- Grades calculation is kind of dark magic - see this discussion. It is not easy to understand how Workshop calculates the grading grade, even if you read the code that actually does the calculation. Moreover, the whole algorithm depends on a set of strange constants without any explanation of, or reasoning behind, their values. The grade calculation must be clear and easy to understand for both teacher and student. Teachers must always be able to explain why students got their final grades. Therefore the plan is to get rid of the fog above the grades calculation, even if it breaks backward compatibility.
- Lack of custom scales support - see this discussion
- Submitting and assessing phases may overlap - the fact that a student can start assessing while some others have not submitted their work yet causes problems with assessment allocation balancing and has to be handled by a strange "overall allocation" setting - see this discussion. IMO there is no strong educational use case explaining why this should be possible. Therefore, it is proposed to break backwards compatibility here and keep all workshop phases distinct. This will simplify phase switching and control, and the module set-up.
- Performance - SQL queries inside loops, no caching, no comments.
- Weights not saved in the database - currently, assessment dimension weights are saved in the DB as the key into the array of supported weights. E.g. there is weight "11" in the DB and, looking at WORKSHOP_EWEIGHTS, you can see that "11" => 1.0. The new version will save both grades and weights as raw values (type double precision) so that calculations with weights are possible at the SQL level, as sketched below.
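To illustrate the difference (the 1.9 excerpt is simplified; see the WORKSHOP_EWEIGHTS definition in the 1.9 lib.php for the real list of supported weights):

```php
// Workshop 1.9: the DB stores an index into a lookup array (simplified excerpt).
$WORKSHOP_EWEIGHTS = array(/* ... */ 11 => 1.0 /* ... */);
$weight = $WORKSHOP_EWEIGHTS[$record->weight]; // DB value "11" really means 1.0

// Workshop 2.0: the DB stores the raw value, so a weighted mean can be
// computed directly in SQL, eg:
//   SELECT SUM(grade * weight) / SUM(weight) ... GROUP BY assessmentid
$weight = $record->weight;                     // the DB value is the weight itself
```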
Project analysis
See the project analysis mindmap
Project schedule
Milestone 1
- Date: 15/05/2009
- Goal: The functional specification is in docs wiki and is reviewed and agreed by the community and Moodle HQ. The implementation plan is transferred into sub-tasks in the tracker.
Milestone 2
- Date: 15/07/2009
- Goal: All features implemented. The community is asked for the testing.
Milestone 3
- Date: 15/08/2009
- Goal: Major bugs fixed, upgrading from pre-2.0 versions works. The community is asked for the QA testing.
Milestone 4
- Date: 15/09/2009
- Goal: The module is moved from contrib back to the core.
User interface mockups
Implementation plan
Glossary of terms
- Allocation
- Process/result of assigning submissions to the peers to be reviewed/assessed.
- Assessment dimension
- A general umbrella term covering assessment elements, criteria or aspects. In various grading strategies, the dimension can have various meanings.
- Grade
- In the context of the DB structure, grade represents "Grade for submission".
- Grading grade
- In the context of the DB structure, grading grade represents "Grade for assessment".
- Grading strategy
- The method describing how the peer assessment is done and what the assessment form looks like. Four grading strategies will be implemented in 2.0: No grading, Accumulative, Number of errors and Rubric.
- Review
- In workshop, synonym for assessment.
- Reviewee
- Author of the reviewed/assessed submission.
- Reviewer
- Student who is reviewing/assessing an allocated submission.
Grading
The final grade (that is to be pushed into the Gradebook when the workshop is over) generally consists of two parts: the grade for submission and the grade for assessment. The default maximum values will be 80 for the grade for submission and 20 for the grade for assessment. The final grade is always the sum of these two components, giving a default maximum of 100. The maximum values for both the grade for submission and the grade for assessment are defined by the teacher in mod_form (see the mockup UI in MDL-18688).
Workshop tries to compute the grades automatically where possible. The teacher can always override the computed grade for submission and grade for assessment before the final grade is pushed into the Gradebook (where the grade can be overridden yet again).
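As a minimal sketch (assuming both components are stored as percentages 0..100, as described in the DB structures below):

```php
// The final grade is the sum of the two weighted components. With the default
// maximums (80 + 20), a student with 75% for submission and 90% for assessment
// gets 0.75 * 80 + 0.90 * 20 = 78 out of 100.
$finalgrade = $submission->grade / 100 * $workshop->grade
            + $submission->gradinggrade / 100 * $workshop->gradinggrade;
```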
Grading strategies
A grading strategy determines how submissions are assessed and how the grade for submission is calculated. Currently (as of Moodle 1.9), the Workshop offers the following grading strategies: No Grading, Accumulative, Error Banded, Criterion and Rubric.
My original idea was to have grading strategies as separate subplugins with their own db/install.xml, db/upgrade.php, db/access.php and lang/en_utf8/ - similar to what question/types/ have. This pluggable architecture would make it easy to implement custom grading strategies, e.g. national-curriculum specific ones. However, Petr Škoda gave -2 to this proposal, given that the Moodle core doesn't support such subplugins right now. So, from the perspective of DB tables, capabilities and language files, grading strategies have to be part of the monolithic module architecture.
Technically, grading strategies will be classes defined in mod/workshop/grading/strategyname/strategy.php. A grading strategy class has to implement the workshop_grading_strategy interface, either directly or via inheritance:
```php
interface workshop_grading_strategy {
}

// A strategy class may implement the interface directly...
class workshop_strategyname_strategy implements workshop_grading_strategy {
}

// ...or extend a common base class that implements it.
abstract class workshop_base_strategy implements workshop_grading_strategy {
}

class workshop_strategyname_strategy extends workshop_base_strategy {
}
```
Grading strategy classes basically provide the grading forms and calculate grades. During the current phase, four strategies will be implemented. The grading strategy of a given workshop cannot be changed after a work has been submitted.
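For illustration, loading such a class could look roughly like this (a hypothetical sketch - the function name and constructor signature are mine, not the final API):

```php
// Hypothetical loader illustrating the intended file layout and naming
// convention described above; not the final API.
function workshop_get_strategy(stdclass $workshop) {
    global $CFG;
    require_once($CFG->dirroot . '/mod/workshop/grading/' . $workshop->strategy . '/strategy.php');
    $classname = 'workshop_' . $workshop->strategy . '_strategy'; // eg workshop_rubric_strategy
    return new $classname($workshop);
}
```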
In contrast to 1.9, the number of evaluation elements/criteria/dimensions is not defined in mod_form but at the strategy level. There is no need to pre-set it; the teacher just clicks on "Blanks for 2 more dimensions" and gets empty form fields.
See MDL-18912 for UI mockups of grading forms.
No grading
- No grades are given by peers to the submitted work and/or assessments, just comments.
- Submissions are graded by teachers only
- There may be several assessment dimensions (aspects) to be commented separately.
- This may be used to practice submitting work and assessing peers' work through comment boxes (small textareas).
Number of errors
- In the pre-2.0 versions, this was called Error banded.
- Several (1 to N) assessment assertions
- The student evaluates the submission by Yes/No, Present/Missing, Good/Poor, etc. pairs.
- The grade for submission is based on the weighted count of negative assessment responses: a response with weight w is counted w-times. Teachers define a mapping table that converts the number of negative responses to a percentage grade for submission (see the sketch after this list). No negative response always automatically maps to a 100% grade for submission.
- This may be used to make sure that certain criteria were addressed in article reviews.
- Examples of assessment assertions (criteria): Has less than 3 spelling errors, Has no formatting issues, Has creative ideas, Meets length requirements
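A minimal sketch of the grade calculation, assuming a mapping table like the one stored in workshop_forms_noerrors_map below (variable names are illustrative):

```php
// Count the weighted number of negative responses: a response with weight w
// counts w-times.
$nonegative = 0;
foreach ($responses as $response) {
    if (!$response->passed) {
        $nonegative += $response->weight;
    }
}
// $map converts the number of negative responses to a percentage grade, eg
// array(1 => 80, 2 => 60, 3 => 30). Zero negative responses is always 100%.
if ($nonegative == 0) {
    $grade = 100;
} else {
    $grade = $map[min($nonegative, max(array_keys($map)))];
}
```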
Accumulative
- Several (1 to N) assessment dimensions (criteria)
- Each can be graded using a grade (e.g. out of 100) or a scale (using a site-wide scale or a scale defined in the course). Scales are treated as grades from 0 to M-1, where M is the number of scale items.
- The grade for submission is aggregated as a weighted mean of normalized dimension grades (see also the sketch after this list):

  $$\text{grade} = \frac{\sum_{i=1}^{n} w_i \frac{g_i}{m_i}}{\sum_{i=1}^{n} w_i} \times 100\,\%$$

  where $g_i$ is the grade given to the i-th dimension, $m_i$ is the maximal possible grade of the i-th dimension, $w_i$ is the weight of the i-th dimension and $n$ is the number of dimensions.
- There is a backwards compatibility issue, as the current Workshop uses its own scales. During the upgrade to 2.0, the module will create all necessary scales in the course or as standard scales (to be decided).
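The formula above transcribes directly into code (an illustrative sketch, not the final implementation):

```php
// Weighted mean of normalized dimension grades, as a percentage 0..100.
$sum = 0;
$sumweights = 0;
foreach ($dimensions as $i => $dimension) {
    $sum        += $dimension->weight * $grades[$i] / $dimension->maxgrade;
    $sumweights += $dimension->weight;
}
$gradeforsubmission = $sum / $sumweights * 100;
// Example 1 below: (1*6/10 + 1*5/10 + 2*8/8 + 2*6/8 + 5*6/6) / 11 * 100 = 87.27
```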
Rubric
- See the description of this scoring tool at Wikipedia
- Several (1 to N) evaluation categories/criteria/dimensions
- Each of them consists of a grading scale, i.e. a set of ordered propositions/levels. Every level is assigned a grade.
- The student chooses which proposition (level) best answers/describes the given criterion
- The final grade is aggregated as

  $$\text{grade} = \frac{\sum_{i=1}^{n} g_i}{\sum_{i=1}^{n} m_i} \times 100\,\%$$

  where $g_i$ is the grade given to the i-th dimension, $m_i$ is the maximal possible grade of the i-th dimension and $n$ is the number of dimensions.
- This may be used to assess research papers or to assess papers in which different books were critiqued.
- Example: criterion "Overall quality of the paper", levels "5 - An excellent paper, 3 - A mediocre paper, 0 - A weak paper" (the numbers represent the grades)
This strategy merges the current Rubric and Criterion strategies into a single one. Conceptually, the current Criterion is just one dimension of a Rubric. In Workshop 1.9, Rubric can have several evaluation criteria (categories), but they are limited to a fixed scale of 0-4 points. Criterion in Workshop 1.9 may use a custom scale, but is limited to a single evaluation aspect. The new Rubric strategy combines the two. To mimic the legacy behaviour:
- Criterion in 1.9 can be replaced by Rubric 2.0 using just one dimension
- Rubric in 1.9 can be replaced by Rubric 2.0 by using point scale 0-4 for every criterion.
- In 1.9, a student could suggest an optional adjustment to the final grade. I propose to get rid of this. Eventually (as a new feature) it could become standard for all grading strategies, not only Rubric.
Grade for assessment
Currently implemented in workshop_grade_assessments() and workshop_compare_assessments(). Before I understood how it works, I was very loudly proposing that this part had to be reimplemented (is it just me tending to push my own solutions instead of trying to understand someone else's?). Now I think the ideas behind the not-so-pretty code are fine, and the original author seems to have real experience with these methods. I believe the current approach will be fine if we provide clear reports that help teachers and students understand why the grades were given. The following meta-algorithm describes the current implementation. It is almost identical for all grading strategies.
For all assessment dimensions, calculate the arithmetic mean and the sample standard deviation across all assessments made in this workshop instance. Grades given by a teacher can be given a weight (integer 0, 1, 2, 3 etc.) in the Workshop configuration. If the weight is >1, the figures are calculated as if the same grade had been given by that many reviewers. Backwards compatibility issue: currently, weights are not taken into account when calculating the sample standard deviation. This is going to be changed.
Try to find the "best" assessment. For our purposes, the best assessment is the one closest to the mean, i.e. the one representing the consensus of the reviewers. For each assessment, the distance from the mean is calculated similarly to the variance. Standard deviations very close to zero are too sensitive to small changes in the data values, therefore data having stdev <= 0.05 are considered equal:
```
$variance = 0
for each assessment dimension
    if stdev > 0.05 then
        $variance += ((mean - grade) * weight / stdev) ^ 2
```
In some situations, there might be two assessments with the same variance (distance from the mean) but different grades. In this situation, the module has to warn the teacher and ask her to assess the submission (so that her assessment hopefully helps to decide) or to give the grades for assessment manually. There is a bug in the current version linked with this situation - see MDL-18997.
If there are fewer than three assessments for a submission (teacher's grades are counted weight-times), they are all considered "best". Also, in theory, there may be two "best" assessments - one lower than the mean and the other higher than the mean, both with the same variance. In this case, they are both (or all, actually) considered "best". In all other cases, having the "best" assessment as the reference point, we can calculate the grading grade for all peer-assessments.
The best assessment gets a grading grade of 100%. All other assessments are compared against the best one. If there are more best assessments, the closest one is used as the reference. The difference/distance from the best assessment is calculated as a sum of weighted squared differences:
```
$sumdiffs = 0
$sumweights = 0
for each assessment dimension
    $sumdiffs += ((bestgrade - peergrade) * dimensionweight / maxpossiblescore) ^ 2
    $sumweights += dimensionweight
```
- Note: for the Rubric strategy, the weight of every dimension is calculated as (maximum possible grade minus minimal possible grade).
- Example: we have the Accumulative grading strategy with a single assessment dimension. The dimension is graded as a number of points out of 100. The best assessment, representing the opinion of the majority of reviewers, was calculated as 80/100. So, if a student gave the submission 40/100, the distance from the bestgrade is sumdiffs = ((80 - 40) * 1 / 100) ^ 2 = 0.4 ^ 2 = 0.16
In the current implementation, this calculation is influenced by a setting called "comparison of assessments". The possible values are "Very strict", "Strict", "Fair", "Lax" and "Very lax". Their meaning is illustrated in the attached graph. For every level of comparison, a factor f is defined: 5.00 - very strict, 3.00 - strict, 2.50 - fair, 1.67 - lax, 1.00 - very lax. The grade for assessment is then calculated as:
gradinggrade = (1 - f * sumdiffs/sumweights) * 100 [%]
- Example (cont.): in the case of the default "fair" comparison of assessments: gradinggrade = (1 - 2.5 * 0.16 / 1) * 100 = 60/100
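Putting the steps together (an illustrative sketch; the variable names are mine, not the final API):

```php
$f = 2.5;                          // the default "fair" comparison of assessments
$sumdiffs = 0;
$sumweights = 0;
foreach ($dimensions as $i => $dimension) {
    $diff = ($bestgrades[$i] - $peergrades[$i]) * $dimension->weight / $dimension->maxgrade;
    $sumdiffs   += $diff * $diff;
    $sumweights += $dimension->weight;
}
// Clamped at zero here so that very distant assessments do not go negative.
$gradinggrade = max(0, 1 - $f * $sumdiffs / $sumweights) * 100;
// Example above: one dimension, weight 1, max 100, best 80, peer 40:
// sumdiffs = 0.16, gradinggrade = (1 - 2.5 * 0.16 / 1) * 100 = 60
```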
I remember having difficulties trying to find good Czech terms for "Comparison of assessments", "Fair", "Lax" and "Strict" when I was translating the Workshop module. I am not sure how clear they are for other users, including native speakers. I propose to change the original English terms as follows:
- "Comparison of assessments" to "Required level of assessment similarity"
- "Fair" to "Normal"
- "Lax" to "Low"
- "Strict" to "High"
Grades calculation examples
Example 1
- Grading strategy: Accumulative
- Assessment comparison: very strict (f = 5.00)
| | Max grade | Weight | Best assessment | Peer assessment | Difference |
|---|---|---|---|---|---|
| Dimension #1 | 10 | 1 | 6 | 6 | 0 |
| Dimension #2 | 10 | 1 | 5 | 8 | 0.09 |
| Dimension #3 | 8 | 2 | 8 | 6 | 0.25 |
| Dimension #4 | 8 | 2 | 6 | 8 | 0.25 |
| Dimension #5 | 6 | 5 | 6 | 5 | 0.69 |
| Grade for submission | | | 87.27% | 82.42% | |
| Grade for assessment | | | 100.00% | 41.62% | |
Example 2
- Grading strategy: Rubric
- Assessment comparison: fair (f = 2.50)
| | Min grade | Max grade | Best assessment | Peer assessment | Difference |
|---|---|---|---|---|---|
| Dimension #1 | 1 | 4 | 4 | 3 | 0.56 |
| Dimension #2 | 1 | 4 | 3 | 4 | 0.56 |
| Dimension #3 | 0 | 10 | 5 | 6 | 1 |
| Dimension #4 | 0 | 10 | 6 | 5 | 1 |
| Dimension #5 | 0 | 10 | 8 | 7 | 1 |
| Grade for submission | | | 68.42% | 65.79% | |
| Grade for assessment | | | 100.00% | 71.35% | |
Groups support
Visible groups mode
- The automatic allocator tries to pick submissions to assess from other groups.
- Example: ideally, if there are three balanced groups and every submission is allocated to three peers for review, each of the reviewers will be selected from a different group
Separate groups mode
- The submission author and assessor must be from the same group
The structure of the code
Files, libraries, interfaces, classes, unit tests
Database structures
See MDL-19203 (prepare install.xml for the 2.0 database structure)
workshop
This table keeps information about the module instances and their settings.
Field | Type | Default | Description
---|---|---|---
id | int (10) | | auto-numbered
course | int (10) | | the course id this workshop is part of (violates 3NF)
name | char (255) | | the title of the activity as it appears in the course outline
intro | text (medium) | | the description/assignment of the workshop
introformat | int (3) | 0 | the format of the intro field
timemodified | int (10) | 0 | the timestamp when the module was modified
phase | int (2) | 0 | the current manually set phase of the workshop (0 => not available, 1 => submission, 2 => assessment, 3 => closed)
useexamples | int (2) | 0 | optional feature: students practise evaluating on example submissions from the teacher
usepeerassessment | int (2) | 0 | optional feature: students perform peer assessment of others' work
useselfassessment | int (2) | 0 | optional feature: students perform self assessment of their own work
grade | int (5) | 80 | the maximum grade for submission
gradinggrade | int (5) | 20 | the maximum grade for assessment
strategy | varchar (30) | not null | the type of the current grading strategy used in this workshop (notgraded, accumulative, noerrors, rubric)
nattachments | int (3) | 0 | number of required submission attachments
latesubmissions | int (2) | 0 | allow submitting the work after the deadline
maxbytes | int (10) | 100000 | maximum size of one attached file
anonymity | int (2) | 0 | the anonymity mode (0 => not anonymous, 1 => authors hidden from reviewers, 2 => reviewers hidden from authors, 3 => fully anonymous)
assesswosubmission | int (2) | 0 | whether a student may participate in the peer assessment phase even if she has not managed to submit her own work
nsassessments | int (3) | 3 | number of required assessments of other students' work
nexassessments | int (3) | 0 | if useexamples == 1: the number of required assessments of teacher examples (0 = all, >0 that number is enough)
examplesmode | int (2) | 0 | 0 => example assessments are voluntary, 1 => examples must be assessed before submission, 2 => examples are available after own submission and must be assessed before the peer/self assessment phase
teacherweight | int (3) | 1 | the weight of the teacher's assessments
agreeassessments | int (2) | 0 | boolean - determines whether the author may comment on assessments and agree/disagree with them
hidegrades | int (2) | 0 | boolean - if agreeassessments == 1, should the grades be hidden from the author? If hidden, only comments are visible
assessmentcomps | int (3) | | comparison of assessments = required level of assessment similarity (0 => very lax, 1 => lax, 2 => fair, 3 => strict, 4 => very strict)
submissionstart | int (10) | 0 | 0 = will be started manually, >0 the timestamp of the start of the submission phase
submissionend | int (10) | 0 | 0 = will be closed manually, >0 the timestamp of the end of the submission phase
assessmentstart | int (10) | 0 | 0 = will be started manually, >0 the timestamp of the start of the assessment phase
assessmentend | int (10) | 0 | 0 = will be closed manually, >0 the timestamp of the end of the assessment phase
releasegrades | int (10) | 0 | 0 = will be released manually, >0 the timestamp when the final grades are published
password | char (255) | | the access password
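For readability, the integer codes of the phase field would map to constants along these lines (the names are illustrative, not the final ones):

```php
// Illustrative constants for the workshop.phase field.
define('WORKSHOP_PHASE_NOTAVAILABLE', 0);
define('WORKSHOP_PHASE_SUBMISSION',   1);
define('WORKSHOP_PHASE_ASSESSMENT',   2);
define('WORKSHOP_PHASE_CLOSED',       3);
```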
workshop_submissions
Info about the submission and the aggregation of the grade for submission, the grade for assessment and the final grade. Both the grade for submission and the grade for assessment can be overridden by the teacher. The final grade is always the sum of them. All grades are stored on a 0-100 scale.
Field | Type | Default | Description
---|---|---|---
id | int (10) | | auto-numbered
workshopid | int (10) | | the id of the workshop instance
example | int (2) | 0 | is this submission an example from the teacher
userid | int (10) | | the author of the submission
timecreated | int (10) | | timestamp when the work was submitted for the first time
timemodified | int (10) | | timestamp when the submission was last updated
grade | number (10,5) | NULL | grade for the submission, calculated as the average of the peer assessments; a number from the interval 0..100. If NULL, the grade for submission has not been aggregated yet
gradeover | number (10,5) | NULL | grade for the submission manually overridden by a teacher, always from the interval 0..100. If NULL, the grade is not overridden
gradeoverby | int (10) | NULL | the id of the user who overrode the grade for submission
gradinggrade | number (10,5) | NULL | grade for assessment calculated by the module; a number from the interval 0..100. If NULL, the grade for assessment has not been aggregated yet
workshop_assessments
Info about a made assessment and the automatically calculated grade for it. The grade can always be overridden by the teacher.
Field | Type | Default | Description
---|---|---|---
id | int (10) | | auto-numbered
submissionid | int (10) | | the id of the assessed submission
userid | int (10) | | the id of the reviewer who created this assessment
timecreated | int (10) | 0 | if 0, the assessment was allocated but the reviewer has not assessed yet; if greater than 0, the timestamp of when the reviewer assessed for the first time
timemodified | int (10) | 0 | if 0, the assessment was allocated but the reviewer has not assessed yet; if greater than 0, the timestamp of when the reviewer assessed for the last time
timeagreed | int (10) | 0 | if 0, the assessment has not been agreed by the author; if greater than 0, the timestamp of when the assessment was agreed by the author
grade | number (10,5) | NULL | the aggregated grade for the submission suggested by the reviewer, computed from the values assigned to the assessment dimensions. If NULL, it has not been aggregated yet
gradinggrade | number (10,5) | NULL | the computed grade for this assessment. If NULL, it has not been computed yet
gradinggradeover | number (10,5) | NULL | grade for the assessment manually overridden by a teacher, always from the interval 0..100. If NULL, the grade is not overridden
gradinggradeoverby | int (10) | NULL | the id of the user who overrode the grade for assessment
generalcomment | text (medium) | | comment from the reviewer
generalcommentformat | int (3) | 0 | the format of the generalcomment field
teachercomment | text (medium) | | comment from the teacher, for example the reason why the grade for assessment was overridden
teachercommentformat | int (3) | 0 | the format of the teachercomment field
workshop_grades
How the reviewers filled in the grading forms: the given (sub)grades and comments.
Field | Type | Default | Description
---|---|---|---
id | int (10) | | auto-numbered
assessmentid | int (10) | | the assessment this grade is part of
strategy | varchar (30) | not null | the type of the grading strategy used for this grade (notgraded, accumulative, noerrors, rubric)
dimensionid | int (10) | | foreign key referencing the dimension id in one of the grading strategy tables
grade | int (10) | | the given grade in the referenced assessment dimension
peercomment | text (medium) | | the reviewer's comment on the grade value
peercommentformat | int (3) | 0 | the format of the peercomment field
workshop_forms_<strategy>
In Workshop 2.0, all grading strategies store their data in their own tables (modularity, avoiding some if's and switch'es in the code). It should be easy to add additional grading strategies.
workshop_forms_nograding
The assessment dimension definitions of the No grading strategy forms.

Field | Type | Default | Description
---|---|---|---
id | int (10) | | auto-numbered
workshopid | int (10) | | the id of the Workshop instance where this dimension is used as a part of the evaluation form
sort | int (10) | 0 | the order of the dimension in the form
description | text (medium) | | the description of the dimension
descriptionformat | int (3) | 0 | the format of the description field
workshop_forms_noerrors
The dimension definitions of Number of errors grading strategy forms.
Field | Type | Default | Description
---|---|---|---
id | int (10) | | auto-numbered
workshopid | int (10) | | the id of the Workshop instance where this dimension is used as a part of the evaluation form
sort | int (10) | | the order of the element in the form
description | text (medium) | | the description of the element
descriptionformat | int (3) | | the format of the description field
grade0 | char (50) | NULL | the word describing the negative evaluation (like Poor, Missing, Absent etc.). If NULL, it defaults to the translated string False
grade1 | char (50) | NULL | the word describing the positive evaluation (like Good, Present, OK etc.). If NULL, it defaults to the translated string True
weight | int (5) | 1 | the weight of this element
workshop_forms_noerrors_map
This maps the number of negative responses to a percentage grade for submission.

Field | Type | Description
---|---|---
id | int (10) unsigned not null seq | |
workshopid | int (10) unsigned not null | the id of the workshop
nonegative | int (10) unsigned not null | the number of negative responses given by the reviewer
grade | int (4) unsigned not null | the percentage grade 0..100 for this number of negative responses
workshop_forms_accumulative
The evaluation element definitions of the Accumulative grading strategy forms.

Field | Type | Default | Description
---|---|---|---
id | int (10) | | auto-numbered
workshopid | int (10) | | the id of the Workshop instance where this element is used as a part of the evaluation form
sort | int (10) | | the order of the element in the form
description | text (medium) | | the description of the element
descriptionformat | int (3) | | the format of the description field
grade | int (10) | | if greater than 0, the value is the maximum grade on a scale 0..grade; if less than 0, its absolute value is the id of a record in the scale table; if 0, no grading is possible for this element, just commenting
weight | int (5) | 1 | the weight of the element
workshop_forms_rubric
The evaluation element definitions of the Rubric grading strategy forms.

Field | Type | Default | Description
---|---|---|---
id | int (10) | | auto-numbered
workshopid | int (10) | | the id of the Workshop instance where this element is used as a part of the evaluation form
sort | int (10) | 0 | the order of the element in the form
description | text (medium) | | the description of the element
descriptionformat | int (3) | 0 | the format of the description field
workshop_forms_rubric_levels
The definition of rubric rating scales.
Field | Type | Default | Description
---|---|---|---
id | int (10) | | auto-numbered
dimensionid | int (10) | | the criterion this level is part of
grade | int (10) | | the grade representing this level
description | text (medium) | | the definition of this level
descriptionformat | int (3) | 0 | the format of the description field
Capabilities
Capability | Description | Legacy roles with this capability set to allow by default
---|---|---
mod/workshop:edit | Can set up the workshop via mod_form; it does NOT mean "do anything". Can access the activity even if it is not available yet. (POSTFIX: no, this is what 'moodle/course:manageactivities' does. Removal candidate.) | editingteacher, admin
mod/workshop:switchphase | Can change the phase of the workshop, e.g. start the submission phase, close the assessment phase etc. Can access the activity even if it is not available yet | editingteacher, teacher, admin
mod/workshop:editdimensions | Can modify the grading strategy evaluation forms. Can access the activity even if it is not available yet | editingteacher, admin
mod/workshop:view | Can access the activity if it is set as available | guest, student, teacher, editingteacher, admin
mod/workshop:submit | Can submit own work | student
mod/workshop:submitexamples | Can submit example submissions and assess them | teacher, editingteacher, admin
mod/workshop:allocate | Can allocate submissions for review | teacher, editingteacher, admin
mod/workshop:viewauthornames | Can see the name of a submission author | student, editingteacher, teacher, admin
mod/workshop:viewreviewernames | Can see the name of a submission reviewer | editingteacher, teacher, admin
mod/workshop:peerassess | Can be allocated as a peer-assessor of a submission | student
mod/workshop:viewallsubmissions | Allowed to view all submissions. Applies to the current user's group only, or - if the user is allowed to access all groups - to any submission | teacher, editingteacher, admin
mod/workshop:assessallsubmissions | Allowed to view and assess all submissions and to override grades. Applies to the current user's group only, or - if the user is allowed to access all groups - to any submission and assessment | teacher, editingteacher, admin
mod/workshop:viewgradesbeforeagreement | Can see grades even before the assessment has been agreed. Applies only if authors must agree with comments | teacher, editingteacher, admin
Other capabilities that are used
- moodle/site:accessallgroups - e.g. a user with this capability set to allow can assess submissions in other groups when the workshop is set to separate groups. Should be set to allow for teachers and editingteachers
File API integration
- filearea "workshop_submission" used for images embedded in the submission editor, itemid = submissionid
- filearea "workshop_attachment" used for submission attachments, itemid = submissionid
Repository/portfolio API integration
It is still to be decided if and how this will be implemented.
Backwards compatibility issues
- The submission phase and the assessment phase can't overlap any more. This makes some settings (like "Allow resubmission" or "Overall allocation") obsolete.
- No League table of submitted work - can be replaced by a block or added as a feature later
- No optional adjustment of the final grade for submission can be suggested by the student. If such a feature is demanded by the community, it will be added later as a standard feature for all grading strategies (and not only Rubric, as it is now).
Out of scope
These are not goals of this project (they may be discussed later for 2.1 etc.):
- Generally, no new/other grading strategies are to be implemented in the current phase
- No new Workshop features will be implemented - remember this is a revitalization of the current behaviour, not a new module
- No group grading support - every student submits his/her individual work
- Outcomes integration - it seems logical to me to integrate Outcomes directly into the Workshop. What about an Outcomes grading strategy subplugin, so that peers can assess using outcome descriptions and scales?
- Tim's proposal (aka "David, I had a crazy idea about the workshop module rewrite."): a way to add peer assessment to anything in Moodle, like forum posts, wiki pages, database records etc. before they are published to other students
- Petr's proposal: TurnItIn or similar service integration
To be discussed/decided
- Backwards-commenting - there is a Workshop mode where the author has to agree with the assessment for it to be taken into account. Basically, the discussion between the author and the reviewer has to be stored somewhere. We can either create a new "workshop_comments" table (as in the pre-2.0 version) or use the "post" core table.
Other links and resources
- Moodle Workshop Guide by Laura M. Christensen © 2007
- MDL-17827 Workshop upgrade/conversion from 1.9 to 2.0 (META)
- Yet another attempt to rewrite Workshop for 2.0 forum thread
- New Workshop Module forum thread
- the new workshop module forum thread - a lot of interesting feature requests
- John Hamer, Catherine Kell, Fiona Spence. Peer Assessment Using Aropä. Appeared in ACE'07 (270kb PDF).
- John Hamer, Paul Denny, Andrew Luxton-Reilly. Contributing Student Pedagogy. Glasgow Caledonian University, 27 November 2008.
- Kathy Schrock's Guide for Educators - Examples of Assessment Rubrics
Diagrams
Credits
Many thanks to Stephan Rinke for his valuable comments and ideas.