Using Workshop


This page explains how students and teachers can use the Workshop activity and explores ways to make the most of it in your Moodle course.

Workshop phases

The workflow for the Workshop module can be viewed as having five phases. A typical Workshop activity can cover days or even weeks, and the teacher switches the activity from one phase to another.

A typical Workshop follows a straight path from Setup through Submission, Assessment and Grading/Evaluation to the Closed phase. However, an advanced recursive path is also possible.

The progress of the activity is visualized in the so-called Workshop planner tool. It displays all Workshop phases and highlights the current one. It also lists all the tasks the user has in the current phase, indicating whether each task is finished, not yet finished or failed.


Setup phase

In this initial phase, Workshop participants cannot do anything (they can modify neither their submissions nor their assessments). Course facilitators use this phase to change Workshop settings, modify the grading strategy or tweak assessment forms. You can switch to this phase any time you need to change the Workshop settings and prevent users from modifying their work.

Submission phase

In the submission phase, Workshop participants submit their work. Access control dates can be set so that, even while the Workshop is in this phase, submitting is restricted to a given time frame. A submission start date (and time), a submission end date (and time), or both can be specified.

The workshop submissions report allows teachers to see who has submitted and who has not, and to filter by submission and last modified:

[Screenshot: Workshop submissions report]

A student is able to delete their own submission as long as they can still edit it and it has not been assessed. A teacher can delete any submission at any time; however, if it has been assessed, they will be warned that the assessments will also be deleted and reviewers' grades may be affected.

Assessment phase

If the Workshop uses the peer assessment feature, this is the phase in which Workshop participants assess the submissions allocated to them for review. As in the submission phase, access can be controlled by a specified date and time from when and/or until when assessment is allowed.

Grading evaluation phase

The major task during this phase is to calculate the final grades for submissions and for assessments, and to provide feedback for authors and reviewers. Workshop participants can no longer modify their submissions or their assessments in this phase. Course facilitators can manually override the calculated grades. Also, selected submissions can be set as published so they become available to all Workshop participants in the next phase. See Workshop FAQ for instructions on how to publish submissions.

Closed

[Screenshot: A closed workshop]

Whenever the Workshop is switched into this phase, the final grades calculated in the previous phase are pushed into the course Gradebook, so the Workshop grades appear both in the Gradebook and in the Workshop itself. In this phase, participants may view their own submissions, the assessments of their submissions and any published submissions.

Workshop grading

The grades for a Workshop activity are obtained gradually over several stages and are then finalised. The following scheme illustrates the process (with information about grade values stored in the database).

[Image: The scheme of grades calculation in Workshop]


Participants get two grades, which are calculated during the Grading evaluation phase. The teacher can edit these grades while still in this phase; they will not go to the gradebook until the Workshop is closed in the final phase. Note that it is possible to move between phases, and even when the Workshop is closed, grades can be changed directly in the gradebook if necessary.

The list below explains how the grades display:

  • - (-) < Alice : An assessment has been allocated to Alice, but it has been neither done nor evaluated yet.
  • 68 (-) < Alice : Alice assessed the submission, giving a grade for submission of 68. The grade for assessment (grading grade) has not been evaluated yet.
  • 23 (-) > Bob : Bob's submission was assessed by a peer, receiving a grade for submission of 23. The grade for this assessment has not been evaluated yet.
  • 76 (12) < Cindy : Cindy assessed the submission, giving a grade of 76. The grade for this assessment has been evaluated as 12.
  • 67 (8) @ 4 < David : David assessed the submission, giving a grade for submission of 67 and receiving a grade for this assessment of 8. His assessment has a weight of 4.
  • 80 (20 / 17) > Eve : Eve's submission was assessed by a peer. Eve's submission received 80 and the grade for this assessment was calculated as 20. The teacher has overridden the grading grade to 17, probably with an explanation for the reviewer.

Grade for submission

The final grade for every submission is calculated as the weighted mean of the assessment grades given by all reviewers of this submission. The value is rounded to the number of decimal places set in the Workshop settings form. The teacher can influence the grade in two ways (a sketch of the calculation follows the list below):

  • by providing their own assessment, possibly with a higher weight than usual peer reviewers have
  • by overriding the grade to a fixed value
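To make the calculation concrete, here is a minimal sketch in Python (Moodle itself is written in PHP; the function name and signature here are invented for illustration). It computes the weighted mean of the reviewers' grades, applies the rounding configured in the Workshop settings, and lets a teacher override take precedence:

  # A minimal sketch, not Moodle's actual implementation: the final grade for
  # a submission is the weighted mean of the reviewers' grades, rounded to the
  # number of decimal places configured in the Workshop settings.

  def grade_for_submission(assessments, decimals=2, override=None):
      """assessments: list of (grade, weight) pairs given by reviewers.
      override: a fixed grade set by the teacher, which wins if present."""
      if override is not None:
          return round(override, decimals)
      total_weight = sum(weight for _, weight in assessments)
      weighted_sum = sum(grade * weight for grade, weight in assessments)
      return round(weighted_sum / total_weight, decimals)

  # Two peers give 68 and 23 with the default weight 1; the teacher adds an
  # assessment of 80 with weight 2, pulling the mean up.
  print(grade_for_submission([(68, 1), (23, 1), (80, 2)]))  # 62.75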

Grade for assessment

The grade for assessment tries to estimate the quality of the assessments that the participant gave to their peers. This grade (also known as the grading grade) is calculated automatically by the Workshop module, which tries to do part of a typical teacher's job.

During the grading evaluation phase, a Workshop subplugin is used to calculate the grades for assessment. Currently there is only one standard subplugin available called Comparison with the best assessment (other grading evaluation plugins can be found in the Moodle plugins directory). The following text describes the method used by this subplugin.

Grades for assessment are displayed in brackets () in the Workshop grades report. The final grade for assessment is calculated as the average of the particular grading grades.

There is no single formula describing the calculation; however, the process is deterministic. The Workshop picks one of the assessments as the best one, that is, the one closest to the mean of all assessments, and gives it a grade of 100%. Then it measures the 'distance' of all other assessments from this best one and gives them lower grades, depending on how much they differ from the best assessment (given that the best one represents a consensus of the majority of assessors). The parameter of the calculation is how strict the comparison should be, that is, how quickly grades fall as assessments differ from the best one.

If there are just two assessments per submission, the Workshop cannot decide which of them is 'correct'. Imagine you have two reviewers, Alice and Bob. They both assess Cindy's submission. Alice says it is rubbish and Bob says it is excellent. There is no way of deciding who is right, so the Workshop simply says: OK, you are both right, and I will give you both a grade of 100% for this assessment. To prevent this, you have two options:

  • Either provide an additional assessment, so that the number of assessors (reviewers) is odd and the Workshop is able to pick the best one. Typically, the teacher provides their own assessment of the submission to act as the judge.
  • Or decide that you trust one of the reviewers more. For example, you know that Alice is much better at assessing than Bob is. In that case, you can increase the weight of Alice's assessment, let us say to "2" (instead of the default "1"). For the purposes of the calculation, Alice's assessment will then be treated as if two reviewers had exactly the same opinion, and it is therefore likely to be picked as the best one (see the sketch below).
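For intuition, the toy sketch below (hypothetical Python, not Moodle's PHP code) shows how a higher weight breaks the tie: an assessment with weight 2 behaves like two identical assessments, pulling the mean towards it, so it gets picked as the best one. For simplicity it reduces each assessment to a single overall grade; as the next section explains, the real plugin compares individual responses instead.

  def pick_best(assessments):
      """assessments: dict mapping reviewer name -> (grade, weight)."""
      # Expand weights: a weight of 2 counts as two identical assessments.
      expanded = [g for g, w in assessments.values() for _ in range(w)]
      mean = sum(expanded) / len(expanded)
      # The best assessment is the one closest to the mean of all of them.
      return min(assessments, key=lambda r: abs(assessments[r][0] - mean))

  # Alice (weight 2) says 20, Bob (weight 1) says 90: the mean is about 43,
  # closer to Alice, so her assessment is picked as the best one.
  print(pick_best({"Alice": (20, 2), "Bob": (90, 1)}))  # Alice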

It's not final grades that are compared

It is very important to know that the grading evaluation subplugin Comparison with the best assessment does not compare the final grades. Regardless of the grading strategy used, every filled assessment form can be seen as an n-dimensional vector of normalized values. The subplugin therefore compares the responses to all assessment form dimensions (criteria, assertions, ...) and calculates the distance between two assessments using variance statistics.

To demonstrate this with an example, let us say you use the grading strategy Number of errors to peer-assess research essays. This strategy uses a simple list of assertions, and the reviewer (assessor) just checks whether each assertion is passed or failed. Let us say you define the assessment form using three criteria:

  1. Does the author state the goal of the research clearly? (yes/no)
  2. Is the research methodology described? (yes/no)
  3. Are references properly cited? (yes/no)

Let us say the author gets a 100% grade if all criteria are passed (that is, answered "yes" by the assessor), 75% if only two criteria are passed, 25% if only one criterion is passed, and 0% if the reviewer gives "no" for all three statements.

Now imagine the work by Daniel is assessed by three colleagues: Alice, Bob and Cindy. They all give individual responses to the criteria, in order:

  • Alice: yes / yes / no
  • Bob: yes / yes / no
  • Cindy: no / yes / yes

As you can see, they all gave a 75% grade to the submission. But Alice and Bob also agree in their individual responses, while the responses in Cindy's assessment are different. The evaluation method Comparison with the best assessment tries to estimate what a hypothetical, absolutely fair assessment would look like. In the Development:Workshop 2.0 specification, David refers to it as "how would Zeus assess this submission?", and we estimate it would be something like this (we have no other way):

  • Zeus: 66% yes / 100% yes / 33% yes

Then we try to find the assessments that are closest to this theoretically objective assessment. We realize that Alice's and Bob's are the best ones and give them a 100% grade for assessment. Then we calculate how far Cindy's assessment is from the best one. As you can see, Cindy's responses match the best assessment in only one criterion of the three, so Cindy's grade for assessment will not be as high.

The same logic applies, with appropriate adjustments, to all other grading strategies. The conclusion is that the grade given by the best assessor does not need to be the one closest to the average, as assessments are compared at the level of individual responses, not final grades.
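As an illustration, the sketch below replays the Alice/Bob/Cindy example in Python (Moodle's own code is PHP). The normalization of yes/no responses to 1.0/0.0, the use of the per-criterion mean as the hypothetical "Zeus" assessment, and the simple linear strictness knob are assumptions made for this example; the actual subplugin's formula is more involved.

  def variance_distance(a, b):
      # Mean squared difference between two response vectors.
      return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

  assessments = {
      "Alice": [1.0, 1.0, 0.0],
      "Bob":   [1.0, 1.0, 0.0],
      "Cindy": [0.0, 1.0, 1.0],
  }

  # 1. "Zeus": the per-criterion mean of all responses -> [0.67, 1.0, 0.33]
  dims = len(next(iter(assessments.values())))
  zeus = [sum(v[i] for v in assessments.values()) / len(assessments)
          for i in range(dims)]

  # 2. The best assessment is the one closest to Zeus.
  best = min(assessments, key=lambda r: variance_distance(assessments[r], zeus))

  # 3. Everyone else is graded by their distance from the best assessment.
  STRICTNESS = 1.0  # assumed knob; Workshop exposes comparison strictness settings
  for reviewer, responses in assessments.items():
      d = variance_distance(responses, assessments[best])
      print(f"{reviewer}: {max(0.0, 100.0 * (1.0 - STRICTNESS * d)):.0f}%")

Running this gives Alice and Bob 100% and Cindy 33%, matching the reasoning above. The final grade for assessment shown in the grades report would then be the average of such grading grades over all the assessments a participant made.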

Groups and Workshop

When a Workshop is used in a course with separate or visible groups (and groupings), it is possible to filter by group in a drop-down menu in the Assessment phase, on the manual allocation page, in the grades report and so on.

"Group filtering"
Group filtering drop down

See also

  • Example workshop with data: log in with username teacher and explore the grading and phases of a completed workshop on the Moodle School demo site.
  • Research paper Moodle Workshop activities support peer review in Year 1 Science: present and future by Julian M Cox, John Paul Posada and Russell Waldron
  • Using Moodle Workshop module forum
  • Using Moodle forum discussion [1] where David explains particular Workshop results