Using analytics
== Overview ==
The Moodle Learning Analytics API is an open system that can become the basis for a very wide variety of models. Models can contain indicators (a.k.a. predictors), targets (the outcome we are trying to predict), insights (the predictions themselves), notifications (messages sent as a result of insights), and actions (offered to recipients of messages, which can become indicators in turn).
Most learning analytics models are not enabled by default. Models should only be enabled after considering the institutional goals they are meant to support. When selecting or creating an analytics model, the following questions are important:
- What outcome do we want to predict? Or what process do we want to detect? (Positive or negative)
- How will we detect that outcome/process?
- What clues do we think might help us predict that outcome/process?
- What should we do if the outcome/process is very likely? Very unlikely?
- Who should be notified? What kind of notification should be sent?
- What opportunities for action should be provided on notification?
Moodle can support multiple prediction models at once, even within the same course. This can be used for A/B testing to compare the performance and accuracy of multiple models.
New in Moodle 3.8: Moodle learning analytics supports two types of models.
- Machine-learning based models, including predictive models, make use of AI models trained using site history to detect or predict hidden aspects of the learning process.
- "Static" models use a simpler, rule-based system of detecting circumstances on the Moodle site and notifying selected users.
Moodle core ships with three models: ''Students at risk of dropping out'' and the static models ''Upcoming activities due'' and ''No teaching''. Additional prediction models can be created by using the Analytics API or by using the new web UI. Each model is based on the prediction of a single, specific "target," or outcome (whether desirable or undesirable), based on a number of selected indicators.
You can view and manage your system models from Site Administration > Analytics > Analytics models.
== Existing models ==
Other models can be added to your system by installing plugins or by using the web UI (see below). Existing models can be examined and altered from the "Analytics models" page in Site administration:
These are some of the actions you can perform on an existing model:
- '''Get predictions''': Train machine learning algorithms with the new data available on the system and get predictions for ongoing courses. Predictions are not necessarily limited to ongoing courses; this depends on the model.
- '''View insights''': Once you have trained a machine learning algorithm with the data available on the system, you will see insights (predictions) here for each "analysable". In the included model "Students at risk of dropping out", insights may be selected per course. As with predictions, insights are not limited to ongoing courses; this depends on the model.
- '''Evaluate''': Evaluation is normally done in the background as a series of scheduled tasks, but you can trigger the start of the process from this menu. Evaluating a prediction model gathers all the training data available on the site, calculates all the indicators and the target, and passes the resulting dataset to the machine learning backend. The process splits the dataset into training data and testing data and calculates the model's accuracy. Note that evaluation uses all information available on the site, even if it is very old; because the site state changes over time, indicators are calculated most reliably immediately after training data becomes available, so the accuracy returned by the evaluation process may be lower than the model's real accuracy. The metric used to describe accuracy is the Matthews correlation coefficient (a metric used in machine learning for evaluating binary classifications).
You can also force the model evaluation process to run from the command line:
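For example, using the CLI script shipped with the tool_analytics plugin (a sketch assuming a recent Moodle release; run the script with --help to list the options your version supports, and replace the model ID with the ID shown on the "Analytics models" page):

<syntaxhighlight lang="bash">
php admin/tool/analytics/cli/evaluate_model.php --modelid=1
</syntaxhighlight>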
- '''Log''': View previous evaluation logs, including the model accuracy as well as other technical information generated by the machine learning backends, such as ROC curves, learning curve graphs, the TensorBoard log directory, or the model's Matthews correlation coefficient. The information available depends on the machine learning backend in use.
- '''Edit''': You can edit a model by modifying its list of indicators or its time-splitting method. All previous predictions are deleted when a model is modified. Models based on assumptions (static models) cannot be edited.
- '''Enable / Disable''': The scheduled task that trains machine learning algorithms with the new data available on the system, and gets predictions for ongoing courses, skips disabled models. Predictions previously generated by a disabled model are not available until the model is enabled again.
- '''Export''': Export your site's training data to share it with partner institutions or to use it on a new site. The Export action generates a CSV file containing model data about indicators and weights, without exposing any of your site-specific data. We will be asking for submissions of these model files to help evaluate the value of models on different kinds of sites; please see the Learning Analytics community for more information.
- '''Invalid site elements''': Reports which elements in your site cannot be analysed by this model.
- '''Clear predictions''': Clears all of the model's predictions and training data.
=== Students at risk of dropping out ===
This model predicts students who are at risk of non-completion (dropping out) of a Moodle course, based on low student engagement. In this model, the definition of "dropping out" is "no student activity in the final quarter of the course". The prediction model uses the Community of Inquiry model of student engagement, consisting of three parts:
- Cognitive presence
- Social presence
- Teaching presence
This prediction model is able to analyse and draw conclusions from a wide variety of courses, and apply those conclusions to make predictions about new courses. The model is not limited to making predictions about student success in exact duplicates of courses offered in the past. However, there are some limitations:
- This model requires a certain amount of in-Moodle data with which to make predictions. At present, only core Moodle activities are included in the indicator set (see below). Courses which do not include several core Moodle activities per “time slice” (depending on the time-splitting method) will have poor predictive support in this model. This prediction model will be most effective with fully online courses, or “hybrid”/“blended” courses with substantial online components.
- This prediction model assumes that courses have fixed start and end dates, and is not designed to be used with rolling-enrolment courses. Models that support a wider range of course types will be included in future versions of Moodle. Because of this design assumption, it is very important to set course start and end dates properly for each course that uses this model. If start and end dates are not properly set for both past and ongoing courses, predictions cannot be accurate. Because the course end date field was only introduced in Moodle 3.2, and some courses may not have set a course start date in the past, we include a command line interface script:
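For example, assuming a recent Moodle release (the exact script name and options may differ between versions; check the admin/tool/analytics/cli/ directory of your installation):

<syntaxhighlight lang="bash">
php admin/tool/analytics/cli/guess_course_start_and_end.php
</syntaxhighlight>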
This script attempts to estimate past course start and end dates by looking at student enrolments and students' activity logs. After running this script, please check that the estimated start and end dates are reasonably correct.
=== Upcoming activities due ===
The static “Upcoming activities due” model checks for activities with upcoming due dates and displays them on the user’s calendar page.
=== No teaching ===
This model's insights inform site managers of courses with an upcoming start date in which no teaching activity is expected. This is a simple "static" model: it does not use a machine learning backend to return predictions, but bases them on assumptions, e.g. that there is no teaching if there are no students.
== Creating and editing models ==
There are four components of a model that can be defined through the web UI:
=== Target ===
[[File:create_model_2.png|thumb]]Targets represent a “known good”-- something about which we have very strong evidence of value. Targets must be designed carefully to align with the curriculum priorities of the institution. Each model has a single target. The “Analyser” (the context in which targets will be evaluated) is automatically controlled by the Target selection. See [[Learning analytics targets]] for more information.
=== Indicators ===
Indicators are data points that may help to predict targets. We are free to add many indicators to a model to find out if they predict a target-- the only limit is that the data must be available within Moodle and must have a connection to the context of the model (e.g. the user, the course, etc.). The machine learning “training” process will determine how much weight to give to each indicator in the model.
We do want to make sure any indicators we include in a production model have a clear purpose and can be interpreted by participants, especially if they are used to make prescriptive or diagnostic decisions.
Indicators are constructed from data, but the data points need to be processed to make consistent, reusable indicators. In many cases, events are counted or combined in some way, though other ways of defining indicators are possible and will be discussed later. How the data points are processed involves important assumptions that affect the indicators. In particular, indicators can be absolute, meaning that the value of the indicator stays the same no matter what other samples are in the context, or relative, meaning that the indicator compares the sample to others in the context.
See [[Learning analytics indicators]] for more information.
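To make this concrete, the sketch below shows the general shape of an indicator class in the Analytics API. It is a simplified, hypothetical example: the class name, language strings, and the direct log-table query are illustrative assumptions, not code from Moodle core.

<syntaxhighlight lang="php">
<?php
// A hypothetical binary indicator: did the user access anything during
// the analysed time range? Simplified for illustration; real core
// indicators use helper APIs rather than querying the log table directly.
class example_any_access extends \core_analytics\local\indicator\binary {

    public static function get_name() : \lang_string {
        // Hypothetical language string identifier.
        return new \lang_string('indicator:anyaccess', 'tool_example');
    }

    protected function calculate_sample($sampleid, $sampleorigin,
            $starttime = false, $endtime = false) {
        global $DB;

        // Fetch the user record associated with this sample.
        $user = $this->retrieve('user', $sampleid);

        // Simplified check against the standard log store.
        $select = 'userid = :userid AND timecreated >= :start AND timecreated <= :end';
        $params = ['userid' => $user->id, 'start' => $starttime, 'end' => $endtime];
        $any = $DB->record_exists_select('logstore_standard_log', $select, $params);

        // Binary indicators return the maximum value (1) or the minimum value (-1).
        return $any ? self::get_max_value() : self::get_min_value();
    }
}
</syntaxhighlight>

During training, the weight given to an indicator like this is learned from the data rather than set by hand.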
- "Single range" indicates that predictions will be made once, but will take into account a range of time, e.g. one prediction at the end of a course. The prediction is made at the end of the range.
- "No splitting" indicates that the model generates an insight based on a snapshot of data at a given moment, e.g. the "no teaching" model looks to see if there are currently any teachers or students assigned to a course at a defined point before the start of the term, and issues one insight warning the site administrator that no teaching is likely to occur in that empty course.
- "Accumulative" methods differ in how much data is included in the prediction. Both "quarterly" and "quarterly accumulative" predictions are made at the end of each quarter of a time span (e.g. a course), but in "quarterly," only the information from the most recent quarter is included in the prediction, whereas in "quarterly accumulative" all information up to the present is included in the prediction.
Single range and no splitting methods do not have time constraints; they run during the next scheduled task execution, although individual models apply their own restrictions (e.g. requiring that a course has finished before it is used for training, or that a course has some data and students before it is used for predictions). 'Single range' and 'No splitting' are not appropriate for a model like students at risk of dropping out of courses. They are intended for models like 'No teaching' or 'Spammer user', where you want just one prediction. To explain this with an example: the 'No teaching' model uses the 'Single range' analysis interval; its target class (the main PHP class of the model) only accepts courses that will start during the next week, and once a 'No teaching' insight has been provided for a course, no further 'No teaching' insights are provided for that course.
The difference between 'Single range' and 'No splitting' is that models analysed using 'Single range' are limited to the start and end dates of the analysable element (the course, in the students at risk model), while 'No splitting' has no time constraints and all data available in the system is used to calculate the indicators.
Note: Although the examples above refer to courses, analysis intervals can be used on any analysable element. For example, enrolments can have start and end dates, so an analysis interval could be applied to generate predictions about aspects of an enrollment. For analysable elements with no start and end dates, different analysis intervals would be needed. For example, a "weekly" analysis interval could be applied to a model intended to predict whether a user is likely to log in to the system in the future, on the basis of activity in the previous week.
=== Predictions processor ===
This setting controls which machine learning backend and algorithm will be used to estimate the model. Moodle currently supports two predictions processors:
- PHP machine learning backend - implements logistic regression using php-ml (contributed by Moodle)
- Python machine learning backend - implements single hidden layer feed-forward neural network using TensorFlow.
You can only choose from the predictions processors enabled on your site.
Each prediction processor may support multiple algorithms in the future.
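For the Python predictions processor to be enabled, its server-side dependency must be installed first. As a sketch (the required package version depends on your Moodle release; check the processor's documentation):

<syntaxhighlight lang="bash">
pip install moodlemlbackend
</syntaxhighlight>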
== Training models ==
Machine-learning based models require a training process using previous data from the site. "Static" models make use of sets of pre-defined rules, and do not need to be trained.
There are two main categories of machine-learning based analytics models: supervised and unsupervised.
- Supervised models must be trained by using a data set with the target values already identified. For example, if the model will predict course completion, the model must be trained on a set of courses and enrollments with known completion status.
- Unsupervised models look for patterns in existing data, e.g. grouping students based on similarities in their behavior in courses.
At the present time, Moodle Learning Analytics only supports supervised models.
While we hope to include pre-trained models with the Moodle core installation in the future, at the current time we do not have large enough data sets to train a model for external use. (If you would like to help contribute data for this effort, please see the Moodle Learning Analytics Working Group.)
The model code includes criteria for "training" and "prediction" data sets. For example, only courses with enrolled students and an end date in the past can be used to train the Students at risk of dropping out model, because it is impossible to determine whether a student dropped out until a course has ended. On the other hand, for this model to make predictions, there must be a course with students enrolled that has started, but not yet ended.
The training set is defined in the php code for the Target. Models can only be trained if a site contains enough data matching the training criteria. Most models will require Moodle log data for the time period covering the events being analysed. For example, the Students at risk of dropping out model can only be trained if there is log data covering student activity in the courses that meet the training criteria. It is possible to train a model on an "archive" system and then use the model on a production system.
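To illustrate how these criteria are expressed, the sketch below follows the shape of a target class in the Analytics API. It is a hypothetical, simplified example: the class name and rejection messages are invented for illustration, and the shipped Students at risk of dropping out target performs more checks than this.

<syntaxhighlight lang="php">
<?php
// A hypothetical binary target sketching training/prediction criteria.
// A full target would also define get_analyser_class(), calculate_sample(),
// and other methods required by the core_analytics API.
class example_course_dropout extends \core_analytics\local\target\binary {

    public static function get_name() : \lang_string {
        // Hypothetical language string identifier.
        return new \lang_string('target:coursedropout', 'tool_example');
    }

    /**
     * Returning true accepts the course; returning a string rejects it,
     * with that string as the reason.
     */
    public function is_valid_analysable(\core_analytics\analysable $course, $fortraining = true) {
        if (!$course->get_start()) {
            // Courses without a start date cannot be analysed at all.
            return 'No course start date set';
        }

        if ($fortraining) {
            // Training needs a known outcome, so the course must have finished.
            if (!$course->get_end() || $course->get_end() > time()) {
                return 'Course has not finished yet';
            }
        } else {
            // Predictions only make sense for courses that are still running.
            if ($course->get_end() && $course->get_end() < time()) {
                return 'Course has already finished';
            }
        }

        return true;
    }
}
</syntaxhighlight>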
=== Triggering model evaluation ===
Model evaluation is normally run in the background as a series of scheduled tasks, but it can also be triggered manually via the Evaluate action described above. The metric used to describe accuracy is the Matthews correlation coefficient (a metric used in machine learning for evaluating binary classifications).
You can also force the model evaluation process to run from the command line:
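As in the example above (the path and options assume a recent Moodle release; run the script with --help to confirm):

<syntaxhighlight lang="bash">
php admin/tool/analytics/cli/evaluate_model.php --modelid=1
</syntaxhighlight>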
=== Review evaluation results ===
Check for warnings about evaluation completion, model accuracy, and model variability.
You can also check the invalid site elements list to verify which site elements were included or excluded in the analysis. If you see a large number of unexpected elements in this report, it may mean that you need to check your data. For example, if courses don't have appropriate start and end dates set, or enrolment data has been purged, the system may not be able to include data from those courses in the model training process.
== Exporting and Importing models ==
Models can also be exported from one site and imported to another.
=== Exporting models ===
You can export the data used to train the model, or the model configuration and the weights of the trained model.
Note: the model weights are completely anonymous, containing no personally identifiable data. This means it is safe to share them with researchers without worrying about privacy regulations.