Ollama
New feature in Moodle 4.5!
Introduction
Moodle 4.5 and its AI subsystem come with a basic set of supported LLM providers. One of them is the OpenAI provider.
While it's a "proprietary" API, it has become sort of a "de facto" standard, and many other LLM providers, both open and commercial, as well as tools like AI proxies or routers, come with an OpenAI API-compatible layer to use their services.
That enables you to use virtually any model out there (text completions, images, embeddings, ...), as long as the provider has implemented those OpenAI API-compatible entry points. Just search for "OpenAI compatible" and you will find them.
As an example, in this article we'll be using Ollama, one of the most popular open LLM providers. Apart from its own API, it also comes with OpenAI compatibility, supports many open models, can be installed locally (or remotely), works on CPUs (slow!) and GPUs (faster), and is really easy to manage.
But, again, don't forget that this is only an example and the very same applies to any other provider or tool supporting the OpenAI API. And they are legion! Depending on your needs, usage, monitoring, budget, LLM features... you may end up using any of them.
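To make "OpenAI API-compatible" more concrete: the request shape is always the same, and switching providers usually only means changing the base URL, the model name and, where the provider requires it, an API key. A minimal sketch against a hypothetical remote provider (the host, model and key below are placeholders, not real values):
curl --location 'https://llm.example.com/v1/chat/completions' \
  --header 'Content-Type: application/json' \
  --header 'Authorization: Bearer YOUR_API_KEY' \
  --data '{
    "model": "some-model",
    "messages": [ { "role": "user", "content": "I love Moodle" } ]
  }'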
Configuring Moodle to use a locally installed Ollama
Requirements
- An instance of Moodle (4.5 or later) is installed and running.
- Ollama is installed on the same computer where Moodle is running.
- Some LLM model has been pulled and is available to Ollama; we'll be using llama3.1 in this example (see the pull example after this list).
- Everything is working OK. You can test it by executing this in the terminal:
ollama run llama3.1 "I love Moodle"
Or, additionally, the following raw HTTP call can be used to verify that the service is also available via the OpenAI API:
curl --location 'http://127.0.0.1:11434/v1/chat/completions' \
  --header 'Content-Type: application/json' \
  --data '{
    "model": "llama3.1",
    "messages": [ { "role": "user", "content": "I love Moodle" } ]
  }'
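If the model hasn't been pulled yet, the Ollama CLI can download it and confirm that it's available (a quick sketch using the same llama3.1 model as the rest of this article):
# Download the model and list the locally available models.
ollama pull llama3.1
ollama list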
Moodle HTTP security configuration
By default, as a security measure, Moodle blocks cURL access to local network hosts. That includes the usual local networks, like 192.168.1.x or 10.x.y.z, ... and also 127.0.0.1 or localhost. Similarly, by default, only ports 80 and 443 are allowed.
In our case, remember that this is a locally installed Ollama, so we need to allow access to 127.0.0.1 (or localhost), and we also need to allow port 11434. Note that those details match the curl command that we used in the previous section.
To do so, just go to Admin -> General -> Security -> HTTP security (or use the admin search utility to find it) and then proceed to apply these changes:
- In the "cURL blocked hosts list" (curlsecurityblockedhosts) setting, remove both the 127.0.0.1/8 and localhost entries.
- In the "cURL allowed ports list" (curlsecurityallowedport) setting, add the 11434 port.
- Save changes.
Done! Now Moodle should be able to connect to 127.0.0.1:11434, which matches our local Ollama installation.
Note that if you're using any other OpenAI compatible local provider, or if you're running it on another local server, you may need to allow other addresses (by IP or by name) and ports, but the mechanism is exactly the same.
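For admins who prefer the command line, the same settings can likely be inspected and changed with Moodle's configuration CLI. A sketch, assuming the Moodle code lives in /var/www/moodle and that your Moodle version ships admin/cli/cfg.php (check your installation before relying on it):
cd /var/www/moodle
# Inspect the current values first.
php admin/cli/cfg.php --name=curlsecurityblockedhosts
php admin/cli/cfg.php --name=curlsecurityallowedport
# --set replaces the whole value, so include every entry you still need, e.g.:
# php admin/cli/cfg.php --name=curlsecurityallowedport --set=$'443\n11434'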
Moodle OpenAI provider configuration
Now that we have the local provider (Ollama) running and Moodle configured to access it, we are going to configure the OpenAI provider. These are the steps to follow:
- Go to Admin -> General -> AI -> Manage settings for AI providers
- Enable the "OpenAI API provider" by clicking on the toggle.
- Click the "OpenAI API provider" settings link.
- In the "Provider actions", enable both the "Generate text" and "Summarise text" actions, by clicking their "Enabled" toggle. Note that, in the case of Ollama, it doesn't provide any LLM able to generate images, so we cannot enable it for our case. Other OpenAI compatible provider may be able to do so.
- Both for the, now enabled, "Generate text" and "Summarise text" actions, click on their "Settings" link and proceed to configure the following:
- Model: llama3.1
- API endpoint: http://127.0.0.1:11434/v1/chat/completions
- Save changes.
And that's all. Now we should be able to use our local Ollama provider for the AI placements available in our Moodle instance: HTML Editor and Course Assistance placements. Don't forget to enable and configure their settings at Admin -> General -> AI -> Manage settings for AI placements.
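If the placements don't answer as expected, it can help to confirm that Moodle's requests are actually reaching Ollama. A couple of hedged suggestions, assuming the standard Linux install where Ollama runs as a systemd service named ollama:
# Show which models are currently loaded and answering requests.
ollama ps
# Follow the service log while triggering an AI action from the HTML editor.
journalctl -u ollama -f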
A brief note about security
Note that, by default, Ollama only binds (listens) to 127.0.0.1 (or localhost) on port 11434. In other words, its main use case is to be installed on the same host as the client (Moodle in this case), and that's what the configuration above covers. It can be considered a "secure" configuration as far as nobody else (only the same host) can access it.
If you want to install Ollama on another host, then you will have to open it up for remote access (by setting OLLAMA_HOST, see the Ollama documentation). But that opens it to remote access from anywhere by default, and that is an "insecure" configuration to all effects, because Ollama doesn't support any authorisation scheme (API keys, auth headers, ...). Be warned about that and ask your IT experts before allowing such remote access; they will surely help you to protect the Ollama installation using different mechanisms (firewalls, reverse proxies, auth layers, ...).
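For reference only: on the standard Linux install, where Ollama runs as a systemd service named ollama, exposing it on all interfaces is typically done by setting OLLAMA_HOST in a service override. Treat this as a sketch and don't apply it without the protections mentioned above:
# Create a drop-in override for the service and add these lines, then save:
#   [Service]
#   Environment="OLLAMA_HOST=0.0.0.0:11434"
sudo systemctl edit ollama
sudo systemctl restart ollama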
Finally, once again, note that this documentation is 100% focussed on a local (same host) Ollama installation. But there are dozens of alternative OpenAI compatible providers out there, and many of them do support authorisation or virtual API keys, offer more security for remote access, support different LLMs, ... so it's recommended to try a few of them depending on each one's own requirements and infrastructure availability.