Moodle can be deployed in a cluster as described in various forum threads and docs pages; however, these approaches are hacks and neither performant nor well designed. The main reason is that Moodle allows only basic separation of the deployment stack's layers (database and application) and nothing further. The aim of this page is to evaluate the particular challenges of building a truly highly available, load-balanced Moodle cluster, and the development tasks required to overcome them.
This is mostly from my experience with moodle.org, but although moodle.org faces different problems from most educational-institution deployments, I think the core issues are the same.

Moodle can currently be balanced between servers as long as the servers are in sync. Sync can be accomplished in multiple ways, such as DRBD for storage or just rsync, but the problem is that every server must be in sync with all files at all times. Most cluster filesystems perform badly or have too much latency for this to be viable; a better alternative would be decentralised storage via a CDN or a remote bucket such as Amazon S3.

Other issues mostly revolve around database access. Sharding a database is possible but unlikely with MySQL, and requires a proxy for PostgreSQL. Moodle also lacks support for read slaves or any other means of reducing load on a single server. Lastly, while the Moodle 2.4 MUC should help with this, frontend caching support is minimal at best, and systems such as Varnish or Pound cannot work effectively with Moodle.
Suggestions are as follows:
- Rewrite the File API to be more modular, supporting backends such as the filesystem, S3, and Bitcask
- Allow read-slaving or data-partitioning awareness in Moodle
- Support heavy use of ETags in HTTP cache headers
- Standard location in wwwroot for plugins and language packs
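The ETag suggestion above amounts to the conditional-request flow sketched below. This is a minimal illustration, not Moodle code; the function names and the example page body are made up:

```python
import hashlib


def make_etag(body):
    """Derive a strong ETag from the response body."""
    return '"%s"' % hashlib.sha1(body).hexdigest()


def respond(body, if_none_match):
    """Return (status, headers, body), honouring If-None-Match."""
    etag = make_etag(body)
    if if_none_match == etag:
        # The client's cached copy is still valid: 304, no body sent.
        return 304, {"ETag": etag}, b""
    return 200, {"ETag": etag, "Cache-Control": "private, must-revalidate"}, body


# First request: full response plus an ETag the client can cache.
status, headers, _ = respond(b"<html>course page</html>", None)
# Revalidation: the client echoes the ETag back and gets a cheap 304.
status2, _, _ = respond(b"<html>course page</html>", headers["ETag"])
```

The win for a cluster is that revalidation can be answered without regenerating the page, and a frontend cache such as Varnish can do it without touching PHP at all.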
Additionally, a tiered caching architecture can massively reduce load on the database as well as on PHP, which may make queries fast enough to avoid FastCGI timeouts, allowing the use of Nginx or Lighttpd with php-fpm instead of Apache.
I'm open to any thoughts from others who have worked on large Moodle deployments or cluster web serving setups but I think these are the most critical issues for the moment, particularly the File API.
Thanks for your comments; my comments on them: --Dan Poltawski 08:40, 13 September 2012 (WST)
You are right, the filesystem is the hardest part of Moodle to scale horizontally; it is a single point of failure. Shared and network filesystems are a pain, and they've always been a pain on the Moodle deployments I've worked on. We've generally stuck with NFS, but tried all sorts of things for replication, from rsync to custom ZFS transfers to DRBD + OCFS2. I agree that support for an S3 backend would be nice to have.
However, this goes down my list of priorities, as:
- In my experience, filesystem performance is not in itself a performance bottleneck, especially if I take your suggestion as 'user uploaded content' rather than system-generated files.
- A bucket works well for things like user uploaded content (I assume, if it allows access control as strictly as we do), but not for other moodledata uses, like language files, preprocessed themes/JS, etc.
- S3 is a way to outsource the problem, sure. But it's still fairly expensive for Moodle sites. For example, a high school I used to host had a moodledata directory of about 800GB. (Costs for storing that data right now: about $1200/year, plus maybe 200GB a month of bandwidth, which is about $360/year.)
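The cost figures above are consistent with roughly 2012-era S3 list prices. As a sanity check (the per-GB rates here are assumptions inferred from the quoted totals, not official pricing, and real S3 pricing is tiered):

```python
# Assumed flat rates: ~$0.125/GB-month storage, ~$0.15/GB transfer out.
storage_gb = 800
egress_gb_per_month = 200

storage_per_year = storage_gb * 0.125 * 12        # dollars per year
egress_per_year = egress_gb_per_month * 0.15 * 12  # dollars per year
```

That reproduces the $1200/year storage and ~$360/year bandwidth estimates quoted above.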
Separating out 'user uploaded content' files from 'system created' files is something worth thinking about, and it'll be interesting to see where we are when MUC lands.
I think with the filesystem, I look at the data dir as being explicitly user-uploaded files. Caching stuff belongs in the MUC, sessions belong in a DB, and IMO language files belong in the wwwroot instead. Good permissions should make a standard single location for plugins and language packs perfectly safe, while allowing user uploads to be located wherever, with the data directory deprecated unless the user specifically WANTS it for their uploads.
You're correct that filesystem performance is not actually a bottleneck. It's more the overhead of maintenance and sync when using a cluster filesystem: I tried GlusterFS, OCFS2, and Ceph, along with DRBD+XFS, and found none of them work very well. NFS is OK but relies on a single location staying up, and I know most people won't bother with HA failover just for this. Having files offsite (whether on a CDN, S3, or whatever else) sorta gets rid of this problem by making it irrelevant in the first place. If there is a separate location for files, it can be handled in multiple ways that don't rely on the webservers being in sync, and I think that's the important thing here.
A core function of Moodle is to grade 100 quizzes. To do this efficiently we need to JOIN the quiz, grades, user, enrolments, and user enrolments tables (at a guess). We JOIN because it's the most efficient way to get this information from the database, and this makes sharding close to a non-starter. (Note that we used to have far less JOINing going on, and that itself became a performance problem because of the latency introduced by the number of database queries.)
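To make the JOIN argument concrete, here is a toy version of the grading query against a heavily simplified, hypothetical schema (not Moodle's real tables, which are named things like mdl_quiz_attempts and mdl_user_enrolments), using an in-memory SQLite database:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE user      (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE enrolment (userid INTEGER, courseid INTEGER);
CREATE TABLE quiz      (id INTEGER PRIMARY KEY, courseid INTEGER, name TEXT);
CREATE TABLE grade     (quizid INTEGER, userid INTEGER, grade REAL);
""")
db.executemany("INSERT INTO user VALUES (?, ?)", [(1, "Ann"), (2, "Bob")])
db.executemany("INSERT INTO enrolment VALUES (?, ?)", [(1, 101), (2, 101)])
db.execute("INSERT INTO quiz VALUES (1, 101, 'Week 1 quiz')")
db.executemany("INSERT INTO grade VALUES (?, ?, ?)", [(1, 1, 80.0), (1, 2, 65.0)])

# One round trip instead of one query per student -- but it requires
# all four tables to be co-located, which is exactly what naive
# sharding breaks.
rows = db.execute("""
    SELECT u.name, g.grade
      FROM grade g
      JOIN quiz q      ON q.id = g.quizid
      JOIN user u      ON u.id = g.userid
      JOIN enrolment e ON e.userid = u.id AND e.courseid = q.courseid
     ORDER BY u.name
""").fetchall()
```

Shard by user and the quiz/enrolment side of the JOIN lands on other nodes; shard by course and cross-course reporting breaks instead, which is why the JOIN-heavy workload resists partitioning.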
There are a lot of ways to shard. Workloads that rely on large JOINs run extremely well on databases like Greenplum, which are drop-in replacements for PgSQL, or can use proxy-level sharding via PgPool or PL/Proxy; there are test tickets for this in the tracker too. I don't think making Moodle itself aware of sharding is a good idea because of the complicated overhead, but read slaving would definitely be something to think about. While moodle.org would only use one particular setup (leaning towards PgBouncer and PL/Proxy together), the multiple options are best evaluated by each administrator, and I think they all bear equal weight as far as support goes.
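The read-slaving idea can be sketched at the application layer as a simple query router: reads go to replicas, everything else to the primary. This is an illustrative sketch only (the class and the connection names are made up), and a real implementation would also need to handle replication lag, e.g. pinning a session to the primary right after a write:

```python
class ReadWriteRouter:
    """Route plain SELECTs to read replicas, everything else to the primary."""

    def __init__(self, primary, replicas):
        self.primary = primary
        self.replicas = replicas
        self._next = 0  # round-robin cursor over the replicas

    def connection_for(self, sql):
        # Anything that is not unambiguously a read must hit the primary.
        if sql.lstrip().upper().startswith("SELECT") and self.replicas:
            conn = self.replicas[self._next % len(self.replicas)]
            self._next += 1
            return conn
        return self.primary


# Stand-in connection handles; real code would hold DB connections.
router = ReadWriteRouter(primary="pg-primary", replicas=["pg-r1", "pg-r2"])
first = router.connection_for("SELECT * FROM mdl_user")
second = router.connection_for("SELECT * FROM mdl_course")
write = router.connection_for("UPDATE mdl_user SET lastaccess = 0")
```

Doing this inside Moodle's DML layer would keep it transparent to plugin code, whereas PgPool does the same split at the proxy level with no application changes.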
I'm talking specifically about dynamic content that can be returned without re-querying the database. This would be more useful for course information and forums, I think, but Varnish allows fairly explicit access controls via HTTP headers to prevent unauthorized access to precached data.
Re: tiered caching architecture
I'm not sure if this is in the scope of the MUC or not, but I mean something similar to Perl's CHI caching modules; I think the Zend caching framework does something similar too. The basic gist is multiple caching backends searched in a set order of priority, either for specific things or globally. An example would be rendered HTML snippets or small database query results being stored in local memory for PHP; things that are accessed more frequently but only by a few nodes in a localhost Redis or Memcached DB; and larger datasets in a networked Memcached DB. Basically, a caching hierarchy depending on what the data is and how it is accessed. I'm not entirely sure how helpful this would be for Moodle, but if a particular node in a Moodle cluster is getting specific traffic, it would make sense to cache that information only locally to that node.
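The CHI-style hierarchy described above can be sketched in a few lines: search tiers in priority order, and on a hit in a slower tier, backfill the faster ones. This is a toy illustration with in-process dicts standing in for Redis/Memcached; the class names are made up:

```python
import time


class Tier:
    """One cache level: a dict with a per-entry TTL. In a real cluster the
    lower tiers would be Redis or Memcached clients, not local dicts."""

    def __init__(self, ttl):
        self.ttl = ttl
        self.store = {}

    def get(self, key):
        hit = self.store.get(key)
        if hit is None:
            return None
        value, expires = hit
        if time.monotonic() > expires:
            del self.store[key]
            return None
        return value

    def set(self, key, value):
        self.store[key] = (value, time.monotonic() + self.ttl)


class TieredCache:
    """CHI-style lookup: check tiers fastest-first, promoting on a hit."""

    def __init__(self, *tiers):
        self.tiers = tiers

    def get(self, key):
        for i, tier in enumerate(self.tiers):
            value = tier.get(key)
            if value is not None:
                for faster in self.tiers[:i]:  # backfill the faster tiers
                    faster.set(key, value)
                return value
        return None

    def set(self, key, value):
        for tier in self.tiers:
            tier.set(key, value)


local = Tier(ttl=5)     # per-process memory: short TTL, node-local traffic
shared = Tier(ttl=300)  # stand-in for a networked Memcached: longer TTL
cache = TieredCache(local, shared)
cache.set("course:42:html", "<div>rendered snippet</div>")
```

A node serving mostly one course would naturally end up holding that course's snippets in its local tier, which is exactly the node-local behaviour described above.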