Note:

If you want to create a new page for developers, you should create it on the Moodle Developer Resource site.

Testing strategy/The Moodle Testing Process

<H1 CLASS="western">Methodologies in the Moodle Development Process</H1>
{{Template:obsolete}}
<P>Moodle has an established process for developing and integrating software. This section applies relevant testing methodology to the existing process.</P>
{{Work in progress}}
<H2 CLASS="western">Weekly (Continuous) Integration
=Methodologies in the Moodle Development Process=
Cycle</H2>
Moodle has an established process for developing and integrating software. This section describes relevant testing methodology within the existing process.
<P>A manual testing phase of integration issues occurs towards
The full [https://docs.moodle.org/dev/Process development]
life-cycle at Moodle is documented in detail [https://docs.moodle.org/dev/Releases#General_release_calendar elsewhere].
Moodle is currently developed using Agile methodologies including
Scrum. The following stages of development take place within this
life-cycle:
 
* Weekly integration cycle
* 3-weekly stable sprints
* 6-monthly major releases
* 3 minor releases between every major release
 
==Weekly (Continuous) Integration Cycle==
A manual testing phase of integration issues occurs towards
the end of the SDLC at Moodle during the Wednesday (AWST) testing
the end of the SDLC at Moodle during the Wednesday (AWST) testing
phase. This is the last possible chance Moodle gets to capture
phase. This is the last possible chance Moodle gets to capture
Line 9: Line 21:
during unguided exploratory testing and worse still the introduction
during unguided exploratory testing and worse still the introduction
of regressions into Master. Discovery of issues late in the SDLC
of regressions into Master. Discovery of issues late in the SDLC
prevent the issues from being dealt with in a timely manner when
prevents issues from being dealt with in a timely manner when
integrators or even those writing the code are able to resolve those issues relatively easily.
integrators, or even those writing the code, are able to resolve those issues relatively easily.
</P>
 
<P>The processes described here ensures the bulk of
The processes described here ensure the bulk of
testing and the majority of responsibility for capturing regressions
testing and the majority of responsibility for capturing regressions
does not lie with the Wednesday testing phase. Regression testing is performed much earlier in the SDLC and as an added bonus also
does not lie with the Wednesday testing phase. Testing is performed as early in the SDLC as possible.
increases test coverage.</P>
==New Unit Tests==
<H2 CLASS="western">Integration Steps</H2>
Creating unit tests prior to coding starts
<H3 CLASS="western">Step 1: GIT Push to integration test server</H3>
the whole SDLC off with a focus on quality. If code isn't
<P>Developers push changes to the integration test repository. The changes are picked up by the Jenkins CI server, which runs a set of duplicate set of jobs to those for the integration. The changes must not break the "build" before they pass through to the next step of the process. I.E. the Jenkins jobs must all pass.</P>
considered complete until unit tests pass, then the code itself will
<P>NOTE: the purpose of this step is not to test for "collisions" between conflicting patches but to verify that code meets the required standard to proceed to peer review.</P>
be of a high quality by the time it is implemented. This also creates a
<P>If changes cause a break then then they must be backed out by the developer before they can go to peer review.</P>
repository of unit regression tests in an iterative manner while not
<H3 CLASS="western">Step 2: Peer Review</H3>
requiring of a great deal of extra effort to write those tests. As
<P>Any code sent back to the developer at Peer Review, after any changes have been made, must be must pass step 1 again before re-review.</P>
these tests are added, they will provide early regression testing of
<H3 CLASS="western">Step 3: GIT Pull to integration</H3>
new code.
<P>Code passing peer review can be pulled from integration test to integration where, amongst other things the main Jenkins integration jobs run. At this stage integrators also check for "collisions" and conflicts between submitted patches.</P>
 
<H3 CLASS="western">Step 4: Functional automation suite</H3>
Unit tests in the PHPUnit framework are a requirement when submitting code to Moodle.
<P>On Tuesday (AWST) a set of regression tests are run overnight (Tues-Weds). These tests are scheduled to run automatically from the Moodle test automation framework and are based upon Moodle's User Acceptance Test suite.</P>
 
<H3 CLASS="western">Step 5: Issue retest</H3>
==Integration Steps==
<P>On Wednesday (AWST) the manual issue retesting is performed at Moodle HQ. Issue recreation test steps must be clear and concise allowing the issue to be recreated easily.</P>
===Step 1: GIT Pull to integration CI server===
<P>The format for writing Issue recreation steps is:</P>
When developers are ready to submit their code into the integration process, they set the status of the tracker issue to 'Ready for integration'. This in turn triggers the Jenkins CI job "Pre-check remote branch". This job performs the following tasks:
<P>'''Pre-requisites:'''</P>
* Pull the changes to a local branch on the Integration CI Server.
<P>Everything required to allow the test to start. This can include, but is not limited to: issue specific data, test platform information: e.g. OS.</P>
** git reset --hard origin => so it is in the same status than the official git repository
<P>Test Steps:</P>
** git pull => get the changes proposed by a developer
<P>A clear and concise list of test steps required to recreate the issue with the expected results of each step. The expected results must include include what was supposed to happen at the failure step.</P>
* Run checks on the code.
<P>Actual Result:</P>
* Generate a results artefact that is sent to the developer.
<P>The actual result of what happened when the test failed i.e. how the issue manifests itself.</P>
<H3 CLASS="western">Step 6: Guided exploratory testing</H3>
<P></P>


<H3 CLASS="western">Functional Test Automation</H3>
Please note; each job begins by resetting the local git repository to the original state so the pulls are never permanent. This mitigates the risk posed by "collisions" as the purpose of this step is not to test for "collisions" between conflicting patches but to verify that code meets the required standard to proceed to peer review.
<P>Frequently running tests similar to those currently
===Step 2: Peer Review===
executed during the 6 monthly major release  more regularly, earlier
Any code sent back to the developer at Peer Review, after any changes have been made, must be must pass step 1 again before re-review.
in the SDLC will assist with trapping regression issues at the point
===Step 3: GIT Pull to integration===
where integration into the existing code base occurs. This will help
Code passing peer review can be pulled from the developers repository to integration where, among other things, the main Jenkins integration jobs run. At this stage integrators also check for "collisions" and conflicts between submitted patches.
integrators either:</P>
===Step 4: GIT Pull to Nightly Build Machine, performance and browser driven automation suite===
<OL>
A set of regression tests are run at 2AM AWST Daily. These tests are scheduled to run automatically from the Moodle test automation framework and run on all supported platforms e.g. all databases MySQL, Oracle, MSSQL, PostgreSQL. The following tests are performed in this order:
<LI><P>Protect existing code by allowing integrators to
# Moodle unit tests, including those submitted by developers.
stop the new code being added.</P>
# PerfComp performance comparison tests.
<LI><P>Allow the immediate rectification of the
# Selenium acceptance tests, automated browser tests based upon the existing Moodle QA test suite.
regression issues at integration rather than finding them later in
===Step 5: Issue retest===
the SDLC.</P>
On Wednesday (AWST) the manual issue retesting is performed at Moodle HQ. Issue test steps must be clear and concise, allowing the issue to be recreated easily. Testing steps must go in the testing instructions field to prove that the bug is resolved. Areas of Moodle to investigate with extra exploratory testing should be entered here also.
</OL>
 
<P>The weekly integration cycle has a phase of manual
==Stable Sprints and Point Releases==
bug retesting throughout the working day on Wednesdays (AWST).  The
Development work performed during stable sprints and point releases are subject to the stringent integration review and testing process described above.
automated functional regression suite described above will run
 
outside of working hours (AWST) and can be run as often as required.
==Major Releases and New Development==
The set up of test VMs will be performed by the
The Moodle QA cycle is a user acceptance testing phase at the end of every 6 month development cycle and is a chance for the community to become directly involved in road testing the latest testing release.  
</P>
 
<H3 CLASS="western">Performance Testing Matrix</H3>
In reality testing starts much earlier in the development process. All code submitted to Moodle has to pass through the integration process and normally does so in an incremental manner. Any additional testing to be undertaken for a Moodle release is tailored to what will be in that release and defined in the [https://docs.moodle.org/dev/2.4_Test_Plan current release test plan test plan ].
Latest revision as of 23:17, 1 May 2024

Warning: This page is no longer in use. The information contained on the page should NOT be seen as relevant or reliable.

Note: This page is a work-in-progress. Feedback and suggested improvements are welcome. Please join the discussion on moodle.org or use the page comments.

Methodologies in the Moodle Development Process

Moodle has an established process for developing and integrating software. This section describes relevant testing methodology within the existing process. The full development life-cycle at Moodle is documented in detail elsewhere (see https://docs.moodle.org/dev/Process and https://docs.moodle.org/dev/Releases#General_release_calendar). Moodle is currently developed using Agile methodologies including Scrum. The following stages of development take place within this life-cycle:

  • Weekly integration cycle
  • 3-weekly stable sprints
  • 6-monthly major releases
  • 3 minor releases between every major release

Weekly (Continuous) Integration Cycle

A manual testing phase of integration issues occurs towards the end of the SDLC at Moodle during the Wednesday (AWST) testing phase. This is the last possible chance Moodle gets to capture regressions. Relying solely on this phase to capture regressions could result in the late discovery of regressions during unguided exploratory testing and, worse still, the introduction of regressions into Master. Discovery of issues late in the SDLC prevents them from being dealt with at the point when integrators, or even those writing the code, could still resolve them relatively easily.

The processes described here ensure that the bulk of testing and the majority of the responsibility for capturing regressions do not lie with the Wednesday testing phase. Testing is performed as early in the SDLC as possible.

New Unit Tests

Creating unit tests before coding starts the whole SDLC off with a focus on quality. If code isn't considered complete until its unit tests pass, then the code itself will be of a high quality by the time it is implemented. This also builds up a repository of unit regression tests in an iterative manner without requiring a great deal of extra effort to write those tests. As these tests are added, they provide early regression testing of new code.

Unit tests in the PHPUnit framework are a requirement when submitting code to Moodle.
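
As a minimal sketch of the test-first workflow this implies, the PHPUnit environment is initialised once per checkout and the new test is run repeatedly while the feature code is written. The commands assume a standard Moodle checkout with PHPUnit installed via Composer; the component path and test file shown are placeholders, not taken from this page.

    # Initialise the PHPUnit test environment for this Moodle checkout (run once).
    php admin/tool/phpunit/cli/init.php

    # Run the new (hypothetical) test file on its own; it is expected to fail
    # until the corresponding feature code has been written.
    vendor/bin/phpunit local/example/tests/sample_test.php

    # Run the whole suite before submitting the change for integration.
    vendor/bin/phpunit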

Integration Steps

Step 1: GIT Pull to integration CI server

When developers are ready to submit their code into the integration process, they set the status of the tracker issue to 'Ready for integration'. This in turn triggers the Jenkins CI job "Pre-check remote branch". This job performs the following tasks:

  • Pull the changes to a local branch on the Integration CI Server.
    • git reset --hard origin, so the local branch is in the same state as the official git repository
    • git pull, to fetch the changes proposed by the developer
  • Run checks on the code.
  • Generate a results artefact that is sent to the developer.

Please note: each job begins by resetting the local git repository to its original state, so the pulls are never permanent. This mitigates the risk posed by "collisions"; the purpose of this step is not to test for collisions between conflicting patches but to verify that the code meets the required standard to proceed to peer review.
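
A rough shell sketch of what this pre-check sequence amounts to is shown below; the remote URL and branch name are placeholders, and the actual checks run by the Jenkins job are not reproduced here.

    # Return the local clone to the state of the official repository; nothing
    # from a previous run survives this step.
    git fetch origin
    git reset --hard origin/master

    # Temporarily pull in the branch proposed by the developer (placeholder names).
    git pull https://git.example.com/developer/moodle.git wip-example-fix

    # The job then runs its code checks against the result and packages the
    # output as an artefact that is sent back to the developer.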

Step 2: Peer Review

Any code sent back to the developer at peer review must, after any changes have been made, pass step 1 again before re-review.

Step 3: GIT Pull to integration

Code passing peer review can be pulled from the developer's repository to integration where, among other things, the main Jenkins integration jobs run. At this stage integrators also check for "collisions" and conflicts between submitted patches.
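
As an illustration of the collision check, an integrator can surface conflicts between two submissions with a trial merge before anything is accepted; the branch names here are placeholders.

    # On the integration branch, with one submission already applied...
    git checkout integration

    # ...trial-merge a second submission without committing, to expose conflicts.
    git merge --no-commit --no-ff second-submission

    # Inspect any conflicting files, then undo the trial merge.
    git status
    git merge --abort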

Step 4: GIT Pull to Nightly Build Machine, performance and browser driven automation suite

A set of regression tests is run daily at 2am AWST. These tests are scheduled to run automatically from the Moodle test automation framework and run on all supported platforms, e.g. all supported databases (MySQL, Oracle, MSSQL, PostgreSQL). The following tests are performed in this order (a sketch of the sequence follows the list):

  1. Moodle unit tests, including those submitted by developers.
  2. PerfComp performance comparison tests.
  3. Selenium acceptance tests, automated browser tests based upon the existing Moodle QA test suite.
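
The sketch below shows how such a nightly sequence could be scripted. Only the PHPUnit commands reflect real Moodle CLI tools; the database switching and the PerfComp and Selenium steps are placeholders for jobs whose exact invocations are not documented on this page.

    # 1. Run the unit test suite once per supported database driver.
    for db in mysqli pgsql oci sqlsrv; do
        echo "Running unit tests against database driver: $db"
        # Switching config.php to $db is a site-specific step, not shown here.
        php admin/tool/phpunit/cli/init.php
        vendor/bin/phpunit
    done

    # 2. PerfComp performance comparison tests (placeholder command).
    ./run_perfcomp.sh

    # 3. Selenium acceptance tests based on the Moodle QA suite (placeholder command).
    ./run_selenium_qa_suite.sh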

Step 5: Issue retest

On Wednesday (AWST), manual issue retesting is performed at Moodle HQ. Issue test steps must be clear and concise, allowing the issue to be recreated easily. Testing steps must go in the testing instructions field to prove that the bug is resolved. Areas of Moodle to investigate with extra exploratory testing should also be entered there.
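
One possible format for those instructions, drawn from an earlier revision of this page, is:

    Pre-requisites: everything required for the test to start, e.g. issue-specific
                    data and test platform information such as the OS.
    Test steps:     a clear, concise list of the steps needed to recreate the issue,
                    with the expected result of each step, including what was
                    supposed to happen at the failing step.
    Actual result:  what actually happened when the test failed, i.e. how the
                    issue manifests itself.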

Stable Sprints and Point Releases

Development work performed during stable sprints and point releases is subject to the stringent integration review and testing process described above.

Major Releases and New Development

The Moodle QA cycle is a user acceptance testing phase at the end of every 6-month development cycle and is a chance for the community to become directly involved in road testing the upcoming release.

In reality, testing starts much earlier in the development process. All code submitted to Moodle has to pass through the integration process and normally does so in an incremental manner. Any additional testing to be undertaken for a Moodle release is tailored to what will be in that release and defined in the current release test plan (https://docs.moodle.org/dev/2.4_Test_Plan).