Pronunciation evaluation question type


Note: This page is a work-in-progress. Feedback and suggested improvements are welcome. Please join the discussion on moodle.org or use the page comments.

Project state: Community bonding period
Tracker issue: CONTRIB-4336
Discussion: XXX
Assignee: Troy Lee

GSoC '13

Introduction

In this GSoC project, we would like to integrate our open source CMU Sphinx-based online automatic pronunciation self-study system with the Moodle platform to assess language learners' spoken reading and pronunciation skills. This will give language teachers a new tool, beyond direct instruction, immersion, and other language learning tasks, to save time and increase the efficiency of instruction. Beyond the Moodle integration, we would also like to port the automatic pronunciation evaluation system to mobile and OLPC platforms and, if time permits, produce stand-alone bundles.


Requirements

At the end of this project, there will be a new pronunciation evaluation question type in the Moodle system. Teachers can select existing questions or create new ones for pronunciation evaluation. Students presented with the questions can record and upload their pronunciations. The system will then automatically generate evaluation scores at different linguistic levels: utterance, word, and even phoneme level.
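The multi-level scoring above can be sketched as follows. This is a hypothetical illustration only: the input format and the simple averaging rule are assumptions for this sketch, not the actual output or scoring method of the CMU Sphinx-based evaluator.

```python
# Hypothetical sketch: rolling per-phoneme scores up into word- and
# utterance-level scores. The (word, [phoneme scores]) input shape is
# an assumption, not the real evaluator's output format.

def aggregate_scores(words):
    """words: list of (word, [phoneme scores in 0..1]) pairs.
    Returns (utterance_score, [(word, word_score), ...])."""
    word_scores = []
    for word, phoneme_scores in words:
        # In this sketch a word's score is the mean of its phoneme scores.
        word_scores.append((word, sum(phoneme_scores) / len(phoneme_scores)))
    # And the utterance score is the mean of the word scores.
    utterance = sum(score for _, score in word_scores) / len(word_scores)
    return utterance, word_scores

utterance, per_word = aggregate_scores([
    ("hello", [0.9, 0.8, 0.7]),
    ("world", [0.6, 1.0]),
])
```

A real evaluator would weight phonemes by duration and confidence rather than averaging uniformly, but the three reporting levels are the same.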


Features

  • Question bank maintenance: creating new questions, modifying existing questions, and deleting old questions.
  • Online audio recording with saving to the server, and uploading of pre-recorded files.
  • Automatic pronunciation evaluation.
  • Score analysis.

Schedule

  • Present ~ June 16: Read community documentation and study Moodle's development workflow to become familiar with the platform.
  • Present ~ June 30 (concurrent): Refine the existing phrase exemplar data collection tools to enable crowdsourcing of a large native pronunciation database with corresponding part-of-speech and phoneme coding for words and their expected pronunciations, allowing for homographs and accented pronunciations. Deploy the exemplar data collection system to collect native speech data.
  • June 17 ~ June 30: Revise the exemplar sufficiency index formulas to properly account for homographs and accented pronunciations. Revise the Model-View-Controller dataflow diagram, based on the current PHP-based pronunciation evaluation system, to allow for extension to tonal pronunciations using pitch tracking. Integrate the SQL database design of our pronunciation evaluation system, including biphone score analysis and allowing for diphone score analysis, into the Moodle platform as a stub question type.
  • July 1 ~ July 14: Test the database against concurrent Moodle uploads using wami-recorder, WebRTC, and multipart/form-encoded streaming Speex audio file uploads to the Moodle server from web browsers and mobile devices. Test the various audio upload failure modes and fix any bugs that might result in disk space or privacy issues.
  • July 15 ~ July 28: Integrate the Model-View-Controller dataflow to complete the new Moodle pronunciation assessment question type and add any remaining portions of the Controller involving question sequencing and selection into the Moodle platform.
  • July 29 ~ August 2: Prepare the mid-term report with multiple screencasts describing the system, its user experience, and a technical walkthrough of the code and database.
  • August 3 ~ September 1: Test and fix bugs. Add additional user interface support for mobile and OLPC platforms. Test mobile and OLPC uploads. Revise screencasts.
  • September 1 ~ September 15: Test, fix bugs, and optimize. Revise screencasts.

  • If time permits: Bundle stand-alone systems and distribute them.

  • September 16 ~ September 27: Final report preparation and submission.
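The upload failure modes mentioned for July 1 ~ July 14 could be guarded against with server-side checks along these lines. This is a minimal sketch under stated assumptions: the size cap, the headroom rule, and the allowed extensions are illustrative values, not Moodle's actual upload configuration.

```python
# Hypothetical sketch of server-side checks before accepting an audio
# upload, guarding against disk-space exhaustion and unexpected file
# types. All limits below are illustrative assumptions.

MAX_UPLOAD_BYTES = 5 * 1024 * 1024      # illustrative 5 MB cap per recording
ALLOWED_EXTENSIONS = {".spx", ".wav"}   # Speex and WAV, per the upload plan

def validate_upload(filename, size_bytes, free_disk_bytes):
    """Return (ok, reason); reject uploads that are too large, of an
    unexpected type, or that would risk exhausting disk space."""
    ext = "." + filename.rsplit(".", 1)[-1].lower() if "." in filename else ""
    if ext not in ALLOWED_EXTENSIONS:
        return False, "unsupported file type"
    if size_bytes > MAX_UPLOAD_BYTES:
        return False, "file too large"
    if size_bytes > free_disk_bytes // 2:   # keep generous disk headroom
        return False, "insufficient disk space"
    return True, "ok"
```

Privacy failures (e.g. recordings readable by the wrong user) would need separate access-control tests; this sketch only covers the disk-space side.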

Design

Credits

Mentors: James Salsman & Tim Hunt

Tracker

See also