Workshop Objectives

Learning is an inherently multimodal activity. Students collaborate through face-to-face conversations while using digital and non-digital representations to support their ideas. Teachers use digital multimedia to explain a topic while students take notes on a multitude of digital and non-digital platforms. Students work together to build physical projects while annotating drawings and taking digital photographs of their progress.

Learning Analytics research, however, has concentrated mainly on computer-based learning contexts, where tools tend to automatically capture, at a fine-grained level of detail, their interactions with users. The relative abundance of readily available data and the low technical barriers to processing it make computer-based learning systems the ideal place to conduct Learning Analytics research. By contrast, in learning contexts where computers are not used, the actions of the actors in the learning process are not automatically captured. Without traces to analyze, the computational models and tools traditionally used in Learning Analytics are not applicable. While this bias towards computer-based learning contexts has helped the initial stages of Learning Analytics, computer-based learning still represents a small subset of all learning contexts.

Multimodal Learning Analytics (MLA) seeks to expand the current scope of Learning Analytics by focusing on the analysis of learning processes that happen in the physical world, or across physical and virtual spaces, and that require the capture, processing and analysis of more natural signals such as speech, writing, sketching, facial expressions, hand gestures, object manipulation, tool use and artifact building. This workshop is an opportunity to introduce members of the Learning Analytics community to methodologies, techniques and tools to capture, process and analyze multimodal learning traces.

The main objectives of the workshop are:

Facilitate access to multimodal datasets: One of the main barriers to starting research in MLA is access to high-quality, annotated multimodal recordings. By offering these datasets to any interested researcher, the MLA community seeks to expand the pool of researchers capable of conducting Learning Analytics studies with multimodal signals.

Share advanced approaches and techniques: Working on common datasets and questions makes it possible to compare and contrast approaches and techniques for analyzing diverse multimodal signals. Research teams can learn directly from each other's developments, and the current state of the art can be readily established.

Disseminate the state of MLA research: A goal for this year's workshop is to disseminate the current capabilities of MLA for analyzing non-computer-based learning contexts among the wider Learning Analytics community.

Identify new datasets: Including datasets that involve additional modalities, languages, and learning activities.

Important Dates

Challenge contributions deadline: January 25, 2016

Challenge acceptance notification: February 16, 2016

Challenge publishing deadline: March 16, 2016

Participants pre-registration deadline: March 30, 2016

Multimodal Learning Analytics Challenges

Following the successful experience of the Multimodal Learning Analytics Grand Challenges at ICMI 2013 and 2014, this year's event will provide two datasets with diverse research questions for interested participants to tackle:

Math Data Challenge:

The Math Data Corpus (Oviatt, 2013) is available for analysis. It involves 12 sessions in which small groups of three students collaborate while solving mathematics problems (i.e., geometry, algebra). Data were collected on their natural multimodal communication and activity patterns during these problem-solving and peer-tutoring sessions, including students' speech, digital pen input, facial expressions, and physical movements. In total, approximately 15-18 hours of multimodal data are available from these situated problem-solving sessions. Participants were 18 high-school students, working in three-person male or female groups, and each group met for two sessions. The groups varied in performance characteristics, with some low-to-moderate performers and other high-performing students.

During the sessions, students were engaged in authentic problem solving and peer tutoring as they worked on 16 mathematics problems, four apiece representing easy, moderate, hard, and very hard difficulty levels. Each problem had a canonical correct answer. Students were motivated to solve problems correctly, because one student was randomly called upon to explain the answer after solving it. During each session, natural multimodal data were captured from 12 independent audio, visual, and pen signal streams. Software was developed for accurate time synchronization of all twelve media streams during collection and playback. The data have been segmented by the start and end time of each problem, scored for solution correctness, and also scored for which student solved the problem correctly. This corpus was used for the ICMI Grand Challenge on Multimodal Learning Analytics in 2013. The dataset has since been expanded with full manual and automatic transcripts of the students' speech, and it now contains more than 10,000 annotations of the students' diagrams during problem solving.

The main research questions for this dataset are the automatic prediction of which math problems will be solved correctly, identification of which student in a group is the dominant domain expert, and identification of significant precursors of performance and learning. Predictors could be based on information from unimodal or multimodal signals, lexical/representational content, individual or group dynamics, or combined information sources.

The actual data and more information about the Math Data Challenge can be found here.
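As an illustration of the kind of analysis solicited, the sketch below shows a minimal correctness-prediction baseline in Python. It assumes that a participant has already derived a per-problem feature table from the raw audio, video and pen streams; the file name, feature columns and labels used here are hypothetical and are not part of the released corpus.

    # Minimal baseline sketch for the Math Data Challenge prediction task.
    # Assumes a hypothetical per-problem feature table (features.csv) that a
    # participant would first derive from the raw audio, video and pen streams;
    # the file name and column names below are illustrative, not part of the corpus.
    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import GroupKFold, cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # One row per (session, problem): aggregated multimodal features plus labels.
    data = pd.read_csv("features.csv")  # hypothetical file derived from the corpus

    feature_cols = ["speech_overlap_ratio", "mean_pen_strokes", "turn_count"]  # illustrative
    X = data[feature_cols]
    y = data["solved_correctly"]   # 1 if the group solved the problem correctly
    groups = data["session_id"]    # keep problems from the same session together

    # Simple logistic-regression baseline, evaluated with session-wise
    # cross-validation so no session appears in both training and test folds.
    model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    scores = cross_val_score(model, X, y, groups=groups, cv=GroupKFold(n_splits=5))
    print("Mean accuracy across folds:", scores.mean())

Challenge submissions are of course expected to go well beyond such a baseline, for example by combining lexical, prosodic and pen-based features or by modeling group dynamics over time.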

Presentation Quality Challenge:

This challenge includes a data corpus of 40 oral presentations by Spanish-speaking students, working in groups of 4 to 5 members, presenting projects (entrepreneurship ideas, literature reviews, research designs, software designs, etc.). Data were collected on their natural multimodal communication in regular classroom settings. The following data are available: speech, facial expressions and physical movements in video, skeletal data gathered from a Kinect sensor for each individual, and the slide presentation files. In total, approximately 10 hours of multimodal data are available for the analysis of these presentations. In addition, individual grades for each presenter are included, as well as a group grade for the quality of the slides used in each presentation.
This challenge seeks to answer the following questions:
a) How can multimodal techniques help evaluate the presentation skills of individual presenters?
b) How can the quality of a group presentation be estimated from the individual presentations and the quality of the slides used?

Information about the Presentation Quality Challenge and the download links are available here.
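To make the task concrete, the sketch below shows one possible, deliberately simple approach: deriving a single body-movement feature from the Kinect skeleton frames of each presenter and relating it to the individual grade. The file layout, column names and index file used here are assumptions for illustration, not the actual export format of the dataset.

    # Sketch of a single-feature baseline for the Presentation Quality Challenge:
    # the amount of right-hand movement per presenter, computed from Kinect
    # skeleton frames, used as one predictor of the individual presentation grade.
    # The file format and column names (hand_right_x, ...) are assumptions;
    # participants should adapt them to the actual export format of the dataset.
    import numpy as np
    import pandas as pd
    from sklearn.linear_model import LinearRegression

    def hand_movement(skeleton_csv: str) -> float:
        """Mean frame-to-frame displacement of the right hand (in Kinect units)."""
        frames = pd.read_csv(skeleton_csv)  # one row per frame, one column per joint coordinate
        coords = frames[["hand_right_x", "hand_right_y", "hand_right_z"]].to_numpy()
        return float(np.linalg.norm(np.diff(coords, axis=0), axis=1).mean())

    # Hypothetical index of presenters: skeleton file and grade per individual.
    index = pd.read_csv("presenters.csv")  # columns: skeleton_file, grade
    X = np.array([[hand_movement(f)] for f in index["skeleton_file"]])
    y = index["grade"].to_numpy()

    model = LinearRegression().fit(X, y)
    print("R^2 of the single-feature baseline:", model.score(X, y))

Stronger submissions would combine such skeletal features with speech, facial-expression and slide-quality information to address both challenge questions.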

Guidelines

To contribute to the workshop or the grand challenges, authors are required to submit a scientific paper answering one of the solicited questions (or a new question) based on the data provided in any of the challenges. Papers should describe the methodology followed and the results, and should provide a meaningful discussion of their interpretation. The maximum length of a paper is 10 pages. The final version of the paper should follow the ACM conference guidelines. Submissions will be reviewed single-blind, so identifying information may be included in the version submitted for review.

All accepted papers will be published in the CEUR workshop proceedings.

Papers should be submitted through the EasyChair platform.

Organization

  • Xavier Ochoa, ESPOL, Ecuador
  • Marcelo Worsley, University of Southern California, USA
  • Sharon Oviatt, Incaa Designs, USA
  • Nadir Weibel, University of California San Diego, USA

Email contact: mla2016lak@gmail.com