IVA 2013 Workshop
Multimodal Corpora: Beyond Audio and Video

1 September 2013, Edinburgh, UK

+++ NEW DEADLINE: 3 JUNE 2013 +++


The creation of a multimodal corpus involves the recording, annotation and analysis of several communication modalities such as speech, hand gesture, facial expression, body posture, etc. An increasing number of research areas are moving from focused single-modality research to fully fledged multimodality research, and multimodal corpora are becoming a core research asset and an opportunity for interdisciplinary exchange of ideas, concepts and data.

Numerous European Networks of Excellence and integrated projects (e.g. HUMAINE, SIMILAR, CHIL, AMI, CALLAS and SSPNet), the success of recent conferences and workshops dedicated to multimodal communication (e.g. ICMI-MLMI, IVA, Gesture, PIT, the Nordic Symposium on Multimodal Communication, and Embodied Language Processing), and the creation of the Journal of Multimodal User Interfaces all testify to the increasing interest in this area, and to the general need for data on multimodal behaviours.

We are particularly pleased to announce that in 2013, for the first time, the 9th Workshop on Multimodal Corpora will be co-located with IVA, a conference with deep ties to multimodal corpora, as the development of models for IVAs is a prominent reason to collect and analyse multimodal corpora.

This workshop follows similar events held at LREC 00, 02, 04, 06, 08, 10, ICMI 11, and LREC 2012. All workshops are documented at http://www.multimodal-corpora.org and complemented by a special issue of the Journal of Language Resources and Evaluation published in 2008, a state-of-the-art book published by Springer in 2009, and a special issue of the Journal of Multimodal User Interfaces currently under publication.

Theme

As always, we aim for a wide cross-section of the field, with contributions ranging from collection efforts, coding, validation and analysis methods, to tools and applications of multimodal corpora. This year, however, we want to emphasize the fact that an increasing number of data collections incorporate channels beyond audio and video, from the relatively straightforward motion capture and gaze tracking, to more unusual physiological measures such as breathing, perspiration and pupil size. We see this both in broad-scoped data collections designed to capture as much as possible (massively multimodal corpora) and in narrow-scoped data collections custom-designed to investigate a particular phenomenon (e.g. sign language corpora). The IVA 2013 workshop on multimodal corpora will feature a special session on new and/or unusual modalities. Demos, posters and full papers are all welcome in the special session, which we hope will present an inspiring as well as rewarding smorgasbord of all aspects of data beyond audio and video.

Other topics to be addressed include, but are not limited to:

  • Multimodal corpus collection activities (e.g. direction-giving dialogues, emotional behaviour, human-avatar interaction, human-robot interaction, etc.) and descriptions of existing multimodal resources
  • Relations between modalities in natural (human) interaction and in human-computer interaction
  • Multimodal interaction in specific scenarios, e.g. group interaction in meetings
  • Coding schemes for the annotation of multimodal corpora
  • Evaluation and validation of multimodal annotations
  • Methods, tools, and best practices for the acquisition, creation, management, access, distribution, and use of multimedia and multimodal corpora
  • Interoperability between multimodal annotation tools (exchange formats, conversion tools, standardization)
  • Collaborative coding
  • Metadata descriptions of multimodal corpora
  • Automatic annotation, based e.g. on motion capture or image processing, and its integration with manual annotations
  • Corpus-based design of multimodal and multimedia systems, in particular systems that involve human-like modalities in input (Virtual Reality, motion capture, etc.) and/or output (virtual characters)
  • Automated multimodal fusion and/or generation (e.g., coordinated speech, gaze, gesture, facial expressions)
  • Machine learning applied to multimodal data
  • Multimodal dialogue modelling

Important Dates

Deadline for paper submission (complete paper): 3 June 2013
Notification of acceptance: 17 July 2013
Final version of accepted paper: 21 July 2013
Final program and proceedings: 7 August 2013
Workshop: 1 September 2013

Submissions

The workshop will consist primarily of paper presentations and discussion/working sessions. Submissions must be in English, 4-6 pages long, and follow the submission guidelines. Demonstrations of multimodal corpora and related tools are encouraged as well (a demonstration outline of 2 pages can be submitted).

Time schedule and registration fee

The workshop will consist of a morning session and an afternoon session. There will be time for collective discussions.

The fee will be specified on the MMC 2013 website.

Organizing Committee