Continual and Multimodal Learning for Internet of Things

September 9, 2019 • London, UK

A UbiComp 2019 Workshop


The Internet of Things (IoT) produces streaming, large-scale, and multimodal sensing data over time. The statistical properties of these data often differ significantly across sensing modalities and temporal traits, and are hard to capture with conventional learning methods. Continual and multimodal learning allows the integration, adaptation, and generalization of knowledge learnt from previous, heterogeneous experiential data to new situations. It is therefore an important step toward improving the estimation, utilization, and security of real-world data from IoT devices.

Call for Papers

This workshop aims to explore the intersection and combination of continual machine learning and multimodal modeling with applications in the Internet of Things. The workshop welcomes work addressing these issues in different applications and domains, such as human-centric sensing, smart cities, health and wellness, privacy and security, etc. We aim to bring together researchers from different areas to establish a multidisciplinary community and share the latest research.

We focus on novel learning methods that can be applied to streaming multimodal data:

  • online learning
  • transfer learning
  • few-shot learning
  • multi-task learning
  • reinforcement learning
  • learning without forgetting
  • preserving individual and/or institutional privacy
  • balancing on-device and off-device learning
  • managing high-volume data flows

We also welcome continual learning methods that target:

  • data distribution shifts caused by fast-changing, dynamic physical environments
  • missing, imbalanced, or noisy data under multimodal sensing scenarios

Novel applications or interfaces for streaming multimodal data are also relevant topics.

As examples, data modalities include, but are not limited to: WiFi, GPS, RFID, vibration, accelerometer, pressure, temperature, humidity, biochemistry, image, video, audio, speech, natural language, virtual reality, etc.

Important Dates

  • Submission deadline: June 29, 2019
  • Notification of acceptance: July 8, 2019
  • Deadline for camera ready version: July 12, 2019
  • Workshop: September 9, 2019

Submission Guidelines

Please submit papers using the ACM SIGCHI portrait template. We invite papers of varying length, from 2 to 6 pages, plus additional pages for references; i.e., reference pages do not count toward the 6-page limit. Accepted papers will be included in the ACM Digital Library and the supplemental proceedings of the conference. Reviews are not double-blind, so author names and affiliations should be listed.


Workshop Chairs
  • Tong Yu (Samsung Research America)
  • Shijia Pan (Carnegie Mellon University)
  • Susu Xu (Carnegie Mellon University)
  • Yilin Shen (Samsung Research America)
  • Botao Hao (Purdue University)

Advising Committee
  • Pei Zhang (Carnegie Mellon University)
  • Hae Young Noh (Carnegie Mellon University)
  • Jennifer Healey (Adobe Research)
  • Thomas Ploetz (Georgia Institute of Technology)
  • Branislav Kveton (Google Research)
  • Hongxia Jin (Samsung Research America)

Technical Program Committee
  • Sheng Li (University of Georgia)
  • Yuan Tian (University of Virginia)
  • Chenren Xu (Peking University)
  • Jun Han (National University of Singapore)
  • Shuai Li (Chinese University of Hong Kong)
  • Xiaoxuan Lu (University of Oxford)
  • Dezhi Hong (University of California San Diego)
  • Mostafa Mirshekari (Carnegie Mellon University)
  • Ming Zeng (Carnegie Mellon University)
  • Ruiyi Zhang (Duke University)
  • Charles Chen (Ohio University)
  • Kaifei Chen (Waymo)
  • Avik Ray (Samsung Research America)
  • Yue Deng (Samsung Research America)
  • Xiao Wang (Facebook)
  • Bing Liu (Facebook AI)