Continual and Multimodal Learning for Internet of Things

September 9, 2019 • London, UK

A UbiComp 2019 Workshop

About CML-IoT

The Internet of Things (IoT) produces streaming, large-scale, multimodal sensing data over time. The statistical properties of these data often differ significantly across sensing modalities and over time, and such differences are hard to capture with conventional learning methods. Continual and multimodal learning enables the integration, adaptation, and generalization of knowledge learned from heterogeneous past experience to new situations. It is therefore an important step toward improving the estimation, utilization, and security of real-world data from IoT devices.



Call for Papers

This workshop aims to explore the intersection and combination of continual machine learning and multimodal modeling, with applications in the Internet of Things. The workshop welcomes work addressing these issues in different applications and domains, such as human-centric sensing, smart cities, health and wellness, and privacy and security. We aim to bring together researchers from different areas to establish a multidisciplinary community and share the latest research.

We focus on novel learning methods that can be applied to streaming multimodal data (a minimal illustrative sketch follows the list below):

  • online learning
  • transfer learning
  • few-shot learning
  • multi-task learning
  • reinforcement learning
  • learning without forgetting
  • individual and/or institutional privacy
  • balancing on-device and off-device learning
  • managing high-volume data flows

We also welcome continual learning methods that target:

  • data distribution changes caused by fast-changing, dynamic physical environments
  • missing, imbalanced, or noisy data under multimodal sensing scenarios

Novel applications and interfaces for streaming multimodal data are also relevant topics.
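
To make the streaming setting concrete, here is a minimal, purely illustrative sketch of online learning over a multimodal sensor stream. It assumes scikit-learn's SGDClassifier and its partial_fit method; the early-fused accelerometer-plus-temperature features, the labels, and the batch generator are hypothetical placeholders, not workshop artifacts.

    # A minimal online-learning sketch on a synthetic multimodal stream.
    # The feature layout (3-axis accelerometer + temperature) is hypothetical.
    import numpy as np
    from sklearn.linear_model import SGDClassifier

    rng = np.random.default_rng(0)
    model = SGDClassifier()            # incremental linear classifier
    classes = np.array([0, 1])         # e.g., room occupied vs. empty

    def sensor_batches(n_batches=20, batch_size=32):
        """Yield synthetic mini-batches of early-fused multimodal features."""
        for _ in range(n_batches):
            accel = rng.normal(size=(batch_size, 3))  # 3-axis accelerometer
            temp = rng.normal(size=(batch_size, 1))   # temperature reading
            x = np.hstack([accel, temp])              # simple early fusion
            y = (x[:, 0] + x[:, 3] > 0).astype(int)   # synthetic labels
            yield x, y

    for x, y in sensor_batches():
        # Update the model one batch at a time as data arrives,
        # without revisiting earlier batches.
        model.partial_fit(x, y, classes=classes)

Continual-learning variants of this loop would additionally guard against forgetting earlier distributions, e.g., via rehearsal buffers or regularization, which is exactly the kind of method this workshop solicits.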


    Example data modalities include, but are not limited to: WiFi, GPS, RFID, vibration, accelerometer, pressure, temperature, humidity, biochemistry, image, video, audio, speech, natural language, and virtual reality.



    Important Dates

  • Submission deadline: June 29, 2019
  • Notification of acceptance: July 8, 2019
  • Deadline for camera ready version: July 12, 2019
  • Workshop: September 9, 2019



    Submission Guidelines

    Please submit papers using the ACM SIGCHI portrait template. We invite papers of 2 to 6 pages, plus additional pages for references; i.e., reference pages do not count toward the 6-page limit. Accepted papers will be included in the ACM Digital Library and the supplemental proceedings of the conference. Reviews are not double-blind; author names and affiliations should be listed.



    Keynote

    The Deep (Learning) Transformation of Mobile and Embedded Computing, Speaker: Nic Lane, University of Oxford & Samsung AI Center, Cambridge

    Abstract: Mobile and embedded devices increasingly rely on deep neural networks to understand the world, a formerly impossible feat that would have overwhelmed their system resources just a few years ago. The age of on-device artificial intelligence is upon us; incredibly, these dramatic changes are just the beginning. Looking ahead, mobile machine learning will extend beyond classifying categories and perceptual tasks to roles that alter how every part of the systems stack of smart devices functions. This evolutionary step in resource-constrained computing will finally produce devices that meet our expectations in how they learn, reason, and react to the real world. In this talk, I will briefly discuss the initial breakthroughs that allowed us to reach this point and outline the next set of open problems we must overcome to bring about this next deep transformation of mobile and embedded computing.

    Speaker Bio: Nic Lane is an Associate Professor in the Computer Science Department at the University of Oxford and Program Director (AI Systems) at the recently announced Samsung AI Center at Cambridge. Before joining Oxford, he held dual appointments at University College London (UCL) and Nokia Bell Labs; at Nokia, as a Principal Scientist, Nic founded and led DeepX, an embedded-focused deep learning unit at the Cambridge location. His recent research has specialized in the study of efficient machine learning under computational constraints, and over the last three years he has pioneered a range of embedded and mobile forms of deep learning. This work formed the basis for his 2017 Google Faculty Award in machine learning. More generally, Nic’s research interests revolve around the modelling and systems challenges that arise when computers collect and reason over various types of complex real-world people-centric data. Nic has received multiple best paper awards, including ACM/IEEE IPSN 2017 and two from ACM UbiComp (2012 and 2015). In 2018 and 2019, he (and his co-authors) received the ACM SenSys Test-of-Time award and the ACM SIGMOBILE Test-of-Time award for pioneering research, performed during his PhD thesis, that devised machine learning algorithms used today on devices like smartphones. This year Nic served as the PC chair of ACM MobiSys 2019, a role he has also performed for ACM HotMobile and ACM SenSys in the past. Prior to moving to England, Nic spent four years at Microsoft Research in Beijing as a Lead Researcher. He received his PhD from Dartmouth College in 2011.



    Organizers

    Workshop Chairs (feel free to contact us at cmliot2019@gmail.com if you have any questions)
  • Tong Yu (Samsung Research America)
  • Shijia Pan (Carnegie Mellon University)
  • Susu Xu (Carnegie Mellon University)
  • Yilin Shen (Samsung Research America)
  • Botao Hao (Purdue University)


    Advising Committee
  • Pei Zhang (Carnegie Mellon University)
  • Hae Young Noh (Carnegie Mellon University)
  • Jennifer Healey (Adobe Research)
  • Thomas Ploetz (Georgia Institute of Technology)
  • Branislav Kveton (Google Research)
  • Hongxia Jin (Samsung Research America)


    Technical Program Committee
  • Sheng Li (University of Georgia)
  • Yuan Tian (University of Virginia)
  • Chenren Xu (Peking University)
  • Jun Han (National University of Singapore)
  • Shuai Li (Chinese University of Hong Kong)
  • Xiaoxuan Lu (University of Oxford)
  • Dezhi Hong (University of California San Diego)
  • Mostafa Mirshekari (Carnegie Mellon University)
  • Jonathon Fagert (Carnegie Mellon University)
  • Ming Zeng (Carnegie Mellon University)
  • Ruiyi Zhang (Duke University)
  • Charles Chen (Ohio University)
  • Kaifei Chen (Waymo)
  • Avik Ray (Samsung Research America)
  • Yue Deng (Samsung Research America)
  • Xiao Wang (Facebook)
  • Bing Liu (Facebook AI)


    Agenda

    Registration/Doors Open (9:30)
    Welcome! (10:00 - 10:15), Speaker: Shijia Pan, University of California Merced
    Keynote (10:15 - 11:00), Speaker: Nic Lane, University of Oxford & Samsung AI Center, Cambridge
    Session 1: Adaptation in Multimodal Learning (11:00 - 12:30), Chair: Botao Hao
  • Unsupervised Domain Adaptation for Robust Sensory Systems, Akhil Mathur, Anton Isopoussu, Nadia Bianchi-Berthouze, Nicholas D Lane, Fahim Kawsar
  • AutoTag: Visual Domain Adaptation for Autonomous Retail Stores through Multi-Modal Sensing, Carlos Ruiz, Joao Falcao, Pei Zhang
  • AttriNet: Learning Mid-Level Features for Human Activity Recognition with Deep Belief Networks, Harideep Nair, Shunwen Tan, Ming Zeng, Ole Mengshoel, John Paul Shen
  • iSCAN: Automatic Speaker Adaptation via Iterative Cross-modality Association, Yuanbo Xiangli, Chris Xiaoxuan Lu, Peijun Zhao, Changhao Chen, Andrew Markham

    Lunch (12:30 - 14:00)
    Session 2: Multimodal Mobile Sensing (14:00 - 15:30), Chair: Zhihan Fang
  • Inferring Fine-Grained Air Pollution Map via a Spatiotemporal Super-Resolution Scheme, Ning Liu, Rui Ma, Yue Wang, Lin Zhang
  • mLung++: Automated Characterization of Abnormal Lung Sounds in Pulmonary Patients using Multimodal Mobile Sensors, Soujanya Chatterjee, Md Mahbubur Rahman, Ebrahim Nemati, Viswam Nathan, Korosh Vatanparvar, Jilong Kuang
  • PRECEPT: Occupancy Presence Prediction Inside A Commercial Building, Anooshmita Das, Mikkel Baun Kjærgaard
  • Towards a Taxonomy of Interactive Continual and Multimodal Learning for the Internet of Things, Agnes Tegen, Paul Davidsson, Jan A Persson

    Coffee Break (15:30 - 16:10)
    Session 3: Vision and Language (16:10 - 17:20), Chair: Carlos Ruiz
  • Audio-Visual TED Corpus: Enhancing the TED-LIUM Corpus with Facial Information, Contextual Text and Object Recognition, Guan-Lin Chao, Chih Chi Hu, Bing Liu, John Paul Shen, Ian Lane
  • Neural Caption Generation over Figures, Charles Chen
  • Apply Event Extraction Techniques to the Judicial Field, Chuanyi Li, Yu Sheng, Jidong Ge, Bin Luo

    Doors Close (17:50)

    Note: Each paper presentation has 15 minutes for the talk and 5 minutes for Q&A.
