MSCORE Project

A Social Action Sharing System using Augmented Reality-based Reenactment

Social networking services (SNSs), such as Facebook, Twitter, Flickr, and YouTube, are widely used for sharing users’ ideas, experiences, and so forth. Most SNSs rely on text, photos, and video for this purpose. However, these media are not suitable for every type of experience. Suppose, for example, that a user wants to share an event involving human motion, such as a dance step, so that other users can watch and learn it; even a video clip may fail to fully communicate the motion because it lacks three-dimensional information and its viewpoint is limited to the actual camera’s position, as shown in the figure below.

[Figure 1: Video playback limited to the camera’s viewpoint (top) vs. reenactment from an arbitrary viewpoint (bottom)]

The goal of this project is to establish an SNS built on another medium, namely human motion, so that users can re-experience an event that another user has experienced. To this end, our proposed system, which we call the social action sharing (SAS) system, provides a reenactment of a person: a playback of the person’s motion during the event, viewable from an arbitrary viewpoint (above figure, bottom). The SAS system thus offers a more intuitive and immersive way to re-experience an event that the user could not participate in.
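As a rough illustration of the free-viewpoint playback idea, recorded 3D joint positions can be re-projected under any virtual camera pose. The sketch below is a minimal example, not the project’s actual pipeline; the joint coordinates, camera intrinsics, and pose parameters are hypothetical values chosen for demonstration.

```python
import numpy as np

def project_points(joints_world, R, t, K):
    """Project 3D joint positions (world frame) into a virtual camera.

    joints_world: (N, 3) array of 3D points.
    R, t: rotation (3x3) and translation (3,) of the virtual camera (world -> camera).
    K: (3, 3) camera intrinsics matrix.
    Returns (N, 2) pixel coordinates.
    """
    cam = joints_world @ R.T + t      # transform points into camera coordinates
    pix = cam @ K.T                   # apply the intrinsics
    return pix[:, :2] / pix[:, 2:3]   # perspective divide

# Hypothetical example: one frame of a recorded motion, viewed from a new angle.
joints = np.array([[0.0, 1.7, 3.0],   # head
                   [0.0, 1.0, 3.0],   # torso
                   [0.3, 0.0, 3.0]])  # foot
K = np.array([[525.0, 0.0, 320.0],
              [0.0, 525.0, 240.0],
              [0.0, 0.0, 1.0]])       # intrinsics typical of an RGB-D sensor
theta = np.radians(30)                # rotate the virtual viewpoint by 30 degrees
R = np.array([[np.cos(theta), 0, np.sin(theta)],
              [0, 1, 0],
              [-np.sin(theta), 0, np.cos(theta)]])
t = np.zeros(3)
print(project_points(joints, R, t, K))  # joints as seen from the new viewpoint
```

Because R and t can be chosen freely at playback time, the same recorded motion can be rendered from viewpoints the original camera never occupied, which is what distinguishes reenactment from ordinary video playback.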

This project is supported by Microsoft Research Core 11.

At Microsoft Research Japan-Korea Academic Day 2016, our project was recognized as an excellent research work and collaboration, and it has been extended for another year. Thank you very much!

Publications

Coming soon…

Related Publications

  1. F. Dayrit, Y. Nakashima, T. Sato, and N. Yokoya, “Increasing pose comprehension through augmented reality reenactment”, Multimedia Tools and Applications (Online First), 22 pages, December 2015.
  2. F. Dayrit, Y. Nakashima, T. Sato, and N. Yokoya, “Free-viewpoint AR human-motion reenactment based on a single RGB-D video stream”, Proc. IEEE Int. Conf. on Multimedia and Expo (ICME2014), 6 pages, July 2014.
  3. F. Dayrit, Y. Nakashima, T. Sato, and N. Yokoya, “Single RGB-D video-stream based human-motion reenactment”, ITE Technical Report, ME2014-7, February 2014.