I have moved to Osaka University

I’m now with the Institute for Datability Science, Osaka University, Japan.

It’s a new institute and not everything is in place yet, but I’m very excited to set things up (kind of) from scratch.

Drop by Suita Campus!

A paper included in MMM

Our paper has been accepted to the International Conference on Multimedia Modeling (MMM) 2017, which will be held in Iceland in January (sounds cold!).

  • Fabian Lorenzo Dayrit et al., “ReMagicMirror: Action learning using human reenactment with the mirror metaphor”
    Proc. 23rd International Conference on Multimedia Modeling, 12 pages, Jan. 2017

This work is a part of our MSCORE Project.

In this paper, we use two Kinect sensors facing each other, with an instructor between them, to capture her/his motion, and reenact the motion on a display using the mirror metaphor. We demonstrated its usefulness through a user study.
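As a rough illustration of the mirror metaphor (a minimal sketch with hypothetical joint names, not the paper’s actual Kinect pipeline), showing a captured pose as a mirror image amounts to negating the horizontal axis and swapping left/right joints:

```python
# Minimal sketch: mirror a captured skeleton pose.
# Joint names and the pose format are hypothetical, for illustration only.

def mirror_pose(pose):
    """Reflect a pose about the vertical plane: negate x, swap L/R joints."""
    mirrored = {}
    for joint, (x, y, z) in pose.items():
        # Swap left/right so the instructor's left hand appears where the
        # viewer's right hand would be, as in a mirror.
        if joint.startswith("left_"):
            target = "right_" + joint[len("left_"):]
        elif joint.startswith("right_"):
            target = "left_" + joint[len("right_"):]
        else:
            target = joint
        mirrored[target] = (-x, y, z)
    return mirrored

pose = {"head": (0.1, 1.7, 0.0), "left_hand": (0.5, 1.2, 0.3)}
mirrored = mirror_pose(pose)  # left_hand becomes right_hand, x negated
```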

One paper in IEEE TVCG

Our paper has been accepted to IEEE Transactions on Visualization and Computer Graphics!

  • Norihiko Kawai, Tomokazu Sato, Yuta Nakashima, and Naokazu Yokoya: “Augmented reality marker hiding with texture deformation,”
    IEEE Transactions on Visualization and Computer Graphics, 13 pages, Oct. 2016

The online version is available here.

This work removes AR markers from a live video stream for visual aesthetics, and it can handle even non-planar surfaces. I joined this project just for technical help, mainly on the GPU implementation of, e.g., Poisson blending.
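For the curious, the core of Poisson blending can be solved iteratively; the sketch below shows one Jacobi iteration on a grayscale image (my own simplified illustration in NumPy, not the paper’s GPU code). Because each pixel’s update depends only on the previous iterate, the method parallelizes naturally, which is why it maps well to a GPU.

```python
import numpy as np

def poisson_blend_step(f, g, mask):
    """One Jacobi iteration of Poisson blending on a grayscale image.

    f    : current composite image (H x W float array)
    g    : source patch supplying the guidance gradients (H x W)
    mask : boolean array, True inside the region to blend
    """
    # Discrete Laplacian of the source defines the target gradient field.
    lap_g = (np.roll(g, 1, 0) + np.roll(g, -1, 0) +
             np.roll(g, 1, 1) + np.roll(g, -1, 1) - 4 * g)
    # Jacobi update: average of the four neighbours minus the guidance term.
    neighbours = (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
                  np.roll(f, 1, 1) + np.roll(f, -1, 1))
    updated = (neighbours - lap_g) / 4.0
    # Only interior pixels change; the boundary stays fixed.
    return np.where(mask, updated, f)
```

In practice this step is repeated until convergence (often hundreds of iterations), or replaced by a faster solver.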

A paper accepted to MTAP!

Our paper has been accepted to Multimedia Tools and Applications!

  • M. Otani, Y. Nakashima, T. Sato, and N. Yokoya: “Video summarization using textual descriptions for authoring video blogs”,
    Multimedia Tools and Applications (Online First), 19 pages, Oct. 2016.

The online version is available here.

This paper is on how to automatically generate a video summary for a blog post. We use the main text of the blog post to control the content of the video summary so that it suits the post well.

One paper accepted for ACCV 2016!

Our paper has been accepted to ACCV 2016.

  • Video Summarization using Deep Semantic Features
    Mayu Otani, Yuta Nakashima, Esa Rahtu, Janne Heikkilä, and Naokazu Yokoya

Thank you very much to all the authors!

This work proposes a new video summarization approach that uses deep features of a video to capture its semantics. By training a deep neural network with sentences, our deep features encode “sentence-level” semantics of the video, which boosts the performance of a standard clustering-based video summarization approach.
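The clustering-based summarization step can be sketched roughly as follows (a toy k-means illustration over per-segment feature vectors; the function name and interface are my own, not the paper’s): cluster the segments’ deep features and keep the segment closest to each cluster center.

```python
import numpy as np

def summarize(features, n_clusters, seed=0):
    """Pick one representative segment per cluster of deep features.

    features : (n_segments, dim) float array of per-segment features
    Returns indices of the selected segments, sorted by time order.
    """
    rng = np.random.default_rng(seed)
    # Plain k-means (Lloyd's algorithm) over the segment features.
    centers = features[rng.choice(len(features), n_clusters, replace=False)]
    for _ in range(20):
        dists = np.linalg.norm(features[:, None] - centers[None], axis=2)
        labels = dists.argmin(axis=1)
        for k in range(n_clusters):
            if (labels == k).any():
                centers[k] = features[labels == k].mean(axis=0)
    # Representative segment = the one closest to its cluster center.
    picks = []
    for k in range(n_clusters):
        idx = np.where(labels == k)[0]
        if len(idx):
            picks.append(idx[np.argmin(
                np.linalg.norm(features[idx] - centers[k], axis=1))])
    return sorted(picks)
```

The point of the paper is that when `features` carry sentence-level semantics, the clusters group semantically similar segments rather than merely visually similar ones.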

See you at ACCV 2016 in Taipei!

Our paper has been accepted to VSM!

Our paper has been accepted to the 4th Workshop on Web-scale Vision and Social Media (VSM), held in conjunction with ECCV 2016!

This paper is on video/text retrieval with text/video queries. Our approach uses an LSTM to encode text, like many existing approaches, but our observation is that an LSTM tends to forget details in the text (it mixes up “typing on the keyboard” and “playing the keyboard”). The main contribution of this paper is to fuse into the text representation web images retrieved using the text as a query, which helps disambiguate the text.
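The fusion idea can be illustrated very roughly as below (a hypothetical interface of my own; the paper’s actual fusion differs in detail): average the embeddings of the retrieved web images and mix them into the text embedding, so visual context the text encoder forgot is restored.

```python
import numpy as np

def fuse_text_and_images(text_emb, image_embs, alpha=0.5):
    """Toy fusion of a text embedding with embeddings of web images
    retrieved for that text. `alpha` weights text vs. visual context;
    names and the weighted-average scheme are illustrative only."""
    visual = np.mean(image_embs, axis=0)       # pooled web-image context
    fused = alpha * text_emb + (1 - alpha) * visual
    return fused / np.linalg.norm(fused)       # unit norm for retrieval
```

In a retrieval setting, the fused vector is then compared against video embeddings with, e.g., cosine similarity.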

Looking forward to seeing you at the venue!

Updated:

Now we have an arXiv preprint. Please find it at: arXiv:1608.02367

New papers in EURASIP J. Image and Video Process. and ICME 2016

Our papers have been accepted to the EURASIP Journal on Image and Video Processing and ICME 2016.

  • Antonio Tejero de Pablos et al., “Human action recognition-based video summarization for RGB-D personal sports video” to appear in ICME 2016.
  • Antonio Tejero de Pablos et al., “Flexible human action recognition in depth video sequences using masked joint trajectories” to appear in EURASIP J. Image and Video Processing.

The journal paper is on human action recognition using depth video sequences, and the conference paper is its application to sports video summarization.

See you at ICME 2016 if you are attending!