EE-USAD'18: Proceedings of the 2018 Workshop on Understanding Subjective Attributes of Data, with the Focus on Evoked Emotions


SESSION: Oral Session

Artificial Empathic Memory: Enabling Media Technologies to Better Understand Subjective User Experience

  • Bernd Dudzik
  • Hayley Hung
  • Mark Neerincx
  • Joost Broekens

An essential part of being an individual is our personal history, in particular our episodic memories. Episodic memories revolve around events in a person's past and are typically defined by a time, a place, emotional associations, and other contextual information. They form an important driver for our emotional and cognitive interpretation of what is currently happening, including interactions with media technologies. However, current approaches for personalizing interactions with these technologies are aware neither of which episodic memories are triggered in users, nor of users' emotional interpretations of those memories. We argue that this is a serious limitation, because it prevents applications from correctly estimating users' experiences. In short, such technologies lack empathy. In this position paper, we argue that media technologies need an Artificial Empathic Memory (AEM) of their users to address this issue. We propose a psychologically inspired architecture, examine the challenges to be solved, and highlight how existing research can become a starting point for overcoming them.

What Makes Natural Scene Memorable?

  • Jiaxin Lu
  • Mai Xu
  • Ren Yang
  • Zulin Wang

Recent studies on image memorability have shed light on the visual features that make generic images, object images, or face photographs memorable. However, a clear understanding and reliable estimation of natural scene memorability remain elusive. In this paper, we attempt to answer the question: "What exactly makes a natural scene memorable?" Specifically, we first build LNSIM, a large-scale natural scene image memorability database containing 2,632 images with memorability annotations. Then, we mine our database to investigate how low-, middle-, and high-level handcrafted features affect the memorability of natural scenes. In particular, we find that the high-level feature of scene category is strongly correlated with natural scene memorability. Thus, we propose a deep neural network-based natural scene memorability (DeepNSM) predictor, which takes advantage of the scene category feature. Finally, experimental results validate the effectiveness of DeepNSM.
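
The abstract does not spell out DeepNSM's architecture. A minimal sketch of the core idea, fusing a scene-category embedding with CNN image features to regress a memorability score, might look as follows; the ResNet-18 backbone, layer sizes, and fusion scheme are illustrative assumptions rather than the authors' implementation:

    # Hypothetical sketch, not the authors' DeepNSM implementation.
    import torch
    import torch.nn as nn
    import torchvision.models as models

    class MemorabilityPredictor(nn.Module):
        def __init__(self, num_scene_categories: int = 365):  # assumption: Places365-style categories
            super().__init__()
            # Generic CNN backbone for visual features (assumption: ResNet-18).
            backbone = models.resnet18(weights=None)
            self.features = nn.Sequential(*list(backbone.children())[:-1])
            # Embed the scene-category signal so it can be fused with image features.
            self.scene_embed = nn.Embedding(num_scene_categories, 64)
            # Regress a single memorability score from the fused representation.
            self.head = nn.Sequential(
                nn.Linear(512 + 64, 128),
                nn.ReLU(),
                nn.Linear(128, 1),
                nn.Sigmoid(),  # memorability scores are typically hit rates in [0, 1]
            )

        def forward(self, image, scene_category):
            visual = self.features(image).flatten(1)              # (B, 512)
            scene = self.scene_embed(scene_category)              # (B, 64)
            return self.head(torch.cat([visual, scene], dim=1))   # (B, 1)

Here scene_category holds integer category indices, e.g. from a scene classifier, and the sigmoid reflects that memorability scores are typically hit rates in [0, 1].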

SESSION: Poster Session

Depth-Aware Image Colorization Network

  • Wei-Ta Chu
  • Yu-Ting Hsu

The color bleeding problem remains a challenging issue in image colorization: nearby objects are assigned the same color, making the boundaries between them look unnatural. In this paper, we study how to incorporate depth information into a neural network to achieve better image colorization. The reasons to integrate depth information are twofold: (1) depth information clearly delineates the boundaries between objects, and (2) depth information is now commonly available thanks to the development of RGB-D cameras. To the best of our knowledge, depth information has not been considered in image colorization before. We evaluate the proposed method from both objective and subjective perspectives, and demonstrate that better colorization results can be obtained when depth information is taken into account.
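
One straightforward way to give a colorization network access to depth cues is early fusion: concatenating the depth map with the grayscale luminance channel at the input, so object boundaries are visible to every layer. The toy encoder-decoder below sketches this idea under that assumption; it is not the paper's actual architecture.

    # Hypothetical early-fusion sketch; the paper's fusion strategy is not
    # specified in the abstract and may differ.
    import torch
    import torch.nn as nn

    class DepthAwareColorizer(nn.Module):
        def __init__(self):
            super().__init__()
            # Input: 2 channels (grayscale L + depth); output: 2 chroma channels (ab).
            self.encoder = nn.Sequential(
                nn.Conv2d(2, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            )
            self.decoder = nn.Sequential(
                nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(64, 2, 4, stride=2, padding=1), nn.Tanh(),
            )

        def forward(self, luminance, depth):
            # Depth supplies object-boundary cues that discourage colors from
            # bleeding across nearby objects lying at different depths.
            x = torch.cat([luminance, depth], dim=1)
            return self.decoder(self.encoder(x))  # predicted ab channels

Early fusion is only one design choice; depth could equally be injected at intermediate layers or used to weight a boundary-aware loss.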

Perceptual Similarity Ranking of Temporal Heatmaps Using Convolutional Neural Networks

  • Sana Malik
  • Sungchul Kim
  • Eunyee Koh

Similarity ranking is central to various analytic tasks. While current approaches work well on low-dimensional datasets, it becomes difficult to define similarity for more complex data types, such as event sequences with multidimensional attributes; the definition often needs to be manually tuned to the target domain or dataset. Visualizations are likewise tuned manually by analysts and can contain important clues about relevant features. In this paper, we propose using computer vision techniques on visualizations as a means for similarity ranking. We visualize sequential datasets as temporal heatmaps and show, through user studies with 132 participants, that humans agree on how to rank results for a query based on perceptual similarity. We design and implement Heat2Vec, a convolutional neural network (CNN) that learns latent representations of heatmaps from color, opacity, and position. We evaluate our method against 11 baselines spanning a wide range of techniques and show that Heat2Vec provides rankings that are most consistently in line with human-annotated similarity rankings.
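
A minimal sketch of the general recipe, under the assumption that heatmaps are rendered as RGBA images (so color, opacity, and position are all visible to the network): a small CNN maps each heatmap to a unit-length embedding, and candidates are ranked by cosine similarity to the query. All architectural details here are illustrative, not the published Heat2Vec model.

    # Hypothetical sketch of the embed-then-rank idea; not the published model.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class HeatmapEncoder(nn.Module):
        def __init__(self, embed_dim: int = 64):
            super().__init__()
            # RGBA input keeps color, opacity, and position visible to the CNN.
            self.conv = nn.Sequential(
                nn.Conv2d(4, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.fc = nn.Linear(64, embed_dim)

        def forward(self, heatmap):
            z = self.fc(self.conv(heatmap).flatten(1))
            return F.normalize(z, dim=1)  # unit-length embeddings

    def rank_by_similarity(query, candidates, encoder):
        """Return candidate indices sorted by cosine similarity to the query."""
        q = encoder(query)                # (1, D)
        c = encoder(candidates)           # (N, D)
        scores = (c @ q.t()).squeeze(1)   # cosine similarity, since embeddings are unit-length
        return torch.argsort(scores, descending=True)

Ranking then reduces to rank_by_similarity(query_img, candidate_imgs, encoder), where the returned indices order candidates from most to least similar under the learned embedding.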