Keynote Speakers
Harry Shum, Microsoft
Tuesday, November 4th
Image Understanding: A practical perspective from Bing
Abstract
Since the launch of Bing (www.bing.com) in June 2009, we have seen Bing's web search market share in the US more than double and Bing image search query share more than quadruple. In this talk, I will share and discuss the challenges and opportunities in image understanding based on our experience building Bing image search. Specifically, I will talk about how we have significantly improved image search quality and built a differentiated image search user experience using NLP, entity, big data, machine learning, and computer vision technologies. By leveraging big data from billions of search queries, billions of images on the web and from social networks, and billions of user clicks, we have designed massive machine learning systems to continuously improve image search quality. With a focus on natural language and entity understanding, for instance, we have improved Bing’s ability to understand user intent beyond queries and keywords. I will demonstrate with many examples how Bing has delivered a superior image search user experience, quantitatively, qualitatively, and aesthetically, by utilizing computer vision techniques.
Author Keywords
Search engines; image search
ACM Classification Keywords
Web searching and information discovery; Web mining; information retrieval query processing; multimedia and multimodal retrieval
Bio
Dr. Harry Shum, former managing director of Microsoft Research Asia, has taken on the new role of Microsoft Executive Vice President, Technology and Research, after leading Bing product development for several years as a Corporate Vice President.
Dr. Shum is an Institute of Electrical and Electronics Engineers (IEEE) Fellow and an Association for Computing Machinery (ACM) Fellow. He served on the editorial board of the International Journal of Computer Vision and was a Program Chair of the International Conference on Computer Vision (ICCV) 2007. Dr. Shum has published more than 100 papers in computer vision, computer graphics, pattern recognition, statistical learning, and robotics. He holds more than 50 U.S. patents.
Dr. Shum joined Microsoft Research in 1996, working in Redmond, WA as a researcher on computer vision and computer graphics. In 1999, he moved to Beijing to help start Microsoft Research China (later renamed Microsoft Research Asia). He began his tenure there as a research manager and subsequently rose to Assistant Managing Director, Managing Director of Microsoft Research Asia, Distinguished Engineer, and Corporate Vice President. In 2007, Shum became the Microsoft Corporate Vice President responsible for Bing product development. In 2013, he took on the role of Microsoft Executive Vice President, which includes oversight of Microsoft Research.
Dr. Shum received a doctorate in robotics from the School of Computer Science at Carnegie Mellon University. In his spare time, he enjoys playing basketball, rooting for the Pittsburgh Steelers, and spending quality time with his family.
Rosalind Picard, MIT Media Lab
Thursday, November 6th
Affective Media and Wearables: Surprising Findings
Abstract
Over a decade ago, I suggested that computers would need the skills of emotional intelligence in order to interact with regular people in ways that they perceive as intelligent. Our lab embarked on this journey of “affective computing” with a focus on first enabling computers to better understand and communicate human emotion. Our main tools have been wearable sensors (several of which we created), video, and audio, coupled with signal processing, machine learning, and pattern analysis of multimodal human data. Along the way we encountered several surprises. This talk will highlight some of the challenges we have faced, some accomplishments, and the most surprising and rewarding findings. Our findings reveal the power of the human emotion system not only in intelligence, in social interaction, and in everyday media consumption, but also in autism, epilepsy, and sleep memory formation.
Author Keywords
Affective computing; wearable sensors; emotion; autonomic nervous system; measuring engagement, stress and frustration
ACM Classification Keywords
A.0 General: Biographies, I.5 Pattern Recognition, J.3 Life and Medical Sciences, Health
Bio
Professor Rosalind W. Picard, Sc.D., FIEEE, is founder and director of the Affective Computing Research Group at the Massachusetts Institute of Technology (MIT) Media Lab, where she also leads the Media Lab’s Technology for Health initiative. Picard has co-founded two businesses: Empatica, Inc., which creates wearable sensors and analytics to improve health, and Affectiva, Inc., which delivers technology to help measure and communicate emotion using video.
Picard holds a bachelor’s degree in electrical engineering with highest honors from the Georgia Institute of Technology, and master’s and doctorate degrees, both in electrical engineering and computer science, from MIT. She started her career as a member of the technical staff at AT&T Bell Laboratories, where she designed VLSI chips for digital signal processing and developed algorithms for image compression. In 1991 she joined the MIT Media Lab faculty, where she rose to full professor. Picard became known internationally for her work designing novel analytical models for content-based retrieval of images and for pioneering methods of automated search and annotation in digital video, including the Photobook system. In 1996 she wrote and published the book Affective Computing, which envisioned a new area of research and today is credited with starting the field of that name. Today that field has its own journal, international conference, and professional society. Picard also served as a founding member of the IEEE Technical Committee on Wearable Information Systems in 1998, helping launch the field of wearable computing.
Picard has authored or co-authored over two hundred scientific articles and chapters spanning computer vision, pattern recognition, machine learning, human-computer interaction, wearable sensors, and affective computing. She is a recipient of several best paper prizes, including work on machine learning with multiple models (with Minka, 1998), a best theory paper prize for affect in human learning (with Kort and Reilly, 2001), a best paper prize for work with facial expressions (with McDuff, Kaliouby and Demirdjian, 2013), and a UbiComp 2013 best paper award for an automated conversation coach (with Hoque et al., 2013). Her paper on how to measure driver stress (with Healey, 2005) won a best paper of the decade award.
Picard is an active inventor with patents including wearable and non-contact sensors, algorithms, and systems for sensing, recognizing, and responding respectfully to human affective information. Her inventions have been applied in autism, epilepsy, sleep, stress, autonomic nervous system disorders, human and machine learning, health behavior change, market research, customer service, and human-computer interaction. In 2005 she was named a Fellow of the IEEE for contributions to image and video analysis and affective computing. Picard has been honored with dozens of distinguished and named lectureships and other international awards, and has given over 100 keynote talks.
Prof. Picard has served on international and national science and engineering program committees, editorial boards, and review panels, including the Advisory Committee for the National Science Foundation’s (NSF’s) Directorate for Computer and Information Science and Engineering (CISE), the Advisory Board for the Georgia Tech College of Computing, and the Editorial Board of User Modeling and User-Adapted Interaction: The Journal of Personalization Research. Picard has also served on non-profit boards, including a school for children with special needs and a pro-life organization that helps educate and support women with crisis pregnancies.
Picard interacts regularly with industry and has consulted for many companies including Apple, AT&T, BT, HP, iRobot, Merck, Motorola, and Samsung. Her group’s achievements have been featured in forums for the general public such as The New York Times, The London Independent, National Public Radio, Scientific American Frontiers with Alan Alda, ABC’s Nightline and World News Tonight, Time, Vogue, Wired, Voice of America Radio, New Scientist, and BBC programs such as “Hard Talk” and BBC Horizon with Michael Mosley. Picard lives in Newton, Massachusetts with her amazing husband and three energetic sons.
References
- Picard, R.W. (2009), “Future Affective Technology for Autism and Emotion Communication,” Philosophical Transactions of the Royal Society B: Biological Sciences, vol. 364, no. 1535, pp. 3575-3584, December 2009. doi: 10.1098/rstb.2009.0143.
- Poh, M.Z., Swenson, N.C., Picard, R.W. (2010), “A Wearable Sensor for Unobtrusive, Long-term Assessment of Electrodermal Activity,” IEEE Transactions on Biomedical Engineering, vol. 57, no. 5, pp. 1243-1252, May 2010. doi: 10.1109/TBME.2009.2038487.
- Picard, R.W. (2010), “Emotion Research by the People, for the People,” Emotion Review, vol. 2, no. 3, July 2010.
- Poh, M.Z., Loddenkemper, T., Reinsberger, C., Swenson, N.C., Goyal, S., Madsen, J.R., Picard, R.W. (2012), “Autonomic Changes with Seizures Correlate with Postictal EEG Suppression,” Neurology, vol. 78, no. 23, pp. 1868-1876, June 5, 2012.
- Hoque, M.E., McDuff, D.J., Picard, R.W. (2012), “Exploring Temporal Patterns in Classifying Frustrated and Delighted Smiles,” IEEE Transactions on Affective Computing, vol. 3, no. 3, September 2012.
- Dinakar, K., Jones, B., Havasi, C., Lieberman, H., Picard, R.W. (2012), “Common Sense Reasoning for Detection, Prevention, and Mitigation of Cyberbullying,” ACM Transactions on Interactive Intelligent Systems, vol. 2, no. 3, Article 18, September 2012. doi: 10.1145/2362394.2362400.
- Sano, A., Picard, R.W. (2013), “Recognition of Sleep Dependent Memory Consolidation with Multi-modal Sensor Data,” 10th Annual Body Sensor Networks Conference (BSN 2013), Cambridge, MA, May 2013.
- McDuff, D., Kaliouby, R., Senechal, T., Amr, M., Cohn, J., Picard, R.W. (2013), “AMFED Facial Expression Dataset: Naturalistic and Spontaneous Facial Expressions Collected In-the-Wild,” IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW 2013), Portland, OR, USA, June 2013.
- Hoque, M.E., Courgeon, M., Martin, J.-C., Mutlu, B., Picard, R.W. (2013), “MACH: My Automated Conversation coacH,” 15th International Conference on Ubiquitous Computing (UbiComp 2013), September 8-12, 2013.
- Ahn, H.I., Picard, R.W. (2014), “Measuring Affective-Cognitive Experience and Predicting Market Success,” IEEE Transactions on Affective Computing, to appear.
- McDuff, D., Gontarek, S., Picard, R.W. (2014), “Improvements in Remote Cardio-Pulmonary Measurement Using a Five Band Digital Camera,” IEEE Transactions on Biomedical Engineering, to appear.
- Picard, R.W., Fedor, S., Ayzenberg, Y. (2014), “Multiple Arousal Theory and Daily-Life Electrodermal Activity Asymmetry,” Emotion Review, to appear.