Tutorials

We’ve assembled a wonderful group of tutorials for this year’s ACM Multimedia. We have something for everybody, from new students just starting work in multimedia to the most seasoned researcher. Our presenters have worked hard to distill the essence of their topics into a half-day tutorial. Spend the day learning everything you need to understand and work in these areas.

The tutorials will be held on Monday, 27 October 2008, and they are grouped into four areas:

 
  • Content Access
  • Ubiquitous Multimedia
  • Multimedia Devices
  • Tools You Can Use - How do you make your multimedia ideas real?

We hope you will enjoy these tutorials. We think there is something for everyone!

Schedule (Monday, October 27, 2008)

0830-1200 (morning sessions)
  T1: Multimedia Content Protection (Dulce Ponceleón and Nelly Fazio)
  T3: Music Recommendation (Òscar Celma and Paul Lamere)
  T4: A Glimpse of Multimedia Ambient Intelligence (Abdulmotaleb El Saddik and Rosa Iglesias)
  T6: Multimedia Power Management on a Platter: From Audio to Video & Games (Samarjit Chakraborty and Ye Wang)
  T8: Mobile Phone Programming for Multimedia (Jürgen Scheible)

1200-1330  Lunch

1330-1700 (afternoon sessions)
  T2: Recent Developments in Content-based and Concept-based Image/Video Retrieval (Rong Yan and Winston Hsu)
  T5: Storage, Retrieval, and Communication of Body Sensor Network Data (Balakrishnan Prabhakaran)
  T7: Haptics Technologies: Theory and Applications from a Multimedia Perspective (Kanav Kahol and Abdulmotaleb El Saddik)
  T9: Authoring Educational Multimedia (Nalin Sharda)

Content Access

Tutorial 1: Multimedia Content Protection (Presenters: Dulce Ponceleón and Nelly Fazio)

Multimedia content protection is a controversial topic. Content owners want to protect their rights, while consumers want flexible usage, privacy, and seamless content flow. In this tutorial we cover content protection from its cryptographic fundamentals to its history, emerging standards, state-of-the-art approaches, and live demos. We also review several content protection standards, such as 4C and the Advanced Access Content System (AACS).

This half-day introductory tutorial teaches multimedia researchers, practitioners, and consumers in general the basic cryptographic techniques used in content protection, with a focus on entertainment content. We also provide a brief history of content protection systems, giving insight into their evolution and a peek into current technical and business trends. The tutorial is targeted at a beginner to intermediate audience; no background in cryptography is assumed. Intermediate students will have the opportunity to get an overview of current content protection systems and emerging standards. We will provide context for all the important ideas, with as much depth as possible, and give pointers to the literature for those who want more details.
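
For readers who want a concrete picture before the tutorial, the sketch below illustrates, in deliberately simplified form, the key-hierarchy idea that underlies systems like CPRM/CPPM and AACS: the content is encrypted once under a title key, and that title key is wrapped separately for each authorized device. The toy XOR-based cipher, the key sizes, and the device names are all illustrative assumptions; real systems use standardized block ciphers and broadcast-encryption schemes, which the tutorial covers properly.

    import hashlib
    import os

    def toy_encrypt(key, data):
        # Toy stream "cipher": XOR with a hash-derived keystream.
        # For illustration only; real systems use standardized block ciphers.
        stream = b""
        counter = 0
        while len(stream) < len(data):
            stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
            counter += 1
        return bytes(a ^ b for a, b in zip(data, stream))

    toy_decrypt = toy_encrypt  # XOR is its own inverse

    # One title key protects the content itself.
    title_key = os.urandom(32)
    encrypted_content = toy_encrypt(title_key, b"the multimedia content")

    # The title key is wrapped once per authorized device key, so a
    # compromised device can later be excluded simply by no longer
    # wrapping the title key for it on new releases.
    device_keys = {"player_A": os.urandom(32), "player_B": os.urandom(32)}
    wrapped = {name: toy_encrypt(dk, title_key) for name, dk in device_keys.items()}

    # An authorized player first recovers the title key, then the content.
    tk = toy_decrypt(device_keys["player_A"], wrapped["player_A"])
    assert toy_decrypt(tk, encrypted_content) == b"the multimedia content"

Production systems such as AACS replace this naive per-device wrapping with media key blocks derived from broadcast-encryption schemes, which is the kind of machinery the tutorial goes on to explain.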

Presenter’s biography: Dr. Nelly Fazio earned her M.Sc. (‘03) and Ph.D. (‘06) in Computer Science from New York University. During her studies, she also conducted research at Stanford University, École Normale Supérieure (France), and Aarhus University (Denmark). In 2003, she was awarded the NYU CIMS Sandra Bleistein prize for "notable achievement by a woman in Applied Mathematics or Computer Science." Her Ph.D. thesis received an honorable mention for the NYU J. Fabri prize, awarded yearly for the "most outstanding dissertation in Computer Science." Dr. Fazio’s research interests are in cryptography and information security, with a focus on digital content protection. Since July 2006, she has been part of the Content Protection group at IBM Almaden Research Center, where she has been conducting research on advanced cryptographic key management, tracing technologies, and authenticated communication in dynamic federated environments. Currently, she is a visiting research scientist in the Security group at IBM T.J. Watson Research Center, working on security issues of decentralized environments such as mobile ad-hoc networks (MANETs) and sensor networks.

Presenter’s biography: Dr. Dulce B. Ponceleón holds an M.S. and a Ph.D. in Computer Science from Stanford University. She worked in the Advanced Technology Group at Apple Computer, Inc., on information retrieval, video compression, and audio compression technologies for QuickTime, and was a key contributor to the first software-only videoconferencing system. She is currently at the IBM Almaden Research Center, where she manages the Content Protection Competency Center. She has worked on multimedia content analysis and indexing, video summarization, applications of speech recognition, storage systems, and content protection. She contributed to the ISO MPEG-7 standardization efforts, specifically in Multimedia Description Schemes. She is an IBM technical representative in 4C and the Advanced Access Content System (AACS). The 4C Entity has developed content protection standards for recordable and pre-recorded media (CPRM/CPPM), and Dr. Ponceleón has chaired the 4C Technical Group since 2004. AACS is a content protection standard for managing content stored on the next generation of pre-recorded and recorded optical media for consumer use with PCs and CE devices. Dr. Ponceleón has been on the Scientific Advisory Board of a leading NSF multimedia school and a program committee member of ACM Multimedia, SPIE, SIGIR, IEEE, and several multimedia workshops. She has held workshops on multimedia standards (ACM MM 2000), panels on streaming video (ACM MM 2001), and multimedia information retrieval tutorials (SIGIR 2002, SIGIR 2005, and ICASSP 2006). She holds several patents and has numerous publications in video and audio compression, multimedia information retrieval, content protection, human-computer interfaces, numerical linear algebra, and non-linear programming.

Tutorial 2: Recent Developments in Content-based and Concept-based Image/Video Retrieval  (Presenters: Rong Yan and Winston Hsu)

Recent advances in processor speed, high-speed networks, and the availability of massive digital storage have led to an explosive amount of image/video data. Although the exact amount is difficult to estimate, its enormous scale can be gauged from two statistics: about 83 million digital still cameras were sold in 2006, and video already accounts for more than half of Internet traffic, with YouTube alone taking 10%. While many users happily spend their leisure time watching videos on YouTube or browsing photos on Flickr, effectively searching large-scale multimedia collections for useful information, especially collections outside the open web, remains an unsolved problem.

Visual retrieval systems, whose goal is to find images or videos in response to queries, offer an important platform for accessing and managing this vast amount of image/video content. The technique has drawn increasing attention from existing search engines such as Google, Yahoo, and Blinx. Current commercial visual search engines are mostly built on either plain text information (e.g., captions and file names) or tags manually provided by users. However, this is far from sufficient for indexing the semantic content of image/video data, as a recent Wired article observed: "Search engines can not index video files as easily as text. That is tripping up the Web’s next great leap forward."

This tutorial aims to provide participants with broad and comprehensive coverage of the foundations and recent developments of content-based and concept-based image and video retrieval, including both theoretical and practical results as well as illustrative demos. In contrast to previous tutorials on image/video retrieval, we focus on presenting and discussing the theoretical advances of emerging visual retrieval approaches from the perspectives of content and concepts, offering useful and complementary information to multimedia researchers preparing to pursue this research area.
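
As a purely illustrative aside (not the presenters' material), the following sketch contrasts tag-based matching with concept-detector scores on a tiny invented video collection, using a simple late-fusion rule; the collection, the concept names, and the equal weighting are all assumptions made for this example.

    # Hypothetical toy collection: each video has user-supplied tags and
    # scores in [0, 1] from visual concept detectors run on its frames.
    videos = {
        "clip1": {"tags": {"beach", "holiday"},
                  "concepts": {"outdoor": 0.9, "water": 0.8, "person": 0.4}},
        "clip2": {"tags": {"lecture"},
                  "concepts": {"indoor": 0.9, "person": 0.9, "screen": 0.7}},
        "clip3": {"tags": {"surfing", "beach"},
                  "concepts": {"outdoor": 0.8, "water": 0.9, "person": 0.7}},
    }

    def rank(query_tags, query_concepts, alpha=0.5):
        # Late fusion of a tag-match score and a concept score;
        # alpha weights textual vs. visual evidence (equal weight assumed here).
        results = []
        for vid, meta in videos.items():
            text_score = len(query_tags & meta["tags"]) / max(len(query_tags), 1)
            concept_score = sum(meta["concepts"].get(c, 0.0) for c in query_concepts)
            concept_score /= max(len(query_concepts), 1)
            results.append((alpha * text_score + (1 - alpha) * concept_score, vid))
        return sorted(results, reverse=True)

    # Query intent: "people in the water at the beach"
    print(rank({"beach"}, {"water", "person"}))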

Presenter’s biography: Dr. Rong Yan has been a Research Staff Member in the Intelligent Information Management Department at the IBM T. J. Watson Research Center since 2006. Dr. Yan received his M.Sc. (2004) and Ph.D. (2006) degrees from Carnegie Mellon University’s School of Computer Science. His research interests include multimedia retrieval, video content analysis, data mining, machine learning, and computer vision. Dr. Yan was the principal designer of the automatic/manual video retrieval systems that achieved the best performance in the worldwide TRECVID evaluations in 2003 and 2005. He received the Best Paper Runner-Up award at ACM Multimedia 2004 and ACM CIVR 2007. Dr. Yan has authored or co-authored more than 55 international conference and journal papers. He holds 1 U.S. patent and has 3 patents pending. He has served on the NSF CAREER proposal review panel. Dr. Yan has served or is serving on the program committees of ICME’06-08, CIKM’08, SIGIR’07-08, CIVR’07-08, ACM MM’04/07, and several other conferences. He has been a co-chair for the industrial program at ISM’08, the VideOlympics showcase at CIVR’07-08, and a special session at ICSC 2007. Dr. Yan is a reviewer for more than 10 international journals. He is a member of IEEE and ACM.

Presenter’s biography: Dr. Winston Hsu is an Assistant Professor in the Graduate Institute of Networking and Multimedia, National Taiwan University, and the founder of the MiRA (Multimedia indexing, Retrieval, and Analysis) Research Group. He received his Ph.D. (2006) from Columbia University, New York. Before that, he worked at a multimedia software company, serving as Engineer, Project Leader, and R&D Manager. Dr. Hsu’s current research interests are in enabling "Next-Generation Multimedia Retrieval" and generally include content analysis, mining, retrieval, and machine learning over large-scale multimedia databases. His work in video analysis and retrieval has produced one of the best-performing systems in the TRECVID benchmarks since 2003. He received the Best Paper Runner-Up award at ACM Multimedia 2006 and was named to the "Watson Emerging Leaders in Multimedia Workshop 2006" by IBM. Dr. Hsu is a frequent reviewer for major international journals. He is a member of IEEE and ACM.

Tutorial 3: Music Recommendation (Presenters: Òscar Celma and Paul Lamere)

As the world of online music grows, music recommendation systems become an increasingly important way for music listeners to discover new music. Commercial recommenders such as Last.fm and Pandora have enjoyed commercial and critical success. But how well do these systems really work? How good are the recommendations? How far into the 'long tail' do these recommenders reach? In this tutorial we look at the current state-of-the-art in music recommendation. We examine current commercial and research systems, focusing on the advantages and the disadvantages of the various recommendation strategies. We look at some of the challenges in building music recommenders and we explore some of the ways that Multimedia Information Retrieval techniques can be used to improve future recommenders, including a multi-modal approach merging the different fields (audio, image, video and text).
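
To make one of the recommendation strategies concrete, here is a minimal sketch (not the presenters' systems) of item-to-item collaborative filtering over an invented play-count matrix, with a placeholder for blending in audio-content similarity in the multi-modal spirit described above; all data, names, and weights are illustrative.

    import numpy as np

    # Toy play-count matrix: rows are listeners, columns are tracks (invented data).
    tracks = ["track_a", "track_b", "track_c", "track_d"]
    plays = np.array([[12, 0, 3, 0],
                      [ 8, 2, 5, 0],
                      [ 0, 9, 0, 7],
                      [ 1, 11, 0, 5]], dtype=float)

    def cosine_similarity(m):
        # Track-by-track cosine similarity of the play-count columns.
        norms = np.linalg.norm(m, axis=0, keepdims=True)
        norms[norms == 0] = 1.0
        unit = m / norms
        return unit.T @ unit

    usage_sim = cosine_similarity(plays)

    # A hybrid recommender could blend usage similarity with an audio-content
    # similarity matrix (placeholder identity matrix here).
    audio_sim = np.eye(len(tracks))
    hybrid_sim = 0.7 * usage_sim + 0.3 * audio_sim

    def recommend(track, k=2):
        i = tracks.index(track)
        order = np.argsort(-hybrid_sim[i])
        return [tracks[j] for j in order if j != i][:k]

    print(recommend("track_a"))  # tracks co-listened with track_a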

Presenter’s biography: Òscar Celma has been a researcher at the Music Technology Group since 2000 and is an Associate Professor at Pompeu Fabra University, Barcelona (Spain). Since 2006 he has been an Invited Expert of the W3C Multimedia Semantics Incubator Group. He is a member of the program committee of the Workshop on Learning the Semantics of Audio Signals (LSAS). In 2006, he received 2nd prize in the International Semantic Web Challenge for the system named "Foafing the Music" (a personalized music recommendation and discovery application).

Presenter’s biography: Paul Lamere is the Principal Investigator for a project called Search Inside the Music at Sun Labs, where he explores new ways to help people find highly relevant music, even as music collections get very large. Paul is especially interested in hybrid music recommenders and in using visualizations to aid music discovery. He serves on the program committee for ISMIR 2008, the International Conference on Music Information Retrieval, as well as on the program committee for RecSys 2008, the ACM Conference on Recommender Systems. Paul also authors Duke Listens!, a blog focusing on music discovery and recommendation.

Ubiquitous Multimedia

Tutorial 4: A Glimpse of Multimedia Ambient Intelligence (Presenters: Abdulmotaleb El Saddik and Rosa Iglesias)

The tutorial first presents a brief overview of the systems and technologies that are part of multimedia Ambient Intelligence (AmI). It then introduces multimedia AmI services, components, and access networks, which are essential for the development of AmI environments. Next, fundamental technologies are presented, such as wired/wireless, gateway, and middleware technologies. Finally, some applications are introduced together with the concepts of "Personal Administrator" and "Extended Home Environment". These applications range from "Home care and safety" and "Health care" to "Information and Entertainment" and "Follow-me multimedia services". In brief, the tutorial will cover the following topics: introduction to multimedia AmI; the multimedia AmI technology roadmap; multimedia AmI services, components, and access networks; wired, wireless, gateway, and middleware technologies; the "Personal Administrator (PA)" concept, its scope of work, its merging of various user interface paradigms, its outline, and its architecture; the "Extended Home Environment (EHE)" concept, its scope of work, and how it enhances user experiences; and closing remarks.

Presenter's biography: Dr. Abdulmotaleb El Saddik is University Research Chair and Associate Professor at SITE, University of Ottawa, and a recipient of, among others, the Friedrich Wilhelm Bessel Research Award from Germany’s Alexander von Humboldt Foundation (2007) and the Premier’s Research Excellence Award (PREA, 2004). He is the director of the Multimedia Communications Research Laboratory and of the ICT cluster of the Ontario Research Network on E-Commerce. He is a leading researcher in haptics, service-oriented architectures, collaborative virtual environments, and ambient interactive media and communications. He has authored and co-authored three books and more than 200 publications. His research has twice been selected for a Best Paper Award. He is an IEEE Distinguished Lecturer. Visit: http://www.mcrlab.uottawa.ca or http://www.site.uottawa.ca/~elsaddik

 

Presenter’s biography: Dr. Rosa Iglesias is currently a researcher at the Ikerlan Technological Research Center, Spain. She holds a PhD in Computer Science from the University of the Basque Country in Spain. Her PhD work, on networked haptic virtual environments for assembly tasks, was carried out at Labein (Spain), partly during a visiting stay at MIT and Queen’s University Belfast. She also held a post-doctoral position at SITE, University of Ottawa, Canada. She received the second Thesis Award from the Basque research association in 2007. Her research interests span ambient intelligent technologies and applications, networked haptic virtual environments, and haptic applications. She is currently involved in several European projects on Ambient Intelligence, including SmartTouch and AmIe. She is a member of IEEE and ACM and of the editorial board of the International Journal of Advanced Media and Communication (IJAMC).

Tutorial 5: Storage, Retrieval, and Communication of Body Sensor Network Data (Presenter: Balakrishnan Prabhakaran)

Body Sensor Networks (BSNs) are increasingly being deployed for monitoring and managing medical conditions as well as human performance in sports. These BSNs include various sensors such as accelerometers, gyroscopes, EMG (electromyogram) and EKG (electrocardiogram) sensors, and others, depending on the needs of the medical condition. Data from these sensors are typically time series, and the data from multiple sensors form multiple, multidimensional time series. Analyzing data from such multiple medical sensors poses several challenges: different sensors have different characteristics, different people generate different patterns through these sensors, and even for the same person the data can vary widely depending on time and environment.

This tutorial describes the technologies behind BSNs, covering both the hardware infrastructure and the basic software. First, we outline the BSN hardware features and the related requirements. We then discuss the energy and communication choices for BSNs. Next, we discuss approaches for classification, data mining, visualization, and securing of these data. We also show several demonstrations of body sensor networks as well as of the software that aids in analyzing the data.
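
As a small illustration of the kind of preprocessing that typically precedes the classification and mining steps described above, the sketch below extracts simple sliding-window features (per-axis mean and standard deviation) from synthetic accelerometer and gyroscope streams; the 50 Hz sampling rate, window length, and random data are assumptions for the example, not the presenter's setup.

    import numpy as np

    # Synthetic stand-ins for two body sensors sampled at an assumed 50 Hz:
    # a 3-axis accelerometer and a 3-axis gyroscope, 10 seconds each.
    rng = np.random.default_rng(0)
    fs, seconds = 50, 10
    accel = rng.normal(size=(fs * seconds, 3))  # shape: (samples, axes)
    gyro = rng.normal(size=(fs * seconds, 3))

    def window_features(signal, win=100, step=50):
        # Slide a window over a multi-axis time series and compute
        # per-axis mean and standard deviation as simple features.
        feats = []
        for start in range(0, len(signal) - win + 1, step):
            w = signal[start:start + win]
            feats.append(np.concatenate([w.mean(axis=0), w.std(axis=0)]))
        return np.array(feats)

    # Concatenate features from both sensors into one matrix, ready for a
    # classifier (e.g., activity or gait recognition).
    features = np.hstack([window_features(accel), window_features(gyro)])
    print(features.shape)  # (number of windows, 12): 6 features per sensor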

Presenter's biography: Dr. B. Prabhakaran is an Associate Professor in the Computer Science Department at the University of Texas at Dallas. He has been working in the area of multimedia systems: animation and multimedia databases, authoring and presentation, resource management, and scalable web-based multimedia presentation servers. Dr. Prabhakaran received the prestigious National Science Foundation (NSF) CAREER Award in 2003 for his proposal on animation databases. He is also the Principal Investigator for a US Army Research Office (ARO) grant on 3D data storage, retrieval, and delivery. He has published several research papers in refereed conferences and journals in this area. He served as an Associate Chair of the ACM Multimedia Conference in 2006 (Santa Barbara), 2003 (Berkeley, CA), 2000 (Los Angeles, CA), and 1999 (Orlando, FL). He has served as guest editor (special issue on Multimedia Authoring and Presentation) for the ACM Multimedia Systems journal and serves on the editorial board of the Multimedia Tools and Applications journal, Springer. He has also served as a program committee member for several multimedia conferences and workshops, and has presented tutorials at ACM Multimedia and other multimedia conferences. Dr. Prabhakaran has served as visiting research faculty with the Department of Computer Science, University of Maryland, College Park, and has been a faculty member in the Department of Computer Science at the National University of Singapore as well as at the Indian Institute of Technology, Madras, India.


Multimedia Devices

Tutorial 6: Multimedia Power Management on a Platter: From Audio to Video & Games (Presenters: Samarjit Chakraborty and Ye Wang)

Multimedia applications today constitute a sizeable workload that needs to be supported by a host of mobile devices ranging from cell phones to PDAs and portable game consoles. Battery life is a major design concern for all of these devices. While both the complexity of multimedia applications and the hardware architecture of these devices have progressed at a phenomenal rate over the last decade, progress in battery technology has been relatively stagnant. As a result, considerable effort is currently being spent on developing high-level power management and application tuning techniques that minimize energy consumption and thereby prolong battery life. Such techniques include dynamically scaling the underlying processor's voltage and clock frequency in response to a time-varying workload, powering down certain system components when they are not in frequent use, and backlight scaling in LCDs with controlled image-quality degradation. Application tuning techniques include selectively ignoring certain perceptually irrelevant computations during audio decoding, and injecting metadata with workload information into video clips, which can then be used to accurately estimate the decoding workload at run time for better power management. In this tutorial, we plan to give a comprehensive overview of this area and discuss power management schemes for a broad spectrum of multimedia applications. In particular, we will talk about several power management and application tuning techniques specifically directed towards audio decoding, video processing, and interactive 3-D game applications. Starting from the basics of power management for portable devices, we will discuss the necessary mathematical techniques, give high-level overviews of relevant algorithms, and also present the hardware setup that is necessary to perform research and development in this area.
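
As a hedged illustration of the frequency-scaling idea mentioned above, the sketch below picks the lowest operating point of a hypothetical processor that still decodes the next frame before its deadline, given a workload prediction such as one derived from metadata embedded in the clip; the operating points, power numbers, and workload figures are invented for the example.

    # Operating points of a hypothetical processor:
    # (clock frequency in MHz, active power in mW), invented numbers.
    OPERATING_POINTS = [(100, 40), (200, 95), (400, 240), (600, 430)]
    FRAME_PERIOD_MS = 40.0  # 25 frames per second

    def pick_operating_point(predicted_kilocycles):
        # Choose the lowest frequency that still decodes the next frame
        # before its deadline. 1 MHz is 1 kilocycle per millisecond,
        # so decode time in ms = predicted kilocycles / frequency in MHz.
        for freq_mhz, power_mw in OPERATING_POINTS:
            decode_time_ms = predicted_kilocycles / freq_mhz
            if decode_time_ms <= FRAME_PERIOD_MS:
                return freq_mhz, power_mw, decode_time_ms
        freq_mhz, power_mw = OPERATING_POINTS[-1]  # deadline cannot be met:
        return freq_mhz, power_mw, predicted_kilocycles / freq_mhz  # run flat out

    # An easy (low-motion) frame vs. a hard (high-motion) frame:
    print(pick_operating_point(3000))   # the slowest point suffices, saving power
    print(pick_operating_point(15000))  # a faster, costlier point is required

The same structure extends naturally to the other techniques listed above, such as backlight scaling or component power-down: estimate the workload or quality impact, then pick the cheapest setting that still meets the constraint.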

This tutorial is specifically directed towards an audience that has a fair amount of background in multimedia systems but is relatively new to embedded systems and power management techniques. The level of the tutorial ranges from introductory to intermediate, and no background in embedded systems, computer architecture, or hardware design is assumed. The lectures will introduce the relevant background material; give an overview of the current state of the art, focusing on some of the important algorithms from each class (audio, video, and games); introduce the hardware setup necessary to work in this area; and finally discuss the issues and challenges currently facing the multimedia power management domain. The material will be useful to researchers, students, and software developers focusing on multimedia applications. It will particularly appeal to those who are familiar with multimedia systems and are interested in working on issues related to power management.

Presenter's biography: Dr. Samarjit Chakraborty is an Assistant Professor of Computer Science at the National University of Singapore. He obtained his Ph.D. in Electrical and Computer Engineering from ETH Zurich in 2003. For his Ph.D. thesis, he received the ETH Medal and the European Design and Automation Association's "Outstanding Doctoral Dissertation Award" in 2004. His work has also received Best Paper Award nominations at DAC 2005, CODES+ISSS 2006, and ECRTS 2007, all premier conferences in the real-time/embedded systems area. Samarjit's research interests are primarily in system-level power/performance analysis of embedded systems. He has published extensively in major research forums on this topic, including DAC, DATE, CODES+ISSS, ASP-DAC, RTSS, and RTAS, and regularly serves on the technical program committees of many of these conferences. Over the last few years he has been working on various problems specifically related to power management of multimedia applications and has co-authored several papers and patents in this area.

Presenter's biography: Dr. Ye Wang received his Dr.-Tech. degree from the Department of Information Technology, Tampere University of Technology, Finland. In 2001, he spent a research term at the University of Cambridge, U.K., working with Prof. Brian Moore on compressed-domain audio processing. He is currently an Assistant Professor in the Department of Computer Science, School of Computing, National University of Singapore. Dr. Wang had a nine-year career with Nokia Research Center in Finland as a research engineer and senior research engineer, where he worked on Digital Audio Broadcasting (DAB) receiver prototype development, optimization of perceptual audio coding algorithms, error-resilient audio content delivery to mobile phones, and compressed-domain audio processing for multimedia applications on small devices. His research interests include audio compression and content-based processing, perception-aware and low-power audio processing, and error-resilient content delivery to handheld devices via wireless networks. He holds a dozen patents in these areas and has published about 30 international journal and conference papers. He is a member of the Technical Committee on Coding of Audio Signals of the Audio Engineering Society and a member of the Multimedia Communications Technical Committee of the IEEE Communications Society.

Tutorial 7: Haptics Technologies: Theory and Applications from a Multimedia Perspective (Presenters: Kanav Kahol and Abdulmotaleb El Saddik)

This tutorial aims to provide an initial impetus toward enabling multimedia researchers to conduct research in the area of haptic user interfaces. It will give the audience an introduction to the field of haptics, and the presented material will be made accessible to the multimedia community by relating concepts from the haptics domain to multimedia algorithms and systems. For example, haptic sensing, which refers to the systems and algorithms that record haptic data such as texture and shape, will be compared and contrasted with shape-from-shading and shape-from-motion approaches. We hope that this approach will catalyze interest in haptics from the multimedia community, enabling a growth of research labs in academia and industry that study haptic systems. The advent of recent devices such as multitouch screens and next-generation movement-based gaming systems such as the Nintendo Wii® has made touch-based interfaces an intriguing novelty. Our tutorial is timely in that it will enable the multimedia community to gain an understanding of haptic devices and systems and how to employ them effectively in their own systems and interfaces.
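
For readers new to haptics, the following sketch shows one of the simplest ideas usually covered in introductions to haptic rendering: a penalty-based virtual wall that pushes back with a spring force proportional to penetration depth. The stiffness value and the 1 kHz loop rate mentioned in the comments are illustrative assumptions, not material from the presenters.

    # Penalty-based rendering of a virtual wall occupying the half-space x < 0.
    # The haptic device reports the probe position; we command a force along
    # the wall normal (+x). Stiffness is an assumed, illustrative value.
    STIFFNESS = 500.0  # N/m; real devices impose device-specific stiffness limits

    def wall_force(probe_x_m):
        # Hooke's law: force = stiffness * penetration depth (zero outside the wall).
        penetration = -probe_x_m if probe_x_m < 0.0 else 0.0
        return STIFFNESS * penetration

    # In practice a ~1 kHz servo loop reads the position and commands the force.
    for x in (0.01, 0.0, -0.002, -0.01):  # probe positions in metres
        print("probe at %+.3f m, force %.1f N" % (x, wall_force(x)))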

Presenter's biography: Dr. Kanav Kahol is an Assistant Professor in the Department of Biomedical Informatics at Arizona State University and the manager of the Human Machine Symbiosis Lab. He is affiliated with Banner Good Samaritan Medical Center, Phoenix, as research faculty in its Simulation Education and Training Center (SimET Center). Dr. Kahol's primary research interest lies in the design, development, and evaluation of haptic user interfaces. He views haptic interfaces as a major component of developing human-machine symbiotic entities. In keeping with this view, Dr. Kahol takes an interdisciplinary approach to haptics research, spanning cognitive psychology, neurology, computer science, signal processing, and informatics. He has conducted applied research in the areas of medical simulation, multimodal user interfaces (including mobile device interfaces), and assistive and rehabilitative devices. Dr. Kahol has published several journal and conference papers, and has organized workshops and journal special issues pertaining to haptics research. Please visit http://www.public.asu.edu/~kkahol for more information.

Presenter's biography: Dr. Abdulmotaleb El Saddik is University Research Chair and Associate Professor at SITE, University of Ottawa, and a recipient of, among others, the Friedrich Wilhelm Bessel Research Award from Germany’s Alexander von Humboldt Foundation (2007) and the Premier’s Research Excellence Award (PREA, 2004). He is the director of the Multimedia Communications Research Laboratory and of the ICT cluster of the Ontario Research Network on E-Commerce. He is a leading researcher in haptics, service-oriented architectures, collaborative virtual environments, and ambient interactive media and communications. He has authored and co-authored three books and more than 200 publications. His research has twice been selected for a Best Paper Award. He is an IEEE Distinguished Lecturer. Visit: http://www.mcrlab.uottawa.ca or http://www.site.uottawa.ca/~elsaddik


Tools You Can Use

Tutorial 8: Mobile Phone Programming for Multimedia (Presenter: Jürgen Scheible)

If you are an enthusiastic mobile phone user who has many ideas and new ways of using your phone, this practical hands-on tutorial will show you how to realize your own novel concepts and ideas without spending too much time and effort. It aims to equip you with some practical skills of programming mobile devices for your projects and to bring inspiration for innovation.

Whether you are a novice programmer with some basic programming or scripting knowledge (Flash, PHP, ...) or an experienced programmer, you will get a quick overview and understanding of programmable phone features, especially for multimedia. Above all, you will gain practical experience in writing mobile applications with Mobile Python (Python for S60, Nokia), even within this short tutorial time.

Topics to be covered:

    1. Introduction to Mobile Python (Python for S60)
    2. Demo examples (capabilities and limitations of Python S60)

    Hands-on session:
    3. GUI programming, SMS sending/receiving
    4. Text to speech, Sound recording/playing, MIDI
    5. Camera (taking photo & video), 2D Graphics, OpenGL ES
    6. Bluetooth, Keyboard key programming, Video playing
    7. GPS location reader
    8. Networking, HTTP, HTTPS, Socket communication, WLan
    9. Motion sensor (e.g., for gesture based UI)
    10. Client-server applications
    (Phones will be provided, but bring your own laptop: Mac, Windows, or Linux.)

Online tutorial of Python for S60: http://www.mobilenin.com/pys60/menu.htm

Recommended reading:
Book: Mobile Python - Rapid prototyping of applications on the mobile platform (2007) Scheible J., Tuulos V., Publisher: Wiley, ISBN: 978-0-470-51505-1. http://www.mobilepythonbook.org/

The mobile space and the Internet are rapidly converging and are turning into a rich source of opportunities. Modern mobile phones offer a large set of features including camera, sound, video, messaging, telephony, location, Bluetooth, WiFi, GPS, Internet access, motion sensors, and more. These features can easily be combined to create new types of applications that bring engaging experiences to users.

The problem: developing applications on the mobile platform has been time-consuming in the past and required a steep learning curve. As a result, people often gave up early or never started to turn their innovative ideas into working solutions. In research projects, we often face a lack of time and resources. Additionally, we often need to apply a rapid, iterative design process when building our applications, but we may lack suitable tools for doing so.

Python for S60, which is introduced in this tutorial, offers a crucial turning point here. It allows mobile applications to be developed even by novice programmers, artists, and people from the creative communities, enabling them to contribute applications and concepts to the mobile space.

  • Python for S60 is easy to learn
  • It can drastically reduce development time
  • It makes rapid prototyping easy and efficient by wrapping complex low-level technical details behind simple interfaces
  • Above all, it makes programming on the mobile platform fun (a short example sketch follows below).
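
As a taste of how compact PyS60 code can be, here is a minimal sketch combining two of the hands-on topics (a GUI dialog and SMS sending); it assumes a phone with the Python for S60 runtime installed, and the recipient number is a placeholder.

    # Minimal Python for S60 sketch; run it on a phone with the PyS60 runtime.
    # It combines two hands-on topics: a simple GUI dialog and SMS sending.
    # The recipient number below is a placeholder, not a real number.
    import appuifw
    import messaging

    text = appuifw.query(u"Message to send:", "text")
    if text:
        messaging.sms_send("+0000000000", text)  # placeholder recipient
        appuifw.note(u"SMS sent!", "info")
    else:
        appuifw.note(u"Cancelled", "error")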

Presenter’s biography: Jürgen Scheible is a telecommunications engineer and a music and media artist. He is a doctoral student at the Media Lab, University of Art and Design Helsinki, where he runs the Mobile Hub, a prototype development environment for mobile client/server applications with a strong focus on artistic approaches and creative design. He spent several months in 2006 as a visiting scientist at MIT CSAIL in Boston and previously worked for Nokia for 8 years. In 2006 and 2007, Jürgen was recognized as a Forum Nokia Champion for his driving vision of being a bridge builder between art, engineering, and research. He is internationally active in teaching innovation workshops on rapid mobile application prototyping in academic as well as professional settings, e.g., at Stanford University, MIT, NTU Taiwan, Yahoo Research Berkeley, and Nokia. In the second half of 2007, his book “Mobile Python” was published by Symbian Press/Wiley, bringing ‘easy programming’ of mobile phones to the creative communities. He was one of the ACM Computers in Entertainment Scholarship Award winners in 2006 and the Best Arts Paper Award winner at the ACM Multimedia 2005 conference. His research focuses on designing multimodal user interfaces for creating and sharing interactive artistic experiences.

Tutorial 9: Authoring Educational Multimedia (Presenter: Nalin Sharda)

The lack of systematic processes for authoring Educational Multimedia content is impeding the realization of its full potential. Catering to different learning styles is another challenge in creating effective Educational Multimedia content. Storytelling has been recognized as an effective pedagogical technique. Nalin Sharda has invented Movement Oriented Design (MOD) to provide a framework for systematically developing Educational Multimedia stories using the good storytelling principles articulated by masters of the craft such as Aristotle and Robert McKee.

This half-day tutorial will first acquaint the participants with Movement Oriented Design principles and imperatives for creating good Educational Multimedia stories. Then each participant will develop an Educational Multimedia story on a topic of their choice, and share it with other participants in a group. This tutorial will be useful for academics as well as industry participants involved in creating educational multimedia. Participants will learn the process for developing Educational Multimedia content using MOD superimposed with good story telling principles, and how to cater for different learning styles. Each participant will develop and take away the outline of an Educational Multimedia story on a topic of their choice. Seminar and other notes will be provided to each participant.

Presenter’s biography: Dr. Nalin Sharda gained B.Tech. and Ph.D. degrees from the Indian Institute of Technology, Delhi, and is an Associate Professor of Computer Science and Multimedia at Victoria University, Australia. His publications include the Multimedia Information Networking textbook and over 100 papers and handbook chapters. He has invented the Movement Oriented Design (MOD) paradigm for the creation of effective multimedia stories and applied it to e-Learning and other applications. He has led projects for the Australian Sustainable Tourism CRC to develop e-Tourism using Semantic Web technologies and innovative visualization methodologies. Nalin has been invited to present lectures and seminars in the Distinguished Lecturer series of the European Union’s Prolearn program. He has presented over fifty seminars, lectures, and keynote addresses in Austria, Australia, Germany, Hong Kong, India, Malaysia, Pakistan, Japan, Singapore, Slovenia, Sweden, Switzerland, the UAE, and the USA. For further details visit http://sci.vu.edu.au/~nalin.


Contacts

For any questions regarding the tutorials, please email the Tutorials Chair (NUS, Singapore).

 