Technical Program
03 June 2009
1800 | Registration Opens
1900 | Reception

04 June 2009
05 June 2009
0800 | Registration and Breakfast

0830 - 1000 | Session 4: Virtual Environments and Games
Discussion Lead: Kuan-Ta Chen (Academia Sinica)
Scribe: Pengpeng Ni (Simula Research Lab and University of Oslo)

Presented by: Saurabh Ratti (University of Ottawa)
This paper proposes a new distributed architecture for update message
exchange in massively multi-user virtual environments (MMVEs). MMVE
applications require the delivery of updates among various locations in
the virtual environment. The proposed architecture exploits the
location addressing of geometrical routing in order to alleviate the
need for IP-specific queries. However, the use of geometrical routing
requires a careful choice of overlay to achieve high performance in
terms of minimizing delay. At the same time, the MMVE is dynamic, in
the sense that users are constantly moving in the 3D virtual space. As
such, our architecture uses a distributed topology control scheme that
aims to maintain the required QoS to best support greedy geometrical
routing, despite user mobility and churn. We further demonstrate the
functionality and performance of the proposed scheme through both
analysis and simulation.
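
The abstract does not detail the routing procedure itself; as a rough
illustration of the greedy geometrical forwarding it builds on, the
following Python sketch (node structure and names are hypothetical, not
taken from the paper) forwards an update toward a target position in the
virtual space and stops at a local minimum when no neighbor is closer,
which is exactly the situation the topology control scheme is meant to
avoid.

```python
import math

class Node:
    """Hypothetical overlay node: its position in the 3D virtual world
    doubles as its routing address, and its neighbor list is maintained
    by the distributed topology control scheme."""
    def __init__(self, node_id, position):
        self.node_id = node_id
        self.position = position        # (x, y, z) in the virtual space
        self.neighbors = []             # overlay links

def dist(a, b):
    return math.dist(a, b)

def greedy_route(source, target_position, max_hops=64):
    """Forward greedily: at each hop, hand the update to the neighbor
    closest to the target position."""
    current = source
    for _ in range(max_hops):
        if not current.neighbors:
            return current
        best = min(current.neighbors,
                   key=lambda n: dist(n.position, target_position))
        if dist(best.position, target_position) >= dist(current.position, target_position):
            return current              # local minimum: no neighbor is closer
        current = best
    return current
```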

Presented by: John Miller (Microsoft Research and University of Cambridge)
Peer-to-peer distributed virtual environments (DVEs) distribute state
tracking and state transitions. Many DVEs, such as online games,
require ways to fairly determine the outcome of probabilistic events.
While this is trivial when a trusted third party is involved, resolving
these actions fairly between adversaries without a trusted third party
is much more difficult. This paper proposes the Pairwise Random
Protocol (PRP), which uses secure coin flipping to enable adversaries
to fairly determine the result of a probabilistic event without a
trusted third party. Three different variations of PRP are presented,
and their time impact and network overhead are examined. We conclude
that PRP enables DVEs to distribute the work of determining
probabilistic events between adversaries without loss of security or
fairness, and with acceptable overhead.
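
PRP's construction is not given in the abstract; as a minimal sketch of
the commit-then-reveal coin flipping it relies on (function names and the
use of SHA-256 commitments are illustrative assumptions, not the paper's
protocol), two adversaries can derive a fair random outcome as follows.

```python
import hashlib
import secrets

def commit(value: int, nonce: bytes) -> bytes:
    """Hash commitment: hides the chosen value until the reveal phase."""
    return hashlib.sha256(nonce + value.to_bytes(8, "big")).digest()

# Commit phase: each party picks a secret random value and exchanges only
# the commitment, so neither can change its value after seeing the other's.
a_value, a_nonce = secrets.randbits(64), secrets.token_bytes(16)
b_value, b_nonce = secrets.randbits(64), secrets.token_bytes(16)
a_commitment, b_commitment = commit(a_value, a_nonce), commit(b_value, b_nonce)

# Reveal phase: both disclose value and nonce; each verifies the other's
# commitment before accepting the revealed value.
assert commit(a_value, a_nonce) == a_commitment
assert commit(b_value, b_nonce) == b_commitment

# The outcome combines both contributions, so it is unbiased as long as at
# least one party chose its value honestly at random.
outcome = (a_value ^ b_value) % 2
print("coin flip:", outcome)
```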

Presented by: Ke Liang (National University of Singapore)
In recent years, integrated spatialized voice services have become an
appealing application for networked virtual environments (NVEs), e.g.,
Second Life. With a spatialized voice service, people can identify who
is talking when there are several participants in the vicinity. The key
challenge in a spatialized audio streaming application is how to
disseminate audio streams while observing the bandwidth limits of
end-user computers and tight latency constraints; this can be modeled
as an NP-complete problem. In this paper, we propose a heuristic
algorithm called CTA for spatialized audio streaming over NVEs in a
peer-to-peer manner. The proposed algorithm was applied to real avatar
mobility traces collected from Second Life, and the simulation results
demonstrate that (a) CTA achieves a high ratio of successful to
candidate receivers and (b) CTA enables most of the successful
receivers to enjoy minimal latency.
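
The abstract does not describe CTA itself; purely as an illustration of
the underlying problem (building a forwarding structure under per-peer
bandwidth limits while keeping latency low), the following toy greedy
heuristic attaches each candidate receiver to the already-connected peer
that minimizes its latency and still has spare upload slots. The names
and data layout are assumptions for the sketch.

```python
import heapq
from itertools import count

def build_dissemination_tree(speaker, receivers, latency, capacity):
    """Toy heuristic (not the paper's CTA). latency[a][b] is the one-way
    delay between peers a and b; capacity[p] is how many children peer p
    can serve. Receivers that cannot be attached within these limits are
    the unsuccessful candidates."""
    tick = count()                              # tie-breaker for heap entries
    delay = {speaker: 0.0}                      # end-to-end latency from speaker
    children = {p: 0 for p in [speaker] + receivers}
    tree = {}                                   # receiver -> parent
    heap = [(latency[speaker][r], next(tick), speaker, r) for r in receivers]
    heapq.heapify(heap)
    while heap:
        d, _, parent, child = heapq.heappop(heap)
        if child in delay or children[parent] >= capacity[parent]:
            continue
        delay[child] = d
        tree[child] = parent
        children[parent] += 1
        for r in receivers:
            if r not in delay:
                heapq.heappush(heap, (d + latency[child][r], next(tick), child, r))
    return tree, delay
```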

1000 - 1030 | Coffee Break

1030 - 1200 | Session 5: Security
Discussion Lead: Klara Nahrstedt (UIUC)
Scribe: Ke Liang (National University of Singapore)

Presented by: Mohamed Hefeeda (Simon Fraser University)
We investigate the problem of securing the delivery of scalable video
streams so that receivers can ensure the authenticity (originality and
integrity) of the video. Our focus is on recent scalable video coding
techniques, e.g., H.264/SVC, that can provide three scalability types
at the same time: temporal, spatial, and quality (or PSNR). This
three-dimensional scalability offers great flexibility that enables
customizing video streams for a wide range of heterogeneous receivers
and network conditions. This flexibility, however, is not supported by
current stream authentication schemes in the literature. We propose an
efficient authentication scheme that accounts for the full scalability
of video streams: it enables verification of all possible substreams
that can be extracted and decoded from the original stream. Our
evaluation study shows that the proposed authentication scheme is
robust against packet losses, adds low communication and computation
overhead, and is suitable for live streaming systems because of its
short delay.
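
The abstract does not spell out the scheme's construction; one common
building block for authenticating every extractable substream is to hash
each scalability layer separately and sign a single digest over all the
layer hashes, so an extractor that drops layers only needs to keep their
small hashes. The sketch below illustrates that generic idea only; the
function names and layout are assumptions, not the paper's design.

```python
import hashlib

def make_auth_info(frame_layers):
    """Hash every scalability layer of a frame (or GOP) individually and
    compute a root digest over the layer hashes; the sender signs 'root'
    once with its private key (signing omitted here)."""
    layer_hashes = [hashlib.sha256(layer).digest() for layer in frame_layers]
    root = hashlib.sha256(b"".join(layer_hashes)).digest()
    return layer_hashes, root

def verify_substream(received_layers, dropped_layer_hashes, root):
    """A receiver recomputes hashes of the layers it actually received,
    combines them with the (small) hashes of the dropped layers that the
    extractor left in the stream, and checks the signed root digest."""
    hashes = [hashlib.sha256(layer).digest() for layer in received_layers]
    hashes += dropped_layer_hashes
    return hashlib.sha256(b"".join(hashes)).digest() == root
```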

Presented by: Tony Thomas (Nanyang Technological University)
To support scalability of business, a multiparty multilevel digital
rights management (DRM) architecture, in which multimedia content is
delivered by an owner to a consumer through several levels of
distributors, has been suggested as an alternative to the traditional
two-party (buyer-seller) DRM architecture. A combination of
cryptographic and watermarking techniques is usually used for secure
content delivery and for protecting the rights of the seller and the
buyer in the two-party DRM architecture. In a multiparty multilevel
DRM architecture, the cryptographic and watermarking mechanisms need to
ensure the secure delivery of the content as well as address the
security concerns of the owner, the multiple levels of distributors,
and the consumer. In this paper, we propose a mechanism that addresses
these security issues for delivering multimedia content through a
multiparty multilevel DRM architecture.

Presented by: Philip Branch (Swinburne University of Technology)
In this paper we present results of experimental work using machine
learning techniques to rapidly identify Skype traffic. We show that
Skype traffic can be identified by observing only 5 seconds of a Skype
traffic flow, with recall and precision better than 98%. We found the
most effective features for classification to be characteristic packet
lengths less than 80 bytes, statistics of packet lengths greater than
80 bytes, and inter-packet arrival times. Our classifiers do not rely
on observing any particular part of a flow. We also report on the
performance of classifiers built using combinations of two of these
features and of each feature in isolation.
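
The abstract names the feature families but not the classifier; the
following sketch shows how such features might be extracted from a
5-second flow window and fed to an off-the-shelf classifier. The feature
summary and the choice of a random forest are illustrative assumptions,
not the classifiers built in the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def flow_features(pkt_lengths, pkt_times):
    """Summarize one 5-second flow window: small-packet lengths (< 80 bytes),
    large-packet length statistics, and inter-packet arrival times."""
    lengths = np.asarray(pkt_lengths, dtype=float)
    iat = np.diff(np.asarray(pkt_times, dtype=float))
    small = lengths[lengths < 80]
    large = lengths[lengths >= 80]
    def stats(x):
        return [x.mean(), x.std(), x.min(), x.max()] if x.size else [0.0] * 4
    return stats(small) + stats(large) + stats(iat) + [float(small.size), float(large.size)]

# X_train/y_train would hold feature vectors from labelled flow windows
# (y = 1 for Skype, 0 otherwise):
# clf = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)
# print(clf.score(X_test, y_test))
```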

1200 - 1330 | Lunch

1330 - 1500 | Session 6: Understanding and Improving User Experience
Discussion Lead: Mohamed Hefeeda (Simon Fraser University)
Scribe: Wei Cheng (National University of Singapore)

Presented by: Kuan-Ta Chen (Academia Sinica, Taiwan)
An Empirical Evaluation of VoIP Playout Buffer Dimensioning in Skype, Google Talk, and MSN Messenger
VoIP playout buffer dimensioning has long been a challenging
optimization problem, as the buffer size must maintain a balance
between conversational interactivity and speech quality. The
conversational quality may be affected by a number of factors, some of
which may change over time. Although a great deal of research effort
has been expended in trying to solve the problem, how the research
results are applied in practice is unclear. In this paper, we
investigate the playout buffer dimensioning algorithms applied in three
popular VoIP applications, namely, Skype, Google Talk, and MSN
Messenger. We conduct experiments to assess how the applications adjust
their playout buffer sizes. Using an objective QoE (Quality of
Experience) metric, we show that Google Talk and MSN Messenger do not
adjust their respective buffer sizes appropriately, while Skype does
not adjust its buffer at all. In other words, they could provide better
QoE to users by improving their buffer dimensioning algorithms.
Moreover, none of the applications adapts its buffer size to the
network loss rate, which should also be considered to ensure optimal
QoE provisioning.
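
For readers unfamiliar with playout buffer dimensioning, the classic
adaptive estimator keeps smoothed estimates of network delay and jitter
and sets the playout delay a few deviations above the mean. The sketch
below shows that textbook rule only, as background for what is being
evaluated; it is not the algorithm used by Skype, Google Talk, or MSN
Messenger.

```python
ALPHA = 0.998   # smoothing factor; closer to 1 reacts more slowly

class PlayoutEstimator:
    def __init__(self):
        self.d = 0.0   # smoothed one-way network delay estimate (ms)
        self.v = 0.0   # smoothed delay variation (jitter) estimate (ms)

    def on_packet(self, network_delay_ms):
        self.d = ALPHA * self.d + (1 - ALPHA) * network_delay_ms
        self.v = ALPHA * self.v + (1 - ALPHA) * abs(network_delay_ms - self.d)

    def playout_delay(self, safety_factor=4.0):
        # A larger buffer absorbs more jitter (better speech quality) but
        # hurts conversational interactivity; the factor tunes the balance.
        return self.d + safety_factor * self.v
```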

Presented by: Pengpeng Ni (Simula Research Lab and University of Oslo)
Scalable video is an attractive option for adapting the bandwidth
consumption of streaming video to the available bandwidth. Fine-grained
scalability can adapt most closely to the available bandwidth, but this
comes at the cost of a high compression penalty. In the context of VoD
streaming to mobile end systems, we have therefore explored whether a
similar adaptation to the available bandwidth can be achieved by
performing layer switching in coarse-grained scalable videos. In this
approach, enhancement layers of a video stream are switched on and off
to achieve any desired longer-term bandwidth. We performed user studies
to evaluate the idea, and came to the far-from-obvious conclusion that
layer switching is a viable way to achieve bit-rate savings and
fine-grained bitrate adaptation, even for rather short times between
layer switches.
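
As a toy illustration of coarse-grained layer switching (not the
scheduling used in the paper; layer rates, target bitrate, and segment
granularity are assumptions), the sketch below toggles enhancement layers
per segment so that the running average bitrate tracks a target.

```python
def schedule_layers(layer_rates, target_bitrate, num_segments):
    """For each segment, add enhancement layers while the long-term average
    bitrate (including this segment) stays at or below the target."""
    sent_bits = 0.0
    plan = []
    for seg in range(1, num_segments + 1):
        rate = layer_rates[0]                 # the base layer is always sent
        layers = 1
        for extra in layer_rates[1:]:
            if (sent_bits + rate + extra) / seg <= target_bitrate:
                rate += extra
                layers += 1
            else:
                break
        sent_bits += rate
        plan.append(layers)
    return plan   # number of layers to transmit in each segment

# Example: base layer of 300 kbps plus enhancement layers of 200 and 250 kbps,
# targeting an average of 550 kbps over 10 segments.
print(schedule_layers([300, 200, 250], 550, 10))
```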

Presented by: Ishan Vaishnavi (Centrum voor Wiskunde en Informatica)
This paper presents a new scheduling algorithm for real-time network
delivery of packets over Diffserv networks for delay-sensitive
applications. We call the networks that support this algorithm
Estimated Service (Estserv) networks. For each real-time packet, these
networks estimate the probability that the packet will meet its
deadline and schedule it according to this estimate. Given this
estimation mechanism, this paper validates the better performance of
the scheduling algorithm over traditional solutions. We show that,
using Estserv for delay-sensitive applications, we can provide
out-of-band scheduling, save bandwidth on packets with expired
deadlines, and handle bursts without losing the scalability of
Diffserv. Using an implementation in the IP-forwarding path of the
Linux kernel, we show that, given the estimation value, Estserv
performs better than Diffserv in terms of deadlines met, while still
saving bandwidth.
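
The probability estimator itself is the paper's contribution and is not
reproduced in the abstract; the sketch below only illustrates the overall
idea of deadline-aware forwarding, with a crude stand-in estimator and
hypothetical packet fields.

```python
import heapq

def deadline_meet_probability(slack_ms, est_delay_ms, est_jitter_ms):
    """Crude stand-in estimator: assume the remaining path delay lies within
    est_delay_ms +/- est_jitter_ms and approximate the chance it fits in the
    packet's remaining slack."""
    if slack_ms <= est_delay_ms - est_jitter_ms:
        return 0.0
    if slack_ms >= est_delay_ms + est_jitter_ms:
        return 1.0
    return (slack_ms - (est_delay_ms - est_jitter_ms)) / (2.0 * est_jitter_ms)

def enqueue(queue, packet, now_ms, est_delay_ms, est_jitter_ms):
    """Drop packets that can no longer meet their deadline (saving bandwidth)
    and order the rest so the most at-risk, still salvageable packets are
    served first."""
    slack = packet["deadline_ms"] - now_ms
    p = deadline_meet_probability(slack, est_delay_ms, est_jitter_ms)
    if p == 0.0:
        return False                      # expired: do not forward at all
    heapq.heappush(queue, (p, packet["seq"], packet))
    return True
```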

1500 - 1530 | Coffee Break

1530 - 1700 | Session 7: P2P Streaming II
Discussion Lead: Dongyan Xu (Purdue University)
Scribe: Amir Hassan Rasti Ekbatani (University of Oregon)

Presented by: Yih-Farn Robin Chen (AT&T Laboratories - Research)
P2P file transfers and streaming have already seen tremendous growth
in Internet applications. With the rapid growth of IPTV, the need to
efficiently disseminate large volumes of Video-on-Demand (VoD) content
has prompted IPTV service providers to consider peer-assisted VoD
content delivery. This paper describes Zebroid, a VoD solution that
uses IPTV operational data on an ongoing basis to determine how to
pre-position popular content in customer set-top boxes during idle
hours, allowing these peers to assist the VoD server in content
delivery during peak hours. The latest VoD request distribution,
set-top box availability, and capacity data on network components are
all taken into consideration in determining the parameters used in
Zebroid's striping algorithm. We show, both by simulation and by
emulation on a realistic IPTV testbed, that using Zebroid reduces the
VoD server load by 50-80% during peak hours.
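
Zebroid's striping algorithm is parameterized from operational data and
is not given in the abstract; as a toy placement sketch only (the data
layout and round-robin policy are assumptions), popular titles could be
striped across available set-top boxes as follows.

```python
def preposition_stripes(titles, set_top_boxes, stripes_per_title):
    """Split each popular title into stripes and spread them round-robin over
    set-top boxes that reported themselves available and still have room."""
    placement = {}          # (title name, stripe index) -> set-top box id
    ring = [b for b in set_top_boxes if b["available"] and b["free_mb"] > 0]
    i = 0
    for title in sorted(titles, key=lambda t: t["popularity"], reverse=True):
        stripe_mb = title["size_mb"] / stripes_per_title
        for k in range(stripes_per_title):
            for _ in range(len(ring)):          # find the next box with room
                box = ring[i % len(ring)]
                i += 1
                if box["free_mb"] >= stripe_mb:
                    box["free_mb"] -= stripe_mb
                    placement[(title["name"], k)] = box["id"]
                    break
    return placement
```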

Presented by: Remo Meier (ETH Zurich)
Data dissemination in decentralized networks is often realized by using
some form of swarming technique. Swarming enables nodes to gather
dynamically in order to fulfill a certain task collaboratively and to
exchange resources (typically pieces of files or packets of a
multimedia data stream). As in most distributed systems, swarming
applications face the problem that the nodes in a network have
heterogeneous capabilities or act selfishly. We investigate the problem
of efficient live data dissemination (e.g., TV streams) in swarms. The
live stream should be distributed in such a way that only nodes with
sufficiently large contributions to the system are able to fully
receive it, even in the presence of freeloading nodes or nodes that
upload substantially less than required to sustain the multimedia
stream. In contrast, uncooperative nodes cannot properly receive the
data stream as they are unable to fill their data buffers in time,
which incentivizes a fair sharing of resources. Our emulation results
reveal that, as the number of selfish nodes increases, the situation
steadily deteriorates for them, while obedient nodes continue to
receive virtually all packets in time.
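
The paper's exact incentive mechanism is not described in the abstract; a
minimal sketch of the general contribution-based serving rule it alludes
to (data layout and slot model are assumptions) is shown below: peers
prefer to serve neighbors that have recently uploaded the most to them,
so freeloaders cannot fill their buffers in time.

```python
def pick_requests_to_serve(pending_requests, recent_contribution, upload_slots):
    """Serve the requests of the neighbors that contributed most to us
    recently; requests from freeloaders are pushed to the back and may
    miss their playback deadline."""
    ranked = sorted(pending_requests,
                    key=lambda req: recent_contribution.get(req["peer"], 0.0),
                    reverse=True)
    return ranked[:upload_slots]
```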

Presented by: Lisong Xu (University of Nebraska-Lincoln)
Most of the literature on peer-to-peer (P2P) live streaming focuses on
how to provide best-effort streaming quality by efficiently using the
system bandwidth; however, there is no guarantee about the provided
streaming quality. This paper considers how to provide statistically
guaranteed streaming quality in a P2P live streaming system. We study a
class of admission control algorithms which statistically guarantee
that a P2P live streaming system has sufficient overall bandwidth. Our
results show that there is a tradeoff between the user blocking rate
and user-behavior insensitivity (i.e., whether the system performance
is insensitive to the fine statistics of user behaviors). We also find
that the system performance is more sensitive to the distribution
change of user inter-arrival times than to that of user lifetimes.
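
The admission control algorithms studied in the paper are not reproduced
in the abstract; the toy rule below merely illustrates the idea of
admitting a viewer only while aggregate upload supply exceeds aggregate
streaming demand by a safety margin (all parameters are assumptions).

```python
def admit(new_peer_upload_kbps, current_peer_uploads_kbps, stream_rate_kbps,
          server_upload_kbps, margin=1.1):
    """Admit a new viewer only if, after it joins, total upload capacity still
    exceeds total streaming demand by the safety margin; otherwise block it
    (contributing to the user blocking rate)."""
    supply = server_upload_kbps + sum(current_peer_uploads_kbps) + new_peer_upload_kbps
    demand = (len(current_peer_uploads_kbps) + 1) * stream_rate_kbps
    return supply >= margin * demand
```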

1700 - 1715 | Concluding Remarks