PhD thesis abstracts

December 2011



Andreas Reinhardt

Designing Sensor Networks for Smart Spaces - Unified Interfacing and Energy Efficient Communication between Wireless Sensor and Actuator Nodes

Wireless sensor and actuator networks are composed of embedded systems with sensing, actuation, computation, and wireless communication capabilities. Their untethered character provides installation flexibility and has consequently led to their application in a wide range of domains, e.g. environmental and habitat monitoring, or industrial process surveillance and control. Beyond these traditional application areas, the vision of smart spaces foresees the transparent integration of sensing and actuation components into everyday environments. Smart services that rely on information about the current situation and the possibility of physical interaction are envisioned to emerge in versatile ways, such as context-aware building automation or support for ambient assisted living.

From a technological perspective, wireless sensor and actuator networks represent an adequate infrastructure for the realization of smart spaces. As a result of the differing application scenarios, however, concepts resulting from research on traditional sensor and actuator networks can only be applied to a limited extent. Most prominently, the heterogeneous nature of devices in smart environments necessitates dedicated means to ensure their interoperability. At the same time, the need for small-sized devices entails tight resource and energy constraints, which must be carefully considered during application design. Finally, the collection and wireless transmission of data from mobile entities play a vital role in smart environments, whereas they are rarely considered in traditional sensor network deployments.

We address the requirements of smart environments by presenting the Sensor-RPC framework, which enables generic interoperability between diverse wireless sensor and actuator devices. The presented solution applies the remote procedure call paradigm to abstract from the underlying hardware platforms, i.e. sensing, processing, and actuation functionalities are encapsulated into remotely invocable functions. Sensor-RPC makes use of binary packet representations and a modular parameter serialization concept in order to ensure its efficient applicability on resource-constrained embedded systems.
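
The remote-procedure-call idea described above can be illustrated with a minimal sketch: functions are registered under compact numeric identifiers, and their parameters are serialized into a small binary packet. The function ids, parameter layouts, and helper names below are illustrative assumptions, not the actual Sensor-RPC wire format.

```python
import struct

# Hypothetical sketch of the RPC idea behind Sensor-RPC: each remotely
# invocable function gets a one-byte numeric id, and its parameters are
# serialized into a compact binary packet (assumed layout, not the real one).

FUNCTIONS = {}  # function id -> (callable, struct format of its parameters)

def remote_callable(func_id, fmt):
    """Register a function under a one-byte id with a binary parameter format."""
    def wrapper(fn):
        FUNCTIONS[func_id] = (fn, fmt)
        return fn
    return wrapper

@remote_callable(0x01, "<Hb")  # uint16 sensor id, int8 threshold (assumed)
def set_threshold(sensor_id, threshold):
    return f"sensor {sensor_id} threshold set to {threshold}"

def encode_call(func_id, *args):
    """Serialize a call: one id byte followed by the packed parameters."""
    _, fmt = FUNCTIONS[func_id]
    return struct.pack("<B", func_id) + struct.pack(fmt, *args)

def dispatch(packet):
    """Decode a received packet and invoke the registered function."""
    fn, fmt = FUNCTIONS[packet[0]]
    return fn(*struct.unpack(fmt, packet[1:]))

packet = encode_call(0x01, 42, -5)
print(len(packet))        # 4 bytes: 1 id + 2 sensor id + 1 threshold
print(dispatch(packet))
```

The binary encoding keeps the call to four bytes here, which is why such representations suit resource-constrained radios better than textual encodings.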

In order to maximize the utilization of the available energy budget, Sensor-RPC is complemented by Squeeze.KOM, a framework for lossless packet payload compression. Squeeze.KOM takes temporal correlations between successive data packets into account and exploits the observed similarities in order to reduce the size of transmitted packets, and thus the energy demand of their transmission. Depending on the characteristics of the underlying data, the actual data compression step is realized by means of binary distance coding of packet differences, or by applying adaptive Huffman coding with a code tree of limited size. Both take advantage of the specific properties of real-world sensor data sets, in which strongly biased symbol distributions are frequent. Besides the lossless compression of packet payloads, a further reduction of packet sizes by means of header compression is presented. Our stateful header compression mechanism SFHC.KOM omits header fields with constant or deterministically changing values from transmission by encapsulating them into so-called compression contexts. Tailored to its application in smart spaces, SFHC.KOM adapts to the presence of both static and mobile nodes.
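
A toy example makes the exploitation of temporal correlation concrete: instead of transmitting a full packet, only the byte positions that changed since the previous packet are sent. This is a deliberate simplification; Squeeze.KOM itself uses binary distance coding and size-limited adaptive Huffman coding, not the naive delta scheme sketched here.

```python
# Simplified sketch of temporal payload compression: transmit only the
# (position, value) pairs that differ from the previously sent packet.
# Squeeze.KOM's actual coding (binary distance coding, adaptive Huffman)
# is more elaborate; this only illustrates the underlying idea.

def delta_encode(prev, curr):
    """Encode curr as the byte positions where it differs from prev."""
    return [(i, b) for i, (a, b) in enumerate(zip(prev, curr)) if a != b]

def delta_decode(prev, delta):
    """Reconstruct the current packet from the previous one and the delta."""
    out = bytearray(prev)
    for i, b in delta:
        out[i] = b
    return bytes(out)

prev = bytes([20, 21, 21, 22, 100, 3])  # e.g. a previous sensor reading
curr = bytes([20, 21, 22, 22, 100, 3])  # slowly varying data: one byte changed

delta = delta_encode(prev, curr)
print(delta)                            # a single changed byte instead of 6
assert delta_decode(prev, delta) == curr
```

Because sensor readings typically change slowly, the delta is usually much smaller than the full payload, which is exactly the bias that the thesis's entropy coders exploit.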

The practicality of the devised solutions is investigated through prototypical implementations and by validating their functionality on widely adopted wireless sensor and actuator node platforms. We substantiate the evaluations of the presented solutions by detailed analyses of their resource and energy demands. In order to assess the applicability of the contributions in smart environments, real-world data traces from the envisioned application scenario have been collected and extensively used in simulations.

Advisor(s): Ralf Steinmetz (supervisor), Adam Wolisz (Gutachter), Matthias Hollick (Gutachter)

SIG MM member(s): Ralf Steinmetz

URL: http://tuprints.ulb.tu-darmstadt.de/2844/

Sensor Network Technology Area at KOM

The research carried out within the Sensor Network Technology Area at KOM finds applications in diverse fields. Some of these are:

  • Logistics Monitoring and Transportation Optimization
  • Industry/Factory Automation
  • Smart Communication Services
  • Green/Smart Houses and Buildings
  • Ambient Intelligent Systems

 

Our research in this area encompasses analytical tools, simulation studies, user tests, as well as the design and development of a heterogeneous sensor network testbed platform.


Manfred del Fabro

Non-Sequential Decomposition, Composition and Presentation of Multimedia Content

This thesis discusses three major issues that arise in the context of non-sequential usage of multimedia content, i.e. a usage pattern in which users only access the content that is of interest to them. These issues are (1) semantically meaningful segmentation of videos, (2) composition of new video streams with content from different sources, and (3) non-sequential presentation of multimedia content.

A semantically meaningful segmentation of videos can be achieved by partitioning a video into scenes. This thesis gives a comprehensive survey of scene segmentation approaches published in the last decade. The presented approaches are categorized based on the underlying mechanisms used for the segmentation. The characteristics common to each category, as well as the strengths and weaknesses of the presented algorithms, are stated. Additionally, a novel scene segmentation approach for sports videos with special properties is introduced. Scenes are extracted based on recurring patterns in the motion information of a video stream.
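
To give an intuition for motion-based segmentation, the sketch below splits a per-frame motion-magnitude signal into scenes wherever the motion stays low for several frames, as between rallies in a sports broadcast. The threshold, the minimum lull length, and the whole cutting rule are illustrative assumptions, not the pattern-matching algorithm developed in the thesis.

```python
# Simplified illustration of motion-based scene segmentation (not the thesis
# algorithm): stretches of sustained low motion are treated as boundaries
# between high-motion scenes. Threshold and lull length are assumed values.

def segment_by_motion(motion, threshold=1.0, min_lull=3):
    """Split a per-frame motion-magnitude signal into (start, end) scenes."""
    scenes, start, lull = [], None, 0
    for i, m in enumerate(motion):
        if m >= threshold:
            if start is None:
                start = i            # a new high-motion scene begins
            lull = 0
        elif start is not None:
            lull += 1
            if lull >= min_lull:     # a long enough pause ends the scene
                scenes.append((start, i - lull + 1))
                start, lull = None, 0
    if start is not None:
        scenes.append((start, len(motion)))
    return scenes

motion = [0.1, 0.2, 3.0, 4.1, 3.5, 0.3, 0.2, 0.1, 2.9, 3.8, 0.2]
print(segment_by_motion(motion))     # two high-motion scenes
```

A real detector would of course look for recurring motion patterns rather than a fixed threshold, but the segmentation output, a list of frame ranges, has the same shape.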

Furthermore, different approaches in the context of real-life events are presented for the composition of new video streams based on content from multiple sources. Community-contributed photos and videos are used to generate video summaries of social events. The evaluation shows that by using content provided by a crowd of people, a new and richer view of an event can be created. This thesis introduces a new concept for this emerging view, which is called "The Vision of Crowds".

The presentation of such newly composed video streams is described with a simple but powerful formalism. It provides great flexibility in defining the temporal and spatial arrangement of content. Additionally, a video browsing application for the hierarchical, non-sequential exploration of video content is introduced. It is able to interpret the formal description of compositions and can be adapted for different purposes with plug-ins.
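
A composition formalism of this kind can be sketched as a small tree of sequential and parallel operators over clips. The node types, the duration rule, and the clip names below are hypothetical, chosen only to illustrate how temporal and spatial arrangement might be expressed declaratively; they are not the formalism defined in the thesis.

```python
# Hypothetical sketch of a composition formalism: clips are leaves, and
# sequential/parallel operators define the arrangement. All names and the
# duration semantics are illustrative assumptions.

class Clip:
    def __init__(self, name, duration):
        self.name, self.duration = name, duration
    def total_duration(self):
        return self.duration

class Seq:
    """Play children one after another: durations add up."""
    def __init__(self, *children):
        self.children = children
    def total_duration(self):
        return sum(c.total_duration() for c in self.children)

class Par:
    """Show children side by side (spatially): the longest child dominates."""
    def __init__(self, *children):
        self.children = children
    def total_duration(self):
        return max(c.total_duration() for c in self.children)

# A summary: an intro clip, then two event recordings shown next to each other.
composition = Seq(Clip("intro", 5),
                  Par(Clip("stage_cam", 30), Clip("audience_cam", 25)))
print(composition.total_duration())  # 5 + max(30, 25) = 35
```

A player or browsing application can walk such a tree to schedule playback, which is the kind of interpretation the plug-in-based browser described above performs on the formal composition description.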

Advisor(s): Laszlo Böszörmenyi (1st supervisor), Klagenfurt University

SIG MM member(s): Manfred del Fabro, Laszlo Böszörmenyi, Alan Hanjalic

unpublished

Distributed Multimedia Systems Group

www.aau.at/tewi/inf/itec/dms/

Current research topics:

  • Self-organizing Content Delivery
  • Interactive Image and Video Search
  • Multimedia Content Visualization
  • Social Aspects of Multimedia Information Systems
  • User-centered Multimedia Information Retrieval
  • Creating Summaries and Stories out of Large Social Events
  • Applications in the Medical Domain (Endoscopy) and in Traffic Surveillance

 

While our main interest lies in basic research, we aim to actively participate in the international scientific community and strive to apply our results in close cooperation with industry.


Mu Mu

Parametric Assessment of Video Quality in Content Distribution Networks

IP-based packet-switched networks have become one of the main content distribution platforms for emerging multimedia services such as IPTV, thanks to the rapidly growing bandwidth and exclusive inter-networking and interactivity features of IP-based networks. Meanwhile, high-quality video content services are becoming particularly popular within content delivery networks (CDN). During content distribution, packets of encoded video streams can be delayed, corrupted or dropped due to network impairments in packet-switched networks. This leads to perceptual quality degradations of the delivered content at the receiver. Although network impairments are rare in commercial managed networks, any distortion caused by impairments can be highly detrimental to end users' experience. Consequently, the ability to meet customer expectations on video quality has become a critical service differentiator. Quality of Experience (QoE), which was once merely recognised as a value-added aspect of traditional content distribution services, is now one of the fundamental requirements and challenges of providing high-quality video services. In order to maintain a high level of user experience throughout the life-cycle of a video service, a service quality measurement and management framework must be established.

The thesis first outlines the problem space of video content distribution in packet-switched networks and motivates the in-service quality assessment model to evaluate quality degradation caused by the content loss effect. This is followed by the background research, which explores the key elements and mechanisms of video content distribution systems, including video codecs, content encapsulation, packet-switched networks and content presentation, in conjunction with a discussion of human vision and end-user expectations. The potential causes of quality degradation and factors that could influence the perceptual impact of degradation in distribution systems are also discussed.

A detailed analysis of existing objective evaluation methodologies with regard to their performance in assessing user experience and the feasibility of implementing them in video content distribution networks is also carried out. It is concluded that quality of delivery (QoD) is a critical part of the overall acceptability of a video service and should be assessed in-service. Further, packet analysis is the ideal method of providing such an in-service assessment in distribution networks due to its operational nature.

In order to investigate the origin, appearance and perception of content loss in the distribution network, initial subjective experiments are necessary. One of the prerequisites of these experiments is the use of a testbed system that processes the source video content and produces test sequences for the experiments according to the test plan. The testbed system is also an essential tool for designing and validating the objective QoE model. The LA2 testbed system has been built to support the design and evaluation of objective quality assessment models. Under the framework of the LA2 system, operational functions (such as deep packet inspection and header analysis) of the proposed model have been realised. The LA2 system facilitates the efficient generation of test sequences for subjective experiments according to designated test plans, with multiple functional modules such as the network impairment emulator (NIE) and configuration tools. The testbed system has been used to facilitate early exploratory data analysis, which leads to the formulation of relevant scientific questions and helps to establish further quantitative data analysis. Test results show that the perceivable impact of content loss on video content is determined by the joint influence of multiple impact factors.
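
The role of an impairment emulator in such a testbed can be sketched very simply: packets pass through a channel that drops them with a configured probability, producing the impaired sequences needed for subjective tests. Real emulators like the NIE also delay, reorder and corrupt packets; the loss-only model, the function name and the parameters here are assumptions for illustration only.

```python
import random

# Minimal sketch of a network impairment emulator: a random-loss channel
# with a configurable loss rate. This is a stand-in illustration, not the
# NIE described in the thesis, which supports richer impairment profiles.

def impair(packets, loss_rate, seed=0):
    """Return the packets that survive a random-loss channel."""
    rng = random.Random(seed)  # seeded, so a test plan is reproducible
    return [p for p in packets if rng.random() >= loss_rate]

stream = list(range(100))      # 100 numbered video packets
survived = impair(stream, loss_rate=0.1)
print(len(stream) - len(survived), "packets dropped")
```

Seeding the channel matters in practice: a subjective test plan must be able to regenerate exactly the same impaired sequence for every participant.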

With the results from the analytical study and a number of exploratory tests, a discrete parametric assessment model to provide high-performance in-service video quality measurements is introduced. A distinctive discrete network analysis methodology is outlined, followed by details of three key functions: packet inspection, perceptual impact assessment and impact aggregation. Content factors, error factors, system factors, and user factors are identified as the essential impact factors from the modelling. Each impact factor is composed of one or more impact indices reflecting different aspects of the impact effect. Quantitative metrics are also defined for each impact index.
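
The factor/index structure described above can be illustrated with a toy aggregation. The four factor names follow the text, but the individual indices, the weights and the linear aggregation rule are assumptions made for illustration; the thesis derives its assessment functions from subjective data rather than fixed weights.

```python
# Illustrative sketch of aggregating impact indices into one perceptual
# impact score. Factor names follow the text (content, error, system, user);
# the indices, weights and linear rule are assumptions, not the thesis model.

WEIGHTS = {"content": 0.3, "error": 0.4, "system": 0.2, "user": 0.1}

def factor_score(indices):
    """Average the quantitative indices (each normalised to [0, 1]) of a factor."""
    return sum(indices.values()) / len(indices)

def perceptual_impact(factors):
    """Weighted aggregation of the per-factor scores."""
    return sum(WEIGHTS[name] * factor_score(idx) for name, idx in factors.items())

loss_event = {
    "content": {"motion_activity": 0.8, "texture_detail": 0.6},
    "error":   {"loss_length": 0.5, "propagation": 0.7},
    "system":  {"error_concealment": 0.4},
    "user":    {"attention": 0.9},
}
score = perceptual_impact(loss_event)
print(round(score, 3))
```

The point of the structure is that each index can be measured from packet inspection alone, so the aggregate score can be computed in-service without access to the decoded video.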

The thesis then elaborates on the subjective experiments and modelling efforts to realise the perceptual impact assessment model. Source content selection, test condition design, test environment establishment, and test procedures are introduced. After an overview of the user opinion scores obtained in the subjective experiments, the statistical inference process relevant to the modelling of the assessment functions is described, including model specification, estimation of model parameters and estimation of precision. Two assessment functions have been derived to evaluate the dichotomous and polytomous visibility of content loss, respectively. Both functions provide high-performance estimations according to their fit to the subjective data.

Advisor(s): Andreas Mauthe (Supervisor), Ralf Steinmetz (Examiner)

SIG MM member(s): Andreas Mauthe, Ralf Steinmetz

unpublished

School of Computing and Communications

http://www.scc.lancs.ac.uk/

