ARIA - Quality-Adaptive Media-Flow Architectures for Sensor Data Management
  NSF Award: IIS-0308268


ARIA Contact Information

ARIA Project
Computer Science and Engineering Dept.
College of Eng. and Applied Sciences
Arizona State University
Box 875406
Tempe, AZ 85287-5406
Office phone: (480) 965-2770
Fax: (480) 965-2751
e-mail: aria@asu.edu




ARIA Goals

The objective of this research is to incorporate real-time and archived media and audience responses into live performances, on demand. This requires an innovative information architecture that processes, filters, and fuses sensory inputs and actuates audio-visual responses in real time while providing Quality of Service (QoS) guarantees. We are developing an adaptive and programmable media-flow ARchitecture for Interactive Arts (ARIA) that enables the design, simulation, and execution of interactive performances. ARIA provides a language interface for specifying intended mappings from sensory inputs to audio-visual responses. Based on these specifications, different inputs are gathered from performers as well as from the audience; they are then streamed, fused, and mapped to different outputs while maintaining end-to-end delay requirements and providing response-quality guarantees. Through a fully integrated multimedia interface, ARIA also brings external data from databases into live performances.

ARIA Architecture

From an information-management standpoint, this goal involves various technical challenges. An efficient solution requires a real-time media-flow architecture capable of sensing and streaming various types of environmental data captured by spatially distributed audio, video, and motion sensors; accessing external data sources to bring in required data; extracting various features from streamed raw input data; fusing feature streams; and mapping fused features onto output devices.
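The chain described above — sense, extract features, fuse feature streams, map onto output devices — can be sketched as a minimal pipeline. The sketch below is purely illustrative: every class, function, and value here is a hypothetical stand-in, not part of the actual ARIA implementation.

```python
# Hypothetical media-flow chain: a sensor yields raw samples, a feature
# extractor derives features, a fusion operator merges feature streams,
# and a mapper routes the fused result to an output device. All names
# and numbers are illustrative assumptions, not the ARIA code base.

def motion_sensor():
    """Simulated motion sensor: yields raw (x, y) position samples."""
    for t in range(5):
        yield (t * 0.1, t * 0.2)

def extract_speed(samples):
    """Feature extractor: converts position samples into per-step speeds."""
    prev = None
    for x, y in samples:
        if prev is not None:
            dx, dy = x - prev[0], y - prev[1]
            yield (dx ** 2 + dy ** 2) ** 0.5
        prev = (x, y)

def fuse_max(*feature_streams):
    """Fusion operator: merges parallel feature streams by taking the max."""
    for features in zip(*feature_streams):
        yield max(features)

def map_to_brightness(fused, scale=100.0):
    """Mapper: converts fused feature values into display brightness [0, 1]."""
    return [min(1.0, f * scale) for f in fused]

speeds = extract_speed(motion_sensor())
brightness = map_to_brightness(fuse_max(speeds))
print(brightness)
```

Each stage is a generator, so samples flow through the chain one at a time rather than being buffered — a property the real architecture would need in order to bound end-to-end delay.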

Most importantly, these capabilities will be constrained by the quality requirements of the target application and the precision/accuracy of the components of the architecture. The media-flow architecture will be composed of a run-time environment and a specification language. The run-time kernel will be an adaptive, programmable, media-flow network consisting of fusion operators and media-stream integration paths. The specification language will describe the various components in the architecture and their properties, which include (a) streaming characteristics, such as object-precision, refresh rates, and bandwidth requirements of the sensors, (b) accuracy and corresponding computational overheads of the feature extractors, (c) schemas, interfaces, and data access costs of the external data sources, (d) functionalities and QoS of the fusion operators, (e) integration pathways in the media-flow network, and (f) display and audio features of the output devices.
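One plausible way to represent component properties (a)–(d) above is as simple declarative records that the run-time kernel could inspect when planning a media-flow. The record types, field names, and numbers below are illustrative assumptions only, not the actual ARIA specification language:

```python
# Hypothetical declarative specs for media-flow components. Field names
# and values are illustrative assumptions, not the ARIA spec language.
from dataclasses import dataclass

@dataclass
class SensorSpec:
    name: str
    precision: float       # (a) object-precision of reported values
    refresh_hz: float      # (a) refresh rate
    bandwidth_kbps: float  # (a) streaming bandwidth requirement

@dataclass
class ExtractorSpec:
    name: str
    accuracy: float        # (b) fraction of features correctly extracted
    cost_ms: float         # (b) computational overhead per frame

@dataclass
class FusionSpec:
    name: str
    inputs: tuple          # (d) names of the feature streams being fused
    qos_delay_ms: float    # (d) end-to-end delay bound the operator meets

camera = SensorSpec("stage_cam", precision=0.95, refresh_hz=30.0,
                    bandwidth_kbps=4000.0)
gesture = ExtractorSpec("gesture_detect", accuracy=0.9, cost_ms=12.0)
fuse = FusionSpec("audio_visual_fuse",
                  inputs=("gesture_detect", "beat_track"),
                  qos_delay_ms=50.0)

# A simple feasibility check a kernel might perform: can the feature
# extractor keep up with the sensor's frame rate?
frame_budget_ms = 1000.0 / camera.refresh_hz
feasible = gesture.cost_ms <= frame_budget_ms
print(feasible)
```

Given such records, the kernel could trade accuracy against cost — e.g. swapping in a cheaper, less accurate extractor when the frame budget is exceeded — which is the kind of quality adaptation the architecture is designed around.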


ARIA Outcomes

The specific outcomes of this project include the following: (1) an adaptive and programmable kernel that can extract, process, fuse, and map media-flows while ensuring quality of service guarantees, (2) a specification language and user interface capable of specifying the components of the media-flow network, (3) QoS-scalable fusion and filter operators, (4) a framework that integrates the input sensors, media pathways, external data sources, output devices, run-time kernel, and the specification language into a real-time quality-adaptive media-flow architecture, and (5) an ARIA testbed.
 




1st Intl. Workshop on Ambient Intelligence, Media, and Sensing
2007
Istanbul, Turkey