The Moving Picture Experts Group

Lightweight Application Scene Representation

Standard: MPEG-4
Part number: 20
Activity status: Open
Technologies: MPEG Lightweight Application Scene Representation (LASeR)

 

MPEG doc#: N7507
Date: July 2005
Author: Cyril Concolato (ENST)

 

Introduction

MPEG-4 Part 20 is a specification designed for representing and delivering rich-media services to resource-constrained devices such as mobile phones. It defines two binary formats: LASeR (Lightweight Application Scene Representation), a binary format for encoding 2D scenes, including vector graphics and timed modifications of the scene; and SAF (Simple Aggregation Format), a binary format for aggregating LASeR content with audio/video streams in a single stream.

 

LASeR, a binary format for representing rich-media services content

The LASeR specification has been designed to allow the efficient representation of 2D scenes describing rich-media services for constrained devices. A rich-media service is a dynamic and interactive presentation comprising 2D vector graphics, images, text and audiovisual material. The representation of such a presentation includes describing the spatial and temporal organization of its different elements as well as its interactions and animations.

MPEG evaluated the state-of-the-art technologies in the field of composition coding. Seeing that none were satisfactory for constrained devices such as mobile phones, MPEG decided to create the LASeR standard. The LASeR requirements included compression efficiency and a small code and memory footprint. The LASeR standard fulfills these requirements by building upon the existing Scalable Vector Graphics (SVG) format defined by the World Wide Web Consortium [3], and particularly on its Tiny profile already adopted in the mobile industry. LASeR complements SVG by defining a small set of compatible key extensions tuned according to the requirements. These key extensions enable, among other things, frame-accurate synchronization of the scene with audio-visual elements, as well as streaming and efficient compression of SVG content. The workflow of LASeR content, from creation based on SVG to consumption, is depicted in Figure 1.

The streaming capability of LASeR is a benefit of the concept of the LASeR stream, inspired by the MPEG-4 BIFS standard [1]. A LASeR stream is the concatenation of an initial scene and timed modifications of that scene, which a server can send to a client in a streamed, timed manner.
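As a rough illustration of this structure, the sketch below models a LASeR stream as an initial scene access unit followed by timed update access units, yielded in the order a server would send them. The class and field names (AccessUnit, LaserStream, timestamp_ms) are invented for the example and are not the normative LASeR bitstream syntax.

from dataclasses import dataclass, field
from typing import List

@dataclass
class AccessUnit:
    timestamp_ms: int        # presentation time of this unit
    commands: List[str]      # scene commands carried by the unit (illustrative)

@dataclass
class LaserStream:
    initial_scene: AccessUnit                      # the complete starting scene
    updates: List[AccessUnit] = field(default_factory=list)

    def units_in_order(self):
        """Yield access units in the order a server would send them."""
        yield self.initial_scene
        yield from sorted(self.updates, key=lambda au: au.timestamp_ms)

stream = LaserStream(
    initial_scene=AccessUnit(0, ["NewScene"]),
    updates=[AccessUnit(2000, ["Replace text node"]),
             AccessUnit(1000, ["Insert rectangle"])],
)
for au in stream.units_in_order():
    print(au.timestamp_ms, au.commands)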

Efficient compression improves delivery and decoding times as well as storage size, and is achieved by a compact binary representation of the SVG scene tree. This compact representation is tailored to SVG content, and specific encoding techniques have been designed for simple yet efficient encoding of SVG-specific data.
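The toy encoder below illustrates the general idea behind such a compact representation: frequently used SVG element names are replaced by one-byte codes and attribute strings are length-prefixed. It is only a sketch of the principle, with an invented coding table; the actual LASeR binary syntax is defined in ISO/IEC 14496-20.

import struct
import xml.etree.ElementTree as ET

# Invented one-byte codes for common SVG element names (not the normative tables).
TAG_CODES = {"svg": 1, "g": 2, "rect": 3, "circle": 4, "text": 5}

def encode(elem, out):
    out += struct.pack("B", TAG_CODES.get(elem.tag, 0))   # 1-byte element code (0 = unknown)
    out += struct.pack("B", len(elem.attrib))              # attribute count
    for name, value in elem.attrib.items():
        for s in (name, value):
            data = s.encode("utf-8")
            out += struct.pack("B", len(data)) + data       # length-prefixed strings
    out += struct.pack("B", len(elem))                      # child count
    for child in elem:
        encode(child, out)

svg = ET.fromstring('<svg><rect x="0" y="0" width="10" height="10"/></svg>')
buf = bytearray()
encode(svg, buf)
print(len(buf), "bytes (toy binary) vs", len(ET.tostring(svg)), "bytes of XML")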

 

Figure 1 - LASeR Workflow – from creation to consumption

 

SAF, the aggregation of LASeR and audiovisual material

The delivery of rich-media content to constrained devices is a challenging task: it requires delivering the representation of the presentation along with all the audiovisual material used in it. Efficient delivery, especially over low-bandwidth mobile networks, requires reactivity and fluidity.

The SAF specification defines tools to transport LASeR content together with its attached audiovisual material according to these requirements. It defines a binary format for a SAF stream, which multiplexes a LASeR stream with any type of media stream. SAF streams are low-overhead multiplexed streams that can be delivered using any delivery mechanism: download-and-play, progressive download, streaming or broadcasting. To achieve reactivity, the SAF specification defines the concept of a cache unit, which allows sub-content to be sent in advance of its use later in the presentation.
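The sketch below illustrates the aggregation principle: access units from a LASeR stream and from media streams are interleaved into a single packet sequence ordered by send time, and a cache unit is simply scheduled early so that sub-content is available before it is needed. The packet fields and stream identifiers are hypothetical, not the normative SAF packet syntax.

import heapq
from dataclasses import dataclass

@dataclass(order=True)
class SafPacket:
    send_time_ms: int      # when the packet is sent on the multiplexed stream
    stream_id: int         # which elementary stream the payload belongs to
    payload: bytes = b""

def multiplex(streams):
    """streams: dict of stream_id -> list of (send_time_ms, payload) access units."""
    packets = [SafPacket(t, sid, p) for sid, units in streams.items() for t, p in units]
    heapq.heapify(packets)                  # interleave all streams by send time
    while packets:
        yield heapq.heappop(packets)

streams = {
    0: [(0, b"LASeR initial scene"), (1000, b"LASeR update")],
    1: [(0, b"audio AU"), (500, b"audio AU")],
    2: [(200, b"cache unit: menu needed at t=5000 ms")],  # sent well ahead of its use
}
for pkt in multiplex(streams):
    print(pkt.send_time_ms, pkt.stream_id, pkt.payload)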

 

Target applications

Mobile interactive portals (as depicted in Figure 2), Mobile TV (over 3G, DVB-H, DMB, …), 2D cartoons, interactive vector graphics maps, 2D widgets, etc.

Figure 2 - Mobile Interactive Portal using MPEG-4 LASeR

References

[1]     ISO/IEC 14496-11, Coding of audio-visual objects, Part 11: Scene description and Application engine (BIFS, XMT, MPEG-J)

[2]     ISO/IEC 15938-1, Multimedia content description interface, Part 1: Systems

[3]     Scalable Vector Graphics (SVG) Tiny 1.2 Specification,  http://www.w3.org/TR/SVGMobile12/

 

MPEG-4 LASeR Presentation and Modification of Structured Information

 

MPEG doc#: N11959
Date: March 2011
Source: Jihun Cha, Electronics and Telecommunications Research Institute (ETRI)

 

1 Motivation

MPEG-4 LASeR Presentation and Modification of Structured Information (PMSI, ISO/IEC 14496-20:2008/Amd.3) introduces a specific mechanism to present any Structured Information (any document conforming to a given schema, such as an MPEG-21 Digital Item or TV-Anytime metadata) in a consistent manner using Presentation Information (MPEG-4 LASeR). It also provides a means to reflect the result of the presentation back into the Structured Information.

PMSI provides a rich presentation tool for Structured Information as well as a flexible scene representation method based on Presentation Information. The scene author can therefore compose a scene by separating it into two parts, the presentation and the structured information, much as HTML content is separated from CSS styling. This is an efficient composition method because a modification to one part does not affect the other. For instance, a feature-rich channel guide (with a graphical layout of channel information and predefined user interactions) can be provided by accessing incoming EPG Structured Information and presenting it declaratively. PMSI can likewise be applied to a scene whose content changes daily or hourly behind a static layout, to multilingual subtitles for a single video clip, and so on.

The two main functions of the MPEG-4 LASeR PMSI standard are External Reference and External Update. In addition, a dedicated method is defined for referencing fragments of Structured Information. These functions and this method are described in the following clauses.

2 Referencing the fragments of structured information

Structured Information is an XML-based document. To access a specific fragment of Structured Information, a pointer scheme for Structured Information is defined. This scheme is a specific URI form composed using the W3C XPointer framework and XPath functions.
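To illustrate the underlying idea of addressing a fragment, the sketch below resolves an XPath expression against a small invented EPG document using the XPath subset supported by Python's ElementTree. The actual PMSI pointer scheme is a specific URI form built on the W3C XPointer framework; the document and path used here are examples only.

import xml.etree.ElementTree as ET

epg = ET.fromstring("""
<EPG>
  <Channel id="ch1">
    <Programme start="20:00"><Title>News</Title></Programme>
    <Programme start="21:00"><Title>Movie</Title></Programme>
  </Channel>
</EPG>
""")

# Conceptually: structured-info.xml#xpointer(/EPG/Channel[@id='ch1']/Programme[2]/Title)
fragment = epg.find("./Channel[@id='ch1']/Programme[2]/Title")
print(fragment.text)   # -> "Movie"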

3 External Reference

External Reference is a function for referencing a specific fragment of Structured Information from Presentation Information. Its purpose is to identify the scope of the scene that is to be updated regularly, at a specified time interval, with the latest version of the Structured Information.
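A minimal sketch of this behaviour, assuming a hypothetical fetch_structured_info() helper that returns the latest Structured Information: the referenced fragment is re-evaluated at a fixed interval and only the bound scene node is refreshed.

import time
import xml.etree.ElementTree as ET

def fetch_structured_info():
    # Stand-in for re-reading the latest Structured Information (e.g. an EPG document).
    return ET.fromstring("<EPG><Now><Title>News</Title></Now></EPG>")

def refresh(scene_text_node, path="./Now/Title", interval_s=60, iterations=3):
    for _ in range(iterations):
        latest = fetch_structured_info()
        scene_text_node.text = latest.find(path).text   # update only the referenced scope
        time.sleep(interval_s)

scene_text = ET.Element("text")        # a text node in the presentation scene
refresh(scene_text, interval_s=0, iterations=1)
print(scene_text.text)                 # -> "News"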

4 External Update

The External Update function supports modification of Structured Information. It supports three types of modification: "replace", "insert" and "delete". Depending on the provided 'type' attribute, a part of the Structured Information is replaced with a new element or attribute value, a new element or attribute value is inserted into the Structured Information, or a part of the Structured Information is deleted.
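The sketch below mimics the three modification types on a Structured Information tree, dispatching on a 'type' value. The function and argument names are illustrative, not the normative PMSI update syntax.

import xml.etree.ElementTree as ET

def external_update(parent, update_type, new_element=None, old_element=None):
    if update_type == "replace":
        idx = list(parent).index(old_element)
        parent.remove(old_element)
        parent.insert(idx, new_element)       # replace an existing child element
    elif update_type == "insert":
        parent.append(new_element)            # insert a new child element
    elif update_type == "delete":
        parent.remove(old_element)            # delete an existing child element

channel = ET.fromstring("<Channel><Programme>News</Programme></Channel>")
old = channel.find("Programme")
new = ET.Element("Programme")
new.text = "Movie"
external_update(channel, "replace", new_element=new, old_element=old)
print(ET.tostring(channel, encoding="unicode"))   # -> <Channel><Programme>Movie</Programme></Channel>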