Systems


What is MPEG-2 systems?

MPEG-2 Systems is an ISO/IEC standard (13818-1) that defines the syntax and semantics of bitstreams in which digital audio and visual data are multiplexed. Such bitstreams are said to be MPEG-2 Systems compliant. This specification does not mandate, however, how equipment that produces, transmits, or decodes such bitstreams should be designed. As a result, the specification can be used in a diverse array of environments, including local storage, broadcast (terrestrial and satellite), as well as interactive environments.

Who developed this standard?

This standard was industry driven, and complemented the MPEG-2 activities in audio and video coding. The consumer TV industry actively participated in the definition of MPEG-2 Systems, to ensure that a low-complexity receiver can be built at a reasonable cost.

Which industries have adopted this standard?

The list is extensive and continuously growing: consumer TV (cable, satellite and terrestrial broadcast), Video on Demand, Digital Video Disc, personal computing, card payment, test and measurement, etc.

Which standards bodies have adopted this standard?

In Europe, DVB (Digital Video Broadcasting). In the USA, the FCC (Federal Communications Commission), ATSC, and SCTE. In Japan, MITI/JISC. The DAVIC consortium. DVD.

Why do I need to be MPEG-2 Systems compliant?

In the design of equipment, you have to be MPEG-2 Systems compliant for several reasons. First, if your equipment has to satisfy DVB, FCC, JISC, ATSC, SCTE, DVD, or DAVIC requirements, those bodies all mandate MPEG-2 Systems compliance. Second, your design can rely on the Integrated Circuits already developed in that area. Finally, your application will be open to the whole MPEG-2 Systems world, and will be usable over a large number of networks. The MPEG-2 Systems standard enables the widest interoperability in digital video and audio applications and services.

Is there a reference implementation?

Yes, reference bitstreams are described in the MPEG-2 document ISO/IEC TR 13818-5.

What are the different MPEG-2 Systems components?

MPEG-2 Systems provides a two-layer multiplexing approach. The first layer is dedicated to ensuring tight synchronisation between video and audio, and provides a common way of packetizing all the different materials which require synchronisation (video, audio, and private data). This layer is called Packetized Elementary Stream (PES). The second layer depends on the intended communication medium. The specification for error-free environments such as local storage is called the MPEG-2 Program Stream, while the specification addressing error-prone environments is called the MPEG-2 Transport Stream.

What is the difference between MPEG-1 Systems and MPEG-2 Systems?

MPEG-2 Systems mandated compatibility with MPEG-1 Systems. The MPEG-2 Program Stream is designed for that purpose. MPEG-2 Systems also addresses error prone environments, and provides all the hooks for Conditional Access systems.

What is the difference between an MPEG-1 Systems stream and an MPEG-2 Systems Program stream?

The major difference lies in the signaling, which is present in MPEG-2 Program Streams and was absent from MPEG-1 Systems. A minor difference also exists in the PES format.

Is the MPEG-2 Transport Stream a transport multiplex?

No, MPEG-2 transport is rather a service multiplex. No mechanism exists within the syntax to ensure the reliable delivery of the transported data; MPEG-2 transport relies on underlying layers for such services. It requires the underlying layer to identify the transport packets, and to indicate in the transport packet header when a transport packet has been erroneously transmitted. The MPEG-2 Transport Stream is so named to signify that it is the input to the Transport layer in the OSI seven-layer network model. It is not, in itself, the Transport layer.

What do MPEG-2 Transport Streams carry?

MPEG-2 Transport Streams carry transport packets. These packets carry two types of information: the compressed material and the associated signaling tables. A transport packet is identified by its PID (Packet Identifier). Each PID is assigned to carry either one particular piece of compressed material (and only that material) or one particular signaling table. The compressed material consists of elementary streams which may be built from video, audio or data material. These elementary streams may be tightly synchronised (as is usually necessary for Digital TV or Digital Radio programs), or not synchronised (for programs offering downloading of software or games, for example).

The associated signaling tables describe the elementary streams which are combined to build programs, and describe those programs themselves. Tables are carried in sections. The signaling tables are called PSI (Program Specific Information).
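As an illustration of the packet structure described above (a sketch, not the standard's normative text), the fixed 4-byte transport packet header and its 13-bit PID can be parsed as follows in Python:

```python
def parse_ts_header(packet: bytes) -> dict:
    # Every transport packet is 188 bytes and starts with the sync byte 0x47.
    assert len(packet) == 188 and packet[0] == 0x47
    return {
        "transport_error_indicator": bool(packet[1] & 0x80),
        "payload_unit_start_indicator": bool(packet[1] & 0x40),
        "transport_priority": bool(packet[1] & 0x20),
        "pid": ((packet[1] & 0x1F) << 8) | packet[2],  # 13-bit Packet Identifier
        "scrambling_control": (packet[3] >> 6) & 0x03,
        "adaptation_field_control": (packet[3] >> 4) & 0x03,
        "continuity_counter": packet[3] & 0x0F,
    }

# A toy packet with PID 0x0100 and payload_unit_start_indicator set:
pkt = bytes([0x47, 0x41, 0x00, 0x10]) + bytes(184)
print(hex(parse_ts_header(pkt)["pid"]))  # 0x100
```

A demultiplexer does little more than route each packet according to this PID field.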

Why are transport packets 188 bytes long?

Because MPEG-2 wanted these packets to be carried over ATM. At the time, according to the AAL which was envisaged, ATM cells were expected to have a payload of 47 bytes.

188 = 4 * 47.

What about the programs carried within the MPEG-2 Transport Stream?

There is a description of each program carried within the MPEG-2 Transport Stream. This description usually requires a particular table, the Program Map Table, with one table per program. This table is only sent periodically. The elementary streams which make up a program, on the other hand, are carried continuously in PES streams. In that sense it could be said that an MPEG-2 Transport Stream does not carry programs, but only carries elementary streams and the instructions required to associate particular elementary streams into particular programs.
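That association can be sketched as a small data model (illustrative Python only; the PID values and program numbers below are made up, and this is not the on-wire table syntax). The Program Association Table, carried on PID 0x0000, maps each program number to the PID of that program's Program Map Table; each PMT then lists the PIDs of the program's elementary streams:

```python
from dataclasses import dataclass, field

@dataclass
class ProgramMap:
    # One PMT per program: the PIDs of its elementary streams.
    pcr_pid: int
    streams: dict = field(default_factory=dict)  # elementary PID -> stream kind

# PAT: program_number -> PID carrying that program's PMT (values made up).
pat = {1: 0x0100, 2: 0x0200}
pmts = {
    0x0100: ProgramMap(pcr_pid=0x0101, streams={0x0101: "video", 0x0102: "audio"}),
    0x0200: ProgramMap(pcr_pid=0x0201, streams={0x0201: "video"}),
}

def pids_for_program(program_number: int) -> list:
    # "Tuning to a program" amounts to selecting the PIDs its PMT declares.
    return sorted(pmts[pat[program_number]].streams)

print([hex(p) for p in pids_for_program(1)])  # ['0x101', '0x102']
```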

Why do so many applications use MPEG-2?

One of the attractive aspects of MPEG-2 comes from its fundamental requirement to be generic.

Another reason is that, throughout the syntax and signaling, provisions are made to allow applications to define their own private syntaxes and signaling, so that many private needs can be satisfied.

What is the main assumption made by MPEG-2 systems?

That the network is ideal, and that each byte is transmitted with a constant delay.

What is a syntax?

Generally speaking, a syntax specifies the structure of a bitstream: how different parameters, tags, etc., are mapped and laid out on the bitstream. For multiplexing purposes, it is important for the syntax to provide patterns which can be recognized with an extremely high degree of confidence. These patterns are called synchronisation patterns. In addition, an indication of time and of the bitrate of the bitstream may also be provided.

Equipped with such elements, a bitstream corresponding to an MPEG-2 syntax is a self-contained bitstream to which a receiver can slave itself, in order to acquire that bitstream exactly synchronised with its production. However, time indication and bitrate indication are not mandatory.

What is a time stamp?

There are two types of time stamps:

The first type is usually called a reference time stamp. This time stamp is the indication of time mentioned in the previous question. Reference time stamps are to be found in the PES syntax (ESCR), in the program syntax (SCR), and in the transport syntax (PCR).

The second type of time stamp is called DTS (Decoding Time Stamp) or PTS (Presentation Time Stamp). They indicate the exact moment when a video frame or an audio frame has to be decoded or presented to the user, respectively.
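Both kinds of time stamp are tick counts of clocks derived from the 27 MHz system clock; PTS and DTS values are expressed in 90 kHz units (the system clock divided by 300). A small Python sketch of the arithmetic:

```python
SYSTEM_CLOCK_HZ = 27_000_000  # system clock sampled by reference time stamps (PCR)
PTS_CLOCK_HZ = 90_000         # PTS/DTS tick rate (system clock divided by 300)

def pts_to_seconds(ticks: int) -> float:
    return ticks / PTS_CLOCK_HZ

# A PTS of 180_000 ticks says "present this frame 2 s into the time line":
print(pts_to_seconds(180_000))  # 2.0

# For 25 fps video, successive frames are 90_000 / 25 = 3_600 ticks apart:
print(PTS_CLOCK_HZ // 25)  # 3600
```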

Are time stamps mandatory?

No, they are not mandatory.

Some applications like Digital TV broadcast, where tight synchronisation is required, will make an extensive use of time stamps. In that case both reference time stamp and DTS/PTS are used.

In other cases (game or software downloading for example) neither reference nor DTS/PTS time stamps are necessary.

DTS and PTS time stamps are not relevant if reference time stamps are not present.

Where are the PTSs and DTSs inserted?

They are inserted as close as possible to the video, audio, or data material. They are inserted in the PES packet headers, in a syntax which is common to all material.
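In that common syntax, each 33-bit PTS or DTS is packed into five bytes, interleaved with marker bits. The sketch below (illustrative Python, not a full PES parser) encodes and decodes that field; the 4-bit prefix value depends on whether a PTS alone, or both a PTS and a DTS, are present:

```python
def encode_pes_timestamp(value: int, prefix: int) -> bytes:
    # 33-bit time stamp split as 3 + 15 + 15 bits, each group followed
    # by a marker bit set to 1; `prefix` is the 4-bit code before it.
    return bytes([
        (prefix << 4) | (((value >> 30) & 0x07) << 1) | 1,
        (value >> 22) & 0xFF,
        (((value >> 15) & 0x7F) << 1) | 1,
        (value >> 7) & 0xFF,
        ((value & 0x7F) << 1) | 1,
    ])

def decode_pes_timestamp(b: bytes) -> int:
    return (((b[0] >> 1) & 0x07) << 30) | (b[1] << 22) \
         | ((b[2] >> 1) << 15) | (b[3] << 7) | (b[4] >> 1)

pts = 900_000  # ten seconds at 90 kHz
assert decode_pes_timestamp(encode_pes_timestamp(pts, 0b0010)) == pts
```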

What is PSI?

PSI (Program Specific Information) carries the signaling.

PSI has no synchronisation pattern in the section headers.

What is the difference between a PES packet and a PSI section?

PES

A PES packet is a way to uniformly packetize elementary streams.

Embedded in PES packets, elementary streams may be synchronized with time stamps.

They are not protected.

PES packets may be of variable length (which also allows them to be of fixed length), and they may be rather long.

As elementary streams are continuous streams, the end of a PES packet can also be detected by the arrival of the next PES packet. Sometimes the length is not even relevant (for video PES packets).

PSI sections

A PSI section is a way to carry a portion of a PSI table.

A PSI section is a way to uniformly represent signaling.

PSI sections are not synchronized.

They may be protected by a CRC.

The sections are of variable length. They are rather small.

The length is always relevant. It is the only mechanism to go from one section to the next section when they are carried in the same packet.

An update mechanism is also supported, which allows association of a version number with a section.
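To illustrate how the length field chains consecutive sections within a packet, here is a simplified Python sketch; it ignores the pointer_field and sections that span several transport packets, and the section contents are toy values:

```python
def split_sections(payload: bytes) -> list:
    # Walk consecutive PSI sections: the 12-bit section_length (the byte
    # count following the length field) is the only way to find where one
    # section ends and the next begins.
    sections = []
    i = 0
    while i < len(payload) and payload[i] != 0xFF:  # 0xFF marks stuffing
        section_length = ((payload[i + 1] & 0x0F) << 8) | payload[i + 2]
        total = 3 + section_length  # table_id + flags/length bytes + body
        sections.append(payload[i:i + total])
        i += total
    return sections

# Two toy sections with body lengths 2 and 3, followed by stuffing bytes:
data = bytes([0x42, 0x00, 0x02, 0xAA, 0xBB,
              0x42, 0x00, 0x03, 0x01, 0x02, 0x03,
              0xFF, 0xFF])
print([len(s) for s in split_sections(data)])  # [5, 6]
```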

What will be found embedded in a PES packet?

Continuous streams are to be found within the PES packets: Video, audio and data material.

The video may be of different kinds (MPEG-1, MPEG-2), and the same for audio.

There is no assumption about elementary data streams.

One of the first uses is for subtitling.

What will be found in PSI sections?

Signaling is carried in PSI sections.

Conditional Access messages are usually also carried in PSI sections.

Downloading of data will almost certainly use the PSI section mechanism.

Did MPEG-2 do research on Conditional Access methods?

MPEG-2 only provided hooks for Conditional Access systems:

The means to carry the messages (keys in ECMs, and entitlements in EMMs), and the means to declare them (in PSI tables, in transport packet headers, and in PES headers).

The syntax of the messages is determined by each particular Conditional Access system.

Why is PSI information not synchronised?

Because tight synchronisation between signaling and elementary streams was not required.

It is enough, generally speaking, to signal an event a little in advance.

In some cases, however, that makes dynamic changes quite tricky, especially when elementary streams are scrambled, or when a program changes from an "in the clear" state to a scrambled state.

Why is Video or Audio material not protected at MPEG-2 systems level?

The error concealment techniques are implemented within the audio and the video layers.

MPEG-2 Systems relies on the underlying layer to deliver transport packets with a bit error rate (BER) of around 10^-10.

Is the MPEG-2 transport stream a two or a three level multiplex?

It is a two-layer multiplex: the pure audio and video material is first packetized in PES packets, which are in turn packetized in transport packets.

Transport packets are multiplexed.

It is not a three-level multiplex, as there is no packetization related to programs.

Is the MPEG-2 program stream a two or a three level multiplex?

It is a one-layer multiplex, as the pure audio and video material is only packetized in PES packets.

PES packets are then multiplexed.

It is not a two level multiplex as there is no packetization related to the carried program.

What is the use of NULL packets?

They are usually used as a provision for rate stuffing, filling otherwise unused multiplex capacity.

Usually transport packets have to be declared in the PSI information tables.

A NULL packet is a particular, undeclared transport packet that belongs to nobody.

Its payload is undefined.

Some applications use NULL packets in order to ensure a good and quick synchronisation mechanism, as their modulation scheme is not aligned with the start of transport packets.
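Constructing a null packet is trivial, since only the header matters (a sketch: PID 0x1FFF is the reserved null-packet value, and the 0xFF stuffing payload is just one arbitrary choice, the standard leaving it undefined):

```python
NULL_PID = 0x1FFF  # the reserved PID that identifies null packets

def make_null_packet() -> bytes:
    # sync byte 0x47, PID 0x1FFF, adaptation_field_control = '01' (payload only)
    header = bytes([0x47, (NULL_PID >> 8) & 0x1F, NULL_PID & 0xFF, 0x10])
    # The payload is undefined by the standard; 0xFF stuffing is one choice.
    return header + bytes([0xFF] * 184)

pkt = make_null_packet()
print(len(pkt), hex(((pkt[1] & 0x1F) << 8) | pkt[2]))  # 188 0x1fff
```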

Is the bitstream rate always explicitly carried?

Yes for the PES syntax and the MPEG-2 program syntax.

No for the MPEG-2 transport syntax. For that syntax, the transport rate may appear in the PSI information.

How do DVB and ATSC use MPEG-2 transport?

By defining their own operational rules and implementation guidelines.

They have also specified their own signaling (Service Information) using the already defined MPEG-2 private sections.

Modulation schemes adapted to satellite, cable, and terrestrial broadcast have been adopted.

Physical interfaces between pieces of equipment have been specified.

Parameters relevant for real-time and offline measurements have been specified.

Are there any error detection mechanisms?

Two CRCs are to be found:

One is in the PES syntax, but its purpose is to check the error robustness of a network link. It is a CRC calculated over the previously transmitted PES packet.

The second is in the PSI information. It is a way to ensure that a section has not been corrupted.

That is why sections have to be short: the CRC is only efficient over short spans of data.
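For reference, the section CRC is a CRC-32 with polynomial 0x04C11DB7, all-ones initial value, no bit reflection and no final XOR. A straightforward bit-at-a-time Python sketch, with the usual receiver-side check that a section followed by its CRC_32 field recomputes to zero (the section body below is a toy value):

```python
def crc32_mpeg2(data: bytes) -> int:
    # CRC-32 as used for PSI sections: poly 0x04C11DB7, init 0xFFFFFFFF,
    # no input/output reflection, no final XOR.
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte << 24
        for _ in range(8):
            crc = ((crc << 1) ^ 0x04C11DB7) if crc & 0x80000000 else (crc << 1)
            crc &= 0xFFFFFFFF
    return crc

# A receiver recomputes the CRC over the whole section, CRC_32 field
# included: an uncorrupted section always yields zero.
body = bytes([0x00, 0xB0, 0x0D]) + bytes(10)       # toy section body
section = body + crc32_mpeg2(body).to_bytes(4, "big")
print(hex(crc32_mpeg2(section)))  # 0x0
```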

What is a model?

A model is a virtual decoder. There are two models: one within the MPEG-2 program syntax (the P-STD), the other within the MPEG-2 transport syntax (the T-STD).

A model defines buffer sizes, their input and output rates, and timing constraints related to time stamp values.

Why have the models been invented?

So that the standard is not implementation dependent.

The first model comes from MPEG-1 systems.

Some of the assumptions in the T-STD are not even realistic: buffers, for instance, are supposed to be emptied instantaneously when decoding occurs.

What are the constraints imposed by the T-STD model?

They apply to different elements:

Timing information: if the time stamps are wrong, the buffers may underflow or overflow.

The regularity of transport packets carrying the same elementary stream:

If there are too many consecutive packets, buffers will overflow.

The manipulation of MPEG-2 transport bitstreams: even the slightest remultiplexing operation may violate the T-STD.
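These constraints can be illustrated with a toy buffer-fullness check (a sketch of the principle only; the normative T-STD prescribes exact buffer sizes, transfer rates and timing, none of which are modelled here):

```python
def check_buffer(events, buffer_size: int) -> str:
    # Toy T-STD-style check: bytes arrive as packets and are removed
    # instantaneously at decoding times; fullness must stay in range.
    fullness = 0
    for kind, amount in events:  # ("in", n_bytes) or ("decode", n_bytes)
        fullness += amount if kind == "in" else -amount
        if fullness > buffer_size:
            return "overflow"
        if fullness < 0:
            return "underflow"
    return "ok"

# Too many consecutive packets for one stream overflow its buffer:
burst = [("in", 184)] * 20 + [("decode", 3000)]
print(check_buffer(burst, buffer_size=3000))  # overflow
```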