
What is Jitter in RTP (Real-time Transport Protocol) / RTCP (RTP Control Protocol)?


What is Jitter ?

If you have ever experimented with the ping program, you probably know that if you send a sequence of packets from point A to some point B, each packet needs a slightly different time to reach the destination. The varying transit times are not an issue when you are downloading a web page, but they matter when you wish to transmit a stream of real-time data. For example, suppose a VoIP device sends out one RTP packet every 20 milliseconds. The figure below shows what the stream might look like at the receiving end. Because the packets do not arrive precisely every 20 milliseconds, we cannot play them out as they arrive unless we are willing to accept poor quality of the audio output.

Define Jitter:

Formally, jitter is defined as the statistical variance of the RTP data packet inter-arrival time. In RTP (the Real-time Transport Protocol), jitter is measured in timestamp units. For example, if you transmit audio sampled at the usual 8000 Hz, the unit is 1/8000 of a second.
The first step in dealing with jitter successfully is knowing how large it is. However, we do not need to compute the precise value. In RTP, the receiving endpoint computes a running estimate using a simplified formula (a first-order estimator). The jitter estimate is sent to the other party using RTCP (the RTP Control Protocol).
(figure: RTP packet stream arriving at the receiver with varying inter-arrival times)

Jitter Buffer

The network delivers RTP packets asynchronously, with variable delays. To be able to play the audio stream with reasonable quality, the receiving endpoint needs to turn the variable delays into constant delays. This can be done by using a jitter buffer.
The jitter buffer implementation is quite simple: you create a buffer to hold, say, 100 milliseconds of audio (at a sampling rate of 8000 Hz, 100 milliseconds corresponds to 800 samples). You place incoming audio frames into the buffer and start the playout when the buffer is, say, at least half full.

Once you start playing the audio, it's a bit of a gamble: you risk both buffer underflow (you need to play another frame but the buffer is empty) and buffer overflow (the buffer is full and you need to throw away the packet you just received). To reduce the risk, you can increase the size of the buffer, but that also increases latency: if you start playing when there is at least 50 milliseconds of audio buffered, you delay the signal by those 50 milliseconds. To improve the odds, you can implement an adaptive buffer that changes its size based on the current jitter.
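The prefill-then-play behaviour described above can be sketched as follows. This is an illustrative toy, not a production buffer (a real one would also reorder by sequence number and adapt its depth to the measured jitter):

```python
from collections import deque

class JitterBuffer:
    """Minimal fixed-size jitter buffer sketch.

    Frames are queued as they arrive; playout starts only once `prefill`
    frames are buffered, trading latency for protection against jitter.
    """
    def __init__(self, capacity=5, prefill=3):
        self.capacity = capacity   # e.g. 5 frames * 20 ms = 100 ms of audio
        self.prefill = prefill     # e.g. start playout when half full
        self.frames = deque()
        self.playing = False

    def push(self, frame):
        if len(self.frames) >= self.capacity:
            return False           # overflow: drop the just-received frame
        self.frames.append(frame)
        if len(self.frames) >= self.prefill:
            self.playing = True    # enough buffered, playout may start
        return True

    def pop(self):
        if not self.playing:
            return None            # still prefilling
        if not self.frames:
            self.playing = False   # underflow: go back to buffering
            return None
        return self.frames.popleft()
```

The `pop` side runs on a fixed clock (every 20 ms in the running example), which is exactly how the variable network delay is turned into a constant playout delay.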

Sources of Jitter

I would like to conclude this piece with an observation about the sources of jitter. In addition to varying transit times, jitter can sometimes originate right in the sending computer. This happens when the audio data is not read directly from a sound card (sound cards have a very stable clock, more precise than the computer's on-board clock) but comes from another source — for example, the audio stream is generated by a text-to-speech software or simply read from a file. In other words, we are talking about applications like voice mail and interactive voice response (IVR) systems.

When run on a standard operating system, IVR and voice mail applications can have a problem with precise timing and thus cause high jitter. Quite often, the operating system's process scheduler works with 10-millisecond quanta. Consider an application that wants to send one RTP packet every 30 milliseconds. The application spends, say, 5 milliseconds doing some processing (e.g. text-to-speech synthesis). After that, it would need to sleep for precisely 25 milliseconds so that the interval between packets is exactly 30 ms. But because of the 10 ms quantum, the length of the sleep is rounded up to the nearest multiple of 10 ms, i.e. to 30 milliseconds. In other words, the interval between packets ends up being 35 milliseconds. Should this happen between each pair of packets, you will get really poor audio quality.
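The arithmetic above can be captured in a small back-of-envelope model (a sketch of the rounding effect only, not a scheduler simulation):

```python
import math

def effective_interval(processing_ms, target_interval_ms, quantum_ms=10):
    """Worst-case interval between packets when every sleep length is
    rounded up to the next multiple of the scheduler quantum."""
    desired_sleep = target_interval_ms - processing_ms
    # the OS cannot wake the process mid-quantum, so the sleep overshoots
    actual_sleep = math.ceil(desired_sleep / quantum_ms) * quantum_ms
    return processing_ms + actual_sleep
```

With 5 ms of processing and a 30 ms target, the desired 25 ms sleep becomes 30 ms and the packet interval stretches to 35 ms, exactly as described above.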

To overcome the issue, you can do two things:
• Reconfigure the operating system, or install a kernel module or driver that supports more precise timing.
• Or, at the very least, use an adaptive sending algorithm that compensates for the incorrect sleep lengths (see section 6 of the OpenH323 tutorial for more on how to do this).

posted Mar 6 by anonymous


Related Articles

Specified in TS 36.321

TTI bundling is an uplink enhancement in LTE. It is triggered by the UE informing the eNB about its power limitations in its present state.

Required Introduction:
When the base station detects that the mobile cannot increase its transmission power and reception is getting worse, it can instruct the device to activate TTI bundling and send the same packet, with different error detection and correction bits, in 2, 3, or even 4 consecutive transmission time intervals.

The advantage over sending the packet in a single TTI, detecting that it wasn't received correctly, and then performing one or more retransmissions is that it saves a lot of signaling overhead. Latency is also reduced, as no waiting time is required between the retransmissions. If the bundle is not received correctly, it is repeated in the same way as an ordinary transmission of a packet.

TTI bundling brings several benefits:

  • UL resources over multiple TTIs can be assigned with a single grant (same format as for ordinary HARQ), which decreases the signaling overhead.
  • There is only one HARQ feedback message per bundle, which lowers the control signaling, requires fewer retransmissions, and reduces the vulnerability to NACK-to-ACK errors that can lead to data loss.
  • Because there is no segmentation, the RLC and MAC header overhead as well as the CRC is decreased, making more efficient use of radio resources.
  • It improves the quality of delay-sensitive services, such as VoIP.

Why do we need this?
The purpose of TTI bundling is to improve cell-edge coverage and in-house reception for voice. In LTE the UE has limited uplink power (only 23 dBm), which can result in many retransmissions at the cell edge (poor radio conditions). Retransmission means delay and control-plane overhead, which may not be acceptable for certain services like VoIP. This feature has more relevance for TDD than FDD, as coverage issues are likely to be more challenging in TD-LTE. Simulation results reported in publications indicate a 4 dB gain on the UL due to TTI bundling.


How is it implemented?
Normally, when the network sends a grant (DCI 0), the UE transmits PUSCH in only one specific subframe (4 ms after the DCI 0 reception). TTI bundling is a method in which the UE transmits PUSCH in multiple subframes in a row (4 subframes according to the current specification). In other words, the UE transmits PUSCH in a 'bundled TTI'.
A typical case of TTI bundling can be illustrated as shown below.

(figure: TTI bundling timing)
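Under the timing described above (first PUSCH transmission 4 subframes after the DCI 0 grant, then a bundle of 4 consecutive subframes), the subframes carrying PUSCH can be sketched as follows. This is illustrative only; the function name is my own, and real scheduling also depends on HARQ round-trip timing:

```python
def pusch_subframes(grant_subframe, tti_bundling=False, bundle_size=4):
    """Return the uplink subframe indices that carry PUSCH for a grant
    received in `grant_subframe` (FDD n+4 timing, per the text above)."""
    first = grant_subframe + 4
    if not tti_bundling:
        return [first]              # normal case: a single subframe
    # bundled TTI: the same transport block in consecutive subframes
    return [first + i for i in range(bundle_size)]
```

The contrast is the whole point: one grant, one HARQ process, but four back-to-back transmissions instead of one.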

How to enable ttiBundling for a specific UE?
You just have to enable the ttiBundling IE as shown below:

(figure: ttiBundling IE)

Side effects:

  • The granularity for matching the number of transmissions exactly to the required amount decreases.
  • With segmentation, once a single segment is decoded, retransmissions of that segment can be stopped; with TTI bundling, a whole bundle of retransmissions is always performed.
  • For that reason, TTI bundling should be used only for UEs that are at the cell edge.
