GStreamer pipeline using rtmpsink



GStreamer is a framework for media streaming which can be used in many different applications. The GStreamer framework is originally written in C and is widely used in many different applications, e.g. Xine, Totem, VLC, etc. [39]. It is widely used because:
(a) It is available as Open Source software.
(b) It provides an API for multimedia applications.
(c) It comes with many audio and video codecs and other functionality to support various formats.
(d) It provides bindings for different programming languages, e.g. Python, C, etc.
GStreamer is packaged with the core libraries (which provide all the core GStreamer services such as plugin management and its types, elements, bins, etc.) [2]. It comes with four major plugin sets, which are also used in this project for media streaming and are described in Table 1.1 [39, 37].


The main goal of this project is to build an application for monitoring and controlling a system used for real-time video broadcasting. It should give the user the possibility to launch an application where the settings are pre-defined for the stream. To accomplish this, a series of sub-goals must be achieved:
- A requirements study must be carried out in order to determine what the users need in a Monitoring and Control system and how the issues associated with Bambuser streaming can be solved.
- A prototype needs to be built and evaluated.
- The system must be implemented as a web application and must have troubleshooting support for the users.
- The resulting system needs to be evaluated with user tests.


The purpose of this project is to improve the streaming mechanism for Bambuser, so that media can be streamed in various formats and most OS platforms are supported. As GStreamer is Open Source software, the company will also save licensing costs. Another important part of this project is to implement Monitoring and Control functionality that allows Bambuser users to view the status of the services necessary for streaming.

General Principle

Before getting into the details of streaming, let us first define a general principle that describes how the streaming media (i.e. audio and video) can be combined together as a single unit. As shown in figure 3.1, we take input for both the audio and the video from a source. This source can either be a file or a hardware device. This raw input is then passed to the filtering unit, which performs filtering operations on properties such as audio/video rate, height, width, etc. It is then passed to the conversion unit, which converts the data into different formats (e.g. flv, avi, mp3, ogg). Finally, the streams are multiplexed together in a synchronized way as a single unit that can be sent to the sink. The sink can be a file, a device, or a server.
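The source -> filter -> convert -> mux -> sink principle can be sketched as a GStreamer launch description. The sketch below is illustrative only: the element names (v4l2src, alsasrc, ffmpegcolorspace, flvmux) are common GStreamer 0.10 elements, but the caps values and sink are placeholders, not the exact pipeline used in this project.

```python
# Sketch of the general principle as a GStreamer launch string.
# Each branch is filtered (caps), converted, queued, and fed into a
# single muxer whose output goes to the sink. Values are placeholders.

def build_pipeline(video_src="v4l2src", audio_src="alsasrc",
                   sink="filesink location=out.flv"):
    """Assemble a launch description following the general principle."""
    video = (f"{video_src} ! video/x-raw-yuv,width=320,height=240 "
             "! ffmpegcolorspace ! queue ! mux.")
    audio = (f"{audio_src} ! audio/x-raw-int,rate=44100,channels=2 "
             "! audioconvert ! queue ! mux.")
    return f"{video}  {audio}  flvmux name=mux ! {sink}"

print(build_pipeline())
```

Swapping the `sink` argument (e.g. to a network sink instead of `filesink`) changes the destination without altering the principle itself.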

User Factors

The above general principle may change depending on user needs. For example, if the user wants better quality for either audio or video, then more processing components will be involved. Similarly, if the user wants a specific file type extension (for example, mp3 for the audio and avi for the video), then additional plugins may be involved and the principle will change accordingly.

Technical Factors

The support of libraries and system driver files is also an important technical factor that may affect the general principle. If the system driver/library files are outdated or obsolete, the principle may fail. Variations in Operating System distributions (e.g. Debian, RedHat, etc.) might also cause problems (such as synchronization issues or high resource usage) due to the device driver files they support.

Possible Solution

In this section we shall discuss the possible solutions that can be used to make this project succeed. We use the same general principle for producing the streaming media as previously discussed in section 3.2. We also assume that the file extension for the video must be flv (which is also a requirement of this project).


GStreamer pipeline using FFmpeg

After setting up a successful working environment, the next step is to build the streaming pipeline using the GStreamer elements. When we started this project, we created the streaming pipeline from the stable release.
The idea behind the pipeline is that we have two FIFO queues (one for audio and one for video) and then combine them to send the stream, as shown in figure 5.1. The first queue, for video, is able to handle video either from a webcam or a USB video compliant device (i.e. V4L2). The audio queue is fed from ALSA (Advanced Linux Sound Architecture) via a built-in microphone or some other audio input device. The media is encoded in GStreamer and then saved as an .flv file. This Flash file is then fed to FFmpeg, which sends it to the server [21].
This solution works, but it is not very effective, as there is the overhead of first saving the file in .flv format and then calling FFmpeg to send it for broadcasting.
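The two-step approach above can be sketched as a pair of command lines: GStreamer captures audio and video into an .flv file, and FFmpeg is invoked separately to push that file to the server. The commands below are a hedged sketch; the RTMP URL is a placeholder, not Bambuser's real ingest address, and the exact element chain and FFmpeg flags used in the project may differ.

```python
# Sketch of the first (FFmpeg-based) approach as two shell commands.
# Step 1: GStreamer encodes A/V into capture.flv.
# Step 2: FFmpeg reads that file and pushes it over RTMP.
import shlex

GST_CMD = (
    "gst-launch "
    "v4l2src ! queue ! ffmpegcolorspace ! flvmux name=mux "
    "! filesink location=capture.flv "
    "alsasrc ! queue ! audioconvert ! mux."
)

FFMPEG_CMD = (
    "ffmpeg -re -i capture.flv -codec copy "
    "-f flv rtmp://example.com/live/stream"  # placeholder URL
)

# Split into argv lists, as subprocess.Popen would expect them.
gst_args = shlex.split(GST_CMD)
ffmpeg_args = shlex.split(FFMPEG_CMD)
print(gst_args[0], ffmpeg_args[0])
```

The intermediate `capture.flv` file is exactly the overhead criticized above: the data is written to disk and read back before it ever reaches the network.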

GStreamer pipeline using rtmpsink

Research work was done to make the streaming more efficient. We found that a new element, rtmpsink, is available in gst-plugins-bad, though not as a stable release [16]. The main reason to use this plugin is that it does not need separate audio and video queues: it takes them together and sends the stream directly to the server, as shown in figure 5.2. We also discussed this scenario with the company, and they recommended the same approach.
Therefore, to test the application and make it work with a good streaming method, we set up the environment to use the GStreamer unstable release, as previously discussed in section 4.2.

Pipeline Overview

We have made a new pipeline for streaming, using the Python bindings for GStreamer. We use gst.parse_launch() for our pipeline, as it works perfectly for our application [22]. In the pipeline, we take the audio from the ALSA source and the video from V4L or V4L2 compliant devices.
For the video, the pipeline first gets synchronized video frames (by using GEntrans, which is a third-party plugin [3]) and then applies the filtering (i.e. defining parameters such as height, width, etc.). The last step for video is to convert it into a proper flv format using ffmpegcolorspace.
For the audio, it performs the filtering operations (such as audio rate, channels, etc.) and then converts the audio into the proper format.
The final step is to combine both audio and video using a multiplexer, which sends the output to the RTMP server, as shown in figure 5.3 below.
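The rtmpsink-based pipeline can be sketched as follows. This is a hedged sketch assuming the GStreamer 0.10 Python bindings (pygst) and the unstable gst-plugins-bad release described above; the element parameters and the RTMP location (including the stream key) are placeholders, not the project's actual values.

```python
# Sketch of an rtmpsink pipeline built with gst.parse_launch().
# Both branches feed one flvmux, whose output goes straight to the
# server -- no intermediate .flv file, unlike the FFmpeg approach.
PIPELINE = (
    "v4l2src ! videorate ! video/x-raw-yuv,width=320,height=240 "
    "! ffmpegcolorspace ! queue ! mux. "
    "alsasrc ! audiorate ! audio/x-raw-int,rate=44100,channels=2 "
    "! audioconvert ! queue ! mux. "
    "flvmux name=mux ! rtmpsink location=rtmp://example.com/live/SECRET"
)

def launch(description=PIPELINE):
    """Parse and start the pipeline (requires pygst at runtime)."""
    import gst  # GStreamer 0.10 Python bindings; import kept local
    pipeline = gst.parse_launch(description)
    pipeline.set_state(gst.STATE_PLAYING)
    return pipeline
```

Because the whole chain lives in one process, the muxed stream reaches the server without touching the filesystem, which is the efficiency gain this section describes.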
To have better control over the pipeline, we use bus signals, by which the application detects the pipeline states (e.g. PLAY, PAUSE, NULL); this helps with error handling [39]. Since we use the unstable release of GStreamer, we have to make some additional adjustments in our PyGst code by adding the proper path of the GStreamer launcher, as defined in section 4.2.2. We use the secret user stream id when sending the data to the server. To get the secret stream id, we use Python with XML to collect the required information from the PHP-cURL code.
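Extracting the secret stream id from the XML returned by the PHP-cURL code might look like the sketch below. The XML shape and the id value are assumptions for illustration; the real response format from the server is not given in this section.

```python
# Hedged sketch: pull a stream id out of an XML response.
# The <result>/<stream>/<id> layout is assumed, not documented here.
import xml.etree.ElementTree as ET

SAMPLE = "<result><stream><id>SECRET-123</id></stream></result>"

def stream_id(xml_text):
    """Return the stream id from the XML response, or raise."""
    root = ET.fromstring(xml_text)
    node = root.find("./stream/id")
    if node is None or not node.text:
        raise ValueError("no stream id in response")
    return node.text

print(stream_id(SAMPLE))  # -> SECRET-123
```

The extracted id would then be substituted into the rtmpsink location before the pipeline is launched.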

Table of contents:

1 Introduction 
1.1 Background
1.1.1 Video Streaming
1.1.2 Video4Linux
1.1.3 Linux Audio
1.1.4 GStreamer
1.1.5 Monitoring and Control
2 Problem Description 
2.1 Goals
2.2 Purpose
3 Comparison of Framework/Technique 
3.1 Background
3.2 General Principle
3.2.1 User Factors
3.2.2 Technical Factors
3.3 Possible Solution
3.3.1 FFmpeg
3.3.2 VLC
3.3.3 GStreamer
3.4 Summary
4 Design 
4.1 System Design Overview
4.2 Tools and Environment Setup
4.2.1 Method 1
4.2.2 Method 2
5 Implementation 
5.1 GStreamer pipeline using FFmpeg
5.2 GStreamer pipeline using rtmpsink
5.2.1 Pipeline Overview
5.2.2 Pipeline Elements
5.3 Monitoring and Controlling
5.3.1 Battery
5.3.2 Microphone
5.3.3 Server
5.3.4 Camera
5.4 The Web Interface
6 Evaluation 
6.1 Test Results
6.2 Bugs and other Problems
7 Conclusion 
7.1 Limitations
7.2 Restrictions
7.3 Related Work
7.4 Future Work
8 Acknowledgements 

