{"id":2179,"date":"2018-03-21T10:40:44","date_gmt":"2018-03-21T09:40:44","guid":{"rendered":"https:\/\/blog.mi.hdm-stuttgart.de\/?p=2179"},"modified":"2023-08-06T21:50:03","modified_gmt":"2023-08-06T19:50:03","slug":"livestreaming-with-libav-tutorial-part-2","status":"publish","type":"post","link":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2018\/03\/21\/livestreaming-with-libav-tutorial-part-2\/","title":{"rendered":"Livestreaming with libav* &#8211; Tutorial (Part 2)"},"content":{"rendered":"<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-large\" src=\"https:\/\/upload.wikimedia.org\/wikipedia\/commons\/a\/af\/Green_screen_live_streaming_production_at_Mediehuset_K%C3%B8benhavn.jpg\" alt=\"Green screen live streaming production at Mediehuset K\u00f8benhavn\" width=\"1920\" height=\"1080\"><\/p>\n<p>If&nbsp;you want to create videos&nbsp;using <a href=\"https:\/\/ffmpeg.org\/\" target=\"_blank\" rel=\"noopener\">FFmpeg<\/a>&nbsp;there is a basic&nbsp;pipeline setup to go with. We will first take a short overview over this pipeline and then&nbsp;focus on each individual section.<\/p>\n<p><!--more--><\/p>\n<h2>The basic pipeline<\/h2>\n<p>I&#8217;m assuming you have already captured your video\/audio data. Since this step is highly platform dependent it will not be covered in this tutorial. 
But there are plenty of great tutorials on this from other people: <a href=\"https:\/\/lwn.net\/Articles\/203924\/\" target=\"_blank\" rel=\"noopener\">using v4l2<\/a>&nbsp;and&nbsp;<a href=\"https:\/\/freedesktop.org\/software\/pulseaudio\/doxygen\/index.html#intro_sec\" target=\"_blank\" rel=\"noopener\">using pulseaudio<\/a><\/p>\n<pre class=\"prettyprint\" data-start-line=\"1\" data-visibility=\"visible\" data-highlight=\"\" data-caption=\"\">video capturing --&gt; scaling ----&gt; encoding \\\n                                            \\\n                                             muxing --&gt; ....\n                                            \/\naudio capturing --&gt; filtering --&gt; encoding \/<\/pre>\n<ul>\n<li>Scaling\/resampling: This is the first step after capturing your video data. Here the per-pixel manipulation&nbsp;like scaling or resampling is done. Because the raw video image can be quite huge you may want to think about doing some of your pixel-magic on the GPU (compositing would fit nicely there). Because FFmpeg uses the planar YUV 4:2:2 pixel format internally, you might need to convert the pixel format you get from your source device (webcams in particular often only output packed formats).<br \/>\nA list of the FFmpeg pixel formats can be found <a href=\"https:\/\/ffmpeg.org\/doxygen\/3.1\/pixfmt_8h.html#a9a8e335cf3be472042bc9f0cf80cd4c5\" target=\"_blank\" rel=\"noopener\">here<\/a>.<\/li>\n<li>Filtering: If you want to filter your raw audio input, adjust the volume, do some mixing or other crazy audio stuff, this is the place to do it. But because I&#8217;m not great with audio I will skip this part and leave the explanation to the professionals \ud83d\ude42<\/li>\n<li>Encoding: This step is similar for both video and audio. Depending on the codec you want to use, you first have to get some settings straight and then consume the raw frames provided by the previous parts of your pipeline. 
This&nbsp;is the most resource-demanding step of the pipeline.<\/li>\n<li>Muxing: This is the step where you combine your audio and video data. Each audio\/video\/subtitle track will be represented as a stream in the FFmpeg-muxer. You will most likely have to do some timestamp-magic in this step. After you have muxed your streams you can then dump the final video into a file or stream it to a server.<\/li>\n<\/ul>\n<p>You can pack all of these components into one process\/thread, which makes handling memory a little easier and reduces copying of large memory chunks. If you are planning on using OpenGL for pixel manipulation and hardware acceleration like <a href=\"https:\/\/www-ssl.intel.com\/content\/www\/us\/en\/architecture-and-technology\/quick-sync-video\/quick-sync-video-general.html\" target=\"_blank\" rel=\"noopener\">Quick Sync<\/a> for encoding, it might be a good idea to isolate these steps into their own threads so they won&#8217;t mess with the rest of your program. This however makes memory handling and communication between the sections much more complicated.&nbsp;Also keep in mind that some of the libav*-calls (e.g. sws_scale()) might be blocking.<\/p>\n<h2>Datatypes<\/h2>\n<p>If you interact with the FFmpeg API you need to use three data types:<\/p>\n<ul>\n<li><a href=\"https:\/\/www.ffmpeg.org\/doxygen\/3.1\/structAVFrame.html\" target=\"_blank\" rel=\"noopener\">AVFrame<\/a>: This struct holds raw video or audio data. If you want to manipulate your image or sound on a pixel\/signal basis you need to do it while the data is in this struct.<br \/>\nYou can allocate an AVFrame by simply calling the provided constructor.<br \/>\nDepending on your capture technique you can reuse the structs and simply replace the pointers. In this case, you can reset your frame to its original state with <a href=\"https:\/\/ffmpeg.org\/doxygen\/3.1\/group__lavu__frame.html#ga0a2b687f9c1c5ed0089b01fd61227108\" target=\"_blank\" rel=\"noopener\">av_frame_unref()<\/a>. 
A call to av_frame_unref() will free all of the frame&#8217;s buffers. Keep in mind that this will also reset all of the frame&#8217;s fields.<\/li>\n<\/ul>\n<pre class=\"prettyprint lang-c_cpp\" data-start-line=\"1\" data-visibility=\"visible\" data-highlight=\"\" data-caption=\"\">AVFrame* raw_frame = av_frame_alloc();\n\nwhile (1) {\n    \/\/ DO STUFF\n    av_frame_unref(raw_frame);\n}\n\nav_frame_free(&amp;raw_frame);<\/pre>\n<ul>\n<li><a href=\"https:\/\/www.ffmpeg.org\/doxygen\/3.1\/structAVPacket.html\" target=\"_blank\" rel=\"noopener\">AVPacket<\/a>: The AVPacket struct holds encoded video or audio data. This struct doesn&#8217;t need to be allocated on the heap so you probably won&#8217;t run into any memory issues.<\/li>\n<li>Bytestream: If you are writing a <a href=\"https:\/\/www.ffmpeg.org\/doxygen\/3.1\/structAVIOContext.html\" target=\"_blank\" rel=\"noopener\">custom output<\/a> for the FFmpeg-muxer (which you will probably do if you want to do anything other than dumping everything into a file) your custom IO-function will receive a bytestream from the muxer. Because the muxer combines both audio and video it makes sense that this component unpacks and marshals all the structs. So from here on you don&#8217;t have to worry about managing memory in crude structs any more \ud83d\ude42<\/li>\n<\/ul>\n<pre class=\"prettyprint lang-c_cpp\" data-start-line=\"1\" data-visibility=\"visible\" data-highlight=\"\" data-caption=\"\">\/\/ uint8_t* buffer contains the muxed bytestream\nint custom_io_write(void* opaque, uint8_t *buffer, int32_t buffer_size);<\/pre>\n<h2>More details, more code<\/h2>\n<p>As promised we will now take a closer look at each step of our processing pipeline. 
There will be quite a lot of code but hopefully this will help make starting with libav* a little less painful.<\/p>\n<h3>Resampling<\/h3>\n<p>The first step in the scaling component is to set up a <a href=\"https:\/\/ffmpeg.org\/doxygen\/3.1\/structSwsContext.html\" target=\"_blank\" rel=\"noopener\">SwsContext<\/a>. This only has to be done once in your program. Also&nbsp;set the input and target resolution as well as the respective pixel formats. As mentioned earlier we have to transform the pixel format from a packed to a planar format. If you want to&nbsp;change the resolution, use the bicubic scaling algorithm, since it produces good image quality with decent performance. With the last parameter you can tune your scaling algorithm.<\/p>\n<pre class=\"prettyprint lang-c_cpp\" data-start-line=\"1\" data-visibility=\"visible\" data-highlight=\"\" data-caption=\"\">int source_width = 1920, source_height = 1080;\nint target_width = 1920, target_height = 1080;\n\nstruct SwsContext* scaler = sws_getContext(\n    source_width, source_height, AV_PIX_FMT_YUYV422,\n    target_width, target_height, AV_PIX_FMT_YUV422P,\n    SWS_BICUBIC, NULL, NULL, NULL\n);<\/pre>\n<p>From now on everything has to be done on a per-image basis, so you have to wrap the next calls in some kind of loop.<\/p>\n<p>First we allocate an output buffer for the scaler. Because the scaler has to copy all the data anyway we don&#8217;t have to trouble ourselves with reusing buffers from the capturing process.<\/p>\n<p>With <a href=\"https:\/\/ffmpeg.org\/doxygen\/3.1\/group__lavu__picture.html#ga841e0a89a642e24141af1918a2c10448\" target=\"_blank\" rel=\"noopener\">av_image_alloc()<\/a> we can allocate the actual memory in the AVFrame-container. The buffer size alignment doesn&#8217;t seem to influence anything, and&nbsp;even the libsws source code doesn&#8217;t give any clue as to what value to use. 
It could be an optimization for SIMD instructions on the CPU but I couldn&#8217;t find proof of that.<\/p>\n<pre class=\"prettyprint lang-c_cpp\" data-start-line=\"1\" data-visibility=\"visible\" data-highlight=\"\" data-caption=\"\">AVFrame* scaled_frame = av_frame_alloc();\n\nscaled_frame-&gt;format = AV_PIX_FMT_YUV422P;\nscaled_frame-&gt;width  = target_width; \nscaled_frame-&gt;height = target_height;\n\nav_image_alloc(\n    scaled_frame-&gt;data, scaled_frame-&gt;linesize, \n    scaled_frame-&gt;width, scaled_frame-&gt;height, \n    scaled_frame-&gt;format, 16);<\/pre>\n<p>The last step is to call <a href=\"https:\/\/ffmpeg.org\/doxygen\/3.1\/group__libsws.html#gae531c9754c9205d90ad6800015046d74\" target=\"_blank\" rel=\"noopener\">sws_scale()<\/a>. The source_data and source_linesize parameters are both arrays with&nbsp;an entry for each plane of the source image (4 in total). Because we are provided with a packed pixel format from our webcam, only the first element of the source-arrays will be set.<\/p>\n<pre class=\"prettyprint lang-c_cpp\" data-start-line=\"1\" data-visibility=\"visible\" data-highlight=\"\" data-caption=\"\">sws_scale( scaler,\n    (const uint8_t * const*) source_data, source_linesize,\n    0, source_height,\n    scaled_frame-&gt;data, scaled_frame-&gt;linesize);<\/pre>\n<p>After you have passed your scaled frame to the encoder you have to free the frame yourself: the image data allocated by av_image_alloc() is freed with av_freep(), the frame itself with <a href=\"https:\/\/www.ffmpeg.org\/doxygen\/3.1\/group__lavu__frame.html#ga979d73f3228814aee56aeca0636e37cc\" target=\"_blank\" rel=\"noopener\">av_frame_free()<\/a>.<\/p>\n<pre class=\"prettyprint lang-c_cpp\" data-start-line=\"1\" data-visibility=\"visible\" data-highlight=\"\" data-caption=\"\">av_freep(&amp;scaled_frame-&gt;data[0]);\nav_frame_free(&amp;scaled_frame);<\/pre>\n<h3>Encoding<\/h3>\n<p>Now to the fun part.&nbsp;With a mandatory call to <a 
href=\"https:\/\/ffmpeg.org\/doxygen\/3.1\/group__lavf__core.html#ga917265caec45ef5a0646356ed1a507e3\" target=\"_blank\" rel=\"noopener\">avcodec_register_all()<\/a> we initialize libavcodec. Afterwards we can get a handle to the codec we want to use.<\/p>\n<pre class=\"prettyprint lang-c_cpp\" data-start-line=\"1\" data-visibility=\"visible\" data-highlight=\"\" data-caption=\"\">avcodec_register_all();\nAVCodec* codec = avcodec_find_encoder(AV_CODEC_ID_H264);<\/pre>\n<p>Next we have to&nbsp;configure the encoder.&nbsp;You might want to adjust these settings according to your needs.<\/p>\n<p><strong>Because most of your users will watch the stream either on a computer&nbsp;or mobile device with a 60Hz display, you should set&nbsp;the framerate only to a divisor&nbsp;of 60 to avoid stuttering.<\/strong><br \/>\n<strong> If you are using 23.976, 24, 25 or 50 frames per second there&nbsp;might be something wrong with your setup.<\/strong><br \/>\n<strong> Also only use progressive scanning!<\/strong><\/p>\n<p>When using H.264 you can also set <a href=\"https:\/\/trac.ffmpeg.org\/wiki\/Encode\/H.264\" target=\"_blank\" rel=\"noopener\">codec presets<\/a>. These trade encoding speed against image quality: slower presets yield better image quality but take longer to encode. &#8220;ultrafast&#8221;, &#8220;superfast&#8221; and &#8220;veryfast&#8221; seem to be the only presets that can keep up with 1080p 60fps while&nbsp;livestreaming (and they have a nice ring to them). 
This was tested with an <a href=\"https:\/\/ark.intel.com\/de\/products\/88193\/Intel-Core-i5-6200U-Processor-3M-Cache-up-to-2_80-GHz\" target=\"_blank\" rel=\"noopener\">Intel Core i5-6200U<\/a>.<\/p>\n<pre class=\"prettyprint lang-c_cpp\" data-start-line=\"1\" data-visibility=\"visible\" data-highlight=\"\" data-caption=\"\">AVCodecContext* encoder = avcodec_alloc_context3(codec);\n\nencoder-&gt;bit_rate = 10 * 1000 * 1000; \/\/ 10 Mbit\/s\nencoder-&gt;width = 1920;\nencoder-&gt;height = 1080;\nencoder-&gt;time_base = (AVRational) {1,60};\nencoder-&gt;gop_size = 30;\nencoder-&gt;max_b_frames = 1;\nencoder-&gt;pix_fmt = AV_PIX_FMT_YUV422P;\n\nav_opt_set(encoder-&gt;priv_data, \"preset\", \"ultrafast\", 0);\n\navcodec_open2(encoder, codec, NULL);<\/pre>\n<p>With the <a href=\"https:\/\/ffmpeg.org\/doxygen\/3.1\/group__lavc__decoding.html#ga9395cb802a5febf1f00df31497779169\" target=\"_blank\" rel=\"noopener\">avcodec_send_frame()<\/a>&nbsp;call we can send our raw frames to the encoder. This of course is also done on a per-image basis, so once again you have to wrap it in a loop. Because the encoder copies the frame to an internal buffer we can then safely free all our frame buffers.<\/p>\n<p>A word on timestamps: simply incrementing an integer isn&#8217;t the correct way to do it, but it seems to work since it meets&nbsp;the monotonicity requirement of the encoder.<\/p>\n<pre class=\"prettyprint lang-c_cpp\" data-start-line=\"1\" data-visibility=\"visible\" data-highlight=\"\" data-caption=\"\">AVFrame* raw_frame = scaled_frame; \n\nraw_frame-&gt;pts = pts++;\navcodec_send_frame(encoder, raw_frame);\n\nav_freep(&amp;raw_frame-&gt;data[0]);\nav_frame_free(&amp;raw_frame);\n<\/pre>\n<p>The correct (but in my case untested) solution would be to use the formula below: you basically have to increment the pts for each&nbsp;interval of your timebase, even if you haven&#8217;t read a frame. 
Therefore you could substitute the skipped_frames-variable with the number of timebase&nbsp;intervals that have passed since the previous_pts.<\/p>\n<pre class=\"prettyprint lang-c_cpp\" data-start-line=\"1\" data-visibility=\"visible\" data-highlight=\"\" data-caption=\"\">int64_t previous_pts = 0; \n\nraw_frame-&gt;pts = previous_pts + 1 + skipped_frames;\nprevious_pts = raw_frame-&gt;pts;<\/pre>\n<p>To read encoded packets from your encoder you simply call the <a href=\"https:\/\/ffmpeg.org\/doxygen\/3.1\/group__lavc__decoding.html#ga5b8eff59cf259747cf0b31563e38ded6\" target=\"_blank\" rel=\"noopener\">avcodec_receive_packet()<\/a> function. Because the encoder may&nbsp;combine information from several input frames into one output frame, the first few calls to avcodec_receive_packet() will not return any packets but an EAGAIN error.<\/p>\n<pre class=\"prettyprint lang-c_cpp\" data-start-line=\"1\" data-visibility=\"visible\" data-highlight=\"\" data-caption=\"\">AVPacket encoded_frame; \nint ret = avcodec_receive_packet(encoder, &amp;encoded_frame);\n\nif(ret == 0) {\n    \/\/ yeah :)\n}\n<\/pre>\n<p>When ending your stream you will have to do&nbsp;two things:<\/p>\n<ul>\n<li>call avcodec_send_frame() with the second argument set to NULL. This will start draining&nbsp;the internal buffers of the encoder and ensures that all frames that have been sent to the encoder actually get put in the encoded video.<\/li>\n<li>call avcodec_receive_packet() until you get an&nbsp;AVERROR_EOF error. This will indicate that all packets have been read from the encoder.<\/li>\n<\/ul>\n<p>These steps are usually called &#8220;draining&#8221; or &#8220;flushing&#8221; the encoder.<\/p>\n<h3>Muxing<\/h3>\n<p>To set up the muxer we first have to set our output format. 
While the <a href=\"https:\/\/www.ffmpeg.org\/doxygen\/3.1\/group__lavf__encoding.html#ga8795680bd7489e96eeb5aef5e615cacc\" target=\"_blank\" rel=\"noopener\">av_guess_format()<\/a> call doesn&#8217;t seem to be the prettiest solution, it works fairly well.<\/p>\n<p>With <a href=\"https:\/\/ffmpeg.org\/doxygen\/3.1\/group__lavf__core.html#gadcb0fd3e507d9b58fe78f61f8ad39827\" target=\"_blank\" rel=\"noopener\">avformat_new_stream()<\/a> we create both an audio and a video track. You can also create subtitle tracks or add multiple audio tracks for different languages in your video. Because at playback time the decoder has to know which codec has been used to encode the tracks, we have to embed this information in our output format. The IDs for the codecs can be found <a href=\"https:\/\/www.ffmpeg.org\/doxygen\/3.1\/group__lavc__core.html#gaadca229ad2c20e060a14fec08a5cc7ce\" target=\"_blank\" rel=\"noopener\">here<\/a>. Because <a href=\"https:\/\/ffmpeg.org\/doxygen\/3.1\/group__lavc__core.html#ga0c7058f764778615e7978a1821ab3cfe\" target=\"_blank\" rel=\"noopener\">avcodec_parameters_from_context()<\/a> sets only codec-specific settings, we&nbsp;have to set the&nbsp;timebase and framerate of our tracks manually.<\/p>\n<pre class=\"prettyprint lang-c_cpp\" data-start-line=\"1\" data-visibility=\"visible\" data-highlight=\"\" data-caption=\"\">AVFormatContext* muxer = avformat_alloc_context();\n\nmuxer-&gt;oformat = av_guess_format(\"matroska\", \"test.mkv\", NULL);\n\nAVStream* video_track = avformat_new_stream(muxer, NULL);\nAVStream* audio_track = avformat_new_stream(muxer, NULL);\nmuxer-&gt;oformat-&gt;video_codec = AV_CODEC_ID_H264;\nmuxer-&gt;oformat-&gt;audio_codec = AV_CODEC_ID_OPUS;\n\navcodec_parameters_from_context(video_track-&gt;codecpar, encoder); \nvideo_track-&gt;codecpar-&gt;codec_type = AVMEDIA_TYPE_VIDEO;\n\nvideo_track-&gt;time_base = (AVRational) {1,60};\nvideo_track-&gt;avg_frame_rate = (AVRational) {60, 1};<\/pre>\n<p>The muxer has to know where 
to write the resulting bytestream. Therefore we must use an IO-context.&nbsp;You can get this by either using the <a href=\"https:\/\/www.ffmpeg.org\/doxygen\/3.1\/avio_8h.html#ade8a63980569494c99593ebf0d1e891b\" target=\"_blank\" rel=\"noopener\">avio_open2()<\/a>-function or by creating your own custom context. In any case the muxer will handle&nbsp;calling these functions; you don&#8217;t have to worry about that. Since I wanted to write the output to a Unix domain socket I had to use a context with a custom write-callback. If you want to read more about custom IO,&nbsp;<a href=\"https:\/\/www.codeproject.com\/Tips\/489450\/Creating-Custom-FFmpeg-IO-Context\" target=\"_blank\" rel=\"noopener\">here<\/a> is a tutorial.<\/p>\n<p>First we have to set up a buffer for the bytestream which we then provide to <a href=\"https:\/\/www.ffmpeg.org\/doxygen\/3.1\/avio_8h.html#a853f5149136a27ffba3207d8520172a5\" target=\"_blank\" rel=\"noopener\">avio_alloc_context()<\/a>. The third parameter&nbsp;sets the buffer to be writable (0 if you want&nbsp;read-only). The fourth parameter can be used to pass custom data to the IO-functions. 
The last three parameters are the functions for reading input (not required here since we are not decoding anything), writing, and seeking (only needed when building a player), in that order.<\/p>\n<p>To add the IO-context to the muxer&nbsp;set the <a href=\"https:\/\/www.ffmpeg.org\/doxygen\/3.1\/structAVFormatContext.html#a1e7324262b6b78522e52064daaa7bc87\" target=\"_blank\" rel=\"noopener\">pb field<\/a> of the muxer context.<\/p>\n<pre class=\"prettyprint lang-c_cpp\" data-start-line=\"1\" data-visibility=\"visible\" data-highlight=\"\" data-caption=\"\">int avio_buffer_size = 4 * 1024;\nvoid* avio_buffer = av_malloc(avio_buffer_size);\n\nAVIOContext* custom_io = avio_alloc_context (\n    avio_buffer, avio_buffer_size,\n    1,\n    (void*) 42,\n    NULL, &amp;custom_io_write, NULL);\n    \nmuxer-&gt;pb = custom_io;\n<\/pre>\n<p>The custom writing function has the following signature.&nbsp;You can access the muxer&#8217;s bytestream via the buffer-parameter.<\/p>\n<pre class=\"prettyprint lang-c_cpp\" data-start-line=\"1\" data-visibility=\"visible\" data-highlight=\"\" data-caption=\"\">int custom_io_write(void* opaque, uint8_t *buffer, int32_t buffer_size);<\/pre>\n<p>Before we can start to put the actual data into our output format we first have to write a header. Here you can also provide&nbsp;additional options to the muxer. The options are set with the <a href=\"https:\/\/www.ffmpeg.org\/doxygen\/3.1\/group__lavu__dict.html#ga8d9c2de72b310cef8e6a28c9cd3acbbe\" target=\"_blank\" rel=\"noopener\">av_dict_set()<\/a>-function; avformat_write_header() then consumes all the options it can process from your dictionary. The &#8220;live&#8221; option tells the muxer to output frames with&nbsp;strictly ascending presentation timestamps and prevents the muxer from reordering frames. 
The muxer also writes the entire header at the beginning of the video (with a placeholder for the length of the video) instead of just a placeholder for the entire header.<\/p>\n<pre class=\"prettyprint lang-c_cpp\" data-start-line=\"1\" data-visibility=\"visible\" data-highlight=\"\" data-caption=\"\">AVDictionary *options = NULL;\nav_dict_set(&amp;options, \"live\", \"1\", 0);\navformat_write_header(muxer, &amp;options);<\/pre>\n<p>With everything set up we can now send packets to our muxer. Once again this is done on a per-packet basis; a loop would do nicely here.<\/p>\n<p>To add the packets to the correct track (audio or video) we need to add an identifying stream index to each packet. The index of the track is simply incremented for each call to&nbsp;avformat_new_stream().<\/p>\n<p>Now for the timestamp-magic part: because some containers (e.g. Matroska) force&nbsp;a fixed timebase on their tracks (in this case 1\/1000) we need to scale the timestamps of each track (timebase 1\/60) to match the container&#8217;s timebase. If we didn&#8217;t do this, decoders would play the video at the wrong framerate, which would at best look funny.<br \/>\nThis has to be done both for the presentation and decoding time stamps. Because the documentation is very vague on how to use this function:&nbsp;<a href=\"https:\/\/ffmpeg.org\/doxygen\/3.1\/group__lavu__math.html#gaf02994a8bbeaa91d4757df179cbe567f\" target=\"_blank\" rel=\"noopener\">av_rescale_q()<\/a> first expects the timebase of your track (1\/60) and then the target timebase of your container (1\/1000).<\/p>\n<p>From there it&#8217;s as simple as calling <a href=\"https:\/\/ffmpeg.org\/doxygen\/3.1\/group__lavf__encoding.html#gaa85cc1774f18f306cd20a40fc50d0b36\" target=\"_blank\" rel=\"noopener\">av_write_frame()<\/a> and freeing your input packets. 
The muxer then writes the resulting bytestream to the IO-context.<\/p>\n<pre class=\"prettyprint lang-c_cpp\" data-start-line=\"1\" data-visibility=\"visible\" data-highlight=\"\" data-caption=\"\">AVPacket encoded_packet; \nAVRational encoder_time_base = (AVRational) {1, 60};\n\nencoded_packet.stream_index = video_track-&gt;index;\n\nint64_t scaled_pts = av_rescale_q(encoded_packet.pts, encoder_time_base, video_track-&gt;time_base);\nencoded_packet.pts = scaled_pts;\n\nint64_t scaled_dts = av_rescale_q(encoded_packet.dts, encoder_time_base, video_track-&gt;time_base);\nencoded_packet.dts = scaled_dts;\n\nint ret = av_write_frame(muxer, &amp;encoded_packet);\n\nav_packet_unref(&amp;encoded_packet);<\/pre>\n<p>At the end of your stream you have to remember to write the trailer to your video stream.<\/p>\n<pre class=\"prettyprint lang-c_cpp\" data-start-line=\"1\" data-visibility=\"visible\" data-highlight=\"\" data-caption=\"\">av_write_trailer(muxer);<\/pre>\n<h2>Testing<\/h2>\n<p>And that&#8217;s it.&nbsp;With some glue code and coffee you should now be able to see a moving picture. 
If you can, simply write the video&nbsp;to stdout and pipe it into ffplay.<\/p>\n<pre class=\"prettyprint lang-sh\" data-start-line=\"1\" data-visibility=\"visible\" data-highlight=\"\" data-caption=\"\">ffplay -f matroska pipe:0<\/pre>\n<p>If this doesn&#8217;t work for you, you can dump&nbsp;the video to a file and watch it with any video player.<\/p>\n<p>If you want to test the streaming capabilities of your program you&nbsp;can use this command to open an HTTP server, listen for an incoming MKV stream and display it directly.<\/p>\n<pre class=\"prettyprint lang-sh\" data-start-line=\"1\" data-visibility=\"visible\" data-highlight=\"\" data-caption=\"\">ffplay -f matroska -listen 1 -i http:\/\/&lt;SERVER_IP&gt;:&lt;SERVER_PORT&gt;<\/pre>\n<p>If you have gathered some experience with video streaming yourself, feel free to post&nbsp;helpful tutorials, improvements to this post, or any other tips in the comments.<\/p>\n<h5>Image sources:<\/h5>\n<ul>\n<li>title image: <a href=\"https:\/\/commons.wikimedia.org\/wiki\/File:Green_screen_live_streaming_production_at_Mediehuset_K%C3%B8benhavn.jpg\" target=\"_blank\" rel=\"noopener\">https:\/\/commons.wikimedia.org\/wiki\/File:Green_screen_live_streaming_production_at_Mediehuset_K%C3%B8benhavn.jpg<\/a>, Author: Rehak<\/li>\n<\/ul>\n","protected":false},"excerpt":{"rendered":"<p>If&nbsp;you want to create videos&nbsp;using FFmpeg&nbsp;there is a basic&nbsp;pipeline setup to follow. 
We will first take a short overview over this pipeline and then&nbsp;focus on each individual section.<\/p>\n","protected":false},"author":5,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_jetpack_memberships_contains_paid_content":false,"footnotes":""},"categories":[22],"tags":[4,102,103,104],"ppma_author":[681],"class_list":["post-2179","post","type-post","status-publish","format-standard","hentry","category-student-projects","tag-linux","tag-livestreaming","tag-streaming","tag-video"],"aioseo_notices":[],"jetpack_featured_media_url":"","jetpack-related-posts":[{"id":2120,"url":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2017\/03\/31\/livestreaming-with-libav_-tutorial-part-1\/","url_meta":{"origin":2179,"position":0},"title":"Livestreaming with libav* &#8211; Tutorial (Part 1)","author":"Benjamin Binder","date":"31. March 2017","format":false,"excerpt":"Lifestreaming is the real deal of video today, however\u00a0there aren't that many content creation tools to choose from.\u00a0YouTube, Facebook and Twitter are pushing hard to enable their users to stream vlogging-style content live from their phones with proprietary Apps, and OBS is used for Let's Plays and Twitch streams. But\u2026","rel":"","context":"In &quot;Interactive Media&quot;","block_context":{"text":"Interactive Media","link":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/category\/interactive-media\/"},"img":{"alt_text":"Green screen live streaming production at Mediehuset K\u00f8benhavn. 
Author: Rehak","src":"https:\/\/upload.wikimedia.org\/wikipedia\/commons\/a\/af\/Green_screen_live_streaming_production_at_Mediehuset_K%C3%B8benhavn.jpg","width":350,"height":200,"srcset":"https:\/\/upload.wikimedia.org\/wikipedia\/commons\/a\/af\/Green_screen_live_streaming_production_at_Mediehuset_K%C3%B8benhavn.jpg 1x, https:\/\/upload.wikimedia.org\/wikipedia\/commons\/a\/af\/Green_screen_live_streaming_production_at_Mediehuset_K%C3%B8benhavn.jpg 1.5x, https:\/\/upload.wikimedia.org\/wikipedia\/commons\/a\/af\/Green_screen_live_streaming_production_at_Mediehuset_K%C3%B8benhavn.jpg 2x, https:\/\/upload.wikimedia.org\/wikipedia\/commons\/a\/af\/Green_screen_live_streaming_production_at_Mediehuset_K%C3%B8benhavn.jpg 3x, https:\/\/upload.wikimedia.org\/wikipedia\/commons\/a\/af\/Green_screen_live_streaming_production_at_Mediehuset_K%C3%B8benhavn.jpg 4x"},"classes":[]},{"id":305,"url":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2016\/02\/17\/jenkbird-the-art-of-deployment-tutorial-part-2\/","url_meta":{"origin":2179,"position":1},"title":"Jenkbird &#8211; The art of deployment &#8211; Part 2","author":"J\u00f6rg Einfeldt","date":"17. February 2016","format":false,"excerpt":"\u00a0 One stage. Two stages. THREE STAGES FOR DEPLOYMENT! 
\u2014 Count von Count on his deployment pipeline Hi, it's us again, the guys with the strange idea of using Sesame Street characters in a blog series about CI.\u00a0Since we didn't really cover the reasons, why you should use CD \/\u2026","rel":"","context":"In &quot;DevOps&quot;","block_context":{"text":"DevOps","link":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/category\/scalable-systems\/devops\/"},"img":{"alt_text":"count_dracula","src":"https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2016\/02\/count_dracula-300x300.png?resize=350%2C200&ssl=1","width":350,"height":200},"classes":[]},{"id":3822,"url":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2018\/08\/05\/3822\/","url_meta":{"origin":2179,"position":2},"title":"Web Performance Optimization for Continuous Deployment &#8211; Move fast and don&#8217;t lose performance","author":"Benjamin Kowatsch","date":"5. August 2018","format":false,"excerpt":"The performance of websites today is a decisive factor in how many users visit them and thus how much money can be earned from them. The impact of this fact is further enhanced by the widespread use of mobile devices and the speed of the mobile Internet. To counteract the\u2026","rel":"","context":"In &quot;Allgemein&quot;","block_context":{"text":"Allgemein","link":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/category\/allgemein\/"},"img":{"alt_text":"","src":"","width":0,"height":0},"classes":[]},{"id":3348,"url":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2018\/03\/30\/continuous-integration-pipeline-for-unity-development-using-gitlab-ci-and-aws\/","url_meta":{"origin":2179,"position":3},"title":"Continuous Integration Pipeline for Unity Development using GitLab CI and AWS","author":"Jonas Graf, Christian Gutwein","date":"30. March 2018","format":false,"excerpt":"This blog entry describes the implementation of a Continous Integration (CI) pipeline especially adapted for Unity projects. 
It makes it possible to automatically execute Unity builds on a configured build server and provide it for a further deployment process if required.","rel":"","context":"In &quot;DevOps&quot;","block_context":{"text":"DevOps","link":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/category\/scalable-systems\/devops\/"},"img":{"alt_text":"","src":"https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2018\/03\/CI_process.png?resize=350%2C200&ssl=1","width":350,"height":200,"srcset":"https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2018\/03\/CI_process.png?resize=350%2C200&ssl=1 1x, https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2018\/03\/CI_process.png?resize=525%2C300&ssl=1 1.5x, https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2018\/03\/CI_process.png?resize=700%2C400&ssl=1 2x"},"classes":[]},{"id":9816,"url":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2020\/02\/24\/using-gitlab-to-set-up-a-ci-cd-workflow-for-an-android-app-from-scratch\/","url_meta":{"origin":2179,"position":4},"title":"Using Gitlab to set up a CI\/CD workflow for an Android App from scratch","author":"Johannes Mauthe","date":"24. February 2020","format":false,"excerpt":"Tim Landenberger (tl061) Johannes Mauthe (jm130) Maximilian Narr (mn066) This blog post aims to provide an overview about how to setup a decent CI\/CD workflow for an android app with the capabilities of Gitlab. The blog post has been written for Gitlab Ultimate. 
Nevertheless, most features are also available in\u2026","rel":"","context":"In &quot;Allgemein&quot;","block_context":{"text":"Allgemein","link":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/category\/allgemein\/"},"img":{"alt_text":"","src":"https:\/\/lh3.googleusercontent.com\/TILM-T31y5pbvWRvoZbA53hR9mLaqMjANXKq7iGX_j-c19K_uiVnmKVDZV9DHBnGdPMgFogHmaNvLSy9gguK5rkMVLlosa4YuvYQQy-d090w90UjqUX_MbwizDt6_zQ1BlT6TrJ5","width":350,"height":200},"classes":[]},{"id":1711,"url":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/2016\/11\/26\/snakes-exploring-pipelines-a-system-engineering-and-management-project\/","url_meta":{"origin":2179,"position":5},"title":"Snakes exploring Pipelines &#8211; A \u201cSystem Engineering and Management\u201d Project","author":"Yann Loic Philippczyk","date":"26. November 2016","format":false,"excerpt":"Part 0: Introduction This series of blog entries describes a student project focused on developing an application by using methods like pair programming, test driven development and deployment pipelines. Once upon a time, which was about one and a half months ago, an illustrious group of three students found together,\u2026","rel":"","context":"In &quot;Allgemein&quot;","block_context":{"text":"Allgemein","link":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/category\/allgemein\/"},"img":{"alt_text":"A python. (Because snakes. Not the language.) 
Source: https:\/\/rashmanly.files.wordpress.com\/2008\/10\/1439659.jpg","src":"https:\/\/i0.wp.com\/blog.mi.hdm-stuttgart.de\/wp-content\/uploads\/2016\/11\/0_1-300x300.jpg?resize=350%2C200&ssl=1","width":350,"height":200},"classes":[]}],"jetpack_sharing_enabled":true,"authors":[{"term_id":681,"user_id":5,"is_guest":0,"slug":"bb074","display_name":"Benjamin Binder","avatar_url":"https:\/\/secure.gravatar.com\/avatar\/b39750be005f19ce71d3af93115f9d5f02d18769c36bfa750ca4de423b20d981?s=96&d=mm&r=g","0":null,"1":"","2":"","3":"","4":"","5":"","6":"","7":"","8":""}],"_links":{"self":[{"href":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/wp-json\/wp\/v2\/posts\/2179","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/wp-json\/wp\/v2\/users\/5"}],"replies":[{"embeddable":true,"href":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/wp-json\/wp\/v2\/comments?post=2179"}],"version-history":[{"count":48,"href":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/wp-json\/wp\/v2\/posts\/2179\/revisions"}],"predecessor-version":[{"id":24744,"href":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/wp-json\/wp\/v2\/posts\/2179\/revisions\/24744"}],"wp:attachment":[{"href":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/wp-json\/wp\/v2\/media?parent=2179"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/wp-json\/wp\/v2\/categories?post=2179"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/wp-json\/wp\/v2\/tags?post=2179"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/blog.mi.hdm-stuttgart.de\/index.php\/wp-json\/wp\/v2\/ppma_author?post=2179"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":
true}]}}