EasyPlayer Android implements recording while playing (RTSP/RTMP/HTTP/HLS) based on ffmpeg

Keywords: codec, Android

Previously, a blog post specifically introduced EasyPlayer's local video recording function. Simply put, EasyPlayer is an RTSP player: it parses the audio and video media frames out of the RTSP stream and records them with the MediaMuxer class provided by Android. Can EasyPlayer Pro be implemented the same way? Not realistically, because Pro supports most streaming media protocols and container formats, not just RTSP: HTTP, RTSP, RTMP, HLS and local FILE playback are all supported. Parsing each of these formats separately and throwing the frames up to the upper layer is not practical, so EasyPlayer Pro ultimately relies on FFmpeg for demuxing. And since FFmpeg is already used for demux, it can also be used for recording: in FFmpeg, recording is essentially the mux process.

Referring to the relevant ffmpeg mux code, the author has implemented recording-while-playing in Pro. It is still at the testing stage, and this article also serves as a record and summary of the work.

EasyPlayerPro follows the implementation of ffplay and opens a dedicated receiving thread to receive and parse the audio and video media frames. The MUX operation can be performed right after a media frame is received.

[Flow diagram: av_read_frame → MUX → av_interleaved_write_frame]

In the receiving thread we maintain a recording flag, record: 0 means no recording is in progress; 1 means recording has been requested and should start; 2 means recording has started and we are waiting for the first key frame; greater than 2 means recording is fully under way. Rather than describe it further in the abstract, let's go through it together with the code; the flag values are also summarized in the small sketch below.
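To make the flag semantics easier to follow, they can be summarized as a small set of constants (the names below are hypothetical, not the actual EasyPlayerPro identifiers):

#define RECORD_OFF       0   // no recording in progress
#define RECORD_START     1   // recording requested, initialization pending
#define RECORD_WAIT_KEY  2   // muxer initialized, waiting for the first video key frame
                             // values > 2: recording in progress, packets are written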
See the relevant code and comments:

The receiving thread reads a media frame:

ret = av_read_frame(ic, pkt);
if (ret < 0) {
    ... // error handling (end of stream or a read error)
}
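For orientation, here is a rough sketch of how the recording hooks fit into the ffplay-style read loop. The structure follows the snippets below, but names such as packet_queue_put and the exact loop layout are simplified assumptions rather than the real EasyPlayerPro source:

for (;;) {
    ret = av_read_frame(ic, pkt);            // pull the next demuxed packet
    if (ret < 0)
        break;                               // end of stream or read error

    if (is->recording == 1) {
        // START_RECORDING block: create the output context, add the streams,
        // open the file and write the header (see below)
    }
    if (is->recording >= 2) {
        // recording block: rescale timestamps and write the packet (see below)
    }
    if (is->recording == 0) {
        // stop block: write the trailer and free the output context (see below)
    }

    // normal playback path: hand the packet to the decoder queues
    packet_queue_put(&is->videoq /* or audioq */, pkt);
}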

Starting the recording

An external thread changes the read thread's record flag to control the recording state. When recording is requested, the flag is set to 1; the read thread then performs the recording initialization:

START_RECORDING:
    if (is->recording == 1){    // Recording is about to start
        do
        {
            // First, if the current packet belongs to the video stream, it must be a key frame before recording can start.
            av_log(NULL, AV_LOG_INFO, "try start record:%s", is->record_filename);
            AVStream *ins = ic->streams[pkt->stream_index];
            if (ins->codec->codec_type == AVMEDIA_TYPE_VIDEO ){
                av_log(NULL, AV_LOG_DEBUG, "check video  key frame.");
                if (!(pkt->flags & AV_PKT_FLAG_KEY)){   // Not a key frame: leave the recording block and try again with the next packet.
                    av_log(NULL,AV_LOG_WARNING,"waiting for key frame of video stream:%d.", pkt->stream_index); 
                    break;
                }
                is->recording++;
            }
            // At this point the first key frame has been found and recording can start.
            av_log(NULL, AV_LOG_INFO, "start record:%s", is->record_filename);
            // Create the output AVFormatContext used for recording.
            avformat_alloc_output_context2(&o_fmt_ctx, NULL, NULL, is->record_filename);
            if (!o_fmt_ctx){    // Creation failed.
                is->recording = 0;
                av_log(NULL, AV_LOG_WARNING, "avformat_alloc_output_context2 error");
                o_fmt_ctx = is->oc = NULL;
                goto START_RECORDING;
            }
            ofmt = o_fmt_ctx->oformat;
            // Iterate over all of the input media streams.
            for (i = 0; i < ic->nb_streams; i++) {  
                // Map the codec IDs currently supported by the MP4 muxer to their tags for logging; this table is extracted from the muxer code in FFmpeg.
                AVStream *in_stream = ic->streams[i];  
                AVCodecParameters *par = in_stream->codecpar;
                unsigned int tag = 0;            
                if      (par->codec_id == AV_CODEC_ID_H264)      tag = MKTAG('a','v','c','1');
                else if (par->codec_id == AV_CODEC_ID_HEVC)      tag = MKTAG('h','e','v','1');
                else if (par->codec_id == AV_CODEC_ID_VP9)       tag = MKTAG('v','p','0','9');
                else if (par->codec_id == AV_CODEC_ID_AC3)       tag = MKTAG('a','c','-','3');
                else if (par->codec_id == AV_CODEC_ID_EAC3)      tag = MKTAG('e','c','-','3');
                else if (par->codec_id == AV_CODEC_ID_DIRAC)     tag = MKTAG('d','r','a','c');
                else if (par->codec_id == AV_CODEC_ID_MOV_TEXT)  tag = MKTAG('t','x','3','g');
                else if (par->codec_id == AV_CODEC_ID_VC1)       tag = MKTAG('v','c','-','1');
                else if (par->codec_id == AV_CODEC_ID_DVD_SUBTITLE)  tag = MKTAG('m','p','4','s');
                av_log(NULL, AV_LOG_INFO, "par->codec_id:%d, tag:%d\n", par->codec_id, tag);
                if (tag == 0) {
                    // This codec is not in the table above; log a warning (the stream is still processed below).
                    av_log(NULL, AV_LOG_WARNING, "unsupported codec codec_id:%d\n", par->codec_id);
                    // continue;
                }
            // av_log(NULL, AV_LOG_INFO, "-ffplay : %d", __LINE__);
                // Create an output AVStream according to the input AVStream.
                if(ic->streams[i]->codec->codec_type ==AVMEDIA_TYPE_VIDEO){     // This is the video stream.
                    // Check that the width and height are valid.
                    if ((par->width <= 0 || par->height <= 0) &&
                        !(ofmt->flags & AVFMT_NODIMENSIONS)) {
                        av_log(NULL, AV_LOG_ERROR, "dimensions not set\n");
                        continue;
                    }
                    // Add video stream to Muxer.
                    AVStream *out_stream = avformat_new_stream(o_fmt_ctx, in_stream->codec->codec);
                    // Copy some parameters of the video stream to muxer.
                    if (avcodec_copy_context(out_stream->codec, in_stream->codec) < 0) {
                        // Failed: do some error handling and release the resources.
                        // printf( "Failed to copy context from input to output stream codec context\n");  
                        av_log(NULL, AV_LOG_WARNING,
                            "Failed to copy context from input to output stream codec context\n");
                        is->recording = 0;
                        avformat_free_context(o_fmt_ctx);
                        o_fmt_ctx = is->oc = NULL;
                        goto START_RECORDING;  
                    }
                    // av_log(NULL, AV_LOG_INFO, "-ffplay:%d out_stream:%p, in_stream:%p", __LINE__, out_stream->codec, in_stream->codec);
                    out_stream->codec->codec_tag = 0;  
                    if (o_fmt_ctx->oformat->flags & AVFMT_GLOBALHEADER)  
                        out_stream->codec->flags |= CODEC_FLAG_GLOBAL_HEADER;  
                    av_log(NULL, AV_LOG_INFO, "-ffplay : %d video added", __LINE__);
                }else if(ic->streams[i]->codec->codec_type==AVMEDIA_TYPE_AUDIO){  // This is the audio stream.
                    av_log(NULL, AV_LOG_INFO, "-ffplay : %d", __LINE__);
                    // Check that the sample rate is valid.
                    if (par->sample_rate <= 0) {
                        av_log(NULL, AV_LOG_ERROR, "sample rate not set\n");
                        continue;
                    }
                    // Add audio stream to Muxer.
                    AVStream *out_stream = avformat_new_stream(o_fmt_ctx, in_stream->codec->codec);  
                    // Copy some parameters of audio stream to muxer.
                    if (avcodec_copy_context(out_stream->codec, in_stream->codec) < 0) {
                        // Failed: do some error handling and release the resources.
                        av_log(NULL, AV_LOG_WARNING,
                            "Failed to copy context from input to output stream codec context 2\n"); 
                        is->recording = 0;
                        avformat_free_context(o_fmt_ctx);
                        o_fmt_ctx = is->oc = NULL;
                        goto START_RECORDING;
                    }  
                    out_stream->codec->codec_tag = 0;  
                    if (o_fmt_ctx->oformat->flags & AVFMT_GLOBALHEADER)  
                        out_stream->codec->flags |= CODEC_FLAG_GLOBAL_HEADER;
                    av_log(NULL, AV_LOG_INFO, "-ffplay : %d audio added", __LINE__);
                }    
            }  
            // At this point the output AVFormatContext should contain at least one stream, otherwise recording will not start.
            if (o_fmt_ctx->nb_streams < 1){ 
                av_log(NULL, AV_LOG_WARNING,
                    "NO available stream found in muxer \n"); 
                is->recording = 0;
                avformat_free_context(o_fmt_ctx);
                o_fmt_ctx = is->oc = NULL;
                goto START_RECORDING;
            }
            // Create and open the output file.
            if (!(ofmt->flags & AVFMT_NOFILE)){
                av_log(NULL, AV_LOG_INFO, "-ffplay : %d AVFMT_NOFILE", __LINE__);
                if (avio_open(&o_fmt_ctx->pb, is->record_filename, AVIO_FLAG_WRITE) < 0) {  
                    // Error handling.
                    av_log(NULL,AV_LOG_WARNING, "Could not open output file '%s'", is->record_filename);
                    is->recording = 0;
                    avformat_free_context(o_fmt_ctx);
                    o_fmt_ctx = is->oc = NULL;
                    goto START_RECORDING;
                }  
            } 
            // Write the header. Allocate the stream private data and write the stream header to an output media file.
            int r = avformat_write_header(o_fmt_ctx, NULL);
            if (r < 0) {    // error handle    
                av_log(NULL,AV_LOG_WARNING, "Error occurred when opening output file:%d\n",r);
                is->recording = 0;
                avformat_free_context(o_fmt_ctx);
                o_fmt_ctx = is->oc = NULL;
                goto START_RECORDING;
            }  
            // Dump the output format information.
            av_dump_format(o_fmt_ctx, 0, is->record_filename, 1);
            // Set the flag to 2 to indicate that recording has started; the write path below waits for a key frame before writing video.
            is->recording = 2;
        } while(0);
    }
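A side note on the API used above: AVStream->codec and avcodec_copy_context() are deprecated in newer FFmpeg releases. On a recent FFmpeg the same stream setup would look roughly like the sketch below (a minimal equivalent based on the codecpar API, not the EasyPlayerPro source):

// Create the output stream and copy the codec parameters from the input stream.
AVStream *out_stream = avformat_new_stream(o_fmt_ctx, NULL);
if (!out_stream) {
    // handle allocation failure
}
if (avcodec_parameters_copy(out_stream->codecpar, in_stream->codecpar) < 0) {
    // handle copy failure
}
out_stream->codecpar->codec_tag = 0;   // let the muxer choose the codec tag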

Writing the recording

Once the recording initialization completes, the state variable becomes 2 and the thread enters the writing state:

    do{
        if (is->recording >= 2){
            // Ignore streams that have no corresponding output stream in the muxer.
            if (pkt->stream_index >= o_fmt_ctx->nb_streams)
            {
                av_log(NULL,AV_LOG_WARNING,"stream_index large than nb_streams %d:%d\n", pkt->stream_index,  o_fmt_ctx->nb_streams); 
                break; 
            }

            AVStream *ins = ic->streams[pkt->stream_index];
            av_log(NULL,AV_LOG_DEBUG,"before write frame.stream index:%d, codec:%d,type:%d\n", ins->index, ins->codec->codec_id,  ins->codec->codec_type); 
            if(is->recording == 2)  // Wait for the key frame.
            if (ins->codec->codec_type == AVMEDIA_TYPE_VIDEO ){
                av_log(NULL, AV_LOG_DEBUG, "check video  key frame.");
                if (!(pkt->flags & AV_PKT_FLAG_KEY)){
                    av_log(NULL,AV_LOG_WARNING,"waiting for key frame of video stream:%d.", pkt->stream_index); 
                    break;
                }
                // The first key frame has been obtained; recording is now fully under way.
                is->recording++;
            }
            // Make a copy of the received AVPacket.
            AVPacket *newPkt = av_packet_clone(pkt);
            AVStream *in_stream, *out_stream; 
            in_stream  = ic->streams[newPkt->stream_index];
            out_stream = o_fmt_ctx->streams[newPkt->stream_index];
            // Convert PTS/DTS: rescale the timestamps from the input stream's time_base
            // to the output stream's time_base. Without this conversion the timestamps
            // in the recorded file would be wrong.
            newPkt->pts = av_rescale_q_rnd(newPkt->pts, in_stream->time_base, out_stream->time_base, AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX);
            newPkt->dts = av_rescale_q_rnd(newPkt->dts, in_stream->time_base, out_stream->time_base, AV_ROUND_NEAR_INF|AV_ROUND_PASS_MINMAX);
            newPkt->duration = av_rescale_q(newPkt->duration, in_stream->time_base, out_stream->time_base);  
            // Now write the AVPacket.
            int r;
            if (o_fmt_ctx->nb_streams < 2){ // If there is only video or only audio, write it directly
                r = av_write_frame(o_fmt_ctx, newPkt); 
            }else {                         // With multiple streams, call av_interleaved_write_frame, which interleaves the packets internally before writing.
                r = av_interleaved_write_frame(o_fmt_ctx, newPkt);
            }
            // av_packet_clone() allocated newPkt, so release it here to avoid a leak.
            av_packet_free(&newPkt);
            if (r < 0) {
                av_log(NULL, AV_LOG_WARNING, "Error muxing packet\n");
                break;
            }
        }
    }while(0);
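To make the PTS/DTS conversion above more concrete, here is a small illustration with hypothetical values: an RTSP H.264 stream commonly uses a 1/90000 time_base, and the output stream's time_base is assumed here to be 1/1000.

AVRational in_tb  = {1, 90000};   // input stream time_base (typical for RTP/RTSP video)
AVRational out_tb = {1, 1000};    // assumed output stream time_base
int64_t in_pts  = 90000;          // one second, expressed in the input time_base
int64_t out_pts = av_rescale_q_rnd(in_pts, in_tb, out_tb,
                                   AV_ROUND_NEAR_INF | AV_ROUND_PASS_MINMAX);
// out_pts == 1000, i.e. still one second, now expressed in the output time_base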

Stopping the recording

When the recording state is set to zero, recording should stop. At this point we do the corresponding de-initialization work:

    if (is->recording == 0){    // Time to stop recording.
        if (o_fmt_ctx != NULL){
            av_log(NULL, AV_LOG_INFO, "stop record~~~~~~~");
            // Be sure to write the trailer first; for MP4 this writes the index (moov), without which the file will not play.
            av_write_trailer(o_fmt_ctx);
            // Close the output file that was opened with avio_open earlier.
            if (!(o_fmt_ctx->oformat->flags & AVFMT_NOFILE))
                avio_closep(&o_fmt_ctx->pb);
            // Release the output context.
            avformat_free_context(o_fmt_ctx);
            o_fmt_ctx = is->oc = NULL;
        }
    }
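For completeness, the external side (e.g. a JNI call) only needs to change the flag; the read thread does the actual work. A minimal sketch of what such control functions might look like is shown below (hypothetical helpers; VideoState is the ffplay-style player state struct, and record_filename is assumed to be a fixed-size char buffer inside it):

// Request recording to the given file; the read thread initializes the muxer.
static void start_record(VideoState *is, const char *filename)
{
    snprintf(is->record_filename, sizeof(is->record_filename), "%s", filename);
    is->recording = 1;          // 1 = recording requested
}

// Request the read thread to stop recording and finalize the file.
static void stop_record(VideoState *is)
{
    is->recording = 0;          // 0 = stop; the read thread writes the trailer
}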

EasyPlayerPro Download Address: https://fir.im/EasyPlayerPro

For an introduction, see: http://www.easydarwin.org/article/news/117.html

Get more information

Mail: support@easydarwin.org

WEB: www.EasyDarwin.org

QQ Exchange Group: 587254841

Copyright © EasyDarwin.org 2012-2017
