(My thesis direction was assigned, not chosen; yes, assigned, for many reasons. I'm a postgraduate, and back in December, at the very beginning, the answer I got was to work on content development. It felt fairly hopeless, but it still has to be done bit by bit. The thesis topic is coding optimization for HEVC. So far I have no real train of thought: I've read some papers, and half a year has been spent on all sorts of distractions. Honestly, I've been lazy, like a high-school student cramming three days before the entrance exam all over again, procrastination at its peak. No more rambling, let's get started. PS: the deadline is the middle of next month. Really, one cannot procrastinate this much. Never again.)
Let's call this the second post. The first one actually taught how to configure FFmpeg and how to use the FFmpeg command line; I'll supplement it later.
Below is the code for decoding a video file directly with FFmpeg.
/**
 * The simplest decoder based on FFmpeg
 * Simplest FFmpeg Decoder
 *
 * Lei Xiaohua
 * leixiaohua1020@126.com
 * Communication University of China / Digital TV Technology
 * http://blog.csdn.net/leixiaohua1020
 *
 * This program decodes video files (HEVC, H.264, MPEG2, etc. are supported).
 * It is the simplest FFmpeg video decoding tutorial.
 * By studying this example, you can understand the FFmpeg decoding process.
 * This software is a simplest video decoder based on FFmpeg.
 * Suitable for beginner of FFmpeg.
 */
#include <stdio.h>
#include <iostream>

#define __STDC_CONSTANT_MACROS  //required so FFmpeg's headers can use the C99 constant macros under C++

extern "C"  //FFmpeg is written in C, but this program is compiled as C++, so extern "C" is needed
{
#include "libavcodec/avcodec.h"    //codec library
#include "libavformat/avformat.h"  //container (encapsulation) format handling
#include "libswscale/swscale.h"    //pixel format conversion; sws_scale() also crops away the padding ("black edge") left after decoding
/*
#include "libavfilter/avfilter.h"      //filter / effect processing
#include "libavdevice/avdevice.h"      //input and output of various devices
#include "libavutil/avutil.h"          //utility library (most other libraries depend on it)
#include "libpostproc/postprocess.h"   //post-processing
#include "libswresample/swresample.h"  //audio sample format conversion
*/
};

int main(int argc, char* argv[])
{
    AVFormatContext *pFormatCtx;          //container format information
    int i, videoindex;
    AVCodecContext *pCodecCtx;            //codec context
    AVCodec *pCodec;                      //decoder information
    AVFrame *pFrame, *pFrameYUV;          //image data
    uint8_t *out_buffer;
    AVPacket *packet;                     //compressed stream data
    int y_size;                           //size of the Y (luma) plane (declared but not used below)
    int ret, got_picture;
    struct SwsContext *img_convert_ctx;   //conversion (cropping) context

    //Input file path
    char filepath[] = "Titanic.ts";

    int frame_cnt;   //counts decoded frames, starting from zero

    //Register all components
    av_register_all();
    //Initialize networking
    avformat_network_init();
    //Allocate the format context that will hold the container format information
    pFormatCtx = avformat_alloc_context();

    //avformat_open_input(): open the input video file and check that it is valid
    if (avformat_open_input(&pFormatCtx, filepath, NULL, NULL) != 0) {
        printf("Couldn't open input stream.\n");
        return -1;
    }
    //avformat_find_stream_info(): read the video file's stream information
    if (avformat_find_stream_info(pFormatCtx, NULL) < 0) {
        printf("Couldn't find stream information.\n");
        return -1;
    }

    //Find the index of the video stream
    videoindex = -1;
    for (i = 0; i < pFormatCtx->nb_streams; i++)   //nb_streams is the number of streams in the file
        if (pFormatCtx->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO) {
            videoindex = i;   //video stream index
            break;
        }
    if (videoindex == -1) {
        printf("Didn't find a video stream.\n");
        return -1;
    }

    pCodecCtx = pFormatCtx->streams[videoindex]->codec;
    //avcodec_find_decoder(): find the decoder matching the video's codec ID
    pCodec = avcodec_find_decoder(pCodecCtx->codec_id);
    if (pCodec == NULL) {
        printf("Codec not found.\n");
        return -1;
    }
    //avcodec_open2(): open the decoder; after this the context holds the information needed below
    if (avcodec_open2(pCodecCtx, pCodec, NULL) < 0) {
        printf("Could not open codec.\n");
        return -1;
    }

    /*
     * Add code to output video information here
     * From pFormatCtx, use fprintf()
     */
    /*
    std::cout << pFormatCtx->duration << std::endl;
    system("pause");
    */
    //File output
    FILE *fp = fopen("info.txt", "wb+");
    //Container format
    fprintf(fp, "Encapsulation format: %s\n", pFormatCtx->iformat->name);
    //Bit rate
    fprintf(fp, "Bit rate: %d\n", pFormatCtx->bit_rate);
    //Duration: AVFormatContext.duration is in AV_TIME_BASE (microsecond) units, so divide by 1000 for ms
    fprintf(fp, "Duration: %lld ms\n", (long long)(pFormatCtx->duration / 1000));
    //Codec name
    fprintf(fp, "Coding method: %s\n", pCodec->name);
    //Long name of the container format
    fprintf(fp, "Long name: %s\n", pFormatCtx->iformat->long_name);
    //Width and height of the video
    fprintf(fp, "Width: %d, Height: %d\n",
        pFormatCtx->streams[videoindex]->codec->width,
        pFormatCtx->streams[videoindex]->codec->height);

    //Print the same information on the console
    printf("Duration: %lld ms\n", (long long)(pFormatCtx->duration / 1000));
    printf("Encapsulation format: %s\n", pFormatCtx->iformat->name);
    printf("Long name: %s\n", pFormatCtx->iformat->long_name);
    printf("Width: %d, Height: %d\n",
        pFormatCtx->streams[videoindex]->codec->width,
        pFormatCtx->streams[videoindex]->codec->height);

    pFrame = av_frame_alloc();      //allocate the decoded frame
    pFrameYUV = av_frame_alloc();   //allocate the converted YUV frame
    out_buffer = (uint8_t *)av_malloc(avpicture_get_size(PIX_FMT_YUV420P, pCodecCtx->width, pCodecCtx->height));
    avpicture_fill((AVPicture *)pFrameYUV, out_buffer, PIX_FMT_YUV420P, pCodecCtx->width, pCodecCtx->height);
    packet = (AVPacket *)av_malloc(sizeof(AVPacket));

    //Output Info-----------------------------
    printf("--------------- File Information ----------------\n");
    av_dump_format(pFormatCtx, 0, filepath, 0);   //print basic information
    printf("-------------------------------------------------\n");

    img_convert_ctx = sws_getContext(pCodecCtx->width, pCodecCtx->height, pCodecCtx->pix_fmt,
        pCodecCtx->width, pCodecCtx->height, PIX_FMT_YUV420P, SWS_BICUBIC, NULL, NULL, NULL);

    //Create the two output files
    FILE *fp_264 = fopen("test264.h264", "wb+");
    FILE *fp_yuv = fopen("test_yuv.yuv", "wb+");

    //Decoding loop; runs until av_read_frame() fails at the end of the stream
    frame_cnt = 0;
    while (av_read_frame(pFormatCtx, packet) >= 0) {   //fetch one packet
        if (packet->stream_index == videoindex) {      //is it a video packet?
            /*
             * Add code to output H264 bitstream here
             * From packet, use fwrite()
             */
            fwrite(packet->data, 1, packet->size, fp_264);
            //Pre-decoding parameters: the compressed frame size, i.e. in the AVPacket
            fprintf(fp, "Frame %d packet size: %d\n", frame_cnt, packet->size);

            //avcodec_decode_video2(): decode one frame of compressed data
            ret = avcodec_decode_video2(pCodecCtx, pFrame, &got_picture, packet);
            if (ret < 0) {
                printf("Decode Error.\n");
                return -1;
            }
            //Crop the black edge: the decoded frame's linesize is usually wider than the visible
            //width, so sws_scale() copies only the visible area (writing the raw planes directly
            //would also write the padding)
            if (got_picture) {
                //Post-decoding parameters: frame type, i.e. in the AVFrame (only valid after decoding)
                fprintf(fp, "Frame %d type: %d\n", frame_cnt, pFrame->pict_type);
                sws_scale(img_convert_ctx, (const uint8_t* const*)pFrame->data, pFrame->linesize, 0, pCodecCtx->height,
                    pFrameYUV->data, pFrameYUV->linesize);
                printf("Decoded frame index: %d\n", frame_cnt);
                /*
                 * Add code to output YUV here
                 * From pFrameYUV, use fwrite()
                 */
                //data[] holds three planes: Y, U and V in turn
                fwrite(pFrameYUV->data[0], 1, pCodecCtx->width*pCodecCtx->height, fp_yuv);     //write Y data
                fwrite(pFrameYUV->data[1], 1, pCodecCtx->width*pCodecCtx->height/4, fp_yuv);   //write U data
                fwrite(pFrameYUV->data[2], 1, pCodecCtx->width*pCodecCtx->height/4, fp_yuv);   //write V data
                frame_cnt++;
            }
        }
        av_free_packet(packet);
    }
    fclose(fp_264);
    fclose(fp_yuv);
    fclose(fp);

    //Release the memory used above
    sws_freeContext(img_convert_ctx);
    av_frame_free(&pFrameYUV);
    av_frame_free(&pFrame);
    avcodec_close(pCodecCtx);
    avformat_close_input(&pFormatCtx);

    //system("pause");
    return 0;
}
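To make the linesize point concrete: each row of a decoded plane occupies linesize[] bytes in memory, but only width bytes of it are real picture. A minimal alternative sketch (assuming a yuv420p decoded frame, and reusing pFrame, pCodecCtx and fp_yuv from the program above) writes the visible area row by row instead of going through sws_scale():

//Write only the visible pixels of each row, skipping the linesize padding.
//Assumes yuv420p, where U and V are half width and half height.
for (int y = 0; y < pCodecCtx->height; y++)        //Y plane
    fwrite(pFrame->data[0] + y * pFrame->linesize[0], 1, pCodecCtx->width, fp_yuv);
for (int y = 0; y < pCodecCtx->height / 2; y++)    //U plane
    fwrite(pFrame->data[1] + y * pFrame->linesize[1], 1, pCodecCtx->width / 2, fp_yuv);
for (int y = 0; y < pCodecCtx->height / 2; y++)    //V plane
    fwrite(pFrame->data[2] + y * pFrame->linesize[2], 1, pCodecCtx->width / 2, fp_yuv);

Either way, the resulting test_yuv.yuv can be checked with ffplay, for example: ffplay -f rawvideo -pixel_format yuv420p -video_size WxH test_yuv.yuv (substituting the actual width and height the program prints).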
The changes above follow the exercises Mr. Lei left in his PPT. You are welcome to point out corrections.
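PS: avcodec_decode_video2() used above belongs to the older FFmpeg API and has since been deprecated. As a rough sketch only (reusing pCodecCtx, packet, pFrame and frame_cnt from the program above, not part of Mr. Lei's original tutorial), the loop body would look like this with the newer send/receive API:

//Newer decode API: feed one packet, then drain all the frames it produces.
ret = avcodec_send_packet(pCodecCtx, packet);          //send compressed data to the decoder
while (ret >= 0) {
    ret = avcodec_receive_frame(pCodecCtx, pFrame);    //fetch one decoded frame
    if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
        break;                                         //decoder needs more input / is drained
    else if (ret < 0)
        return -1;                                     //a real decoding error
    //...sws_scale() and the fwrite() calls go here, as above...
    frame_cnt++;
}
//After the read loop ends, calling avcodec_send_packet(pCodecCtx, NULL) once
//and draining the remaining frames flushes the decoder's delayed frames.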