iOS WeChat-Style Small Video Sending

For playing videos, you would first think of using the simple MPMoviePlayerController class, because it is convenient and quick. It is true that the API Apple has wrapped for us saves a lot of trouble, and we can put together a video player very quickly. Unfortunately, highly encapsulated components tend to offer limited customizability, and MPMoviePlayerController proves exactly that. So you naturally turn to AVPlayer. Yes, AVPlayer makes a good custom player, but it has a performance limitation: colleagues on the WeChat team have confirmed that only about 16 AVPlayer instances can play at the same time, after which no more can be created, which is a fatal limitation for a scrollable chat interface.

AVAssetReader+AVAssetReaderTrackOutput

So, since AVPlayer has performance limitations, let's build a player of our own. AVAssetReader can obtain decoded audio and video data from the raw media data. Combined with AVAssetReaderTrackOutput, it can read the video frame by frame as CMSampleBufferRef, and each CMSampleBufferRef can be converted into a CGImageRef. To do this, we can create an ABSMovieDecoder class responsible for video decoding, which passes each CMSampleBufferRef it reads up to the upper layer.
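
To make that concrete, here is a minimal sketch of what the ABSMovieDecoder interface and its delegate protocol might look like; the protocol name and exact declarations are my assumption, inferred from the delegate calls used later in this article.

// Sketch of the decoder interface; names inferred from the delegate calls below.
#import <Foundation/Foundation.h>
#import <CoreMedia/CoreMedia.h>

@class ABSMovieDecoder;

@protocol ABSMovieDecoderDelegate <NSObject>
// Called once for every decoded video frame
- (void)mMoveDecoder:(ABSMovieDecoder *)decoder onNewVideoFrameReady:(CMSampleBufferRef)videoBuffer;
// Called when all frames have been read
- (void)mMoveDecoderOnDecoderFinished:(ABSMovieDecoder *)decoder;
@end

@interface ABSMovieDecoder : NSObject
@property (nonatomic, weak) id<ABSMovieDecoderDelegate> delegate;
// Decodes the video at videoPath frame by frame and reports each frame to the delegate
- (void)transformViedoPathToSampBufferRef:(NSString *)videoPath;
@end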

Then the steps for decoding with AVAssetReader + AVAssetReaderTrackOutput in ABSMovieDecoder's - (void)transformViedoPathToSampBufferRef:(NSString *)videoPath method are as follows:
1. Obtain the media file resource with AVURLAsset

// To get the URL of a media file path, you must use fileURLWithPath: to obtain a file URL
NSURL *fileUrl = [NSURL fileURLWithPath:videoPath];
AVURLAsset *asset = [[AVURLAsset alloc] initWithURL:fileUrl options:nil];

2. Create a reader, AVAssetReader, that reads the media data

NSError *error = nil;
AVAssetReader *reader = [[AVAssetReader alloc] initWithAsset:asset error:&error];

3. Get the AVAssetTrack, which is actually our video source.

NSArray *videoTracks = [asset tracksWithMediaType:AVMediaTypeVideo];
AVAssetTrack *videoTrack =[videoTracks objectAtIndex:0];

4. Configure the reader's output, AVAssetReaderTrackOutput (for example the pixel format to read, or settings for video compression), to get our output port videoReaderOutput, which is our data source.

int m_pixelFormatType;
// For video playback
m_pixelFormatType = kCVPixelFormatType_32BGRA;
// For other uses, such as video compression:
// m_pixelFormatType = kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange;

NSMutableDictionary *options = [NSMutableDictionary dictionary];
[options setObject:@(m_pixelFormatType) forKey:(__bridge NSString *)kCVPixelBufferPixelFormatTypeKey];
AVAssetReaderTrackOutput *videoReaderOutput = [[AVAssetReaderTrackOutput alloc] initWithTrack:videoTrack outputSettings:options];

5. Add the output port to the reader and start reading

[reader addOutput:videoReaderOutput];
[reader startReading];

6. Get the reader's output data source, CMSampleBufferRef

// Make sure nominalFrameRate > 0; we have seen videos (recorded on Android) whose frame rate is 0.
while ([reader status] == AVAssetReaderStatusReading && videoTrack.nominalFrameRate > 0) {
    // Read a video sample; copyNextSampleBuffer returns NULL once the track is exhausted
    CMSampleBufferRef videoBuffer = [videoReaderOutput copyNextSampleBuffer];
    if (!videoBuffer) { break; }
    [self.delegate mMoveDecoder:self onNewVideoFrameReady:videoBuffer];

    // Sleep for a short while as needed; for example, when the upper layer plays the video there is a gap between frames. Here sampleInternal is set to 0.001 seconds.
    [NSThread sleepForTimeInterval:sampleInternal];
}

7. Tell the upper layer, via the delegate, that decoding has finished

// Notify the upper layer that decoding has finished
[self.delegate mMoveDecoderOnDecoderFinished:self];
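
Putting steps 1 to 7 together, a minimal sketch of the whole decoding method could look like the following. It assumes sampleInternal is an ivar (for example 0.001 s) and that the delegate consumes each buffer synchronously; the early-return error handling and the CFRelease call are my additions, not part of the original code.

// A sketch assembling steps 1-7; requires <AVFoundation/AVFoundation.h>.
- (void)transformViedoPathToSampBufferRef:(NSString *)videoPath
{
    // 1. Obtain the media resource
    NSURL *fileUrl = [NSURL fileURLWithPath:videoPath];
    AVURLAsset *asset = [[AVURLAsset alloc] initWithURL:fileUrl options:nil];

    // 2. Create the reader
    NSError *error = nil;
    AVAssetReader *reader = [[AVAssetReader alloc] initWithAsset:asset error:&error];
    if (!reader) { return; }

    // 3. Grab the first video track
    NSArray *videoTracks = [asset tracksWithMediaType:AVMediaTypeVideo];
    if (videoTracks.count == 0) { return; }
    AVAssetTrack *videoTrack = [videoTracks objectAtIndex:0];

    // 4. Configure the track output with the pixel format we want to read
    NSMutableDictionary *options = [NSMutableDictionary dictionary];
    [options setObject:@(kCVPixelFormatType_32BGRA)
                forKey:(__bridge NSString *)kCVPixelBufferPixelFormatTypeKey];
    AVAssetReaderTrackOutput *videoReaderOutput =
        [[AVAssetReaderTrackOutput alloc] initWithTrack:videoTrack outputSettings:options];

    // 5. Attach the output and start reading
    [reader addOutput:videoReaderOutput];
    [reader startReading];

    // 6. Pull frames one by one and hand them to the delegate
    while ([reader status] == AVAssetReaderStatusReading && videoTrack.nominalFrameRate > 0) {
        CMSampleBufferRef videoBuffer = [videoReaderOutput copyNextSampleBuffer];
        if (!videoBuffer) { break; }
        [self.delegate mMoveDecoder:self onNewVideoFrameReady:videoBuffer];
        // The delegate uses the buffer synchronously, so release it here (my addition)
        CFRelease(videoBuffer);
        [NSThread sleepForTimeInterval:sampleInternal];
    }

    // 7. Notify the upper layer that decoding has finished
    [self.delegate mMoveDecoderOnDecoderFinished:self];
}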

So far we can get a CMSampleBufferRef for every frame of the video, but we still need to convert it into something useful to us, such as an image.

// AVFoundation captures video frames, often requiring a frame to be converted into an image
+ (CGImageRef)imageFromSampleBufferRef:(CMSampleBufferRef)sampleBufferRef
{
  // Get the CVImageBufferRef (pixel buffer) holding the media data from the CMSampleBufferRef
  CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBufferRef);
  // Lock the base address of pixel buffer
  CVPixelBufferLockBaseAddress(imageBuffer, 0);
  // Get the base address of pixel buffer
  void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
  // Get the number of line bytes of pixel buffer
  size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
  // Get the width and height of pixel buffer
  size_t width = CVPixelBufferGetWidth(imageBuffer);
  size_t height = CVPixelBufferGetHeight(imageBuffer);

  // Create a device-dependent RGB color space
  CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

  // Create a graphic context object in bitmap format with sampled cached data
  CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
  //Create a Quartz image object based on the pixels in the bitmap context
  CGImageRef quartzImage = CGBitmapContextCreateImage(context);
  // Unlock pixel buffer
  CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

  // Release context and color space
  CGContextRelease(context);
  CGColorSpaceRelease(colorSpace);
  // To create a UIImage, you would wrap the Quartz image:
  // UIImage *image = [UIImage imageWithCGImage:quartzImage];

  // The returned CGImageRef has a +1 retain count; the caller is responsible for releasing it
  //    CGImageRelease(quartzImage);

  return quartzImage;
}

As you can see from the above, the most direct and convenient thing to hand back would be a UIImage, so why do I return a CGImageRef instead? Because creating a CGImageRef does not copy the image data in memory; the pixel data is only copied into the layer's backing store when Core Animation's CA::Transaction::commit() triggers the layer's display. Put simply, it won't consume too much memory!
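
As an illustration of that point (this snippet is mine, not from the original code): a bridged CGImageRef can be assigned straight to a layer's contents, and Core Animation copies the pixel data into its own backing store only when the transaction commits.

// Illustration only: assign a decoded frame directly to the layer contents.
CGImageRef frame = [UIImage imageFromSampleBufferRef:videoBuffer];
if (frame) {
    self.preView.layer.contents = (__bridge id)frame;   // data is copied at CA commit time
    CGImageRelease(frame);                               // the layer retains the image it was given
}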

Next we need to assemble all the CGImageRefs we obtained back into a video. Before that, of course, all the CGImageRefs must be stored as objects in an array. Since CGImageRef is a C type, we need bridging to treat it as an object we can put into the array.

// In the frame-ready delegate callback: convert each frame and collect it
CGImageRef cgimage = [UIImage imageFromSampleBufferRef:videoBuffer];
if (!cgimage) { return; }
// The array retains the bridged image, so we can balance the +1 from the conversion
[images addObject:(__bridge id)(cgimage)];
CGImageRelease(cgimage);

- (void)mMoveDecoderOnDecoderFinished:(TransformVideo *)transformVideo
{
  NSLog(@"Video Unarchiving Completion");
  // Access to media resources
  AVURLAsset *asset = [[AVURLAsset alloc] initWithURL:[NSURL fileURLWithPath:filePath] options:nil];
  // Play our pictures through animation
  CAKeyframeAnimation *animation = [CAKeyframeAnimation animationWithKeyPath:@"contents"];
  // CMTimeGetSeconds(asset.duration) gives the real duration of the video (avoids integer truncation)
  animation.duration = CMTimeGetSeconds(asset.duration);
  animation.values = images;
  animation.repeatCount = MAXFLOAT;
  [self.preView.layer addAnimation:animation forKey:nil];
  // Release the frames promptly; the animation keeps its own copy of the values array
  [images removeAllObjects];
}

@end
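
Finally, a rough usage sketch of how the upper layer might drive this decoder; the playSmallVideoAtPath: method name, the images and decoder properties, and the background queue are my own assumptions (the decode loop sleeps between frames, so it should not run on the main thread).

// Usage sketch (hypothetical names): start decoding off the main thread;
// the delegate callbacks above collect the frames and start the animation.
- (void)playSmallVideoAtPath:(NSString *)path
{
    self.images = [NSMutableArray array];          // assumed property backing `images`

    ABSMovieDecoder *decoder = [[ABSMovieDecoder alloc] init];
    decoder.delegate = self;
    self.decoder = decoder;                        // keep a strong reference while decoding

    dispatch_async(dispatch_get_global_queue(QOS_CLASS_USER_INITIATED, 0), ^{
        [decoder transformViedoPathToSampBufferRef:path];
        // Note: delegate methods that touch self.preView.layer should hop back to the main queue.
    });
}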



Author: Star Spectrum
Link: http://www.jianshu.com/p/3d5ccbde0de1
Source: Jianshu (简书)
Copyright belongs to the author. For commercial reprints, please contact the author for authorization. For non-commercial reprints, please indicate the source.
