How to develop a complete set of live-streaming software source code: what needs to be prepared in the early stages?

Keywords: iOS Session SDK

To develop complete live-broadcasting software, we first need to capture the host's video and audio and then push the stream to a streaming media server. This article mainly explains how to capture the host's video and audio; the current code supports switching between the front and rear cameras and a tap-to-focus cursor. A live APP usually also integrates an independent beauty SDK, so you can present different looks. Articles covering other live-streaming features will be published later.
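
One early-stage preparation the article assumes but does not show: on current iOS versions the app must declare NSCameraUsageDescription and NSMicrophoneUsageDescription in Info.plist and request capture permission at runtime, otherwise the session delivers no data. Below is a minimal sketch of that request; the method name requestCaptureAuthorization is our own, while the AVCaptureDevice calls are the standard AVFoundation API.

// Request camera and microphone permission before building the session
// (a minimal sketch; remember to add NSCameraUsageDescription and
// NSMicrophoneUsageDescription to Info.plist)
- (void)requestCaptureAuthorization
{
    [AVCaptureDevice requestAccessForMediaType:AVMediaTypeVideo completionHandler:^(BOOL granted) {
        if (!granted) {
            NSLog(@"Camera access denied");
        }
    }];
    [AVCaptureDevice requestAccessForMediaType:AVMediaTypeAudio completionHandler:^(BOOL granted) {
        if (!granted) {
            NSLog(@"Microphone access denied");
        }
    }];
}
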
First, here are the steps for capturing audio and video in the live-software source code; the full implementation follows the list.

1. Create an AVCaptureSession object.
2. Get the AVCaptureDevice for video (the camera) and for audio (the microphone). Note that an AVCaptureDevice does not supply data itself; it is only used to configure the hardware device.
3. From each audio/video hardware device (AVCaptureDevice), create the corresponding input object (AVCaptureDeviceInput), which manages data input.
4. Create the video data output object (AVCaptureVideoDataOutput) and set its sample buffer delegate (setSampleBufferDelegate), through which the captured video data is delivered.
5. Create the audio data output object (AVCaptureAudioDataOutput) and set its sample buffer delegate (setSampleBufferDelegate), through which the captured audio data is delivered.
6. Add the input objects (AVCaptureDeviceInput) and the output objects (AVCaptureOutput) to the media session (AVCaptureSession); the session automatically connects the audio input to the audio output and the video input to the video output.
7. Create a video preview layer (AVCaptureVideoPreviewLayer), assign the session to it, and add the layer to the display container layer.
8. Start the AVCaptureSession; data flows from the inputs to the outputs only while the session is running.
// Capture audio and video
- (void)setupCaptureVideo
{
    // 1. Create the capture session; keep a strong reference or it will be released
    AVCaptureSession *captureSession = [[AVCaptureSession alloc] init];
    _captureSession = captureSession;
    // 2. Get the camera device (the front camera in this example)
    AVCaptureDevice *videoDevice = [self getVideoDevice:AVCaptureDevicePositionFront];
    // 3. Get the audio device (the microphone)
    AVCaptureDevice *audioDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeAudio];
    // 4. Create the corresponding video device input object
    AVCaptureDeviceInput *videoDeviceInput = [AVCaptureDeviceInput deviceInputWithDevice:videoDevice error:nil];
    _currentVideoDeviceInput = videoDeviceInput;
    // 5. Create the corresponding audio device input object
    AVCaptureDeviceInput *audioDeviceInput = [AVCaptureDeviceInput deviceInputWithDevice:audioDevice error:nil];
    // 6. Add the inputs to the session
    // Note: always check whether an input can be added, and the session must not be nil
    // 6.1 Add Video
    if ([captureSession canAddInput:videoDeviceInput]) {
        [captureSession addInput:videoDeviceInput];
    }
    // 6.2 Add Audio
    if ([captureSession canAddInput:audioDeviceInput]) {
        [captureSession addInput:audioDeviceInput];
    }
    // 7. Create the video data output object
    AVCaptureVideoDataOutput *videoOutput = [[AVCaptureVideoDataOutput alloc] init];
    // 7.1 Set the sample buffer delegate to capture video data
    // Note: the queue must be a serial queue so frames are delivered in order, and it must not be NULL
    dispatch_queue_t videoQueue = dispatch_queue_create("Video Capture Queue", DISPATCH_QUEUE_SERIAL);
    [videoOutput setSampleBufferDelegate:self queue:videoQueue];
    if ([captureSession canAddOutput:videoOutput]) {
        [captureSession addOutput:videoOutput];
    }
    // 8. Create the audio data output object
    AVCaptureAudioDataOutput *audioOutput = [[AVCaptureAudioDataOutput alloc] init];
    // 8.1 Set the sample buffer delegate to capture audio data
    // Note: the queue must be a serial queue so samples are delivered in order, and it must not be NULL
    dispatch_queue_t audioQueue = dispatch_queue_create("Audio Capture Queue", DISPATCH_QUEUE_SERIAL);
    [audioOutput setSampleBufferDelegate:self queue:audioQueue];
    if ([captureSession canAddOutput:audioOutput]) {
        [captureSession addOutput:audioOutput];
    }
    // 9. Get the video connection, used later to tell video data from audio data
    _videoConnection = [videoOutput connectionWithMediaType:AVMediaTypeVideo];
    // 10. Add the video preview layer
    AVCaptureVideoPreviewLayer *previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:captureSession];
    previewLayer.frame = [UIScreen mainScreen].bounds;
    [self.view.layer insertSublayer:previewLayer atIndex:0];
    _previewLayer = previewLayer;
    // 11. Start the session
    [captureSession startRunning];
}
// Get the camera device for the specified position
- (AVCaptureDevice *)getVideoDevice:(AVCaptureDevicePosition)position
{
    NSArray *devices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
    for (AVCaptureDevice *device in devices) {
        if (device.position == position) {
            return device;
        }
    }
    return nil;
}
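
A side note: devicesWithMediaType: has been deprecated since iOS 10. On newer systems the same lookup can be written with AVCaptureDeviceDiscoverySession; the variant below is a sketch of that approach rather than part of the original source.

// iOS 10+ variant of the camera lookup using AVCaptureDeviceDiscoverySession
- (AVCaptureDevice *)getVideoDeviceModern:(AVCaptureDevicePosition)position
{
    AVCaptureDeviceDiscoverySession *discovery =
        [AVCaptureDeviceDiscoverySession discoverySessionWithDeviceTypes:@[AVCaptureDeviceTypeBuiltInWideAngleCamera]
                                                                mediaType:AVMediaTypeVideo
                                                                 position:position];
    return discovery.devices.firstObject;
}
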
#pragma mark - AVCaptureVideoDataOutputSampleBufferDelegate
// Receives the captured data, which may be audio or video
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    if (_videoConnection == connection) {
        NSLog(@"Acquisition of Video Data");
    } else {
        NSLog(@"Acquisition of audio data");
    }
}
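
In a real live-streaming pipeline this callback is where raw frames would be handed to an encoder before being pushed to the streaming server. As a sketch of that next step (not part of the original code), a video sample buffer can be unwrapped into a pixel buffer with Core Media:

// Sketch: unwrap a video sample buffer into a pixel buffer for encoding
// (CMSampleBufferGetImageBuffer comes from Core Media, which AVFoundation pulls in)
- (void)processVideoSampleBuffer:(CMSampleBufferRef)sampleBuffer
{
    CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    if (pixelBuffer == NULL) {
        return; // not a video frame
    }
    CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
    size_t width = CVPixelBufferGetWidth(pixelBuffer);
    size_t height = CVPixelBufferGetHeight(pixelBuffer);
    NSLog(@"Captured video frame: %zux%zu", width, height);
    CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
    // Hand the pixel buffer to a hardware encoder (e.g. VideoToolbox) here.
}

In the delegate above, this would be called in the video branch in place of the NSLog.
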

Secondly, here is an additional video-capture feature in the live-streaming source code: switching cameras.
Steps for switching the camera:
1. Get the current video device input object.
2. Determine whether the current camera is the front or the rear one.
3. Determine which direction to switch to.
4. Get the camera device for that direction.
5. Create the corresponding camera input object.
6. Remove the previous video input object from the session.
7. Add the new video input object to the session.

// Switching Camera
- (IBAction)toggleCapture:(id)sender {
    // Get the current device position
    AVCaptureDevicePosition curPosition = _currentVideoDeviceInput.device.position;
    // Get the position to switch to
    AVCaptureDevicePosition togglePosition = curPosition == AVCaptureDevicePositionFront ? AVCaptureDevicePositionBack : AVCaptureDevicePositionFront;
    // Get the camera device to switch to
    AVCaptureDevice *toggleDevice = [self getVideoDevice:togglePosition];
    // Create the input object for the new camera
    AVCaptureDeviceInput *toggleDeviceInput = [AVCaptureDeviceInput deviceInputWithDevice:toggleDevice error:nil];
    // Swap the inputs atomically inside a configuration block
    [_captureSession beginConfiguration];
    // Remove the previous camera input
    [_captureSession removeInput:_currentVideoDeviceInput];
    // Add the new camera input
    [_captureSession addInput:toggleDeviceInput];
    [_captureSession commitConfiguration];
    // Record the current camera input
    _currentVideoDeviceInput = toggleDeviceInput;
}

Video capture additional feature 2: the focus cursor. Steps (implementation below):
1. Listen for taps on the screen.
2. Get the tap location and convert it to a point in the camera's coordinate space; this conversion goes through the AVCaptureVideoPreviewLayer.
3. Set the position of the focus-cursor image and animate it.
4. Set the camera device's focus mode and exposure mode (note: lockForConfiguration must be called first, otherwise an error is thrown).

// Tap the screen to show the focus cursor
- (void)touchesBegan:(NSSet<UITouch *> *)touches withEvent:(UIEvent *)event
{
    // Get the click position
    UITouch *touch = [touches anyObject];
    CGPoint point = [touch locationInView:self.view];
    // Convert the tap position to a point in the camera's coordinate space
    CGPoint cameraPoint = [_previewLayer captureDevicePointOfInterestForPoint:point];
    // Setting Focus Cursor Position
    [self setFocusCursorWithPoint:point];
    // Set focus
    [self focusWithMode:AVCaptureFocusModeAutoFocus exposureMode:AVCaptureExposureModeAutoExpose atPoint:cameraPoint];
}
/**
 *  Setting Focus Cursor Position
 *
 *  @param point Cursor position
 */
- (void)setFocusCursorWithPoint:(CGPoint)point
{
    self.focusCursorImageView.center = point;
    self.focusCursorImageView.transform = CGAffineTransformMakeScale(1.5, 1.5);
    self.focusCursorImageView.alpha = 1.0;
    [UIView animateWithDuration:1.0 animations:^{
        self.focusCursorImageView.transform = CGAffineTransformIdentity;
    } completion:^(BOOL finished) {
        self.focusCursorImageView.alpha = 0;
    }];
}
/**
 *  Set focus
 */
- (void)focusWithMode:(AVCaptureFocusMode)focusMode exposureMode:(AVCaptureExposureMode)exposureMode atPoint:(CGPoint)point
{
    AVCaptureDevice *captureDevice = _currentVideoDeviceInput.device;
    // Lock the configuration; required before changing focus or exposure
    NSError *error = nil;
    if (![captureDevice lockForConfiguration:&error]) {
        NSLog(@"lockForConfiguration failed: %@", error);
        return;
    }
    // Set the focus point first, then the mode, so the new point takes effect
    if ([captureDevice isFocusPointOfInterestSupported]) {
        [captureDevice setFocusPointOfInterest:point];
    }
    if ([captureDevice isFocusModeSupported:focusMode]) {
        [captureDevice setFocusMode:focusMode];
    }
    // Same order for exposure: point of interest, then mode
    if ([captureDevice isExposurePointOfInterestSupported]) {
        [captureDevice setExposurePointOfInterest:point];
    }
    if ([captureDevice isExposureModeSupported:exposureMode]) {
        [captureDevice setExposureMode:exposureMode];
    }
    // Unlock the configuration
    [captureDevice unlockForConfiguration];
}
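
A refinement the article does not cover: after a one-shot tap-to-focus, the camera stays focused at that point even when the scene changes. A common pattern, sketched below under the assumption that the helper above keeps its signature, is to enable subject-area monitoring and fall back to continuous autofocus when the system posts AVCaptureDeviceSubjectAreaDidChangeNotification.

// Sketch: restore continuous focus/exposure when the scene changes.
// Call observeSubjectAreaChange once after the session is set up.
- (void)observeSubjectAreaChange
{
    AVCaptureDevice *device = _currentVideoDeviceInput.device;
    NSError *error = nil;
    if ([device lockForConfiguration:&error]) {
        device.subjectAreaChangeMonitoringEnabled = YES;
        [device unlockForConfiguration];
    }
    [[NSNotificationCenter defaultCenter] addObserver:self
                                             selector:@selector(subjectAreaDidChange:)
                                                 name:AVCaptureDeviceSubjectAreaDidChangeNotification
                                               object:device];
}

- (void)subjectAreaDidChange:(NSNotification *)notification
{
    // Fall back to continuous focus/exposure at the center of the frame
    [self focusWithMode:AVCaptureFocusModeContinuousAutoFocus
           exposureMode:AVCaptureExposureModeContinuousAutoExposure
                atPoint:CGPointMake(0.5, 0.5)];
}
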

Finally, here is the basic AVFoundation knowledge used in live-streaming source code.
AVFoundation: the framework required for capturing audio and video data.
AVCaptureDevice: a hardware device such as the microphone or a camera; through it you can configure the physical device's properties (camera focus, white balance, and so on).
AVCaptureDeviceInput: the hardware input object; create the corresponding AVCaptureDeviceInput from an AVCaptureDevice to manage that device's input data.
AVCaptureOutput: the hardware output object that receives the various types of output data; in practice you use its subclasses AVCaptureAudioDataOutput (audio data output) and AVCaptureVideoDataOutput (video data output).
AVCaptureConnection: when an input and an output are added to the AVCaptureSession, the session establishes a connection between them; the connection object can be obtained from the AVCaptureOutput (for example via connectionWithMediaType:, as in the code above).
AVCaptureVideoPreviewLayer: the camera preview layer, which lets the user watch the photo or video capture in real time. Creating it requires the corresponding AVCaptureSession object, because the session carries the video input data that the layer displays.
AVCaptureSession: coordinates the transfer of data between inputs and outputs.
System role: it lets the app operate the hardware devices.
Working principle: the live APP creates a capture session with the system, which effectively connects the APP to the hardware devices. We only need to add the hardware input and output objects to the session; the session automatically connects them, so the hardware inputs and outputs can exchange audio and video data.
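
To round out the session lifecycle described above (the article only starts the session), here is a minimal sketch of stopping capture when the view goes away; viewWillDisappear: is one reasonable place, though that is an assumption about the surrounding view controller.

// Stop the session when leaving the screen so the camera and microphone are released
- (void)viewWillDisappear:(BOOL)animated
{
    [super viewWillDisappear:animated];
    if (_captureSession.isRunning) {
        [_captureSession stopRunning];
    }
}
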
The above describes how live-streaming source code drives the system's capture hardware, and briefly introduces the important role AVFoundation plays in a live APP.
