How GPUImage works

Keywords: Objective-C iOS OpenGL ES

This article explains how GPUImage renders at the bottom layer: GPUImage uses OpenGL ES to drive the GPU and draw to the screen.

Because there is little hands-on OpenGL material online and the official documentation leaves some methods unclear, this article covers a good deal of OpenGL background and highlights the pitfalls that cost a lot of debugging time, so that readers can avoid stepping into them again.

Introduction to the working principle of GPUImage

  • The key to GPUImage is the GPUImageFramebuffer class, which holds the image data currently being processed.
  • GPUImage processes images through a chain of targets. After each target finishes processing a frame, it produces a GPUImageFramebuffer object and saves its result there.
  • For example, once targetA has finished, targetB takes targetA's framebuffer as its input and processes the image on top of targetA's result.
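The chain described above can be sketched in plain C. This is a simplified, illustrative model only, not GPUImage's actual classes (real GPUImage keeps the image data in GPU textures inside GPUImageFramebuffer): each "target" applies a function to the previous framebuffer and saves its result in its own framebuffer for the next target.

```c
#include <stdlib.h>

// Simplified model of GPUImage's target chain (illustrative only).
typedef struct {
    unsigned char *pixels;  // image data produced by a target
    int width, height;
} Framebuffer;

typedef struct Target {
    // filter applied to the incoming frame; writes its result into out
    void (*process)(const Framebuffer *in, Framebuffer *out);
    Framebuffer output;      // result saved after processing
    struct Target *next;     // next target in the chain
} Target;

// Push one frame through the chain: each target reads the previous
// target's framebuffer and saves its own result.
static void run_chain(Target *head, const Framebuffer *source) {
    const Framebuffer *in = source;
    for (Target *t = head; t != NULL; t = t->next) {
        t->process(in, &t->output);
        in = &t->output;     // targetB works on targetA's result
    }
}
```

The point of the model: no target ever touches the original frame after the first link; each one only sees its predecessor's saved framebuffer.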

GPUImageVideoCamera

  • GPUImageVideoCamera captures the camera's video frames

  • The key question is: how do we display the captured video data frame by frame?

  • The captured frames are delivered through this delegate method:

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
  • Capture caveat: set the capture orientation to portrait, otherwise the delivered frames are in landscape
  • It can be set through AVCaptureConnection:
[videoConnection setVideoOrientation:AVCaptureVideoOrientationPortraitUpsideDown];

Rendering and displaying frame data with OpenGL

  • Import the header with #import <GLKit/GLKit.h>; GLKit is built on top of OpenGL ES, so importing it automatically imports OpenGL ES as well
  • Steps
    • 01 - custom layer type
    • 02 - initialize CAEAGLLayer layer properties
    • 03 - create EAGLContext
    • 04 - create render buffer
    • 05 - create frame buffer
    • 06 - create shaders
    • 07 - create shader program
    • 08 - create texture object
    • 09 - YUV to RGB and draw the texture
    • 10 - render buffer to screen
    • 11 - clear memory

01 - Custom layer type

  • Why customize the layer type? CAEAGLLayer is the layer OpenGL ES renders into on iOS, and it must be used when rendering with OpenGL ES; the view overrides +layerClass to return [CAEAGLLayer class]

02 - initialize CAEAGLLayer layer properties

  • 1. Set opaque = YES. CALayer is transparent by default, and transparency performs poorly, so it is best to make the layer opaque

  • 2. Set the drawable properties

    • kEAGLDrawablePropertyRetainedBacking: @(NO) (tell Core Animation not to keep any previously drawn image for later reuse)
    • kEAGLDrawablePropertyColorFormat: kEAGLColorFormatRGBA8 (tell Core Animation to store each RGBA component with 8 bits)
  • In practice these are the default values, so it works either way

03 - create EAGLContext

  • It needs to be set as the current context; by default all OpenGL ES rendering goes to the current context
  • EAGLContext manages all the state, commands and resources used for drawing with OpenGL ES. To draw anything there must be a context, similar to a graphics context.
  • When creating an EAGLContext you must declare which API version to use; here we choose OpenGL ES 2.0

04 - create render buffer

  • With a context in place, OpenGL still needs a buffer to draw into; that is the RenderBuffer

  • OpenGL ES has three kinds of buffers: the color buffer, the depth buffer and the stencil buffer

  • The most basic one is the color buffer, which is the only one we need to create here

Function glGenRenderbuffers

  • Requests an id (name) for a renderbuffer, i.e. creates a render cache
  • Parameter n: the number of renderbuffers to generate
  • Parameter renderbuffers: returns the id(s) assigned to the renderbuffer(s)
    • Note: the returned id will never be 0; id 0 is reserved by OpenGL ES and cannot be used

Function glBindRenderbuffer

  • Tells OpenGL: whenever I refer to GL_RENDERBUFFER from now on, I actually mean _colorRenderBuffer
  • Parameter target: must be GL_RENDERBUFFER
  • Parameter renderbuffer: the id generated by glGenRenderbuffers
    • Note: the first time a renderbuffer id is bound as the current renderbuffer, the renderbuffer object is initialized with default values

Function renderbufferStorage

  • Binds the renderbuffer to the rendering layer and allocates shared storage for it.
  • Parameter target: must be GL_RENDERBUFFER; storage is allocated for the currently bound renderbuffer
  • Parameter drawable: the rendering layer to bind; the shared storage is allocated according to the layer's drawable properties.

Actual code:

#pragma mark - 4. Create rendering cache
- (void)setupRenderBuffer
{
    glGenRenderbuffers(1, &_colorRenderBuffer);
    glBindRenderbuffer(GL_RENDERBUFFER, _colorRenderBuffer);
    
    // Bind the renderbuffer to the CAEAGLLayer and allocate shared storage for it.
    // The renderbuffer's format, width and height are taken from the layer's drawable properties
    [_context renderbufferStorage:GL_RENDERBUFFER fromDrawable:_openGLLayer];

}

05 - create frame buffer

  • The framebuffer is essentially the manager of the buffers (color, depth, stencil); all three can be attached to a framebuffer

  • Ultimately, the framebuffer's content is what gets rendered to the screen

Function glFramebufferRenderbuffer

  • Attaches a buffer (one of the three kinds) to the framebuffer; the renderbuffer's content is then automatically filled into the framebuffer and rendered to the screen from there
  • Parameter target: which framebuffer, GL_FRAMEBUFFER
  • Parameter attachment: the attachment point the renderbuffer attaches to; one of GL_COLOR_ATTACHMENT0, GL_DEPTH_ATTACHMENT or GL_STENCIL_ATTACHMENT, corresponding to the color, depth and stencil buffers respectively
  • Parameter renderbuffertarget: must be GL_RENDERBUFFER
  • Parameter renderbuffer: the renderbuffer id
#pragma mark - 5. Create frame buffer
- (void)setupFrameBuffer
{
    glGenFramebuffers(1, &_framebuffers);
    glBindFramebuffer(GL_FRAMEBUFFER, _framebuffers);
    // Attach the color renderbuffer to the framebuffer's GL_COLOR_ATTACHMENT0; the renderbuffer's content will be automatically filled into the framebuffer and rendered to the screen
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, _colorRenderBuffer);
}

06 - create shaders

Shaders

  • What is a shader? A small program that processes texture objects; the processed result is rendered into the framebuffer and displayed on the screen.
  • Shaders handle operations such as vertex coordinate-space transforms and texture color adjustments (filter effects).
  • Shaders come in two kinds: vertex shaders and fragment shaders
    • The vertex shader determines the shape's geometry
    • The fragment shader determines the rendered color
  • Steps: 1. write the shader source 2. create the shader 3. compile the shader
  • This only needs to happen once, at setup time
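As a sketch of step 1 (writing the shader source), here is what the two shaders for this YUV pipeline might look like, stored as C string constants. This is an illustrative example, not GPUImage's exact source; the attribute and uniform names (position, inputTextureCoordinate, luminanceTexture, chrominanceTexture, colorConversionMatrix) match the ones bound and fetched in step 07 below.

```c
// Illustrative GLSL sources for the YUV->RGB pipeline (a sketch;
// the attribute/uniform names match those bound in step 07).
static const char *kVertexShaderSource =
    "attribute vec4 position;\n"
    "attribute vec4 inputTextureCoordinate;\n"
    "varying vec2 textureCoordinate;\n"
    "void main() {\n"
    "    gl_Position = position;\n"
    "    textureCoordinate = inputTextureCoordinate.xy;\n"
    "}\n";

static const char *kFragmentShaderSource =
    "varying highp vec2 textureCoordinate;\n"
    "uniform sampler2D luminanceTexture;\n"    // Y plane, texture unit 0
    "uniform sampler2D chrominanceTexture;\n"  // UV plane, texture unit 1
    "uniform mediump mat3 colorConversionMatrix;\n"
    "void main() {\n"
    "    mediump vec3 yuv;\n"
    "    yuv.x  = texture2D(luminanceTexture, textureCoordinate).r;\n"
    "    yuv.yz = texture2D(chrominanceTexture, textureCoordinate).ra - vec2(0.5, 0.5);\n"
    "    gl_FragColor = vec4(colorConversionMatrix * yuv, 1.0);\n"
    "}\n";
```

The `.ra` swizzle works because the chrominance texture is created with GL_LUMINANCE_ALPHA (see step 08), which puts U in the luminance channel and V in the alpha channel.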

07 - create shader program

  • Steps: 1. create the program 2. attach the vertex and fragment shaders 3. bind the attributes 4. link the program 5. fetch the uniform locations 6. use the program
  • Note: steps 3 and 5 must be ordered around the link: bind attributes before linking and fetch uniforms after, otherwise the lookups fail and the screen stays black
#pragma mark - 7. Create shader program
- (void)setupProgram
{
    // Create shader program
    _program = glCreateProgram();
    
    // Attach shaders
    // Attach the vertex shader
    glAttachShader(_program, _vertShader);
    
    // Attach the fragment shader
    glAttachShader(_program, _fragShader);
    
    // Bind the shader attributes so they can be fetched by index later
    // Attributes must be bound BEFORE linking, or the lookup fails
    glBindAttribLocation(_program, ATTRIB_POSITION, "position");
    glBindAttribLocation(_program, ATTRIB_TEXCOORD, "inputTextureCoordinate");
    
    // Link the program
    glLinkProgram(_program);
    
    // Fetch the uniform locations. This must happen AFTER linking, otherwise the lookups fail
    _luminanceTextureAtt = glGetUniformLocation(_program, "luminanceTexture");
    _chrominanceTextureAtt = glGetUniformLocation(_program, "chrominanceTexture");
    _colorConversionMatrixAtt = glGetUniformLocation(_program, "colorConversionMatrix");
    
    // Use the program
    glUseProgram(_program);
}

08 - create texture object

texture

  • The captured frames arrive one by one. Each image can be converted into an OpenGL texture and then drawn into the OpenGL context

  • What is a texture? A texture is essentially an image.

  • With texture mapping, the whole image or part of it can be pasted onto the shape previously outlined with vertices

  • For example, to draw a brick wall, an image or photo of a real brick wall can be pasted as a texture onto a rectangle, producing a realistic wall. Without texture mapping, every brick on the wall would have to be drawn as a separate polygon. Texture mapping also keeps the texture pattern attached to the polygon as the polygon is transformed.

  • Texture mapping is a fairly involved process. The basic steps are:

    • 1) activate a texture unit, 2) create the texture, 3) bind the texture, 4) set filtering
  • Note: texture mapping can only be performed in RGBA mode

Function glTexParameter

  • Controls filtering: removing unneeded information while keeping the useful part
  • Texture images are generally square or rectangular, but once mapped onto a polygon or surface and transformed to screen coordinates, a single texel rarely corresponds to exactly one screen pixel. Depending on the transform and mapping used, one screen pixel can correspond to a fraction of a texel (magnification) or to many texels (minification)
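The two common filter modes glTexParameter selects can be modeled in plain C. This is an illustrative 1D sketch (GL does the same in 2D, and these helper names are ours, not OpenGL's): GL_NEAREST picks the closest texel, GL_LINEAR blends the two neighbours.

```c
// Minimal 1D model of the two filter modes glTexParameter selects.
// GL_NEAREST: pick the closest texel.
static float sample_nearest(const float *texels, int n, float coord) {
    int i = (int)(coord * n);            // coord in [0,1)
    if (i > n - 1) i = n - 1;
    return texels[i];
}

// GL_LINEAR: blend the two nearest texels.
static float sample_linear(const float *texels, int n, float coord) {
    float x = coord * n - 0.5f;          // texel centres sit at i + 0.5
    if (x < 0) x = 0;
    if (x > n - 1) x = (float)(n - 1);
    int i0 = (int)x;
    int i1 = i0 + 1 < n ? i0 + 1 : i0;
    float t = x - i0;
    return texels[i0] * (1 - t) + texels[i1] * t;
}
```

Magnification shows the difference clearly: nearest sampling produces blocky steps between texels, while linear sampling produces a smooth gradient between them.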

Function glPixelStorei

  • Sets the pixel storage mode
  • pname: the name of the pixel storage parameter
  • GL_PACK_ALIGNMENT controls how pixel rows are packed when reading pixels back from OpenGL
  • GL_UNPACK_ALIGNMENT controls how pixel rows are unpacked from client memory, e.g. when creating texture objects
  • param: the byte alignment of each pixel row in memory; must be 1, 2, 4 or 8.
    Usually 1 is used, so rows are tightly packed with no padding
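The effect of the alignment can be computed directly: each pixel row in client memory is padded up to a multiple of GL_UNPACK_ALIGNMENT. A plain-C sketch (the helper name is ours, for illustration):

```c
#include <stddef.h>

// Bytes occupied by one pixel row in client memory, given the
// GL_UNPACK_ALIGNMENT value (1, 2, 4 or 8): the raw row size
// rounded up to the next multiple of the alignment.
static size_t row_stride(size_t width, size_t bytes_per_pixel, size_t alignment) {
    size_t raw = width * bytes_per_pixel;
    return (raw + alignment - 1) / alignment * alignment;
}
```

For example, a 3-pixel-wide GL_LUMINANCE row (3 bytes) occupies 4 bytes under the default alignment of 4, but exactly 3 bytes under alignment 1, which is why alignment 1 is the safe choice for tightly packed camera data.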

Function CVOpenGLESTextureCacheCreateTextureFromImage

  • Creates a texture from an image

  • Parameter allocator: kCFAllocatorDefault, the default memory allocator

  • Parameter textureCache: the texture cache

  • Parameter sourceImage: the image (pixel buffer)

  • Parameter textureAttributes: NULL

  • Parameter target: GL_TEXTURE_2D (create a 2D texture object)

  • Parameter internalFormat: GL_LUMINANCE, luminance format

  • Parameter width: the image width

  • Parameter height: the image height

  • Parameter format: GL_LUMINANCE, luminance format

  • Parameter type: the pixel type, GL_UNSIGNED_BYTE

  • Parameter planeIndex: which plane of the pixel buffer to use; 0 is the luminance (Y) plane

  • Parameter textureOut: the output texture object

Example code:

#pragma mark - 8. Create texture objects and render the captured frames to the screen
- (void)setupTexture:(CMSampleBufferRef)sampleBuffer
{
    // Get the pixel buffer containing this frame's image data
    CVImageBufferRef imageBufferRef = CMSampleBufferGetImageBuffer(sampleBuffer);
    
    // Get the frame's width and height
    GLsizei bufferWidth = (GLsizei)CVPixelBufferGetWidth(imageBufferRef);
    _bufferWidth = bufferWidth;
    GLsizei bufferHeight = (GLsizei)CVPixelBufferGetHeight(imageBufferRef);
    _bufferHeight = bufferHeight;
    
    // Create the luminance texture
    // Activate texture unit 0 first; otherwise texture creation fails
    glActiveTexture(GL_TEXTURE0);
    
    // Create texture object
    CVReturn err;
    err = CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, _textureCacheRef, imageBufferRef, NULL, GL_TEXTURE_2D, GL_LUMINANCE, bufferWidth, bufferHeight, GL_LUMINANCE, GL_UNSIGNED_BYTE, 0, &_luminanceTextureRef);
    if (err) {
        NSLog(@"Error at CVOpenGLESTextureCacheCreateTextureFromImage %d", err);
    }
    // Get texture object
    _luminanceTexture = CVOpenGLESTextureGetName(_luminanceTextureRef);
    
    // Bind texture
    glBindTexture(GL_TEXTURE_2D, _luminanceTexture);
    
    // Set texture wrapping to clamp to edge
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    
    // Activate texture unit 1
    glActiveTexture(GL_TEXTURE1);
    
    // Create the chrominance texture
    err = CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, _textureCacheRef, imageBufferRef, NULL, GL_TEXTURE_2D, GL_LUMINANCE_ALPHA, bufferWidth / 2, bufferHeight / 2, GL_LUMINANCE_ALPHA, GL_UNSIGNED_BYTE, 1, &_chrominanceTextureRef);
    if (err) {
        NSLog(@"Error at CVOpenGLESTextureCacheCreateTextureFromImage %d", err);
    }
    // Get texture object
    _chrominanceTexture = CVOpenGLESTextureGetName(_chrominanceTextureRef);
    
    // Bind texture
    glBindTexture(GL_TEXTURE_2D, _chrominanceTexture);
    
    // Set texture wrapping to clamp to edge
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
}

09 - YUV to RGB and draw the texture

  • Texture mapping can only be performed in RGBA mode
  • The camera delivers YUV, so the YUV data must be converted to RGBA
  • In essence this is just a matrix transform on each pixel
  • Note: the draw calls must live in the same code block as the shader-assignment code; otherwise the drawing information cannot be found, nothing is drawn, and the screen stays black
  • Earlier, when glDrawArrays was separated from the YUV-to-RGB code, the screen was always black
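The "matrix transform" is a 3×3 multiply per pixel. Below is a plain-C sketch using the BT.601 video-range coefficients, one common choice; the actual matrix the shader receives (via colorConversionMatrix) depends on the pixel buffer's color attachment, and the helper name is ours for illustration.

```c
// Convert one video-range (Y in 16..235) BT.601 YUV sample to RGB
// in [0,1]. This mirrors what the fragment shader does with the
// colorConversionMatrix uniform.
static void yuv_to_rgb(unsigned char y, unsigned char u, unsigned char v,
                       float *r, float *g, float *b) {
    float yf = (y - 16)  / 255.0f;   // remove the video-range offset
    float uf = (u - 128) / 255.0f;   // chroma is centred at 128
    float vf = (v - 128) / 255.0f;
    // BT.601 video-range conversion matrix
    *r = 1.164f * yf + 0.000f * uf + 1.596f * vf;
    *g = 1.164f * yf - 0.392f * uf - 0.813f * vf;
    *b = 1.164f * yf + 2.017f * uf + 0.000f * vf;
}
```

With neutral chroma (U = V = 128) the result is a pure grey whose level depends only on Y, which is a quick sanity check for the coefficients.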

Function glUniform1i

  • Specifies which texture unit a sampler uniform in the shader reads from
  • Parameter location: the uniform's location in the shader (from glGetUniformLocation)
  • Parameter x: the texture unit index

Function glEnableVertexAttribArray

  • Enables a vertex attribute array; vertex attribute data can only be assigned after the attribute is enabled

Function glVertexAttribPointer

  • Describes the layout of a vertex shader attribute
  • Parameter indx: the attribute index, i.e. which attribute this description applies to
  • Parameter size: how many components the attribute has per vertex; must be 1, 2, 3 or 4
  • Parameter type: the data type of the components
  • Parameter normalized: GL_FALSE means do not normalize the values
  • Parameter stride: the byte offset between consecutive vertices in the array (0 means tightly packed)
  • Parameter ptr: the address of the first element of the array

Function glBindAttribLocation

  • Binds an index to a named attribute, so the attribute can be referenced by that index later
  • Parameter program
  • Parameter index attribute ID
  • Parameter name attribute name

Function glDrawArrays

  • Effect: draws a basic primitive using the currently enabled vertex attribute data and the active shader program
  • mode: the drawing mode; GL_TRIANGLE_STRIP (triangle strip) is commonly used
  • first: the index of the first vertex to draw, usually 0
  • count: how many vertices to draw; here 4, the number of vertices in the quad's vertex array
  • Note: the draw call must stay in the same code block as the shader-assignment code, otherwise the drawing information cannot be found, nothing is drawn, and the screen stays black.
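Why is count 4 enough for a full-screen quad? GL_TRIANGLE_STRIP emits one triangle for every vertex after the first two, reusing the previous two vertices. A plain-C sketch of the vertex reuse (illustrative helpers, not part of OpenGL; GL additionally alternates winding on odd triangles, which is omitted here):

```c
// A strip of n vertices yields n - 2 triangles.
static int strip_triangle_count(int vertex_count) {
    return vertex_count >= 3 ? vertex_count - 2 : 0;
}

// Triangle i (0-based) of a strip uses vertices (i, i+1, i+2);
// with 4 vertices a quad costs only two triangles: (0,1,2) and (1,2,3).
static void strip_triangle(int i, int idx[3]) {
    idx[0] = i;
    idx[1] = i + 1;
    idx[2] = i + 2;
}
```

This is why quadVertexData below lists the corners in a zigzag order (bottom-left, bottom-right, top-left, top-right): consecutive triples then tile the whole quad.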

Example code:

// Convert YUV to RGB; the vertex/texture setup and the draw call must stay together in this method
- (void)convertYUVToRGBOutput
{
    // The texture units (GL_TEXTURE0, GL_TEXTURE1) were already activated when the textures were created
    // Specify which texture unit the luminance sampler in the shader reads from
    // This wires the luminance texture into the shader
    glUniform1i(_luminanceTextureAtt, 0);
    
    // Specifies which layer of texture cell the chroma texture in the shader corresponds to
    glUniform1i(_chrominanceTextureAtt, 1);
    
    // YUV to RGB matrix
    glUniformMatrix3fv(_colorConversionMatrixAtt, 1, GL_FALSE, _preferredConversion);

    // Compute vertex data structure
    CGRect vertexSamplingRect = AVMakeRectWithAspectRatioInsideRect(CGSizeMake(self.bounds.size.width, self.bounds.size.height), self.layer.bounds);
    
    CGSize normalizedSamplingSize = CGSizeMake(0.0, 0.0);
    CGSize cropScaleAmount = CGSizeMake(vertexSamplingRect.size.width/self.layer.bounds.size.width, vertexSamplingRect.size.height/self.layer.bounds.size.height);
    
    if (cropScaleAmount.width > cropScaleAmount.height) {
        normalizedSamplingSize.width = 1.0;
        normalizedSamplingSize.height = cropScaleAmount.height/cropScaleAmount.width;
    }
    else {
        normalizedSamplingSize.width = cropScaleAmount.width/cropScaleAmount.height;
        normalizedSamplingSize.height = 1.0;
    }
    
    // Determine vertex data structure
    GLfloat quadVertexData [] = {
        -1 * normalizedSamplingSize.width, -1 * normalizedSamplingSize.height,
        normalizedSamplingSize.width, -1 * normalizedSamplingSize.height,
        -1 * normalizedSamplingSize.width, normalizedSamplingSize.height,
        normalizedSamplingSize.width, normalizedSamplingSize.height,
    };
    
    // Determine texture data structure
    GLfloat quadTextureData[] =  { // Normal coordinates
        0, 0,
        1, 0,
        0, 1,
        1, 1
    };
    
    // Enable the ATTRIB_POSITION vertex attribute array
    glEnableVertexAttribArray(ATTRIB_POSITION);
    // Assign the vertex data to ATTRIB_POSITION
    glVertexAttribPointer(ATTRIB_POSITION, 2, GL_FLOAT, GL_FALSE, 0, quadVertexData);
    
    // Enable the ATTRIB_TEXCOORD vertex attribute array
    glEnableVertexAttribArray(ATTRIB_TEXCOORD);
    // Assign the texture coordinates to ATTRIB_TEXCOORD
    glVertexAttribPointer(ATTRIB_TEXCOORD, 2, GL_FLOAT, GL_FALSE, 0, quadTextureData);
    
    // Draw; this call must stay in the same pass as the texture and attribute setup above
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
}

10 - render buffer to screen

  • Note: the viewport size must be set with glViewport
  • Note: the rendering code must call [EAGLContext setCurrentContext:_context]
  • Reason: in a multithreaded app each thread has its own current context; drawing must happen with our own context current, otherwise the screen stays black
  • Note: before creating the textures for each frame, release the previous ones with [self cleanUpTextures], otherwise the interface stutters

Function glViewport

  • Sets the size of the OpenGL rendering viewport, generally the same as the layer size
  • Note: this is an important step before drawing; OpenGL must be told the size of the rendering window

Method presentRenderbuffer

Renders the specified renderbuffer to the screen

#pragma mark - 10. Render frame cache
- (void)displayFramebuffer:(CMSampleBufferRef)sampleBuffer
{
    // Each thread has its own current context; make our context current before drawing, otherwise the screen stays black
    if ([EAGLContext currentContext] != _context) {
        [EAGLContext setCurrentContext:_context];
    }
    
    // Release the previous textures; creating new textures every frame without releasing the old ones wastes resources and makes the interface stutter
    [self cleanUpTextures];
    
    // Create texture object
    [self setupTexture:sampleBuffer];
    
    // YUV to RGB
    [self convertYUVToRGBOutput];
    
    // Set window size
    glViewport(0, 0, self.bounds.size.width, self.bounds.size.height);
    
    // Present the renderbuffer on the screen
    [_context presentRenderbuffer:GL_RENDERBUFFER];
    
}

11 - clear memory

  • Note: Core Foundation / Core Video objects whose type names end in Ref are not managed by ARC and must be released manually

Function glClearColor

Sets the clear color (RGBA); the buffers cleared by glClear are then filled with this color

Function glClear

  • Clears the buffers specified by the mask using the clear color. The mask can be any combination of GL_COLOR_BUFFER_BIT, GL_DEPTH_BUFFER_BIT and GL_STENCIL_BUFFER_BIT.
  • Here only the color buffer is used, so only the color buffer is cleared.
#pragma mark - 11. Clear memory
- (void)dealloc
{
    // Destroy the render and frame buffers
    [self destroyRenderAndFrameBuffer];
    
    // Release the textures
    [self cleanUpTextures];
}

#pragma mark - destroy the render and frame buffers
- (void)destroyRenderAndFrameBuffer
{
    glDeleteRenderbuffers(1, &_colorRenderBuffer);
    _colorRenderBuffer = 0;
    
    glDeleteFramebuffers(1, &_framebuffers);
    _framebuffers = 0;
}

// Empty texture
- (void)cleanUpTextures
{
    // Clear brightness reference
    if (_luminanceTextureRef) {
        CFRelease(_luminanceTextureRef);
        _luminanceTextureRef = NULL;
    }
    
    // Clear chroma reference
    if (_chrominanceTextureRef) {
        CFRelease(_chrominanceTextureRef);
        _chrominanceTextureRef = NULL;
    }
    
    // Empty texture cache
    CVOpenGLESTextureCacheFlush(_textureCacheRef, 0);
}

Posted by phoolster on Sat, 02 Oct 2021 15:57:17 -0700