Image decoding
The most common image compression formats are PNG and JPEG. In iOS development you rarely need the decoded data of an image, but if you want to operate on individual pixels, you need to know how to obtain their values.
A bitmap is an array of pixels. When we decode a PNG or JPEG we obtain a set of pixel data, and that pixel data makes up the decoded bitmap. Decoding can be thought of as decompression, so the bitmap takes noticeably more space than the compressed file. Its size is easy to calculate: the pixel area of the image multiplied by the number of bytes per pixel. For example, a 1000 × 1000 image decoded into 32-bit RGBA occupies 1000 × 1000 × 4 bytes, roughly 4 MB.
+ (nullable UIImage *)imageWithContentsOfFile:(NSString *)path;
imageWithContentsOfFile: is the usual way to load an image lazily: the UIImage is not decoded when it is created, only when it is drawn for the first time.
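If you want to control when decoding happens (for example on a background queue) instead of paying the cost during the first on-screen draw, a common trick is simply to draw the image once yourself. A minimal sketch, assuming path points at an image file and iOS 10+ for UIGraphicsImageRenderer:

UIImage *image = [UIImage imageWithContentsOfFile:path]; // not decoded yet
UIGraphicsImageRenderer *renderer = [[UIGraphicsImageRenderer alloc] initWithSize:image.size];
// Drawing forces the compressed data to be decoded into a bitmap
UIImage *decoded = [renderer imageWithActions:^(UIGraphicsImageRendererContext *ctx) {
    [image drawAtPoint:CGPointZero];
}];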
CGImage
CGImage represents a bitmap or an image mask, which means pixel values can be read through it. The following approach can be used to read the pixels of a CGImage directly.
#import &lt;ImageIO/ImageIO.h&gt;

NSData *imageData = [NSData dataWithContentsOfFile:imagePath];
CFDataRef dataRef = (__bridge CFDataRef)imageData;
// CGImageSource (ImageIO) decodes the first frame of the file into a CGImage
CGImageSourceRef source = CGImageSourceCreateWithData(dataRef, nil);
CGImageRef cgImage = CGImageSourceCreateImageAtIndex(source, 0, nil);
int width  = (int)CGImageGetWidth(cgImage);
int height = (int)CGImageGetHeight(cgImage);
size_t pixelCount = width * height;
// The data provider hands out a copy of the decoded pixel buffer
CGDataProviderRef provider = CGImageGetDataProvider(cgImage);
CFDataRef data = CGDataProviderCopyData(provider);
// ... read pixels from data here ...
// Clean up once finished
CFRelease(data);
CGImageRelease(cgImage);
CFRelease(source);
Reading pixels this way does not let you choose a pixel format; you get the pixels exactly as they are laid out in the image's original color space. Note that everything created here has to be released with the matching release call. The options passed when creating the CGImage from the image source can also control its caching behavior (for example kCGImageSourceShouldCache). Finally, the pixel data behind the CGImage's data provider can only be copied out into a CFDataRef; it cannot be read in place.
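As an illustration (this part is not in the original snippet), assuming it runs before the CFRelease calls above, a pixel at hypothetical coordinates (x, y) can be located with the layout information CGImage exposes:

const uint8_t *bytes   = CFDataGetBytePtr(data);
size_t srcBytesPerRow  = CGImageGetBytesPerRow(cgImage);
size_t bytesPerPixel   = CGImageGetBitsPerPixel(cgImage) / 8;
size_t x = 10, y = 20; // arbitrary example coordinates
// Address of pixel (x, y); the meaning of each byte depends on the image's native layout
const uint8_t *pixel = bytes + y * srcBytesPerRow + x * bytesPerPixel;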
CGBitmapContext
Taking the RGB color space as an example, a single pixel can still vary in several ways: where the alpha channel sits, whether it is premultiplied, the byte order and the pixel format. CGImage has accessors for all of this information, but some of them are only available from iOS 12 onwards.
typedef CF_ENUM(uint32_t, CGImageAlphaInfo) {
    kCGImageAlphaNone,               /* For example, RGB. */
    kCGImageAlphaPremultipliedLast,  /* For example, premultiplied RGBA */
    kCGImageAlphaPremultipliedFirst, /* For example, premultiplied ARGB */
    kCGImageAlphaLast,               /* For example, non-premultiplied RGBA */
    kCGImageAlphaFirst,              /* For example, non-premultiplied ARGB */
    kCGImageAlphaNoneSkipLast,       /* For example, RGBX. */
    kCGImageAlphaNoneSkipFirst,      /* For example, XRGB. */
    kCGImageAlphaOnly                /* No color data, alpha data only */
};

typedef CF_ENUM(uint32_t, CGImageByteOrderInfo) {
    kCGImageByteOrderMask     = 0x7000,
    kCGImageByteOrderDefault  = (0 << 12),
    kCGImageByteOrder16Little = (1 << 12),
    kCGImageByteOrder32Little = (2 << 12),
    kCGImageByteOrder16Big    = (3 << 12),
    kCGImageByteOrder32Big    = (4 << 12)
} CG_AVAILABLE_STARTING(10.0, 2.0);

typedef CF_ENUM(uint32_t, CGImagePixelFormatInfo) {
    kCGImagePixelFormatMask      = 0xF0000,
    kCGImagePixelFormatPacked    = (0 << 16),
    kCGImagePixelFormatRGB555    = (1 << 16), /* Only for RGB 16 bits per pixel */
    kCGImagePixelFormatRGB565    = (2 << 16), /* Only for RGB 16 bits per pixel */
    kCGImagePixelFormatRGB101010 = (3 << 16), /* Only for RGB 32 bits per pixel */
    kCGImagePixelFormatRGBCIF10  = (4 << 16), /* Only for RGB 32 bits per pixel */
} CG_AVAILABLE_STARTING(10.14, 12.0);

typedef CF_OPTIONS(uint32_t, CGBitmapInfo) {
    kCGBitmapAlphaInfoMask     = 0x1F,
    kCGBitmapFloatInfoMask     = 0xF00,
    kCGBitmapFloatComponents   = (1 << 8),
    kCGBitmapByteOrderMask     = kCGImageByteOrderMask,
    kCGBitmapByteOrderDefault  = kCGImageByteOrderDefault,
    kCGBitmapByteOrder16Little = kCGImageByteOrder16Little,
    kCGBitmapByteOrder32Little = kCGImageByteOrder32Little,
    kCGBitmapByteOrder16Big    = kCGImageByteOrder16Big,
    kCGBitmapByteOrder32Big    = kCGImageByteOrder32Big
} CG_AVAILABLE_STARTING(10.0, 2.0);

#ifdef __BIG_ENDIAN__
# define kCGBitmapByteOrder16Host kCGBitmapByteOrder16Big
# define kCGBitmapByteOrder32Host kCGBitmapByteOrder32Big
#else /* Little endian. */
# define kCGBitmapByteOrder16Host kCGBitmapByteOrder16Little
# define kCGBitmapByteOrder32Host kCGBitmapByteOrder32Little
#endif
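To check which of these values a given image actually uses, you can query the CGImage accessors. A small sketch, reusing the cgImage from earlier (CGImageGetByteOrderInfo and CGImageGetPixelFormatInfo are among the accessors that require iOS 12):

CGImageAlphaInfo alphaInfo = CGImageGetAlphaInfo(cgImage);
CGBitmapInfo bitmapInfo    = CGImageGetBitmapInfo(cgImage);
NSLog(@"alpha info: %u, bitmap info: %u", (unsigned)alphaInfo, (unsigned)bitmapInfo);
if (@available(iOS 12.0, *)) {
    CGImageByteOrderInfo byteOrder     = CGImageGetByteOrderInfo(cgImage);
    CGImagePixelFormatInfo pixelFormat = CGImageGetPixelFormatInfo(cgImage);
    NSLog(@"byte order: %u, pixel format: %u", (unsigned)byteOrder, (unsigned)pixelFormat);
}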
So a single pixel can be stored in many different layouts depending on the color space, component order and component size. To avoid having to handle every combination, the friendlier approach is usually to go through CGBitmapContext.
CGBitmapContext draws a bitmap into a canvas in memory, and every pixel on that canvas is laid out according to the format specified when the context is created.
When creating a CGBitmapContext you pass in parameters that describe the canvas you want.
CGContextRef CGBitmapContextCreate(
    // Memory that backs the canvas; it must be at least bytesPerRow * height bytes.
    // If NULL is passed, this function allocates the bitmap memory itself.
    void *data,
    // Width of the bitmap in pixels
    size_t width,
    // Height of the bitmap in pixels
    size_t height,
    // Bits per component in a pixel, e.g. 8 for a 32-bit RGBA format where each component takes 8 bits
    size_t bitsPerComponent,
    // Bytes per row of the bitmap. If data is NULL, 0 can be passed and it is calculated automatically
    size_t bytesPerRow,
    // Color space
    CGColorSpaceRef space,
    // Additional layout information: whether there is an alpha channel, where it sits in the pixel,
    // whether components are integer or floating point, byte order, etc.
    uint32_t bitmapInfo);
Not every parameter combination is supported when creating a CGBitmapContext; the supported pixel formats are listed in the "Graphics Contexts" chapter of the Quartz 2D Programming Guide. For example, for an image without an alpha channel you cannot use kCGImageAlphaNone to create a canvas without alpha, but you can use kCGImageAlphaNoneSkipLast to ignore the alpha channel. Likewise, asking for non-premultiplied pixel data directly (for example kCGImageAlphaLast) is not allowed; only premultiplied layouts are available.
Here we fix the pixel layout to BGRA (device RGB color space, premultiplied alpha first, 32-bit little-endian byte order), using the following parameters.
uint8_t *bitmapData = (uint8_t *)calloc(pixelCount * 4, sizeof(uint8_t));
int bytesPerRow = 4 * width;
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
// Premultiplied alpha first + 32-bit little-endian byte order => BGRA in memory
CGContextRef context = CGBitmapContextCreate(bitmapData, width, height, 8, bytesPerRow,
                                             colorSpace,
                                             kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrder32Little);
if (!context) {
    CGColorSpaceRelease(colorSpace);
    free(bitmapData);
    return nullptr;
}
// Drawing the CGImage into the context decodes it into the requested layout
CGContextDrawImage(context, CGRectMake(0, 0, width, height), cgImage);
CGColorSpaceRelease(colorSpace);
CGContextRelease(context);
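If the goal is just a pre-decoded image rather than raw pixel access, the canvas can also be turned back into an image; a minimal sketch that would sit before the CGContextRelease call above:

CGImageRef decodedCGImage = CGBitmapContextCreateImage(context);
UIImage *decodedImage = [UIImage imageWithCGImage:decodedCGImage];
CGImageRelease(decodedCGImage);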
Back to the raw pixel data: each color component is then divided by its alpha to recover non-premultiplied values.
auto iter = bitmapData;
int temp;
for (size_t i = 0; i < pixelCount; i++) {
    uint8_t alpha = *(iter + 3);   // In the BGRA memory layout the alpha byte comes last
    if (alpha != 0) {
        // Un-premultiply: color = premultipliedColor * 255 / alpha
        for (int j = 0; j < 3; j++) {
            temp = *iter * 255;
            *iter++ = temp / alpha;
        }
        iter++;                    // Skip the alpha byte
    } else {
        iter += 4;                 // Fully transparent pixel, nothing to recover
    }
}
There is still plenty of room for optimization here. For example, the division can be skipped entirely for images that contain no transparency.
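One such optimization is to let Accelerate's vImage do the unpremultiplication in bulk instead of the scalar loop above. A sketch, with two assumptions: that vImageUnpremultiplyData_RGBA8888 is acceptable here because it simply divides the first three bytes of each pixel by the fourth (which matches this BGRA, alpha-last memory layout), and that operating in place on the same buffer is supported:

#import &lt;Accelerate/Accelerate.h&gt;

vImage_Buffer buffer = {
    .data     = bitmapData,
    .height   = (vImagePixelCount)height,
    .width    = (vImagePixelCount)width,
    .rowBytes = (size_t)bytesPerRow
};
// Un-premultiply every pixel: color = color * 255 / alpha
vImage_Error err = vImageUnpremultiplyData_RGBA8888(&buffer, &buffer, kvImageNoFlags);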
Tags: CGBitmapContext, CGImage, iOS