How Face Stickers Are Implemented

Overview

This article analyzes how stickers are implemented, based on the code of a GPUImage-based real-time video sticker demo.

Main Pipeline

GPUImageCamera -> GPUStickerFilter (draws stickers at the detected face positions) -> GPUImageView
GPUImageCamera (sample buffer callback) -> face detection -> face positions

Face Detection

// Face detection callback, invoked for every camera sample buffer
- (void)willOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
{
    // Run the face detection algorithm here; `arr` is its result:
    // one NSArray per detected face, each holding landmark dictionaries.
    NSMutableArray *faces = [NSMutableArray arrayWithCapacity:arr.count];
    for (NSArray *ele in arr) {
        NSMutableArray *points = [NSMutableArray arrayWithCapacity:ele.count];
        for (NSDictionary *dic in ele) {
            // Each landmark is stored as {"x": ..., "y": ...}
            CGPoint point = CGPointMake([dic[@"x"] floatValue], [dic[@"y"] floatValue]);
            [points addObject:[NSValue valueWithCGPoint:point]];
        }
        [faces addObject:points];
    }
    self.stickerFilter.faces = faces;
}
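The landmark coordinates delivered by a detector are typically in the pixel space of the video frame, while OpenGL draws in normalized device coordinates. A minimal C sketch of that conversion, under the assumption that pixel y grows downward (the demo's actual mapping lives inside SKSticker and is not shown in the post):

```c
#include <assert.h>

typedef struct { float x, y; } Vec2;

/* Map a pixel-space landmark to OpenGL normalized device coordinates
 * ([-1, 1] on both axes). Pixel y grows downward while NDC y grows
 * upward, hence the flip on the y axis. */
static Vec2 pixel_to_ndc(Vec2 p, float frameW, float frameH)
{
    Vec2 ndc;
    ndc.x = p.x / frameW * 2.0f - 1.0f;
    ndc.y = 1.0f - p.y / frameH * 2.0f;
    return ndc;
}
```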

GPUStickerFilter

Setting the Face Positions

- (void)setFaces:(NSArray<NSArray *> *)faces
{
    runAsynchronouslyOnVideoProcessingQueue(^{
        _faces = faces;
    });
}

The face positions are assigned on the video processing queue, which serializes the update with rendering, since rendering runs on the same queue.

Setting the Sticker Asset

- (void)setSticker:(SKSticker *)sticker
{
    runAsynchronouslyOnVideoProcessingQueue(^{
        if (_sticker == sticker) {
            return;
        }
        // Activate the GL context before releasing the old sticker's GL resources
        [GPUImageContext useImageProcessingContext];
        [_sticker reset];
        _sticker = sticker;
    });
}
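The draw call below passes a `timeInterval:` argument, which suggests SKSticker can animate through a sequence of frames. A hedged C sketch of how a frame index might be advanced from that per-frame interval; the fixed frame duration and the looping wrap-around are assumptions, not taken from the demo:

```c
/* Advance an animated sticker by the elapsed time since the last draw
 * and return the frame index to render. Assumes a fixed per-frame
 * duration and a looping animation. */
typedef struct {
    double elapsed;       /* accumulated playback time, seconds */
    double frameDuration; /* seconds each animation frame is shown */
    int    frameCount;    /* total frames in the sticker sequence */
} StickerClock;

static int sticker_frame(StickerClock *clock, double interval)
{
    clock->elapsed += interval;
    int frame = (int)(clock->elapsed / clock->frameDuration);
    return frame % clock->frameCount; /* wrap around to loop */
}
```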

Drawing the Stickers

Below is the core sticker-drawing code:

- (void)renderToTextureWithVertices:(const GLfloat *)vertices textureCoordinates:(const GLfloat *)textureCoordinates;
{
    if (self.preventRendering)
    {
        [firstInputFramebuffer unlock];
        return;
    }
    if (usingNextFrameForImageCapture)
    {
        [firstInputFramebuffer lock];
    }
    // Interval since the previous frame
    NSTimeInterval interval = 0;
    if (CMTIME_IS_VALID(_lastTime)) {
        interval = CMTimeGetSeconds(CMTimeSubtract(_currentTime, _lastTime));
    }
    _lastTime = _currentTime;
    if (self.faces.count) {
        // Enable blending; stickers are composited as premultiplied alpha
        glEnable(GL_BLEND);
        glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
        glBindFramebuffer(GL_FRAMEBUFFER, _framebufferHandle);
        glViewport(0, 0, inputTextureSize.width, inputTextureSize.height);
        // Draw directly onto the source texture
        glActiveTexture(GL_TEXTURE0);
        glBindTexture(GL_TEXTURE_2D, firstInputFramebuffer.texture);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, firstInputFramebuffer.texture, 0);
        [GPUImageContext setActiveShaderProgram:filterProgram];
        for (NSArray *points in _faces) {
            [_sticker drawItemsWithFacePoints:points
                              framebufferSize:inputTextureSize
                                 timeInterval:interval
                                   usingBlock:^(GLfloat *vertices, GLuint texture) {
                // Draw one textured quad per sticker item
                glActiveTexture(GL_TEXTURE2);
                glBindTexture(GL_TEXTURE_2D, texture);
                glUniform1i(filterInputTextureUniform, 2);
                glVertexAttribPointer(filterPositionAttribute, 2, GL_FLOAT, 0, 0, vertices);
                glVertexAttribPointer(filterTextureCoordinateAttribute, 2, GL_FLOAT, 0, 0, textureCoordinates);
                glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
                glBindTexture(GL_TEXTURE_2D, 0);
            }];
        }
        glDisable(GL_BLEND);
    }
    // Reuse the input framebuffer directly as the output
    outputFramebuffer = firstInputFramebuffer;
    if (usingNextFrameForImageCapture)
    {
        dispatch_semaphore_signal(imageCaptureSemaphore);
    }
}
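Each call to the `usingBlock` callback draws a four-vertex `GL_TRIANGLE_STRIP`, i.e. one quad per sticker item. A minimal C sketch of how such a quad, centered on a face anchor point in normalized device coordinates, could be produced; the half-width/half-height sizing is an assumption, as SKSticker presumably derives the quad from the landmark geometry itself:

```c
/* Fill an 8-float vertex array (x,y per vertex, in GL_TRIANGLE_STRIP
 * order: bottom-left, bottom-right, top-left, top-right) for a quad
 * centered at (cx, cy) in normalized device coordinates. */
static void sticker_quad(float cx, float cy, float halfW, float halfH,
                         float out[8])
{
    out[0] = cx - halfW; out[1] = cy - halfH; /* bottom-left  */
    out[2] = cx + halfW; out[3] = cy - halfH; /* bottom-right */
    out[4] = cx - halfW; out[5] = cy + halfH; /* top-left     */
    out[6] = cx + halfW; out[7] = cy + halfH; /* top-right    */
}
```

Note that the `glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA)` call in the render method expects the sticker texture to carry premultiplied alpha, which is how iOS normally decodes PNGs.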
