Face Detection
API Interface of the iOS Framework
Define the face detection model variable
private var faceDetectionModel: FaceDetectionModel?
Configure and instantiate the model with FaceDetectionModelBuilder
@interface FaceDetectionModelBuilder : NSObject
- (instancetype _Nonnull)init;
/**
* \brief Sets the preferred model to use based on the use-case.
*
* \param type One of the available \a FaceDetectionModelType.
* Default is \a FaceDetectionModelType_ShortRange.
* \returns Pointer to the \a FaceDetectionModelBuilder
*/
- (FaceDetectionModelBuilder* _Nonnull)setFaceDetectionModelType:(FaceDetectionModelType)type;
/**
* \brief Creates a new instance of \a FaceDetectionModel.
*
* \param error Object containing error information if model instantiation fails.
*
* \returns Pointer to the new instance of \a FaceDetectionModel if instantiation
* is successful, \a nil otherwise.
*
* \note Model instantiation is a blocking call that can take some time;
* perform it on a separate serial dispatch queue so the main queue is not
* blocked and the UI stays responsive.
*/
- (FaceDetectionModel* _Nullable)build:(NSError* _Nullable* _Nonnull)error;
@end
Example:
do {
    self.faceDetectionModel = try FaceDetectionModelBuilder()
        .setFaceDetectionModelType(.shortRange)
        .build()
} catch {
    fatalError(
        "Failed to instantiate face detection model: \(error.localizedDescription)"
    )
}
Model instantiation is a blocking call that can take some time, so it should be done on a separate serial dispatch queue; this keeps the main queue free and the UI responsive.
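As a minimal sketch of moving the blocking build() call off the main queue (the queue label and the loadModel method name are illustrative, not part of the framework API):

```swift
// A dedicated serial queue so the blocking build() call never runs on
// the main queue. The label is illustrative.
private let modelQueue = DispatchQueue(label: "com.example.face-detection")

func loadModel() {
    modelQueue.async {
        do {
            let model = try FaceDetectionModelBuilder()
                .setFaceDetectionModelType(.shortRange)
                .build()
            // Hop back to the main queue before touching UI-related state.
            DispatchQueue.main.async {
                self.faceDetectionModel = model
            }
        } catch {
            NSLog("Failed to instantiate face detection model: \(error.localizedDescription)")
        }
    }
}
```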
Once the model is instantiated, schedule detection with the FaceDetectionModel.detect method
func captureOutput(_ output: AVCaptureOutput,
                   didOutput sampleBuffer: CMSampleBuffer,
                   from connection: AVCaptureConnection) {
    guard let imageBuffer = sampleBuffer.imageBuffer,
          let faceDetectionModel = faceDetectionModel else {
        return
    }
    do {
        try faceDetectionModel.detect(on: imageBuffer,
                                      at: sampleBuffer.outputPresentationTimeStamp)
    } catch {
        NSLog("Failed to submit face detection task: \(error.localizedDescription)")
    }
}
FaceDetectionModel returns its results through the FaceDetectionDelegate protocol
@protocol FaceDetectionDelegate <NSObject>
/**
* \brief Callback triggered whenever the \a FaceDetectionModel completes the
* processing of the passed frame.
*
* \param model The \a FaceDetectionModel that produced the detections.
* \param detections Collection of detected faces.
*/
@optional
- (void)faceDetectionModel:(FaceDetectionModel* _Nonnull)model
didOutputDetections:(NSArray<FaceDetection*>* _Nonnull)detections;
@end
Example:
func faceDetectionModel(_ model: FaceDetectionModel,
                        didOutputDetections detections: [FaceDetection]) {
    cameraPreviewView.draw(faces: detections)
}
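Since the bounding box is normalized to [0.0, 1.0], it has to be scaled into the preview view's coordinate space before drawing. A minimal sketch (the viewRect function and the previewSize parameter are illustrative names, not part of the framework API):

```swift
// Sketch: mapping a normalized bounding box into a view's coordinate
// space. `previewSize` stands in for the preview view's bounds.size.
func viewRect(for detection: FaceDetection, in previewSize: CGSize) -> CGRect {
    let box = detection.boundingBox  // normalized to [0.0, 1.0]
    return CGRect(x: box.origin.x * previewSize.width,
                  y: box.origin.y * previewSize.height,
                  width: box.size.width * previewSize.width,
                  height: box.size.height * previewSize.height)
}
```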
Each FaceDetection instance is represented by the following class
@interface FaceDetection : NSObject
/**
* \brief Confidence of the detected face.
*/
@property (readonly, nonatomic) float score;
/**
* \brief Face bounding box, normalized to [0.0, 1.0] range
*/
@property (readonly, nonatomic) CGRect boundingBox;
/**
* \brief Collection of 6 face landmarks.
*
* A specific landmark can be accessed by the raw value of its
* \a FaceLandmarkType, or via the \a faceLandmarkOfType method.
* Each landmark is a 2D Point with coordinates normalized to [0.0, 1.0] range.
*/
@property (readonly, nonatomic, nonnull) NSArray<Landmark*>* landmarks;
/**
* \brief Accesses a specific face landmark.
*
* \param type Type of the face Landmark. One of the \a FaceLandmarkType.
*
* \returns Landmark corresponding to the passed type.
*/
- (Landmark* _Nonnull)faceLandmarkOfType:(FaceLandmarkType)type;
@end
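As an illustrative sketch of reading the detection results above (the .leftEye case name and the 0.5 threshold are assumptions; use whichever cases your version of FaceLandmarkType actually defines):

```swift
// Sketch: reading score, bounding box, and one landmark per detection.
func logDetections(_ detections: [FaceDetection]) {
    for detection in detections where detection.score > 0.5 {  // example threshold
        print("face at \(detection.boundingBox), score \(detection.score)")
        // `.leftEye` is an assumed FaceLandmarkType case, for illustration only.
        let leftEye = detection.faceLandmarkOfType(.leftEye)
        print("left eye landmark: \(leftEye)")  // coordinates normalized to [0.0, 1.0]
    }
}
```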