
BioID Motion Detection

The BioID Liveness Detection analyzes the movement between two images. The first image therefore has to capture the face frontally; the second image has to capture the face after a SLIGHT movement.

The liveness detection algorithm will most likely NOT work as intended if you simply send two images showing the face in different positions. In most cases, the movement in the second image will be too large, so the image cannot be used for performing the BioID liveness detection, and the LivenessDetection API returns the result “LiveDetectionFailed”. Please always use the BioID Motion Detection to ensure the full functionality of the BioID Liveness Detection!

The BioID Motion Detection automatically detects the required movement and triggers the capturing of the second image.

Requirements for BioID Motion Detection

  • Implement live video capturing on your client - BioID provides full sample code for HTML, iOS, and Android on GitHub
  • Implement (or copy) the Motion Detection algorithm below for creating the template and detecting the motion

How the BioID Motion Detection works

The algorithm is a form of Template Matching. It creates a template from a region of the first image (the face area) and tries to match that template against each incoming image.

If the position of the best template match indicates enough movement between the first image and the current image (i.e. the movement is above the motion threshold), you can use the first image together with the current image as the second image for the LivenessDetection API.
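
As a rough sketch of how this fits together in a browser client: create the template from the first (frontal) image, then run the motion detection on every incoming frame until the returned movement exceeds your threshold. The grabFrame helper and the threshold value below are illustrative assumptions; createTemplate and motionDetection are the functions shown further down on this page.

// Illustrative trigger loop (assumptions: grabFrame() returns the current
// video frame as ImageData; createTemplate/motionDetection as defined below).
const MOTION_THRESHOLD = 15; // percent; adjust for your device class

function captureImagePair(grabFrame, onPairReady) {
    const firstImage = grabFrame();              // frontal face
    const template = createTemplate(firstImage); // template of the face area

    const timer = setInterval(() => {
        const currentImage = grabFrame();
        if (motionDetection(currentImage, template) > MOTION_THRESHOLD) {
            clearInterval(timer);
            // Use firstImage and currentImage as the image pair for the LivenessDetection API
            onPairReady(firstImage, currentImage);
        }
    }, 100); // check roughly ten frames per second
}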

Motion Threshold

You can adjust the threshold for motion detection to make it more or less sensitive. Please adjust this threshold only if necessary.

For our web apps (browser apps) we use different thresholds on desktop and mobile devices. On mobile devices we use a higher threshold, because the user holds the phone in the hand, which causes additional movement compared to a fixed desktop camera position. If the capturing of the second image is triggered by hand or device movement instead of head movement, the Liveness Detection will most likely fail. For web apps (browser apps) used on a mobile device, we therefore recommend a less sensitive motion detection (a higher threshold) to reduce mistakenly triggered images.
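
As an illustration, a browser app might pick the threshold based on the device class. The user-agent check and the concrete values below are assumptions for this sketch, not official BioID defaults:

// Sketch: pick a less sensitive (higher) threshold on mobile devices,
// where hand shake adds extra movement. Values are illustrative only.
function getMotionThreshold() {
    const isMobile = /Android|iPhone|iPad|Mobile/i.test(navigator.userAgent);
    return isMobile ? 25 : 15; // higher threshold = less sensitive
}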

Advantages of using Motion Detection

Your client uploads two images instead of a video.

  • Capturing two images is faster and less intrusive than recording a video
  • Uploading is faster, especially over slow internet connections (e.g. 3G)

Motion Detection source code

Below you find the source code for creating a template and running the motion detection, in JavaScript, Objective-C (iOS), and Java (Android).

To see how to feed the incoming camera data into these functions, please take a look at the BioID sample code on GitHub.

Create a template

JavaScript:

// Cut out the template that is used by the motion detection.
function createTemplate(imageData) {
    // Cut out the template:
    // we use a narrow, roughly quarter-size region around the image center.
    // Math.floor keeps all coordinates integral for arbitrary image sizes.
    var template = {
        centerX: Math.floor(imageData.width / 2),
        centerY: Math.floor(imageData.height / 2),
        width: Math.floor(imageData.width / 4),
        height: Math.floor(imageData.height / 4 + imageData.height / 8)
    };
    template.xPos = template.centerX - Math.floor(template.width / 2);
    template.yPos = template.centerY - Math.floor(template.height / 2);
    template.buffer = new Uint8ClampedArray(template.width * template.height);

    let counter = 0;
    let p = imageData.data;
    for (var y = template.yPos; y < template.yPos + template.height; y++) {
        // we use only the green plane here
        let bufferIndex = (y * imageData.width * 4) + template.xPos * 4 + 1;
        for (var x = template.xPos; x < template.xPos + template.width; x++) {
            let templatepixel = p[bufferIndex];
            template.buffer[counter++] = templatepixel;
            // we use only the green plane here
            bufferIndex += 4;
        }
    }
    console.log('Created new cross-correlation template', template);
    return template;
}

Objective-C (iOS):

// Cut out the template that is used by the motion detection.
-(void)createTemplate:(UIImage *)first {
    UIImage *resizedImage = [self resizeImageForMotionDetection:first];
    UIImage *resizedGrayImage = [self convertImageToGrayScale:resizedImage];

    resizeCenterX = resizedGrayImage.size.width / 2;
    resizeCenterY = resizedGrayImage.size.height / 2;

    if (resizedGrayImage.size.width > resizedGrayImage.size.height) {
        // Landscape mode
        templateWidth = resizedGrayImage.size.width / 10;
        templateHeight = resizedGrayImage.size.height / 3;
    }
    else {
        // Portrait mode
        templateWidth = resizedGrayImage.size.width / 10 * 4 / 3;
        templateHeight = resizedGrayImage.size.height / 4;
    }

    templateXpos = resizeCenterX - templateWidth / 2;
    templateYpos = resizeCenterY - templateHeight / 2;

    templateBuffer = nil;
    templateBuffer = malloc(templateWidth * templateHeight);

    CFDataRef rawData = CGDataProviderCopyData(CGImageGetDataProvider(resizedGrayImage.CGImage));
    int bytesPerRow = (int)CGImageGetBytesPerRow(resizedGrayImage.CGImage);
    const UInt8* buffer = CFDataGetBytePtr(rawData);

    int counter = 0;
    for (int y = templateYpos; y < templateYpos + templateHeight; y++) {
        for (int x = templateXpos; x < templateXpos + templateWidth; x++) {
            int templatePixel = buffer[x + y * bytesPerRow];
            templateBuffer[counter++] = templatePixel;
        }
    }

    // Release
    CFRelease(rawData);
}

Java (Android):

// Cut out the template that is used by the motion detection.
void createTemplate(@NonNull Yuv420Image first) {

    GrayscaleImage resizedGrayImage = first.asDownscaledGrayscaleImage();

    resizeCenterX = resizedGrayImage.width / 2;
    resizeCenterY = resizedGrayImage.height / 2;

    if (resizedGrayImage.width > resizedGrayImage.height) {
        // Landscape mode
        templateWidth = resizedGrayImage.width / 10;
        templateHeight = resizedGrayImage.height / 3;
    } else {
        // Portrait mode
        templateWidth = resizedGrayImage.width / 10 * 4 / 3;
        templateHeight = resizedGrayImage.height / 4;
    }

    templateXpos = resizeCenterX - templateWidth / 2;
    templateYpos = resizeCenterY - templateHeight / 2;
    templateBuffer = new int[templateWidth * templateHeight];

    int counter = 0;
    for (int y = templateYpos; y < templateYpos + templateHeight; y++) {
        int offset = y * resizedGrayImage.width;
        for (int x = templateXpos; x < templateXpos + templateWidth; x++) {
            int templatePixel = resizedGrayImage.data[x + offset] & 0xff;
            templateBuffer[counter++] = templatePixel;
        }
    }
}
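
In the browser, the ImageData expected by createTemplate (and by motionDetection below) can be grabbed from a live <video> element via a canvas. This is a minimal sketch using standard canvas APIs; the element ID is an assumption:

// Sketch: grab the current video frame as ImageData (standard canvas API).
function grabVideoFrame(video) {
    const canvas = document.createElement('canvas');
    canvas.width = video.videoWidth;
    canvas.height = video.videoHeight;
    const ctx = canvas.getContext('2d');
    ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
    return ctx.getImageData(0, 0, canvas.width, canvas.height);
}

const video = document.getElementById('camera'); // assumed <video> element
const template = createTemplate(grabVideoFrame(video));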

Motion Detection

JavaScript:

// Motion detection by a normalized cross-correlation
function motionDetection(imageData, template) {
    // This is the major computing step: Perform a normalized cross-correlation between the template of the first image and each incoming image
    // This algorithm is basically called "Template Matching" - we use the normalized cross correlation to be independent of lighting changes
    // We calculate the correlation of template and image over the whole image area
    let bestHitX = 0,
        bestHitY = 0,
        maxCorr = 0,
        searchWidth = Math.floor(imageData.width / 4),
        searchHeight = Math.floor(imageData.height / 4),
        p = imageData.data;

    for (let y = template.centerY - searchHeight; y <= template.centerY + searchHeight - template.height; y++) {
        for (let x = template.centerX - searchWidth; x <= template.centerX + searchWidth - template.width; x++) {
            let nominator = 0, denominator = 0, templateIndex = 0;

            // Calculate the normalized cross-correlation coefficient for this position
            for (let ty = 0; ty < template.height; ty++) {
                // we use only the green plane here
                let bufferIndex = x * 4 + 1 + (y + ty) * imageData.width * 4;
                for (let tx = 0; tx < template.width; tx++) {
                    let imagepixel = p[bufferIndex];
                    nominator += template.buffer[templateIndex++] * imagepixel;
                    denominator += imagepixel * imagepixel;
                    // we use only the green plane here
                    bufferIndex += 4;
                }
            }

            // The NCC coefficient is then (watch out for division-by-zero errors for pure black images).
            // The constant template energy term is omitted here, since it does not change where the maximum is.
            let ncc = 0.0;
            if (denominator > 0) {
                ncc = nominator * nominator / denominator;
            }
            // Is it higher than what we had before?
            if (ncc > maxCorr) {
                maxCorr = ncc;
                bestHitX = x;
                bestHitY = y;
            }
        }
    }

    // Now the most similar position of the template is (bestHitX, bestHitY). Calculate the difference from the origin
    let distX = bestHitX - template.xPos,
        distY = bestHitY - template.yPos,
        movementDiff = Math.sqrt(distX * distX + distY * distY);

    // The maximum movement possible is a complete shift into one of the corners, i.e.
    let maxDistX = searchWidth - template.width / 2,
        maxDistY = searchHeight - template.height / 2,
        maximumMovement = Math.sqrt(maxDistX * maxDistX + maxDistY * maxDistY);

    // The percentage of the detected movement is therefore
    var movementPercentage = movementDiff / maximumMovement * 100;
    if (movementPercentage > 100) {
        movementPercentage = 100;
    }
    console.log('Calculated movement: ', movementPercentage);
    return movementPercentage;
}

Objective-C (iOS):

// Motion detection by a normalized cross-correlation
-(BOOL)motionDetection:(UIImage *)current {
    // This is the major computing step: Perform a normalized cross-correlation between the template of the first image and each incoming image.
    // This algorithm is basically called "Template Matching" - we use the normalized cross-correlation to be independent of lighting changes.
    // We calculate the correlation of template and image over the whole image area.
    UIImage *resizedImage = [self resizeImageForMotionDetection:current];
    UIImage *resizedGrayImage = [self convertImageToGrayScale:resizedImage];
  
    int bestHitX = 0;
    int bestHitY = 0;
    double maxCorr = 0.0;
    bool triggered = false;
  
    int searchWidth = resizedGrayImage.size.width / 4;
    int searchHeight = resizedGrayImage.size.height / 4;
     
    CFDataRef rawData = CGDataProviderCopyData(CGImageGetDataProvider(resizedGrayImage.CGImage));
    int bytesPerRow = (int)CGImageGetBytesPerRow(resizedGrayImage.CGImage);
    const UInt8* buffer = CFDataGetBytePtr(rawData);
     
    for (int y = resizeCenterY - searchHeight; y <= resizeCenterY + searchHeight - templateHeight; y++) {
        for (int x = resizeCenterX - searchWidth; x <= resizeCenterX + searchWidth - templateWidth; x++) {
            int nominator = 0;
            int denominator = 0;
            int templateIndex = 0;
             
            // Calculate the normalized cross-correlation coefficient for this position
            for (int ty = 0; ty < templateHeight; ty++) {
                int bufferIndex = x + (y + ty) * bytesPerRow;
                for (int tx = 0; tx < templateWidth; tx++) {
                    int imagePixel = buffer[bufferIndex++];
                    nominator += templateBuffer[templateIndex++] * imagePixel;
                    denominator += imagePixel * imagePixel;
                }
            }
         
            // The NCC coefficient is then (watch out for division-by-zero errors for pure black images)
            double ncc = 0.0;
            if (denominator > 0) {
                ncc = (double)nominator * (double)nominator / (double)denominator;
            }
            // Is it higher than what we had before?
            if (ncc > maxCorr) {
                maxCorr = ncc;
                bestHitX = x;
                bestHitY = y;
            }
        }
    }
     
    // Now the most similar position of the template is (bestHitX, bestHitY). Calculate the difference from the origin
    int distX = bestHitX - templateXpos;
    int distY = bestHitY - templateYpos;
 
    double movementDiff = sqrt(distX * distX + distY * distY);
     
    // The maximum movement possible is a complete shift into one of the corners, i.e.
    int maxDistX = searchWidth - templateWidth / 2;
    int maxDistY = searchHeight - templateHeight / 2;
    double maximumMovement = sqrt((double)maxDistX * maxDistX + (double)maxDistY * maxDistY);
     
    // The percentage of the detected movement is therefore
    double movementPercentage = movementDiff / maximumMovement * 100.0;
     
    if (movementPercentage > 100.0) {
        movementPercentage = 100.0;
    }
  
    // Trigger if movementPercentage is above threshold
    if (movementPercentage > MIN_MOVEMENT_PERCENTAGE) {
        triggered = true;
    }
     
    // Release
    CFRelease(rawData);
     
    return triggered;
}

Java (Android):

// Motion detection by a normalized cross-correlation
boolean detect(@NonNull Yuv420Image current) {
    // This is the major computing step: Perform a normalized cross-correlation between the template of the first image and each incoming image.
    // This algorithm is basically called "Template Matching" - we use the normalized cross-correlation to be independent of lighting changes.
    // We calculate the correlation of template and image over the whole image area.
    if (templateBuffer == null) {
        throw new IllegalStateException("missing template");
    }

    GrayscaleImage resizedGrayImage = current.asDownscaledGrayscaleImage();

    int bestHitX = 0;
    int bestHitY = 0;
    double maxCorr = 0.0;
    boolean triggered = false;

    int searchWidth = resizedGrayImage.width / 4;
    int searchHeight = resizedGrayImage.height / 4;

    for (int y = resizeCenterY - searchHeight; y <= resizeCenterY + searchHeight - templateHeight; y++) {
        for (int x = resizeCenterX - searchWidth; x <= resizeCenterX + searchWidth - templateWidth; x++) {
            int nominator = 0;
            int denominator = 0;
            int templateIndex = 0;

            // Calculate the normalized cross-correlation coefficient for this position
            for (int ty = 0; ty < templateHeight; ty++) {
                int bufferIndex = x + (y + ty) * resizedGrayImage.width;
                for (int tx = 0; tx < templateWidth; tx++) {
                    int imagePixel = resizedGrayImage.data[bufferIndex++] & 0xff;
                    nominator += templateBuffer[templateIndex++] * imagePixel;
                    denominator += imagePixel * imagePixel;
                }
            }

            // The NCC coefficient is then (watch out for division-by-zero errors for pure black images)
            double ncc = 0.0;
            if (denominator > 0) {
                ncc = (double) nominator * (double) nominator / (double) denominator;
            }
            // Is it higher than what we had before?
            if (ncc > maxCorr) {
                maxCorr = ncc;
                bestHitX = x;
                bestHitY = y;
            }
        }
    }

    // Now the most similar position of the template is (bestHitX, bestHitY). Calculate the difference from the origin
    int distX = bestHitX - templateXpos;
    int distY = bestHitY - templateYpos;
    double movementDiff = Math.sqrt(distX * distX + distY * distY);

    // The maximum movement possible is a complete shift into one of the corners, i.e.
    int maxDistX = searchWidth - templateWidth / 2;
    int maxDistY = searchHeight - templateHeight / 2;
    double maximumMovement = Math.sqrt((double) maxDistX * maxDistX + (double) maxDistY * maxDistY);

    // The percentage of the detected movement is therefore
    double movementPercentage = movementDiff / maximumMovement * 100.0;

    if (movementPercentage > 100.0) {
        movementPercentage = 100.0;
    }

    log.d("detected motion of %.2f%%", movementPercentage);

    // Trigger if movementPercentage is above threshold (default: when 15% of the maximum movement is exceeded)
    if (movementPercentage > MIN_MOVEMENT_PERCENTAGE) {
        triggered = true;
    }

    log.stopStopwatch(stopwatchSessionId);
    return triggered;
}
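
Once the motion detection has triggered, the two frames still have to be turned into uploadable images. The LivenessDetection API call itself is documented separately; the sketch below only shows the client-side encoding of an ImageData frame to a JPEG blob using standard canvas APIs, with firstImage and currentImage taken from the trigger loop shown earlier:

// Sketch: encode a captured ImageData frame as a JPEG blob for upload.
function imageDataToBlob(imageData) {
    const canvas = document.createElement('canvas');
    canvas.width = imageData.width;
    canvas.height = imageData.height;
    canvas.getContext('2d').putImageData(imageData, 0, 0);
    return new Promise(resolve => canvas.toBlob(resolve, 'image/jpeg', 0.95));
}

// Usage: encode both frames, then send them to the LivenessDetection API
// as described in the BioID Web Service documentation.
Promise.all([imageDataToBlob(firstImage), imageDataToBlob(currentImage)])
    .then(([image1, image2]) => { /* upload image1 & image2 */ });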