POST /livedetection
Performs a face liveness detection on two images; see Face Liveness Detection.
To perform a live detection, exactly two live recorded images are required. These are sent through the quality check, where, among other things, the face detection is done. If the images are suitable, the live detection is executed.
To perform a live detection with challenge-response, please use the LivenessDetection API or one of the methods Enroll, Verify or Identify.
You need to send two images, in between which the user has moved slightly. To call the API, proceed as follows:
The request must be authorized via HTTP Basic Authentication, using App-ID:App-Secret as the credentials. To receive the necessary BWS WebAPI access data (App-ID and App-Secret) you have to register your application on the BWS Portal first. This requires a valid BWS subscription.
Optionally, a state parameter can be provided, which is then simply passed through to the BWS log and to the returned object.
The body contains the two live images, encoded into Data-URLs using the data URI scheme as described in RFC 2397 (see also Wikipedia), e.g. using the application/json media type:
{ "liveimage1": "data:image...", "liveimage2": "data:image..." }
or using the application/x-www-form-urlencoded media type:
liveimage1=data:image...&liveimage2=data:image...
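Putting it all together, a raw request could look like the following sketch. The Base64 credentials and the image data are placeholders, and the sketch assumes the optional state value is passed in the query string:

POST https://bws.bioid.com/extension/livedetection?state=mystate HTTP/1.1
Authorization: Basic <Base64 of App-ID:App-Secret>
Content-Type: application/json

{ "liveimage1": "data:image/png;base64,...", "liveimage2": "data:image/png;base64,..." }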
If both of the provided images could be processed successfully, this method returns the OK HTTP status code (200) with true or false in the body content, indicating whether the submitted images prove that they were recorded from a live person or not.
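On the wire, such a plain response is simply:

HTTP/1.1 200 OK

true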
If a state argument has also been provided, a result object is returned instead, which contains the following fields:
Success - indicates whether the live detection succeeded or not.
JobID - the ID of the BWS job that processed the request.
State - the state string that was passed in with the call.
Errors - any errors that caused the live detection to fail, if available.
Samples - one entry per uploaded image, each containing:
    Errors - the errors encountered while processing this sample.
    EyeCenters - the calculated eye positions of the face found in this sample.
An example result object might look as follows:
{
"Success": true,
"JobID": "3e4ed0e8-5eb4-4684-930a-bdae875b854e",
"State": "details",
"Samples": [{
"Errors": [{
"Code": "ImageTooSmall",
"Message": "The part of the image containing the found face is too small.",
"Details": "The found face (with an eye-distance of 55 pixels) does not have the required eye-distance of at least 60 pixels."
}
],
"EyeCenters": {
"RightEyeX": 290.188,
"RightEyeY": 251.479,
"LeftEyeX": 345.9,
"LeftEyeY": 251.182
}
}, {
"Errors": [{
"Code": "ImageTooBlurry",
"Message": "The image is too blurry, i.e. it is not sharp enough.",
"Details": "An image blurring of 10.45% was calculated, where up to 9.00% is allowed. Note that compression artifacts might be the reason for this fuzziness as they reduce the objective sharpness more than the subjective sharpness."
}
],
"EyeCenters": {
"RightEyeX": 280.599,
"RightEyeY": 242.184,
"LeftEyeX": 333.96,
"LeftEyeY": 241.687
}
}
]
}
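If you need to evaluate this result object in C#, it can be deserialized into a few small DTO classes. The following sketch is illustrative only (the class names are not part of any BWS client library) and assumes System.Text.Json is available:

using System.Text.Json;

// Illustrative DTOs mirroring the result object shown above.
public class LiveDetectionResult
{
    public bool Success { get; set; }
    public string JobID { get; set; }
    public string State { get; set; }
    public LiveDetectionError[] Errors { get; set; }  // top-level errors, if any
    public SampleResult[] Samples { get; set; }
}

public class SampleResult
{
    public LiveDetectionError[] Errors { get; set; }
    public EyeCenters EyeCenters { get; set; }
}

public class LiveDetectionError
{
    public string Code { get; set; }
    public string Message { get; set; }
    public string Details { get; set; }
}

public class EyeCenters
{
    public double RightEyeX { get; set; }
    public double RightEyeY { get; set; }
    public double LeftEyeX { get; set; }
    public double LeftEyeY { get; set; }
}

// e.g.: var result = JsonSerializer.Deserialize<LiveDetectionResult>(jsonBody);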
The reported sample errors typically come from the quality check, which is performed for each sample (refer to the SOAP Quality Check API for a list of possible error codes), or from the live detection itself, which might report additional errors.
In case something goes wrong, an error HTTP status code is returned together with some additional information, if available. The call returns one of the standard HTTP status codes. When invalid samples have been uploaded or they could not be processed successfully (e.g. no face found), the response body typically contains a Message field with the error code and an ExceptionMessage field with further details.
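Such an error body might look like the following sketch (both values are purely illustrative):

{
  "Message": "SomeErrorCode",
  "ExceptionMessage": "A human-readable description of what went wrong."
}

The following C# sample shows a complete call to the livedetection API: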
// Requires the following using directives: System, System.Net, System.Net.Http,
// System.Net.Http.Headers, System.Text, System.Threading.Tasks
// (and System.Web for HttpUtility in the form-urlencoded variant below).
// APP_IDENTIFIER, APP_SECRET and ENDPOINT (e.g. "https://bws.bioid.com/extension/")
// are assumed to be defined elsewhere.
private static async Task<bool> LiveDetectionAsync(string dataUrlLiveImage1, string dataUrlLiveImage2)
{
using (var client = new HttpClient())
{
client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Basic", Convert.ToBase64String(Encoding.ASCII.GetBytes($"{APP_IDENTIFIER}:{APP_SECRET}")));
string mediaType, images;
// using the application/json media type
mediaType = "application/json";
images = $@"{{""liveimage1"":""{dataUrlLiveImage1}"""
+ $@",""liveimage2"":""{dataUrlLiveImage2}""}}";
// or using application/x-www-form-urlencoded
/*
mediaType = "application/x-www-form-urlencoded";
images = string.Format("liveimage1={0}&liveimage2={1}", HttpUtility.UrlEncode(dataUrlLiveImage1), HttpUtility.UrlEncode(dataUrlLiveImage2));
*/
using (var content = new StringContent(images, Encoding.UTF8, mediaType))
using (var response = await client.PostAsync(ENDPOINT + "livedetection", content))
{
Console.Write("LiveDetection Response... ");
string result = await response.Content.ReadAsStringAsync();
if (response.StatusCode == HttpStatusCode.OK)
{
if (bool.TryParse(result, out var isLive))
{
Console.WriteLine(isLive);
return isLive;
}
}
Console.WriteLine(response.StatusCode.ToString());
Console.WriteLine(result);
return false;
}
}
}
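A hypothetical caller of this sample might create the required Data-URLs from two PNG files like this (the file names are placeholders; requires System.IO):

// illustrative usage of the LiveDetectionAsync sample above
string liveImage1 = "data:image/png;base64," + Convert.ToBase64String(File.ReadAllBytes("image1.png"));
string liveImage2 = "data:image/png;base64," + Convert.ToBase64String(File.ReadAllBytes("image2.png"));
bool isLive = await LiveDetectionAsync(liveImage1, liveImage2);

The same call can also be implemented in Java, for example with the JSON-java and OkHttp libraries: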
// using org.json.JSONObject from the JSON-java library;
// requires imports for org.json.JSONObject, okhttp3.* and java.util.Base64,
// and handling of the IOException thrown by execute().
// png1AsByteArray/png2AsByteArray hold the two recorded PNG images.
JSONObject requestBody = new JSONObject();
requestBody.put("liveimage1", "data:image/png;base64," + Base64.getEncoder().encodeToString(png1AsByteArray));
requestBody.put("liveimage2", "data:image/png;base64," + Base64.getEncoder().encodeToString(png2AsByteArray));
// using OkHttpClient from the OkHttp library
Request request = new Request.Builder()
    .url("https://bws.bioid.com/extension/livedetection")
    .addHeader("Authorization", Credentials.basic(APP_IDENTIFIER, APP_SECRET))
    .post(RequestBody.create(MediaType.parse("application/json"), requestBody.toString()))
    .build();
OkHttpClient client = new OkHttpClient();
// try-with-resources ensures the response body is closed
try (Response response = client.newCall(request).execute()) {
    if (response.code() == 200) {
        if (response.body().string().equals("true")) {
            System.out.println("recorded from a live person");
        } else {
            System.out.println("not recorded from a live person");
        }
    }
}