Performs a one-to-one comparison of the submitted live images with the submitted ID photo in order to verify whether the live images and ID photo belong to the same person.
Additionally, unless explicitly disabled, liveness detection is performed on the provided live images. This is done by internally calling into the LivenessDetection API.
PhotoVerify is a service that takes one photo, e.g. a passport image from an ID document, and compares it with one or two "live" images of a person to find out whether the persons shown are the same. No classes are created, and no templates or patterns are stored. It fulfills all requirements for an anonymous ID proofing service.
To make a decision about the similarity of the photo and the live images, an accuracy level is calculated (returned in the verification_level field). The higher the accuracy level, the better the faces on the images match. We highly recommend using a high accuracy level, at least level 4, for your decision. Lower accuracy levels may be acceptable with low-quality ID photos (e.g. scanned passport images), where a higher accuracy can no longer be reached.
The PhotoVerify API is defined as a unary RPC:
rpc PhotoVerify (PhotoVerifyRequest) returns (PhotoVerifyResponse);
message PhotoVerifyRequest {
    repeated ImageData live_images = 1;
    bytes photo = 2;
    bool disable_liveness_detection = 3;
}
message PhotoVerifyResponse {
    JobStatus status = 1;
    repeated JobError errors = 2;
    repeated ImageProperties image_properties = 3;
    ImageProperties photo_properties = 4;
    AccuracyLevel verification_level = 5;
    double verification_score = 6;
    bool live = 7;
    double liveness_score = 8;
    enum AccuracyLevel {
        NOT_RECOGNIZED = 0;
        LEVEL_1 = 1;
        LEVEL_2 = 2;
        LEVEL_3 = 3;
        LEVEL_4 = 4;
        LEVEL_5 = 5;
    }
}
The PhotoVerifyRequest message has the fields as follows:
live_images
The live images of the person, typically one or two, that are compared with the ID photo.
photo
The ID photo, e.g. a passport image from an ID document, that the live images are compared with.
disable_liveness_detection
By default, a liveness detection is performed on the provided live_images. If you do not want to perform a liveness detection at all, simply set this flag to true (see the sketch below).
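For illustration, building such a request in C# with liveness detection switched off could look like the following minimal sketch; the file names are placeholders, and the property names follow the usual protobuf C# naming conventions:
// Sketch: build a PhotoVerifyRequest with two live images and liveness detection disabled.
// "idphoto.png", "live1.png" and "live2.png" are placeholder file names.
var request = new PhotoVerifyRequest
{
    Photo = ByteString.CopyFrom(File.ReadAllBytes("idphoto.png")),
    DisableLivenessDetection = true
};
request.LiveImages.Add(new ImageData { Image = ByteString.CopyFrom(File.ReadAllBytes("live1.png")) });
request.LiveImages.Add(new ImageData { Image = ByteString.CopyFrom(File.ReadAllBytes("live2.png")) });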
The maximum API request size is 50 MB.
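If you submit several high-resolution images, it can be useful to check the payload size up front. A rough client-side pre-check might look like the following sketch; photoFile and liveImageFiles are hypothetical variables, System.IO and System.Linq are assumed, and the actual limit applies to the whole serialized request:
// Rough pre-check (approximation): sum the raw image file sizes before building the request.
long totalBytes = new FileInfo(photoFile).Length + liveImageFiles.Sum(f => new FileInfo(f).Length);
if (totalBytes > 50L * 1024 * 1024)
{
    throw new InvalidOperationException("Combined image payload exceeds the 50 MB request limit.");
}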
This API requires a valid JWT in the Authorization request header and accepts an optional reference number.
Authorization | Required Bearer authentication. Please refer to BWS API Authentication for a description of how to provide a valid JWT here. |
Reference-Number | Optional client-specific reference number, which will be added to the BWS bookkeeping as well as to the response header. You typically use this reference to link the resulting BWS bookkeeping entries with your logs. |
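If you prefer not to attach the token at channel level, both headers can also be supplied per call as gRPC metadata with the .NET client. The following sketch assumes a client has already been created and that jwt holds a valid token; the reference value is just an example:
// Sketch: pass the Authorization and Reference-Number headers as per-call metadata.
var headers = new Metadata
{
    { "Authorization", $"Bearer {jwt}" },
    { "Reference-Number", "my-reference-4711" }
};
var call = client.PhotoVerifyAsync(request, headers);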
On success the API returns a PhotoVerifyResponse message with the fields as follows:
status
The status of the job that processed the request.
errors
A list of errors that might have occurred while processing the request.
image_properties
The image properties calculated for each of the provided live images.
photo_properties
The image properties calculated for the provided ID photo.
verification_level
The actual level of accuracy the specified photo complies with. A decision about the similarity of the photo and the live images should be based on this response field. We recommend accepting only accuracy level 4 or higher (see the example after the table below).
The calculated accuracy level correlates with a false acceptance rate (FAR) as follows:
Level 5 | FAR of 0.0001% | very high probability that the persons on the images are the same - the identity can be considered approved |
Level 4 | FAR of 0.001% | good probability that the persons on the images are the same - recommended as the minimum accepted level |
Level 3 | FAR of 0.01% | moderate accuracy level |
Level 2 | FAR of 0.1% | a relatively high false acceptance rate - should be the lowest acceptable level when working with bad photo scans |
Level 1 | FAR of 0.5% | we do not recommend using this level; it is intended for really bad ID photos only |
Not recognized | Not recognized at all. | when reported with no additional error, the photo simply does not match the live image(s) |
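For illustration only, a client-side decision based on the recommended minimum level could look like the following C# sketch; the nested AccuracyLevel enum is exposed as PhotoVerifyResponse.Types.AccuracyLevel by the C# protobuf code generator:
// Sketch: accept the verification only at accuracy level 4 or higher, as recommended above.
bool accepted = response.VerificationLevel >= PhotoVerifyResponse.Types.AccuracyLevel.Level4;
Console.WriteLine(accepted ? "Identity approved." : "Identity not approved.");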
verification_score
An informative verification score (a value between 0.0 and 1.0) that reflects the verification level. The higher the score, the more likely the live images and ID photo belong to the same person. If this score is exactly 0.0, it has not been calculated.
live
If true, the provided images are supposed to be recorded from a live person. When this field is set to false and disable_liveness_detection was not set to true in the request, at least one error is reported that explains why the provided images are not considered to be recorded from a live person.
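When liveness detection has not been disabled, the live flag and the reported errors can be evaluated together, for example:
// Sketch: a negative liveness decision is accompanied by at least one explaining error.
if (!response.Live)
{
    foreach (var error in response.Errors)
    {
        Console.WriteLine($"Liveness check failed: {error}");
    }
}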
liveness_score
An informative liveness score (a value between 0.0 and 1.0) that reflects the confidence of the live decision. The higher the score, the more likely the live images have been recorded from a live person. If this score is exactly 0.0, it has not been calculated.
If the liveness detection or the photo verification could not be performed, at least one of the following errors is reported in the errors field:
FullyVisibleFace
The face found in an image is not fully visible; refer to the reported quality check results.
Besides the success return status code OK (0), this call might also return one of the following gRPC error status codes to indicate an error:
All successful BWS gRPC calls return a response header and a response trailer containing additional information about the request:
Response Header | |
jobid | The Job-ID (a GUID) that has been assigned to this BWS call. |
bws-version | The version of the BWS gRPC service. |
reference-number | An optional reference number as provided in the request header. |
date | The timestamp when the request has been received at the server. |
... | Other headers that might have been added by the server (NGINX, Kestrel, ...) that was handling the request. |
Response Trailer | |
response-time-ms | The time span in milliseconds the request spent at the BWS service. |
... | Other trailers, like exception trailers, which are added by the gRPC framework in case an RPC exception occurred. |
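With the .NET gRPC client these values can be read from the returned AsyncUnaryCall, for example the jobid header and the response-time-ms trailer; a minimal sketch:
// Sketch: read response header and trailer metadata after awaiting the call.
using var call = client.PhotoVerifyAsync(request);
PhotoVerifyResponse response = await call.ResponseAsync;
Metadata responseHeaders = await call.ResponseHeadersAsync;
Console.WriteLine($"Job-ID: {responseHeaders.GetValue("jobid")}");
Metadata responseTrailers = call.GetTrailers();
Console.WriteLine($"Response time: {responseTrailers.GetValue("response-time-ms")} ms");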
Here is a short example of how to call into the PhotoVerify gRPC API using a photo ID image and live images loaded from files.
Please refer to BWS API Authentication for a description of the methods CreateAuthenticatedChannel and GenerateToken.
The tooling package Grpc.Tools can be used to generate the C# assets from the bws.proto file.
Refer to the overview for gRPC on .NET to learn more about how to call into a gRPC service with a .NET client.
try
{
    using GrpcChannel channel = CreateAuthenticatedChannel(new Uri(options.Host), GenerateToken(options.ClientId, options.Key));
    var client = new BioIDWebService.BioIDWebServiceClient(channel);
    var request = new PhotoVerifyRequest
    {
        Photo = ByteString.CopyFrom(File.ReadAllBytes(options.Photo))
    };
    foreach (string file in options.Files)
    {
        request.LiveImages.Add(new ImageData { Image = ByteString.CopyFrom(File.ReadAllBytes(file)) });
    }
    var call = client.PhotoVerifyAsync(request);
    PhotoVerifyResponse response = await call.ResponseAsync.ConfigureAwait(false);
    // ...
    Console.WriteLine($"VerificationLevel: {response.VerificationLevel} ({response.VerificationScore})");
}
catch (RpcException ex)
{
    Console.Error.WriteLine($"gRPC error from calling service: {ex.Status.StatusCode} - '{ex.Status.Detail}'");
}
To generate the Java classes from the bws.proto file, use the Protobuf compiler (protoc) from a command prompt or terminal: protoc --java_out=OUTPUT_DIR bws.proto
For the gRPC client stubs, additionally add the gRPC Java plugin and use the command: protoc --java_out=OUTPUT_DIR --grpc-java_out=OUTPUT_DIR --plugin=protoc-gen-grpc-java=PATH_TO_PLUGIN bws.proto
For a detailed step-by-step guide on protobuf usage with Java, please refer to the Java detailed guide.
import java.util.concurrent.CompletableFuture;
import com.bioid.services.BioIDWebServiceGrpc;
import com.bioid.services.BioIDWebServiceGrpc.BioIDWebServiceStub;
import com.bioid.services.Bws.ImageData;
import com.bioid.services.Bws.PhotoVerifyRequest;
import com.bioid.services.Bws.PhotoVerifyResponse;
import com.google.protobuf.ByteString;
import io.grpc.ManagedChannel;
import io.grpc.StatusRuntimeException;
import io.grpc.stub.StreamObserver;
public void photoVerifyAsync(byte[] photo, byte[] liveImage1, byte[] liveImage2)
{
    try
    {
        String jwtToken = generateToken(options.clientId, options.secretKey, options.expireMinutes);
        ManagedChannel channel = createAuthenticatedChannel(options.host, jwtToken);
        BioIDWebServiceStub bwsClientAsync = BioIDWebServiceGrpc.newStub(channel);
        PhotoVerifyRequest verifyRequest = PhotoVerifyRequest.newBuilder()
                .setPhoto(ByteString.copyFrom(photo))
                .addLiveImages(ImageData.newBuilder().setImage(ByteString.copyFrom(liveImage1)).build())
                .addLiveImages(ImageData.newBuilder().setImage(ByteString.copyFrom(liveImage2)).build())
                .build();
        CompletableFuture<PhotoVerifyResponse> photoVerifyResult = new CompletableFuture<>();
        bwsClientAsync.photoVerify(verifyRequest, new StreamObserver<PhotoVerifyResponse>() {
            @Override
            public void onNext(PhotoVerifyResponse value)
            {
                photoVerifyResult.complete(value);
            }
            @Override
            public void onError(Throwable t)
            {
                photoVerifyResult.completeExceptionally(t);
            }
            @Override
            public void onCompleted()
            {
            }
        });
        PhotoVerifyResponse photoVerifyResponse = photoVerifyResult.get();
        System.out.printf("VerificationLevel: %s (%s)%n", photoVerifyResponse.getVerificationLevel(), photoVerifyResponse.getVerificationScore());
    } catch (StatusRuntimeException ex)
    {
        System.err.printf("gRPC error from calling service: %s - '%s'%n", ex.getStatus(), ex.getStatus().getDescription());
    } catch (Exception ex)
    {
        System.err.println("Error processing images: " + ex.getMessage());
    }
}
Using the protocol buffer compiler protoc from the Python gRPC tools (install with python -m pip install grpcio-tools), you can create the Python client code from the .proto service definition:
python -m grpc_tools.protoc --proto_path=. --python_out=. --grpc_python_out=. bws.proto
import grpc
import bws_pb2_grpc
import bws_pb2

token = GenerateToken(args.clientid, args.key, 10)
with CreateAuthenticatedChannel(args.host, token) as channel:
    stub = bws_pb2_grpc.BioIDWebServiceStub(channel)
    # Build the PhotoVerify request from the live image files and the ID photo.
    request = bws_pb2.PhotoVerifyRequest()
    for img_file in args.images:
        with open(img_file, 'rb') as f:
            request.live_images.add(image=f.read())
    with open(args.photo, 'rb') as f:
        request.photo = f.read()
    request.disable_liveness_detection = args.disablelive
    try:
        response = stub.PhotoVerify(request)
        print(response)
    except grpc.RpcError as rpc_error:
        print("Received error: ", rpc_error)