Commit 2e657f7

first commit

Your Name committed Sep 2, 2023
1 parent 6c4493a commit 2e657f7

Showing 47 changed files with 4,078 additions and 1 deletion.
4 changes: 4 additions & 0 deletions .gitignore
@@ -1 +1,5 @@
venv/
__pycache__/
videos/
weights/
images/
58 changes: 57 additions & 1 deletion README.md
@@ -1 +1,57 @@
# eKYC-Challenge-Response
# eKYC

eKYC (Electronic Know Your Customer) is a project designed to electronically verify the identity of customers. It is essential for ensuring authenticity and security in online transactions.

![](resources/ekyc.jpg)

As an electronic identification and verification solution, eKYC enables banks to identify customers 100% online, relying on biometric information and artificial intelligence (AI) for customer recognition, with no need for the face-to-face interaction required by the traditional process.

## eKYC flow
This README provides an overview of the eKYC flow, which comprises three main components: Upload Document (ID Card), Face Verification, and Liveness Detection.

![](resources/flow.jpg)

#### 1. Upload Document (ID Card)

Initially, users are required to upload an image of their ID card. This step is essential for extracting facial information from the ID card photo.

#### 2. Face Verification

Following the document upload, we verify that the user matches the individual pictured on the ID card. Here's how we do it (see the sketch after this list):

- **Step 1 - Still Face Capture**: Users are prompted to hold their face steady in front of the camera.

- **Step 2 - Face Matching (Face Verification)**: Our system utilizes advanced facial recognition technology to compare the live image of the user's face with the photo on the ID card.
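
As a rough illustration, a verification round can be driven with the `verify` helper from `face_verification.py`, added in this commit (a minimal sketch; the image paths are placeholders, and in the real flow the live image comes from the webcam):

```python
import torch

from face_verification import verify
from facenet.models.mtcnn import MTCNN
from utils.functions import get_image
from verification_models import VGGFace

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
detector = MTCNN(device = device)
verifier = VGGFace.load_model(device = device)

id_card_face = get_image('images/id_card.jpg')    # placeholder path
live_face = get_image('images/live_frame.jpg')    # placeholder path

# True if the live face matches the face on the ID card.
print(verify(live_face, id_card_face, detector, verifier))
```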

#### 3. Liveness Detection

To ensure the user's physical presence during the eKYC process and to prevent the use of static images or videos, we implement Liveness Detection. This step involves the following challenges to validate the user's authenticity:

- **Step 3 - Liveness Challenges**: Users are required to perform specific actions or challenges, which may include blinking, smiling, or turning their head.

- **Step 4 - Successful Liveness Verification**: Successful completion of the liveness challenges indicates the user's authenticity, confirming a successful eKYC process.

Together, these steps (ID card upload, Face Verification, and Liveness Detection) comprehensively verify the user's identity, enhancing security and reducing the risk of fraudulent attempts. A minimal sketch of a single liveness round is shown below.
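
The sketch below combines the challenge-response helpers from `challenge_response.py` (also part of this commit). It assumes the detector classes are importable from the `liveness_detection` modules, as the imports in `challenge_response.py` suggest:

```python
import cv2 as cv

from challenge_response import get_challenge_and_question, result_challenge_response
from facenet.models.mtcnn import MTCNN
from liveness_detection.blink_detection import BlinkDetector
from liveness_detection.emotion_prediction import EmotionPredictor
from liveness_detection.face_orientation import FaceOrientationDetector

mtcnn = MTCNN()
# Order matters: [blink_model, face_orientation_model, emotion_model].
models = [BlinkDetector(), FaceOrientationDetector(), EmotionPredictor()]

challenge, question = get_challenge_and_question()
video = cv.VideoCapture(0)

passed = False
while not passed:
    ret, frame = video.read()
    if not ret:
        break
    rgb = cv.cvtColor(frame, cv.COLOR_BGR2RGB)  # the helpers expect RGB input
    passed = result_challenge_response(rgb, challenge, question, models, mtcnn)

video.release()
print("Liveness check passed:", passed)
```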

## Installation
1. Clone the repository
```bash
git clone https://github.com/manhcuong02/eKYC
cd eKYC
```
2. Install the required dependencies
```bash
pip install -r requirements.txt
```

## Usage
1. Download the weights of the [pretrained VGGFace models](https://drive.google.com/drive/folders/1JwC02IGWyAh8_rn55vmUz3tsZnBz06KM?usp=drive_link) from Google Drive, then place them in the `verification_models/weights` directory.
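
Alternatively, the folder can be fetched from the command line with `gdown` (optional; assumes `gdown` is installed and the folder is publicly shared):
```bash
pip install gdown
gdown --folder "https://drive.google.com/drive/folders/1JwC02IGWyAh8_rn55vmUz3tsZnBz06KM" -O verification_models/weights
```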

2. Launch the PyQt5 interface:
```bash
python3 main.py
```

## Results

Coming soon.
140 changes: 140 additions & 0 deletions challenge_response.py
@@ -0,0 +1,140 @@
import random

import cv2 as cv
import numpy as np
import torch

from facenet.models.mtcnn import MTCNN
from liveness_detection.blink_detection import *
from liveness_detection.emotion_prediction import *
from liveness_detection.face_orientation import *
from utils.functions import extract_face


def random_challenge():
    return random.choice(
        ['smile', 'surprise', 'blink eyes', 'right', 'left']
    )

def get_question(challenge):
    """
    Generate a question or instruction based on the challenge.

    Parameters:
        challenge (str): The current challenge, which can be 'smile', 'surprise',
            'right', 'left', 'front', or 'blink eyes'.

    Returns:
        str or list: A question or instruction related to the challenge.
            If the challenge is 'blink eyes', returns a list containing the
            instruction and the required number of blinks.
    """
    if challenge in ['smile', 'surprise']:
        return "Please put on a {} expression".format(challenge)

    elif challenge in ['right', 'left', 'front']:
        return "Please turn your face to the {}".format(challenge)

    elif challenge == 'blink eyes':
        num = random.randint(2, 4)
        return ['Blink your eyes {} times'.format(num), num]

def get_challenge_and_question():
    challenge = random_challenge()
    question = get_question(challenge)
    return challenge, question

def blink_response(image, box, question, model: BlinkDetector):
    thresh = question[1]
    blink_success = model.eye_blink(image, box, thresh)
    return blink_success


def face_response(challenge: str, landmarks: list, model: FaceOrientationDetector):
    orientation = model.detect(landmarks)
    return orientation == challenge


def emotion_response(face, challenge: str, model: EmotionPredictor):
    emotion = model.predict(face)
    return emotion == challenge

def result_challenge_response(frame: np.ndarray, challenge: str, question, model: list, mtcnn: MTCNN):
    '''
    Process the response to a challenge based on the input frame.

    Parameters:
        frame (np.ndarray): RGB color image.
        challenge (str): The current challenge, which can be 'smile', 'surprise',
            'right', 'left', 'front', or 'blink eyes'.
        question: A question or instruction related to the challenge.
        model (list): List of models used, including [blink_model, face_orientation_model, emotion_model].
        mtcnn (MTCNN): MTCNN object used for face extraction.

    Returns:
        bool: The result of the challenge (True if correct, False if incorrect).
    '''
    face, box, landmarks = extract_face(frame, mtcnn, padding = 10)
    if box is not None:
        if challenge in ['smile', 'surprise']:
            isCorrect = emotion_response(face, challenge, model[2])
        elif challenge in ['right', 'left', 'front']:
            isCorrect = face_response(challenge, landmarks, model[1])
        elif challenge == 'blink eyes':
            isCorrect = blink_response(frame, box, question, model[0])
        return isCorrect
    return False

if __name__ == '__main__':

    video = cv.VideoCapture(0)

    device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

    # Pass the device explicitly so detection runs on the GPU when available.
    mtcnn = MTCNN(device = device)
    blink_detector = BlinkDetector()
    emotion_predictor = EmotionPredictor()
    face_orientation_detector = FaceOrientationDetector()

    model = [blink_detector, face_orientation_detector, emotion_predictor]

    challenge, question = get_challenge_and_question()
    challengeIsCorrect = False

    count = 0
    while True:
        ret, frame = video.read()
        if not ret:
            break

        frame = cv.flip(frame, 1)
        if challengeIsCorrect is False:
            rgb_frame = cv.cvtColor(frame, cv.COLOR_BGR2RGB)
            challengeIsCorrect = result_challenge_response(rgb_frame, challenge, question, model, mtcnn)

        # For blink challenges the question is [text, num_blinks]; display only the text.
        text = question[0] if isinstance(question, list) else question
        cv.putText(frame, "Question: {}".format(text), (20, 20), cv.FONT_HERSHEY_COMPLEX, 0.5, (0, 0, 255), 1)

        cv.imshow("", frame)
        if cv.waitKey(1) & 0xFF == ord('q'):
            break

        count += 1

        # After a challenge has been passed and at least 100 frames have elapsed,
        # issue a new random challenge.
        if challengeIsCorrect is True and count >= 100:
            challenge, question = get_challenge_and_question()
            print(question)
            challengeIsCorrect = False
            count = 0
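
`utils/functions.py` is not part of this section of the diff, so the exact behavior of `extract_face` is not visible here. For orientation only, a hypothetical sketch of what it plausibly does with facenet-pytorch's MTCNN is below; the return signature matches the call sites above, but the interpretation of `padding` (pixels here, though `face_verification.py` passes 1.5, which may instead be a scale factor) is an assumption:

```python
import numpy as np

from facenet.models.mtcnn import MTCNN

def extract_face_sketch(image: np.ndarray, mtcnn: MTCNN, padding: int = 0):
    # MTCNN.detect returns (boxes, probs, landmarks) when landmarks=True.
    boxes, probs, landmarks = mtcnn.detect(image, landmarks = True)
    if boxes is None:
        return None, None, None

    h, w = image.shape[:2]
    x1, y1, x2, y2 = boxes[0].astype(int)
    # Expand the box by `padding` pixels on each side, clamped to the image.
    x1, y1 = max(x1 - padding, 0), max(y1 - padding, 0)
    x2, y2 = min(x2 + padding, w), min(y2 + padding, h)

    face = image[y1:y2, x1:x2]
    return face, boxes[0], landmarks[0]
```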
92 changes: 92 additions & 0 deletions face_verification.py
@@ -0,0 +1,92 @@
import cv2 as cv
import torch
from PIL import Image

from facenet.models.mtcnn import MTCNN
from utils.distance import *
from utils.functions import *
from verification_models import VGGFace


def face_matching(face1, face2, model: torch.nn.Module, distance_metric_name, model_name, device = 'cpu'):
    """
    Perform face matching to verify the similarity of two faces using a given distance metric and model.

    Parameters:
        face1: The first face image for comparison.
        face2: The second face image for comparison.
        model (torch.nn.Module): The face recognition model.
        distance_metric_name: The name of the distance metric to be used ('cosine', 'L1', or 'euclidean').
        model_name: The name of the face recognition model.
        device (str, optional): The device on which the model should run (default is 'cpu').

    Returns:
        bool: True if the faces are considered a match, False otherwise.
    """
    distance_metric = {
        "cosine": Cosine_Distance,
        "L1": L1_Distance,
        "euclidean": Euclidean_Distance,
    }
    distance_func = distance_metric.get(distance_metric_name, Euclidean_Distance)

    # Use the device the model actually lives on, overriding the `device` argument.
    device = model.device()

    face1 = face_transform(face1, model_name = model_name, device = device)
    face2 = face_transform(face2, model_name = model_name, device = device)

    result1 = model(face1)
    result2 = model(face2)

    # Identity-classification comparison: the faces match when the model
    # assigns both of them the same class ID.
    id1 = torch.argmax(result1, dim = 1)
    id2 = torch.argmax(result2, dim = 1)
    return id1 == id2

    # Alternative, currently unused: compare the embedding distance against a
    # per-metric threshold instead of the predicted class IDs.
    # dis = distance_func(result1, result2)
    # threshold = findThreshold(model_name = model_name, distance_metric = distance_metric_name)
    # return dis < threshold

def verify(img1: np.ndarray, img2: np.ndarray, detector_model: MTCNN, verifier_model, model_name = 'VGG-Face1'):
    """
    Verify the similarity between two face images.

    Parameters:
        img1 (np.ndarray): A numpy RGB image containing the first face.
        img2 (np.ndarray): A numpy RGB image containing the second face.
        detector_model (MTCNN): The face detection model used to locate faces in the images.
        verifier_model: The face verification model used for similarity comparison.
        model_name (str, optional): The name of the verification model (default is 'VGG-Face1').

    Returns:
        bool: True if the faces are verified to be similar, False otherwise.
    """
    face1, box1, landmarks1 = extract_face(img1, detector_model, padding = 1.5)
    face2, box2, landmarks2 = extract_face(img2, detector_model, padding = 1.5)

    verified = face_matching(face1, face2, verifier_model, distance_metric_name = 'euclidean', model_name = model_name)
    return verified

if __name__ == '__main__':

    filename1 = "images/thanh2.png"
    filename2 = "images/thanh4.jpg"

    image1 = get_image(filename1)
    image2 = get_image(filename2)

    device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

    detector_model = MTCNN(device = device)
    verifier_model = VGGFace.load_model(device = device)

    results = verify(image1, image2, detector_model, verifier_model)
    print(results)
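
`utils/distance.py` is also not shown in this diff. For reference, a minimal sketch of what the three metric functions conventionally compute over embedding tensors (an assumption about this repository, not its exact code):

```python
import torch
import torch.nn.functional as F

def Cosine_Distance(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    # 1 - cosine similarity: 0 for identical directions, up to 2 for opposite ones.
    return 1 - F.cosine_similarity(x, y, dim = -1)

def L1_Distance(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    # Manhattan distance: sum of absolute coordinate differences.
    return (x - y).abs().sum(dim = -1)

def Euclidean_Distance(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    # Standard L2 distance between embedding vectors.
    return (x - y).pow(2).sum(dim = -1).sqrt()
```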
21 changes: 21 additions & 0 deletions facenet/LICENSE.md
@@ -0,0 +1,21 @@
MIT License

Copyright (c) 2019 Timothy Esler

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
11 changes: 11 additions & 0 deletions facenet/__init__.py
@@ -0,0 +1,11 @@
from .models.inception_resnet_v1 import InceptionResnetV1
from .models.mtcnn import MTCNN, PNet, RNet, ONet, prewhiten, fixed_image_standardization
from .models.utils.detect_face import extract_face
from .models.utils import training

import warnings
warnings.filterwarnings(
    action="ignore",
    message="This overload of nonzero is deprecated:\n\tnonzero()",
    category=UserWarning
)
3 changes: 3 additions & 0 deletions facenet/data/.gitignore
@@ -0,0 +1,3 @@
2018*
*.json
profile.txt
Binary file added facenet/data/facenet-pytorch-banner.png
Binary file added facenet/data/multiface.jpg
Binary file added facenet/data/multiface_detected.png
Binary file added facenet/data/onet.pt
Binary file added facenet/data/pnet.pt
Binary file added facenet/data/rnet.pt
