
tanjiarui/facial-emotion-recognition


Documentation

demo

Getting Started

follow the steps below to set up the project

install install all dependencies on Linux
sudo apt update ; sudo apt -y dist-upgrade
sudo apt install -y cmake python3-opencv
pip3 install --upgrade pip
pip3 install -r requirement
cd ./efficientdet
python3 setup.py build_ext --inplace # compile the function compute_overlap
mv efficientdet/utils/compute_overlap.cpython* utils
cd ..
configuration
  1. download the CAER dataset. the static version (CAER-S) is applied in this project

  2. edit config.py

Preprocessing

annotation since the dataset only labels the emotion, a script to annotate faces is necessary. only one face per image carries the labeled emotion, but an image may contain multiple faces, so a rule is needed to pick out the target face. in this case, the closer a face is to the image center, the more likely it is the target
python3 bounding\ box.py
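the center-proximity rule above can be sketched as follows (the function and box format are illustrative, not the actual code of bounding box.py):

```python
def pick_target_face(boxes, image_width, image_height):
    """Pick the detected face closest to the image center.

    boxes: list of (x1, y1, x2, y2) face bounding boxes.
    Returns the box whose center is nearest the image center.
    """
    cx, cy = image_width / 2, image_height / 2

    def distance_to_center(box):
        x1, y1, x2, y2 = box
        # squared distance from the box center to the image center
        bx, by = (x1 + x2) / 2, (y1 + y2) / 2
        return (bx - cx) ** 2 + (by - cy) ** 2

    return min(boxes, key=distance_to_center)
```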
json encoder the output of the annotation step is CSV, while the data generator loads JSON; this script bridges the two formats
python3 encoding.py
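a minimal sketch of such a CSV-to-JSON conversion (the column names here are hypothetical; encoding.py defines the real schema):

```python
import csv
import json

def csv_to_json(csv_path, json_path):
    """Convert CSV annotations (one face box per row) to a JSON list."""
    records = []
    with open(csv_path, newline='') as f:
        for row in csv.DictReader(f):
            records.append({
                'image': row['image'],
                'box': [int(row[k]) for k in ('x1', 'y1', 'x2', 'y2')],
                'label': row['label'],
            })
    with open(json_path, 'w') as f:
        json.dump(records, f)
```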

Modeling

training

EfficientDet-D4 (phi = 4) is applied for this application

python3 train.py
evaluation

the evaluation follows the COCO metrics

python3 evaluation.py
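the COCO metrics report AP and AR at different IoU thresholds; a minimal IoU computation between two boxes looks like this (a pure-Python sketch, independent of the compiled compute_overlap used during training):

```python
def iou(box_a, box_b):
    """Intersection over union of two (x1, y1, x2, y2) boxes."""
    # intersection rectangle
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    # union = sum of areas minus intersection
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```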
explainable model

the model's features are explained by visualizing a selected image as a Grad-CAM heatmap

python3 grad\ cam.py
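the core of Grad-CAM can be sketched with NumPy, given the convolutional feature maps and the gradients of the target class score with respect to them (grad cam.py obtains both from the actual model; this is only an illustration of the formula):

```python
import numpy as np

def grad_cam_heatmap(activations, gradients):
    """Compute a Grad-CAM heatmap.

    activations: (H, W, C) feature maps of the chosen conv layer.
    gradients:   (H, W, C) gradients of the class score w.r.t. them.
    Returns an (H, W) heatmap normalized to [0, 1].
    """
    # channel weights: global average of the gradients
    weights = gradients.mean(axis=(0, 1))                     # shape (C,)
    # weighted sum of the feature maps, then ReLU
    heatmap = np.maximum((activations * weights).sum(axis=-1), 0)
    if heatmap.max() > 0:
        heatmap /= heatmap.max()
    return heatmap
```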

Deployment

convert the model export the trained model to an optimized format for faster inference
python3 model\ convert.py
inference

inference.py applies facial emotion recognition over a video

python3 inference.py
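the per-video flow can be sketched as a loop over frames with the detector and classifier stubbed out (the function and its parameters are hypothetical; inference.py defines the real pipeline):

```python
def run_video_inference(frames, detect_faces, classify_emotion):
    """Run facial emotion recognition over an iterable of frames.

    frames: iterable of images (e.g. read from cv2.VideoCapture).
    detect_faces(frame) -> list of (x1, y1, x2, y2) face boxes.
    classify_emotion(frame, box) -> emotion label for that face.
    Yields (frame_index, box, emotion) for every detected face.
    """
    for i, frame in enumerate(frames):
        for box in detect_faces(frame):
            yield i, box, classify_emotion(frame, box)
```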

Improvement

According to the mAP over the test set, the model fits well, yet it performs noticeably worse during inference. try setting a greater phi to lift the model capacity

Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.725
Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.784
Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.782
Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.388
Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.673
Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.744
Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.847
Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.850
Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.850
Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.625
Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.812
Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.863
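raising the capacity as suggested would mean editing the phi setting in config.py; the exact variable name may differ, so this is only a hypothetical excerpt:

```python
# config.py (hypothetical excerpt): phi selects the EfficientDet variant D0-D7;
# a larger phi means a bigger backbone, input resolution, and BiFPN
phi = 5  # previously 4
```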