This project recreates the Thatcher Effect, a psychological phenomenon in which local changes to facial features are difficult to detect in an upside-down face, despite identical changes being obvious in an upright face.
The Thatcher Effect, also known as the Thatcher Illusion, is named after British Prime Minister Margaret Thatcher, whose image was used in the original study. In this effect:
- A face is modified by inverting its eyes and mouth.
- The entire face is then turned upside down.
When viewed upside down, the face might appear relatively normal.
original_flipped | thatcherized
--- | ---
![]() | ![]()
However, when the image is rotated to its upright position, the distortions become strikingly apparent.
original | thatcherized_flipped
--- | ---
![]() | ![]()
My implementation uses advanced computer vision techniques to apply the Thatcher Effect to any facial image (a simplified sketch of the core flip step follows this list):
- Face Detection: We use the RetinaFace model to detect faces in the input image.
- Feature Segmentation: The Segment Anything Model (SAM) is used to precisely segment the eyes and mouth.
- Feature Inversion: The segmented eyes and mouth are flipped vertically and horizontally.
- Feature Application: The inverted features are overlaid onto the original image.
- Full Image Inversion: The entire image is flipped to complete the effect.
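The heart of the pipeline is the flip-and-overlay step. Below is a minimal, self-contained sketch of that step in plain OpenCV: it assumes the eye and mouth bounding boxes are already known (in the real script they come from RetinaFace and SAM), and the file names and box coordinates are placeholders rather than the script's actual values.

```python
import cv2

def thatcherize(image, feature_boxes):
    """Flip each feature region in place, then flip the whole image.

    feature_boxes: list of (x, y, w, h) boxes for the eyes and mouth.
    In the actual pipeline these come from RetinaFace + SAM; here they
    are assumed to be given.
    """
    out = image.copy()
    for (x, y, w, h) in feature_boxes:
        region = out[y:y + h, x:x + w]
        # flipCode=-1 flips the region both vertically and horizontally.
        out[y:y + h, x:x + w] = cv2.flip(region, -1)
    # Flip the full image upside down to complete the illusion.
    return cv2.flip(out, -1)

if __name__ == "__main__":
    img = cv2.imread("face.jpg")  # placeholder input path
    # Placeholder boxes: left eye, right eye, mouth.
    boxes = [(220, 180, 80, 40), (340, 180, 80, 40), (260, 320, 120, 60)]
    cv2.imwrite("thatcherized_flipped.jpg", thatcherize(img, boxes))
```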
- Clone this repository:

  git clone https://github.com/your-username/thatcher-effect.git
  cd thatcher-effect

- Install the required dependencies:

  pip install opencv-python numpy torch torchvision facexlib segment-anything

- Download the SAM checkpoint:

  wget https://dl.fbaipublicfiles.com/segment_anything/sam_vit_h_4b8939.pth
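A quick way to confirm the checkpoint is usable is to load it with the `segment-anything` package; the `vit_h` registry key corresponds to the `sam_vit_h_4b8939.pth` file downloaded above.

```python
import torch
from segment_anything import sam_model_registry, SamPredictor

# Build the ViT-H SAM model from the downloaded checkpoint.
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
sam.to("cuda" if torch.cuda.is_available() else "cpu")

# The predictor wraps the model for point- or box-prompted segmentation.
predictor = SamPredictor(sam)
print("SAM checkpoint loaded")
```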
The script is divided into two main parts: segmenting the features and applying the Thatcher effect. This separation makes debugging and experimentation easier.
- Run the script:
python app.py
The script produces several outputs:
- Segmented features (left eye, right eye, mouth) with transparent backgrounds
- A JSON file containing feature location data
- The final Thatcherized image
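The exact structure of the location JSON is defined in the script; as a loose illustration (the file name and keys here are assumptions, not the script's guaranteed output), it can be inspected like this:

```python
import json

# Assumed output file name; adjust to whatever the script actually writes.
with open("feature_locations.json") as f:
    locations = json.load(f)

# Print whatever feature entries the file contains, e.g. bounding boxes
# for the left eye, right eye, and mouth.
for name, data in locations.items():
    print(name, data)
```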
You can adjust several parameters in the script to fine-tune the effect (a rough sketch of how they might be used follows this list):
- `box_margin` in the `segment_feature` function: controls the size of the segmentation box.
- `scale_factor` in the `apply_thatcher_effect` function: adjusts the size increase of features before application.
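For intuition, here is a rough sketch of how these two parameters might enter the pipeline. The function names echo the ones above, but the bodies are simplified assumptions rather than the script's actual code.

```python
import cv2

def expand_box(x, y, w, h, box_margin=20):
    # A larger box_margin gives SAM more context around the detected feature,
    # at the cost of possibly segmenting surrounding skin as well.
    return x - box_margin, y - box_margin, w + 2 * box_margin, h + 2 * box_margin

def scale_feature(feature, scale_factor=1.1):
    # A scale_factor slightly above 1.0 enlarges the cut-out feature before it
    # is pasted back, helping it fully cover the original eyes or mouth.
    h, w = feature.shape[:2]
    return cv2.resize(feature, (int(w * scale_factor), int(h * scale_factor)))
```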
If you encounter issues:
- Ensure all dependencies are correctly installed.
- Check that the input image path and SAM checkpoint path are correct.
- For segmentation issues, try adjusting the `box_margin` parameter.
- For application issues, experiment with different `scale_factor` values.
Contributions to improve the implementation are welcome. Please feel free to submit issues or pull requests.