Commit 2f2bce8 (0 parents)

very very first commit, WIP git remote add origin git@github.com:cquest/sgblur.git

19 files changed: +560 −0

ALGORITHMS.md (+60)

# Blur algorithms

The GeoVisio blurring module offers various blur strategies. The strategy can be selected using the `STRATEGY` environment variable, which accepts the values `FAST`, `COMPROMISE`, `QUALITATIVE` and `LEGACY`.

If you use __Docker__, you may want to map the container folder `/opt/blur/models` to a host folder with enough space to store the blurring models.

## Options

Several options are available and can be defined using environment variables.

- `STRATEGY` (optional): the algorithm to use for blurring, depending on the level of speed and precision you require. Values are:
  - `DISABLE` completely disables blurring
  - `FAST` is the fastest algorithm, but should be considered unreliable on pictures with many persons, or on large pictures like 360° images with persons in the background
  - `COMPROMISE` should be reliable enough for all kinds of images, but the blur may be jagged
  - `QUALITATIVE` takes a lot more time to complete but achieves good detouring
  - `LEGACY` theoretically gives the best results but is *way* slower than every other method (this is the blur algorithm used in GeoVisio versions <= 1.2.0)
- `WEBP_METHOD`: quality/speed trade-off for WebP encoding of picture derivatives (0=fast, 6=slower but better; 6 by default).
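For instance, the options above can be set as plain environment variables before launching the blurring tool (a sketch; the invocation in the comment follows the README and may differ in your setup):

```shell
# Choose the blur strategy and the WebP quality/speed trade-off
export STRATEGY=COMPROMISE   # FAST | COMPROMISE | QUALITATIVE | LEGACY | DISABLE
export WEBP_METHOD=4         # 0 = fastest encoding, 6 = slowest/best (the default)

# Then run the blurring tool, e.g.:
#   python src/main.py input.jpg output.jpg
```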
## Development notes

This part presents more information about the blur strategy implementations; it is a recommended-but-not-mandatory read 😉

### General blur pipeline

The blurring pipeline is as follows:

1. Get an input image
2. Split it into multiple quads, whose sizes correspond to the input size of the object detection AI model (the inferer)
3. Give the quads to the inferer
4. Get back boxes that *may* contain cars, persons...
5. Place them on a new single image, whose size is as close as possible to the input size of the semantic segmentation AI model (the segmenter)
6. Give that image to the segmenter
7. Get a blur mask back (containing a blur mask for each box)
8. Arrange the blur masks back on the original image, using the inferer's boxes
9. Apply the blur mask to the original image
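As an illustration of step 2, splitting an image into quads of the inferer's input size amounts to tiling it and clamping the last row and column at the image borders. This helper is a minimal sketch, not the project's actual code:

```python
def split_into_quads(width, height, tile):
    """Return (left, top, right, bottom) boxes covering a width x height
    image with tiles of at most tile x tile pixels."""
    boxes = []
    for top in range(0, height, tile):
        for left in range(0, width, tile):
            boxes.append((left, top,
                          min(left + tile, width),
                          min(top + tile, height)))
    return boxes

# A 1000x600 image with a 512 px model input yields a 2x2 grid of boxes,
# the right column and bottom row being smaller than 512x512
print(split_into_quads(1000, 600, 512))
```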
The inferer is [YOLOv6](https://github.com/meituan/YOLOv6) and the segmenter depends on the segmentation strategy.

When using the `FAST` strategy, steps 2-5 and 8 are skipped and the inferer is not used.

On step 6 (whether or not steps 2-5 took place), if the image is larger than the segmenter's input size, it is scaled down to fit in width *or* height and fed to the segmenter in one or more batches.
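The downscale in step 6 can be sketched as choosing a single scale factor so that the larger dimension fits the segmenter's input size (an illustrative helper; the real implementation may differ):

```python
def fit_scale(width, height, target):
    """Scale factor shrinking an image so its larger side equals target.
    Never upscales images that already fit."""
    return min(1.0, target / max(width, height))

# A 4096x2048 panorama fed to a 1024 px segmenter is scaled by 0.25
scale = fit_scale(4096, 2048, 1024)
print(int(4096 * scale), int(2048 * scale))  # 1024 512
```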
### Logs

Several log messages coming from Tensorflow, Pytorch and YOLO cannot be easily suppressed:\
`INFO: Created TensorFlow Lite XNNPACK delegate for CPU.`, see [a related issue](https://github.com/google/mediapipe/issues/2354). Fixing it requires building Tensorflow from source.\
`...UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument...` is a warning generated by YOLOv6, see [a fix](https://github.com/pytorch/pytorch/issues/50276). To fix it, add `, indexing='ij'` to the calls to `torch.meshgrid()` in the files `effidehead.py` and `loss.py` of YOLOv6's source.

### GPU/TPU usage

To run the segmenter on the GPU, a CUDA-capable GPU is required; run `pip uninstall tensorflow && pip install tensorflow-gpu`. For more information, see [the Tensorflow GPU guide](https://www.tensorflow.org/guide/gpu).

To run the inferer on the GPU, refer to the [Pytorch download page](https://pytorch.org/) to find the right Pytorch version with CUDA capabilities.

Both the inferer and the segmenter should automatically switch to the GPU, but this __hasn't been tested__ in a production environment. If you see any issues, please [let us know](https://gitlab.com/PanierAvide/geovisio/-/issues).

For TPU usage, refer to the [Tensorflow TPU guide](https://www.tensorflow.org/guide/tpu). Pytorch seems to be compatible with TPUs, but it remains to be verified whether YOLO's architecture also is.

### On YOLOv6 updates

Current GeoVisio code and documentation pin YOLOv6 to its 0.2.0 release, because trained models are not compatible across versions and must be explicitly referenced in some parts of the code. When updating, make sure to update the model download links and the YOLOv6 release tag everywhere necessary in the GeoVisio code.

CODE_OF_CONDUCT.md (+134)

# Code of Conduct

## Our Pledge

We as members, contributors, and leaders pledge to make participation in our
community a harassment-free experience for everyone, regardless of age, body
size, visible or invisible disability, ethnicity, sex characteristics, gender
identity and expression, level of experience, education, socio-economic status,
nationality, personal appearance, race, caste, color, religion, or sexual
identity and orientation.

We pledge to act and interact in ways that contribute to an open, welcoming,
diverse, inclusive, and healthy community.

## Our Standards

Examples of behavior that contributes to a positive environment for our
community include:

* Demonstrating empathy and kindness toward other people
* Being respectful of differing opinions, viewpoints, and experiences
* Giving and gracefully accepting constructive feedback
* Accepting responsibility and apologizing to those affected by our mistakes,
  and learning from the experience
* Focusing on what is best not just for us as individuals, but for the overall
  community

Examples of unacceptable behavior include:

* The use of sexualized language or imagery, and sexual attention or advances of
  any kind
* Trolling, insulting or derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or email address,
  without their explicit permission
* Other conduct which could reasonably be considered inappropriate in a
  professional setting

## Enforcement Responsibilities

Community leaders are responsible for clarifying and enforcing our standards of
acceptable behavior and will take appropriate and fair corrective action in
response to any behavior that they deem inappropriate, threatening, offensive,
or harmful.

Community leaders have the right and responsibility to remove, edit, or reject
comments, commits, code, wiki edits, issues, and other contributions that are
not aligned to this Code of Conduct, and will communicate reasons for moderation
decisions when appropriate.

## Scope

This Code of Conduct applies within all community spaces, and also applies when
an individual is officially representing the community in public spaces.
Examples of representing our community include using an official e-mail address,
posting via an official social media account, or acting as an appointed
representative at an online or offline event.

## Enforcement

Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported to the community leaders responsible for enforcement at [email protected]

All complaints will be reviewed and investigated promptly and fairly.

All community leaders are obligated to respect the privacy and security of the
reporter of any incident.

## Enforcement Guidelines

Community leaders will follow these Community Impact Guidelines in determining
the consequences for any action they deem in violation of this Code of Conduct:

### 1. Correction

**Community Impact**: Use of inappropriate language or other behavior deemed
unprofessional or unwelcome in the community.

**Consequence**: A private, written warning from community leaders, providing
clarity around the nature of the violation and an explanation of why the
behavior was inappropriate. A public apology may be requested.

### 2. Warning

**Community Impact**: A violation through a single incident or series of
actions.

**Consequence**: A warning with consequences for continued behavior. No
interaction with the people involved, including unsolicited interaction with
those enforcing the Code of Conduct, for a specified period of time. This
includes avoiding interactions in community spaces as well as external channels
like social media. Violating these terms may lead to a temporary or permanent
ban.

### 3. Temporary Ban

**Community Impact**: A serious violation of community standards, including
sustained inappropriate behavior.

**Consequence**: A temporary ban from any sort of interaction or public
communication with the community for a specified period of time. No public or
private interaction with the people involved, including unsolicited interaction
with those enforcing the Code of Conduct, is allowed during this period.
Violating these terms may lead to a permanent ban.

### 4. Permanent Ban

**Community Impact**: Demonstrating a pattern of violation of community
standards, including sustained inappropriate behavior, harassment of an
individual, or aggression toward or disparagement of classes of individuals.

**Consequence**: A permanent ban from any sort of public interaction within the
community.

## Attribution

This Code of Conduct is adapted from the [Contributor Covenant][homepage],
version 2.1, available at
[https://www.contributor-covenant.org/version/2/1/code_of_conduct.html][v2.1].

Community Impact Guidelines were inspired by
[Mozilla's code of conduct enforcement ladder][Mozilla CoC].

For answers to common questions about this code of conduct, see the FAQ at
[https://www.contributor-covenant.org/faq][FAQ]. Translations are available at
[https://www.contributor-covenant.org/translations][translations].

[homepage]: https://www.contributor-covenant.org
[v2.1]: https://www.contributor-covenant.org/version/2/1/code_of_conduct.html
[Mozilla CoC]: https://github.com/mozilla/diversity
[FAQ]: https://www.contributor-covenant.org/faq
[translations]: https://www.contributor-covenant.org/translations

LICENSE (+21)

MIT License

Copyright (c) 2022 Adrien Pavie

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

README.md (+150)

# GeoVisio Blurring Algorithms

![GeoVisio logo](https://gitlab.com/geovisio/api/-/blob/52df05fd9c95f1a929b32bee9735505eeeddc7e8/images/logo_full.png)

[GeoVisio](https://gitlab.com/geovisio) is a complete solution for storing and __serving your own geolocated pictures__ (like [StreetView](https://www.google.com/streetview/) / [Mapillary](https://mapillary.com/)).

This repository only contains __the blurring algorithms and API__. It can be used completely independently of the other components. [All other components are listed here](https://gitlab.com/geovisio).

## Install

### System dependencies

Some algorithms (compromise and qualitative) need the following system dependencies:

- ffmpeg
- libsm6
- libxext6

You can install them through your package manager, for example on Ubuntu:

```bash
sudo apt install ffmpeg libsm6 libxext6
```

### Retrieve code

You can download the code from this repository with git clone:

```bash
git clone https://gitlab.com/geovisio/blurring.git
cd blurring/
```

### Other dependencies

We use [Git Submodules](https://git-scm.com/book/en/v2/Git-Tools-Submodules) to manage some of our dependencies. Run the following command to get them:

```bash
git submodule update --init
```

We also use Pip to handle Python dependencies. You can create a virtual environment first:

```bash
python -m venv env
source ./env/bin/activate
```

Then, depending on whether you want to use the API, the command-line scripts, or both, run these commands:

```bash
pip install -r requirements-bin.txt # For CLI
pip install -r requirements-api.txt # For API
```

If at some point you're lost or need help, you can contact us through [issues](https://gitlab.com/geovisio/blurring/-/issues) or by [email](mailto:[email protected]).

## Usage

### Command-line interface

All available commands are detailed in the [USAGE.md](./USAGE.md) documentation, or by calling this command:

```bash
python src/main.py --help
```

A single picture can be blurred using the following command:

```bash
python src/main.py <path to the picture> <path to the output picture>
```

You can also launch the CLI through Docker:

```bash
docker run \
  geovisio/blurring \
  cli
```

### Web API

The Web API can be launched with the following command:

```bash
uvicorn src.api:app --reload
```

It is then accessible on [localhost:8000](http://127.0.0.1:8000).

You can also launch the API through Docker:

```bash
docker run \
  -p 8000:80 \
  --name geovisio_blurring \
  geovisio/blurring \
  api
```

API documentation is available under the `/docs` route, so [localhost:8000/docs](http://127.0.0.1:8000/docs) if you use a local instance.

A single picture can be blurred using the following HTTP call (here made using _curl_):

```bash
# Considering your picture is called my_picture.jpg
curl -X 'POST' \
  'http://127.0.0.1:8000/blur/' \
  -H 'accept: image/webp' \
  -H 'Content-Type: multipart/form-data' \
  -F 'picture=@my_picture.jpg;type=image/jpeg' \
  --output blurred.webp
```

Note that various settings can be changed to control the API behaviour. You can edit them [using one of the methods described in the FastAPI documentation](https://fastapi.tiangolo.com/advanced/settings/). Available settings are:

- `STRATEGY`: blur algorithm to use (FAST, LEGACY, COMPROMISE, QUALITATIVE)
- `WEBP_METHOD`: quality/speed trade-off for WebP encoding of picture derivatives (0=fast, 6=slower but better; 6 by default)
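For instance, these settings can be provided as environment variables before starting the server (a sketch following the FastAPI settings approach linked above; the values are examples):

```shell
# FastAPI reads settings from the environment; export them before starting
export STRATEGY=QUALITATIVE
export WEBP_METHOD=5
# then start the API as usual:
#   uvicorn src.api:app
```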
## Contributing

Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.

You might want to read more [about the available blur algorithms](./ALGORITHMS.md).

### Testing

Tests are handled with Pytest. You can run them using:

```bash
pip install -r requirements-dev.txt
pytest
```

### Documentation

High-level documentation for the command-line script is handled by [Typer](https://typer.tiangolo.com/). You can update the generated `USAGE.md` file using this command:

```bash
make docs
```

## License

Copyright (c) GeoVisio team 2022-2023, [released under MIT license](./LICENSE).

USAGE.md (+22)

# `python src/main.py`

GeoVisio blurring scripts

**Usage**:

```console
$ python src/main.py [OPTIONS] INPUT OUTPUT COMMAND [ARGS]...
```

**Arguments**:

* `INPUT`: Picture to blur [required]
* `OUTPUT`: Output file path [required]

**Options**:

* `--strategy [fast|legacy|compromise|qualitative]`: Blur algorithm to use [default: Strategy.fast]
* `--mask / --picture`: Get a blur mask instead of blurred picture [default: picture]
* `--install-completion`: Install completion for the current shell.
* `--show-completion`: Show completion for the current shell, to copy it or customize the installation.
* `--help`: Show this message and exit.

models/yolov8s_panoramax.pt

21.9 MB, binary file not shown.

requirements.txt (+3)

ultralytics==8.0.58
pyturbojpeg==1.7.0
Pillow-simd

src/__init__.py

Whitespace-only changes.

123 Bytes, binary file not shown.

src/__pycache__/api.cpython-310.pyc

815 Bytes, binary file not shown.

0 commit comments