+ 1. Download libtorch from here. Please note that only the pre-cxx11 ABI with version 1.8.1+ on the Linux platform is currently supported. Previous versions of libtorch can be found in the issue comment.
+ 2. Take libtorch 1.8.1+cu111 as an example. You can install it like this:
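+
+A minimal sketch of the download-and-unpack step is shown below; the URL follows PyTorch's usual naming scheme for the pre-cxx11 ABI build, and the `Torch_DIR` export is just one common way to point CMake at the library, so verify both against the official download page.
+
+```shell
+# Assumed URL for the pre-cxx11 ABI build of libtorch 1.8.1+cu111;
+# double-check it on the official PyTorch download page before use.
+wget -O libtorch.zip https://download.pytorch.org/libtorch/cu111/libtorch-shared-with-deps-1.8.1%2Bcu111.zip
+unzip -q libtorch.zip
+# Let CMake-based builds find the extracted library (one common convention).
+export Torch_DIR=$(pwd)/libtorch/share/cmake/Torch
+```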
+
+
+```python
+from mmengine.evaluator import BaseMetric
+
+class Accuracy(BaseMetric):
+    def process(self, data_batch, data_samples):
+        score, gt = data_samples
+        # save the intermediate result of a batch to `self.results`
+        self.results.append({
+            'batch_size': len(gt),
+            'correct': (score.argmax(dim=1) == gt).sum().cpu(),
+        })
+
+    def compute_metrics(self, results):
+        total_correct = sum(item['correct'] for item in results)
+        total_size = sum(item['batch_size'] for item in results)
+        # return a dict of the evaluation results,
+        # where the keys are the metric names
+        return dict(accuracy=100 * total_correct / total_size)
+```
+
+
+
+```python
+# The default configuration of log_processor is used for epoch-based training.
+# It is defined here explicitly so that the runner can be built in the same way.
+log_processor = dict(by_epoch=True)
+```
+
+
diff --git a/internlm_langchain/knowledge_base/MMOCR/content/visualizers.md b/internlm_langchain/knowledge_base/MMOCR/content/visualizers.md
new file mode 100644
index 00000000..323dc0a2
--- /dev/null
+++ b/internlm_langchain/knowledge_base/MMOCR/content/visualizers.md
@@ -0,0 +1,3 @@
+# Visualization Components \[To Be Updated\]
+
+To be updated
diff --git a/internlm_langchain/knowledge_base/MMOCR/vector_store/index.faiss b/internlm_langchain/knowledge_base/MMOCR/vector_store/index.faiss
new file mode 100644
index 00000000..3e74d49d
Binary files /dev/null and b/internlm_langchain/knowledge_base/MMOCR/vector_store/index.faiss differ
diff --git a/internlm_langchain/knowledge_base/MMPose/content/2d_animal_keypoint.md b/internlm_langchain/knowledge_base/MMPose/content/2d_animal_keypoint.md
new file mode 100644
index 00000000..28b0b726
--- /dev/null
+++ b/internlm_langchain/knowledge_base/MMPose/content/2d_animal_keypoint.md
@@ -0,0 +1,545 @@
+# 2D Animal Keypoint Dataset
+
+It is recommended to symlink the dataset root to `$MMPOSE/data`.
+If your folder structure is different, you may need to change the corresponding paths in config files.
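+
+If your datasets live elsewhere on disk, a symlink keeps the expected layout. A minimal sketch (the source path below is a placeholder):
+
+```shell
+# /path/to/your/datasets is a placeholder -- replace it with the real location
+# of your dataset folders.
+ln -s /path/to/your/datasets $MMPOSE/data
+```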
+
+MMPose supported datasets:
+
+- [Animal-Pose](#animal-pose) \[ [Homepage](https://sites.google.com/view/animal-pose/) \]
+- [AP-10K](#ap-10k) \[ [Homepage](https://github.com/AlexTheBad/AP-10K/) \]
+- [Horse-10](#horse-10) \[ [Homepage](http://www.mackenziemathislab.org/horse10) \]
+- [MacaquePose](#macaquepose) \[ [Homepage](http://www.pri.kyoto-u.ac.jp/datasets/macaquepose/index.html) \]
+- [Vinegar Fly](#vinegar-fly) \[ [Homepage](https://github.com/jgraving/DeepPoseKit-Data) \]
+- [Desert Locust](#desert-locust) \[ [Homepage](https://github.com/jgraving/DeepPoseKit-Data) \]
+- [Grévy’s Zebra](#grvys-zebra) \[ [Homepage](https://github.com/jgraving/DeepPoseKit-Data) \]
+- [ATRW](#atrw) \[ [Homepage](https://cvwc2019.github.io/challenge.html) \]
+- [Animal Kingdom](#animal-kingdom) \[ [Homepage](https://openaccess.thecvf.com/content/CVPR2022/html/Ng_Animal_Kingdom_A_Large_and_Diverse_Dataset_for_Animal_Behavior_CVPR_2022_paper.html) \]
+
+## Animal-Pose
+
+
+
+
+Animal-Pose (ICCV'2019)
+
+```bibtex
+@InProceedings{Cao_2019_ICCV,
+ author = {Cao, Jinkun and Tang, Hongyang and Fang, Hao-Shu and Shen, Xiaoyong and Lu, Cewu and Tai, Yu-Wing},
+ title = {Cross-Domain Adaptation for Animal Pose Estimation},
+ booktitle = {The IEEE International Conference on Computer Vision (ICCV)},
+ month = {October},
+ year = {2019}
+}
+```
+
+
+
+
+
+
+
+For [Animal-Pose](https://sites.google.com/view/animal-pose/) dataset, we prepare the dataset as follows:
+
+1. Download the images of [PASCAL VOC2012](http://host.robots.ox.ac.uk/pascal/VOC/voc2012/#data), specifically the five categories (dog, cat, sheep, cow, horse), which we use as the trainval set.
+2. Download the [test-set](https://drive.google.com/drive/folders/1DwhQobZlGntOXxdm7vQsE4bqbFmN3b9y?usp=sharing) images with raw annotations (1000 images, 5 categories).
+3. We have pre-processed the annotations to make them compatible with MMPose. Please download the annotation files from [annotations](https://download.openmmlab.com/mmpose/datasets/animalpose_annotations.tar); a shell sketch for fetching them is shown below. If you would like to generate the annotations yourself, please check our dataset parsing [script](/tools/dataset_converters/parse_animalpose_dataset.py).
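+
+A minimal sketch of fetching and unpacking the pre-processed annotations; the archive layout is assumed to match the tree below, so adjust the paths if it differs:
+
+```shell
+cd $MMPOSE/data
+mkdir -p animalpose && cd animalpose
+# Pre-processed annotation files hosted by OpenMMLab (URL from step 3 above).
+wget https://download.openmmlab.com/mmpose/datasets/animalpose_annotations.tar
+tar -xf animalpose_annotations.tar
+```
+
+The other OpenMMLab-hosted annotation tarballs referenced in this document (e.g. Horse-10, MacaquePose, the DeepPoseKit datasets) can be fetched and unpacked in the same way.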
+
+Extract them under {MMPose}/data, and make them look like this:
+
+```text
+mmpose
+├── mmpose
+├── docs
+├── tests
+├── tools
+├── configs
+`── data
+ │── animalpose
+ │
+ │-- VOC2012
+ │ │-- Annotations
+ │ │-- ImageSets
+ │ │-- JPEGImages
+ │ │-- SegmentationClass
+ │ │-- SegmentationObject
+ │
+ │-- animalpose_image_part2
+ │ │-- cat
+ │ │-- cow
+ │ │-- dog
+ │ │-- horse
+ │ │-- sheep
+ │
+ │-- annotations
+ │ │-- animalpose_train.json
+ │ |-- animalpose_val.json
+ │ |-- animalpose_trainval.json
+ │ │-- animalpose_test.json
+ │
+ │-- PASCAL2011_animal_annotation
+ │ │-- cat
+ │ │ |-- 2007_000528_1.xml
+ │ │ |-- 2007_000549_1.xml
+ │ │ │-- ...
+ │ │-- cow
+ │ │-- dog
+ │ │-- horse
+ │ │-- sheep
+ │
+ │-- annimalpose_anno2
+ │ │-- cat
+ │ │ |-- ca1.xml
+ │ │ |-- ca2.xml
+ │ │ │-- ...
+ │ │-- cow
+ │ │-- dog
+ │ │-- horse
+ │ │-- sheep
+
+```
+
+The official dataset does not provide a train/val/test split.
+We choose the images from PASCAL VOC for train & val. In total, we have 3608 images and 5117 annotations for train+val, where
+2798 images with 4000 annotations are used for training, and 810 images with 1117 annotations are used for validation.
+Those images from other sources (1000 images with 1000 annotations) are used for testing.
+
+## AP-10K
+
+
+
+
+AP-10K (NeurIPS'2021)
+
+```bibtex
+@misc{yu2021ap10k,
+ title={AP-10K: A Benchmark for Animal Pose Estimation in the Wild},
+ author={Hang Yu and Yufei Xu and Jing Zhang and Wei Zhao and Ziyu Guan and Dacheng Tao},
+ year={2021},
+ eprint={2108.12617},
+ archivePrefix={arXiv},
+ primaryClass={cs.CV}
+}
+```
+
+
+
+
+
+
+
+For [AP-10K](https://github.com/AlexTheBad/AP-10K/) dataset, images and annotations can be downloaded from [download](https://drive.google.com/file/d/1-FNNGcdtAQRehYYkGY1y4wzFNg4iWNad/view?usp=sharing).
+Note that the images and annotations are for non-commercial use only.
+
+Extract them under {MMPose}/data, and make them look like this:
+
+```text
+mmpose
+├── mmpose
+├── docs
+├── tests
+├── tools
+├── configs
+`── data
+ │── ap10k
+ │-- annotations
+ │ │-- ap10k-train-split1.json
+ │ |-- ap10k-train-split2.json
+ │ |-- ap10k-train-split3.json
+ │ │-- ap10k-val-split1.json
+ │ |-- ap10k-val-split2.json
+ │ |-- ap10k-val-split3.json
+ │ |-- ap10k-test-split1.json
+ │ |-- ap10k-test-split2.json
+ │ |-- ap10k-test-split3.json
+ │-- data
+ │ │-- 000000000001.jpg
+ │ │-- 000000000002.jpg
+ │ │-- ...
+
+```
+
+The annotation files in the 'annotations' folder cover 50 labeled animal species. There are 10,015 labeled images with 13,028 instances in total in the AP-10K dataset. We randomly split them into train, val, and test sets with a ratio of 7:1:2.
+
+## Horse-10
+
+
+
+
+Horse-10 (WACV'2021)
+
+```bibtex
+@inproceedings{mathis2021pretraining,
+ title={Pretraining boosts out-of-domain robustness for pose estimation},
+ author={Mathis, Alexander and Biasi, Thomas and Schneider, Steffen and Yuksekgonul, Mert and Rogers, Byron and Bethge, Matthias and Mathis, Mackenzie W},
+ booktitle={Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision},
+ pages={1859--1868},
+ year={2021}
+}
+```
+
+
+
+
+
+
+
+For [Horse-10](http://www.mackenziemathislab.org/horse10) dataset, images can be downloaded from [download](http://www.mackenziemathislab.org/horse10).
+Please download the annotation files from [horse10_annotations](https://download.openmmlab.com/mmpose/datasets/horse10_annotations.tar). Note that the images and annotations are for non-commercial use only, per the authors (see http://horse10.deeplabcut.org for more information).
+Extract them under {MMPose}/data, and make them look like this:
+
+```text
+mmpose
+├── mmpose
+├── docs
+├── tests
+├── tools
+├── configs
+`── data
+ │── horse10
+ │-- annotations
+ │ │-- horse10-train-split1.json
+ │ |-- horse10-train-split2.json
+ │ |-- horse10-train-split3.json
+ │ │-- horse10-test-split1.json
+ │ |-- horse10-test-split2.json
+ │ |-- horse10-test-split3.json
+ │-- labeled-data
+ │ │-- BrownHorseinShadow
+ │ │-- BrownHorseintoshadow
+ │ │-- ...
+
+```
+
+## MacaquePose
+
+
+
+
+MacaquePose (bioRxiv'2020)
+
+```bibtex
+@article{labuguen2020macaquepose,
+ title={MacaquePose: A novel ‘in the wild’ macaque monkey pose dataset for markerless motion capture},
+ author={Labuguen, Rollyn and Matsumoto, Jumpei and Negrete, Salvador and Nishimaru, Hiroshi and Nishijo, Hisao and Takada, Masahiko and Go, Yasuhiro and Inoue, Ken-ichi and Shibata, Tomohiro},
+ journal={bioRxiv},
+ year={2020},
+ publisher={Cold Spring Harbor Laboratory}
+}
+```
+
+
+
+
+
+
+
+For [MacaquePose](http://www.pri.kyoto-u.ac.jp/datasets/macaquepose/index.html) dataset, images can be downloaded from [download](http://www.pri.kyoto-u.ac.jp/datasets/macaquepose/index.html).
+Please download the annotation files from [macaque_annotations](https://download.openmmlab.com/mmpose/datasets/macaque_annotations.tar).
+Extract them under {MMPose}/data, and make them look like this:
+
+```text
+mmpose
+├── mmpose
+├── docs
+├── tests
+├── tools
+├── configs
+`── data
+ │── macaque
+ │-- annotations
+ │ │-- macaque_train.json
+ │ |-- macaque_test.json
+ │-- images
+ │ │-- 01418849d54b3005.jpg
+ │ │-- 0142d1d1a6904a70.jpg
+ │ │-- 01ef2c4c260321b7.jpg
+ │ │-- 020a1c75c8c85238.jpg
+ │ │-- 020b1506eef2557d.jpg
+ │ │-- ...
+
+```
+
+Since the official dataset does not provide a test set, we randomly select 12,500 images for training and the rest for evaluation (see [code](/tools/dataset/parse_macaquepose_dataset.py)).
+
+## Vinegar Fly
+
+
+
+
+Vinegar Fly (Nature Methods'2019)
+
+```bibtex
+@article{pereira2019fast,
+ title={Fast animal pose estimation using deep neural networks},
+ author={Pereira, Talmo D and Aldarondo, Diego E and Willmore, Lindsay and Kislin, Mikhail and Wang, Samuel S-H and Murthy, Mala and Shaevitz, Joshua W},
+ journal={Nature methods},
+ volume={16},
+ number={1},
+ pages={117--125},
+ year={2019},
+ publisher={Nature Publishing Group}
+}
+```
+
+
+
+
+
+
+
+For [Vinegar Fly](https://github.com/jgraving/DeepPoseKit-Data) dataset, images can be downloaded from [vinegar_fly_images](https://download.openmmlab.com/mmpose/datasets/vinegar_fly_images.tar).
+Please download the annotation files from [vinegar_fly_annotations](https://download.openmmlab.com/mmpose/datasets/vinegar_fly_annotations.tar).
+Extract them under {MMPose}/data, and make them look like this:
+
+```text
+mmpose
+├── mmpose
+├── docs
+├── tests
+├── tools
+├── configs
+`── data
+ │── fly
+ │-- annotations
+ │ │-- fly_train.json
+ │ |-- fly_test.json
+ │-- images
+ │ │-- 0.jpg
+ │ │-- 1.jpg
+ │ │-- 2.jpg
+ │ │-- 3.jpg
+ │ │-- ...
+
+```
+
+Since the official dataset does not provide a test set, we randomly select 90% of the images for training and the rest (10%) for evaluation (see [code](/tools/dataset_converters/parse_deepposekit_dataset.py)).
+
+## Desert Locust
+
+
+
+
+Desert Locust (Elife'2019)
+
+```bibtex
+@article{graving2019deepposekit,
+ title={DeepPoseKit, a software toolkit for fast and robust animal pose estimation using deep learning},
+ author={Graving, Jacob M and Chae, Daniel and Naik, Hemal and Li, Liang and Koger, Benjamin and Costelloe, Blair R and Couzin, Iain D},
+ journal={Elife},
+ volume={8},
+ pages={e47994},
+ year={2019},
+ publisher={eLife Sciences Publications Limited}
+}
+```
+
+
+
+
+
+
+
+For [Desert Locust](https://github.com/jgraving/DeepPoseKit-Data) dataset, images can be downloaded from [locust_images](https://download.openmmlab.com/mmpose/datasets/locust_images.tar).
+Please download the annotation files from [locust_annotations](https://download.openmmlab.com/mmpose/datasets/locust_annotations.tar).
+Extract them under {MMPose}/data, and make them look like this:
+
+```text
+mmpose
+├── mmpose
+├── docs
+├── tests
+├── tools
+├── configs
+`── data
+ │── locust
+ │-- annotations
+ │ │-- locust_train.json
+ │ |-- locust_test.json
+ │-- images
+ │ │-- 0.jpg
+ │ │-- 1.jpg
+ │ │-- 2.jpg
+ │ │-- 3.jpg
+ │ │-- ...
+
+```
+
+Since the official dataset does not provide a test set, we randomly select 90% of the images for training and the rest (10%) for evaluation (see [code](/tools/dataset_converters/parse_deepposekit_dataset.py)).
+
+## Grévy’s Zebra
+
+
+
+
+Grévy’s Zebra (Elife'2019)
+
+```bibtex
+@article{graving2019deepposekit,
+ title={DeepPoseKit, a software toolkit for fast and robust animal pose estimation using deep learning},
+ author={Graving, Jacob M and Chae, Daniel and Naik, Hemal and Li, Liang and Koger, Benjamin and Costelloe, Blair R and Couzin, Iain D},
+ journal={Elife},
+ volume={8},
+ pages={e47994},
+ year={2019},
+ publisher={eLife Sciences Publications Limited}
+}
+```
+
+
+
+
+
+
+
+For [Grévy’s Zebra](https://github.com/jgraving/DeepPoseKit-Data) dataset, images can be downloaded from [zebra_images](https://download.openmmlab.com/mmpose/datasets/zebra_images.tar).
+Please download the annotation files from [zebra_annotations](https://download.openmmlab.com/mmpose/datasets/zebra_annotations.tar).
+Extract them under {MMPose}/data, and make them look like this:
+
+```text
+mmpose
+├── mmpose
+├── docs
+├── tests
+├── tools
+├── configs
+`── data
+ │── zebra
+ │-- annotations
+ │ │-- zebra_train.json
+ │ |-- zebra_test.json
+ │-- images
+ │ │-- 0.jpg
+ │ │-- 1.jpg
+ │ │-- 2.jpg
+ │ │-- 3.jpg
+ │ │-- ...
+
+```
+
+Since the official dataset does not provide a test set, we randomly select 90% of the images for training and the rest (10%) for evaluation (see [code](/tools/dataset_converters/parse_deepposekit_dataset.py)).
+
+## ATRW
+
+
+
+
+ATRW (ACM MM'2020)
+
+```bibtex
+@inproceedings{li2020atrw,
+ title={ATRW: A Benchmark for Amur Tiger Re-identification in the Wild},
+ author={Li, Shuyuan and Li, Jianguo and Tang, Hanlin and Qian, Rui and Lin, Weiyao},
+ booktitle={Proceedings of the 28th ACM International Conference on Multimedia},
+ pages={2590--2598},
+ year={2020}
+}
+```
+
+
+
+
+
+
+
+ATRW captures images of the Amur tiger (also known as Siberian tiger, Northeast-China tiger) in the wild.
+For [ATRW](https://cvwc2019.github.io/challenge.html) dataset, please download images from
+[Pose_train](https://lilablobssc.blob.core.windows.net/cvwc2019/train/atrw_pose_train.tar.gz),
+[Pose_val](https://lilablobssc.blob.core.windows.net/cvwc2019/train/atrw_pose_val.tar.gz), and
+[Pose_test](https://lilablobssc.blob.core.windows.net/cvwc2019/test/atrw_pose_test.tar.gz).
+Note that in the ATRW official annotation files, the key "file_name" is written as "filename". To make it compatible with
+other coco-type json files, we have modified this key.
+Please download the modified annotation files from [atrw_annotations](https://download.openmmlab.com/mmpose/datasets/atrw_annotations.tar).
+Extract them under {MMPose}/data, and make them look like this:
+
+```text
+mmpose
+├── mmpose
+├── docs
+├── tests
+├── tools
+├── configs
+`── data
+ │── atrw
+ │-- annotations
+ │ │-- keypoint_train.json
+ │ │-- keypoint_val.json
+ │ │-- keypoint_trainval.json
+ │-- images
+ │ │-- train
+ │ │ │-- 000002.jpg
+ │ │ │-- 000003.jpg
+ │ │ │-- ...
+ │ │-- val
+ │ │ │-- 000001.jpg
+ │ │ │-- 000013.jpg
+ │ │ │-- ...
+ │ │-- test
+ │ │ │-- 000000.jpg
+ │ │ │-- 000004.jpg
+ │ │ │-- ...
+
+```
+
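+A sketch of fetching the archives listed above is given here; the internal layout of the pose tarballs is an assumption, so rearrange the extracted folders to match the tree if they differ.
+
+```shell
+cd $MMPOSE/data
+mkdir -p atrw && cd atrw
+# Pose image archives and the modified annotations (URLs from the text above).
+wget https://lilablobssc.blob.core.windows.net/cvwc2019/train/atrw_pose_train.tar.gz
+wget https://lilablobssc.blob.core.windows.net/cvwc2019/train/atrw_pose_val.tar.gz
+wget https://lilablobssc.blob.core.windows.net/cvwc2019/test/atrw_pose_test.tar.gz
+wget https://download.openmmlab.com/mmpose/datasets/atrw_annotations.tar
+for f in atrw_pose_*.tar.gz; do tar -xzf "$f"; done
+tar -xf atrw_annotations.tar
+```
+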
+## Animal Kingdom
+
+
+Animal Kingdom (CVPR'2022)
+
+
+
+
+
+```bibtex
+@inproceedings{Ng_2022_CVPR,
+ author = {Ng, Xun Long and Ong, Kian Eng and Zheng, Qichen and Ni, Yun and Yeo, Si Yong and Liu, Jun},
+ title = {Animal Kingdom: A Large and Diverse Dataset for Animal Behavior Understanding},
+ booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
+ month = {June},
+ year = {2022},
+ pages = {19023-19034}
+ }
+```
+
+For [Animal Kingdom](https://github.com/sutdcv/Animal-Kingdom) dataset, images can be downloaded from [here](https://forms.office.com/pages/responsepage.aspx?id=drd2NJDpck-5UGJImDFiPVRYpnTEMixKqPJ1FxwK6VZUQkNTSkRISTNORUI2TDBWMUpZTlQ5WUlaSyQlQCN0PWcu).
+Please extract the dataset under {MMPose}/data, and make it look like this:
+
+```text
+mmpose
+├── mmpose
+├── docs
+├── tests
+├── tools
+├── configs
+`── data
+ │── ak
+ |--annotations
+ │ │-- ak_P1
+ │ │ │-- train.json
+ │ │ │-- test.json
+ │ │-- ak_P2
+ │ │ │-- train.json
+ │ │ │-- test.json
+ │ │-- ak_P3_amphibian
+ │ │ │-- train.json
+ │ │ │-- test.json
+ │ │-- ak_P3_bird
+ │ │ │-- train.json
+ │ │ │-- test.json
+ │ │-- ak_P3_fish
+ │ │ │-- train.json
+ │ │ │-- test.json
+ │ │-- ak_P3_mammal
+ │ │ │-- train.json
+ │ │ │-- test.json
+ │ │-- ak_P3_reptile
+ │ │ │-- train.json
+ │ │ │-- test.json
+ │-- images
+ │ │-- AAACXZTV
+ │ │ │--AAACXZTV_f000059.jpg
+ │ │ │--...
+ │ │-- AAAUILHH
+ │ │ │--AAAUILHH_f000098.jpg
+ │ │ │--...
+ │ │-- ...
+```
diff --git a/internlm_langchain/knowledge_base/MMPose/content/2d_body_keypoint.md b/internlm_langchain/knowledge_base/MMPose/content/2d_body_keypoint.md
new file mode 100644
index 00000000..4448ebe8
--- /dev/null
+++ b/internlm_langchain/knowledge_base/MMPose/content/2d_body_keypoint.md
@@ -0,0 +1,588 @@
+# 2D Body Keypoint Datasets
+
+It is recommended to symlink the dataset root to `$MMPOSE/data`.
+If your folder structure is different, you may need to change the corresponding paths in config files.
+
+MMPose supported datasets:
+
+- Images
+ - [COCO](#coco) \[ [Homepage](http://cocodataset.org/) \]
+ - [MPII](#mpii) \[ [Homepage](http://human-pose.mpi-inf.mpg.de/) \]
+ - [MPII-TRB](#mpii-trb) \[ [Homepage](https://github.com/kennymckormick/Triplet-Representation-of-human-Body) \]
+ - [AI Challenger](#aic) \[ [Homepage](https://github.com/AIChallenger/AI_Challenger_2017) \]
+ - [CrowdPose](#crowdpose) \[ [Homepage](https://github.com/Jeff-sjtu/CrowdPose) \]
+ - [OCHuman](#ochuman) \[ [Homepage](https://github.com/liruilong940607/OCHumanApi) \]
+ - [MHP](#mhp) \[ [Homepage](https://lv-mhp.github.io/dataset) \]
+ - [Human-Art](#human-art-dataset) \[ [Homepage](https://idea-research.github.io/HumanArt/) \]
+- Videos
+ - [PoseTrack18](#posetrack18) \[ [Homepage](https://posetrack.net/users/download.php) \]
+ - [sub-JHMDB](#sub-jhmdb-dataset) \[ [Homepage](http://jhmdb.is.tue.mpg.de/dataset) \]
+
+## COCO
+
+
+
+
+COCO (ECCV'2014)
+
+```bibtex
+@inproceedings{lin2014microsoft,
+ title={Microsoft coco: Common objects in context},
+ author={Lin, Tsung-Yi and Maire, Michael and Belongie, Serge and Hays, James and Perona, Pietro and Ramanan, Deva and Doll{\'a}r, Piotr and Zitnick, C Lawrence},
+ booktitle={European conference on computer vision},
+ pages={740--755},
+ year={2014},
+ organization={Springer}
+}
+```
+
+
+
+
+
+
+
+For [COCO](http://cocodataset.org/) data, please download from [COCO download](http://cocodataset.org/#download); 2017 Train/Val is needed for COCO keypoints training and validation.
+[HRNet-Human-Pose-Estimation](https://github.com/HRNet/HRNet-Human-Pose-Estimation) provides person detection results of COCO val2017 to reproduce our multi-person pose estimation results.
+Please download from [OneDrive](https://1drv.ms/f/s!AhIXJn_J-blWzzDXoz5BeFl8sWM-) or [GoogleDrive](https://drive.google.com/drive/folders/1fRUDNUDxe9fjqcRZ2bnF_TKMlO0nB_dk?usp=sharing).
+Optionally, to evaluate on COCO'2017 test-dev, please download the [image-info](https://download.openmmlab.com/mmpose/datasets/person_keypoints_test-dev-2017.json).
+Download and extract them under $MMPOSE/data, and make them look like this:
+
+```text
+mmpose
+├── mmpose
+├── docs
+├── tests
+├── tools
+├── configs
+`── data
+ │── coco
+ │-- annotations
+ │ │-- person_keypoints_train2017.json
+ │ |-- person_keypoints_val2017.json
+ │ |-- person_keypoints_test-dev-2017.json
+ |-- person_detection_results
+ | |-- COCO_val2017_detections_AP_H_56_person.json
+ | |-- COCO_test-dev2017_detections_AP_H_609_person.json
+ │-- train2017
+ │ │-- 000000000009.jpg
+ │ │-- 000000000025.jpg
+ │ │-- 000000000030.jpg
+ │ │-- ...
+ `-- val2017
+ │-- 000000000139.jpg
+ │-- 000000000285.jpg
+ │-- 000000000632.jpg
+ │-- ...
+
+```
+
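+One way to fetch the 2017 images and keypoint annotations from the official COCO mirrors is sketched below; the pre-computed detection-result JSONs above still come from the OneDrive/GoogleDrive links, and mirror URLs occasionally change, so check cocodataset.org if a download fails.
+
+```shell
+cd $MMPOSE/data
+mkdir -p coco && cd coco
+# Official COCO 2017 image and annotation archives.
+wget http://images.cocodataset.org/zips/train2017.zip
+wget http://images.cocodataset.org/zips/val2017.zip
+wget http://images.cocodataset.org/annotations/annotations_trainval2017.zip
+unzip -q train2017.zip && unzip -q val2017.zip && unzip -q annotations_trainval2017.zip
+```
+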
+## MPII
+
+
+
+
+MPII (CVPR'2014)
+
+```bibtex
+@inproceedings{andriluka14cvpr,
+ author = {Mykhaylo Andriluka and Leonid Pishchulin and Peter Gehler and Schiele, Bernt},
+ title = {2D Human Pose Estimation: New Benchmark and State of the Art Analysis},
+ booktitle = {IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
+ year = {2014},
+ month = {June}
+}
+```
+
+
+
+
+
+
+
+For [MPII](http://human-pose.mpi-inf.mpg.de/) data, please download from [MPII Human Pose Dataset](http://human-pose.mpi-inf.mpg.de/).
+We have converted the original annotation files into json format; please download them from [mpii_annotations](https://download.openmmlab.com/mmpose/datasets/mpii_annotations.tar).
+Extract them under {MMPose}/data, and make them look like this:
+
+```text
+mmpose
+├── mmpose
+├── docs
+├── tests
+├── tools
+├── configs
+`── data
+ │── mpii
+ |── annotations
+ | |── mpii_gt_val.mat
+ | |── mpii_test.json
+ | |── mpii_train.json
+ | |── mpii_trainval.json
+ | `── mpii_val.json
+ `── images
+ |── 000001163.jpg
+ |── 000003072.jpg
+
+```
+
+During training and inference, the prediction results are saved in '.mat' format by default. We also provide a tool to convert such a '.mat' file into a more readable '.json' format.
+
+```shell
+python tools/dataset/mat2json ${PRED_MAT_FILE} ${GT_JSON_FILE} ${OUTPUT_PRED_JSON_FILE}
+```
+
+For example,
+
+```shell
+python tools/dataset/mat2json work_dirs/res50_mpii_256x256/pred.mat data/mpii/annotations/mpii_val.json pred.json
+```
+
+## MPII-TRB
+
+
+
+
+MPII-TRB (ICCV'2019)
+
+```bibtex
+@inproceedings{duan2019trb,
+ title={TRB: A Novel Triplet Representation for Understanding 2D Human Body},
+ author={Duan, Haodong and Lin, Kwan-Yee and Jin, Sheng and Liu, Wentao and Qian, Chen and Ouyang, Wanli},
+ booktitle={Proceedings of the IEEE International Conference on Computer Vision},
+ pages={9479--9488},
+ year={2019}
+}
+```
+
+
+
+
+
+
+
+For [MPII-TRB](https://github.com/kennymckormick/Triplet-Representation-of-human-Body) data, please download from [MPII Human Pose Dataset](http://human-pose.mpi-inf.mpg.de/).
+Please download the annotation files from [mpii_trb_annotations](https://download.openmmlab.com/mmpose/datasets/mpii_trb_annotations.tar).
+Extract them under {MMPose}/data, and make them look like this:
+
+```text
+mmpose
+├── mmpose
+├── docs
+├── tests
+├── tools
+├── configs
+`── data
+ │── mpii
+ |── annotations
+ | |── mpii_trb_train.json
+ | |── mpii_trb_val.json
+ `── images
+ |── 000001163.jpg
+ |── 000003072.jpg
+
+```
+
+## AIC
+
+
+
+
+AI Challenger (ArXiv'2017)
+
+```bibtex
+@article{wu2017ai,
+ title={Ai challenger: A large-scale dataset for going deeper in image understanding},
+ author={Wu, Jiahong and Zheng, He and Zhao, Bo and Li, Yixin and Yan, Baoming and Liang, Rui and Wang, Wenjia and Zhou, Shipei and Lin, Guosen and Fu, Yanwei and others},
+ journal={arXiv preprint arXiv:1711.06475},
+ year={2017}
+}
+```
+
+
+
+
+
+
+
+For [AIC](https://github.com/AIChallenger/AI_Challenger_2017) data, please download from [AI Challenger 2017](https://github.com/AIChallenger/AI_Challenger_2017); 2017 Train/Val is needed for keypoints training and validation.
+Please download the annotation files from [aic_annotations](https://download.openmmlab.com/mmpose/datasets/aic_annotations.tar).
+Download and extract them under $MMPOSE/data, and make them look like this:
+
+```text
+mmpose
+├── mmpose
+├── docs
+├── tests
+├── tools
+├── configs
+`── data
+ │── aic
+ │-- annotations
+ │ │-- aic_train.json
+ │ |-- aic_val.json
+ │-- ai_challenger_keypoint_train_20170902
+ │ │-- keypoint_train_images_20170902
+ │ │ │-- 0000252aea98840a550dac9a78c476ecb9f47ffa.jpg
+ │ │ │-- 000050f770985ac9653198495ef9b5c82435d49c.jpg
+ │ │ │-- ...
+ `-- ai_challenger_keypoint_validation_20170911
+ │-- keypoint_validation_images_20170911
+ │-- 0002605c53fb92109a3f2de4fc3ce06425c3b61f.jpg
+ │-- 0003b55a2c991223e6d8b4b820045bd49507bf6d.jpg
+ │-- ...
+```
+
+## CrowdPose
+
+
+
+
+CrowdPose (CVPR'2019)
+
+```bibtex
+@article{li2018crowdpose,
+ title={CrowdPose: Efficient Crowded Scenes Pose Estimation and A New Benchmark},
+ author={Li, Jiefeng and Wang, Can and Zhu, Hao and Mao, Yihuan and Fang, Hao-Shu and Lu, Cewu},
+ journal={arXiv preprint arXiv:1812.00324},
+ year={2018}
+}
+```
+
+
+
+
+
+
+
+For [CrowdPose](https://github.com/Jeff-sjtu/CrowdPose) data, please download from [CrowdPose](https://github.com/Jeff-sjtu/CrowdPose).
+Please download the annotation files and human detection results from [crowdpose_annotations](https://download.openmmlab.com/mmpose/datasets/crowdpose_annotations.tar).
+For top-down approaches, we follow [CrowdPose](https://arxiv.org/abs/1812.00324) to use the [pre-trained weights](https://pjreddie.com/media/files/yolov3.weights) of [YOLOv3](https://github.com/eriklindernoren/PyTorch-YOLOv3) to generate the detected human bounding boxes.
+For model training, we follow [HigherHRNet](https://github.com/HRNet/HigherHRNet-Human-Pose-Estimation) to train models on CrowdPose train/val dataset, and evaluate models on CrowdPose test dataset.
+Download and extract them under $MMPOSE/data, and make them look like this:
+
+```text
+mmpose
+├── mmpose
+├── docs
+├── tests
+├── tools
+├── configs
+`── data
+ │── crowdpose
+ │-- annotations
+ │ │-- mmpose_crowdpose_train.json
+ │ │-- mmpose_crowdpose_val.json
+ │ │-- mmpose_crowdpose_trainval.json
+ │ │-- mmpose_crowdpose_test.json
+ │ │-- det_for_crowd_test_0.1_0.5.json
+ │-- images
+ │-- 100000.jpg
+ │-- 100001.jpg
+ │-- 100002.jpg
+ │-- ...
+```
+
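+The pre-computed detection file `det_for_crowd_test_0.1_0.5.json` above comes with the annotation tarball, so the following is only needed if you want to regenerate the boxes yourself; the `checkpoints/` destination is an arbitrary choice.
+
+```shell
+# YOLOv3 weights referenced above, used to re-generate the detected human bounding boxes.
+wget -P checkpoints/ https://pjreddie.com/media/files/yolov3.weights
+```
+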
+## OCHuman
+
+
+
+
+OCHuman (CVPR'2019)
+
+```bibtex
+@inproceedings{zhang2019pose2seg,
+ title={Pose2seg: Detection free human instance segmentation},
+ author={Zhang, Song-Hai and Li, Ruilong and Dong, Xin and Rosin, Paul and Cai, Zixi and Han, Xi and Yang, Dingcheng and Huang, Haozhi and Hu, Shi-Min},
+ booktitle={Proceedings of the IEEE conference on computer vision and pattern recognition},
+ pages={889--898},
+ year={2019}
+}
+```
+
+
+
+
+
+
+
+For [OCHuman](https://github.com/liruilong940607/OCHumanApi) data, please download the images and annotations from [OCHuman](https://github.com/liruilong940607/OCHumanApi).
+Move them under $MMPOSE/data, and make them look like this:
+
+```text
+mmpose
+├── mmpose
+├── docs
+├── tests
+├── tools
+├── configs
+`── data
+ │── ochuman
+ │-- annotations
+ │ │-- ochuman_coco_format_val_range_0.00_1.00.json
+ │ |-- ochuman_coco_format_test_range_0.00_1.00.json
+ |-- images
+ │-- 000001.jpg
+ │-- 000002.jpg
+ │-- 000003.jpg
+ │-- ...
+
+```
+
+## MHP
+
+
+
+
+MHP (ACM MM'2018)
+
+```bibtex
+@inproceedings{zhao2018understanding,
+ title={Understanding humans in crowded scenes: Deep nested adversarial learning and a new benchmark for multi-human parsing},
+ author={Zhao, Jian and Li, Jianshu and Cheng, Yu and Sim, Terence and Yan, Shuicheng and Feng, Jiashi},
+ booktitle={Proceedings of the 26th ACM international conference on Multimedia},
+ pages={792--800},
+ year={2018}
+}
+```
+
+
+
+
+
+
+
+For [MHP](https://lv-mhp.github.io/dataset) data, please download from [MHP](https://lv-mhp.github.io/dataset).
+Please download the annotation files from [mhp_annotations](https://download.openmmlab.com/mmpose/datasets/mhp_annotations.tar.gz).
+Please download and extract them under $MMPOSE/data, and make them look like this:
+
+```text
+mmpose
+├── mmpose
+├── docs
+├── tests
+├── tools
+├── configs
+`── data
+ │── mhp
+ │-- annotations
+ │ │-- mhp_train.json
+ │ │-- mhp_val.json
+ │
+ `-- train
+ │ │-- images
+ │ │ │-- 1004.jpg
+ │ │ │-- 10050.jpg
+ │ │ │-- ...
+ │
+ `-- val
+ │ │-- images
+ │ │ │-- 10059.jpg
+ │ │ │-- 10068.jpg
+ │ │ │-- ...
+ │
+ `-- test
+ │ │-- images
+ │ │ │-- 1005.jpg
+ │ │ │-- 10052.jpg
+ │ │ │-- ...
+```
+
+## Human-Art dataset
+
+
+
+
+Human-Art (CVPR'2023)
+
+```bibtex
+@inproceedings{ju2023humanart,
+ title={Human-Art: A Versatile Human-Centric Dataset Bridging Natural and Artificial Scenes},
+ author={Ju, Xuan and Zeng, Ailing and Wang, Jianan and Xu, Qiang and Zhang, Lei},
+ booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
+ year={2023}
+}
+```
+
+
+
+
+
+
+
+For [Human-Art](https://idea-research.github.io/HumanArt/) data, please download the images and annotation files from [its website](https://idea-research.github.io/HumanArt/). You need to fill in the [data form](https://docs.google.com/forms/d/e/1FAIpQLScroT_jvw6B9U2Qca1_cl5Kmmu1ceKtlh6DJNmWLte8xNEhEw/viewform) to get access to the data.
+Move them under $MMPOSE/data, and make them look like this:
+
+```text
+mmpose
+├── mmpose
+├── docs
+├── tests
+├── tools
+├── configs
+|── data
+ │── HumanArt
+ │-- images
+ │ │-- 2D_virtual_human
+ │ │ |-- cartoon
+ │ │ | |-- 000000000000.jpg
+ │ │ | |-- ...
+ │ │ |-- digital_art
+ │ │ |-- ...
+ │ |-- 3D_virtual_human
+ │ |-- real_human
+ |-- annotations
+ │ │-- validation_humanart.json
+ │ │-- training_humanart_coco.json
+ |-- person_detection_results
+ │ │-- HumanArt_validation_detections_AP_H_56_person.json
+```
+
+You can choose whether to download the other annotation files in Human-Art. If you want to use additional annotation files (e.g. the validation set of the cartoon category), you need to edit the corresponding code in the config file.
+
+## PoseTrack18
+
+
+
+
+PoseTrack18 (CVPR'2018)
+
+```bibtex
+@inproceedings{andriluka2018posetrack,
+ title={Posetrack: A benchmark for human pose estimation and tracking},
+ author={Andriluka, Mykhaylo and Iqbal, Umar and Insafutdinov, Eldar and Pishchulin, Leonid and Milan, Anton and Gall, Juergen and Schiele, Bernt},
+ booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
+ pages={5167--5176},
+ year={2018}
+}
+```
+
+
+
+
+
+
+
+For [PoseTrack18](https://posetrack.net/users/download.php) data, please download from [PoseTrack18](https://posetrack.net/users/download.php).
+Please download the annotation files from [posetrack18_annotations](https://download.openmmlab.com/mmpose/datasets/posetrack18_annotations.tar).
+We have merged the official per-video annotation files into two json files (posetrack18_train.json & posetrack18_val.json). We also generate the [mask files](https://download.openmmlab.com/mmpose/datasets/posetrack18_mask.tar) to speed up training.
+For top-down approaches, we use [MMDetection](https://github.com/open-mmlab/mmdetection) pre-trained [Cascade R-CNN](https://download.openmmlab.com/mmdetection/v2.0/cascade_rcnn/cascade_rcnn_x101_64x4d_fpn_20e_coco/cascade_rcnn_x101_64x4d_fpn_20e_coco_20200509_224357-051557b1.pth) (X-101-64x4d-FPN) to generate the detected human bounding boxes.
+Please download and extract them under $MMPOSE/data, and make them look like this:
+
+```text
+mmpose
+├── mmpose
+├── docs
+├── tests
+├── tools
+├── configs
+`── data
+ │── posetrack18
+ │-- annotations
+ │ │-- posetrack18_train.json
+ │ │-- posetrack18_val.json
+ │ │-- posetrack18_val_human_detections.json
+ │ │-- train
+ │ │ │-- 000001_bonn_train.json
+ │ │ │-- 000002_bonn_train.json
+ │ │ │-- ...
+ │ │-- val
+ │ │ │-- 000342_mpii_test.json
+ │ │ │-- 000522_mpii_test.json
+ │ │ │-- ...
+ │ `-- test
+ │ │-- 000001_mpiinew_test.json
+ │ │-- 000002_mpiinew_test.json
+ │ │-- ...
+ │
+ `-- images
+ │ │-- train
+ │ │ │-- 000001_bonn_train
+ │ │ │ │-- 000000.jpg
+ │ │ │ │-- 000001.jpg
+ │ │ │ │-- ...
+ │ │ │-- ...
+ │ │-- val
+ │ │ │-- 000342_mpii_test
+ │ │ │ │-- 000000.jpg
+ │ │ │ │-- 000001.jpg
+ │ │ │ │-- ...
+ │ │ │-- ...
+ │ `-- test
+ │ │-- 000001_mpiinew_test
+ │ │ │-- 000000.jpg
+ │ │ │-- 000001.jpg
+ │ │ │-- ...
+ │ │-- ...
+ `-- mask
+ │-- train
+ │ │-- 000002_bonn_train
+ │ │ │-- 000000.jpg
+ │ │ │-- 000001.jpg
+ │ │ │-- ...
+ │ │-- ...
+ `-- val
+ │-- 000522_mpii_test
+ │ │-- 000000.jpg
+ │ │-- 000001.jpg
+ │ │-- ...
+ │-- ...
+```
+
+The official evaluation tool for PoseTrack should be installed from GitHub.
+
+```shell
+pip install git+https://github.com/svenkreiss/poseval.git
+```
+
+## sub-JHMDB dataset
+
+
+
+
+RSN (ECCV'2020)
+
+```bibtex
+@misc{cai2020learning,
+ title={Learning Delicate Local Representations for Multi-Person Pose Estimation},
+ author={Yuanhao Cai and Zhicheng Wang and Zhengxiong Luo and Binyi Yin and Angang Du and Haoqian Wang and Xinyu Zhou and Erjin Zhou and Xiangyu Zhang and Jian Sun},
+ year={2020},
+ eprint={2003.04030},
+ archivePrefix={arXiv},
+ primaryClass={cs.CV}
+}
+```
+
+
+
+
+
+
+
+For [sub-JHMDB](http://jhmdb.is.tue.mpg.de/dataset) data, please download the [images](http://files.is.tue.mpg.de/jhmdb/Rename_Images.tar.gz) from [JHMDB](http://jhmdb.is.tue.mpg.de/dataset).
+Please download the annotation files from [jhmdb_annotations](https://download.openmmlab.com/mmpose/datasets/jhmdb_annotations.tar).
+Move them under $MMPOSE/data, and make them look like this:
+
+```text
+mmpose
+├── mmpose
+├── docs
+├── tests
+├── tools
+├── configs
+`── data
+ │── jhmdb
+ │-- annotations
+ │ │-- Sub1_train.json
+ │ |-- Sub1_test.json
+ │ │-- Sub2_train.json
+ │ |-- Sub2_test.json
+ │ │-- Sub3_train.json
+ │ |-- Sub3_test.json
+ |-- Rename_Images
+ │-- brush_hair
+ │ │--April_09_brush_hair_u_nm_np1_ba_goo_0
+ | │ │--00001.png
+ | │ │--00002.png
+ │-- catch
+ │-- ...
+
+```
diff --git a/internlm_langchain/knowledge_base/MMPose/content/2d_face_keypoint.md b/internlm_langchain/knowledge_base/MMPose/content/2d_face_keypoint.md
new file mode 100644
index 00000000..62f66bd8
--- /dev/null
+++ b/internlm_langchain/knowledge_base/MMPose/content/2d_face_keypoint.md
@@ -0,0 +1,384 @@
+# 2D Face Keypoint Datasets
+
+It is recommended to symlink the dataset root to `$MMPOSE/data`.
+If your folder structure is different, you may need to change the corresponding paths in config files.
+
+MMPose supported datasets:
+
+- [300W](#300w-dataset) \[ [Homepage](https://ibug.doc.ic.ac.uk/resources/300-W/) \]
+- [WFLW](#wflw-dataset) \[ [Homepage](https://wywu.github.io/projects/LAB/WFLW.html) \]
+- [AFLW](#aflw-dataset) \[ [Homepage](https://www.tugraz.at/institute/icg/research/team-bischof/lrs/downloads/aflw/) \]
+- [COFW](#cofw-dataset) \[ [Homepage](http://www.vision.caltech.edu/xpburgos/ICCV13/) \]
+- [COCO-WholeBody-Face](#coco-wholebody-face) \[ [Homepage](https://github.com/jin-s13/COCO-WholeBody/) \]
+- [LaPa](#lapa) \[ [Homepage](https://github.com/JDAI-CV/lapa-dataset) \]
+
+## 300W Dataset
+
+
+
+
+300W (IMAVIS'2016)
+
+```bibtex
+@article{sagonas2016300,
+ title={300 faces in-the-wild challenge: Database and results},
+ author={Sagonas, Christos and Antonakos, Epameinondas and Tzimiropoulos, Georgios and Zafeiriou, Stefanos and Pantic, Maja},
+ journal={Image and vision computing},
+ volume={47},
+ pages={3--18},
+ year={2016},
+ publisher={Elsevier}
+}
+```
+
+
+
+
+
+## WFLW Dataset
+
+For WFLW data, please download images from [WFLW Dataset](https://wywu.github.io/projects/LAB/WFLW.html).
+Please download the annotation files from [wflw_annotations](https://download.openmmlab.com/mmpose/datasets/wflw_annotations.tar).
+Extract them under {MMPose}/data, and make them look like this:
+
+```text
+mmpose
+├── mmpose
+├── docs
+├── tests
+├── tools
+├── configs
+`── data
+ │── wflw
+ |── annotations
+ | |── face_landmarks_wflw_train.json
+ | |── face_landmarks_wflw_test.json
+ | |── face_landmarks_wflw_test_blur.json
+ | |── face_landmarks_wflw_test_occlusion.json
+ | |── face_landmarks_wflw_test_expression.json
+ | |── face_landmarks_wflw_test_largepose.json
+ | |── face_landmarks_wflw_test_illumination.json
+ | |── face_landmarks_wflw_test_makeup.json
+ |
+ `── images
+ |── 0--Parade
+ | |── 0_Parade_marchingband_1_1015.jpg
+ | |── 0_Parade_marchingband_1_1031.jpg
+ | ...
+ |── 1--Handshaking
+ | |── 1_Handshaking_Handshaking_1_105.jpg
+ | |── 1_Handshaking_Handshaking_1_107.jpg
+ | ...
+ ...
+```
+
+## AFLW Dataset
+
+
+
+
+AFLW (ICCVW'2011)
+
+```bibtex
+@inproceedings{koestinger2011annotated,
+ title={Annotated facial landmarks in the wild: A large-scale, real-world database for facial landmark localization},
+ author={Koestinger, Martin and Wohlhart, Paul and Roth, Peter M and Bischof, Horst},
+ booktitle={2011 IEEE international conference on computer vision workshops (ICCV workshops)},
+ pages={2144--2151},
+ year={2011},
+ organization={IEEE}
+}
+```
+
+
+
+For AFLW data, please download images from [AFLW Dataset](https://www.tugraz.at/institute/icg/research/team-bischof/lrs/downloads/aflw/).
+Please download the annotation files from [aflw_annotations](https://download.openmmlab.com/mmpose/datasets/aflw_annotations.tar).
+Extract them under {MMPose}/data, and make them look like this:
+
+```text
+mmpose
+├── mmpose
+├── docs
+├── tests
+├── tools
+├── configs
+`── data
+ │── aflw
+ |── annotations
+ | |── face_landmarks_aflw_train.json
+ | |── face_landmarks_aflw_test_frontal.json
+ | |── face_landmarks_aflw_test.json
+ `── images
+ |── flickr
+ |── 0
+ | |── image00002.jpg
+ | |── image00013.jpg
+ | ...
+ |── 2
+ | |── image00004.jpg
+ | |── image00006.jpg
+ | ...
+ `── 3
+ |── image00032.jpg
+ |── image00035.jpg
+ ...
+```
+
+## COFW Dataset
+
+
+
+
+COFW (ICCV'2013)
+
+```bibtex
+@inproceedings{burgos2013robust,
+ title={Robust face landmark estimation under occlusion},
+ author={Burgos-Artizzu, Xavier P and Perona, Pietro and Doll{\'a}r, Piotr},
+ booktitle={Proceedings of the IEEE international conference on computer vision},
+ pages={1513--1520},
+ year={2013}
+}
+```
+
+
+
+
+
+
+
+For COFW data, please download from [COFW Dataset (Color Images)](http://www.vision.caltech.edu/xpburgos/ICCV13/Data/COFW_color.zip).
+Move `COFW_train_color.mat` and `COFW_test_color.mat` to `data/cofw/` and make them look like:
+
+```text
+mmpose
+├── mmpose
+├── docs
+├── tests
+├── tools
+├── configs
+`── data
+ │── cofw
+ |── COFW_train_color.mat
+ |── COFW_test_color.mat
+```
+
+Run the following script under `{MMPose}/data`
+
+`python tools/dataset_converters/parse_cofw_dataset.py`
+
+And you will get
+
+```text
+mmpose
+├── mmpose
+├── docs
+├── tests
+├── tools
+├── configs
+`── data
+ │── cofw
+ |── COFW_train_color.mat
+ |── COFW_test_color.mat
+ |── annotations
+ | |── cofw_train.json
+ | |── cofw_test.json
+ |── images
+ |── 000001.jpg
+ |── 000002.jpg
+```
+
+## COCO-WholeBody (Face)
+
+
+
+
+COCO-WholeBody-Face (ECCV'2020)
+
+```bibtex
+@inproceedings{jin2020whole,
+ title={Whole-Body Human Pose Estimation in the Wild},
+ author={Jin, Sheng and Xu, Lumin and Xu, Jin and Wang, Can and Liu, Wentao and Qian, Chen and Ouyang, Wanli and Luo, Ping},
+ booktitle={Proceedings of the European Conference on Computer Vision (ECCV)},
+ year={2020}
+}
+```
+
+
+
+
+
+
+
+For the [COCO-WholeBody](https://github.com/jin-s13/COCO-WholeBody/) dataset, images can be downloaded from [COCO download](http://cocodataset.org/#download); 2017 Train/Val is needed for COCO keypoints training and validation.
+Download the COCO-WholeBody annotations for [Train](https://drive.google.com/file/d/1thErEToRbmM9uLNi1JXXfOsaS5VK2FXf/view?usp=sharing) / [Validation](https://drive.google.com/file/d/1N6VgwKnj8DeyGXCvp1eYgNbRmw6jdfrb/view?usp=sharing) (Google Drive).
+Download person detection result of COCO val2017 from [OneDrive](https://1drv.ms/f/s!AhIXJn_J-blWzzDXoz5BeFl8sWM-) or [GoogleDrive](https://drive.google.com/drive/folders/1fRUDNUDxe9fjqcRZ2bnF_TKMlO0nB_dk?usp=sharing).
+Download and extract them under $MMPOSE/data, and make them look like this:
+
+```text
+mmpose
+├── mmpose
+├── docs
+├── tests
+├── tools
+├── configs
+`── data
+ │── coco
+ │-- annotations
+ │ │-- coco_wholebody_train_v1.0.json
+ │ |-- coco_wholebody_val_v1.0.json
+ |-- person_detection_results
+ | |-- COCO_val2017_detections_AP_H_56_person.json
+ │-- train2017
+ │ │-- 000000000009.jpg
+ │ │-- 000000000025.jpg
+ │ │-- 000000000030.jpg
+ │ │-- ...
+ `-- val2017
+ │-- 000000000139.jpg
+ │-- 000000000285.jpg
+ │-- 000000000632.jpg
+ │-- ...
+
+```
+
+Please also install the latest version of [Extended COCO API](https://github.com/jin-s13/xtcocoapi) to support COCO-WholeBody evaluation:
+
+`pip install xtcocotools`
+
+## LaPa
+
+
+
+
+LaPa (AAAI'2020)
+
+```bibtex
+@inproceedings{liu2020new,
+ title={A New Dataset and Boundary-Attention Semantic Segmentation for Face Parsing.},
+ author={Liu, Yinglu and Shi, Hailin and Shen, Hao and Si, Yue and Wang, Xiaobo and Mei, Tao},
+ booktitle={AAAI},
+ pages={11637--11644},
+ year={2020}
+}
+```
+
+
+
+
+
+
+
+For [LaPa](https://github.com/JDAI-CV/lapa-dataset) dataset, images can be downloaded from [their github page](https://github.com/JDAI-CV/lapa-dataset).
+
+Download and extract them under $MMPOSE/data, and use our `tools/dataset_converters/lapa2coco.py` to make them look like this:
+
+```text
+mmpose
+├── mmpose
+├── docs
+├── tests
+├── tools
+├── configs
+`── data
+ │── LaPa
+ │-- annotations
+ │ │-- lapa_train.json
+ │ |-- lapa_val.json
+ │ |-- lapa_test.json
+ | |-- lapa_trainval.json
+ │-- train
+ │ │-- images
+ │ │-- labels
+ │ │-- landmarks
+ │-- val
+ │ │-- images
+ │ │-- labels
+ │ │-- landmarks
+ `-- test
+ │ │-- images
+ │ │-- labels
+ │ │-- landmarks
+
+```
diff --git a/internlm_langchain/knowledge_base/MMPose/content/2d_fashion_landmark.md b/internlm_langchain/knowledge_base/MMPose/content/2d_fashion_landmark.md
new file mode 100644
index 00000000..25b7fd7c
--- /dev/null
+++ b/internlm_langchain/knowledge_base/MMPose/content/2d_fashion_landmark.md
@@ -0,0 +1,3 @@
+# 2D Fashion Landmark Datasets
+
+Content under construction...
diff --git a/internlm_langchain/knowledge_base/MMPose/content/2d_hand_keypoint.md b/internlm_langchain/knowledge_base/MMPose/content/2d_hand_keypoint.md
new file mode 100644
index 00000000..aade3585
--- /dev/null
+++ b/internlm_langchain/knowledge_base/MMPose/content/2d_hand_keypoint.md
@@ -0,0 +1,348 @@
+# 2D Hand Keypoint Datasets
+
+It is recommended to symlink the dataset root to `$MMPOSE/data`.
+If your folder structure is different, you may need to change the corresponding paths in config files.
+
+MMPose supported datasets:
+
+- [OneHand10K](#onehand10k) \[ [Homepage](https://www.yangangwang.com/papers/WANG-MCC-2018-10.html) \]
+- [FreiHand](#freihand-dataset) \[ [Homepage](https://lmb.informatik.uni-freiburg.de/projects/freihand/) \]
+- [CMU Panoptic HandDB](#cmu-panoptic-handdb) \[ [Homepage](http://domedb.perception.cs.cmu.edu/handdb.html) \]
+- [InterHand2.6M](#interhand26m) \[ [Homepage](https://mks0601.github.io/InterHand2.6M/) \]
+- [RHD](#rhd-dataset) \[ [Homepage](https://lmb.informatik.uni-freiburg.de/resources/datasets/RenderedHandposeDataset.en.html) \]
+- [COCO-WholeBody-Hand](#coco-wholebody-hand) \[ [Homepage](https://github.com/jin-s13/COCO-WholeBody/) \]
+
+## OneHand10K
+
+
+
+
+OneHand10K (TCSVT'2019)
+
+```bibtex
+@article{wang2018mask,
+ title={Mask-pose cascaded cnn for 2d hand pose estimation from single color image},
+ author={Wang, Yangang and Peng, Cong and Liu, Yebin},
+ journal={IEEE Transactions on Circuits and Systems for Video Technology},
+ volume={29},
+ number={11},
+ pages={3258--3268},
+ year={2018},
+ publisher={IEEE}
+}
+```
+
+
+
+
+
+
+
+For [OneHand10K](https://www.yangangwang.com/papers/WANG-MCC-2018-10.html) data, please download from [OneHand10K Dataset](https://www.yangangwang.com/papers/WANG-MCC-2018-10.html).
+Please download the annotation files from [onehand10k_annotations](https://download.openmmlab.com/mmpose/datasets/onehand10k_annotations.tar).
+Extract them under {MMPose}/data, and make them look like this:
+
+```text
+mmpose
+├── mmpose
+├── docs
+├── tests
+├── tools
+├── configs
+`── data
+ │── onehand10k
+ |── annotations
+ | |── onehand10k_train.json
+ | |── onehand10k_test.json
+ `── Train
+ | |── source
+ | |── 0.jpg
+ | |── 1.jpg
+ | ...
+ `── Test
+ |── source
+ |── 0.jpg
+ |── 1.jpg
+
+```
+
+## FreiHAND Dataset
+
+
+
+
+FreiHand (ICCV'2019)
+
+```bibtex
+@inproceedings{zimmermann2019freihand,
+ title={Freihand: A dataset for markerless capture of hand pose and shape from single rgb images},
+ author={Zimmermann, Christian and Ceylan, Duygu and Yang, Jimei and Russell, Bryan and Argus, Max and Brox, Thomas},
+ booktitle={Proceedings of the IEEE International Conference on Computer Vision},
+ pages={813--822},
+ year={2019}
+}
+```
+
+
+
+
+
+
+
+For [FreiHAND](https://lmb.informatik.uni-freiburg.de/projects/freihand/) data, please download from [FreiHand Dataset](https://lmb.informatik.uni-freiburg.de/resources/datasets/FreihandDataset.en.html).
+Since the official dataset does not provide a validation set, we randomly split the training data into train/val/test with a ratio of 8:1:1.
+Please download the annotation files from [freihand_annotations](https://download.openmmlab.com/mmpose/datasets/frei_annotations.tar).
+Extract them under {MMPose}/data, and make them look like this:
+
+```text
+mmpose
+├── mmpose
+├── docs
+├── tests
+├── tools
+├── configs
+`── data
+ │── freihand
+ |── annotations
+ | |── freihand_train.json
+ | |── freihand_val.json
+ | |── freihand_test.json
+ `── training
+ |── rgb
+ | |── 00000000.jpg
+ | |── 00000001.jpg
+ | ...
+ |── mask
+ |── 00000000.jpg
+ |── 00000001.jpg
+ ...
+```
+
+## CMU Panoptic HandDB
+
+
+
+
+CMU Panoptic HandDB (CVPR'2017)
+
+```bibtex
+@inproceedings{simon2017hand,
+ title={Hand keypoint detection in single images using multiview bootstrapping},
+ author={Simon, Tomas and Joo, Hanbyul and Matthews, Iain and Sheikh, Yaser},
+ booktitle={Proceedings of the IEEE conference on Computer Vision and Pattern Recognition},
+ pages={1145--1153},
+ year={2017}
+}
+```
+
+
+
+
+
+
+
+For [CMU Panoptic HandDB](http://domedb.perception.cs.cmu.edu/handdb.html), please download from [CMU Panoptic HandDB](http://domedb.perception.cs.cmu.edu/handdb.html).
+Following [Simon et al.](https://arxiv.org/abs/1704.07809), panoptic images (hand143_panopticdb) and the MPII & NZSL training sets (manual_train) are used for training, while the MPII & NZSL test set (manual_test) is used for testing.
+Please download the annotation files from [panoptic_annotations](https://download.openmmlab.com/mmpose/datasets/panoptic_annotations.tar).
+Extract them under {MMPose}/data, and make them look like this:
+
+```text
+mmpose
+├── mmpose
+├── docs
+├── tests
+├── tools
+├── configs
+`── data
+ │── panoptic
+ |── annotations
+ | |── panoptic_train.json
+ | |── panoptic_test.json
+ |
+ `── hand143_panopticdb
+ | |── imgs
+ | | |── 00000000.jpg
+ | | |── 00000001.jpg
+ | | ...
+ |
+ `── hand_labels
+ |── manual_train
+ | |── 000015774_01_l.jpg
+ | |── 000015774_01_r.jpg
+ | ...
+ |
+ `── manual_test
+ |── 000648952_02_l.jpg
+ |── 000835470_01_l.jpg
+ ...
+```
+
+## InterHand2.6M
+
+
+
+
+InterHand2.6M (ECCV'2020)
+
+```bibtex
+@InProceedings{Moon_2020_ECCV_InterHand2.6M,
+author = {Moon, Gyeongsik and Yu, Shoou-I and Wen, He and Shiratori, Takaaki and Lee, Kyoung Mu},
+title = {InterHand2.6M: A Dataset and Baseline for 3D Interacting Hand Pose Estimation from a Single RGB Image},
+booktitle = {European Conference on Computer Vision (ECCV)},
+year = {2020}
+}
+```
+
+
+
+
+
+
+
+For [InterHand2.6M](https://mks0601.github.io/InterHand2.6M/), please download from [InterHand2.6M](https://mks0601.github.io/InterHand2.6M/).
+Please download the annotation files from [annotations](https://drive.google.com/drive/folders/1pWXhdfaka-J0fSAze0MsajN0VpZ8e8tO).
+Extract them under {MMPose}/data, and make them look like this:
+
+```text
+mmpose
+├── mmpose
+├── docs
+├── tests
+├── tools
+├── configs
+`── data
+ │── interhand2.6m
+ |── annotations
+ | |── all
+ | |── human_annot
+ | |── machine_annot
+ | |── skeleton.txt
+ | |── subject.txt
+ |
+ `── images
+ | |── train
+ | | |-- Capture0 ~ Capture26
+ | |── val
+ | | |-- Capture0
+ | |── test
+ | | |-- Capture0 ~ Capture7
+```
+
+## RHD Dataset
+
+
+
+
+RHD (ICCV'2017)
+
+```bibtex
+@TechReport{zb2017hand,
+ author={Christian Zimmermann and Thomas Brox},
+ title={Learning to Estimate 3D Hand Pose from Single RGB Images},
+ institution={arXiv:1705.01389},
+ year={2017},
+ note="https://arxiv.org/abs/1705.01389",
+ url="https://lmb.informatik.uni-freiburg.de/projects/hand3d/"
+}
+```
+
+
+
+
+
+
+
+For [RHD Dataset](https://lmb.informatik.uni-freiburg.de/resources/datasets/RenderedHandposeDataset.en.html), please download from [RHD Dataset](https://lmb.informatik.uni-freiburg.de/resources/datasets/RenderedHandposeDataset.en.html).
+Please download the annotation files from [rhd_annotations](https://download.openmmlab.com/mmpose/datasets/rhd_annotations.zip).
+Extract them under {MMPose}/data, and make them look like this:
+
+```text
+mmpose
+├── mmpose
+├── docs
+├── tests
+├── tools
+├── configs
+`── data
+ │── rhd
+ |── annotations
+ | |── rhd_train.json
+ | |── rhd_test.json
+ `── training
+ | |── color
+ | | |── 00000.jpg
+ | | |── 00001.jpg
+ | |── depth
+ | | |── 00000.jpg
+ | | |── 00001.jpg
+ | |── mask
+ | | |── 00000.jpg
+ | | |── 00001.jpg
+ `── evaluation
+ | |── color
+ | | |── 00000.jpg
+ | | |── 00001.jpg
+ | |── depth
+ | | |── 00000.jpg
+ | | |── 00001.jpg
+ | |── mask
+ | | |── 00000.jpg
+ | | |── 00001.jpg
+```
+
+## COCO-WholeBody (Hand)
+
+
+
+
+COCO-WholeBody-Hand (ECCV'2020)
+
+```bibtex
+@inproceedings{jin2020whole,
+ title={Whole-Body Human Pose Estimation in the Wild},
+ author={Jin, Sheng and Xu, Lumin and Xu, Jin and Wang, Can and Liu, Wentao and Qian, Chen and Ouyang, Wanli and Luo, Ping},
+ booktitle={Proceedings of the European Conference on Computer Vision (ECCV)},
+ year={2020}
+}
+```
+
+
+
+
+
+
+
+For the [COCO-WholeBody](https://github.com/jin-s13/COCO-WholeBody/) dataset, images can be downloaded from [COCO download](http://cocodataset.org/#download); 2017 Train/Val is needed for COCO keypoints training and validation.
+Download the COCO-WholeBody annotations for [Train](https://drive.google.com/file/d/1thErEToRbmM9uLNi1JXXfOsaS5VK2FXf/view?usp=sharing) / [Validation](https://drive.google.com/file/d/1N6VgwKnj8DeyGXCvp1eYgNbRmw6jdfrb/view?usp=sharing) (Google Drive).
+Download person detection result of COCO val2017 from [OneDrive](https://1drv.ms/f/s!AhIXJn_J-blWzzDXoz5BeFl8sWM-) or [GoogleDrive](https://drive.google.com/drive/folders/1fRUDNUDxe9fjqcRZ2bnF_TKMlO0nB_dk?usp=sharing).
+Download and extract them under $MMPOSE/data, and make them look like this:
+
+```text
+mmpose
+├── mmpose
+├── docs
+├── tests
+├── tools
+├── configs
+`── data
+ │── coco
+ │-- annotations
+ │ │-- coco_wholebody_train_v1.0.json
+ │ |-- coco_wholebody_val_v1.0.json
+ |-- person_detection_results
+ | |-- COCO_val2017_detections_AP_H_56_person.json
+ │-- train2017
+ │ │-- 000000000009.jpg
+ │ │-- 000000000025.jpg
+ │ │-- 000000000030.jpg
+ │ │-- ...
+ `-- val2017
+ │-- 000000000139.jpg
+ │-- 000000000285.jpg
+ │-- 000000000632.jpg
+ │-- ...
+```
+
+Please also install the latest version of [Extended COCO API](https://github.com/jin-s13/xtcocoapi) to support COCO-WholeBody evaluation:
+
+`pip install xtcocotools`
diff --git a/internlm_langchain/knowledge_base/MMPose/content/2d_wholebody_keypoint.md b/internlm_langchain/knowledge_base/MMPose/content/2d_wholebody_keypoint.md
new file mode 100644
index 00000000..a082c657
--- /dev/null
+++ b/internlm_langchain/knowledge_base/MMPose/content/2d_wholebody_keypoint.md
@@ -0,0 +1,133 @@
+# 2D Wholebody Keypoint Datasets
+
+It is recommended to symlink the dataset root to `$MMPOSE/data`.
+If your folder structure is different, you may need to change the corresponding paths in config files.
+
+MMPose supported datasets:
+
+- [COCO-WholeBody](#coco-wholebody) \[ [Homepage](https://github.com/jin-s13/COCO-WholeBody/) \]
+- [Halpe](#halpe) \[ [Homepage](https://github.com/Fang-Haoshu/Halpe-FullBody/) \]
+
+## COCO-WholeBody
+
+
+
+
+COCO-WholeBody (ECCV'2020)
+
+```bibtex
+@inproceedings{jin2020whole,
+ title={Whole-Body Human Pose Estimation in the Wild},
+ author={Jin, Sheng and Xu, Lumin and Xu, Jin and Wang, Can and Liu, Wentao and Qian, Chen and Ouyang, Wanli and Luo, Ping},
+ booktitle={Proceedings of the European Conference on Computer Vision (ECCV)},
+ year={2020}
+}
+```
+
+
+
+
+
+
+
+For the [COCO-WholeBody](https://github.com/jin-s13/COCO-WholeBody/) dataset, images can be downloaded from [COCO download](http://cocodataset.org/#download); 2017 Train/Val is needed for COCO keypoints training and validation.
+Download the COCO-WholeBody annotations for [Train](https://drive.google.com/file/d/1thErEToRbmM9uLNi1JXXfOsaS5VK2FXf/view?usp=sharing) / [Validation](https://drive.google.com/file/d/1N6VgwKnj8DeyGXCvp1eYgNbRmw6jdfrb/view?usp=sharing) (Google Drive).
+Download person detection result of COCO val2017 from [OneDrive](https://1drv.ms/f/s!AhIXJn_J-blWzzDXoz5BeFl8sWM-) or [GoogleDrive](https://drive.google.com/drive/folders/1fRUDNUDxe9fjqcRZ2bnF_TKMlO0nB_dk?usp=sharing).
+Download and extract them under $MMPOSE/data, and make them look like this:
+
+```text
+mmpose
+├── mmpose
+├── docs
+├── tests
+├── tools
+├── configs
+`── data
+ │── coco
+ │-- annotations
+ │ │-- coco_wholebody_train_v1.0.json
+ │ |-- coco_wholebody_val_v1.0.json
+ |-- person_detection_results
+ | |-- COCO_val2017_detections_AP_H_56_person.json
+ │-- train2017
+ │ │-- 000000000009.jpg
+ │ │-- 000000000025.jpg
+ │ │-- 000000000030.jpg
+ │ │-- ...
+ `-- val2017
+ │-- 000000000139.jpg
+ │-- 000000000285.jpg
+ │-- 000000000632.jpg
+ │-- ...
+
+```
+
+Please also install the latest version of [Extended COCO API](https://github.com/jin-s13/xtcocoapi) (version>=1.5) to support COCO-WholeBody evaluation:
+
+`pip install xtcocotools`
+
+## Halpe
+
+
+
+
+Halpe (CVPR'2020)
+
+```bibtex
+@inproceedings{li2020pastanet,
+ title={PaStaNet: Toward Human Activity Knowledge Engine},
+ author={Li, Yong-Lu and Xu, Liang and Liu, Xinpeng and Huang, Xijie and Xu, Yue and Wang, Shiyi and Fang, Hao-Shu and Ma, Ze and Chen, Mingyang and Lu, Cewu},
+ booktitle={CVPR},
+ year={2020}
+}
+```
+
+
+
+
+
+
+
+For [Halpe](https://github.com/Fang-Haoshu/Halpe-FullBody/) dataset, please download images and annotations from [Halpe download](https://github.com/Fang-Haoshu/Halpe-FullBody).
+The images of the training set are from [HICO-Det](https://drive.google.com/open?id=1QZcJmGVlF9f4h-XLWe9Gkmnmj2z1gSnk) and those of the validation set are from [COCO](http://images.cocodataset.org/zips/val2017.zip).
+Download person detection result of COCO val2017 from [OneDrive](https://1drv.ms/f/s!AhIXJn_J-blWzzDXoz5BeFl8sWM-) or [GoogleDrive](https://drive.google.com/drive/folders/1fRUDNUDxe9fjqcRZ2bnF_TKMlO0nB_dk?usp=sharing).
+Download and extract them under $MMPOSE/data, and make them look like this:
+
+```text
+mmpose
+├── mmpose
+├── docs
+├── tests
+├── tools
+├── configs
+`── data
+ │── halpe
+ │-- annotations
+ │ │-- halpe_train_v1.json
+ │ |-- halpe_val_v1.json
+ |-- person_detection_results
+ | |-- COCO_val2017_detections_AP_H_56_person.json
+ │-- hico_20160224_det
+ │ │-- anno_bbox.mat
+ │ │-- anno.mat
+ │ │-- README
+ │ │-- images
+ │ │ │-- train2015
+ │ │ │ │-- HICO_train2015_00000001.jpg
+ │ │ │ │-- HICO_train2015_00000002.jpg
+ │ │ │ │-- HICO_train2015_00000003.jpg
+ │ │ │ │-- ...
+ │ │ │-- test2015
+ │ │-- tools
+ │ │-- ...
+ `-- val2017
+ │-- 000000000139.jpg
+ │-- 000000000285.jpg
+ │-- 000000000632.jpg
+ │-- ...
+
+```
+
+Please also install the latest version of [Extended COCO API](https://github.com/jin-s13/xtcocoapi) (version>=1.5) to support Halpe evaluation:
+
+`pip install xtcocotools`
diff --git a/internlm_langchain/knowledge_base/MMPose/content/3d_body_keypoint.md b/internlm_langchain/knowledge_base/MMPose/content/3d_body_keypoint.md
new file mode 100644
index 00000000..82e21010
--- /dev/null
+++ b/internlm_langchain/knowledge_base/MMPose/content/3d_body_keypoint.md
@@ -0,0 +1,199 @@
+# 3D Body Keypoint Datasets
+
+It is recommended to symlink the dataset root to `$MMPOSE/data`.
+If your folder structure is different, you may need to change the corresponding paths in config files.
+
+MMPose supported datasets:
+
+- [Human3.6M](#human36m) \[ [Homepage](http://vision.imar.ro/human3.6m/description.php) \]
+- [CMU Panoptic](#cmu-panoptic) \[ [Homepage](http://domedb.perception.cs.cmu.edu/) \]
+- [Campus/Shelf](#campus-and-shelf) \[ [Homepage](http://campar.in.tum.de/Chair/MultiHumanPose) \]
+
+## Human3.6M
+
+
+
+
+Human3.6M (TPAMI'2014)
+
+```bibtex
+@article{h36m_pami,
+ author = {Ionescu, Catalin and Papava, Dragos and Olaru, Vlad and Sminchisescu, Cristian},
+ title = {Human3.6M: Large Scale Datasets and Predictive Methods for 3D Human Sensing in Natural Environments},
+ journal = {IEEE Transactions on Pattern Analysis and Machine Intelligence},
+ publisher = {IEEE Computer Society},
+ volume = {36},
+ number = {7},
+ pages = {1325-1339},
+ month = {jul},
+ year = {2014}
+}
+```
+
+
+
+
+
+
+
+For [Human3.6M](http://vision.imar.ro/human3.6m/description.php), please download from the official website and run the [preprocessing script](/tools/dataset_converters/preprocess_h36m.py), which will extract camera parameters and pose annotations at full framerate (50 FPS) and downsampled framerate (10 FPS). The processed data should have the following structure:
+
+```text
+mmpose
+├── mmpose
+├── docs
+├── tests
+├── tools
+├── configs
+`── data
+ ├── h36m
+ ├── annotation_body3d
+ | ├── cameras.pkl
+ | ├── fps50
+ | | ├── h36m_test.npz
+ | | ├── h36m_train.npz
+ | | ├── joint2d_rel_stats.pkl
+ | | ├── joint2d_stats.pkl
+ | | ├── joint3d_rel_stats.pkl
+ | | `── joint3d_stats.pkl
+ | `── fps10
+ | ├── h36m_test.npz
+ | ├── h36m_train.npz
+ | ├── joint2d_rel_stats.pkl
+ | ├── joint2d_stats.pkl
+ | ├── joint3d_rel_stats.pkl
+ | `── joint3d_stats.pkl
+ `── images
+ ├── S1
+ | ├── S1_Directions_1.54138969
+ | | ├── S1_Directions_1.54138969_00001.jpg
+ | | ├── S1_Directions_1.54138969_00002.jpg
+ | | ├── ...
+ | ├── ...
+ ├── S5
+ ├── S6
+ ├── S7
+ ├── S8
+ ├── S9
+ `── S11
+```
+
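+Once the preprocessing script finishes, a quick sanity check is to open one of the generated `.npz` files and list its arrays. This is a minimal sketch under the layout above; the array names depend on the preprocessing script:
+
+```python
+# Inspect the preprocessed Human3.6M annotations (path from the tree above).
+import numpy as np
+
+ann = np.load('data/h36m/annotation_body3d/fps50/h36m_train.npz')
+print(ann.files)            # names of the stored arrays
+for key in ann.files:
+    print(key, ann[key].shape)
+```
+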
+## CMU Panoptic
+
+
+CMU Panoptic (ICCV'2015)
+
+```bibtex
+@inproceedings{joo_iccv_2015,
+author = {Hanbyul Joo, Hao Liu, Lei Tan, Lin Gui, Bart Nabbe, Iain Matthews, Takeo Kanade, Shohei Nobuhara, and Yaser Sheikh},
+title = {Panoptic Studio: A Massively Multiview System for Social Motion Capture},
+booktitle = {ICCV},
+year = {2015}
+}
+```
+
+
+
+
+
+
+
+Please follow [voxelpose-pytorch](https://github.com/microsoft/voxelpose-pytorch) to prepare this dataset.
+
+1. Download the dataset by following the instructions in [panoptic-toolbox](https://github.com/CMU-Perceptual-Computing-Lab/panoptic-toolbox) and extract them under `$MMPOSE/data/panoptic`.
+
+2. Only download those sequences that are needed. You can also just download a subset of camera views by specifying the number of views (HD_Video_Number) and changing the camera order in `./scripts/getData.sh`. The used sequences and camera views can be found in [VoxelPose](https://arxiv.org/abs/2004.06239). Note that the sequence "160906_band3" might not be available due to errors on the server of CMU Panoptic.
+
+3. Note that we only use the HD videos, calibration data, and 3D body keypoints in the code. You can comment out other irrelevant parts of `./scripts/getData.sh`, such as downloading the 3D face data.
+
+The directory tree should be like this:
+
+```text
+mmpose
+├── mmpose
+├── docs
+├── tests
+├── tools
+├── configs
+`── data
+ ├── panoptic
+        ├── 160224_haggling1
+ | | ├── hdImgs
+ | | ├── hdvideos
+ | | ├── hdPose3d_stage1_coco19
+ | | ├── calibration_160224_haggling1.json
+ ├── 160226_haggling1
+ ├── ...
+```
+
+## Campus and Shelf
+
+
+Campus and Shelf (CVPR'2014)
+
+```bibtex
+@inproceedings {belagian14multi,
+ title = {{3D} Pictorial Structures for Multiple Human Pose Estimation},
+    author = {Belagiannis, Vasileios and Amin, Sikandar and Andriluka, Mykhaylo and Schiele, Bernt and Navab, Nassir and Ilic, Slobodan},
+ booktitle = {IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR)},
+ year = {2014},
+ month = {June},
+ organization={IEEE}
+}
+```
+
+
+
+
+
+
+
+Please follow [voxelpose-pytorch](https://github.com/microsoft/voxelpose-pytorch) to prepare these two datasets.
+
+1. Please download the datasets from the [official website](http://campar.in.tum.de/Chair/MultiHumanPose) and extract them under `$MMPOSE/data/campus` and `$MMPOSE/data/shelf`, respectively. The original data include images as well as the ground truth pose file `actorsGT.mat`.
+
+2. We directly use the processed camera parameters from [voxelpose-pytorch](https://github.com/microsoft/voxelpose-pytorch). You can download them from this repository and place them at `$MMPOSE/data/campus/calibration_campus.json` and `$MMPOSE/data/shelf/calibration_shelf.json`, respectively.
+
+3. Like [Voxelpose](https://github.com/microsoft/voxelpose-pytorch), due to the limited and incomplete annotations of these two datasets, we do not train the model on them. Instead, we directly use the 2D pose estimator trained on COCO, and train our 3D model with independent 3D human poses from the CMU Panoptic dataset, which are stored in `${MMPOSE}/data/panoptic_training_pose.pkl`.
+
+4. Like [Voxelpose](https://github.com/microsoft/voxelpose-pytorch), for testing, we first estimate 2D poses and generate 2D heatmaps for these two datasets. You can download the predicted poses from [voxelpose-pytorch](https://github.com/microsoft/voxelpose-pytorch) and place them in `$MMPOSE/data/campus/pred_campus_maskrcnn_hrnet_coco.pkl` and `$MMPOSE/data/shelf/pred_shelf_maskrcnn_hrnet_coco.pkl`, respectively. You can also use the models trained on COCO dataset (like HigherHRNet) to generate 2D heatmaps directly.
+
+The directory tree should be like this:
+
+```text
+mmpose
+├── mmpose
+├── docs
+├── tests
+├── tools
+├── configs
+`── data
+ ├── panoptic_training_pose.pkl
+ ├── campus
+ | ├── Camera0
+ | | | ├── campus4-c0-00000.png
+ | | | ├── ...
+ | | | ├── campus4-c0-01999.png
+ | ...
+ | ├── Camera2
+ | | | ├── campus4-c2-00000.png
+ | | | ├── ...
+ | | | ├── campus4-c2-01999.png
+ | ├── calibration_campus.json
+ | ├── pred_campus_maskrcnn_hrnet_coco.pkl
+ | ├── actorsGT.mat
+ ├── shelf
+ | ├── Camera0
+ | | | ├── img_000000.png
+ | | | ├── ...
+ | | | ├── img_003199.png
+ | ...
+ | ├── Camera4
+ | | | ├── img_000000.png
+ | | | ├── ...
+ | | | ├── img_003199.png
+ | ├── calibration_shelf.json
+ | ├── pred_shelf_maskrcnn_hrnet_coco.pkl
+ | ├── actorsGT.mat
+```
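+
+Before launching evaluation, you may want to confirm that all the prepared files are in place. A minimal sketch under the layout above:
+
+```python
+# Check that the prepared Campus/Shelf assets exist (paths from the tree above).
+from pathlib import Path
+
+required = [
+    'data/panoptic_training_pose.pkl',
+    'data/campus/calibration_campus.json',
+    'data/campus/pred_campus_maskrcnn_hrnet_coco.pkl',
+    'data/campus/actorsGT.mat',
+    'data/shelf/calibration_shelf.json',
+    'data/shelf/pred_shelf_maskrcnn_hrnet_coco.pkl',
+    'data/shelf/actorsGT.mat',
+]
+for path in required:
+    status = 'ok' if Path(path).exists() else 'MISSING'
+    print(f'{status:>8}  {path}')
+```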
diff --git a/internlm_langchain/knowledge_base/MMPose/content/3d_body_mesh.md b/internlm_langchain/knowledge_base/MMPose/content/3d_body_mesh.md
new file mode 100644
index 00000000..aced63c8
--- /dev/null
+++ b/internlm_langchain/knowledge_base/MMPose/content/3d_body_mesh.md
@@ -0,0 +1,342 @@
+# 3D Body Mesh Recovery Datasets
+
+It is recommended to symlink the dataset root to `$MMPOSE/data`.
+If your folder structure is different, you may need to change the corresponding paths in config files.
+
+To achieve high-quality human mesh estimation, we use multiple datasets for training.
+The following items should be prepared for human mesh training:
+
+
+
+- [3D Body Mesh Recovery Datasets](#3d-body-mesh-recovery-datasets)
+ - [Notes](#notes)
+ - [Annotation Files for Human Mesh Estimation](#annotation-files-for-human-mesh-estimation)
+ - [SMPL Model](#smpl-model)
+ - [COCO](#coco)
+ - [Human3.6M](#human36m)
+ - [MPI-INF-3DHP](#mpi-inf-3dhp)
+ - [LSP](#lsp)
+ - [LSPET](#lspet)
+ - [CMU MoShed Data](#cmu-moshed-data)
+
+
+
+## Notes
+
+### Annotation Files for Human Mesh Estimation
+
+For human mesh estimation, we use multiple datasets for training.
+The annotation of different datasets are preprocessed to the same format. Please
+follow the [preprocess procedure](https://github.com/nkolot/SPIN/tree/master/datasets/preprocess)
+of SPIN to generate the annotation files or download the processed files from
+[here](https://download.openmmlab.com/mmpose/datasets/mesh_annotation_files.zip),
+and make it look like this:
+
+```text
+mmpose
+├── mmpose
+├── docs
+├── tests
+├── tools
+├── configs
+`── data
+ │── mesh_annotation_files
+ ├── coco_2014_train.npz
+ ├── h36m_valid_protocol1.npz
+ ├── h36m_valid_protocol2.npz
+ ├── hr-lspet_train.npz
+ ├── lsp_dataset_original_train.npz
+ ├── mpi_inf_3dhp_train.npz
+ └── mpii_train.npz
+```
+
+### SMPL Model
+
+```bibtex
+@article{loper2015smpl,
+ title={SMPL: A skinned multi-person linear model},
+ author={Loper, Matthew and Mahmood, Naureen and Romero, Javier and Pons-Moll, Gerard and Black, Michael J},
+ journal={ACM transactions on graphics (TOG)},
+ volume={34},
+ number={6},
+ pages={1--16},
+ year={2015},
+ publisher={ACM New York, NY, USA}
+}
+```
+
+For human mesh estimation, the SMPL model is used to generate the human mesh.
+Please download the [gender neutral SMPL model](http://smplify.is.tue.mpg.de/),
+[joints regressor](https://download.openmmlab.com/mmpose/datasets/joints_regressor_cmr.npy)
+and [mean parameters](https://download.openmmlab.com/mmpose/datasets/smpl_mean_params.npz)
+under `$MMPOSE/models/smpl`, and make it look like this:
+
+```text
+mmpose
+├── mmpose
+├── ...
+├── models
+ │── smpl
+ ├── joints_regressor_cmr.npy
+ ├── smpl_mean_params.npz
+ └── SMPL_NEUTRAL.pkl
+```
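+
+To verify the auxiliary SMPL assets are readable, you can load them with NumPy. A minimal sketch under the layout above (the gender-neutral SMPL model itself is loaded by the mesh model and is not inspected here):
+
+```python
+# Load the SMPL auxiliary files (paths from the tree above) as a sanity check.
+import numpy as np
+
+mean_params = np.load('models/smpl/smpl_mean_params.npz')
+print('mean parameter arrays:', mean_params.files)
+
+joints_regressor = np.load('models/smpl/joints_regressor_cmr.npy')
+print('joints regressor shape:', joints_regressor.shape)
+```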
+
+## COCO
+
+
+
+
+COCO (ECCV'2014)
+
+```bibtex
+@inproceedings{lin2014microsoft,
+ title={Microsoft coco: Common objects in context},
+ author={Lin, Tsung-Yi and Maire, Michael and Belongie, Serge and Hays, James and Perona, Pietro and Ramanan, Deva and Doll{\'a}r, Piotr and Zitnick, C Lawrence},
+ booktitle={European conference on computer vision},
+ pages={740--755},
+ year={2014},
+ organization={Springer}
+}
+```
+
+
+
+For [COCO](http://cocodataset.org/) data, please download from [COCO download](http://cocodataset.org/#download). COCO'2014 Train is needed for human mesh estimation training.
+Download and extract them under `$MMPOSE/data`, and make them look like this:
+
+```text
+mmpose
+├── mmpose
+├── docs
+├── tests
+├── tools
+├── configs
+`── data
+ │── coco
+ │-- train2014
+ │ ├── COCO_train2014_000000000009.jpg
+ │ ├── COCO_train2014_000000000025.jpg
+ │ ├── COCO_train2014_000000000030.jpg
+ | │-- ...
+
+```
+
+## Human3.6M
+
+
+
+
+Human3.6M (TPAMI'2014)
+
+```bibtex
+@article{h36m_pami,
+ author = {Ionescu, Catalin and Papava, Dragos and Olaru, Vlad and Sminchisescu, Cristian},
+ title = {Human3.6M: Large Scale Datasets and Predictive Methods for 3D Human Sensing in Natural Environments},
+ journal = {IEEE Transactions on Pattern Analysis and Machine Intelligence},
+ publisher = {IEEE Computer Society},
+ volume = {36},
+ number = {7},
+ pages = {1325-1339},
+ month = {jul},
+ year = {2014}
+}
+```
+
+
+
+For [Human3.6M](http://vision.imar.ro/human3.6m/description.php), we use the MoShed data provided in [HMR](https://github.com/akanazawa/hmr) for training.
+However, due to license limitations, we are not allowed to redistribute the MoShed data.
+
+For the evaluation on Human3.6M dataset, please follow the
+[preprocess procedure](https://github.com/nkolot/SPIN/tree/master/datasets/preprocess)
+of SPIN to extract test images from
+[Human3.6M](http://vision.imar.ro/human3.6m/description.php) original videos,
+and make it look like this:
+
+```text
+mmpose
+├── mmpose
+├── docs
+├── tests
+├── tools
+├── configs
+`── data
+ │── Human3.6M
+ ├── images
+ ├── S11_Directions_1.54138969_000001.jpg
+ ├── S11_Directions_1.54138969_000006.jpg
+ ├── S11_Directions_1.54138969_000011.jpg
+ ├── ...
+```
+
+Since downloading the Human3.6M dataset is quite difficult, you can alternatively download a
+[zip file](https://drive.google.com/file/d/1WnRJD9FS3NUf7MllwgLRJJC-JgYFr8oi/view?usp=sharing)
+of the test images. However, due to license limitations, we are not allowed to
+redistribute the images either, so users need to download the original videos and
+extract the images by themselves.
+
+## MPI-INF-3DHP
+
+
+
+```bibtex
+@inproceedings{mono-3dhp2017,
+ author = {Mehta, Dushyant and Rhodin, Helge and Casas, Dan and Fua, Pascal and Sotnychenko, Oleksandr and Xu, Weipeng and Theobalt, Christian},
+ title = {Monocular 3D Human Pose Estimation In The Wild Using Improved CNN Supervision},
+ booktitle = {3D Vision (3DV), 2017 Fifth International Conference on},
+ url = {http://gvv.mpi-inf.mpg.de/3dhp_dataset},
+ year = {2017},
+ organization={IEEE},
+ doi={10.1109/3dv.2017.00064},
+}
+```
+
+For [MPI-INF-3DHP](http://gvv.mpi-inf.mpg.de/3dhp-dataset/), please follow the
+[preprocess procedure](https://github.com/nkolot/SPIN/tree/master/datasets/preprocess)
+of SPIN to sample images, and make them look like this:
+
+```text
+mmpose
+├── mmpose
+├── docs
+├── tests
+├── tools
+├── configs
+`── data
+ ├── mpi_inf_3dhp_test_set
+ │ ├── TS1
+ │ ├── TS2
+ │ ├── TS3
+ │ ├── TS4
+ │ ├── TS5
+ │ └── TS6
+ ├── S1
+ │ ├── Seq1
+ │ └── Seq2
+ ├── S2
+ │ ├── Seq1
+ │ └── Seq2
+ ├── S3
+ │ ├── Seq1
+ │ └── Seq2
+ ├── S4
+ │ ├── Seq1
+ │ └── Seq2
+ ├── S5
+ │ ├── Seq1
+ │ └── Seq2
+ ├── S6
+ │ ├── Seq1
+ │ └── Seq2
+ ├── S7
+ │ ├── Seq1
+ │ └── Seq2
+ └── S8
+ ├── Seq1
+ └── Seq2
+```
+
+## LSP
+
+
+
+```bibtex
+@inproceedings{johnson2010clustered,
+ title={Clustered Pose and Nonlinear Appearance Models for Human Pose Estimation.},
+ author={Johnson, Sam and Everingham, Mark},
+ booktitle={bmvc},
+ volume={2},
+ number={4},
+ pages={5},
+ year={2010},
+ organization={Citeseer}
+}
+```
+
+For [LSP](https://sam.johnson.io/research/lsp.html), please download the high resolution version
+[LSP dataset original](http://sam.johnson.io/research/lsp_dataset_original.zip).
+Extract them under `$MMPOSE/data`, and make them look like this:
+
+```text
+mmpose
+├── mmpose
+├── docs
+├── tests
+├── tools
+├── configs
+`── data
+ │── lsp_dataset_original
+ ├── images
+ ├── im0001.jpg
+ ├── im0002.jpg
+ └── ...
+```
+
+## LSPET
+
+
+
+```bibtex
+@inproceedings{johnson2011learning,
+ title={Learning effective human pose estimation from inaccurate annotation},
+ author={Johnson, Sam and Everingham, Mark},
+ booktitle={CVPR 2011},
+ pages={1465--1472},
+ year={2011},
+ organization={IEEE}
+}
+```
+
+For [LSPET](https://sam.johnson.io/research/lspet.html), please download its high resolution form
+[HR-LSPET](http://datasets.d2.mpi-inf.mpg.de/hr-lspet/hr-lspet.zip).
+Extract them under `$MMPOSE/data`, and make them look like this:
+
+```text
+mmpose
+├── mmpose
+├── docs
+├── tests
+├── tools
+├── configs
+`── data
+ │── lspet_dataset
+ ├── images
+ │ ├── im00001.jpg
+ │ ├── im00002.jpg
+ │ ├── im00003.jpg
+ │ └── ...
+ └── joints.mat
+```
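+
+A quick way to check the extracted annotations is to open `joints.mat` with SciPy. This is a minimal sketch under the layout above; the variable names inside the file depend on the dataset release:
+
+```python
+# List the variables stored in the HR-LSPET annotation file (path from above).
+from scipy.io import loadmat
+
+mat = loadmat('data/lspet_dataset/joints.mat')
+for key, value in mat.items():
+    if not key.startswith('__'):   # skip MATLAB metadata entries
+        print(key, getattr(value, 'shape', None))
+```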
+
+## CMU MoShed Data
+
+
+
+```bibtex
+@inproceedings{kanazawa2018end,
+ title={End-to-end recovery of human shape and pose},
+ author={Kanazawa, Angjoo and Black, Michael J and Jacobs, David W and Malik, Jitendra},
+ booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
+ pages={7122--7131},
+ year={2018}
+}
+```
+
+Real-world SMPL parameters are used for the adversarial training in human mesh estimation.
+The MoShed data provided in [HMR](https://github.com/akanazawa/hmr) is included in this
+[zip file](https://download.openmmlab.com/mmpose/datasets/mesh_annotation_files.zip).
+Please download and extract it under `$MMPOSE/data`, and make it look like this:
+
+```text
+mmpose
+├── mmpose
+├── docs
+├── tests
+├── tools
+├── configs
+`── data
+ │── mesh_annotation_files
+ ├── CMU_mosh.npz
+ └── ...
+```
diff --git a/internlm_langchain/knowledge_base/MMPose/content/3d_hand_keypoint.md b/internlm_langchain/knowledge_base/MMPose/content/3d_hand_keypoint.md
new file mode 100644
index 00000000..2b1f4d39
--- /dev/null
+++ b/internlm_langchain/knowledge_base/MMPose/content/3d_hand_keypoint.md
@@ -0,0 +1,59 @@
+# 3D Hand Keypoint Datasets
+
+It is recommended to symlink the dataset root to `$MMPOSE/data`.
+If your folder structure is different, you may need to change the corresponding paths in config files.
+
+MMPose supported datasets:
+
+- [InterHand2.6M](#interhand26m) \[ [Homepage](https://mks0601.github.io/InterHand2.6M/) \]
+
+## InterHand2.6M
+
+
+
+
+InterHand2.6M (ECCV'2020)
+
+```bibtex
+@InProceedings{Moon_2020_ECCV_InterHand2.6M,
+author = {Moon, Gyeongsik and Yu, Shoou-I and Wen, He and Shiratori, Takaaki and Lee, Kyoung Mu},
+title = {InterHand2.6M: A Dataset and Baseline for 3D Interacting Hand Pose Estimation from a Single RGB Image},
+booktitle = {European Conference on Computer Vision (ECCV)},
+year = {2020}
+}
+```
+
+
+
+
+
+
+
+For [InterHand2.6M](https://mks0601.github.io/InterHand2.6M/), please download from [InterHand2.6M](https://mks0601.github.io/InterHand2.6M/).
+Please download the annotation files from [annotations](https://drive.google.com/drive/folders/1pWXhdfaka-J0fSAze0MsajN0VpZ8e8tO).
+Extract them under `$MMPOSE/data`, and make them look like this:
+
+```text
+mmpose
+├── mmpose
+├── docs
+├── tests
+├── tools
+├── configs
+`── data
+ │── interhand2.6m
+ |── annotations
+ | |── all
+ | |── human_annot
+ | |── machine_annot
+ | |── skeleton.txt
+ | |── subject.txt
+ |
+ `── images
+ | |── train
+ | | |-- Capture0 ~ Capture26
+ | |── val
+ | | |-- Capture0
+ | |── test
+ | | |-- Capture0 ~ Capture7
+```
diff --git a/internlm_langchain/knowledge_base/MMPose/content/advanced_training.md b/internlm_langchain/knowledge_base/MMPose/content/advanced_training.md
new file mode 100644
index 00000000..dd02a766
--- /dev/null
+++ b/internlm_langchain/knowledge_base/MMPose/content/advanced_training.md
@@ -0,0 +1,104 @@
+# Advanced Training Settings
+
+## Resume Training
+
+Resuming training means continuing from the state saved in a previous run, where the state includes the model weights as well as the states of the optimizer and the parameter scheduler.
+
+### Automatically resume training
+
+Users can append `--resume` to the end of the training command to resume training. The program will automatically load the latest checkpoint from `work_dirs` and resume from it. If there is a latest `checkpoint` in `work_dir` (e.g. the previous run was interrupted), training will resume from that checkpoint; otherwise (e.g. the previous run did not get to save a checkpoint, or a new training task was started), training will start from scratch.
+
+Here is an example of resuming training:
+
+```shell
+python tools/train.py configs/body_2d_keypoint/topdown_heatmap/coco/td-hm_res50_8xb64-210e_coco-256x192.py --resume
+```
+
+### Resume training from a specified checkpoint
+
+You can also pass a specific `checkpoint` path to `--resume`. MMPose will load that checkpoint and resume training from it:
+
+```shell
+python tools/train.py configs/body_2d_keypoint/topdown_heatmap/coco/td-hm_res50_8xb64-210e_coco-256x192.py \
+    --resume work_dirs/td-hm_res50_8xb64-210e_coco-256x192/latest.pth
+```
+
+If you prefer to specify the `checkpoint` path in the config file, you also need to set the `load_from` field in addition to `resume=True`. Note that if only `load_from` is set without `resume=True`, only the weights in the checkpoint will be loaded and training will start from scratch, instead of continuing from the previous state.
+
+The following example is equivalent to specifying the `--resume` argument above:
+
+```python
+resume = True
+load_from = 'work_dirs/td-hm_res50_8xb64-210e_coco-256x192/latest.pth'
+# model settings
+model = dict(
+    ## omitted ##
+    )
+```
+
+## Automatic Mixed Precision (AMP) Training
+
+Mixed precision training shortens training time and lowers memory usage without changing the model or degrading training accuracy, which makes it possible to train with larger batch sizes, larger models, and larger input sizes.
+
+To enable automatic mixed precision (AMP) training, append `--amp` to the end of the training command:
+
+```shell
+python tools/train.py ${CONFIG_FILE} --amp
+```
+
+A concrete example:
+
+```shell
+python tools/train.py configs/body_2d_keypoint/topdown_heatmap/coco/td-hm_res50_8xb64-210e_coco-256x192.py --amp
+```
+
+## Setting the Random Seed
+
+If you want to specify the random seed during training, you can use the following command:
+
+```shell
+python ./tools/train.py \
+    ${CONFIG} \                          # path to the config file
+    --cfg-options randomness.seed=2023 \ # set the random seed to 2023
+    [randomness.diff_rank_seed=True] \   # set different seeds according to rank
+    [randomness.deterministic=True]      # set the cuDNN backend deterministic option to True
+# [] denotes optional arguments; do not type the [] when entering the command
+```
+
+`randomness` has three configurable fields:
+
+- `randomness.seed=2023`: set the random seed to `2023`.
+
+- `randomness.diff_rank_seed=True`: set different seeds according to the `rank` of each process. `diff_rank_seed` defaults to `False`.
+
+- `randomness.deterministic=True`: set the `cuDNN` backend deterministic option to `True`, i.e. set `torch.backends.cudnn.deterministic` to `True` and `torch.backends.cudnn.benchmark` to `False`. `deterministic` defaults to `False`. See [Pytorch Randomness](https://pytorch.org/docs/stable/notes/randomness.html) for more details.
+
+If you prefer to specify the random seed in the config file, you can set the `randomness` field as follows:
+
+```python
+randomness = dict(seed=2023)
+# model settings
+model = dict(
+    ## omitted ##
+    )
+```
+
+## Visualize Training with Tensorboard
+
+Install Tensorboard:
+
+```shell
+pip install tensorboard
+```
+
+Add the tensorboard configuration to the config file:
+
+```python
+visualizer = dict(vis_backends=[dict(type='LocalVisBackend'),dict(type='TensorboardVisBackend')])
+```
+
+After running the training command, tensorboard files will be generated in the visualization folder `work_dir/${CONFIG}/${TIMESTAMP}/vis_data`. Run the following command to view the loss, learning rate, accuracy, and other information in your browser with tensorboard:
+
+```shell
+tensorboard --logdir work_dir/${CONFIG}/${TIMESTAMP}/vis_data
+```
diff --git a/internlm_langchain/knowledge_base/MMPose/content/changelog.md b/internlm_langchain/knowledge_base/MMPose/content/changelog.md
new file mode 100644
index 00000000..68beeeb0
--- /dev/null
+++ b/internlm_langchain/knowledge_base/MMPose/content/changelog.md
@@ -0,0 +1,1316 @@
+# Changelog
+
+## **v1.0.0rc1 (14/02/2023)**
+
+**Highlights**
+
+- Release RTMPose, a high-performance real-time pose estimation algorithm with cross-platform deployment and inference support. See details at the [project page](/projects/rtmpose/)
+- Support several new algorithms: ViTPose (arXiv'2022), CID (CVPR'2022), DEKR (CVPR'2021)
+- Add Inferencer, a convenient inference interface that performs pose estimation and visualization on images, videos, and webcam streams with only one line of code
+- Introduce *Project*, a new form for rapid and easy implementation of new algorithms and features in MMPose, which is handier for community contributors
+
+**New Features**
+
+- Support RTMPose ([#1971](https://github.com/open-mmlab/mmpose/pull/1971), [#2024](https://github.com/open-mmlab/mmpose/pull/2024), [#2028](https://github.com/open-mmlab/mmpose/pull/2028), [#2030](https://github.com/open-mmlab/mmpose/pull/2030), [#2040](https://github.com/open-mmlab/mmpose/pull/2040), [#2057](https://github.com/open-mmlab/mmpose/pull/2057))
+- Support Inferencer ([#1969](https://github.com/open-mmlab/mmpose/pull/1969))
+- Support ViTPose ([#1876](https://github.com/open-mmlab/mmpose/pull/1876), [#2056](https://github.com/open-mmlab/mmpose/pull/2056), [#2058](https://github.com/open-mmlab/mmpose/pull/2058), [#2065](https://github.com/open-mmlab/mmpose/pull/2065))
+- Support CID ([#1907](https://github.com/open-mmlab/mmpose/pull/1907))
+- Support DEKR ([#1834](https://github.com/open-mmlab/mmpose/pull/1834), [#1901](https://github.com/open-mmlab/mmpose/pull/1901))
+- Support training with multiple datasets ([#1767](https://github.com/open-mmlab/mmpose/pull/1767), [#1930](https://github.com/open-mmlab/mmpose/pull/1930), [#1938](https://github.com/open-mmlab/mmpose/pull/1938), [#2025](https://github.com/open-mmlab/mmpose/pull/2025))
+- Add *project* to allow rapid and easy implementation of new models and features ([#1914](https://github.com/open-mmlab/mmpose/pull/1914))
+
+**Improvements**
+
+- Improve documentation quality ([#1846](https://github.com/open-mmlab/mmpose/pull/1846), [#1858](https://github.com/open-mmlab/mmpose/pull/1858), [#1872](https://github.com/open-mmlab/mmpose/pull/1872), [#1899](https://github.com/open-mmlab/mmpose/pull/1899), [#1925](https://github.com/open-mmlab/mmpose/pull/1925), [#1945](https://github.com/open-mmlab/mmpose/pull/1945), [#1952](https://github.com/open-mmlab/mmpose/pull/1952), [#1990](https://github.com/open-mmlab/mmpose/pull/1990), [#2023](https://github.com/open-mmlab/mmpose/pull/2023), [#2042](https://github.com/open-mmlab/mmpose/pull/2042))
+- Support visualizing keypoint indices ([#2051](https://github.com/open-mmlab/mmpose/pull/2051))
+- Support OpenPose style visualization ([#2055](https://github.com/open-mmlab/mmpose/pull/2055))
+- Accelerate image transpose in data pipelines with tensor operation ([#1976](https://github.com/open-mmlab/mmpose/pull/1976))
+- Support auto-import modules from registry ([#1961](https://github.com/open-mmlab/mmpose/pull/1961))
+- Support keypoint partition metric ([#1944](https://github.com/open-mmlab/mmpose/pull/1944))
+- Support SimCC 1D-heatmap visualization ([#1912](https://github.com/open-mmlab/mmpose/pull/1912))
+- Support saving predictions and data metainfo in demos ([#1814](https://github.com/open-mmlab/mmpose/pull/1814), [#1879](https://github.com/open-mmlab/mmpose/pull/1879))
+- Support SimCC with DARK ([#1870](https://github.com/open-mmlab/mmpose/pull/1870))
+- Remove Gaussian blur for offset maps in UDP-regress ([#1815](https://github.com/open-mmlab/mmpose/pull/1815))
+- Refactor encoding interface of Codec for better extendibility and easier configuration ([#1781](https://github.com/open-mmlab/mmpose/pull/1781))
+- Support evaluating CocoMetric without annotation file ([#1722](https://github.com/open-mmlab/mmpose/pull/1722))
+- Improve unit tests ([#1765](https://github.com/open-mmlab/mmpose/pull/1765))
+
+**Bug Fixes**
+
+- Fix repeated warnings from different ranks ([#2053](https://github.com/open-mmlab/mmpose/pull/2053))
+- Avoid frequent scope switching when using mmdet inference api ([#2039](https://github.com/open-mmlab/mmpose/pull/2039))
+- Remove EMA parameters and message hub data when publishing model checkpoints ([#2036](https://github.com/open-mmlab/mmpose/pull/2036))
+- Fix metainfo copying in dataset class ([#2017](https://github.com/open-mmlab/mmpose/pull/2017))
+- Fix top-down demo bug when there is no object detected ([#2007](https://github.com/open-mmlab/mmpose/pull/2007))
+- Fix config errors ([#1882](https://github.com/open-mmlab/mmpose/pull/1882), [#1906](https://github.com/open-mmlab/mmpose/pull/1906), [#1995](https://github.com/open-mmlab/mmpose/pull/1995))
+- Fix image demo failure when GUI is unavailable ([#1968](https://github.com/open-mmlab/mmpose/pull/1968))
+- Fix bug in AdaptiveWingLoss ([#1953](https://github.com/open-mmlab/mmpose/pull/1953))
+- Fix incorrect importing of RepeatDataset which is deprecated ([#1943](https://github.com/open-mmlab/mmpose/pull/1943))
+- Fix bug in bottom-up datasets that ignores images without instances ([#1752](https://github.com/open-mmlab/mmpose/pull/1752), [#1936](https://github.com/open-mmlab/mmpose/pull/1936))
+- Fix upstream dependency issues ([#1867](https://github.com/open-mmlab/mmpose/pull/1867), [#1921](https://github.com/open-mmlab/mmpose/pull/1921))
+- Fix evaluation issues and update results ([#1763](https://github.com/open-mmlab/mmpose/pull/1763), [#1773](https://github.com/open-mmlab/mmpose/pull/1773), [#1780](https://github.com/open-mmlab/mmpose/pull/1780), [#1850](https://github.com/open-mmlab/mmpose/pull/1850), [#1868](https://github.com/open-mmlab/mmpose/pull/1868))
+- Fix local registry missing warnings ([#1849](https://github.com/open-mmlab/mmpose/pull/1849))
+- Remove deprecated scripts for model deployment ([#1845](https://github.com/open-mmlab/mmpose/pull/1845))
+- Fix a bug in input transformation in BaseHead ([#1843](https://github.com/open-mmlab/mmpose/pull/1843))
+- Fix an interface mismatch with MMDetection in webcam demo ([#1813](https://github.com/open-mmlab/mmpose/pull/1813))
+- Fix a bug in heatmap visualization that causes incorrect scale ([#1800](https://github.com/open-mmlab/mmpose/pull/1800))
+- Add model metafiles ([#1768](https://github.com/open-mmlab/mmpose/pull/1768))
+
+## **v1.0.0rc0 (14/10/2022)**
+
+**New Features**
+
+- Support 4 light-weight pose estimation algorithms: [SimCC](https://doi.org/10.48550/arxiv.2107.03332) (ECCV'2022), [Debias-IPR](https://openaccess.thecvf.com/content/ICCV2021/papers/Gu_Removing_the_Bias_of_Integral_Pose_Regression_ICCV_2021_paper.pdf) (ICCV'2021), [IPR](https://arxiv.org/abs/1711.08229) (ECCV'2018), and [DSNT](https://arxiv.org/abs/1801.07372v2) (ArXiv'2018) ([#1628](https://github.com/open-mmlab/mmpose/pull/1628))
+
+**Migrations**
+
+- Add Webcam API in MMPose 1.0 ([#1638](https://github.com/open-mmlab/mmpose/pull/1638), [#1662](https://github.com/open-mmlab/mmpose/pull/1662)) @Ben-Louis
+- Add codec for Associative Embedding (beta) ([#1603](https://github.com/open-mmlab/mmpose/pull/1603)) @ly015
+
+**Improvements**
+
+- Add a colab tutorial for MMPose 1.0 ([#1660](https://github.com/open-mmlab/mmpose/pull/1660)) @Tau-J
+- Add model index in config folder ([#1710](https://github.com/open-mmlab/mmpose/pull/1710), [#1709](https://github.com/open-mmlab/mmpose/pull/1709), [#1627](https://github.com/open-mmlab/mmpose/pull/1627)) @ly015, @Tau-J, @Ben-Louis
+- Update and improve documentation ([#1692](https://github.com/open-mmlab/mmpose/pull/1692), [#1656](https://github.com/open-mmlab/mmpose/pull/1656), [#1681](https://github.com/open-mmlab/mmpose/pull/1681), [#1677](https://github.com/open-mmlab/mmpose/pull/1677), [#1664](https://github.com/open-mmlab/mmpose/pull/1664), [#1659](https://github.com/open-mmlab/mmpose/pull/1659)) @Tau-J, @Ben-Louis, @liqikai9
+- Improve config structures and formats ([#1651](https://github.com/open-mmlab/mmpose/pull/1651)) @liqikai9
+
+**Bug Fixes**
+
+- Update mmengine version requirements ([#1715](https://github.com/open-mmlab/mmpose/pull/1715)) @Ben-Louis
+- Update dependencies of pre-commit hooks ([#1705](https://github.com/open-mmlab/mmpose/pull/1705)) @Ben-Louis
+- Fix mmcv version in DockerFile ([#1704](https://github.com/open-mmlab/mmpose/pull/1704))
+- Fix a bug in setting dataset metainfo in configs ([#1684](https://github.com/open-mmlab/mmpose/pull/1684)) @ly015
+- Fix a bug in UDP training ([#1682](https://github.com/open-mmlab/mmpose/pull/1682)) @liqikai9
+- Fix a bug in Dark decoding ([#1676](https://github.com/open-mmlab/mmpose/pull/1676)) @liqikai9
+- Fix bugs in visualization ([#1671](https://github.com/open-mmlab/mmpose/pull/1671), [#1668](https://github.com/open-mmlab/mmpose/pull/1668), [#1657](https://github.com/open-mmlab/mmpose/pull/1657)) @liqikai9, @Ben-Louis
+- Fix incorrect flops calculation ([#1669](https://github.com/open-mmlab/mmpose/pull/1669)) @liqikai9
+- Fix `tensor.tile` compatibility issue for pytorch 1.6 ([#1658](https://github.com/open-mmlab/mmpose/pull/1658)) @ly015
+- Fix compatibility with `MultilevelPixelData` ([#1647](https://github.com/open-mmlab/mmpose/pull/1647)) @liqikai9
+
+## **v1.0.0beta (1/09/2022)**
+
+We are excited to announce the release of MMPose 1.0.0beta.
+MMPose 1.0.0beta is the first version of MMPose 1.x, a part of the OpenMMLab 2.0 projects.
+Built upon the new [training engine](https://github.com/open-mmlab/mmengine),
+MMPose 1.x unifies the interfaces of dataset, models, evaluation, and visualization with faster training and testing speed.
+It also provides a general semi-supervised object detection framework and stronger baselines.
+
+**Highlights**
+
+- **New engines**. MMPose 1.x is based on [MMEngine](https://github.com/open-mmlab/mmengine), which provides a general and powerful runner that allows more flexible customizations and significantly simplifies the entrypoints of high-level interfaces.
+
+- **Unified interfaces**. As a part of the OpenMMLab 2.0 projects, MMPose 1.x unifies and refactors the interfaces and internal logic of training, testing, datasets, models, evaluation, and visualization. All the OpenMMLab 2.0 projects share the same design in those interfaces and logic to allow the emergence of multi-task/modality algorithms.
+
+- **More documentation and tutorials**. We add a bunch of documentation and tutorials to help users get started more smoothly. Read it [here](https://mmpose.readthedocs.io/en/latest/).
+
+**Breaking Changes**
+
+In this release, we made lots of major refactoring and modifications. Please refer to the [migration guide](../migration.md) for details and migration instructions.
+
+## **v0.28.1 (28/07/2022)**
+
+This release is meant to fix compatibility with the latest mmcv v1.6.1.
+
+## **v0.28.0 (06/07/2022)**
+
+**Highlights**
+
+- Support [TCFormer](https://openaccess.thecvf.com/content/CVPR2022/html/Zeng_Not_All_Tokens_Are_Equal_Human-Centric_Visual_Analysis_via_Token_CVPR_2022_paper.html) backbone, CVPR'2022 ([#1447](https://github.com/open-mmlab/mmpose/pull/1447), [#1452](https://github.com/open-mmlab/mmpose/pull/1452)) @zengwang430521
+
+- Add [RLE](https://arxiv.org/abs/2107.11291) models on COCO dataset ([#1424](https://github.com/open-mmlab/mmpose/pull/1424)) @Indigo6, @Ben-Louis, @ly015
+
+- Update swin models with better performance ([#1467](https://github.com/open-mmlab/mmpose/pull/1434)) @jin-s13
+
+**New Features**
+
+- Support [TCFormer](https://openaccess.thecvf.com/content/CVPR2022/html/Zeng_Not_All_Tokens_Are_Equal_Human-Centric_Visual_Analysis_via_Token_CVPR_2022_paper.html) backbone, CVPR'2022 ([#1447](https://github.com/open-mmlab/mmpose/pull/1447), [#1452](https://github.com/open-mmlab/mmpose/pull/1452)) @zengwang430521
+
+- Add [RLE](https://arxiv.org/abs/2107.11291) models on COCO dataset ([#1424](https://github.com/open-mmlab/mmpose/pull/1424)) @Indigo6, @Ben-Louis, @ly015
+
+- Support layer decay optimizer constructor and learning rate decay optimizer constructor ([#1423](https://github.com/open-mmlab/mmpose/pull/1423)) @jin-s13
+
+**Improvements**
+
+- Improve documentation quality ([#1416](https://github.com/open-mmlab/mmpose/pull/1416), [#1421](https://github.com/open-mmlab/mmpose/pull/1421), [#1423](https://github.com/open-mmlab/mmpose/pull/1423), [#1426](https://github.com/open-mmlab/mmpose/pull/1426), [#1458](https://github.com/open-mmlab/mmpose/pull/1458), [#1463](https://github.com/open-mmlab/mmpose/pull/1463)) @ly015, @liqikai9
+
+- Support installation by [mim](https://github.com/open-mmlab/mim) ([#1425](https://github.com/open-mmlab/mmpose/pull/1425)) @liqikai9
+
+- Support PAVI logger ([#1434](https://github.com/open-mmlab/mmpose/pull/1434)) @EvelynWang-0423
+
+- Add progress bar for some demos ([#1454](https://github.com/open-mmlab/mmpose/pull/1454)) @liqikai9
+
+- Webcam API supports quick device setting in terminal commands ([#1466](https://github.com/open-mmlab/mmpose/pull/1466)) @ly015
+
+- Update swin models with better performance ([#1467](https://github.com/open-mmlab/mmpose/pull/1434)) @jin-s13
+
+**Bug Fixes**
+
+- Rename `custom_hooks_config` to `custom_hooks` in configs to align with the documentation ([#1427](https://github.com/open-mmlab/mmpose/pull/1427)) @ly015
+
+- Fix deadlock issue in Webcam API ([#1430](https://github.com/open-mmlab/mmpose/pull/1430)) @ly015
+
+- Fix smoother configs in video 3D demo ([#1457](https://github.com/open-mmlab/mmpose/pull/1457)) @ly015
+
+## **v0.27.0 (07/06/2022)**
+
+**Highlights**
+
+- Support hand gesture recognition
+
+ - Try the demo for gesture recognition
+ - Learn more about the algorithm, dataset and experiment results
+
+- Major upgrade to the Webcam API
+
+ - Tutorials (EN|zh_CN)
+ - [API Reference](https://mmpose.readthedocs.io/en/latest/api.html#mmpose-apis-webcam)
+ - Demo
+
+**New Features**
+
+- Support gesture recognition algorithm [MTUT](https://openaccess.thecvf.com/content_CVPR_2019/html/Abavisani_Improving_the_Performance_of_Unimodal_Dynamic_Hand-Gesture_Recognition_With_Multimodal_CVPR_2019_paper.html) CVPR'2019 and dataset [NVGesture](https://openaccess.thecvf.com/content_cvpr_2016/html/Molchanov_Online_Detection_and_CVPR_2016_paper.html) CVPR'2016 ([#1380](https://github.com/open-mmlab/mmpose/pull/1380)) @Ben-Louis
+
+**Improvements**
+
+- Upgrade Webcam API and related documents ([#1393](https://github.com/open-mmlab/mmpose/pull/1393), [#1404](https://github.com/open-mmlab/mmpose/pull/1404), [#1413](https://github.com/open-mmlab/mmpose/pull/1413)) @ly015
+
+- Support exporting COCO inference result without the annotation file ([#1368](https://github.com/open-mmlab/mmpose/pull/1368)) @liqikai9
+
+- Replace markdownlint with mdformat in CI to avoid the dependence on ruby [#1382](https://github.com/open-mmlab/mmpose/pull/1382) @ly015
+
+- Improve documentation quality ([#1385](https://github.com/open-mmlab/mmpose/pull/1385), [#1394](https://github.com/open-mmlab/mmpose/pull/1394), [#1395](https://github.com/open-mmlab/mmpose/pull/1395), [#1408](https://github.com/open-mmlab/mmpose/pull/1408)) @chubei-oppen, @ly015, @liqikai9
+
+**Bug Fixes**
+
+- Fix xywh->xyxy bbox conversion in dataset sanity check ([#1367](https://github.com/open-mmlab/mmpose/pull/1367)) @jin-s13
+
+- Fix a bug in two-stage 3D keypoint demo ([#1373](https://github.com/open-mmlab/mmpose/pull/1373)) @ly015
+
+- Fix out-dated settings in PVT configs ([#1376](https://github.com/open-mmlab/mmpose/pull/1376)) @ly015
+
+- Fix myst settings for document compiling ([#1381](https://github.com/open-mmlab/mmpose/pull/1381)) @ly015
+
+- Fix a bug in bbox transform ([#1384](https://github.com/open-mmlab/mmpose/pull/1384)) @ly015
+
+- Fix inaccurate description of `min_keypoints` in tracking apis ([#1398](https://github.com/open-mmlab/mmpose/pull/1398)) @pallgeuer
+
+- Fix warning with `torch.meshgrid` ([#1402](https://github.com/open-mmlab/mmpose/pull/1402)) @pallgeuer
+
+- Remove redundant transformer modules from `mmpose.datasets.backbones.utils` ([#1405](https://github.com/open-mmlab/mmpose/pull/1405)) @ly015
+
+## **v0.26.0 (05/05/2022)**
+
+**Highlights**
+
+- Support [RLE (Residual Log-likelihood Estimation)](https://arxiv.org/abs/2107.11291), ICCV'2021 ([#1259](https://github.com/open-mmlab/mmpose/pull/1259)) @Indigo6, @ly015
+
+- Support [Swin Transformer](https://arxiv.org/abs/2103.14030), ICCV'2021 ([#1300](https://github.com/open-mmlab/mmpose/pull/1300)) @yumendecc, @ly015
+
+- Support [PVT](https://arxiv.org/abs/2102.12122), ICCV'2021 and [PVTv2](https://arxiv.org/abs/2106.13797), CVMJ'2022 ([#1343](https://github.com/open-mmlab/mmpose/pull/1343)) @zengwang430521
+
+- Speed up inference and reduce CPU usage by optimizing the pre-processing pipeline ([#1320](https://github.com/open-mmlab/mmpose/pull/1320)) @chenxinfeng4, @liqikai9
+
+**New Features**
+
+- Support [RLE (Residual Log-likelihood Estimation)](https://arxiv.org/abs/2107.11291), ICCV'2021 ([#1259](https://github.com/open-mmlab/mmpose/pull/1259)) @Indigo6, @ly015
+
+- Support [Swin Transformer](https://arxiv.org/abs/2103.14030), ICCV'2021 ([#1300](https://github.com/open-mmlab/mmpose/pull/1300)) @yumendecc, @ly015
+
+- Support [PVT](https://arxiv.org/abs/2102.12122), ICCV'2021 and [PVTv2](https://arxiv.org/abs/2106.13797), CVMJ'2022 ([#1343](https://github.com/open-mmlab/mmpose/pull/1343)) @zengwang430521
+
+- Support [FPN](https://openaccess.thecvf.com/content_cvpr_2017/html/Lin_Feature_Pyramid_Networks_CVPR_2017_paper.html), CVPR'2017 ([#1300](https://github.com/open-mmlab/mmpose/pull/1300)) @yumendecc, @ly015
+
+**Improvements**
+
+- Speed up inference and reduce CPU usage by optimizing the pre-processing pipeline ([#1320](https://github.com/open-mmlab/mmpose/pull/1320)) @chenxinfeng4, @liqikai9
+
+- Video demo supports models that requires multi-frame inputs ([#1300](https://github.com/open-mmlab/mmpose/pull/1300)) @liqikai9, @jin-s13
+
+- Update benchmark regression list ([#1328](https://github.com/open-mmlab/mmpose/pull/1328)) @ly015, @liqikai9
+
+- Remove unnecessary warnings in `TopDownPoseTrack18VideoDataset` ([#1335](https://github.com/open-mmlab/mmpose/pull/1335)) @liqikai9
+
+- Improve documentation quality ([#1313](https://github.com/open-mmlab/mmpose/pull/1313), [#1305](https://github.com/open-mmlab/mmpose/pull/1305)) @Ben-Louis, @ly015
+
+- Update deprecating settings in configs ([#1317](https://github.com/open-mmlab/mmpose/pull/1317)) @ly015
+
+**Bug Fixes**
+
+- Fix a bug in human skeleton grouping that may skip the matching process unexpectedly when `ignore_to_much` is True ([#1341](https://github.com/open-mmlab/mmpose/pull/1341)) @daixinghome
+
+- Fix a GPG key error that leads to CI failure ([#1354](https://github.com/open-mmlab/mmpose/pull/1354)) @ly015
+
+- Fix bugs in distributed training script ([#1338](https://github.com/open-mmlab/mmpose/pull/1338), [#1298](https://github.com/open-mmlab/mmpose/pull/1298)) @ly015
+
+- Fix an upstream bug in xtcocotools that causes incorrect AP(M) results ([#1308](https://github.com/open-mmlab/mmpose/pull/1308)) @jin-s13, @ly015
+
+- Fix indentation errors in the colab tutorial ([#1298](https://github.com/open-mmlab/mmpose/pull/1298)) @YuanZi1501040205
+
+- Fix incompatible model weight initialization with other OpenMMLab codebases ([#1329](https://github.com/open-mmlab/mmpose/pull/1329)) @274869388
+
+- Fix HRNet FP16 checkpoints download URL ([#1309](https://github.com/open-mmlab/mmpose/pull/1309)) @YinAoXiong
+
+- Fix typos in `body3d_two_stage_video_demo.py` ([#1295](https://github.com/open-mmlab/mmpose/pull/1295)) @mucozcan
+
+**Breaking Changes**
+
+- Refactor bbox processing in datasets and pipelines ([#1311](https://github.com/open-mmlab/mmpose/pull/1311)) @ly015, @Ben-Louis
+
+- The bbox format conversion (xywh to center-scale) and random translation are moved from the dataset to the pipeline. A comparison between the new and old versions is given below:
+
+  - Dataset (e.g. [TopDownCOCODataset](https://github.com/open-mmlab/mmpose/blob/master/mmpose/datasets/datasets/top_down/topdown_coco_dataset.py)):
+
+    - v0.26.0: the data sample only contains the bbox, e.g. `rec.append({'bbox': obj['clean_bbox'][:4], ...})`.
+    - v0.25.0: the dataset converts the bbox from xywh to center-scale via `center, scale = self._xywh2cs(*obj['clean_bbox'][:4])`, and the data sample contains the center and scale as well, e.g. `rec.append({'bbox': obj['clean_bbox'][:4], 'center': center, 'scale': scale, ...})`.
+
+  - With the new design, the bbox random translation is applied every epoch, instead of only once at annotation loading (v0.25.0 had no counterpart for this).
+
+  - BC Breaking: the method `_xywh2cs` of dataset base classes (e.g. [Kpt2dSviewRgbImgTopDownDataset](https://github.com/open-mmlab/mmpose/blob/master/mmpose/datasets/datasets/base/kpt_2d_sview_rgb_img_top_down_dataset.py)) will be deprecated in the future. Custom datasets will need modifications to move the bbox format conversion to pipelines.
+
+## **v0.25.0 (02/04/2022)**
+
+**Highlights**
+
+- Support Shelf and Campus datasets with pre-trained VoxelPose models, ["3D Pictorial Structures for Multiple Human Pose Estimation"](http://campar.in.tum.de/pub/belagiannis2014cvpr/belagiannis2014cvpr.pdf), CVPR'2014 ([#1225](https://github.com/open-mmlab/mmpose/pull/1225)) @liqikai9, @wusize
+
+- Add `Smoother` module for temporal smoothing of the pose estimation with configurable filters ([#1127](https://github.com/open-mmlab/mmpose/pull/1127)) @ailingzengzzz, @ly015
+
+- Support SmoothNet for pose smoothing, ["SmoothNet: A Plug-and-Play Network for Refining Human Poses in Videos"](https://arxiv.org/abs/2112.13715), arXiv'2021 ([#1279](https://github.com/open-mmlab/mmpose/pull/1279)) @ailingzengzzz, @ly015
+
+- Add multiview 3D pose estimation demo ([#1270](https://github.com/open-mmlab/mmpose/pull/1270)) @wusize
+
+**New Features**
+
+- Support Shelf and Campus datasets with pre-trained VoxelPose models, ["3D Pictorial Structures for Multiple Human Pose Estimation"](http://campar.in.tum.de/pub/belagiannis2014cvpr/belagiannis2014cvpr.pdf), CVPR'2014 ([#1225](https://github.com/open-mmlab/mmpose/pull/1225)) @liqikai9, @wusize
+
+- Add `Smoother` module for temporal smoothing of the pose estimation with configurable filters ([#1127](https://github.com/open-mmlab/mmpose/pull/1127)) @ailingzengzzz, @ly015
+
+- Support SmoothNet for pose smoothing, ["SmoothNet: A Plug-and-Play Network for Refining Human Poses in Videos"](https://arxiv.org/abs/2112.13715), arXiv'2021 ([#1279](https://github.com/open-mmlab/mmpose/pull/1279)) @ailingzengzzz, @ly015
+
+- Add multiview 3D pose estimation demo ([#1270](https://github.com/open-mmlab/mmpose/pull/1270)) @wusize
+
+- Support multi-machine distributed training ([#1248](https://github.com/open-mmlab/mmpose/pull/1248)) @ly015
+
+**Improvements**
+
+- Update HRFormer configs and checkpoints with relative position bias ([#1245](https://github.com/open-mmlab/mmpose/pull/1245)) @zengwang430521
+
+- Support using different random seed for each distributed node ([#1257](https://github.com/open-mmlab/mmpose/pull/1257), [#1229](https://github.com/open-mmlab/mmpose/pull/1229)) @ly015
+
+- Improve documentation quality ([#1275](https://github.com/open-mmlab/mmpose/pull/1275), [#1255](https://github.com/open-mmlab/mmpose/pull/1255), [#1258](https://github.com/open-mmlab/mmpose/pull/1258), [#1249](https://github.com/open-mmlab/mmpose/pull/1249), [#1247](https://github.com/open-mmlab/mmpose/pull/1247), [#1240](https://github.com/open-mmlab/mmpose/pull/1240), [#1235](https://github.com/open-mmlab/mmpose/pull/1235)) @ly015, @jin-s13, @YoniChechik
+
+**Bug Fixes**
+
+- Fix keypoint index in RHD dataset meta information ([#1265](https://github.com/open-mmlab/mmpose/pull/1265)) @liqikai9
+
+- Fix pre-commit hook unexpected behavior on Windows ([#1282](https://github.com/open-mmlab/mmpose/pull/1282)) @liqikai9
+
+- Remove python-dev installation in CI ([#1276](https://github.com/open-mmlab/mmpose/pull/1276)) @ly015
+
+- Unify hyphens in argument names in tools and demos ([#1271](https://github.com/open-mmlab/mmpose/pull/1271)) @ly015
+
+- Fix ambiguous channel size in `channel_shuffle` that may cause exporting failure (#1242) @PINTO0309
+
+- Fix a bug in Webcam API that causes single-class detectors fail ([#1239](https://github.com/open-mmlab/mmpose/pull/1239)) @674106399
+
+- Fix the issue that `custom_hook` can not be set in configs ([#1236](https://github.com/open-mmlab/mmpose/pull/1236)) @bladrome
+
+- Fix incompatible MMCV version in DockerFile @raykindle
+
+- Skip invisible joints in visualization ([#1228](https://github.com/open-mmlab/mmpose/pull/1228)) @womeier
+
+## **v0.24.0 (07/03/2022)**
+
+**Highlights**
+
+- Support HRFormer ["HRFormer: High-Resolution Vision Transformer for Dense Predict"](https://proceedings.neurips.cc/paper/2021/hash/3bbfdde8842a5c44a0323518eec97cbe-Abstract.html), NeurIPS'2021 ([#1203](https://github.com/open-mmlab/mmpose/pull/1203)) @zengwang430521
+
+- Support Windows installation with pip ([#1213](https://github.com/open-mmlab/mmpose/pull/1213)) @jin-s13, @ly015
+
+- Add WebcamAPI documents ([#1187](https://github.com/open-mmlab/mmpose/pull/1187)) @ly015
+
+**New Features**
+
+- Support HRFormer ["HRFormer: High-Resolution Vision Transformer for Dense Predict"](https://proceedings.neurips.cc/paper/2021/hash/3bbfdde8842a5c44a0323518eec97cbe-Abstract.html), NeurIPS'2021 ([#1203](https://github.com/open-mmlab/mmpose/pull/1203)) @zengwang430521
+
+- Support Windows installation with pip ([#1213](https://github.com/open-mmlab/mmpose/pull/1213)) @jin-s13, @ly015
+
+- Support CPU training with mmcv \< v1.4.4 ([#1161](https://github.com/open-mmlab/mmpose/pull/1161)) @EasonQYS, @ly015
+
+- Add "Valentine Magic" demo with WebcamAPI ([#1189](https://github.com/open-mmlab/mmpose/pull/1189), [#1191](https://github.com/open-mmlab/mmpose/pull/1191)) @liqikai9
+
+**Improvements**
+
+- Refactor multi-view 3D pose estimation framework towards better modularization and expansibility ([#1196](https://github.com/open-mmlab/mmpose/pull/1196)) @wusize
+
+- Add WebcamAPI documents and tutorials ([#1187](https://github.com/open-mmlab/mmpose/pull/1187)) @ly015
+
+- Refactor dataset evaluation interface to align with other OpenMMLab codebases ([#1209](https://github.com/open-mmlab/mmpose/pull/1209)) @ly015
+
+- Add deprecation message for deploy tools since [MMDeploy](https://github.com/open-mmlab/mmdeploy) has supported MMPose ([#1207](https://github.com/open-mmlab/mmpose/pull/1207)) @QwQ2000
+
+- Improve documentation quality ([#1206](https://github.com/open-mmlab/mmpose/pull/1206), [#1161](https://github.com/open-mmlab/mmpose/pull/1161)) @ly015
+
+- Switch to OpenMMLab official pre-commit-hook for copyright check ([#1214](https://github.com/open-mmlab/mmpose/pull/1214)) @ly015
+
+**Bug Fixes**
+
+- Fix hard-coded data collating and scattering in inference ([#1175](https://github.com/open-mmlab/mmpose/pull/1175)) @ly015
+
+- Fix model configs on JHMDB dataset ([#1188](https://github.com/open-mmlab/mmpose/pull/1188)) @jin-s13
+
+- Fix area calculation in pose tracking inference ([#1197](https://github.com/open-mmlab/mmpose/pull/1197)) @pallgeuer
+
+- Fix registry scope conflict of module wrapper ([#1204](https://github.com/open-mmlab/mmpose/pull/1204)) @ly015
+
+- Update MMCV installation in CI and documents ([#1205](https://github.com/open-mmlab/mmpose/pull/1205))
+
+- Fix incorrect color channel order in visualization functions ([#1212](https://github.com/open-mmlab/mmpose/pull/1212)) @ly015
+
+## **v0.23.0 (11/02/2022)**
+
+**Highlights**
+
+- Add [MMPose Webcam API](https://github.com/open-mmlab/mmpose/tree/master/tools/webcam): A simple yet powerful tools to develop interactive webcam applications with MMPose functions. ([#1178](https://github.com/open-mmlab/mmpose/pull/1178), [#1173](https://github.com/open-mmlab/mmpose/pull/1173), [#1173](https://github.com/open-mmlab/mmpose/pull/1173), [#1143](https://github.com/open-mmlab/mmpose/pull/1143), [#1094](https://github.com/open-mmlab/mmpose/pull/1094), [#1133](https://github.com/open-mmlab/mmpose/pull/1133), [#1098](https://github.com/open-mmlab/mmpose/pull/1098), [#1160](https://github.com/open-mmlab/mmpose/pull/1160)) @ly015, @jin-s13, @liqikai9, @wusize, @luminxu, @zengwang430521 @mzr1996
+
+**New Features**
+
+- Add [MMPose Webcam API](https://github.com/open-mmlab/mmpose/tree/master/tools/webcam): A simple yet powerful tools to develop interactive webcam applications with MMPose functions. ([#1178](https://github.com/open-mmlab/mmpose/pull/1178), [#1173](https://github.com/open-mmlab/mmpose/pull/1173), [#1173](https://github.com/open-mmlab/mmpose/pull/1173), [#1143](https://github.com/open-mmlab/mmpose/pull/1143), [#1094](https://github.com/open-mmlab/mmpose/pull/1094), [#1133](https://github.com/open-mmlab/mmpose/pull/1133), [#1098](https://github.com/open-mmlab/mmpose/pull/1098), [#1160](https://github.com/open-mmlab/mmpose/pull/1160)) @ly015, @jin-s13, @liqikai9, @wusize, @luminxu, @zengwang430521 @mzr1996
+
+- Support ConcatDataset ([#1139](https://github.com/open-mmlab/mmpose/pull/1139)) @Canwang-sjtu
+
+- Support CPU training and testing ([#1157](https://github.com/open-mmlab/mmpose/pull/1157)) @ly015
+
+**Improvements**
+
+- Add multi-processing configurations to speed up distributed training and testing ([#1146](https://github.com/open-mmlab/mmpose/pull/1146)) @ly015
+
+- Add default runtime config ([#1145](https://github.com/open-mmlab/mmpose/pull/1145))
+
+- Upgrade isort in pre-commit hook ([#1179](https://github.com/open-mmlab/mmpose/pull/1179)) @liqikai9
+
+- Update README and documents ([#1171](https://github.com/open-mmlab/mmpose/pull/1171), [#1167](https://github.com/open-mmlab/mmpose/pull/1167), [#1153](https://github.com/open-mmlab/mmpose/pull/1153), [#1149](https://github.com/open-mmlab/mmpose/pull/1149), [#1148](https://github.com/open-mmlab/mmpose/pull/1148), [#1147](https://github.com/open-mmlab/mmpose/pull/1147), [#1140](https://github.com/open-mmlab/mmpose/pull/1140)) @jin-s13, @wusize, @TommyZihao, @ly015
+
+**Bug Fixes**
+
+- Fix undeterministic behavior in pre-commit hooks ([#1136](https://github.com/open-mmlab/mmpose/pull/1136)) @jin-s13
+
+- Deprecate the support for "python setup.py test" ([#1179](https://github.com/open-mmlab/mmpose/pull/1179)) @ly015
+
+- Fix incompatible settings with MMCV on HSigmoid default parameters ([#1132](https://github.com/open-mmlab/mmpose/pull/1132)) @ly015
+
+- Fix albumentation installation ([#1184](https://github.com/open-mmlab/mmpose/pull/1184)) @BIGWangYuDong
+
+## **v0.22.0 (04/01/2022)**
+
+**Highlights**
+
+- Support VoxelPose ["VoxelPose: Towards Multi-Camera 3D Human Pose Estimation in Wild Environment"](https://arxiv.org/abs/2004.06239), ECCV'2020 ([#1050](https://github.com/open-mmlab/mmpose/pull/1050)) @wusize
+
+- Support Soft Wing loss ["Structure-Coherent Deep Feature Learning for Robust Face Alignment"](https://linchunze.github.io/papers/TIP21_Structure_coherent_FA.pdf), TIP'2021 ([#1077](https://github.com/open-mmlab/mmpose/pull/1077)) @jin-s13
+
+- Support Adaptive Wing loss ["Adaptive Wing Loss for Robust Face Alignment via Heatmap Regression"](https://arxiv.org/abs/1904.07399), ICCV'2019 ([#1072](https://github.com/open-mmlab/mmpose/pull/1072)) @jin-s13
+
+**New Features**
+
+- Support VoxelPose ["VoxelPose: Towards Multi-Camera 3D Human Pose Estimation in Wild Environment"](https://arxiv.org/abs/2004.06239), ECCV'2020 ([#1050](https://github.com/open-mmlab/mmpose/pull/1050)) @wusize
+
+- Support Soft Wing loss ["Structure-Coherent Deep Feature Learning for Robust Face Alignment"](https://linchunze.github.io/papers/TIP21_Structure_coherent_FA.pdf), TIP'2021 ([#1077](https://github.com/open-mmlab/mmpose/pull/1077)) @jin-s13
+
+- Support Adaptive Wing loss ["Adaptive Wing Loss for Robust Face Alignment via Heatmap Regression"](https://arxiv.org/abs/1904.07399), ICCV'2019 ([#1072](https://github.com/open-mmlab/mmpose/pull/1072)) @jin-s13
+
+- Add LiteHRNet-18 Checkpoints trained on COCO. ([#1120](https://github.com/open-mmlab/mmpose/pull/1120)) @jin-s13
+
+**Improvements**
+
+- Improve documentation quality ([#1115](https://github.com/open-mmlab/mmpose/pull/1115), [#1111](https://github.com/open-mmlab/mmpose/pull/1111), [#1105](https://github.com/open-mmlab/mmpose/pull/1105), [#1087](https://github.com/open-mmlab/mmpose/pull/1087), [#1086](https://github.com/open-mmlab/mmpose/pull/1086), [#1085](https://github.com/open-mmlab/mmpose/pull/1085), [#1084](https://github.com/open-mmlab/mmpose/pull/1084), [#1083](https://github.com/open-mmlab/mmpose/pull/1083), [#1124](https://github.com/open-mmlab/mmpose/pull/1124), [#1070](https://github.com/open-mmlab/mmpose/pull/1070), [#1068](https://github.com/open-mmlab/mmpose/pull/1068)) @jin-s13, @liqikai9, @ly015
+
+- Support CircleCI ([#1074](https://github.com/open-mmlab/mmpose/pull/1074)) @ly015
+
+- Skip unit tests in CI when only document files were changed ([#1074](https://github.com/open-mmlab/mmpose/pull/1074), [#1041](https://github.com/open-mmlab/mmpose/pull/1041)) @QwQ2000, @ly015
+
+- Support file_client_args in LoadImageFromFile ([#1076](https://github.com/open-mmlab/mmpose/pull/1076)) @jin-s13
+
+**Bug Fixes**
+
+- Fix a bug in Dark UDP postprocessing that causes error when the channel number is large. ([#1079](https://github.com/open-mmlab/mmpose/pull/1079), [#1116](https://github.com/open-mmlab/mmpose/pull/1116)) @X00123, @jin-s13
+
+- Fix hard-coded `sigmas` in bottom-up image demo ([#1107](https://github.com/open-mmlab/mmpose/pull/1107), [#1101](https://github.com/open-mmlab/mmpose/pull/1101)) @chenxinfeng4, @liqikai9
+
+- Fix unstable checks in unit tests ([#1112](https://github.com/open-mmlab/mmpose/pull/1112)) @ly015
+
+- Do not destroy NULL windows if `args.show==False` in demo scripts ([#1104](https://github.com/open-mmlab/mmpose/pull/1104)) @bladrome
+
+## **v0.21.0 (06/12/2021)**
+
+**Highlights**
+
+- Support ["Learning Temporal Pose Estimation from Sparsely-Labeled Videos"](https://arxiv.org/abs/1906.04016), NeurIPS'2019 ([#932](https://github.com/open-mmlab/mmpose/pull/932), [#1006](https://github.com/open-mmlab/mmpose/pull/1006), [#1036](https://github.com/open-mmlab/mmpose/pull/1036), [#1060](https://github.com/open-mmlab/mmpose/pull/1060)) @liqikai9
+
+- Add ViPNAS-MobileNetV3 models ([#1025](https://github.com/open-mmlab/mmpose/pull/1025)) @luminxu, @jin-s13
+
+- Add inference speed benchmark ([#1028](https://github.com/open-mmlab/mmpose/pull/1028), [#1034](https://github.com/open-mmlab/mmpose/pull/1034), [#1044](https://github.com/open-mmlab/mmpose/pull/1044)) @liqikai9
+
+**New Features**
+
+- Support ["Learning Temporal Pose Estimation from Sparsely-Labeled Videos"](https://arxiv.org/abs/1906.04016), NeurIPS'2019 ([#932](https://github.com/open-mmlab/mmpose/pull/932), [#1006](https://github.com/open-mmlab/mmpose/pull/1006), [#1036](https://github.com/open-mmlab/mmpose/pull/1036)) @liqikai9
+
+- Add ViPNAS-MobileNetV3 models ([#1025](https://github.com/open-mmlab/mmpose/pull/1025)) @luminxu, @jin-s13
+
+- Add light-weight top-down models for whole-body keypoint detection ([#1009](https://github.com/open-mmlab/mmpose/pull/1009), [#1020](https://github.com/open-mmlab/mmpose/pull/1020), [#1055](https://github.com/open-mmlab/mmpose/pull/1055)) @luminxu, @ly015
+
+- Add HRNet checkpoints with various settings on PoseTrack18 ([#1035](https://github.com/open-mmlab/mmpose/pull/1035)) @liqikai9
+
+**Improvements**
+
+- Add inference speed benchmark ([#1028](https://github.com/open-mmlab/mmpose/pull/1028), [#1034](https://github.com/open-mmlab/mmpose/pull/1034), [#1044](https://github.com/open-mmlab/mmpose/pull/1044)) @liqikai9
+
+- Update model metafile format ([#1001](https://github.com/open-mmlab/mmpose/pull/1001)) @ly015
+
+- Support minus output feature index in mobilenet_v3 ([#1005](https://github.com/open-mmlab/mmpose/pull/1005)) @luminxu
+
+- Improve documentation quality ([#1018](https://github.com/open-mmlab/mmpose/pull/1018), [#1026](https://github.com/open-mmlab/mmpose/pull/1026), [#1027](https://github.com/open-mmlab/mmpose/pull/1027), [#1031](https://github.com/open-mmlab/mmpose/pull/1031), [#1038](https://github.com/open-mmlab/mmpose/pull/1038), [#1046](https://github.com/open-mmlab/mmpose/pull/1046), [#1056](https://github.com/open-mmlab/mmpose/pull/1056), [#1057](https://github.com/open-mmlab/mmpose/pull/1057)) @edybk, @luminxu, @ly015, @jin-s13
+
+- Set default random seed in training initialization ([#1030](https://github.com/open-mmlab/mmpose/pull/1030)) @ly015
+
+- Skip CI when only specific files changed ([#1041](https://github.com/open-mmlab/mmpose/pull/1041), [#1059](https://github.com/open-mmlab/mmpose/pull/1059)) @QwQ2000, @ly015
+
+- Automatically cancel uncompleted action runs when new commit arrives ([#1053](https://github.com/open-mmlab/mmpose/pull/1053)) @ly015
+
+**Bug Fixes**
+
+- Update pose tracking demo to be compatible with latest mmtracking ([#1014](https://github.com/open-mmlab/mmpose/pull/1014)) @jin-s13
+
+- Fix symlink creation failure when installed in Windows environments ([#1039](https://github.com/open-mmlab/mmpose/pull/1039)) @QwQ2000
+
+- Fix AP-10K dataset sigmas ([#1040](https://github.com/open-mmlab/mmpose/pull/1040)) @jin-s13
+
+## **v0.20.0 (01/11/2021)**
+
+**Highlights**
+
+- Add AP-10K dataset for animal pose estimation ([#987](https://github.com/open-mmlab/mmpose/pull/987)) @Annbless, @AlexTheBad, @jin-s13, @ly015
+
+- Support TorchServe ([#979](https://github.com/open-mmlab/mmpose/pull/979)) @ly015
+
+**New Features**
+
+- Add AP-10K dataset for animal pose estimation ([#987](https://github.com/open-mmlab/mmpose/pull/987)) @Annbless, @AlexTheBad, @jin-s13, @ly015
+
+- Add HRNetv2 checkpoints on 300W and COFW datasets ([#980](https://github.com/open-mmlab/mmpose/pull/980)) @jin-s13
+
+- Support TorchServe ([#979](https://github.com/open-mmlab/mmpose/pull/979)) @ly015
+
+**Bug Fixes**
+
+- Fix some deprecated or risky settings in configs ([#963](https://github.com/open-mmlab/mmpose/pull/963), [#976](https://github.com/open-mmlab/mmpose/pull/976), [#992](https://github.com/open-mmlab/mmpose/pull/992)) @jin-s13, @wusize
+
+- Fix issues of default arguments of training and testing scripts ([#970](https://github.com/open-mmlab/mmpose/pull/970), [#985](https://github.com/open-mmlab/mmpose/pull/985)) @liqikai9, @wusize
+
+- Fix heatmap and tag size mismatch in bottom-up with UDP ([#994](https://github.com/open-mmlab/mmpose/pull/994)) @wusize
+
+- Fix python3.9 installation in CI ([#983](https://github.com/open-mmlab/mmpose/pull/983)) @ly015
+
+- Fix model zoo document integrity issue ([#990](https://github.com/open-mmlab/mmpose/pull/990)) @jin-s13
+
+**Improvements**
+
+- Support non-square input shape for bottom-up ([#991](https://github.com/open-mmlab/mmpose/pull/991)) @wusize
+
+- Add image and video resources for demo ([#971](https://github.com/open-mmlab/mmpose/pull/971)) @liqikai9
+
+- Use CUDA docker images to accelerate CI ([#973](https://github.com/open-mmlab/mmpose/pull/973)) @ly015
+
+- Add codespell hook and fix detected typos ([#977](https://github.com/open-mmlab/mmpose/pull/977)) @ly015
+
+## **v0.19.0 (08/10/2021)**
+
+**Highlights**
+
+- Add models for Associative Embedding with Hourglass network backbone ([#906](https://github.com/open-mmlab/mmpose/pull/906), [#955](https://github.com/open-mmlab/mmpose/pull/955)) @jin-s13, @luminxu
+
+- Support COCO-Wholebody-Face and COCO-Wholebody-Hand datasets ([#813](https://github.com/open-mmlab/mmpose/pull/813)) @jin-s13, @innerlee, @luminxu
+
+- Upgrade dataset interface ([#901](https://github.com/open-mmlab/mmpose/pull/901), [#924](https://github.com/open-mmlab/mmpose/pull/924)) @jin-s13, @innerlee, @ly015, @liqikai9
+
+- New style of documentation ([#945](https://github.com/open-mmlab/mmpose/pull/945)) @ly015
+
+**New Features**
+
+- Add models for Associative Embedding with Hourglass network backbone ([#906](https://github.com/open-mmlab/mmpose/pull/906), [#955](https://github.com/open-mmlab/mmpose/pull/955)) @jin-s13, @luminxu
+
+- Support COCO-Wholebody-Face and COCO-Wholebody-Hand datasets ([#813](https://github.com/open-mmlab/mmpose/pull/813)) @jin-s13, @innerlee, @luminxu
+
+- Add pseudo-labeling tool to generate COCO style keypoint annotations with given bounding boxes ([#928](https://github.com/open-mmlab/mmpose/pull/928)) @soltkreig
+
+- New style of documentation ([#945](https://github.com/open-mmlab/mmpose/pull/945)) @ly015
+
+**Bug Fixes**
+
+- Fix segmentation parsing in Macaque dataset preprocessing ([#948](https://github.com/open-mmlab/mmpose/pull/948)) @jin-s13
+
+- Fix dependencies that may lead to CI failure in downstream projects ([#936](https://github.com/open-mmlab/mmpose/pull/936), [#953](https://github.com/open-mmlab/mmpose/pull/953)) @RangiLyu, @ly015
+
+- Fix keypoint order in Human3.6M dataset ([#940](https://github.com/open-mmlab/mmpose/pull/940)) @ttxskk
+
+- Fix unstable image loading for Interhand2.6M ([#913](https://github.com/open-mmlab/mmpose/pull/913)) @zengwang430521
+
+**Improvements**
+
+- Upgrade dataset interface ([#901](https://github.com/open-mmlab/mmpose/pull/901), [#924](https://github.com/open-mmlab/mmpose/pull/924)) @jin-s13, @innerlee, @ly015, @liqikai9
+
+- Improve demo usability and stability ([#908](https://github.com/open-mmlab/mmpose/pull/908), [#934](https://github.com/open-mmlab/mmpose/pull/934)) @ly015
+
+- Standardize model metafile format ([#941](https://github.com/open-mmlab/mmpose/pull/941)) @ly015
+
+- Support `persistent_worker` and several other arguments in configs ([#946](https://github.com/open-mmlab/mmpose/pull/946)) @jin-s13
+
+- Use MMCV root model registry to enable cross-project module building ([#935](https://github.com/open-mmlab/mmpose/pull/935)) @RangiLyu
+
+- Improve the document quality ([#916](https://github.com/open-mmlab/mmpose/pull/916), [#909](https://github.com/open-mmlab/mmpose/pull/909), [#942](https://github.com/open-mmlab/mmpose/pull/942), [#913](https://github.com/open-mmlab/mmpose/pull/913), [#956](https://github.com/open-mmlab/mmpose/pull/956)) @jin-s13, @ly015, @bit-scientist, @zengwang430521
+
+- Improve pull request template ([#952](https://github.com/open-mmlab/mmpose/pull/952), [#954](https://github.com/open-mmlab/mmpose/pull/954)) @ly015
+
+**Breaking Changes**
+
+- Upgrade dataset interface ([#901](https://github.com/open-mmlab/mmpose/pull/901)) @jin-s13, @innerlee, @ly015
+
+## **v0.18.0 (01/09/2021)**
+
+**Bug Fixes**
+
+- Fix redundant model weight loading in pytorch-to-onnx conversion ([#850](https://github.com/open-mmlab/mmpose/pull/850)) @ly015
+
+- Fix a bug in update_model_index.py that may cause pre-commit hook failure ([#866](https://github.com/open-mmlab/mmpose/pull/866)) @ly015
+
+- Fix a bug in interhand_3d_head ([#890](https://github.com/open-mmlab/mmpose/pull/890)) @zengwang430521
+
+- Fix pose tracking demo failure caused by out-of-date configs ([#891](https://github.com/open-mmlab/mmpose/pull/891))
+
+**Improvements**
+
+- Add automatic benchmark regression tools ([#849](https://github.com/open-mmlab/mmpose/pull/849), [#880](https://github.com/open-mmlab/mmpose/pull/880), [#885](https://github.com/open-mmlab/mmpose/pull/885)) @liqikai9, @ly015
+
+- Add copyright information and checking hook ([#872](https://github.com/open-mmlab/mmpose/pull/872))
+
+- Add PR template ([#875](https://github.com/open-mmlab/mmpose/pull/875)) @ly015
+
+- Add citation information ([#876](https://github.com/open-mmlab/mmpose/pull/876)) @ly015
+
+- Add python3.9 in CI ([#877](https://github.com/open-mmlab/mmpose/pull/877), [#883](https://github.com/open-mmlab/mmpose/pull/883)) @ly015
+
+- Improve the quality of the documents ([#845](https://github.com/open-mmlab/mmpose/pull/845), [#848](https://github.com/open-mmlab/mmpose/pull/848), [#867](https://github.com/open-mmlab/mmpose/pull/867), [#870](https://github.com/open-mmlab/mmpose/pull/870), [#873](https://github.com/open-mmlab/mmpose/pull/873), [#896](https://github.com/open-mmlab/mmpose/pull/896)) @jin-s13, @ly015, @zhiqwang
+
+## **v0.17.0 (06/08/2021)**
+
+**Highlights**
+
+1. Support ["Lite-HRNet: A Lightweight High-Resolution Network"](https://arxiv.org/abs/2104.06403) CVPR'2021 ([#733](https://github.com/open-mmlab/mmpose/pull/733),[#800](https://github.com/open-mmlab/mmpose/pull/800)) @jin-s13
+
+2. Add 3d body mesh demo ([#771](https://github.com/open-mmlab/mmpose/pull/771)) @zengwang430521
+
+3. Add Chinese documentation ([#787](https://github.com/open-mmlab/mmpose/pull/787), [#798](https://github.com/open-mmlab/mmpose/pull/798), [#799](https://github.com/open-mmlab/mmpose/pull/799), [#802](https://github.com/open-mmlab/mmpose/pull/802), [#804](https://github.com/open-mmlab/mmpose/pull/804), [#805](https://github.com/open-mmlab/mmpose/pull/805), [#815](https://github.com/open-mmlab/mmpose/pull/815), [#816](https://github.com/open-mmlab/mmpose/pull/816), [#817](https://github.com/open-mmlab/mmpose/pull/817), [#819](https://github.com/open-mmlab/mmpose/pull/819), [#839](https://github.com/open-mmlab/mmpose/pull/839)) @ly015, @luminxu, @jin-s13, @liqikai9, @zengwang430521
+
+4. Add Colab Tutorial ([#834](https://github.com/open-mmlab/mmpose/pull/834)) @ly015
+
+**New Features**
+
+- Support ["Lite-HRNet: A Lightweight High-Resolution Network"](https://arxiv.org/abs/2104.06403) CVPR'2021 ([#733](https://github.com/open-mmlab/mmpose/pull/733),[#800](https://github.com/open-mmlab/mmpose/pull/800)) @jin-s13
+
+- Add 3d body mesh demo ([#771](https://github.com/open-mmlab/mmpose/pull/771)) @zengwang430521
+
+- Add Chinese documentation ([#787](https://github.com/open-mmlab/mmpose/pull/787), [#798](https://github.com/open-mmlab/mmpose/pull/798), [#799](https://github.com/open-mmlab/mmpose/pull/799), [#802](https://github.com/open-mmlab/mmpose/pull/802), [#804](https://github.com/open-mmlab/mmpose/pull/804), [#805](https://github.com/open-mmlab/mmpose/pull/805), [#815](https://github.com/open-mmlab/mmpose/pull/815), [#816](https://github.com/open-mmlab/mmpose/pull/816), [#817](https://github.com/open-mmlab/mmpose/pull/817), [#819](https://github.com/open-mmlab/mmpose/pull/819), [#839](https://github.com/open-mmlab/mmpose/pull/839)) @ly015, @luminxu, @jin-s13, @liqikai9, @zengwang430521
+
+- Add Colab Tutorial ([#834](https://github.com/open-mmlab/mmpose/pull/834)) @ly015
+
+- Support training for InterHand v1.0 dataset ([#761](https://github.com/open-mmlab/mmpose/pull/761)) @zengwang430521
+
+**Bug Fixes**
+
+- Fix mpii pckh@0.1 index ([#773](https://github.com/open-mmlab/mmpose/pull/773)) @jin-s13
+
+- Fix multi-node distributed test ([#818](https://github.com/open-mmlab/mmpose/pull/818)) @ly015
+
+- Fix docstring and init_weights error of ShuffleNetV1 ([#814](https://github.com/open-mmlab/mmpose/pull/814)) @Junjun2016
+
+- Fix `imshow_bbox` error when input bboxes are empty ([#796](https://github.com/open-mmlab/mmpose/pull/796)) @ly015
+
+- Fix model zoo doc generation ([#778](https://github.com/open-mmlab/mmpose/pull/778)) @ly015
+
+- Fix typo ([#767](https://github.com/open-mmlab/mmpose/pull/767)), ([#780](https://github.com/open-mmlab/mmpose/pull/780), [#782](https://github.com/open-mmlab/mmpose/pull/782)) @ly015, @jin-s13
+
+**Breaking Changes**
+
+- Use MMCV EvalHook ([#686](https://github.com/open-mmlab/mmpose/pull/686)) @ly015
+
+**Improvements**
+
+- Add pytest.ini and fix docstring ([#812](https://github.com/open-mmlab/mmpose/pull/812)) @jin-s13
+
+- Update MSELoss ([#829](https://github.com/open-mmlab/mmpose/pull/829)) @Ezra-Yu
+
+- Move process_mmdet_results into inference.py ([#831](https://github.com/open-mmlab/mmpose/pull/831)) @ly015
+
+- Update resource limit ([#783](https://github.com/open-mmlab/mmpose/pull/783)) @jin-s13
+
+- Use COCO 2D pose model in 3D demo examples ([#785](https://github.com/open-mmlab/mmpose/pull/785)) @ly015
+
+- Change model zoo titles in the doc from center-aligned to left-aligned ([#792](https://github.com/open-mmlab/mmpose/pull/792), [#797](https://github.com/open-mmlab/mmpose/pull/797)) @ly015
+
+- Support MIM ([#706](https://github.com/open-mmlab/mmpose/pull/706), [#794](https://github.com/open-mmlab/mmpose/pull/794)) @ly015
+
+- Update out-of-date configs ([#827](https://github.com/open-mmlab/mmpose/pull/827)) @jin-s13
+
+- Remove opencv-python-headless dependency by albumentations ([#833](https://github.com/open-mmlab/mmpose/pull/833)) @ly015
+
+- Update QQ QR code in README_CN.md ([#832](https://github.com/open-mmlab/mmpose/pull/832)) @ly015
+
+## **v0.16.0 (02/07/2021)**
+
+**Highlights**
+
+1. Support ["ViPNAS: Efficient Video Pose Estimation via Neural Architecture Search"](https://arxiv.org/abs/2105.10154) CVPR'2021 ([#742](https://github.com/open-mmlab/mmpose/pull/742),[#755](https://github.com/open-mmlab/mmpose/pull/755)).
+
+2. Support MPI-INF-3DHP dataset ([#683](https://github.com/open-mmlab/mmpose/pull/683),[#746](https://github.com/open-mmlab/mmpose/pull/746),[#751](https://github.com/open-mmlab/mmpose/pull/751)).
+
+3. Add webcam demo tool ([#729](https://github.com/open-mmlab/mmpose/pull/729))
+
+4. Add 3d body and hand pose estimation demo ([#704](https://github.com/open-mmlab/mmpose/pull/704), [#727](https://github.com/open-mmlab/mmpose/pull/727)).
+
+**New Features**
+
+- Support ["ViPNAS: Efficient Video Pose Estimation via Neural Architecture Search"](https://arxiv.org/abs/2105.10154) CVPR'2021 ([#742](https://github.com/open-mmlab/mmpose/pull/742),[#755](https://github.com/open-mmlab/mmpose/pull/755))
+
+- Support MPI-INF-3DHP dataset ([#683](https://github.com/open-mmlab/mmpose/pull/683),[#746](https://github.com/open-mmlab/mmpose/pull/746),[#751](https://github.com/open-mmlab/mmpose/pull/751))
+
+- Support Webcam demo ([#729](https://github.com/open-mmlab/mmpose/pull/729))
+
+- Support Interhand 3d demo ([#704](https://github.com/open-mmlab/mmpose/pull/704))
+
+- Support 3d pose video demo ([#727](https://github.com/open-mmlab/mmpose/pull/727))
+
+- Support H36m dataset for 2d pose estimation ([#709](https://github.com/open-mmlab/mmpose/pull/709), [#735](https://github.com/open-mmlab/mmpose/pull/735))
+
+- Add scripts to generate mim metafile ([#749](https://github.com/open-mmlab/mmpose/pull/749))
+
+**Bug Fixes**
+
+- Fix typos ([#692](https://github.com/open-mmlab/mmpose/pull/692),[#696](https://github.com/open-mmlab/mmpose/pull/696),[#697](https://github.com/open-mmlab/mmpose/pull/697),[#698](https://github.com/open-mmlab/mmpose/pull/698),[#712](https://github.com/open-mmlab/mmpose/pull/712),[#718](https://github.com/open-mmlab/mmpose/pull/718),[#728](https://github.com/open-mmlab/mmpose/pull/728))
+
+- Change model download links from `http` to `https` ([#716](https://github.com/open-mmlab/mmpose/pull/716))
+
+**Breaking Changes**
+
+- Switch to MMCV MODEL_REGISTRY ([#669](https://github.com/open-mmlab/mmpose/pull/669))
+
+**Improvements**
+
+- Refactor MeshMixDataset ([#752](https://github.com/open-mmlab/mmpose/pull/752))
+
+- Rename 'GaussianHeatMap' to 'GaussianHeatmap' ([#745](https://github.com/open-mmlab/mmpose/pull/745))
+
+- Update out-of-date configs ([#734](https://github.com/open-mmlab/mmpose/pull/734))
+
+- Improve compatibility for breaking changes ([#731](https://github.com/open-mmlab/mmpose/pull/731))
+
+- Enable controlling radius and thickness in visualization ([#722](https://github.com/open-mmlab/mmpose/pull/722))
+
+- Add regex dependency ([#720](https://github.com/open-mmlab/mmpose/pull/720))
+
+## **v0.15.0 (02/06/2021)**
+
+**Highlights**
+
+1. Support 3d video pose estimation (VideoPose3D).
+
+2. Support 3d hand pose estimation (InterNet).
+
+3. Improve presentation of modelzoo.
+
+**New Features**
+
+- Support "InterHand2.6M: A Dataset and Baseline for 3D Interacting Hand Pose Estimation from a Single RGB Image" (ECCV'20) ([#624](https://github.com/open-mmlab/mmpose/pull/624))
+
+- Support "3D human pose estimation in video with temporal convolutions and semi-supervised training" (CVPR'19) ([#602](https://github.com/open-mmlab/mmpose/pull/602), [#681](https://github.com/open-mmlab/mmpose/pull/681))
+
+- Support 3d pose estimation demo ([#653](https://github.com/open-mmlab/mmpose/pull/653), [#670](https://github.com/open-mmlab/mmpose/pull/670))
+
+- Support bottom-up whole-body pose estimation ([#689](https://github.com/open-mmlab/mmpose/pull/689))
+
+- Support mmcli ([#634](https://github.com/open-mmlab/mmpose/pull/634))
+
+**Bug Fixes**
+
+- Fix opencv compatibility ([#635](https://github.com/open-mmlab/mmpose/pull/635))
+
+- Fix demo with UDP ([#637](https://github.com/open-mmlab/mmpose/pull/637))
+
+- Fix bottom-up model onnx conversion ([#680](https://github.com/open-mmlab/mmpose/pull/680))
+
+- Fix `GPU_IDS` in distributed training ([#668](https://github.com/open-mmlab/mmpose/pull/668))
+
+- Fix MANIFEST.in ([#641](https://github.com/open-mmlab/mmpose/pull/641), [#657](https://github.com/open-mmlab/mmpose/pull/657))
+
+- Fix docs ([#643](https://github.com/open-mmlab/mmpose/pull/643),[#684](https://github.com/open-mmlab/mmpose/pull/684),[#688](https://github.com/open-mmlab/mmpose/pull/688),[#690](https://github.com/open-mmlab/mmpose/pull/690),[#692](https://github.com/open-mmlab/mmpose/pull/692))
+
+**Breaking Changes**
+
+- Reorganize configs by tasks, algorithms, datasets, and techniques ([#647](https://github.com/open-mmlab/mmpose/pull/647))
+
+- Rename heads and detectors ([#667](https://github.com/open-mmlab/mmpose/pull/667))
+
+**Improvements**
+
+- Add `radius` and `thickness` parameters in visualization ([#638](https://github.com/open-mmlab/mmpose/pull/638))
+
+- Add `trans_prob` parameter in `TopDownRandomTranslation` ([#650](https://github.com/open-mmlab/mmpose/pull/650))
+
+- Switch to `MMCV MODEL_REGISTRY` ([#669](https://github.com/open-mmlab/mmpose/pull/669))
+
+- Update dependencies ([#674](https://github.com/open-mmlab/mmpose/pull/674), [#676](https://github.com/open-mmlab/mmpose/pull/676))
+
+## **v0.14.0 (06/05/2021)**
+
+**Highlights**
+
+1. Support animal pose estimation with 7 popular datasets.
+
+2. Support "A simple yet effective baseline for 3d human pose estimation" (ICCV'17).
+
+**New Features**
+
+- Support "A simple yet effective baseline for 3d human pose estimation" (ICCV'17) ([#554](https://github.com/open-mmlab/mmpose/pull/554),[#558](https://github.com/open-mmlab/mmpose/pull/558),[#566](https://github.com/open-mmlab/mmpose/pull/566),[#570](https://github.com/open-mmlab/mmpose/pull/570),[#589](https://github.com/open-mmlab/mmpose/pull/589))
+
+- Support animal pose estimation ([#559](https://github.com/open-mmlab/mmpose/pull/559),[#561](https://github.com/open-mmlab/mmpose/pull/561),[#563](https://github.com/open-mmlab/mmpose/pull/563),[#571](https://github.com/open-mmlab/mmpose/pull/571),[#603](https://github.com/open-mmlab/mmpose/pull/603),[#605](https://github.com/open-mmlab/mmpose/pull/605))
+
+- Support Horse-10 dataset ([#561](https://github.com/open-mmlab/mmpose/pull/561)), MacaquePose dataset ([#561](https://github.com/open-mmlab/mmpose/pull/561)), Vinegar Fly dataset ([#561](https://github.com/open-mmlab/mmpose/pull/561)), Desert Locust dataset ([#561](https://github.com/open-mmlab/mmpose/pull/561)), Grevy's Zebra dataset ([#561](https://github.com/open-mmlab/mmpose/pull/561)), ATRW dataset ([#571](https://github.com/open-mmlab/mmpose/pull/571)), and Animal-Pose dataset ([#603](https://github.com/open-mmlab/mmpose/pull/603))
+
+- Support bottom-up pose tracking demo ([#574](https://github.com/open-mmlab/mmpose/pull/574))
+
+- Support FP16 training ([#584](https://github.com/open-mmlab/mmpose/pull/584),[#616](https://github.com/open-mmlab/mmpose/pull/616),[#626](https://github.com/open-mmlab/mmpose/pull/626))
+
+- Support NMS for bottom-up ([#609](https://github.com/open-mmlab/mmpose/pull/609))
+
+**Bug Fixes**
+
+- Fix bugs in the top-down demo when there are no people in the images ([#569](https://github.com/open-mmlab/mmpose/pull/569)).
+
+- Fix the links in the doc ([#612](https://github.com/open-mmlab/mmpose/pull/612))
+
+**Improvements**
+
+- Speed up top-down inference ([#560](https://github.com/open-mmlab/mmpose/pull/560))
+
+- Update github CI ([#562](https://github.com/open-mmlab/mmpose/pull/562), [#564](https://github.com/open-mmlab/mmpose/pull/564))
+
+- Update Readme ([#578](https://github.com/open-mmlab/mmpose/pull/578),[#579](https://github.com/open-mmlab/mmpose/pull/579),[#580](https://github.com/open-mmlab/mmpose/pull/580),[#592](https://github.com/open-mmlab/mmpose/pull/592),[#599](https://github.com/open-mmlab/mmpose/pull/599),[#600](https://github.com/open-mmlab/mmpose/pull/600),[#607](https://github.com/open-mmlab/mmpose/pull/607))
+
+- Update FAQ ([#587](https://github.com/open-mmlab/mmpose/pull/587), [#610](https://github.com/open-mmlab/mmpose/pull/610))
+
+## **v0.13.0 (31/03/2021)**
+
+**Highlights**
+
+1. Support Wingloss.
+
+2. Support RHD hand dataset.
+
+**New Features**
+
+- Support Wingloss ([#482](https://github.com/open-mmlab/mmpose/pull/482))
+
+- Support RHD hand dataset ([#523](https://github.com/open-mmlab/mmpose/pull/523), [#551](https://github.com/open-mmlab/mmpose/pull/551))
+
+- Support Human3.6m dataset for 3d keypoint detection ([#518](https://github.com/open-mmlab/mmpose/pull/518), [#527](https://github.com/open-mmlab/mmpose/pull/527))
+
+- Support TCN model for 3d keypoint detection ([#521](https://github.com/open-mmlab/mmpose/pull/521), [#522](https://github.com/open-mmlab/mmpose/pull/522))
+
+- Support Interhand3D model for 3d hand detection ([#536](https://github.com/open-mmlab/mmpose/pull/536))
+
+- Support Multi-task detector ([#480](https://github.com/open-mmlab/mmpose/pull/480))
+
+**Bug Fixes**
+
+- Fix PCKh@0.1 calculation ([#516](https://github.com/open-mmlab/mmpose/pull/516))
+
+- Fix unittest ([#529](https://github.com/open-mmlab/mmpose/pull/529))
+
+- Fix circular importing ([#542](https://github.com/open-mmlab/mmpose/pull/542))
+
+- Fix bugs in bottom-up keypoint score ([#548](https://github.com/open-mmlab/mmpose/pull/548))
+
+**Improvements**
+
+- Update config & checkpoints ([#525](https://github.com/open-mmlab/mmpose/pull/525), [#546](https://github.com/open-mmlab/mmpose/pull/546))
+
+- Fix typos ([#514](https://github.com/open-mmlab/mmpose/pull/514), [#519](https://github.com/open-mmlab/mmpose/pull/519), [#532](https://github.com/open-mmlab/mmpose/pull/532), [#537](https://github.com/open-mmlab/mmpose/pull/537))
+
+- Speed up post processing ([#535](https://github.com/open-mmlab/mmpose/pull/535))
+
+- Update mmcv version dependency ([#544](https://github.com/open-mmlab/mmpose/pull/544))
+
+## **v0.12.0 (28/02/2021)**
+
+**Highlights**
+
+1. Support DeepPose algorithm.
+
+**New Features**
+
+- Support DeepPose algorithm ([#446](https://github.com/open-mmlab/mmpose/pull/446), [#461](https://github.com/open-mmlab/mmpose/pull/461))
+
+- Support interhand3d dataset ([#468](https://github.com/open-mmlab/mmpose/pull/468))
+
+- Support Albumentation pipeline ([#469](https://github.com/open-mmlab/mmpose/pull/469))
+
+- Support PhotometricDistortion pipeline ([#485](https://github.com/open-mmlab/mmpose/pull/485))
+
+- Set seed option for training ([#493](https://github.com/open-mmlab/mmpose/pull/493))
+
+- Add demos for face keypoint detection ([#502](https://github.com/open-mmlab/mmpose/pull/502))
+
+**Bug Fixes**
+
+- Change channel order according to configs ([#504](https://github.com/open-mmlab/mmpose/pull/504))
+
+- Fix `num_factors` in UDP encoding ([#495](https://github.com/open-mmlab/mmpose/pull/495))
+
+- Fix configs ([#456](https://github.com/open-mmlab/mmpose/pull/456))
+
+**Breaking Changes**
+
+- Refactor configs for wholebody pose estimation ([#487](https://github.com/open-mmlab/mmpose/pull/487), [#491](https://github.com/open-mmlab/mmpose/pull/491))
+
+- Rename `decode` function for heads ([#481](https://github.com/open-mmlab/mmpose/pull/481))
+
+**Improvements**
+
+- Update config & checkpoints ([#453](https://github.com/open-mmlab/mmpose/pull/453),[#484](https://github.com/open-mmlab/mmpose/pull/484),[#487](https://github.com/open-mmlab/mmpose/pull/487))
+
+- Add README in Chinese ([#462](https://github.com/open-mmlab/mmpose/pull/462))
+
+- Add tutorials about configs ([#465](https://github.com/open-mmlab/mmpose/pull/465))
+
+- Add demo videos for various tasks ([#499](https://github.com/open-mmlab/mmpose/pull/499), [#503](https://github.com/open-mmlab/mmpose/pull/503))
+
+- Update docs about MMPose installation ([#467](https://github.com/open-mmlab/mmpose/pull/467), [#505](https://github.com/open-mmlab/mmpose/pull/505))
+
+- Rename `stat.py` to `stats.py` ([#483](https://github.com/open-mmlab/mmpose/pull/483))
+
+- Fix typos ([#463](https://github.com/open-mmlab/mmpose/pull/463), [#464](https://github.com/open-mmlab/mmpose/pull/464), [#477](https://github.com/open-mmlab/mmpose/pull/477), [#481](https://github.com/open-mmlab/mmpose/pull/481))
+
+- Convert LaTeX citations to BibTeX ([#471](https://github.com/open-mmlab/mmpose/pull/471))
+
+- Update FAQ ([#466](https://github.com/open-mmlab/mmpose/pull/466))
+
+## **v0.11.0 (31/01/2021)**
+
+**Highlights**
+
+1. Support fashion landmark detection.
+
+2. Support face keypoint detection.
+
+3. Support pose tracking with MMTracking.
+
+**New Features**
+
+- Support fashion landmark detection (DeepFashion) ([#413](https://github.com/open-mmlab/mmpose/pull/413))
+
+- Support face keypoint detection (300W, AFLW, COFW, WFLW) ([#367](https://github.com/open-mmlab/mmpose/pull/367))
+
+- Support pose tracking demo with MMTracking ([#427](https://github.com/open-mmlab/mmpose/pull/427))
+
+- Support face demo ([#443](https://github.com/open-mmlab/mmpose/pull/443))
+
+- Support AIC dataset for bottom-up methods ([#438](https://github.com/open-mmlab/mmpose/pull/438), [#449](https://github.com/open-mmlab/mmpose/pull/449))
+
+**Bug Fixes**
+
+- Fix multi-batch training ([#434](https://github.com/open-mmlab/mmpose/pull/434))
+
+- Fix sigmas in AIC dataset ([#441](https://github.com/open-mmlab/mmpose/pull/441))
+
+- Fix config file ([#420](https://github.com/open-mmlab/mmpose/pull/420))
+
+**Breaking Changes**
+
+- Refactor Heads ([#382](https://github.com/open-mmlab/mmpose/pull/382))
+
+**Improvements**
+
+- Update readme ([#409](https://github.com/open-mmlab/mmpose/pull/409), [#412](https://github.com/open-mmlab/mmpose/pull/412), [#415](https://github.com/open-mmlab/mmpose/pull/415), [#416](https://github.com/open-mmlab/mmpose/pull/416), [#419](https://github.com/open-mmlab/mmpose/pull/419), [#421](https://github.com/open-mmlab/mmpose/pull/421), [#422](https://github.com/open-mmlab/mmpose/pull/422), [#424](https://github.com/open-mmlab/mmpose/pull/424), [#425](https://github.com/open-mmlab/mmpose/pull/425), [#435](https://github.com/open-mmlab/mmpose/pull/435), [#436](https://github.com/open-mmlab/mmpose/pull/436), [#437](https://github.com/open-mmlab/mmpose/pull/437), [#444](https://github.com/open-mmlab/mmpose/pull/444), [#445](https://github.com/open-mmlab/mmpose/pull/445))
+
+- Add GAP (global average pooling) neck ([#414](https://github.com/open-mmlab/mmpose/pull/414))
+
+- Speed up ([#411](https://github.com/open-mmlab/mmpose/pull/411), [#423](https://github.com/open-mmlab/mmpose/pull/423))
+
+- Support COCO test-dev test ([#433](https://github.com/open-mmlab/mmpose/pull/433))
+
+## **v0.10.0 (31/12/2020)**
+
+**Highlights**
+
+1. Support more human pose estimation methods.
+
+ 1. [UDP](https://arxiv.org/abs/1911.07524)
+
+2. Support pose tracking.
+
+3. Support multi-batch inference.
+
+4. Add some useful tools, including `analyze_logs`, `get_flops`, `print_config`.
+
+5. Support more backbone networks.
+
+ 1. [ResNest](https://arxiv.org/pdf/2004.08955.pdf)
+ 2. [VGG](https://arxiv.org/abs/1409.1556)
+
+**New Features**
+
+- Support UDP ([#353](https://github.com/open-mmlab/mmpose/pull/353), [#371](https://github.com/open-mmlab/mmpose/pull/371), [#402](https://github.com/open-mmlab/mmpose/pull/402))
+
+- Support multi-batch inference ([#390](https://github.com/open-mmlab/mmpose/pull/390))
+
+- Support MHP dataset ([#386](https://github.com/open-mmlab/mmpose/pull/386))
+
+- Support pose tracking demo ([#380](https://github.com/open-mmlab/mmpose/pull/380))
+
+- Support mpii-trb demo ([#372](https://github.com/open-mmlab/mmpose/pull/372))
+
+- Support mobilenet for hand pose estimation ([#377](https://github.com/open-mmlab/mmpose/pull/377))
+
+- Support ResNest backbone ([#370](https://github.com/open-mmlab/mmpose/pull/370))
+
+- Support VGG backbone ([#370](https://github.com/open-mmlab/mmpose/pull/370))
+
+- Add some useful tools, including `analyze_logs`, `get_flops`, `print_config` ([#324](https://github.com/open-mmlab/mmpose/pull/324))
+
+**Bug Fixes**
+
+- Fix bugs in pck evaluation ([#328](https://github.com/open-mmlab/mmpose/pull/328))
+
+- Fix model download links in README ([#396](https://github.com/open-mmlab/mmpose/pull/396), [#397](https://github.com/open-mmlab/mmpose/pull/397))
+
+- Fix CrowdPose annotations and update benchmarks ([#384](https://github.com/open-mmlab/mmpose/pull/384))
+
+- Fix modelzoo stat ([#354](https://github.com/open-mmlab/mmpose/pull/354), [#360](https://github.com/open-mmlab/mmpose/pull/360), [#362](https://github.com/open-mmlab/mmpose/pull/362))
+
+- Fix config files for aic datasets ([#340](https://github.com/open-mmlab/mmpose/pull/340))
+
+**Breaking Changes**
+
+- Rename `image_thr` to `det_bbox_thr` for top-down methods.
+
+**Improvements**
+
+- Organize the readme files ([#398](https://github.com/open-mmlab/mmpose/pull/398), [#399](https://github.com/open-mmlab/mmpose/pull/399), [#400](https://github.com/open-mmlab/mmpose/pull/400))
+
+- Check linting for markdown ([#379](https://github.com/open-mmlab/mmpose/pull/379))
+
+- Add faq.md ([#350](https://github.com/open-mmlab/mmpose/pull/350))
+
+- Remove PyTorch 1.4 in CI ([#338](https://github.com/open-mmlab/mmpose/pull/338))
+
+- Add pypi badge in readme ([#329](https://github.com/open-mmlab/mmpose/pull/329))
+
+## **v0.9.0 (30/11/2020)**
+
+**Highlights**
+
+1. Support more human pose estimation methods.
+
+ 1. [MSPN](https://arxiv.org/abs/1901.00148)
+ 2. [RSN](https://arxiv.org/abs/2003.04030)
+
+2. Support video pose estimation datasets.
+
+ 1. [sub-JHMDB](http://jhmdb.is.tue.mpg.de/dataset)
+
+3. Support Onnx model conversion.
+
+**New Features**
+
+- Support MSPN ([#278](https://github.com/open-mmlab/mmpose/pull/278))
+
+- Support RSN ([#221](https://github.com/open-mmlab/mmpose/pull/221), [#318](https://github.com/open-mmlab/mmpose/pull/318))
+
+- Support new post-processing method for MSPN & RSN ([#288](https://github.com/open-mmlab/mmpose/pull/288))
+
+- Support sub-JHMDB dataset ([#292](https://github.com/open-mmlab/mmpose/pull/292))
+
+- Support urls for pre-trained models in config files ([#232](https://github.com/open-mmlab/mmpose/pull/232))
+
+- Support Onnx ([#305](https://github.com/open-mmlab/mmpose/pull/305))
+
+**Bug Fixes**
+
+- Fix model download links in README ([#255](https://github.com/open-mmlab/mmpose/pull/255), [#315](https://github.com/open-mmlab/mmpose/pull/315))
+
+**Breaking Changes**
+
+- `post_process=True|False` and `unbiased_decoding=True|False` are deprecated, use `post_process=None|default|unbiased` etc. instead ([#288](https://github.com/open-mmlab/mmpose/pull/288))
+
+**Improvements**
+
+- Enrich the model zoo ([#256](https://github.com/open-mmlab/mmpose/pull/256), [#320](https://github.com/open-mmlab/mmpose/pull/320))
+
+- Set the default map_location as 'cpu' to reduce gpu memory cost ([#227](https://github.com/open-mmlab/mmpose/pull/227))
+
+- Support return heatmaps and backbone features for bottom-up models ([#229](https://github.com/open-mmlab/mmpose/pull/229))
+
+- Upgrade mmcv maximum & minimum version ([#269](https://github.com/open-mmlab/mmpose/pull/269), [#313](https://github.com/open-mmlab/mmpose/pull/313))
+
+- Automatically add modelzoo statistics to readthedocs ([#252](https://github.com/open-mmlab/mmpose/pull/252))
+
+- Fix Pylint issues ([#258](https://github.com/open-mmlab/mmpose/pull/258), [#259](https://github.com/open-mmlab/mmpose/pull/259), [#260](https://github.com/open-mmlab/mmpose/pull/260), [#262](https://github.com/open-mmlab/mmpose/pull/262), [#265](https://github.com/open-mmlab/mmpose/pull/265), [#267](https://github.com/open-mmlab/mmpose/pull/267), [#268](https://github.com/open-mmlab/mmpose/pull/268), [#270](https://github.com/open-mmlab/mmpose/pull/270), [#271](https://github.com/open-mmlab/mmpose/pull/271), [#272](https://github.com/open-mmlab/mmpose/pull/272), [#273](https://github.com/open-mmlab/mmpose/pull/273), [#275](https://github.com/open-mmlab/mmpose/pull/275), [#276](https://github.com/open-mmlab/mmpose/pull/276), [#283](https://github.com/open-mmlab/mmpose/pull/283), [#285](https://github.com/open-mmlab/mmpose/pull/285), [#293](https://github.com/open-mmlab/mmpose/pull/293), [#294](https://github.com/open-mmlab/mmpose/pull/294), [#295](https://github.com/open-mmlab/mmpose/pull/295))
+
+- Improve README ([#226](https://github.com/open-mmlab/mmpose/pull/226), [#257](https://github.com/open-mmlab/mmpose/pull/257), [#264](https://github.com/open-mmlab/mmpose/pull/264), [#280](https://github.com/open-mmlab/mmpose/pull/280), [#296](https://github.com/open-mmlab/mmpose/pull/296))
+
+- Support PyTorch 1.7 in CI ([#274](https://github.com/open-mmlab/mmpose/pull/274))
+
+- Add docs/tutorials for running demos ([#263](https://github.com/open-mmlab/mmpose/pull/263))
+
+## **v0.8.0 (31/10/2020)**
+
+**Highlights**
+
+1. Support more human pose estimation datasets.
+
+ 1. [CrowdPose](https://github.com/Jeff-sjtu/CrowdPose)
+ 2. [PoseTrack18](https://posetrack.net/)
+
+2. Support more 2D hand keypoint estimation datasets.
+
+   1. [InterHand2.6M](https://github.com/facebookresearch/InterHand2.6M)
+
+3. Support adversarial training for 3D human shape recovery.
+
+4. Support multi-stage losses.
+
+5. Support mpii demo.
+
+**New Features**
+
+- Support [CrowdPose](https://github.com/Jeff-sjtu/CrowdPose) dataset ([#195](https://github.com/open-mmlab/mmpose/pull/195))
+
+- Support [PoseTrack18](https://posetrack.net/) dataset ([#220](https://github.com/open-mmlab/mmpose/pull/220))
+
+- Support [InterHand2.6M](https://github.com/facebookresearch/InterHand2.6M) dataset ([#202](https://github.com/open-mmlab/mmpose/pull/202))
+
+- Support adversarial training for 3D human shape recovery ([#192](https://github.com/open-mmlab/mmpose/pull/192))
+
+- Support multi-stage losses ([#204](https://github.com/open-mmlab/mmpose/pull/204))
+
+**Bug Fixes**
+
+- Fix config files ([#190](https://github.com/open-mmlab/mmpose/pull/190))
+
+**Improvements**
+
+- Add mpii demo ([#216](https://github.com/open-mmlab/mmpose/pull/216))
+
+- Improve README ([#181](https://github.com/open-mmlab/mmpose/pull/181), [#183](https://github.com/open-mmlab/mmpose/pull/183), [#208](https://github.com/open-mmlab/mmpose/pull/208))
+
+- Support return heatmaps and backbone features ([#196](https://github.com/open-mmlab/mmpose/pull/196), [#212](https://github.com/open-mmlab/mmpose/pull/212))
+
+- Support different return formats of mmdetection models ([#217](https://github.com/open-mmlab/mmpose/pull/217))
+
+## **v0.7.0 (30/09/2020)**
+
+**Highlights**
+
+1. Support HMR for 3D human shape recovery.
+
+2. Support WholeBody human pose estimation.
+
+ 1. [COCO-WholeBody](https://github.com/jin-s13/COCO-WholeBody)
+
+3. Support more 2D hand keypoint estimation datasets.
+
+ 1. [Frei-hand](https://lmb.informatik.uni-freiburg.de/projects/freihand/)
+ 2. [CMU Panoptic HandDB](http://domedb.perception.cs.cmu.edu/handdb.html)
+
+4. Add more popular backbones & enrich the [modelzoo](https://mmpose.readthedocs.io/en/latest/model_zoo.html)
+
+ 1. ShuffleNetv2
+
+5. Support hand demo and whole-body demo.
+
+**New Features**
+
+- Support HMR for 3D human shape recovery ([#157](https://github.com/open-mmlab/mmpose/pull/157), [#160](https://github.com/open-mmlab/mmpose/pull/160), [#161](https://github.com/open-mmlab/mmpose/pull/161), [#162](https://github.com/open-mmlab/mmpose/pull/162))
+
+- Support [COCO-WholeBody](https://github.com/jin-s13/COCO-WholeBody) dataset ([#133](https://github.com/open-mmlab/mmpose/pull/133))
+
+- Support [Frei-hand](https://lmb.informatik.uni-freiburg.de/projects/freihand/) dataset ([#125](https://github.com/open-mmlab/mmpose/pull/125))
+
+- Support [CMU Panoptic HandDB](http://domedb.perception.cs.cmu.edu/handdb.html) dataset ([#144](https://github.com/open-mmlab/mmpose/pull/144))
+
+- Support H36M dataset ([#159](https://github.com/open-mmlab/mmpose/pull/159))
+
+- Support ShuffleNetv2 ([#139](https://github.com/open-mmlab/mmpose/pull/139))
+
+- Support saving best models based on key indicator ([#127](https://github.com/open-mmlab/mmpose/pull/127))
+
+**Bug Fixes**
+
+- Fix typos in docs ([#121](https://github.com/open-mmlab/mmpose/pull/121))
+
+- Fix assertion ([#142](https://github.com/open-mmlab/mmpose/pull/142))
+
+**Improvements**
+
+- Add tools to transform .mat format to .json format ([#126](https://github.com/open-mmlab/mmpose/pull/126))
+
+- Add hand demo ([#115](https://github.com/open-mmlab/mmpose/pull/115))
+
+- Add whole-body demo ([#163](https://github.com/open-mmlab/mmpose/pull/163))
+
+- Reuse mmcv utility function and update version files ([#135](https://github.com/open-mmlab/mmpose/pull/135), [#137](https://github.com/open-mmlab/mmpose/pull/137))
+
+- Enrich the modelzoo ([#147](https://github.com/open-mmlab/mmpose/pull/147), [#169](https://github.com/open-mmlab/mmpose/pull/169))
+
+- Improve docs ([#174](https://github.com/open-mmlab/mmpose/pull/174), [#175](https://github.com/open-mmlab/mmpose/pull/175), [#178](https://github.com/open-mmlab/mmpose/pull/178))
+
+- Improve README ([#176](https://github.com/open-mmlab/mmpose/pull/176))
+
+- Improve version.py ([#173](https://github.com/open-mmlab/mmpose/pull/173))
+
+## **v0.6.0 (31/08/2020)**
+
+**Highlights**
+
+1. Add more popular backbones & enrich the [modelzoo](https://mmpose.readthedocs.io/en/latest/model_zoo.html)
+
+ 1. ResNext
+ 2. SEResNet
+ 3. ResNetV1D
+ 4. MobileNetv2
+ 5. ShuffleNetv1
+ 6. CPM (Convolutional Pose Machine)
+
+2. Add more popular datasets:
+
+ 1. [AIChallenger](https://arxiv.org/abs/1711.06475?context=cs.CV)
+ 2. [MPII](http://human-pose.mpi-inf.mpg.de/)
+ 3. [MPII-TRB](https://github.com/kennymckormick/Triplet-Representation-of-human-Body)
+ 4. [OCHuman](http://www.liruilong.cn/projects/pose2seg/index.html)
+
+3. Support 2d hand keypoint estimation.
+
+ 1. [OneHand10K](https://www.yangangwang.com/papers/WANG-MCC-2018-10.html)
+
+4. Support bottom-up inference.
+
+**New Features**
+
+- Support [OneHand10K](https://www.yangangwang.com/papers/WANG-MCC-2018-10.html) dataset ([#52](https://github.com/open-mmlab/mmpose/pull/52))
+
+- Support [MPII](http://human-pose.mpi-inf.mpg.de/) dataset ([#55](https://github.com/open-mmlab/mmpose/pull/55))
+
+- Support [MPII-TRB](https://github.com/kennymckormick/Triplet-Representation-of-human-Body) dataset ([#19](https://github.com/open-mmlab/mmpose/pull/19), [#47](https://github.com/open-mmlab/mmpose/pull/47), [#48](https://github.com/open-mmlab/mmpose/pull/48))
+
+- Support [OCHuman](http://www.liruilong.cn/projects/pose2seg/index.html) dataset ([#70](https://github.com/open-mmlab/mmpose/pull/70))
+
+- Support [AIChallenger](https://arxiv.org/abs/1711.06475?context=cs.CV) dataset ([#87](https://github.com/open-mmlab/mmpose/pull/87))
+
+- Support multiple backbones ([#26](https://github.com/open-mmlab/mmpose/pull/26))
+
+- Support CPM model ([#56](https://github.com/open-mmlab/mmpose/pull/56))
+
+**Bug Fixes**
+
+- Fix configs for MPII & MPII-TRB datasets ([#93](https://github.com/open-mmlab/mmpose/pull/93))
+
+- Fix the bug of missing `test_pipeline` in configs ([#14](https://github.com/open-mmlab/mmpose/pull/14))
+
+- Fix typos ([#27](https://github.com/open-mmlab/mmpose/pull/27), [#28](https://github.com/open-mmlab/mmpose/pull/28), [#50](https://github.com/open-mmlab/mmpose/pull/50), [#53](https://github.com/open-mmlab/mmpose/pull/53), [#63](https://github.com/open-mmlab/mmpose/pull/63))
+
+**Improvements**
+
+- Update benchmark ([#93](https://github.com/open-mmlab/mmpose/pull/93))
+
+- Add Dockerfile ([#44](https://github.com/open-mmlab/mmpose/pull/44))
+
+- Improve unittest coverage and minor fix ([#18](https://github.com/open-mmlab/mmpose/pull/18))
+
+- Support CPUs for train/val/demo ([#34](https://github.com/open-mmlab/mmpose/pull/34))
+
+- Support bottom-up demo ([#69](https://github.com/open-mmlab/mmpose/pull/69))
+
+- Add tools to publish model ([#62](https://github.com/open-mmlab/mmpose/pull/62))
+
+- Enrich the modelzoo ([#64](https://github.com/open-mmlab/mmpose/pull/64), [#68](https://github.com/open-mmlab/mmpose/pull/68), [#82](https://github.com/open-mmlab/mmpose/pull/82))
+
+## **v0.5.0 (21/07/2020)**
+
+**Highlights**
+
+- MMPose is released.
+
+**Main Features**
+
+- Support both top-down and bottom-up pose estimation approaches.
+
+- Achieve higher training efficiency and higher accuracy than other popular codebases (e.g. AlphaPose, HRNet).
+
+- Support various backbone models: ResNet, HRNet, SCNet, Hourglass and HigherHRNet.
diff --git a/internlm_langchain/knowledge_base/MMPose/content/codecs.md b/internlm_langchain/knowledge_base/MMPose/content/codecs.md
new file mode 100644
index 00000000..85d4d2e5
--- /dev/null
+++ b/internlm_langchain/knowledge_base/MMPose/content/codecs.md
@@ -0,0 +1,227 @@
+# Codecs
+
+In keypoint detection tasks, depending on the algorithm, the annotations need to be encoded into training targets of different formats, such as normalized coordinates, 1-D vectors, or Gaussian heatmaps. Likewise, the model outputs need to be converted back into the annotation format. We refer to the process from annotations to training targets as encoding, and the process from model outputs back to annotations as decoding.
+
+Encoding and decoding are a pair of closely related, mutually inverse processes. In early versions of MMPose, the encoding and decoding logic was scattered across different modules, which made it less intuitive and unified and increased the cost of learning and maintenance.
+
+MMPose 1.0 introduces a new module, the **Codec**, which integrates the encoding and decoding of keypoint data to make the code more friendly and reusable.
+
+The position of codecs in the workflow is shown below:
+
+A codec consists of two main parts:
+
+- Encoder
+- Decoder
+
+### Encoder
+
+The encoder converts coordinates in the input image space into the target formats required for model training, mainly including:
+
+- Normalized coordinates: for regression-based methods
+- 1-D vectors: for SimCC-based methods
+- Gaussian heatmaps: for heatmap-based methods
+
+Take the encoder of a regression-based method as an example:
+
+```Python
+def encode(self,
+ keypoints: np.ndarray,
+ keypoints_visible: Optional[np.ndarray] = None) -> dict:
+ """Encoding keypoints from input image space to normalized space.
+
+ Args:
+ keypoints (np.ndarray): Keypoint coordinates in shape (N, K, D)
+ keypoints_visible (np.ndarray): Keypoint visibilities in shape
+ (N, K)
+
+ Returns:
+ dict:
+ - keypoint_labels (np.ndarray): The normalized regression labels in
+ shape (N, K, D) where D is 2 for 2d coordinates
+ - keypoint_weights (np.ndarray): The target weights in shape
+ (N, K)
+ """
+ if keypoints_visible is None:
+ keypoints_visible = np.ones(keypoints.shape[:2], dtype=np.float32)
+
+ w, h = self.input_size
+ valid = ((keypoints >= 0) &
+ (keypoints <= [w - 1, h - 1])).all(axis=-1) & (
+ keypoints_visible > 0.5)
+
+ keypoint_labels = (keypoints / np.array([w, h])).astype(np.float32)
+ keypoint_weights = np.where(valid, 1., 0.).astype(np.float32)
+
+ encoded = dict(
+ keypoint_labels=keypoint_labels, keypoint_weights=keypoint_weights)
+
+ return encoded
+```
+
+The encoded data is converted to tensors in `PackPoseInputs` and packed into `data_sample.gt_instance_labels` for the model to use, typically for loss computation. Take `loss()` in `RegressionHead` as an example:
+
+```Python
+def loss(self,
+ inputs: Tuple[Tensor],
+ batch_data_samples: OptSampleList,
+ train_cfg: ConfigType = {}) -> dict:
+ """Calculate losses from a batch of inputs and data samples."""
+
+ pred_outputs = self.forward(inputs)
+
+ keypoint_labels = torch.cat(
+ [d.gt_instance_labels.keypoint_labels for d in batch_data_samples])
+ keypoint_weights = torch.cat([
+ d.gt_instance_labels.keypoint_weights for d in batch_data_samples
+ ])
+
+ # calculate losses
+ losses = dict()
+ loss = self.loss_module(pred_outputs, keypoint_labels,
+ keypoint_weights.unsqueeze(-1))
+
+ losses.update(loss_kpt=loss)
+    ### the remaining part is omitted ###
+```
+
+### Decoder
+
+The decoder converts the model outputs into coordinates in the input image space, which is the inverse of the encoding process.
+
+Take the decoder of a regression-based method as an example:
+
+```Python
+def decode(self, encoded: np.ndarray) -> Tuple[np.ndarray, np.ndarray]:
+ """Decode keypoint coordinates from normalized space to input image
+ space.
+
+ Args:
+ encoded (np.ndarray): Coordinates in shape (N, K, D)
+
+ Returns:
+ tuple:
+ - keypoints (np.ndarray): Decoded coordinates in shape (N, K, D)
+ - scores (np.ndarray): The keypoint scores in shape (N, K).
+ It usually represents the confidence of the keypoint prediction
+
+ """
+
+ if encoded.shape[-1] == 2:
+ N, K, _ = encoded.shape
+ normalized_coords = encoded.copy()
+ scores = np.ones((N, K), dtype=np.float32)
+ elif encoded.shape[-1] == 4:
+ # split coords and sigma if outputs contain output_sigma
+ normalized_coords = encoded[..., :2].copy()
+ output_sigma = encoded[..., 2:4].copy()
+ scores = (1 - output_sigma).mean(axis=-1)
+ else:
+ raise ValueError(
+ 'Keypoint dimension should be 2 or 4 (with sigma), '
+ f'but got {encoded.shape[-1]}')
+
+ w, h = self.input_size
+ keypoints = normalized_coords * np.array([w, h])
+
+ return keypoints, scores
+```
+
+By default, `decode()` decodes a single instance. You can also use `batch_decode()` to decode a batch of instances and improve efficiency.
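+
+For reference, the snippet below is a minimal usage sketch of a codec outside the training pipeline. It assumes the `RegressionLabel` codec shipped with MMPose and randomly generated keypoints; treat it as an illustration rather than the exact API of every codec.
+
+```Python
+import numpy as np
+from mmpose.codecs import RegressionLabel
+
+# build a codec for a 192x256 (w, h) input
+codec = RegressionLabel(input_size=(192, 256))
+
+# one instance with 17 keypoints in the input image space
+keypoints = np.random.rand(1, 17, 2) * np.array([192, 256])
+
+# encode: image-space coordinates -> normalized regression labels
+encoded = codec.encode(keypoints)
+print(encoded['keypoint_labels'].shape)   # (1, 17, 2)
+print(encoded['keypoint_weights'].shape)  # (1, 17)
+
+# decode: normalized labels -> image-space coordinates and scores
+decoded_keypoints, scores = codec.decode(encoded['keypoint_labels'])
+```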
+
+## Common Usage
+
+Codecs are involved in three places in an MMPose config file:
+
+- Codec definition
+- Training target generation
+- Model head
+
+### Codec Definition
+
+Take a regression-based method that produces normalized coordinates as an example. In the config file, the codec is defined as follows:
+
+```Python
+codec = dict(type='RegressionLabel', input_size=(192, 256))
+```
+
+### Training Target Generation
+
+When generating training targets in the data pipeline, the codec is passed in as the encoder:
+
+```Python
+dict(type='GenerateTarget', encoder=codec)
+```
+
+### Model Head
+
+In MMPose, the model outputs are decoded in the model head, so the codec is passed in as the decoder:
+
+```Python
+head=dict(
+ type='RLEHead',
+ in_channels=2048,
+ num_joints=17,
+ loss=dict(type='RLELoss', use_target_weight=True),
+ decoder=codec
+)
+```
+
+Their locations in a complete config file are shown below:
+
+```Python
+
+# codec settings
+codec = dict(type='RegressionLabel', input_size=(192, 256)) ## codec definition ##
+
+# model settings
+model = dict(
+ type='TopdownPoseEstimator',
+ data_preprocessor=dict(
+ type='PoseDataPreprocessor',
+ mean=[123.675, 116.28, 103.53],
+ std=[58.395, 57.12, 57.375],
+ bgr_to_rgb=True),
+ backbone=dict(
+ type='ResNet',
+ depth=50,
+ init_cfg=dict(type='Pretrained', checkpoint='torchvision://resnet50'),
+ ),
+ neck=dict(type='GlobalAveragePooling'),
+ head=dict(
+ type='RLEHead',
+ in_channels=2048,
+ num_joints=17,
+ loss=dict(type='RLELoss', use_target_weight=True),
+        decoder=codec), ## model head ##
+ test_cfg=dict(
+ flip_test=True,
+ shift_coords=True,
+ ))
+
+# base dataset settings
+dataset_type = 'CocoDataset'
+data_mode = 'topdown'
+data_root = 'data/coco/'
+
+backend_args = dict(backend='local')
+
+# pipelines
+train_pipeline = [
+ dict(type='LoadImage', backend_args=backend_args),
+ dict(type='GetBBoxCenterScale'),
+ dict(type='RandomFlip', direction='horizontal'),
+ dict(type='RandomHalfBody'),
+ dict(type='RandomBBoxTransform'),
+ dict(type='TopdownAffine', input_size=codec['input_size']),
+    dict(type='GenerateTarget', encoder=codec), ## training target generation ##
+ dict(type='PackPoseInputs')
+]
+test_pipeline = [
+ dict(type='LoadImage', backend_args=backend_args),
+ dict(type='GetBBoxCenterScale'),
+ dict(type='TopdownAffine', input_size=codec['input_size']),
+ dict(type='PackPoseInputs')
+]
+```
diff --git a/internlm_langchain/knowledge_base/MMPose/content/configs.md b/internlm_langchain/knowledge_base/MMPose/content/configs.md
new file mode 100644
index 00000000..0bcb7aa1
--- /dev/null
+++ b/internlm_langchain/knowledge_base/MMPose/content/configs.md
@@ -0,0 +1,466 @@
+# Configuration Files
+
+MMPose uses Python files as configuration files. Its configuration system combines a modular design with an inheritance design, which makes it convenient to conduct various experiments.
+
+## Introduction
+
+MMPose has a powerful configuration system. Working with the registry, it allows users to define everything the whole project needs in a single config file: the configuration is organized as Python dictionaries and passed to the registries to instantiate the corresponding modules.
+
+Below is a common example of defining a PyTorch module:
+
+```Python
+# define the Loss_A class in loss_a.py
+class Loss_A(nn.Module):
+    def __init__(self, param1, param2):
+        super().__init__()  # initialize nn.Module before assigning attributes
+        self.param1 = param1
+        self.param2 = param2
+
+    def forward(self, x):
+        return x
+
+# instantiate it where needed
+loss = Loss_A(param1=1.0, param2=True)
+```
+
+Registering this class only takes one extra line of code:
+
+```Python
+# define the Loss_A class in loss_a.py
+from mmpose.registry import MODELS
+
+@MODELS.register_module() # register this class into MODELS
+class Loss_A(nn.Module):
+    def __init__(self, param1, param2):
+        super().__init__()  # initialize nn.Module before assigning attributes
+        self.param1 = param1
+        self.param2 = param2
+
+    def forward(self, x):
+        return x
+```
+
+and import it in the `__init__.py` of the corresponding directory:
+
+```Python
+# __init__.py of mmpose/models/losses
+from .loss_a import Loss_A
+
+__all__ = ['Loss_A']
+```
+
+Then we can define the module in a config file and instantiate it as follows:
+
+```Python
+# defined in config_file.py
+loss_cfg = dict(
+    type='Loss_A', # specify the class name via type
+    param1=1.0, # pass the arguments required by __init__
+    param2=True
+)
+
+# instantiate it where needed
+loss = MODELS.build(loss_cfg) # equivalent to loss = Loss_A(param1=1.0, param2=True)
+```
+
+The registries predefined in MMPose are located in `$MMPOSE/mmpose/registry.py`. Currently supported registries include:
+
+- `DATASETS`: datasets
+
+- `TRANSFORMS`: data transforms
+
+- `MODELS`: model modules (backbone, neck, head, loss, etc.)
+
+- `VISUALIZERS`: visualizers
+
+- `VISBACKENDS`: visualization backends
+
+- `METRICS`: evaluation metrics
+
+- `KEYPOINT_CODECS`: codecs
+
+- `HOOKS`: hooks
+
+```{note}
+Note that every newly added module must be registered with a registry and imported in the `__init__.py` of the corresponding directory, so that it can be built from a config file.
+```
+
+## Configuration System
+
+Specifically, a config file mainly consists of the following five parts:
+
+- General configuration: general settings unrelated to training or testing, such as time statistics, checkpoint saving and loading, visualization hooks, and distributed environment settings
+
+- Data configuration: data augmentation strategies, dataset and dataloader settings
+
+- Training configuration: resuming from checkpoints, loading pretrained weights, optimizer, learning rate schedule, number of training epochs and validation interval, etc.
+
+- Model configuration: model modules, parameters, loss functions, etc.
+
+- Evaluation configuration: metrics for evaluating model performance
+
+You can find the config files we provide under `$MMPOSE/configs`. Config files avoid redundancy through inheritance. To keep them concise and readable, the necessary but rarely modified settings are stored under `$MMPOSE/configs/_base_`. If you want to inspect the complete configuration, you can run:
+
+```Bash
+python tools/analysis/print_config.py /PATH/TO/CONFIG
+```
+
+### General Configuration
+
+General configuration refers to the necessary settings unrelated to training or testing, mainly including:
+
+- **Default hooks**: iteration timing, training logs, parameter updating, checkpointing, etc.
+
+- **Environment**: distributed backend, cudnn, multiprocessing settings, etc.
+
+- **Visualizer**: visualization backend and strategy
+
+- **Logging**: log level, format, printing and recording intervals, etc.
+
+Below is an annotated example of the general configuration:
+
+```Python
+# general configuration
+default_scope = 'mmpose'
+default_hooks = dict(
+    timer=dict(type='IterTimerHook'), # iteration timing, including data loading and model forward time
+    logger=dict(type='LoggerHook', interval=50), # interval of printing logs
+    param_scheduler=dict(type='ParamSchedulerHook'), # schedule learning rate updates
+    checkpoint=dict(
+        type='CheckpointHook', interval=1, save_best='coco/AP', # ckpt saving interval, metric for the best ckpt
+        rule='greater'), # rule for comparing the best-ckpt metric
+    sampler_seed=dict(type='DistSamplerSeedHook')) # random seed for the distributed sampler
+env_cfg = dict(
+    cudnn_benchmark=False, # cudnn benchmark switch
+    mp_cfg=dict(mp_start_method='fork', opencv_num_threads=0), # multiprocessing and opencv threading settings
+    dist_cfg=dict(backend='nccl')) # distributed training backend
+vis_backends = [dict(type='LocalVisBackend')] # visualization backend
+visualizer = dict( # visualizer
+    type='PoseLocalVisualizer',
+    vis_backends=[dict(type='LocalVisBackend')],
+    name='visualizer')
+log_processor = dict( # training log format and interval
+    type='LogProcessor', window_size=50, by_epoch=True, num_digits=6)
+log_level = 'INFO' # logging level
+```
+
+The general configuration is usually stored separately under `$MMPOSE/configs/_base_` and inherited as follows:
+
+```Python
+_base_ = ['../../../_base_/default_runtime.py'] # the relative path starts from the location of the current config file
+```
+
+```{note}
+CheckpointHook:
+
+- save_best: `'coco/AP'` for `CocoMetric`, `'PCK'` for `PCKAccuracy`
+- max_keep_ckpts: the maximum number of checkpoints to keep; the default -1 means no limit
+
+Example:
+
+`default_hooks = dict(checkpoint=dict(save_best='PCK', rule='greater', max_keep_ckpts=1))`
+```
+
+### Data Configuration
+
+Data configuration refers to the settings related to data processing, mainly including:
+
+- **File backend**: the backend for data loading; local disk by default, loading from LMDB, S3 buckets, etc. is also supported
+
+- **Dataset**: paths of images and annotation files
+
+- **Dataloader**: loading strategy, batch size, etc.
+
+- **Pipeline**: data augmentation strategies
+
+- **Encoder**: generating supervision targets in a specific format from the annotations
+
+Below is an annotated example of the data configuration:
+
+```Python
+backend_args = dict(backend='local') # file backend, load data from the local disk by default
+dataset_type = 'CocoDataset' # dataset class name
+data_mode = 'topdown' # algorithm structure type, used to specify how annotations are loaded
+data_root = 'data/coco/' # data root path
+# codec, used to generate targets and decode predictions;
+# it also holds information such as the input image size and output heatmap size
+codec = dict(
+    type='MSRAHeatmap', input_size=(192, 256), heatmap_size=(48, 64), sigma=2)
+train_pipeline = [ # data augmentation for training
+    dict(type='LoadImage', backend_args=backend_args), # load the image
+    dict(type='GetBBoxCenterScale'), # compute center and scale from the bbox
+    dict(type='RandomBBoxTransform'), # random shift, scale and rotation
+    dict(type='RandomFlip', direction='horizontal'), # random horizontal flip
+    dict(type='RandomHalfBody'), # random half-body augmentation
+    dict(type='TopdownAffine', input_size=codec['input_size']), # apply the affine transform to the target data
+    dict(type='GenerateTarget', encoder=codec), # generate supervision targets with the codec
+    dict(type='PackPoseInputs') # pack the targets for training
+]
+test_pipeline = [ # data augmentation for testing
+    dict(type='LoadImage', backend_args=backend_args), # load the image
+    dict(type='GetBBoxCenterScale'), # compute center and scale from the bbox
+    dict(type='TopdownAffine', input_size=codec['input_size']), # apply the affine transform to the target data
+    dict(type='PackPoseInputs') # pack the targets for testing
+]
+train_dataloader = dict( # training dataloader
+    batch_size=64, # batch size
+    num_workers=2, # number of data loading workers
+    persistent_workers=True, # keep the workers alive to avoid the overhead of restarting them
+    sampler=dict(type='DefaultSampler', shuffle=True), # sampling strategy, shuffle the data
+    dataset=dict(
+        type=dataset_type, # dataset class name
+        data_root=data_root, # dataset root path
+        data_mode=data_mode, # algorithm structure type
+        ann_file='annotations/person_keypoints_train2017.json', # annotation file path
+        data_prefix=dict(img='train2017/'), # image path
+        pipeline=train_pipeline # data pipeline
+    ))
+val_dataloader = dict(
+    batch_size=32,
+    num_workers=2,
+    persistent_workers=True, # keep the workers alive to avoid the overhead of restarting them
+    drop_last=False,
+    sampler=dict(type='DefaultSampler', shuffle=False), # sampling strategy, do not shuffle the data
+    dataset=dict(
+        type=dataset_type, # dataset class name
+        data_root=data_root, # dataset root path
+        data_mode=data_mode, # algorithm structure type
+        ann_file='annotations/person_keypoints_val2017.json', # annotation file path
+        bbox_file='data/coco/person_detection_results/'
+        'COCO_val2017_detections_AP_H_56_person.json', # detected bbox file, only used by top-down methods
+        data_prefix=dict(img='val2017/'), # image path
+        test_mode=True, # test mode switch
+        pipeline=test_pipeline # data pipeline
+    ))
+test_dataloader = val_dataloader # by default, the validation and test sets are not distinguished; redefine if needed
+```
+
+```{note}
+
+Refer to the following tutorials for common features:
+- [Resume training](../common_usages/resume_training.md)
+- [Automatic mixed precision (AMP) training](../common_usages/amp_training.md)
+- [Set the random seed](../common_usages/set_random_seed.md)
+
+```
+
+### Training Configuration
+
+Training configuration refers to the settings of the training strategy, mainly including:
+
+- Resuming training from a checkpoint
+
+- Loading pretrained model weights
+
+- Number of training epochs and validation interval
+
+- Learning rate adjustment strategies, such as warmup and schedulers
+
+- Optimizer and learning rate
+
+- Advanced training settings, such as automatic learning rate scaling
+
+Below is an annotated example of the training configuration:
+
+```Python
+resume = False # resume training from a checkpoint
+load_from = None # load pretrained model weights
+train_cfg = dict(by_epoch=True, max_epochs=210, val_interval=10) # training epochs and validation interval
+param_scheduler = [
+    dict( # warmup strategy
+        type='LinearLR', begin=0, end=500, start_factor=0.001, by_epoch=False),
+    dict( # scheduler
+        type='MultiStepLR',
+        begin=0,
+        end=210,
+        milestones=[170, 200],
+        gamma=0.1,
+        by_epoch=True)
+]
+optim_wrapper = dict(optimizer=dict(type='Adam', lr=0.0005)) # optimizer and learning rate
+auto_scale_lr = dict(base_batch_size=512) # automatically scale the learning rate according to the actual batch size
+```
+
+### Model Configuration
+
+Model configuration refers to the settings related to model training and inference, mainly including:
+
+- Model architecture
+
+- Loss function
+
+- Decoding strategy
+
+- Test-time augmentation strategy
+
+Below is an annotated example of the model configuration, which defines a top-down heatmap-based model with an HRNet-w32 backbone:
+
+```Python
+# codec; no need to define it again if it has already been defined in the data configuration
+codec = dict(
+ type='MSRAHeatmap', input_size=(192, 256), heatmap_size=(48, 64), sigma=2)
+# model settings
+model = dict(
+    type='TopdownPoseEstimator', # the model type determines the algorithm pipeline
+    data_preprocessor=dict( # data normalization and channel order conversion, as part of the model
+ type='PoseDataPreprocessor',
+ mean=[123.675, 116.28, 103.53],
+ std=[58.395, 57.12, 57.375],
+ bgr_to_rgb=True),
+    backbone=dict( # backbone definition
+ type='HRNet',
+ in_channels=3,
+ extra=dict(
+ stage1=dict(
+ num_modules=1,
+ num_branches=1,
+ block='BOTTLENECK',
+ num_blocks=(4, ),
+ num_channels=(64, )),
+ stage2=dict(
+ num_modules=1,
+ num_branches=2,
+ block='BASIC',
+ num_blocks=(4, 4),
+ num_channels=(32, 64)),
+ stage3=dict(
+ num_modules=4,
+ num_branches=3,
+ block='BASIC',
+ num_blocks=(4, 4, 4),
+ num_channels=(32, 64, 128)),
+ stage4=dict(
+ num_modules=3,
+ num_branches=4,
+ block='BASIC',
+ num_blocks=(4, 4, 4, 4),
+ num_channels=(32, 64, 128, 256))),
+        init_cfg=dict(
+            type='Pretrained', # load only the pretrained backbone weights for transfer learning
+ checkpoint='https://download.openmmlab.com/mmpose'
+ '/pretrain_models/hrnet_w32-36af842e.pth'),
+ ),
+    head=dict( # model head
+ type='HeatmapHead',
+ in_channels=32,
+ out_channels=17,
+ deconv_out_channels=None,
+        loss=dict(type='KeypointMSELoss', use_target_weight=True), # loss function
+        decoder=codec), # decoder, converts heatmaps into coordinates
+ test_cfg=dict(
+        flip_test=True, # enable test-time horizontal flip ensemble
+        flip_mode='heatmap', # flip the heatmaps
+        shift_heatmap=True, # shift the flipped heatmaps to improve accuracy
+ ))
+```
+
+### Evaluation Configuration
+
+Evaluation configuration refers to the metrics commonly used for keypoint detection on public datasets, mainly including:
+
+- AR, AP and mAP
+
+- PCK, PCKh, tPCK
+
+- AUC
+
+- EPE
+
+- NME
+
+Below is an annotated example of the evaluation configuration, which defines a COCO metric evaluator:
+
+```Python
+val_evaluator = dict(
+    type='CocoMetric', # COCO evaluation metric
+    ann_file=data_root + 'annotations/person_keypoints_val2017.json') # annotation file for evaluation
+test_evaluator = val_evaluator # by default, the validation and test sets are not distinguished; redefine if needed
+```
+
+## Config File Naming Convention
+
+MMPose config files are named in the following style:
+
+```Python
+{{algorithm info}}_{{module info}}_{{training info}}_{{data info}}.py
+```
+
+The file name is divided into four parts: algorithm info, module info, training info, and data info. Words of different parts are connected by underscores `'_'`, and words within the same part are connected by hyphens `'-'`.
+
+- **Algorithm info**: the algorithm name, e.g. `topdown-heatmap`, `topdown-rle`
+
+- **Module info**: intermediate modules listed in the order of the data flow; the content depends on the algorithm task, e.g. `res101`, `hrnet-w48`
+
+- **Training info**: training settings such as `batch size` and `schedule`, e.g. `8xb64-210e`
+
+- **Data info**: dataset name, modality, input size, etc., e.g. `ap10k-256x256`, `zebra-160x160`
+
+Sometimes, to keep file names from getting too long, strongly coupled modules in the module info are omitted and only the key information is kept, e.g. `GAP` in RLE-based algorithms and `deconv` in heatmap-based algorithms.
+
+If you want to add a new method to MMPose, your config file should follow this naming convention as well.
+
+## Common Usage
+
+### Config Inheritance
+
+Inheritance is commonly used to hide necessary but rarely modified settings and to improve readability. Suppose we have the following two config files:
+
+`optimizer_cfg.py`:
+
+```Python
+optimizer = dict(type='SGD', lr=0.02, momentum=0.9, weight_decay=0.0001)
+```
+
+`resnet50.py`:
+
+```Python
+_base_ = ['optimizer_cfg.py']
+model = dict(type='ResNet', depth=50)
+```
+
+Although the `optimizer` field is not defined in `resnet50.py`, the line `_base_ = ['optimizer_cfg.py']` makes this config inherit all the fields from `optimizer_cfg.py`:
+
+```Python
+cfg = Config.fromfile('resnet50.py')
+cfg.optimizer # ConfigDict(type='SGD', lr=0.02, momentum=0.9, weight_decay=0.0001)
+```
+
+### Modifying Inherited Fields
+
+For an inherited dictionary that has already been defined, you can modify individual fields directly without redefining the whole dictionary:
+
+`resnet50_lr0.01.py`:
+
+```Python
+_base_ = ['optimizer_cfg.py']
+model = dict(type='ResNet', depth=50)
+optimizer = dict(lr=0.01) # modify only this field
+```
+
+This config only modifies the `lr` field:
+
+```Python
+cfg = Config.fromfile('resnet50_lr0.01.py')
+cfg.optimizer # ConfigDict(type='SGD', lr=0.01, momentum=0.9, weight_decay=0.0001)
+```
+
+### Deleting Fields in a Dictionary
+
+If you need not only to modify some fields but also to delete previously defined ones, specify `_delete_=True` when redefining the dictionary. All fields that do not appear in the new definition will be removed:
+
+`resnet50.py`:
+
+```Python
+_base_ = ['optimizer_cfg.py', 'runtime_cfg.py']
+model = dict(type='ResNet', depth=50)
+optimizer = dict(_delete_=True, type='SGD', lr=0.01) # redefine the dictionary from scratch
+```
+
+Now all fields other than `type` and `lr` (i.e. `momentum` and `weight_decay`) are removed from the dictionary:
+
+```Python
+cfg = Config.fromfile('resnet50.py')
+cfg.optimizer # ConfigDict(type='SGD', lr=0.01)
+```
+
+```{note}
+如果你希望更深入地了解配置系统的高级用法,可以查看 [MMEngine 教程](https://mmengine.readthedocs.io/zh_CN/latest/tutorials/config.html)。
+```
diff --git a/internlm_langchain/knowledge_base/MMPose/content/contribution_guide.md b/internlm_langchain/knowledge_base/MMPose/content/contribution_guide.md
new file mode 100644
index 00000000..96be7d17
--- /dev/null
+++ b/internlm_langchain/knowledge_base/MMPose/content/contribution_guide.md
@@ -0,0 +1,207 @@
+# 如何给 MMPose 贡献代码
+
+欢迎加入 MMPose 社区,我们致力于打造最前沿的计算机视觉基础库,我们欢迎任何形式的贡献,包括但不限于:
+
+- **修复错误**
+ 1. 如果提交的代码改动较大,我们鼓励你先开一个 issue 并正确描述现象、原因和复现方式,讨论后确认修复方案。
+ 2. 修复错误并补充相应的单元测试,提交 PR 。
+- **新增功能或组件**
+ 1. 如果新功能或模块涉及较大的代码改动,我们建议先提交 issue,与我们确认功能的必要性。
+ 2. 实现新增功能并添加单元测试,提交 PR 。
+- **文档补充或翻译**
+ - 如果发现文档有错误或不完善的地方,欢迎直接提交 PR 。
+
+```{note}
+- 如果你希望向 MMPose 1.0 贡献代码,请从 dev-1.x 上创建新分支,并提交 PR 到 dev-1.x 分支上。
+- 如果你是论文作者,并希望将你的方法加入到 MMPose 中,欢迎联系我们,我们将非常感谢你的贡献。
+- 如果你希望尽快将你的项目分享到 MMPose 开源社区,欢迎将 PR 提到 Projects 目录下,该目录下的项目将简化 Review 流程并尽快合入。
+- 如果你希望加入 MMPose 的维护者,欢迎联系我们,我们将邀请你加入 MMPose 的维护者群。
+```
+
+## 准备工作
+
+PR 操作所使用的命令都是用 Git 去实现的,该章节将介绍如何进行 Git 配置与 GitHub 绑定。
+
+### Git 配置
+
+首先,你需要在本地安装 Git,然后配置你的 Git 用户名和邮箱:
+
+```Shell
+# 在命令提示符(cmd)或终端(terminal)中输入以下命令,查看 Git 版本
+git --version
+```
+
+然后,你需要检查自己的 Git Config 是否正确配置,如果 `user.name` 和 `user.email` 为空,你需要配置你的 Git 用户名和邮箱:
+
+```Shell
+# 在命令提示符(cmd)或终端(terminal)中输入以下命令,查看 Git 配置
+git config --global --list
+# 设置 Git 用户名和邮箱
+git config --global user.name "这里填入你的用户名"
+git config --global user.email "这里填入你的邮箱"
+```
+
+## PR 流程
+
+如果你对 PR 流程不熟悉,接下来将会从零开始,一步一步地教你如何提交 PR。如果你想深入了解 PR 开发模式,可以参考 [GitHub 官方文档](https://docs.github.com/cn/github/collaborating-with-issues-and-pull-requests/about-pull-requests)。
+
+### 1. Fork 项目
+
+当你第一次提交 PR 时,需要先 Fork 项目到自己的 GitHub 账号下。点击项目右上角的 Fork 按钮,将项目 Fork 到自己的 GitHub 账号下。
+
+
+
+接着,你需要将你的 Fork 仓库 Clone 到本地,然后添加官方仓库作为远程仓库:
+
+```Shell
+
+# Clone 你的 Fork 仓库到本地
+git clone https://github.com/username/mmpose.git
+
+# 添加官方仓库作为远程仓库
+cd mmpose
+git remote add upstream https://github.com/open-mmlab/mmpose.git
+```
+
+在终端中输入以下命令,查看远程仓库是否成功添加:
+
+```Shell
+git remote -v
+```
+
+如果出现以下信息,说明你已经成功添加了远程仓库:
+
+```Shell
+origin https://github.com/{username}/mmpose.git (fetch)
+origin https://github.com/{username}/mmpose.git (push)
+upstream https://github.com/open-mmlab/mmpose.git (fetch)
+upstream https://github.com/open-mmlab/mmpose.git (push)
+```
+
+```{note}
+这里对 origin 和 upstream 进行一个简单的介绍,当我们使用 git clone 来克隆代码时,会默认创建一个 origin 的 remote,它指向我们克隆的代码库地址,而 upstream 则是我们自己添加的,用来指向原始代码库地址。当然如果你不喜欢他叫 upstream,也可以自己修改,比如叫 open-mmlab。我们通常向 origin 提交代码(即 fork 下来的远程仓库),然后向 upstream 提交一个 pull request。如果提交的代码和最新的代码发生冲突,再从 upstream 拉取最新的代码,和本地分支解决冲突,再提交到 origin。
+```
+
+### 2. 配置 pre-commit
+
+在本地开发环境中,我们使用 pre-commit 来检查代码风格,以确保代码风格的统一。在提交代码前,你需要先安装 pre-commit:
+
+```Shell
+pip install -U pre-commit
+
+# 在 mmpose 根目录下安装 pre-commit
+pre-commit install
+```
+
+检查 pre-commit 是否配置成功,并安装 `.pre-commit-config.yaml` 中的钩子:
+
+```Shell
+pre-commit run --all-files
+```
+
+
+
+```{note}
+如果你是中国大陆用户,由于网络原因,可能会出现 pre-commit 安装失败的情况。
+
+这时你可以使用清华源来安装 pre-commit:
+pip install -U pre-commit -i https://pypi.tuna.tsinghua.edu.cn/simple
+
+或者使用国内镜像来安装 pre-commit:
+pip install -U pre-commit -i https://pypi.mirrors.ustc.edu.cn/simple
+```
+
+如果安装过程被中断,可以重复执行上述命令,直到安装成功。
+
+如果你提交的代码中有不符合规范的地方,pre-commit 会发出警告,并自动修复部分错误。
+
+
+
+### 3. 创建开发分支
+
+安装完 pre-commit 之后,我们需要基于 dev 分支创建一个新的开发分支,建议以 `username/pr_name` 的形式命名,例如:
+
+```Shell
+git checkout -b username/refactor_contributing_doc
+```
+
+在后续的开发中,如果本地仓库的 dev 分支落后于官方仓库的 dev 分支,需要先拉取 upstream 的 dev 分支,然后将本地的开发分支 rebase 到最新的 dev 分支上:
+
+```Shell
+git checkout username/refactor_contributing_doc
+git fetch upstream
+git rebase upstream/dev-1.x
+```
+
+在 rebase 时,如果出现冲突,需要手动解决冲突,然后执行 `git add` 命令,再执行 `git rebase --continue` 命令,直到 rebase 完成。
+
+### 4. 提交代码并在本地通过单元测试
+
+在本地开发完成后,我们需要在本地通过单元测试,然后提交代码。
+
+```shell
+# 运行单元测试
+pytest tests/
+
+# 提交代码
+git add .
+git commit -m "commit message"
+```
+
+### 5. 推送代码到远程仓库
+
+在本地开发完成后,我们需要将代码推送到远程仓库。
+
+```Shell
+git push origin username/refactor_contributing_doc
+```
+
+### 6. 提交 Pull Request (PR)
+
+#### (1) 在 GitHub 上创建 PR
+
+
+
+#### (2) 在 PR 中根据指引修改描述,添加必要的信息
+
+
+
+```{note}
+- 在 PR branch 左侧选择 `dev` 分支,否则 PR 会被拒绝。
+- 如果你是第一次向 OpenMMLab 提交 PR,需要签署 CLA。
+```
+
+
+
+## 代码风格
+
+### Python
+
+我们采用[PEP8](https://www.python.org/dev/peps/pep-0008/)作为代码风格。
+
+使用下面的工具来对代码进行整理和格式化:
+
+- [flake8](http://flake8.pycqa.org/en/latest/):代码提示
+- [isort](https://github.com/timothycrosley/isort):import 排序
+- [yapf](https://github.com/google/yapf):格式化工具
+- [codespell](https://github.com/codespell-project/codespell): 单词拼写检查
+- [mdformat](https://github.com/executablebooks/mdformat): markdown 文件格式化工具
+- [docformatter](https://github.com/myint/docformatter): docstring 格式化工具
+
+`yapf`和`isort`的样式配置可以在[setup.cfg](/setup.cfg)中找到。
+
+我们使用[pre-commit hook](https://pre-commit.com/)来:
+
+- 检查和格式化 `flake8`、`yapf`、`isort`、`trailing whitespaces`
+- 修复 `end-of-files`
+- 在每次提交时自动排序 `requirements.txt`
+
+`pre-commit`的配置存储在[.pre-commit-config](/.pre-commit-config.yaml)中。
+
+```{note}
+在你创建PR之前,请确保你的代码格式符合规范,且经过了 yapf 格式化。
+```
+
+### C++与CUDA
+
+遵循[Google C++风格指南](https://google.github.io/styleguide/cppguide.html)
diff --git a/internlm_langchain/knowledge_base/MMPose/content/customize_datasets.md b/internlm_langchain/knowledge_base/MMPose/content/customize_datasets.md
new file mode 100644
index 00000000..61b58dc9
--- /dev/null
+++ b/internlm_langchain/knowledge_base/MMPose/content/customize_datasets.md
@@ -0,0 +1,264 @@
+# 自定义数据集
+
+MMPose 目前已支持了多个任务和相应的数据集。您可以在 [数据集](https://mmpose.readthedocs.io/zh_CN/latest/dataset_zoo.html) 找到它们。请按照相应的指南准备数据。
+
+
+
+- [自定义数据集-将数据组织为 COCO 格式](#自定义数据集-将数据组织为-coco-格式)
+- [创建自定义数据集的元信息文件](#创建自定义数据集的元信息文件)
+- [创建自定义数据集类](#创建自定义数据集类)
+- [创建自定义配置文件](#创建自定义配置文件)
+- [数据集封装](#数据集封装)
+
+
+
+## 将数据组织为 COCO 格式
+
+最简单的使用自定义数据集的方法是将您的注释格式转换为 COCO 数据集格式。
+
+COCO 格式的注释 JSON 文件具有以下必要键:
+
+```python
+'images': [
+ {
+ 'file_name': '000000001268.jpg',
+ 'height': 427,
+ 'width': 640,
+ 'id': 1268
+ },
+ ...
+],
+'annotations': [
+ {
+ 'segmentation': [[426.36,
+ ...
+ 424.34,
+ 223.3]],
+ 'keypoints': [0,0,0,
+ 0,0,0,
+ 0,0,0,
+ 427,220,2,
+ 443,222,2,
+ 414,228,2,
+ 449,232,2,
+ 408,248,1,
+ 454,261,2,
+ 0,0,0,
+ 0,0,0,
+ 411,287,2,
+ 431,287,2,
+ 0,0,0,
+ 458,265,2,
+ 0,0,0,
+ 466,300,1],
+ 'num_keypoints': 10,
+ 'area': 3894.5826,
+ 'iscrowd': 0,
+ 'image_id': 1268,
+ 'bbox': [402.34, 205.02, 65.26, 88.45],
+ 'category_id': 1,
+ 'id': 215218
+ },
+ ...
+],
+'categories': [
+ {'id': 1, 'name': 'person'},
+ ]
+```
+
+JSON 标注文件中有三个关键词是必需的:
+
+- `images`:包含所有图像信息的列表,每个图像都有一个 `file_name`、`height`、`width` 和 `id` 键。
+- `annotations`:包含所有实例标注信息的列表,每个实例都有一个 `segmentation`、`keypoints`、`num_keypoints`、`area`、`iscrowd`、`image_id`、`bbox`、`category_id` 和 `id` 键。
+- `categories`:包含所有类别信息的列表,每个类别都有一个 `id` 和 `name` 键。以人体姿态估计为例,`id` 为 1,`name` 为 `person`。
+
+如果您的数据集已经是 COCO 格式的,那么您可以直接使用 `CocoDataset` 类来读取该数据集。
+
+## 创建自定义数据集的元信息文件
+
+对于一个新的数据集而言,您需要创建一个新的数据集元信息文件。该文件包含了数据集的基本信息,如关键点个数、排列顺序、可视化颜色、骨架连接关系等。元信息文件通常存放在 `config/_base_/datasets/` 目录下,例如:
+
+```
+config/_base_/datasets/custom.py
+```
+
+元信息文件中需要包含以下信息:
+
+- `keypoint_info`:每个关键点的信息:
+ 1. `name`: 关键点名称,必须是唯一的,例如 `nose`、`left_eye` 等。
+ 2. `id`: 关键点 ID,必须是唯一的,从 0 开始。
+ 3. `color`: 关键点可视化时的颜色,以 (\[B, G, R\]) 格式组织起来,用于可视化。
+  4. `type`: 关键点类型,可以是 `upper`、`lower` 或 `''`(空字符串),用于数据增强。
+ 5. `swap`: 关键点交换关系,用于水平翻转数据增强。
+- `skeleton_info`:骨架连接关系,用于可视化。
+- `joint_weights`:每个关键点的权重,用于损失函数计算。
+- `sigma`:标准差,用于计算 OKS 分数,详细信息请参考 [keypoints-eval](https://cocodataset.org/#keypoints-eval)。
+
+下面是一个简化版本的元信息文件([完整版](/configs/_base_/datasets/coco.py)):
+
+```python
+dataset_info = dict(
+ dataset_name='coco',
+ paper_info=dict(
+ author='Lin, Tsung-Yi and Maire, Michael and '
+ 'Belongie, Serge and Hays, James and '
+ 'Perona, Pietro and Ramanan, Deva and '
+ r'Doll{\'a}r, Piotr and Zitnick, C Lawrence',
+ title='Microsoft coco: Common objects in context',
+ container='European conference on computer vision',
+ year='2014',
+ homepage='http://cocodataset.org/',
+ ),
+ keypoint_info={
+ 0:
+ dict(name='nose', id=0, color=[51, 153, 255], type='upper', swap=''),
+ 1:
+ dict(
+ name='left_eye',
+ id=1,
+ color=[51, 153, 255],
+ type='upper',
+ swap='right_eye'),
+ ...
+ 16:
+ dict(
+ name='right_ankle',
+ id=16,
+ color=[255, 128, 0],
+ type='lower',
+ swap='left_ankle')
+ },
+ skeleton_info={
+ 0:
+ dict(link=('left_ankle', 'left_knee'), id=0, color=[0, 255, 0]),
+ ...
+ 18:
+ dict(
+ link=('right_ear', 'right_shoulder'), id=18, color=[51, 153, 255])
+ },
+ joint_weights=[
+ 1., 1., 1., 1., 1., 1., 1., 1.2, 1.2, 1.5, 1.5, 1., 1., 1.2, 1.2, 1.5,
+ 1.5
+ ],
+ sigmas=[
+ 0.026, 0.025, 0.025, 0.035, 0.035, 0.079, 0.079, 0.072, 0.072, 0.062,
+ 0.062, 0.107, 0.107, 0.087, 0.087, 0.089, 0.089
+ ])
+```
+
+## 创建自定义数据集类
+
+如果标注信息不是用 COCO 格式存储的,那么您需要创建一个新的数据集类。数据集类需要继承自 `BaseDataset` 类,并且需要按照以下步骤实现:
+
+1. 在 `mmpose/datasets/datasets` 目录下找到该数据集符合的 package,如果没有符合的,则创建一个新的 package。
+
+2. 在该 package 下创建一个新的数据集类,在对应的注册器中进行注册:
+
+ ```python
+ from mmengine.dataset import BaseDataset
+ from mmpose.registry import DATASETS
+
+ @DATASETS.register_module(name='MyCustomDataset')
+ class MyCustomDataset(BaseDataset):
+ ```
+
+ 如果未注册,你会在运行时遇到 `KeyError: 'XXXXX is not in the dataset registry'`。
+ 关于 `mmengine.BaseDataset` 的更多信息,请参考 [这个文档](https://mmengine.readthedocs.io/en/latest/advanced_tutorials/basedataset.html)。
+
+3. 确保你在 package 的 `__init__.py` 中导入了该数据集类。
+
+4. 确保你在 `mmpose/datasets/__init__.py` 中导入了该 package。
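+
+以第 3 步为例,下面是一个示意的 `__init__.py` 写法(其中 package 名 `custom`、文件名 `my_custom_dataset.py` 与类名 `MyCustomDataset` 均为假设,请按实际情况替换):
+
+```python
+# mmpose/datasets/datasets/custom/__init__.py
+from .my_custom_dataset import MyCustomDataset
+
+__all__ = ['MyCustomDataset']
+```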
+
+## 创建自定义配置文件
+
+在配置文件中,你需要修改跟数据集有关的部分,例如:
+
+```python
+...
+# 自定义数据集类
+dataset_type = 'MyCustomDataset' # or 'CocoDataset'
+
+train_dataloader = dict(
+ batch_size=2,
+ dataset=dict(
+ type=dataset_type,
+ data_root='root/of/your/train/data',
+ ann_file='path/to/your/train/json',
+ data_prefix=dict(img='path/to/your/train/img'),
+ metainfo=dict(from_file='configs/_base_/datasets/custom.py'),
+ ...),
+ )
+
+val_dataloader = dict(
+ batch_size=2,
+ dataset=dict(
+ type=dataset_type,
+ data_root='root/of/your/val/data',
+ ann_file='path/to/your/val/json',
+ data_prefix=dict(img='path/to/your/val/img'),
+ metainfo=dict(from_file='configs/_base_/datasets/custom.py'),
+ ...),
+ )
+
+test_dataloader = dict(
+ batch_size=2,
+ dataset=dict(
+ type=dataset_type,
+ data_root='root/of/your/test/data',
+ ann_file='path/to/your/test/json',
+ data_prefix=dict(img='path/to/your/test/img'),
+ metainfo=dict(from_file='configs/_base_/datasets/custom.py'),
+ ...),
+ )
+...
+```
+
+请确保所有的路径都是正确的。
+
+## 数据集封装
+
+目前 [MMEngine](https://github.com/open-mmlab/mmengine) 支持以下数据集封装:
+
+- [ConcatDataset](https://mmengine.readthedocs.io/zh_CN/latest/advanced_tutorials/basedataset.html#concatdataset)
+- [RepeatDataset](https://mmengine.readthedocs.io/zh_CN/latest/advanced_tutorials/basedataset.html#repeatdataset)
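+
+以 `RepeatDataset` 为例,下面给出一个配置示意(沿用上文自定义配置中的 `dataset_type` 与路径写法,`times` 的取值仅为示例):
+
+```python
+train_dataloader = dict(
+    batch_size=2,
+    dataset=dict(
+        type='RepeatDataset',
+        times=3,  # 将原数据集重复 3 遍
+        dataset=dict(
+            type=dataset_type,
+            data_root='root/of/your/train/data',
+            ann_file='path/to/your/train/json',
+            data_prefix=dict(img='path/to/your/train/img'),
+            metainfo=dict(from_file='configs/_base_/datasets/custom.py'))),
+)
+```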
+
+### CombinedDataset
+
+MMPose 提供了一个 `CombinedDataset` 类,它可以将多个数据集封装成一个数据集。它的使用方法如下:
+
+```python
+dataset_1 = dict(
+ type='dataset_type_1',
+ data_root='root/of/your/dataset1',
+ data_prefix=dict(img_path='path/to/your/img'),
+ ann_file='annotations/train.json',
+ pipeline=[
+ # 使用转换器将标注信息统一为需要的格式
+ converter_transform_1
+ ])
+
+dataset_2 = dict(
+ type='dataset_type_2',
+ data_root='root/of/your/dataset2',
+ data_prefix=dict(img_path='path/to/your/img'),
+ ann_file='annotations/train.json',
+ pipeline=[
+ converter_transform_2
+ ])
+
+shared_pipeline = [
+ LoadImage(),
+ ParseImage(),
+]
+
+combined_dataset = dict(
+ type='CombinedDataset',
+ metainfo=dict(from_file='path/to/your/metainfo'),
+ datasets=[dataset_1, dataset_2],
+ pipeline=shared_pipeline,
+)
+```
+
+- **合并数据集的元信息** 决定了标注格式,可以是子数据集的元信息,也可以是自定义的元信息。如果要自定义元信息,可以参考 [创建自定义数据集的元信息文件](#创建自定义数据集的元信息文件)。
+- **KeypointConverter** 用于将不同的标注格式转换成统一的格式。比如将关键点个数不同、关键点排列顺序不同的数据集进行合并。
+- 更详细的说明请前往[混合数据集训练](../user_guides/mixed_datasets.md)。
diff --git a/internlm_langchain/knowledge_base/MMPose/content/customize_logging.md b/internlm_langchain/knowledge_base/MMPose/content/customize_logging.md
new file mode 100644
index 00000000..093a530d
--- /dev/null
+++ b/internlm_langchain/knowledge_base/MMPose/content/customize_logging.md
@@ -0,0 +1,3 @@
+# Customize Logging
+
+Coming soon.
diff --git a/internlm_langchain/knowledge_base/MMPose/content/customize_optimizer.md b/internlm_langchain/knowledge_base/MMPose/content/customize_optimizer.md
new file mode 100644
index 00000000..fd6a2829
--- /dev/null
+++ b/internlm_langchain/knowledge_base/MMPose/content/customize_optimizer.md
@@ -0,0 +1,3 @@
+# Customize Optimizer and Scheduler
+
+Coming soon.
diff --git a/internlm_langchain/knowledge_base/MMPose/content/customize_transforms.md b/internlm_langchain/knowledge_base/MMPose/content/customize_transforms.md
new file mode 100644
index 00000000..15441399
--- /dev/null
+++ b/internlm_langchain/knowledge_base/MMPose/content/customize_transforms.md
@@ -0,0 +1,3 @@
+# Customize Data Transformation and Augmentation
+
+Coming soon.
diff --git a/internlm_langchain/knowledge_base/MMPose/content/dataflow.md b/internlm_langchain/knowledge_base/MMPose/content/dataflow.md
new file mode 100644
index 00000000..9f098b02
--- /dev/null
+++ b/internlm_langchain/knowledge_base/MMPose/content/dataflow.md
@@ -0,0 +1,3 @@
+# Dataflow in MMPose
+
+Coming soon.
diff --git a/internlm_langchain/knowledge_base/MMPose/content/dataset_tools.md b/internlm_langchain/knowledge_base/MMPose/content/dataset_tools.md
new file mode 100644
index 00000000..a2e6d01d
--- /dev/null
+++ b/internlm_langchain/knowledge_base/MMPose/content/dataset_tools.md
@@ -0,0 +1,413 @@
+# 数据集格式转换脚本
+
+MMPose 提供了一些工具来帮助用户处理数据集。
+
+## Animal Pose 数据集
+
+
+Animal-Pose (ICCV'2019)
+
+```bibtex
+@InProceedings{Cao_2019_ICCV,
+ author = {Cao, Jinkun and Tang, Hongyang and Fang, Hao-Shu and Shen, Xiaoyong and Lu, Cewu and Tai, Yu-Wing},
+ title = {Cross-Domain Adaptation for Animal Pose Estimation},
+ booktitle = {The IEEE International Conference on Computer Vision (ICCV)},
+ month = {October},
+ year = {2019}
+}
+```
+
+
+
+对于 [Animal-Pose](https://sites.google.com/view/animal-pose/),可以从[官方网站](https://sites.google.com/view/animal-pose/)下载图像和标注。脚本 `tools/dataset_converters/parse_animalpose_dataset.py` 将原始标注转换为 MMPose 兼容的格式。预处理的[标注文件](https://download.openmmlab.com/mmpose/datasets/animalpose_annotations.tar)可用。如果您想自己生成标注,请按照以下步骤操作:
+
+1. 下载图片与标注信息并解压到 `$MMPOSE/data`,按照以下格式组织:
+
+ ```text
+ mmpose
+ ├── mmpose
+ ├── docs
+ ├── tests
+ ├── tools
+ ├── configs
+ `── data
+ │── animalpose
+ │
+ │-- VOC2012
+ │ │-- Annotations
+ │ │-- ImageSets
+ │ │-- JPEGImages
+ │ │-- SegmentationClass
+ │ │-- SegmentationObject
+ │
+ │-- animalpose_image_part2
+ │ │-- cat
+ │ │-- cow
+ │ │-- dog
+ │ │-- horse
+ │ │-- sheep
+ │
+ │-- PASCAL2011_animal_annotation
+ │ │-- cat
+ │ │ |-- 2007_000528_1.xml
+ │ │ |-- 2007_000549_1.xml
+ │ │ │-- ...
+ │ │-- cow
+ │ │-- dog
+ │ │-- horse
+ │ │-- sheep
+ │
+ │-- annimalpose_anno2
+ │ │-- cat
+ │ │ |-- ca1.xml
+ │ │ |-- ca2.xml
+ │ │ │-- ...
+ │ │-- cow
+ │ │-- dog
+ │ │-- horse
+ │ │-- sheep
+ ```
+
+2. 运行脚本
+
+ ```bash
+ python tools/dataset_converters/parse_animalpose_dataset.py
+ ```
+
+ 生成的标注文件将保存在 `$MMPOSE/data/animalpose/annotations` 中。
+
+开源作者没有提供官方的 train/val/test 划分,我们选择来自 PascalVOC 的图片作为 train & val,train+val 一共 3600 张图片,5117 个标注。其中 2798 张图片,4000 个标注用于训练,810 张图片,1117 个标注用于验证。测试集包含 1000 张图片,1000 个标注用于评估。
+
+## COFW 数据集
+
+
+COFW (ICCV'2013)
+
+```bibtex
+@inproceedings{burgos2013robust,
+ title={Robust face landmark estimation under occlusion},
+ author={Burgos-Artizzu, Xavier P and Perona, Pietro and Doll{\'a}r, Piotr},
+ booktitle={Proceedings of the IEEE international conference on computer vision},
+ pages={1513--1520},
+ year={2013}
+}
+```
+
+
+
+对于 COFW 数据集,请从 [COFW Dataset (Color Images)](https://data.caltech.edu/records/20099) 进行下载。
+
+将 `COFW_train_color.mat` 和 `COFW_test_color.mat` 移动到 `$MMPOSE/data/cofw/`,确保它们按照以下格式组织:
+
+```text
+mmpose
+├── mmpose
+├── docs
+├── tests
+├── tools
+├── configs
+`── data
+ │── cofw
+ |── COFW_train_color.mat
+ |── COFW_test_color.mat
+```
+
+运行 `pip install h5py` 安装依赖,然后在 `$MMPOSE` 下运行脚本:
+
+```bash
+python tools/dataset_converters/parse_cofw_dataset.py
+```
+
+最终结果为:
+
+```text
+mmpose
+├── mmpose
+├── docs
+├── tests
+├── tools
+├── configs
+`── data
+ │── cofw
+ |── COFW_train_color.mat
+ |── COFW_test_color.mat
+ |── annotations
+ | |── cofw_train.json
+ | |── cofw_test.json
+ |── images
+ |── 000001.jpg
+ |── 000002.jpg
+```
+
+## DeepposeKit 数据集
+
+
+Desert Locust (Elife'2019)
+
+```bibtex
+@article{graving2019deepposekit,
+ title={DeepPoseKit, a software toolkit for fast and robust animal pose estimation using deep learning},
+ author={Graving, Jacob M and Chae, Daniel and Naik, Hemal and Li, Liang and Koger, Benjamin and Costelloe, Blair R and Couzin, Iain D},
+ journal={Elife},
+ volume={8},
+ pages={e47994},
+ year={2019},
+ publisher={eLife Sciences Publications Limited}
+}
+```
+
+
+
+对于 [Vinegar Fly](https://github.com/jgraving/DeepPoseKit-Data),[Desert Locust](https://github.com/jgraving/DeepPoseKit-Data), 和 [Grévy’s Zebra](https://github.com/jgraving/DeepPoseKit-Data) 数据集,请从 [DeepPoseKit-Data](https://github.com/jgraving/DeepPoseKit-Data) 下载数据。
+
+`tools/dataset_converters/parse_deepposekit_dataset.py` 脚本可以将原始标注转换为 MMPose 支持的格式。我们已经转换好的标注文件可以在这里下载:
+
+- [vinegar_fly_annotations](https://download.openmmlab.com/mmpose/datasets/vinegar_fly_annotations.tar)
+- [locust_annotations](https://download.openmmlab.com/mmpose/datasets/locust_annotations.tar)
+- [zebra_annotations](https://download.openmmlab.com/mmpose/datasets/zebra_annotations.tar)
+
+如果你希望自己转换数据,请按照以下步骤操作:
+
+1. 下载原始图片和标注,并解压到 `$MMPOSE/data`,将它们按照以下格式组织:
+
+ ```text
+ mmpose
+ ├── mmpose
+ ├── docs
+ ├── tests
+ ├── tools
+ ├── configs
+ `── data
+ |
+ |── DeepPoseKit-Data
+ | `── datasets
+ | |── fly
+ | | |── annotation_data_release.h5
+ | | |── skeleton.csv
+ | | |── ...
+ | |
+ | |── locust
+ | | |── annotation_data_release.h5
+ | | |── skeleton.csv
+ | | |── ...
+ | |
+ | `── zebra
+ | |── annotation_data_release.h5
+ | |── skeleton.csv
+ | |── ...
+ |
+ │── fly
+ `-- images
+ │-- 0.jpg
+ │-- 1.jpg
+ │-- ...
+ ```
+
+   图片也可以在 [vinegar_fly_images](https://download.openmmlab.com/mmpose/datasets/vinegar_fly_images.tar)、[locust_images](https://download.openmmlab.com/mmpose/datasets/locust_images.tar) 和 [zebra_images](https://download.openmmlab.com/mmpose/datasets/zebra_images.tar) 下载。
+
+2. 运行脚本:
+
+ ```bash
+ python tools/dataset_converters/parse_deepposekit_dataset.py
+ ```
+
+   生成的标注文件将保存在 `$MMPOSE/data/fly/annotations`、`$MMPOSE/data/locust/annotations` 和 `$MMPOSE/data/zebra/annotations` 中。
+
+由于官方数据集中没有提供测试集,我们随机选择了 90% 的图片用于训练,剩下的 10% 用于测试。
+
+## Macaque 数据集
+
+
+MacaquePose (bioRxiv'2020)
+
+```bibtex
+@article{labuguen2020macaquepose,
+ title={MacaquePose: A novel ‘in the wild’macaque monkey pose dataset for markerless motion capture},
+ author={Labuguen, Rollyn and Matsumoto, Jumpei and Negrete, Salvador and Nishimaru, Hiroshi and Nishijo, Hisao and Takada, Masahiko and Go, Yasuhiro and Inoue, Ken-ichi and Shibata, Tomohiro},
+ journal={bioRxiv},
+ year={2020},
+ publisher={Cold Spring Harbor Laboratory}
+}
+```
+
+
+
+对于 [MacaquePose](http://www2.ehub.kyoto-u.ac.jp/datasets/macaquepose/index.html) 数据集,请从 [这里](http://www2.ehub.kyoto-u.ac.jp/datasets/macaquepose/index.html) 下载数据。
+
+`tools/dataset_converters/parse_macaquepose_dataset.py` 脚本可以将原始标注转换为 MMPose 支持的格式。我们已经转换好的标注文件可以在 [这里](https://download.openmmlab.com/mmpose/datasets/macaque_annotations.tar) 下载。
+
+如果你希望自己转换数据,请按照以下步骤操作:
+
+1. 下载原始图片和标注,并解压到 `$MMPOSE/data`,将它们按照以下格式组织:
+
+ ```text
+ mmpose
+ ├── mmpose
+ ├── docs
+ ├── tests
+ ├── tools
+ ├── configs
+ `── data
+ │── macaque
+ │-- annotations.csv
+ │-- images
+ │ │-- 01418849d54b3005.jpg
+ │ │-- 0142d1d1a6904a70.jpg
+ │ │-- 01ef2c4c260321b7.jpg
+ │ │-- 020a1c75c8c85238.jpg
+ │ │-- 020b1506eef2557d.jpg
+ │ │-- ...
+ ```
+
+2. 运行脚本:
+
+ ```bash
+ python tools/dataset_converters/parse_macaquepose_dataset.py
+ ```
+
+ 生成的标注文件将保存在 `$MMPOSE/data/macaque/annotations` 中。
+
+由于官方数据集中没有提供测试集,我们随机选择了 90% 的图片用于训练,剩下的 10% 用于测试。
+
+## Human3.6M 数据集
+
+
+Human3.6M (TPAMI'2014)
+
+```bibtex
+@article{h36m_pami,
+ author = {Ionescu, Catalin and Papava, Dragos and Olaru, Vlad and Sminchisescu, Cristian},
+ title = {Human3.6M: Large Scale Datasets and Predictive Methods for 3D Human Sensing in Natural Environments},
+ journal = {IEEE Transactions on Pattern Analysis and Machine Intelligence},
+ publisher = {IEEE Computer Society},
+ volume = {36},
+ number = {7},
+ pages = {1325-1339},
+ month = {jul},
+ year = {2014}
+}
+```
+
+
+
+对于 [Human3.6M](http://vision.imar.ro/human3.6m/description.php) 数据集,请从官网下载数据,放置到 `$MMPOSE/data/h36m` 下。
+
+然后执行 [预处理脚本](/tools/dataset_converters/preprocess_h36m.py)。
+
+```bash
+python tools/dataset_converters/preprocess_h36m.py --metadata {path to metadata.xml} --original data/h36m
+```
+
+这将在全帧率(50 FPS)和降频帧率(10 FPS)下提取相机参数和姿势注释。处理后的数据应具有以下结构:
+
+```text
+mmpose
+├── mmpose
+├── docs
+├── tests
+├── tools
+├── configs
+`── data
+ ├── h36m
+ ├── annotation_body3d
+ | ├── cameras.pkl
+ | ├── fps50
+ | | ├── h36m_test.npz
+ | | ├── h36m_train.npz
+ | | ├── joint2d_rel_stats.pkl
+ | | ├── joint2d_stats.pkl
+ | | ├── joint3d_rel_stats.pkl
+ | | `── joint3d_stats.pkl
+ | `── fps10
+ | ├── h36m_test.npz
+ | ├── h36m_train.npz
+ | ├── joint2d_rel_stats.pkl
+ | ├── joint2d_stats.pkl
+ | ├── joint3d_rel_stats.pkl
+ | `── joint3d_stats.pkl
+ `── images
+ ├── S1
+ | ├── S1_Directions_1.54138969
+ | | ├── S1_Directions_1.54138969_00001.jpg
+ | | ├── S1_Directions_1.54138969_00002.jpg
+ | | ├── ...
+ | ├── ...
+ ├── S5
+ ├── S6
+ ├── S7
+ ├── S8
+ ├── S9
+ `── S11
+```
+
+然后,标注信息需要转换为 MMPose 支持的 COCO 格式。这可以通过运行以下命令完成:
+
+```bash
+python tools/dataset_converters/h36m_to_coco.py
+```
+
+## MPII 数据集
+
+
+MPII (CVPR'2014)
+
+```bibtex
+@inproceedings{andriluka14cvpr,
+ author = {Mykhaylo Andriluka and Leonid Pishchulin and Peter Gehler and Schiele, Bernt},
+ title = {2D Human Pose Estimation: New Benchmark and State of the Art Analysis},
+ booktitle = {IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
+ year = {2014},
+ month = {June}
+}
+```
+
+
+
+对于 [MPII](http://human-pose.mpi-inf.mpg.de/) 数据集,请从官网下载数据,放置到 `$MMPOSE/data/mpii` 下。
+
+我们提供了一个脚本来将 `.mat` 格式的标注文件转换为 `.json` 格式。这可以通过运行以下命令完成:
+
+```shell
+python tools/dataset_converters/mat2json ${PRED_MAT_FILE} ${GT_JSON_FILE} ${OUTPUT_PRED_JSON_FILE}
+```
+
+例如:
+
+```shell
+python tools/dataset_converters/mat2json work_dirs/res50_mpii_256x256/pred.mat data/mpii/annotations/mpii_val.json pred.json
+```
+
+## Label Studio 数据集
+
+
+Label Studio
+
+```bibtex
+@misc{Label Studio,
+ title={{Label Studio}: Data labeling software},
+ url={https://github.com/heartexlabs/label-studio},
+ note={Open source software available from https://github.com/heartexlabs/label-studio},
+ author={
+ Maxim Tkachenko and
+ Mikhail Malyuk and
+ Andrey Holmanyuk and
+ Nikolai Liubimov},
+ year={2020-2022},
+}
+```
+
+
+
+对于 [Label Studio](https://github.com/heartexlabs/label-studio/) 用户,请依照 [Label Studio 转换工具文档](./label_studio.md) 中的方法进行标注,并将结果导出为 Label Studio 标准的 `.json` 文件,将 `Labeling Interface` 中的 `Code` 保存为 `.xml` 文件。
+
+我们提供了一个脚本来将 Label Studio 标准的 `.json` 格式标注文件转换为 COCO 标准的 `.json` 格式。这可以通过运行以下命令完成:
+
+```shell
+python tools/dataset_converters/labelstudio2coco.py ${LS_XML_FILE} ${LS_JSON_FILE} ${OUTPUT_COCO_JSON_FILE}
+```
+
+例如:
+
+```shell
+python tools/dataset_converters/labelstudio2coco.py config.xml project-1-at-2023-05-13-09-22-91b53efa.json output/result.json
+```
diff --git a/internlm_langchain/knowledge_base/MMPose/content/ecosystem.md b/internlm_langchain/knowledge_base/MMPose/content/ecosystem.md
new file mode 100644
index 00000000..b0027cfa
--- /dev/null
+++ b/internlm_langchain/knowledge_base/MMPose/content/ecosystem.md
@@ -0,0 +1,3 @@
+# Ecosystem
+
+Coming soon.
diff --git a/internlm_langchain/knowledge_base/MMPose/content/faq.md b/internlm_langchain/knowledge_base/MMPose/content/faq.md
new file mode 100644
index 00000000..b1e69983
--- /dev/null
+++ b/internlm_langchain/knowledge_base/MMPose/content/faq.md
@@ -0,0 +1,148 @@
+# FAQ
+
+We list some common issues faced by many users and their corresponding solutions here.
+Feel free to enrich the list if you find any frequent issues and have ways to help others to solve them.
+If the contents here do not cover your issue, please create an issue using the [provided templates](/.github/ISSUE_TEMPLATE/error-report.md) and make sure you fill in all required information in the template.
+
+## Installation
+
+Compatibility issue between MMCV and MMPose; "AssertionError: MMCV==xxx is used but incompatible. Please install mmcv>=xxx, \<=xxx."
+
+Here are the version correspondences between `mmdet`, `mmcv` and `mmpose`:
+
+- mmdet 2.x \<=> mmpose 0.x \<=> mmcv 1.x
+- mmdet 3.x \<=> mmpose 1.x \<=> mmcv 2.x
+
+Detailed compatible MMPose and MMCV versions are shown as below. Please choose the correct version of MMCV to avoid installation issues.
+
+### MMPose 1.x
+
+| MMPose version | MMCV/MMEngine version |
+| :------------: | :-----------------------------: |
+| 1.1.0 | mmcv>=2.0.1, mmengine>=0.8.0 |
+| 1.0.0 | mmcv>=2.0.0, mmengine>=0.7.0 |
+| 1.0.0rc1 | mmcv>=2.0.0rc4, mmengine>=0.6.0 |
+| 1.0.0rc0 | mmcv>=2.0.0rc0, mmengine>=0.0.1 |
+| 1.0.0b0 | mmcv>=2.0.0rc0, mmengine>=0.0.1 |
+
+### MMPose 0.x
+
+| MMPose version | MMCV version |
+| :------------: | :-----------------------: |
+| 0.x | mmcv-full>=1.3.8, \<1.8.0 |
+| 0.29.0 | mmcv-full>=1.3.8, \<1.7.0 |
+| 0.28.1 | mmcv-full>=1.3.8, \<1.7.0 |
+| 0.28.0 | mmcv-full>=1.3.8, \<1.6.0 |
+| 0.27.0 | mmcv-full>=1.3.8, \<1.6.0 |
+| 0.26.0 | mmcv-full>=1.3.8, \<1.6.0 |
+| 0.25.1 | mmcv-full>=1.3.8, \<1.6.0 |
+| 0.25.0 | mmcv-full>=1.3.8, \<1.5.0 |
+| 0.24.0 | mmcv-full>=1.3.8, \<1.5.0 |
+| 0.23.0 | mmcv-full>=1.3.8, \<1.5.0 |
+| 0.22.0 | mmcv-full>=1.3.8, \<1.5.0 |
+| 0.21.0 | mmcv-full>=1.3.8, \<1.5.0 |
+| 0.20.0 | mmcv-full>=1.3.8, \<1.4.0 |
+| 0.19.0 | mmcv-full>=1.3.8, \<1.4.0 |
+| 0.18.0 | mmcv-full>=1.3.8, \<1.4.0 |
+| 0.17.0 | mmcv-full>=1.3.8, \<1.4.0 |
+| 0.16.0 | mmcv-full>=1.3.8, \<1.4.0 |
+| 0.14.0 | mmcv-full>=1.1.3, \<1.4.0 |
+| 0.13.0 | mmcv-full>=1.1.3, \<1.4.0 |
+| 0.12.0 | mmcv-full>=1.1.3, \<1.3 |
+| 0.11.0 | mmcv-full>=1.1.3, \<1.3 |
+| 0.10.0 | mmcv-full>=1.1.3, \<1.3 |
+| 0.9.0 | mmcv-full>=1.1.3, \<1.3 |
+| 0.8.0 | mmcv-full>=1.1.1, \<1.2 |
+| 0.7.0 | mmcv-full>=1.1.1, \<1.2 |
+
+- **Unable to install xtcocotools**
+
+ 1. Try to install it using pypi manually `pip install xtcocotools`.
+ 2. If step1 does not work. Try to install it from [source](https://github.com/jin-s13/xtcocoapi).
+
+ ```
+ git clone https://github.com/jin-s13/xtcocoapi
+ cd xtcocoapi
+ python setup.py install
+ ```
+
+- **No matching distribution found for xtcocotools>=1.6**
+
+ 1. Install cython by `pip install cython`.
+ 2. Install xtcocotools from [source](https://github.com/jin-s13/xtcocoapi).
+
+ ```
+ git clone https://github.com/jin-s13/xtcocoapi
+ cd xtcocoapi
+ python setup.py install
+ ```
+
+- **"No module named 'mmcv.ops'"; "No module named 'mmcv.\_ext'"**
+
+ 1. Uninstall existing mmcv in the environment using `pip uninstall mmcv`.
+ 2. Install mmcv-full following the [installation instruction](https://mmcv.readthedocs.io/en/latest/#installation).
+
+## Data
+
+- **What if my custom dataset does not have bounding box label?**
+
+  We can estimate the bounding box of a person as the minimal box that tightly bounds all the keypoints (see the sketch at the end of this section).
+
+- **What is `COCO_val2017_detections_AP_H_56_person.json`? Can I train pose models without it?**
+
+ "COCO_val2017_detections_AP_H_56_person.json" contains the "detected" human bounding boxes for COCO validation set, which are generated by FasterRCNN.
+  One can choose to use gt bounding boxes to evaluate models, by setting `bbox_file=None` in `val_dataloader.dataset` in config. Or one can use detected boxes to evaluate
+  the generalizability of models, by setting `bbox_file='COCO_val2017_detections_AP_H_56_person.json'`.
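+
+The following is a minimal sketch of the keypoint-based bounding box estimation mentioned above (the function name and the `padding` factor are illustrative and not part of the MMPose API):
+
+```python
+import numpy as np
+
+def keypoints_to_bbox(keypoints, keypoints_visible, padding=1.25):
+    """Estimate an xyxy bbox from visible keypoints ([K, 2] and [K])."""
+    visible = keypoints[keypoints_visible > 0]
+    x1, y1 = visible.min(axis=0)
+    x2, y2 = visible.max(axis=0)
+    # optionally enlarge the tight box to keep some context around the person
+    cx, cy = (x1 + x2) / 2, (y1 + y2) / 2
+    w, h = (x2 - x1) * padding, (y2 - y1) * padding
+    return np.array([cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2])
+```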
+
+## Training
+
+- **RuntimeError: Address already in use**
+
+ Set the environment variables `MASTER_PORT=XXX`. For example,
+ `MASTER_PORT=29517 GPUS=16 GPUS_PER_NODE=8 CPUS_PER_TASK=2 ./tools/slurm_train.sh Test res50 configs/body_2d_keypoint/topdown_regression/coco/td-reg_res50_8xb64-210e_coco-256x192.py work_dirs/res50_coco_256x192`
+
+- **"Unexpected keys in source state dict" when loading pre-trained weights**
+
+  It's normal that some layers in the pretrained model are not used in the pose model. The ImageNet-pretrained classification network and the pose network may have different architectures (e.g. no classification head). So some unexpected keys in the source state dict are actually expected.
+
+- **How to use trained models for backbone pre-training ?**
+
+ Refer to [Migration - Step3: Model - Backbone](../migration.md).
+
+ When training, the unexpected keys will be ignored.
+
+- **How to visualize the training accuracy/loss curves in real-time ?**
+
+ Use `TensorboardLoggerHook` in `log_config` like
+
+ ```python
+ log_config=dict(interval=20, hooks=[dict(type='TensorboardLoggerHook')])
+ ```
+
+ You can refer to [user_guides/visualization.md](../user_guides/visualization.md).
+
+- **Log info is NOT printed**
+
+ Use smaller log interval. For example, change `interval=50` to `interval=1` in the config.
+
+## Evaluation
+
+- **How to evaluate on MPII test dataset?**
+ Since we do not have the ground-truth for test dataset, we cannot evaluate it 'locally'.
+ If you would like to evaluate the performance on test set, you have to upload the pred.mat (which is generated during testing) to the official server via email, according to [the MPII guideline](http://human-pose.mpi-inf.mpg.de/#evaluation).
+
+- **For top-down 2d pose estimation, why predicted joint coordinates can be out of the bounding box (bbox)?**
+ We do not directly use the bbox to crop the image. bbox will be first transformed to center & scale, and the scale will be multiplied by a factor (1.25) to include some context. If the ratio of width/height is different from that of model input (possibly 192/256), we will adjust the bbox.
+
+## Inference
+
+- **How to run mmpose on CPU?**
+
+ Run demos with `--device=cpu`.
+
+- **How to speed up inference?**
+
+ For top-down models, try to edit the config file. For example,
+
+  1. set `flip_test=False` in `test_cfg` in the config file.
+ 2. use faster human bounding box detector, see [MMDetection](https://mmdetection.readthedocs.io/zh_CN/3.x/model_zoo.html).
diff --git a/internlm_langchain/knowledge_base/MMPose/content/guide_to_framework.md b/internlm_langchain/knowledge_base/MMPose/content/guide_to_framework.md
new file mode 100644
index 00000000..349abf23
--- /dev/null
+++ b/internlm_langchain/knowledge_base/MMPose/content/guide_to_framework.md
@@ -0,0 +1,682 @@
+# 20 分钟了解 MMPose 架构设计
+
+MMPose 1.0 与之前的版本有较大改动,对部分模块进行了重新设计和组织,以精简代码、提升运行效率、降低学习难度。对于有一定深度学习基础的用户,本章节提供了对 MMPose 架构设计的总体介绍。不论你是**旧版 MMPose 的用户**,还是**希望直接从 MMPose 1.0 上手的新用户**,都可以通过本教程了解如何构建一个基于 MMPose 1.0 的项目。
+
+```{note}
+本教程包含了使用 MMPose 1.0 时开发者会关心的内容:
+
+- 整体代码架构与设计逻辑
+
+- 如何用config文件管理模块
+
+- 如何使用自定义数据集
+
+- 如何添加新的模块(骨干网络、模型头部、损失函数等)
+```
+
+以下是这篇教程的目录:
+
+- [20 分钟了解 MMPose 架构设计](#20-分钟了解-mmpose-架构设计)
+ - [总览](#总览)
+ - [Step1:配置文件](#step1配置文件)
+ - [Step2:数据](#step2数据)
+ - [数据集元信息](#数据集元信息)
+ - [数据集](#数据集)
+ - [数据流水线](#数据流水线)
+ - [i. 数据增强](#i-数据增强)
+ - [ii. 数据变换](#ii-数据变换)
+ - [iii. 数据编码](#iii-数据编码)
+ - [iv. 数据打包](#iv-数据打包)
+ - [Step3: 模型](#step3-模型)
+ - [前处理器(DataPreprocessor)](#前处理器datapreprocessor)
+ - [主干网络(Backbone)](#主干网络backbone)
+ - [颈部模块(Neck)](#颈部模块neck)
+ - [预测头(Head)](#预测头head)
+
+## 总览
+
+
+
+一般来说,开发者在项目开发过程中经常接触内容的主要有**五个**方面:
+
+- **通用**:环境、钩子(Hook)、模型权重存取(Checkpoint)、日志(Logger)等
+
+- **数据**:数据集、数据读取(Dataloader)、数据增强等
+
+- **训练**:优化器、学习率调整等
+
+- **模型**:主干网络、颈部模块(Neck)、预测头模块(Head)、损失函数等
+
+- **评测**:评测指标(Metric)、评测器(Evaluator)等
+
+其中**通用**、**训练**和**评测**相关的模块往往由训练框架提供,开发者只需要调用和调整参数,不需要自行实现,开发者主要实现的是**数据**和**模型**部分。
+
+## Step1:配置文件
+
+在 MMPose 中,我们通常使用 python 格式的配置文件来进行整个项目的定义和参数管理,因此我们强烈建议第一次接触 MMPose 的开发者查阅 [配置文件](./user_guides/configs.md) 学习配置文件的定义。
+
+需要注意的是,所有新增的模块都需要使用注册器(Registry)进行注册,并在对应目录的 `__init__.py` 中进行 `import`,以便能够使用配置文件构建其实例。
+
+## Step2:数据
+
+MMPose 数据的组织主要包含三个方面:
+
+- 数据集元信息
+
+- 数据集
+
+- 数据流水线
+
+### 数据集元信息
+
+元信息指具体标注之外的数据集信息。姿态估计数据集的元信息通常包括:关键点和骨骼连接的定义、对称性、关键点性质(如关键点权重、标注标准差、所属上下半身)等。这些信息在数据处理、模型训练和测试中有重要作用。在 MMPose 中,数据集的元信息使用 python 格式的配置文件保存,位于 `$MMPOSE/configs/_base_/datasets` 目录下。
+
+在 MMPose 中使用自定义数据集时,你需要增加对应的元信息配置文件。以 MPII 数据集(`$MMPOSE/configs/_base_/datasets/mpii.py`)为例:
+
+```Python
+dataset_info = dict(
+ dataset_name='mpii',
+ paper_info=dict(
+ author='Mykhaylo Andriluka and Leonid Pishchulin and '
+ 'Peter Gehler and Schiele, Bernt',
+ title='2D Human Pose Estimation: New Benchmark and '
+ 'State of the Art Analysis',
+ container='IEEE Conference on Computer Vision and '
+ 'Pattern Recognition (CVPR)',
+ year='2014',
+ homepage='http://human-pose.mpi-inf.mpg.de/',
+ ),
+ keypoint_info={
+ 0:
+ dict(
+ name='right_ankle',
+ id=0,
+ color=[255, 128, 0],
+ type='lower',
+ swap='left_ankle'),
+ ## 内容省略
+ },
+ skeleton_info={
+ 0:
+ dict(link=('right_ankle', 'right_knee'), id=0, color=[255, 128, 0]),
+ ## 内容省略
+ },
+ joint_weights=[
+ 1.5, 1.2, 1., 1., 1.2, 1.5, 1., 1., 1., 1., 1.5, 1.2, 1., 1., 1.2, 1.5
+ ],
+ # 使用 COCO 数据集中提供的 sigmas 值
+ sigmas=[
+ 0.089, 0.083, 0.107, 0.107, 0.083, 0.089, 0.026, 0.026, 0.026, 0.026,
+ 0.062, 0.072, 0.179, 0.179, 0.072, 0.062
+ ])
+```
+
+在模型配置文件中,你需要为自定义数据集指定对应的元信息配置文件。假如该元信息配置文件路径为 `$MMPOSE/configs/_base_/datasets/custom.py`,指定方式如下:
+
+```python
+# dataset and dataloader settings
+dataset_type = 'MyCustomDataset' # or 'CocoDataset'
+train_dataloader = dict(
+ batch_size=2,
+ dataset=dict(
+ type=dataset_type,
+ data_root='root/of/your/train/data',
+ ann_file='path/to/your/train/json',
+ data_prefix=dict(img='path/to/your/train/img'),
+ # 指定对应的元信息配置文件
+ metainfo=dict(from_file='configs/_base_/datasets/custom.py'),
+ ...),
+ )
+val_dataloader = dict(
+ batch_size=2,
+ dataset=dict(
+ type=dataset_type,
+ data_root='root/of/your/val/data',
+ ann_file='path/to/your/val/json',
+ data_prefix=dict(img='path/to/your/val/img'),
+ # 指定对应的元信息配置文件
+ metainfo=dict(from_file='configs/_base_/datasets/custom.py'),
+ ...),
+ )
+test_dataloader = val_dataloader
+```
+
+### 数据集
+
+在 MMPose 中使用自定义数据集时,我们推荐将数据转化为已支持的格式(如 COCO 或 MPII),并直接使用我们提供的对应数据集实现。如果这种方式不可行,则用户需要实现自己的数据集类。
+
+MMPose 中的大部分 2D 关键点数据集**以 COCO 形式组织**,为此我们提供了基类 [BaseCocoStyleDataset](/mmpose/datasets/datasets/base/base_coco_style_dataset.py)。我们推荐用户继承该基类,并按需重写它的方法(通常是 `__init__()` 和 `_load_annotations()` 方法),以扩展到新的 2D 关键点数据集。
+
+```{note}
+关于COCO数据格式的详细说明请参考 [COCO](./dataset_zoo/2d_body_keypoint.md) 。
+```
+
+```{note}
+在 MMPose 中 bbox 的数据格式采用 `xyxy`,而不是 `xywh`,这与 [MMDetection](https://github.com/open-mmlab/mmdetection) 等其他 OpenMMLab 成员保持一致。为了实现不同 bbox 格式之间的转换,我们提供了丰富的函数:`bbox_xyxy2xywh`、`bbox_xywh2xyxy`、`bbox_xyxy2cs`等。这些函数定义在`$MMPOSE/mmpose/structures/bbox/transforms.py`。
+```
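+
+下面给出这些 bbox 转换函数的一个使用示意(假设可从 `mmpose.structures.bbox` 中导入,具体函数签名与返回值请以源码为准):
+
+```Python
+import numpy as np
+from mmpose.structures.bbox import bbox_xyxy2xywh, bbox_xywh2xyxy
+
+bbox_xyxy = np.array([[100., 50., 200., 250.]])  # [x1, y1, x2, y2]
+bbox_xywh = bbox_xyxy2xywh(bbox_xyxy)            # 转换为 [x, y, w, h] 格式
+bbox_back = bbox_xywh2xyxy(bbox_xywh)            # 再转换回 xyxy 格式
+```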
+
+下面我们以MPII数据集的实现(`$MMPOSE/mmpose/datasets/datasets/body/mpii_dataset.py`)为例:
+
+```Python
+@DATASETS.register_module()
+class MpiiDataset(BaseCocoStyleDataset):
+ METAINFO: dict = dict(from_file='configs/_base_/datasets/mpii.py')
+
+ def __init__(self,
+ ## 内容省略
+ headbox_file: Optional[str] = None,
+ ## 内容省略):
+
+ if headbox_file:
+ if data_mode != 'topdown':
+ raise ValueError(
+ f'{self.__class__.__name__} is set to {data_mode}: '
+ 'mode, while "headbox_file" is only '
+ 'supported in topdown mode.')
+
+ if not test_mode:
+ raise ValueError(
+ f'{self.__class__.__name__} has `test_mode==False` '
+ 'while "headbox_file" is only '
+ 'supported when `test_mode==True`.')
+
+ headbox_file_type = headbox_file[-3:]
+ allow_headbox_file_type = ['mat']
+ if headbox_file_type not in allow_headbox_file_type:
+ raise KeyError(
+ f'The head boxes file type {headbox_file_type} is not '
+ f'supported. Should be `mat` but got {headbox_file_type}.')
+ self.headbox_file = headbox_file
+
+ super().__init__(
+ ## 内容省略
+ )
+
+ def _load_annotations(self) -> List[dict]:
+ """Load data from annotations in MPII format."""
+ check_file_exist(self.ann_file)
+ with open(self.ann_file) as anno_file:
+ anns = json.load(anno_file)
+
+ if self.headbox_file:
+ check_file_exist(self.headbox_file)
+ headbox_dict = loadmat(self.headbox_file)
+ headboxes_src = np.transpose(headbox_dict['headboxes_src'],
+ [2, 0, 1])
+ SC_BIAS = 0.6
+
+ data_list = []
+ ann_id = 0
+
+ # mpii bbox scales are normalized with factor 200.
+ pixel_std = 200.
+
+ for idx, ann in enumerate(anns):
+ center = np.array(ann['center'], dtype=np.float32)
+ scale = np.array([ann['scale'], ann['scale']],
+ dtype=np.float32) * pixel_std
+
+ # Adjust center/scale slightly to avoid cropping limbs
+ if center[0] != -1:
+ center[1] = center[1] + 15. / pixel_std * scale[1]
+
+ # MPII uses matlab format, index is 1-based,
+ # we should first convert to 0-based index
+ center = center - 1
+
+ # unify shape with coco datasets
+ center = center.reshape(1, -1)
+ scale = scale.reshape(1, -1)
+ bbox = bbox_cs2xyxy(center, scale)
+
+ # load keypoints in shape [1, K, 2] and keypoints_visible in [1, K]
+ keypoints = np.array(ann['joints']).reshape(1, -1, 2)
+ keypoints_visible = np.array(ann['joints_vis']).reshape(1, -1)
+
+ data_info = {
+ 'id': ann_id,
+ 'img_id': int(ann['image'].split('.')[0]),
+ 'img_path': osp.join(self.data_prefix['img'], ann['image']),
+ 'bbox_center': center,
+ 'bbox_scale': scale,
+ 'bbox': bbox,
+ 'bbox_score': np.ones(1, dtype=np.float32),
+ 'keypoints': keypoints,
+ 'keypoints_visible': keypoints_visible,
+ }
+
+ if self.headbox_file:
+ # calculate the diagonal length of head box as norm_factor
+ headbox = headboxes_src[idx]
+ head_size = np.linalg.norm(headbox[1] - headbox[0], axis=0)
+ head_size *= SC_BIAS
+ data_info['head_size'] = head_size.reshape(1, -1)
+
+ data_list.append(data_info)
+ ann_id = ann_id + 1
+
+ return data_list
+```
+
+在对MPII数据集进行支持时,由于MPII需要读入 `head_size` 信息来计算 `PCKh`,因此我们在`__init__()`中增加了 `headbox_file`,并重载了 `_load_annotations()` 来完成数据组织。
+
+如果自定义数据集无法被 `BaseCocoStyleDataset` 支持,你需要直接继承 [MMEngine](https://github.com/open-mmlab/mmengine) 中提供的 `BaseDataset` 基类。具体方法请参考相关[文档](https://mmengine.readthedocs.io/en/latest/advanced_tutorials/basedataset.html)。
+
+### 数据流水线
+
+一个典型的数据流水线配置如下:
+
+```Python
+# pipelines
+train_pipeline = [
+ dict(type='LoadImage'),
+ dict(type='GetBBoxCenterScale'),
+ dict(type='RandomFlip', direction='horizontal'),
+ dict(type='RandomHalfBody'),
+ dict(type='RandomBBoxTransform'),
+ dict(type='TopdownAffine', input_size=codec['input_size']),
+ dict(type='GenerateTarget', encoder=codec),
+ dict(type='PackPoseInputs')
+]
+test_pipeline = [
+ dict(type='LoadImage'),
+ dict(type='GetBBoxCenterScale'),
+ dict(type='TopdownAffine', input_size=codec['input_size']),
+ dict(type='PackPoseInputs')
+]
+```
+
+在关键点检测任务中,数据一般会在三个尺度空间中变换:
+
+- **原始图片空间**:图片存储时的原始空间,不同图片的尺寸不一定相同
+
+- **输入图片空间**:模型输入的图片尺度空间,所有**图片**和**标注**被缩放到输入尺度,如 `256x256`,`256x192` 等
+
+- **输出尺度空间**:模型输出和训练监督信息所在的尺度空间,如`64x64(热力图)`,`1x1(回归坐标值)`等
+
+数据在三个空间中变换的流程如图所示:
+
+
+
+在MMPose中,数据变换所需要的模块在`$MMPOSE/mmpose/datasets/transforms`目录下,它们的工作流程如图所示:
+
+
+
+#### i. 数据增强
+
+数据增强中常用的变换存放在 `$MMPOSE/mmpose/datasets/transforms/common_transforms.py` 中,如 `RandomFlip`、`RandomHalfBody` 等。
+
+对于 top-down 方法,`Shift`、`Rotate`、`Resize` 操作由 `RandomBBoxTransform`来实现;对于 bottom-up 方法,这些则是由 `BottomupRandomAffine` 实现。
+
+```{note}
+值得注意的是,大部分数据变换都依赖于 `bbox_center` 和 `bbox_scale`,它们可以通过 `GetBBoxCenterScale` 来得到。
+```
+
+#### ii. 数据变换
+
+我们使用仿射变换,将图像和坐标标注从原始图片空间变换到输入图片空间。这一操作在 top-down 方法中由 `TopdownAffine` 完成,在 bottom-up 方法中则由 `BottomupRandomAffine` 完成。
+
+#### iii. 数据编码
+
+在模型训练时,数据从原始空间变换到输入图片空间后,需要使用 `GenerateTarget` 来生成训练所需的监督目标(比如用坐标值生成高斯热图),我们将这一过程称为编码(Encode),反之,通过高斯热图得到对应坐标值的过程称为解码(Decode)。
+
+在 MMPose 中,我们将编码和解码过程集合成一个编解码器(Codec),在其中实现 `encode()` 和 `decode()`。
+
+目前 MMPose 支持生成以下类型的监督目标:
+
+- `heatmap`: 高斯热图
+
+- `keypoint_label`: 关键点标签(如归一化的坐标值)
+
+- `keypoint_xy_label`: 单个坐标轴关键点标签
+
+- `heatmap+keypoint_label`: 同时生成高斯热图和关键点标签
+
+- `multiscale_heatmap`: 多尺度高斯热图
+
+生成的监督目标会按以下关键字进行封装:
+
+- `heatmaps`:高斯热图
+
+- `keypoint_labels`:关键点标签(如归一化的坐标值)
+
+- `keypoint_x_labels`:x 轴关键点标签
+
+- `keypoint_y_labels`:y 轴关键点标签
+
+- `keypoint_weights`:关键点权重
+
+```Python
+@TRANSFORMS.register_module()
+class GenerateTarget(BaseTransform):
+ """Encode keypoints into Target.
+
+ Added Keys (depends on the args):
+ - heatmaps
+ - keypoint_labels
+ - keypoint_x_labels
+ - keypoint_y_labels
+ - keypoint_weights
+ """
+```
+
+值得注意的是,我们对 top-down 和 bottom-up 的数据格式进行了统一,这意味着标注信息中会新增一个维度来代表同一张图里的不同目标(如人),格式为:
+
+```Python
+[batch_size, num_instances, num_keypoints, dim_coordinates]
+```
+
+- top-down:`[B, 1, K, D]`
+
+- bottom-up:`[B, N, K, D]`
+
+当前已经支持的编解码器定义在 `$MMPOSE/mmpose/codecs` 目录下,如果你需要自定义新的编解码器,可以前往[编解码器](./user_guides/codecs.md)了解更多详情。
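+
+如果想单独体会编码与解码的过程,也可以手动实例化一个编解码器。下面以 `MSRAHeatmap` 为例给出一个简单示意(参数取值与接口细节仅供参考,请以 `$MMPOSE/mmpose/codecs` 中的源码为准):
+
+```Python
+import numpy as np
+from mmpose.codecs import MSRAHeatmap
+
+codec = MSRAHeatmap(input_size=(192, 256), heatmap_size=(48, 64), sigma=2)
+
+# 输入图片空间下的关键点坐标,形状为 [N, K, 2];可见性形状为 [N, K]
+keypoints = (np.random.rand(1, 17, 2) * [192, 256]).astype(np.float32)
+keypoints_visible = np.ones((1, 17), dtype=np.float32)
+
+encoded = codec.encode(keypoints, keypoints_visible)      # 编码:生成高斯热图等监督目标
+decoded_kpts, scores = codec.decode(encoded['heatmaps'])  # 解码:热图 -> 关键点坐标
+```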
+
+#### iv. 数据打包
+
+数据经过前处理变换后,最终需要通过 `PackPoseInputs` 打包成数据样本。该操作定义在 `$MMPOSE/mmpose/datasets/transforms/formatting.py` 中。
+
+打包过程会将数据流水线中用字典 `results` 存储的数据转换成用 MMPose 所需的标准数据结构, 如 `InstanceData`,`PixelData`,`PoseDataSample` 等。
+
+具体而言,我们将数据样本内容分为 `gt`(标注真值) 和 `pred`(模型预测)两部分,它们都包含以下数据项:
+
+- **instances**(numpy.array):实例级别的原始标注或预测结果,属于原始尺度空间
+
+- **instance_labels**(torch.tensor):实例级别的训练标签(如归一化的坐标值、关键点可见性),属于输出尺度空间
+
+- **fields**(torch.tensor):像素级别的训练标签(如高斯热图)或预测结果,属于输出尺度空间
+
+下面是 `PoseDataSample` 底层实现的例子:
+
+```Python
+def get_pose_data_sample(self):
+ # meta
+ pose_meta = dict(
+ img_shape=(600, 900), # [h, w, c]
+ crop_size=(256, 192), # [h, w]
+ heatmap_size=(64, 48), # [h, w]
+ )
+
+ # gt_instances
+ gt_instances = InstanceData()
+ gt_instances.bboxes = np.random.rand(1, 4)
+ gt_instances.keypoints = np.random.rand(1, 17, 2)
+
+ # gt_instance_labels
+ gt_instance_labels = InstanceData()
+ gt_instance_labels.keypoint_labels = torch.rand(1, 17, 2)
+ gt_instance_labels.keypoint_weights = torch.rand(1, 17)
+
+ # pred_instances
+ pred_instances = InstanceData()
+ pred_instances.keypoints = np.random.rand(1, 17, 2)
+ pred_instances.keypoint_scores = np.random.rand(1, 17)
+
+ # gt_fields
+ gt_fields = PixelData()
+ gt_fields.heatmaps = torch.rand(17, 64, 48)
+
+ # pred_fields
+ pred_fields = PixelData()
+ pred_fields.heatmaps = torch.rand(17, 64, 48)
+ data_sample = PoseDataSample(
+ gt_instances=gt_instances,
+ pred_instances=pred_instances,
+ gt_fields=gt_fields,
+ pred_fields=pred_fields,
+ metainfo=pose_meta)
+
+ return data_sample
+```
+
+## Step3: 模型
+
+在 MMPose 1.0中,模型由以下几部分构成:
+
+- **预处理器(DataPreprocessor)**:完成图像归一化和通道转换等前处理
+
+- **主干网络 (Backbone)**:用于特征提取
+
+- **颈部模块(Neck)**:GAP,FPN 等可选项
+
+- **预测头(Head)**:用于实现核心算法功能和损失函数定义
+
+我们在 `$MMPOSE/models/pose_estimators/base.py` 下为姿态估计模型定义了一个基类 `BasePoseEstimator`,所有的模型(如 `TopdownPoseEstimator`)都需要继承这个基类,并重载对应的方法。
+
+在模型的 `forward()` 方法中提供了三种不同的模式:
+
+- `mode == 'loss'`:返回损失函数计算的结果,用于模型训练
+
+- `mode == 'predict'`:返回输入尺度下的预测结果,用于模型推理
+
+- `mode == 'tensor'`:返回输出尺度下的模型输出,即只进行模型前向传播,用于模型导出
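+
+三种模式的调用方式示意如下(实际训练和测试中由 MMEngine 的执行器自动完成调用,这里的变量名仅作说明):
+
+```Python
+# inputs 为经过前处理的图像张量,data_samples 为打包好的数据样本
+losses = model(inputs, data_samples, mode='loss')    # 训练:返回损失字典
+preds = model(inputs, data_samples, mode='predict')  # 推理:返回输入尺度下的预测结果
+outputs = model(inputs, mode='tensor')               # 导出:只进行前向传播,返回输出尺度下的结果
+```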
+
+开发者需要在 `PoseEstimator` 中按照模型结构调用对应的 `Registry` ,对模块进行实例化。以 top-down 模型为例:
+
+```Python
+@MODELS.register_module()
+class TopdownPoseEstimator(BasePoseEstimator):
+ def __init__(self,
+ backbone: ConfigType,
+ neck: OptConfigType = None,
+ head: OptConfigType = None,
+ train_cfg: OptConfigType = None,
+ test_cfg: OptConfigType = None,
+ data_preprocessor: OptConfigType = None,
+ init_cfg: OptMultiConfig = None):
+ super().__init__(data_preprocessor, init_cfg)
+
+ self.backbone = MODELS.build(backbone)
+
+ if neck is not None:
+ self.neck = MODELS.build(neck)
+
+ if head is not None:
+ self.head = MODELS.build(head)
+```
+
+### 前处理器(DataPreprocessor)
+
+从 MMPose 1.0 开始,我们在模型中添加了新的前处理器模块,用以完成图像归一化、通道顺序变换等操作。这样做的好处是可以利用 GPU 等设备的计算能力加快计算,并使模型在导出和部署时更具完整性。
+
+在配置文件中,一个常见的 `data_preprocessor` 如下:
+
+```Python
+data_preprocessor=dict(
+ type='PoseDataPreprocessor',
+ mean=[123.675, 116.28, 103.53],
+ std=[58.395, 57.12, 57.375],
+ bgr_to_rgb=True),
+```
+
+它会将输入图片的通道顺序从 `bgr` 转换为 `rgb`,并根据 `mean` 和 `std` 进行数据归一化。
+
+### 主干网络(Backbone)
+
+MMPose 实现的主干网络存放在 `$MMPOSE/mmpose/models/backbones` 目录下。
+
+在实际开发中,开发者经常会使用预训练的网络权重进行迁移学习,这能有效提升模型在小数据集上的性能。 在 MMPose 中,只需要在配置文件 `backbone` 的 `init_cfg` 中设置:
+
+```Python
+init_cfg=dict(
+ type='Pretrained',
+ checkpoint='PATH/TO/YOUR_MODEL_WEIGHTS.pth'),
+```
+
+如果你想只加载一个训练好的 checkpoint 的 backbone 部分,你需要指明一下前缀 `prefix`:
+
+```Python
+init_cfg=dict(
+ type='Pretrained',
+ prefix='backbone.',
+ checkpoint='PATH/TO/YOUR_CHECKPOINT.pth'),
+```
+
+其中 `checkpoint` 既可以是本地路径,也可以是下载链接。因此,如果你想使用 Torchvision 提供的预训练模型(比如ResNet50),可以使用:
+
+```Python
+init_cfg=dict(
+ type='Pretrained',
+ checkpoint='torchvision://resnet50')
+```
+
+除了这些常用的主干网络以外,你还可以从 MMClassification 等其他 OpenMMLab 项目中方便地迁移主干网络,它们都遵循同一套配置文件格式,并提供了预训练权重可供使用。
+
+需要强调的是,如果你加入了新的主干网络,需要在模型定义时进行注册:
+
+```Python
+@MODELS.register_module()
+class YourBackbone(BaseBackbone):
+```
+
+同时在 `$MMPOSE/mmpose/models/backbones/__init__.py` 下进行 `import`,并加入到 `__all__` 中,才能被配置文件正确地调用。
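+
+下面是一个 `__init__.py` 的修改示意(`YourBackbone` 与文件名 `your_backbone.py` 均为假设):
+
+```Python
+# $MMPOSE/mmpose/models/backbones/__init__.py
+from .your_backbone import YourBackbone
+
+__all__ = [
+    # ... 已有的主干网络 ...
+    'YourBackbone',
+]
+```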
+
+### 颈部模块(Neck)
+
+颈部模块通常是介于主干网络和预测头之间的模块,在部分模型算法中会用到,常见的颈部模块有:
+
+- Global Average Pooling (GAP)
+
+- Feature Pyramid Networks (FPN)
+
+- Feature Map Processor (FMP)
+
+ `FeatureMapProcessor` 是一个通用的 PyTorch 模块,旨在通过选择、拼接和缩放等非参数变换将主干网络输出的特征图转换成适合预测头的格式。以下是一些操作的配置方式及效果示意图:
+
+ - 选择操作
+
+ ```python
+ neck=dict(type='FeatureMapProcessor', select_index=0)
+ ```
+
+
+
+ - 拼接操作
+
+ ```python
+ neck=dict(type='FeatureMapProcessor', concat=True)
+ ```
+
+
+
+ 拼接之前,其它特征图会被缩放到和序号为 0 的特征图相同的尺寸。
+
+ - 缩放操作
+
+ ```python
+ neck=dict(type='FeatureMapProcessor', scale_factor=2.0)
+ ```
+
+
+
+### 预测头(Head)
+
+通常来说,预测头是模型算法实现的核心,用于控制模型的输出,并进行损失函数计算。
+
+MMPose 中 Head 相关的模块定义在 `$MMPOSE/mmpose/models/heads` 目录下,开发者在自定义预测头时需要继承我们提供的基类 `BaseHead`,并重载以下三个方法对应模型推理的三种模式:
+
+- forward()
+
+- predict()
+
+- loss()
+
+具体而言,`predict()` 返回的应是输入图片尺度下的结果,因此需要调用 `self.decode()` 对网络输出进行解码。这一过程已经在 `BaseHead` 中实现,它会调用编解码器提供的 `decode()` 方法来完成解码。
+
+另一方面,我们会在 `predict()` 中进行测试时增强。在进行预测时,一个常见的测试时增强技巧是进行翻转集成。即,将一张图片先进行一次推理,再将图片水平翻转进行一次推理,推理的结果再次水平翻转回去,对两次推理的结果进行平均。这个技巧能有效提升模型的预测稳定性。
+
+下面是在 `RegressionHead` 中定义 `predict()` 的例子:
+
+```Python
+def predict(self,
+ feats: Tuple[Tensor],
+ batch_data_samples: OptSampleList,
+ test_cfg: ConfigType = {}) -> Predictions:
+ """Predict results from outputs."""
+
+ if test_cfg.get('flip_test', False):
+ # TTA: flip test -> feats = [orig, flipped]
+ assert isinstance(feats, list) and len(feats) == 2
+ flip_indices = batch_data_samples[0].metainfo['flip_indices']
+ input_size = batch_data_samples[0].metainfo['input_size']
+ _feats, _feats_flip = feats
+ _batch_coords = self.forward(_feats)
+ _batch_coords_flip = flip_coordinates(
+ self.forward(_feats_flip),
+ flip_indices=flip_indices,
+ shift_coords=test_cfg.get('shift_coords', True),
+ input_size=input_size)
+ batch_coords = (_batch_coords + _batch_coords_flip) * 0.5
+ else:
+ batch_coords = self.forward(feats) # (B, K, D)
+
+ batch_coords.unsqueeze_(dim=1) # (B, N, K, D)
+    preds = self.decode(batch_coords)
+
+    return preds
+```
+
+`loss()`除了进行损失函数的计算,还会进行 accuracy 等训练时指标的计算,并通过一个字典 `losses` 来传递:
+
+```Python
+# calculate accuracy
+_, avg_acc, _ = keypoint_pck_accuracy(
+ pred=to_numpy(pred_coords),
+ gt=to_numpy(keypoint_labels),
+ mask=to_numpy(keypoint_weights) > 0,
+ thr=0.05,
+ norm_factor=np.ones((pred_coords.size(0), 2), dtype=np.float32))
+
+acc_pose = torch.tensor(avg_acc, device=keypoint_labels.device)
+losses.update(acc_pose=acc_pose)
+```
+
+每个 batch 的数据都打包成了 `batch_data_samples`。以 Regression-based 方法为例,训练所需的归一化的坐标值和关键点权重可以用如下方式获取:
+
+```Python
+keypoint_labels = torch.cat(
+ [d.gt_instance_labels.keypoint_labels for d in batch_data_samples])
+keypoint_weights = torch.cat([
+ d.gt_instance_labels.keypoint_weights for d in batch_data_samples
+])
+```
+
+以下为 `RegressionHead` 中完整的 `loss()` 实现:
+
+```Python
+def loss(self,
+ inputs: Tuple[Tensor],
+ batch_data_samples: OptSampleList,
+ train_cfg: ConfigType = {}) -> dict:
+ """Calculate losses from a batch of inputs and data samples."""
+
+ pred_outputs = self.forward(inputs)
+
+ keypoint_labels = torch.cat(
+ [d.gt_instance_labels.keypoint_labels for d in batch_data_samples])
+ keypoint_weights = torch.cat([
+ d.gt_instance_labels.keypoint_weights for d in batch_data_samples
+ ])
+
+ # calculate losses
+ losses = dict()
+ loss = self.loss_module(pred_outputs, keypoint_labels,
+ keypoint_weights.unsqueeze(-1))
+
+ if isinstance(loss, dict):
+ losses.update(loss)
+ else:
+ losses.update(loss_kpt=loss)
+
+ # calculate accuracy
+ _, avg_acc, _ = keypoint_pck_accuracy(
+ pred=to_numpy(pred_outputs),
+ gt=to_numpy(keypoint_labels),
+ mask=to_numpy(keypoint_weights) > 0,
+ thr=0.05,
+ norm_factor=np.ones((pred_outputs.size(0), 2), dtype=np.float32))
+ acc_pose = torch.tensor(avg_acc, device=keypoint_labels.device)
+ losses.update(acc_pose=acc_pose)
+
+ return losses
+```
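+
+综合来看,一个自定义预测头的骨架大致如下(仅为示意,`YourHead` 为假设的类名,方法签名参照上文 `RegressionHead` 的示例,具体请以 `BaseHead` 源码为准):
+
+```Python
+from mmpose.models.heads.base_head import BaseHead
+from mmpose.registry import MODELS
+
+
+@MODELS.register_module()
+class YourHead(BaseHead):
+    """一个自定义预测头的骨架示意。"""
+
+    def forward(self, feats):
+        """返回输出尺度空间下的网络前向结果。"""
+        raise NotImplementedError
+
+    def predict(self, feats, batch_data_samples, test_cfg={}):
+        """调用 self.decode() 返回输入图片尺度下的预测结果。"""
+        raise NotImplementedError
+
+    def loss(self, feats, batch_data_samples, train_cfg={}):
+        """计算损失,返回 losses 字典。"""
+        raise NotImplementedError
+```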
diff --git a/internlm_langchain/knowledge_base/MMPose/content/how_to_deploy.md b/internlm_langchain/knowledge_base/MMPose/content/how_to_deploy.md
new file mode 100644
index 00000000..b4fead87
--- /dev/null
+++ b/internlm_langchain/knowledge_base/MMPose/content/how_to_deploy.md
@@ -0,0 +1,3 @@
+# How to Deploy MMPose Models
+
+Coming soon.
diff --git a/internlm_langchain/knowledge_base/MMPose/content/implement_new_models.md b/internlm_langchain/knowledge_base/MMPose/content/implement_new_models.md
new file mode 100644
index 00000000..4a10b0c3
--- /dev/null
+++ b/internlm_langchain/knowledge_base/MMPose/content/implement_new_models.md
@@ -0,0 +1,3 @@
+# Implement New Models
+
+Coming soon.
diff --git a/internlm_langchain/knowledge_base/MMPose/content/inference.md b/internlm_langchain/knowledge_base/MMPose/content/inference.md
new file mode 100644
index 00000000..0844bc61
--- /dev/null
+++ b/internlm_langchain/knowledge_base/MMPose/content/inference.md
@@ -0,0 +1,267 @@
+# 使用现有模型进行推理
+
+MMPose为姿态估计提供了大量可以从[模型库](https://mmpose.readthedocs.io/en/latest/model_zoo.html)中找到的预训练模型。本指南将演示**如何执行推理**,即使用训练过的模型对提供的图像或视频运行姿态估计。
+
+有关在标准数据集上测试现有模型的说明,请参阅本指南。
+
+在MMPose,模型由配置文件定义,而其已计算好的参数存储在权重文件(checkpoint file)中。您可以在[模型库](https://mmpose.readthedocs.io/en/latest/model_zoo.html)中找到模型配置文件和相应的权重文件的URL。我们建议从使用HRNet模型的[配置文件](https://github.com/open-mmlab/mmpose/blob/main/configs/body_2d_keypoint/topdown_heatmap/coco/td-hm_hrnet-w32_8xb64-210e_coco-256x192.py)和[权重文件](https://download.openmmlab.com/mmpose/v1/body_2d_keypoint/topdown_heatmap/coco/td-hm_hrnet-w32_8xb64-210e_coco-256x192-81c58e40_20220909.pth)开始。
+
+## 推理器:统一的推理接口
+
+MMPose 提供了一个被称为 `MMPoseInferencer` 的、全面的推理 API。这个 API 使得用户得以使用所有 MMPose 支持的模型来对图像和视频进行推理。此外,该 API 可以自动完成推理结果的可视化,并方便用户保存预测结果。
+
+### 基本用法
+
+`MMPoseInferencer`可以在任何Python程序中被用来执行姿态估计任务。以下是一个在 Python Shell 中使用预训练的人体姿态模型对给定图像进行推理的示例。
+
+```python
+from mmpose.apis import MMPoseInferencer
+
+img_path = 'tests/data/coco/000000000785.jpg'   # 将 img_path 替换为你自己的图片路径
+
+# 使用模型别名创建推断器
+inferencer = MMPoseInferencer('human')
+
+# MMPoseInferencer采用了惰性推断方法,在给定输入时创建一个预测生成器
+result_generator = inferencer(img_path, show=True)
+result = next(result_generator)
+```
+
+如果一切正常,你将在一个新窗口中看到下图:
+
+
+
+`result` 变量是一个包含两个键值 `'visualization'` 和 `'predictions'` 的字典。
+
+- `'visualization'` 键对应的值是一个列表,该列表:
+ - 包含可视化结果,例如输入图像、估计姿态的标记,以及可选的预测热图。
+ - 如果没有指定 `return_vis` 参数,该列表将保持为空。
+- `'predictions'` 键对应的值是:
+ - 一个包含每个检测实例的预估关键点的列表。
+
+`result` 字典的结构如下所示:
+
+```python
+result = {
+ 'visualization': [
+ # 元素数量:batch_size(默认为1)
+ vis_image_1,
+ ...
+ ],
+ 'predictions': [
+ # 每张图像的姿态估计结果
+ # 元素数量:batch_size(默认为1)
+ [
+ # 每个检测到的实例的姿态信息
+ # 元素数量:检测到的实例数
+ {'keypoints': ..., # 实例 1
+ 'keypoint_scores': ...,
+ ...
+ },
+ {'keypoints': ..., # 实例 2
+ 'keypoint_scores': ...,
+ ...
+ },
+ ]
+ ...
+ ]
+}
+```
+
+还可以使用用于推断的**命令行界面工具**(CLI, command-line interface): `demo/inferencer_demo.py`。这个工具允许用户使用以下命令,以相同的模型和输入执行推理:
+
+```shell
+python demo/inferencer_demo.py 'tests/data/coco/000000000785.jpg' \
+ --pose2d 'human' --show --pred-out-dir 'predictions'
+```
+
+预测结果将被保存在 `predictions/000000000785.json` 中。`inferencer_demo.py` 作为命令行工具,其输入参数与 `MMPoseInferencer` API 相同,能够处理一系列输入类型,包括以下内容:
+
+- 图像路径
+
+- 视频路径
+
+- 文件夹路径(这会导致该文件夹中的所有图像都被推断出来)
+
+- 表示图像的 numpy array (在命令行界面工具中未支持)
+
+- 表示图像的 numpy array 列表 (在命令行界面工具中未支持)
+
+- 摄像头(在这种情况下,输入参数应该设置为`webcam`或`webcam:{CAMERA_ID}`)
+
+当输入对应于多个图像时,例如输入为**视频**或**文件夹**路径时,推理生成器必须被遍历,以便推理器对视频/文件夹中的所有帧/图像进行推理。以下是一个示例:
+
+```python
+folder_path = 'tests/data/coco'
+
+result_generator = inferencer(folder_path, show=True)
+results = [result for result in result_generator]
+```
+
+在这个示例中,`inferencer` 接受 `folder_path` 作为输入,并返回一个生成器对象(`result_generator`),用于生成推理结果。通过遍历 `result_generator` 并将每个结果存储在 `results` 列表中,您可以获得视频/文件夹中所有帧/图像的推理结果。
+
+### 自定义姿态估计模型
+
+`MMPoseInferencer`提供了几种可用于自定义所使用的模型的方法:
+
+```python
+# 使用模型别名构建推断器
+inferencer = MMPoseInferencer('human')
+
+# 使用模型配置名构建推断器
+inferencer = MMPoseInferencer('td-hm_hrnet-w32_8xb64-210e_coco-256x192')
+
+# 使用模型配置文件和权重文件的路径或 URL 构建推断器
+inferencer = MMPoseInferencer(
+ pose2d='configs/body_2d_keypoint/topdown_heatmap/coco/' \
+ 'td-hm_hrnet-w32_8xb64-210e_coco-256x192.py',
+ pose2d_weights='https://download.openmmlab.com/mmpose/top_down/' \
+ 'hrnet/hrnet_w32_coco_256x192-c78dce93_20200708.pth'
+)
+```
+
+模型别名的完整列表可以在模型别名部分中找到。
+
+此外,自顶向下的姿态估计器还需要一个对象检测模型。`MMPoseInferencer`能够推断用MMPose支持的数据集训练的模型的实例类型,然后构建必要的对象检测模型。用户也可以通过以下方式手动指定检测模型:
+
+```python
+# 通过别名指定检测模型
+# 可用的别名包括“human”、“hand”、“face”、“animal”、
+# 以及mmdet中定义的任何其他别名
+inferencer = MMPoseInferencer(
+ # 假设姿态估计器是在自定义数据集上训练的
+ pose2d='custom_human_pose_estimator.py',
+ pose2d_weights='custom_human_pose_estimator.pth',
+ det_model='human'
+)
+
+# 使用模型配置名称指定检测模型
+inferencer = MMPoseInferencer(
+ pose2d='human',
+ det_model='yolox_l_8x8_300e_coco',
+ det_cat_ids=[0], # 指定'human'类的类别id
+)
+
+# 使用模型配置文件和权重文件的路径或URL构建推断器
+inferencer = MMPoseInferencer(
+ pose2d='human',
+ det_model=f'{PATH_TO_MMDET}/configs/yolox/yolox_l_8x8_300e_coco.py',
+ det_weights='https://download.openmmlab.com/mmdetection/v2.0/' \
+ 'yolox/yolox_l_8x8_300e_coco/' \
+ 'yolox_l_8x8_300e_coco_20211126_140236-d3bd2b23.pth',
+ det_cat_ids=[0], # 指定'human'类的类别id
+)
+```
+
+### 转储结果
+
+在执行姿态估计推理任务之后,您可能希望保存结果以供进一步分析或处理。本节将指导您将预测的关键点和可视化结果保存到本地。
+
+要将预测保存在JSON文件中,在运行`MMPoseInferencer`的实例`inferencer`时使用`pred_out_dir`参数:
+
+```python
+result_generator = inferencer(img_path, pred_out_dir='predictions')
+result = next(result_generator)
+```
+
+预测结果将以JSON格式保存在`predictions/`文件夹中,每个文件以相应的输入图像或视频的名称命名。
+
+对于更高级的场景,还可以直接从`inferencer`返回的`result`字典中访问预测结果。其中,`predictions`包含输入图像或视频中每个单独实例的预测关键点列表。然后,您可以使用您喜欢的方法操作或存储这些结果。
+
+请记住,如果你想将可视化图像和预测文件保存在一个文件夹中,你可以使用`out_dir`参数:
+
+```python
+result_generator = inferencer(img_path, out_dir='output')
+result = next(result_generator)
+```
+
+在这种情况下,可视化图像将保存在`output/visualization/`文件夹中,而预测结果将存储在`output/predictions/`文件夹中。
+
+### 可视化
+
+推理器`inferencer`可以自动对输入的图像或视频进行预测。可视化结果可以显示在一个新的窗口中,并保存在本地。
+
+要在新窗口中查看可视化结果,请使用以下代码:
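+
+```python
+# 与前文基本用法一致,将 show 参数设置为 True 即可在弹出窗口中显示可视化结果
+result_generator = inferencer(img_path, show=True)
+result = next(result_generator)
+```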
+
+请注意:
+
+- 如果输入视频来自网络摄像头,默认情况下将在新窗口中显示可视化结果,以此让用户看到输入
+
+- 如果平台上没有GUI,这个步骤可能会卡住
+
+要将可视化结果保存在本地,可以像这样指定`vis_out_dir`参数:
+
+```python
+result_generator = inferencer(img_path, vis_out_dir='vis_results')
+result = next(result_generator)
+```
+
+输入图片或视频的可视化预测结果将保存在`vis_results/`文件夹中
+
+在开头展示的滑雪图中,姿态的可视化估计结果由关键点(用实心圆描绘)和骨架(用线条表示)组成。这些视觉元素的默认大小可能不会产生令人满意的结果。用户可以使用`radius`和`thickness`参数来调整圆的大小和线的粗细,如下所示:
+
+```python
+result_generator = inferencer(img_path, show=True, radius=4, thickness=2)
+result = next(result_generator)
+```
+
+### 推理器参数
+
+`MMPoseInferencer`提供了各种自定义姿态估计、可视化和保存预测结果的参数。下面是初始化推断器时可用的参数列表及对这些参数的描述:
+
+| Argument | Description |
+| ---------------- | ------------------------------------------------------------ |
+| `pose2d` | 指定 2D 姿态估计模型的模型别名、配置文件名称或配置文件路径。 |
+| `pose2d_weights` | 指定 2D 姿态估计模型权重文件的URL或本地路径。 |
+| `pose3d` | 指定 3D 姿态估计模型的模型别名、配置文件名称或配置文件路径。 |
+| `pose3d_weights` | 指定 3D 姿态估计模型权重文件的URL或本地路径。 |
+| `det_model` | 指定对象检测模型的模型别名、配置文件名或配置文件路径。 |
+| `det_weights` | 指定对象检测模型权重文件的 URL 或本地路径。 |
+| `det_cat_ids` | 指定与要检测的对象类对应的类别 id 列表。 |
+| `device` | 执行推理的设备。如果为 `None`,推理器将选择最合适的一个。 |
+| `scope` | 定义模型模块的名称空间 |
+
+推理器被设计用于可视化和保存预测。以下表格列出了在使用 `MMPoseInferencer` 进行推断时可用的参数列表,以及它们与 2D 和 3D 推理器的兼容性:
+
+| 参数 | 描述 | 2D | 3D |
+| ------------------------ | -------------------------------------------------------------------------------------------------------------------------- | --- | --- |
+| `show` | 控制是否在弹出窗口中显示图像或视频。 | ✔️ | ✔️ |
+| `radius` | 设置可视化关键点的半径。 | ✔️ | ✔️ |
+| `thickness` | 确定可视化链接的厚度。 | ✔️ | ✔️ |
+| `kpt_thr` | 设置关键点分数阈值。分数超过此阈值的关键点将被显示。 | ✔️ | ✔️ |
+| `draw_bbox` | 决定是否显示实例的边界框。 | ✔️ | ✔️ |
+| `draw_heatmap` | 决定是否绘制预测的热图。 | ✔️ | ❌ |
+| `black_background` | 决定是否在黑色背景上显示预估的姿势。 | ✔️ | ❌ |
+| `skeleton_style` | 设置骨架样式。可选项包括 'mmpose'(默认)和 'openpose'。 | ✔️ | ❌ |
+| `use_oks_tracking` | 决定是否在追踪中使用OKS作为相似度测量。 | ❌ | ✔️ |
+| `tracking_thr` | 设置追踪的相似度阈值。 | ❌ | ✔️ |
+| `norm_pose_2d` | 决定是否将边界框缩放至数据集的平均边界框尺寸,并将边界框移至数据集的平均边界框中心。 | ❌ | ✔️ |
+| `rebase_keypoint_height` | 决定是否将最低关键点的高度置为 0。 | ❌ | ✔️ |
+| `return_vis` | 决定是否在结果中包含可视化图像。 | ✔️ | ✔️ |
+| `vis_out_dir` | 定义保存可视化图像的文件夹路径。如果未设置,将不保存可视化图像。 | ✔️ | ✔️ |
+| `return_datasample` | 决定是否以 `PoseDataSample` 格式返回预测。 | ✔️ | ✔️ |
+| `pred_out_dir` | 指定保存预测的文件夹路径。如果未设置,将不保存预测。 | ✔️ | ✔️ |
+| `out_dir` | 如果 `vis_out_dir` 或 `pred_out_dir` 未设置,它们将分别设置为 `f'{out_dir}/visualization'` 或 `f'{out_dir}/predictions'`。 | ✔️ | ✔️ |
+
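+下面给出一个组合使用上述参数的调用示例(参数取值仅为示意,请按实际需求调整):
+
+```python
+result_generator = inferencer(
+    img_path,
+    show=False,
+    radius=4,
+    thickness=2,
+    kpt_thr=0.3,
+    draw_bbox=True,
+    out_dir='output',
+)
+result = next(result_generator)
+```
+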
+### 模型别名
+
+MMPose为常用模型提供了一组预定义的别名。在初始化 `MMPoseInferencer` 时,这些别名可以用作简略的表达方式,而不是指定完整的模型配置名称。下面是可用的模型别名及其对应的配置名称的列表:
+
+| 别名 | 配置文件名称 | 对应任务 | 姿态估计模型 | 检测模型 |
+| --------- | -------------------------------------------------- | ------------------------------- | ------------- | ------------------- |
+| animal | rtmpose-m_8xb64-210e_ap10k-256x256 | Animal pose estimation | RTMPose-m | RTMDet-m |
+| human | rtmpose-m_8xb256-420e_aic-coco-256x192 | Human pose estimation | RTMPose-m | RTMDet-m |
+| face | rtmpose-m_8xb64-60e_wflw-256x256 | Face keypoint detection | RTMPose-m | yolox-s |
+| hand | rtmpose-m_8xb32-210e_coco-wholebody-hand-256x256 | Hand keypoint detection | RTMPose-m | ssdlite_mobilenetv2 |
+| wholebody | rtmpose-m_8xb64-270e_coco-wholebody-256x192 | Human wholebody pose estimation | RTMPose-m | RTMDet-m |
+| vitpose | td-hm_ViTPose-base-simple_8xb64-210e_coco-256x192 | Human pose estimation | ViTPose-base | RTMDet-m |
+| vitpose-s | td-hm_ViTPose-small-simple_8xb64-210e_coco-256x192 | Human pose estimation | ViTPose-small | RTMDet-m |
+| vitpose-b | td-hm_ViTPose-base-simple_8xb64-210e_coco-256x192 | Human pose estimation | ViTPose-base | RTMDet-m |
+| vitpose-l | td-hm_ViTPose-large-simple_8xb64-210e_coco-256x192 | Human pose estimation | ViTPose-large | RTMDet-m |
+| vitpose-h | td-hm_ViTPose-huge-simple_8xb64-210e_coco-256x192 | Human pose estimation | ViTPose-huge | RTMDet-m |
+
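+例如,可以直接用别名构建推理器,别名会被解析为上表中对应的配置文件与权重(以下仅为示意):
+
+```python
+inferencer = MMPoseInferencer(pose2d='vitpose')
+```
+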
+此外,用户可以使用命令行界面工具显示所有可用的别名,使用以下命令:
+
+```shell
+python demo/inferencer_demo.py --show-alias
+```
diff --git a/internlm_langchain/knowledge_base/MMPose/content/installation.md b/internlm_langchain/knowledge_base/MMPose/content/installation.md
new file mode 100644
index 00000000..ef515c80
--- /dev/null
+++ b/internlm_langchain/knowledge_base/MMPose/content/installation.md
@@ -0,0 +1,248 @@
+# 安装
+
+我们推荐用户按照我们的最佳实践来安装 MMPose。但除此之外,如果您想根据
+您的习惯完成安装流程,也可以参见 [自定义安装](#自定义安装) 一节来获取更多信息。
+
+- [安装](#安装)
+ - [依赖环境](#依赖环境)
+ - [最佳实践](#最佳实践)
+ - [从源码安装 MMPose](#从源码安装-mmpose)
+ - [作为 Python 包安装](#作为-python-包安装)
+ - [验证安装](#验证安装)
+ - [自定义安装](#自定义安装)
+ - [CUDA 版本](#cuda-版本)
+ - [不使用 MIM 安装 MMEngine](#不使用-mim-安装-mmengine)
+ - [在 CPU 环境中安装](#在-cpu-环境中安装)
+ - [在 Google Colab 中安装](#在-google-colab-中安装)
+ - [通过 Docker 使用 MMPose](#通过-docker-使用-mmpose)
+ - [故障解决](#故障解决)
+
+## 依赖环境
+
+在本节中,我们将演示如何准备 PyTorch 相关的依赖环境。
+
+MMPose 适用于 Linux、Windows 和 macOS。它需要 Python 3.7+、CUDA 9.2+ 和 PyTorch 1.8+。
+
+如果您对配置 PyTorch 环境已经很熟悉,并且已经完成了配置,可以直接进入下一节:[最佳实践](#最佳实践)。否则,请依照以下步骤完成配置。
+
+**第 1 步** 从[官网](https://docs.conda.io/en/latest/miniconda.html) 下载并安装 Miniconda。
+
+**第 2 步** 创建一个 conda 虚拟环境并激活它。
+
+```shell
+conda create --name openmmlab python=3.8 -y
+conda activate openmmlab
+```
+
+**第 3 步** 按照[官方指南](https://pytorch.org/get-started/locally/) 安装 PyTorch。例如:
+
+在 GPU 平台:
+
+```shell
+conda install pytorch torchvision -c pytorch
+```
+
+```{warning}
+以上命令会自动安装最新版的 PyTorch 与对应的 cudatoolkit,请检查它们是否与您的环境匹配。
+```
+
+在 CPU 平台:
+
+```shell
+conda install pytorch torchvision cpuonly -c pytorch
+```
+
+**第 4 步** 使用 [MIM](https://github.com/open-mmlab/mim) 安装 [MMEngine](https://github.com/open-mmlab/mmengine) 和 [MMCV](https://github.com/open-mmlab/mmcv/tree/2.x)
+
+```shell
+pip install -U openmim
+mim install mmengine
+mim install "mmcv>=2.0.1"
+```
+
+请注意,MMPose 中的一些推理示例脚本需要使用 [MMDetection](https://github.com/open-mmlab/mmdetection) (mmdet) 检测人体。如果您想运行这些示例脚本,可以通过运行以下命令安装 mmdet:
+
+```shell
+mim install "mmdet>=3.1.0"
+```
+
+## 最佳实践
+
+根据具体需求,我们支持两种安装模式: 从源码安装(推荐)和作为 Python 包安装
+
+### 从源码安装(推荐)
+
+如果基于 MMPose 框架开发自己的任务,需要添加新的功能,比如新的模型或是数据集,或者使用我们提供的各种工具。从源码按如下方式安装 mmpose:
+
+```shell
+git clone https://github.com/open-mmlab/mmpose.git
+cd mmpose
+pip install -r requirements.txt
+pip install -v -e .
+# "-v" 表示输出更多安装相关的信息
+# "-e" 表示以可编辑形式安装,这样可以在不重新安装的情况下,让本地修改直接生效
+```
+
+### 作为 Python 包安装
+
+如果只是希望调用 MMPose 的接口,或者在自己的项目中导入 MMPose 中的模块。直接使用 mim 安装即可。
+
+```shell
+mim install "mmpose>=1.1.0"
+```
+
+## 验证安装
+
+为了验证 MMPose 是否安装正确,您可以通过以下步骤运行模型推理。
+
+**第 1 步** 我们需要下载配置文件和模型权重文件
+
+```shell
+mim download mmpose --config td-hm_hrnet-w48_8xb32-210e_coco-256x192 --dest .
+```
+
+下载过程往往需要几秒或更多的时间,这取决于您的网络环境。完成之后,您会在当前目录下找到这两个文件:`td-hm_hrnet-w48_8xb32-210e_coco-256x192.py` 和 `hrnet_w48_coco_256x192-b9e0b3ab_20200708.pth`, 分别是配置文件和对应的模型权重文件。
+
+**第 2 步** 验证推理示例
+
+如果您是**从源码安装**的 mmpose,可以直接运行以下命令进行验证:
+
+```shell
+python demo/image_demo.py \
+ tests/data/coco/000000000785.jpg \
+ td-hm_hrnet-w48_8xb32-210e_coco-256x192.py \
+ hrnet_w48_coco_256x192-b9e0b3ab_20200708.pth \
+ --out-file vis_results.jpg \
+ --draw-heatmap
+```
+
+如果一切顺利,您将会得到这样的可视化结果:
+
+
+
+代码会将预测的关键点和热图绘制在图像中的人体上,并保存到当前文件夹下的 `vis_results.jpg`。
+
+如果您是**作为 Python 包安装**,可以打开您的 Python 解释器,复制并粘贴如下代码:
+
+```python
+from mmpose.apis import inference_topdown, init_model
+from mmpose.utils import register_all_modules
+
+register_all_modules()
+
+config_file = 'td-hm_hrnet-w48_8xb32-210e_coco-256x192.py'
+checkpoint_file = 'hrnet_w48_coco_256x192-b9e0b3ab_20200708.pth'
+model = init_model(config_file, checkpoint_file, device='cpu') # or device='cuda:0'
+
+# 请准备好一张带有人体的图片
+results = inference_topdown(model, 'demo.jpg')
+```
+
+示例图片 `demo.jpg` 可以从 [Github](https://raw.githubusercontent.com/open-mmlab/mmpose/main/tests/data/coco/000000000785.jpg) 下载。
+推理结果是一个 `PoseDataSample` 列表,预测结果将会保存在 `pred_instances` 中,包括检测到的关键点位置和置信度。
+
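+下面的代码片段演示了如何读取上述推理结果中的关键点(字段名与数组形状以 `PoseDataSample` 的实际定义为准,仅作示意):
+
+```python
+pred = results[0].pred_instances
+print(pred.keypoints.shape)        # 形如 (实例数, 关键点数, 2)
+print(pred.keypoint_scores.shape)  # 形如 (实例数, 关键点数)
+```
+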
+## 自定义安装
+
+### CUDA 版本
+
+安装 PyTorch 时,需要指定 CUDA 版本。如果您不清楚选择哪个,请遵循我们的建议:
+
+- 对于 Ampere 架构的 NVIDIA GPU,例如 GeForce 30 系列 以及 NVIDIA A100,CUDA 11 是必需的。
+- 对于更早的 NVIDIA GPU,CUDA 11 是向后兼容 (backward compatible) 的,但 CUDA 10.2 能够提供更好的兼容性,也更加轻量。
+
+请确保您的 GPU 驱动版本满足最低的版本需求,参阅[这张表](https://docs.nvidia.com/cuda/cuda-toolkit-release-notes/index.html#cuda-major-component-versions__table-cuda-toolkit-driver-versions)。
+
+```{note}
+如果按照我们的最佳实践进行安装,CUDA 运行时库就足够了,因为我们提供相关 CUDA 代码的预编译,您不需要进行本地编译。
+但如果您希望从源码进行 MMCV 的编译,或是进行其他 CUDA 算子的开发,那么就必须安装完整的 CUDA 工具链,参见
+[NVIDIA 官网](https://developer.nvidia.com/cuda-downloads),另外还需要确保该 CUDA 工具链的版本与 PyTorch 安装时
+的配置相匹配(如用 `conda install` 安装 PyTorch 时指定的 cudatoolkit 版本)。
+```
+
+### 不使用 MIM 安装 MMEngine
+
+若不使用 mim 安装 MMEngine,请遵循 [MMEngine 安装指南](https://mmengine.readthedocs.io/zh_CN/latest/get_started/installation.html)。
+
+例如,您可以通过以下命令安装 MMEngine:
+
+```shell
+pip install mmengine
+```
+
+### 不使用 MIM 安装 MMCV
+
+MMCV 包含 C++ 和 CUDA 扩展,因此其对 PyTorch 的依赖比较复杂。MIM 会自动解析这些
+依赖,选择合适的 MMCV 预编译包,使安装更简单,但它并不是必需的。
+
+若不使用 mim 来安装 MMCV,请遵照 [MMCV 安装指南](https://mmcv.readthedocs.io/zh_CN/2.x/get_started/installation.html)。
+它需要您用指定 url 的形式手动指定对应的 PyTorch 和 CUDA 版本。
+
+举个例子,如下命令将会安装基于 PyTorch 1.10.x 和 CUDA 11.3 编译的 mmcv。
+
+```shell
+pip install 'mmcv>=2.0.1' -f https://download.openmmlab.com/mmcv/dist/cu113/torch1.10/index.html
+```
+
+### 在 CPU 环境中安装
+
+MMPose 可以仅在 CPU 环境中安装,在 CPU 模式下,您可以完成训练、测试和模型推理等所有操作。
+
+在 CPU 模式下,MMCV 的部分功能将不可用,通常是一些 GPU 编译的算子,如 `Deformable Convolution`。MMPose 中大部分的模型都不会依赖这些算子,但是如果您尝试使用包含这些算子的模型来运行训练、测试或推理,将会报错。
+
+### 在 Google Colab 中安装
+
+[Google Colab](https://colab.research.google.com/) 通常已经包含了 PyTorch 环境,因此我们只需要安装 MMEngine, MMCV 和 MMPose 即可,命令如下:
+
+**第 1 步** 使用 [MIM](https://github.com/open-mmlab/mim) 安装 [MMEngine](https://github.com/open-mmlab/mmengine) 和 [MMCV](https://github.com/open-mmlab/mmcv/tree/2.x)
+
+```shell
+!pip3 install openmim
+!mim install mmengine
+!mim install "mmcv>=2.0.1"
+```
+
+**第 2 步** 从源码安装 mmpose
+
+```shell
+!git clone https://github.com/open-mmlab/mmpose.git
+%cd mmpose
+!pip install -e .
+```
+
+**第 3 步** 验证
+
+```python
+import mmpose
+print(mmpose.__version__)
+# 预期输出: 1.1.0
+```
+
+```{note}
+在 Jupyter 中,感叹号 `!` 用于执行外部命令,而 `%cd` 是一个[魔术命令](https://ipython.readthedocs.io/en/stable/interactive/magics.html#magic-cd),用于切换 Python 的工作路径。
+```
+
+### 通过 Docker 使用 MMPose
+
+MMPose 提供 [Dockerfile](https://github.com/open-mmlab/mmpose/blob/master/docker/Dockerfile)
+用于构建镜像。请确保您的 [Docker 版本](https://docs.docker.com/engine/install/) >=19.03。
+
+```shell
+# 构建默认的 PyTorch 1.8.0,CUDA 10.1 版本镜像
+# 如果您希望使用其他版本,请修改 Dockerfile
+docker build -t mmpose docker/
+```
+
+**注意**:请确保您已经安装了 [nvidia-container-toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html#docker)。
+
+用以下命令运行 Docker 镜像:
+
+```shell
+docker run --gpus all --shm-size=8g -it -v {DATA_DIR}:/mmpose/data mmpose
+```
+
+`{DATA_DIR}` 是您本地存放用于 MMPose 训练、测试、推理等流程的数据目录。
+
+## 故障解决
+
+如果您在安装过程中遇到了什么问题,请先查阅[常见问题](faq.md)。如果没有找到解决方法,可以在 GitHub
+上[提出 issue](https://github.com/open-mmlab/mmpose/issues/new/choose)。
diff --git a/internlm_langchain/knowledge_base/MMPose/content/label_studio.md b/internlm_langchain/knowledge_base/MMPose/content/label_studio.md
new file mode 100644
index 00000000..94cbd641
--- /dev/null
+++ b/internlm_langchain/knowledge_base/MMPose/content/label_studio.md
@@ -0,0 +1,76 @@
+# Label Studio 标注工具转COCO脚本
+
+[Label Studio](https://labelstud.io/) 是一款广受欢迎的深度学习标注工具,可以对多种任务进行标注,然而对于关键点标注,Label Studio 无法直接导出成 MMPose 所需要的 COCO 格式。本文将介绍如何使用Label Studio 标注关键点数据,并利用 [labelstudio2coco.py](../../../tools/dataset_converters/labelstudio2coco.py) 工具将其转换为训练所需的格式。
+
+## Label Studio 标注要求
+
+根据 COCO 格式的要求,每个标注的实例中都需要包含关键点、分割和 bbox 的信息,然而 Label Studio 在标注时会将这些信息分散在不同的实例中,因此需要按一定规则进行标注,才能正常使用后续的脚本。
+
+1. 标签接口设置
+
+对于一个新建的 Label Studio 项目,首先要设置它的标签接口。这里需要有三种类型的标注:`KeyPointLabels`、`PolygonLabels`、`RectangleLabels`,分别对应 COCO 格式中的`keypoints`、`segmentation`、`bbox`。以下是一个标签接口的示例,可以在项目的`Settings`中找到`Labeling Interface`,点击`Code`,粘贴使用该示例。
+
+```xml
+<View>
+  <!-- 以下仅为一个示意配置:标签的 value 取值与关键点数量需按实际数据集调整 -->
+  <KeyPointLabels name="kp-1" toName="img-1">
+    <Label value="person" background="#D4380D"/>
+  </KeyPointLabels>
+  <PolygonLabels name="polygonlabel" toName="img-1">
+    <Label value="person" background="#0DA39E"/>
+  </PolygonLabels>
+  <RectangleLabels name="label" toName="img-1">
+    <Label value="person" background="#DDA0EE"/>
+  </RectangleLabels>
+  <Image name="img-1" value="$img"/>
+</View>
+```
+
+2. 标注顺序
+
+由于需要将多个标注实例中的不同类型标注组合到一个实例中,因此采取了按特定顺序标注的方式,以此来判断各标注是否位于同一个实例。标注时须按照 **KeyPointLabels -> PolygonLabels/RectangleLabels** 的顺序标注,其中 KeyPointLabels 的顺序和数量要与 MMPose 配置文件中的`dataset_info`的关键点顺序和数量一致, PolygonLabels 和 RectangleLabels 的标注顺序可以互换,且可以只标注其中一个,只要保证一个实例的标注中,以关键点开始,以非关键点结束即可。下图为标注的示例:
+
+*注:bbox 和 area 会根据靠后的 PolygonLabels/RectangleLabels 来计算,如若先标 PolygonLabels,那么bbox会是靠后的 RectangleLabels 的范围,面积为矩形的面积,反之则是多边形外接矩形和多边形的面积*
+
+
+
+3. 导出标注
+
+上述标注完成后,需要将标注进行导出。选择项目界面的`Export`按钮,选择`JSON`格式,再点击`Export`即可下载包含标签的 JSON 格式文件。
+
+*注:上述文件中仅仅包含标签,不包含原始图片,因此需要额外提供标注对应的图片。由于 Label Studio 会对过长的文件名进行截断,因此不建议直接使用上传的文件,而是使用`Export`功能中的导出 COCO 格式工具,使用压缩包内的图片文件夹。*
+
+
+
+## 转换工具脚本的使用
+
+转换工具脚本位于`tools/dataset_converters/labelstudio2coco.py`,使用方式如下:
+
+```bash
+python tools/dataset_converters/labelstudio2coco.py config.xml project-1-at-2023-05-13-09-22-91b53efa.json output/result.json
+```
+
+其中`config.xml`的内容为标签接口设置中提到的`Labeling Interface`中的`Code`,`project-1-at-2023-05-13-09-22-91b53efa.json`即为导出标注时导出的 Label Studio 格式的 JSON 文件,`output/result.json`为转换后得到的 COCO 格式的 JSON 文件路径,若路径不存在,该脚本会自动创建路径。
+
+随后,将图片的文件夹放置在输出目录下,即可完成 COCO 数据集的转换。目录结构示例如下:
+
+```bash
+.
+├── images
+│ ├── 38b480f2.jpg
+│ └── aeb26f04.jpg
+└── result.json
+
+```
+
+若想在 MMPose 中使用该数据集,可以进行类似如下的修改:
+
+```python
+dataset=dict(
+ type=dataset_type,
+ data_root=data_root,
+ data_mode=data_mode,
+ ann_file='result.json',
+ data_prefix=dict(img='images/'),
+ pipeline=train_pipeline,
+)
+```
diff --git a/internlm_langchain/knowledge_base/MMPose/content/migration.md b/internlm_langchain/knowledge_base/MMPose/content/migration.md
new file mode 100644
index 00000000..9a591dfc
--- /dev/null
+++ b/internlm_langchain/knowledge_base/MMPose/content/migration.md
@@ -0,0 +1,201 @@
+# MMPose 0.X 兼容性说明
+
+MMPose 1.0 经过了大规模重构并解决了许多遗留问题,对于 0.x 版本的大部分代码 MMPose 1.0 将不兼容。
+
+## 数据变换
+
+### 平移、旋转和缩放
+
+旧版的数据变换方法 `TopDownRandomShiftBboxCenter` 和 `TopDownGetRandomScaleRotation`,将被合并为 `RandomBBoxTransform`:
+
+```Python
+@TRANSFORMS.register_module()
+class RandomBBoxTransform(BaseTransform):
+ r"""Randomly shift, resize and rotate the bounding boxes.
+
+ Required Keys:
+
+ - bbox_center
+ - bbox_scale
+
+ Modified Keys:
+
+ - bbox_center
+ - bbox_scale
+
+ Added Keys:
+ - bbox_rotation
+
+ Args:
+ shift_factor (float): Randomly shift the bbox in range
+ :math:`[-dx, dx]` and :math:`[-dy, dy]` in X and Y directions,
+ where :math:`dx(y) = x(y)_scale \cdot shift_factor` in pixels.
+ Defaults to 0.16
+ shift_prob (float): Probability of applying random shift. Defaults to
+ 0.3
+ scale_factor (Tuple[float, float]): Randomly resize the bbox in range
+ :math:`[scale_factor[0], scale_factor[1]]`. Defaults to (0.5, 1.5)
+ scale_prob (float): Probability of applying random resizing. Defaults
+ to 1.0
+ rotate_factor (float): Randomly rotate the bbox in
+ :math:`[-rotate_factor, rotate_factor]` in degrees. Defaults
+ to 80.0
+ rotate_prob (float): Probability of applying random rotation. Defaults
+ to 0.6
+ """
+
+ def __init__(self,
+ shift_factor: float = 0.16,
+ shift_prob: float = 0.3,
+ scale_factor: Tuple[float, float] = (0.5, 1.5),
+ scale_prob: float = 1.0,
+ rotate_factor: float = 80.0,
+ rotate_prob: float = 0.6) -> None:
+```
+
+### 标签生成
+
+旧版用于训练标签生成的方法 `TopDownGenerateTarget` 、`TopDownGenerateTargetRegression`、`BottomUpGenerateHeatmapTarget`、`BottomUpGenerateTarget` 等将被合并为 `GenerateTarget`,而实际的生成方法由[编解码器](./user_guides/codecs.md) 提供:
+
+```Python
+@TRANSFORMS.register_module()
+class GenerateTarget(BaseTransform):
+ """Encode keypoints into Target.
+
+ The generated target is usually the supervision signal of the model
+ learning, e.g. heatmaps or regression labels.
+
+ Required Keys:
+
+ - keypoints
+ - keypoints_visible
+ - dataset_keypoint_weights
+
+ Added Keys:
+
+ - The keys of the encoded items from the codec will be updated into
+ the results, e.g. ``'heatmaps'`` or ``'keypoint_weights'``. See
+ the specific codec for more details.
+
+ Args:
+ encoder (dict | list[dict]): The codec config for keypoint encoding.
+ Both single encoder and multiple encoders (given as a list) are
+ supported
+ multilevel (bool): Determine the method to handle multiple encoders.
+ If ``multilevel==True``, generate multilevel targets from a group
+ of encoders of the same type (e.g. multiple :class:`MSRAHeatmap`
+ encoders with different sigma values); If ``multilevel==False``,
+ generate combined targets from a group of different encoders. This
+ argument will have no effect in case of single encoder. Defaults
+ to ``False``
+ use_dataset_keypoint_weights (bool): Whether use the keypoint weights
+ from the dataset meta information. Defaults to ``False``
+ """
+
+ def __init__(self,
+ encoder: MultiConfig,
+ multilevel: bool = False,
+ use_dataset_keypoint_weights: bool = False) -> None:
+```
+
+### 数据归一化
+
+旧版的数据归一化操作 `NormalizeTensor` 和 `ToTensor` 方法将由 **DataPreprocessor** 模块替代,不再作为流水线的一部分,而是作为模块加入到模型前向传播中。
+
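+下面给出一个在模型配置中通过 `data_preprocessor` 完成归一化的示意写法(均值/方差为常用的 ImageNet 统计量,模块名与具体取值请以实际模型配置为准):
+
+```python
+model = dict(
+    type='TopdownPoseEstimator',
+    data_preprocessor=dict(
+        type='PoseDataPreprocessor',
+        mean=[123.675, 116.28, 103.53],
+        std=[58.395, 57.12, 57.375],
+        bgr_to_rgb=True),
+    # 其余模型配置省略
+)
+```
+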
+## 模型兼容
+
+我们对 model zoo 提供的模型权重进行了兼容性处理,确保相同的模型权重测试精度能够与 0.x 版本保持同等水平,但由于在这两个版本中存在大量处理细节的差异,推理结果可能会产生轻微的不同(精度误差小于 0.05%)。
+
+对于使用 0.x 版本训练保存的模型权重,我们在预测头中提供了 `_load_state_dict_pre_hook()` 方法来将旧版的权重字典替换为新版,如果你希望将在旧版上开发的模型兼容到新版,可以参考我们的实现。
+
+```Python
+@MODELS.register_module()
+class YourHead(BaseHead):
+def __init__(self):
+
+ ## omitted
+
+ # Register the hook to automatically convert old version state dicts
+ self._register_load_state_dict_pre_hook(self._load_state_dict_pre_hook)
+```
+
+### Heatmap-based 方法
+
+对于基于SimpleBaseline方法的模型,主要需要注意最后一层卷积层的兼容:
+
+```Python
+def _load_state_dict_pre_hook(self, state_dict, prefix, local_meta, *args,
+ **kwargs):
+ version = local_meta.get('version', None)
+
+ if version and version >= self._version:
+ return
+
+ # convert old-version state dict
+ keys = list(state_dict.keys())
+ for _k in keys:
+ if not _k.startswith(prefix):
+ continue
+ v = state_dict.pop(_k)
+ k = _k[len(prefix):]
+ # In old version, "final_layer" includes both intermediate
+ # conv layers (new "conv_layers") and final conv layers (new
+ # "final_layer").
+ #
+ # If there is no intermediate conv layer, old "final_layer" will
+ # have keys like "final_layer.xxx", which should be still
+ # named "final_layer.xxx";
+ #
+ # If there are intermediate conv layers, old "final_layer" will
+ # have keys like "final_layer.n.xxx", where the weights of the last
+ # one should be renamed "final_layer.xxx", and others should be
+ # renamed "conv_layers.n.xxx"
+ k_parts = k.split('.')
+ if k_parts[0] == 'final_layer':
+ if len(k_parts) == 3:
+ assert isinstance(self.conv_layers, nn.Sequential)
+ idx = int(k_parts[1])
+ if idx < len(self.conv_layers):
+ # final_layer.n.xxx -> conv_layers.n.xxx
+ k_new = 'conv_layers.' + '.'.join(k_parts[1:])
+ else:
+ # final_layer.n.xxx -> final_layer.xxx
+ k_new = 'final_layer.' + k_parts[2]
+ else:
+ # final_layer.xxx remains final_layer.xxx
+ k_new = k
+ else:
+ k_new = k
+
+ state_dict[prefix + k_new] = v
+```
+
+### RLE-based 方法
+
+对于基于 RLE 的模型,由于新版的 `loss` 模块更名为 `loss_module`,且 flow 模型归属在 `loss` 模块下,因此需要对权重字典中 `loss` 字段进行更改:
+
+```Python
+def _load_state_dict_pre_hook(self, state_dict, prefix, local_meta, *args,
+ **kwargs):
+
+ version = local_meta.get('version', None)
+
+ if version and version >= self._version:
+ return
+
+ # convert old-version state dict
+ keys = list(state_dict.keys())
+ for _k in keys:
+ v = state_dict.pop(_k)
+ k = _k.lstrip(prefix)
+ # In old version, "loss" includes the instances of loss,
+ # now it should be renamed "loss_module"
+ k_parts = k.split('.')
+ if k_parts[0] == 'loss':
+ # loss.xxx -> loss_module.xxx
+ k_new = prefix + 'loss_module.' + '.'.join(k_parts[1:])
+ else:
+ k_new = _k
+
+ state_dict[k_new] = v
+```
diff --git a/internlm_langchain/knowledge_base/MMPose/content/mixed_datasets.md b/internlm_langchain/knowledge_base/MMPose/content/mixed_datasets.md
new file mode 100644
index 00000000..fac38e33
--- /dev/null
+++ b/internlm_langchain/knowledge_base/MMPose/content/mixed_datasets.md
@@ -0,0 +1,159 @@
+# 混合数据集训练
+
+MMPose 提供了一个灵活、便捷的工具 `CombinedDataset` 来进行混合数据集训练。它作为一个封装器,可以包含多个子数据集,并将来自不同子数据集的数据转换成一个统一的格式,以用于模型训练。使用 `CombinedDataset` 的数据处理流程如下图所示。
+
+
+
+本篇教程的后续部分将通过一个结合 COCO 和 AI Challenger (AIC) 数据集的例子详细介绍如何配置 `CombinedDataset`。
+
+## COCO & AIC 数据集混合案例
+
+COCO 和 AIC 都是 2D 人体姿态数据集。但是,这两个数据集在关键点的数量和排列顺序上有所不同。下面是分别来自这两个数据集的图片及关键点:
+
+
+
+有些关键点(例如“左手”)在两个数据集中都有定义,但它们具有不同的序号。具体来说,“左手”关键点在 COCO 数据集中的序号为 9,在 AIC 数据集中的序号为 5。此外,每个数据集还包含一些另一个数据集中不存在的独有关键点。例如,面部关键点(序号为 0〜4)仅在 COCO 数据集中定义,而“头顶”(序号为 12)和“颈部”(序号为 13)关键点仅在 AIC 数据集中存在。以下的维恩图显示了两个数据集中关键点之间的关系。
+
+
+
+接下来,我们会介绍两种混合数据集的方式:
+
+- [将 AIC 合入 COCO 数据集](#将-aic-合入-coco-数据集)
+- [合并 AIC 和 COCO 数据集](#合并-aic-和-coco-数据集)
+
+### 将 AIC 合入 COCO 数据集
+
+如果用户想提高其模型在 COCO 或类似数据集上的性能,可以将 AIC 数据集作为辅助数据。此时应该仅选择 AIC 数据集中与 COCO 数据集共享的关键点,忽略其余关键点。此外,还需要将这些被选择的关键点在 AIC 数据集中的序号进行转换,以匹配在 COCO 数据集中对应关键点的序号。
+
+
+
+在这种情况下,来自 COCO 的数据不需要进行转换。此时 COCO 数据集可通过如下方式配置:
+
+```python
+dataset_coco = dict(
+ type='CocoDataset',
+ data_root='data/coco/',
+ ann_file='annotations/person_keypoints_train2017.json',
+ data_prefix=dict(img='train2017/'),
+ pipeline=[], # `pipeline` 应为空列表,因为 COCO 数据不需要转换
+)
+```
+
+对于 AIC 数据集,需要转换关键点的顺序。MMPose 提供了一个 `KeypointConverter` 转换器来实现这一点。以下是配置 AIC 子数据集的示例:
+
+```python
+dataset_aic = dict(
+ type='AicDataset',
+ data_root='data/aic/',
+ ann_file='annotations/aic_train.json',
+ data_prefix=dict(img='ai_challenger_keypoint_train_20170902/'
+ 'keypoint_train_images_20170902/'),
+ pipeline=[
+ dict(
+ type='KeypointConverter',
+ num_keypoints=17, # 与 COCO 数据集关键点数一致
+ mapping=[ # 需要列出所有待转换关键点的序号
+ (0, 6), # 0 (AIC 中的序号) -> 6 (COCO 中的序号)
+ (1, 8),
+ (2, 10),
+ (3, 5),
+ (4, 7),
+ (5, 9),
+ (6, 12),
+ (7, 14),
+ (8, 16),
+ (9, 11),
+ (10, 13),
+ (11, 15),
+ ])
+ ],
+)
+```
+
+`KeypointConverter` 会将原序号在 0 到 11 之间的关键点的序号转换为在 5 到 16 之间的对应序号。同时,在 AIC 中序号为 12 和 13 的关键点将被删除。另外,目标序号在 0 到 4 之间的关键点在 `mapping` 参数中没有定义,这些点将被设为不可见,并且不会在训练中使用。
+
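+下面用一小段 NumPy 代码示意这一转换的效果(仅为概念演示,并非 `KeypointConverter` 的实际实现):
+
+```python
+import numpy as np
+
+mapping = [(0, 6), (1, 8), (2, 10), (3, 5), (4, 7), (5, 9),
+           (6, 12), (7, 14), (8, 16), (9, 11), (10, 13), (11, 15)]
+
+aic_kpts = np.random.rand(14, 2)   # 某个实例的 14 个 AIC 关键点(示例数据)
+coco_kpts = np.zeros((17, 2))      # 目标:COCO 的 17 点布局
+coco_visible = np.zeros(17)        # 未被映射到的点保持不可见
+
+for src, dst in mapping:
+    coco_kpts[dst] = aic_kpts[src]
+    coco_visible[dst] = 1
+```
+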
+子数据集都完成配置后, 混合数据集 `CombinedDataset` 可以通过如下方式配置:
+
+```python
+dataset = dict(
+ type='CombinedDataset',
+ # 混合数据集关键点顺序和 COCO 数据集相同,
+ # 所以使用 COCO 数据集的描述信息
+ metainfo=dict(from_file='configs/_base_/datasets/coco.py'),
+ datasets=[dataset_coco, dataset_aic],
+ # `train_pipeline` 包含了常用的数据预处理,
+ # 比如图片读取、数据增广等
+ pipeline=train_pipeline,
+)
+```
+
+MMPose 提供了一份完整的 [配置文件](https://github.com/open-mmlab/mmpose/blob/dev-1.x/configs/body_2d_keypoint/topdown_heatmap/coco/td-hm_hrnet-w32_8xb64-210e_coco-aic-256x192-merge.py) 来将 AIC 合入 COCO 数据集并用于训练网络。用户可以查阅这个文件以获取更多细节,或者参考这个文件来构建新的混合数据集。
+
+### 合并 AIC 和 COCO 数据集
+
+将 AIC 合入 COCO 数据集的过程中丢弃了部分 AIC 数据集中的标注信息。如果用户想要使用两个数据集中的所有信息,可以将两个数据集合并,即在两个数据集中取关键点的并集。
+
+
+
+在这种情况下,COCO 和 AIC 数据集都需要使用 `KeypointConverter` 来调整它们关键点的顺序:
+
+```python
+dataset_coco = dict(
+ type='CocoDataset',
+ data_root='data/coco/',
+ ann_file='annotations/person_keypoints_train2017.json',
+ data_prefix=dict(img='train2017/'),
+ pipeline=[
+ dict(
+ type='KeypointConverter',
+ num_keypoints=19, # 并集中有 19 个关键点
+ mapping=[
+ (0, 0),
+ (1, 1),
+ # 省略
+ (16, 16),
+ ])
+ ])
+
+dataset_aic = dict(
+ type='AicDataset',
+ data_root='data/aic/',
+ ann_file='annotations/aic_train.json',
+ data_prefix=dict(img='ai_challenger_keypoint_train_20170902/'
+ 'keypoint_train_images_20170902/'),
+ pipeline=[
+ dict(
+ type='KeypointConverter',
+ num_keypoints=19, # 并集中有 19 个关键点
+ mapping=[
+ (0, 6),
+ # 省略
+ (12, 17),
+ (13, 18),
+ ])
+ ],
+)
+```
+
+合并后的数据集有 19 个关键点,这与 COCO 或 AIC 数据集都不同,因此需要一个新的数据集描述信息文件。[coco_aic.py](https://github.com/open-mmlab/mmpose/blob/dev-1.x/configs/_base_/datasets/coco_aic.py) 是一个描述信息文件的示例,它基于 [coco.py](https://github.com/open-mmlab/mmpose/blob/dev-1.x/configs/_base_/datasets/coco.py) 并进行了以下几点修改:
+
+- 添加了 AIC 数据集的文章信息;
+- 在 `keypoint_info` 中添加了“头顶”和“颈部”这两个只在 AIC 中定义的关键点;
+- 在 `skeleton_info` 中添加了“头顶”和“颈部”间的连线;
+- 拓展 `joint_weights` 和 `sigmas` 以添加新增关键点的信息。
+
+完成以上步骤后,合并数据集 `CombinedDataset` 可以通过以下方式配置:
+
+```python
+dataset = dict(
+ type='CombinedDataset',
+ # 使用新的描述信息文件
+ metainfo=dict(from_file='configs/_base_/datasets/coco_aic.py'),
+ datasets=[dataset_coco, dataset_aic],
+ # `train_pipeline` 包含了常用的数据预处理,
+ # 比如图片读取、数据增广等
+ pipeline=train_pipeline,
+)
+```
+
+此外,在使用混合数据集时,由于关键点数量的变化,模型的输出通道数也要做相应调整。如果用户用混合数据集训练了模型,但是要在 COCO 数据集上评估模型,就需要从模型输出的关键点中取出一个子集来匹配 COCO 中的关键点格式。可以通过 `test_cfg` 中的 `output_keypoint_indices` 参数自定义此子集。这个 [配置文件](https://github.com/open-mmlab/mmpose/blob/dev-1.x/configs/body_2d_keypoint/topdown_heatmap/coco/td-hm_hrnet-w32_8xb64-210e_coco-aic-256x192-combine.py) 展示了如何用 AIC 和 COCO 合并后的数据集训练模型并在 COCO 数据集上进行测试。用户可以查阅这个文件以获取更多细节,或者参考这个文件来构建新的混合数据集。
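+
+下面是 `output_keypoint_indices` 的一个配置示意(序号需与 COCO 的 17 个关键点在合并数据集元信息中的位置一致,此处假设它们位于前 17 位):
+
+```python
+model = dict(
+    # 其余模型配置省略
+    test_cfg=dict(
+        output_keypoint_indices=list(range(17)),
+    ))
+```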
diff --git a/internlm_langchain/knowledge_base/MMPose/content/model_analysis.md b/internlm_langchain/knowledge_base/MMPose/content/model_analysis.md
new file mode 100644
index 00000000..234dc5be
--- /dev/null
+++ b/internlm_langchain/knowledge_base/MMPose/content/model_analysis.md
@@ -0,0 +1,100 @@
+# Model Analysis
+
+## 统计模型参数量与计算量
+
+MMPose 提供了 `tools/analysis_tools/get_flops.py` 来统计模型的参数量与计算量。
+
+```shell
+python tools/analysis_tools/get_flops.py ${CONFIG_FILE} [--shape ${INPUT_SHAPE}] [--cfg-options ${CFG_OPTIONS}]
+```
+
+参数说明:
+
+`CONFIG_FILE` : 模型配置文件的路径。
+
+`--shape`: 模型的输入张量形状。
+
+`--input-constructor`: 如果指定为 `batch`,将会生成一个 `batch tensor` 来计算 FLOPs。
+
+`--batch-size`:如果 `--input-constructor` 指定为 `batch`,将会生成一个随机 `tensor`,形状为 `(batch_size, 3, **input_shape)` 来计算 FLOPs。
+
+`--cfg-options`: 如果指定,可选的 `cfg` 的键值对将会被合并到配置文件中。
+
+示例:
+
+```shell
+python tools/analysis_tools/get_flops.py configs/body_2d_keypoint/topdown_heatmap/coco/td-hm_hrnet-w32_8xb64-210e_coco-256x192.py
+```
+
+结果如下:
+
+```text
+==============================
+Input shape: (1, 3, 256, 192)
+Flops: 7.7 GFLOPs
+Params: 28.54 M
+==============================
+```
+
+```{note}
+目前该工具仍处于实验阶段,我们不能保证统计结果绝对正确,一些算子(比如 GN 或自定义算子)没有被统计到 FLOPs 中。
+```
+
+## 分析训练日志
+
+MMPose 提供了 `tools/analysis_tools/analyze_logs.py` 来对训练日志进行简单的分析,包括:
+
+- 将日志绘制成损失和精度曲线图
+- 统计训练速度
+
+### 绘制损失和精度曲线图
+
+该功能依赖于 `seaborn`,请先运行 `pip install seaborn` 安装依赖包。
+
+
+
+```shell
+python tools/analysis_tools/analyze_logs.py plot_curve ${JSON_LOGS} [--keys ${KEYS}] [--title ${TITLE}] [--legend ${LEGEND}] [--backend ${BACKEND}] [--style ${STYLE}] [--out ${OUT_FILE}]
+```
+
+示例:
+
+- 绘制损失曲线
+
+ ```shell
+ python tools/analysis_tools/analyze_logs.py plot_curve log.json --keys loss_kpt --legend loss_kpt
+ ```
+
+- 绘制精度曲线并导出为 PDF 文件
+
+ ```shell
+ python tools/analysis_tools/analyze_logs.py plot_curve log.json --keys acc_pose --out results.pdf
+ ```
+
+- 将多个日志文件绘制在同一张图上
+
+ ```shell
+ python tools/analysis_tools/analyze_logs.py plot_curve log1.json log2.json --keys loss_kpt --legend run1 run2 --title loss_kpt --out loss_kpt.png
+ ```
+
+### 统计训练速度
+
+```shell
+python tools/analysis_tools/analyze_logs.py cal_train_time ${JSON_LOGS} [--include-outliers]
+```
+
+示例:
+
+```shell
+python tools/analysis_tools/analyze_logs.py cal_train_time log.json
+```
+
+结果如下:
+
+```text
+-----Analyze train time of hrnet_w32_256x192.json-----
+slowest epoch 56, average time is 0.6924
+fastest epoch 1, average time is 0.6502
+time std over epochs is 0.0085
+average iter time: 0.6688 s/iter
+```
diff --git a/internlm_langchain/knowledge_base/MMPose/content/overview.md b/internlm_langchain/knowledge_base/MMPose/content/overview.md
new file mode 100644
index 00000000..a790cd3b
--- /dev/null
+++ b/internlm_langchain/knowledge_base/MMPose/content/overview.md
@@ -0,0 +1,76 @@
+# 概述
+
+本章将向你介绍 MMPose 的整体框架,并提供详细的教程链接。
+
+## 什么是 MMPose
+
+
+
+MMPose 是一款基于 PyTorch 的姿态估计开源工具箱,是 OpenMMLab 项目的成员之一,包含了丰富的 2D 多人姿态估计、2D 手部姿态估计、2D 人脸关键点检测、133 关键点全身人体姿态估计、动物关键点检测、服饰关键点检测等算法以及相关的组件和模块,下面是它的整体框架:
+
+MMPose 由 **8** 个主要部分组成,apis、structures、datasets、codecs、models、engine、evaluation 和 visualization。
+
+- **apis** 提供用于模型推理的高级 API
+
+- **structures** 提供 bbox、keypoint 和 PoseDataSample 等数据结构
+
+- **datasets** 支持用于姿态估计的各种数据集
+
+ - **transforms** 包含各种数据增强变换
+
+- **codecs** 提供姿态编解码器:编码器用于将姿态信息(通常为关键点坐标)编码为模型学习目标(如热力图),解码器则用于将模型输出解码为姿态估计结果
+
+- **models** 以模块化结构提供了姿态估计模型的各类组件
+
+ - **pose_estimators** 定义了所有姿态估计模型类
+ - **data_preprocessors** 用于预处理模型的输入数据
+ - **backbones** 包含各种骨干网络
+ - **necks** 包含各种模型颈部组件
+ - **heads** 包含各种模型头部
+ - **losses** 包含各种损失函数
+
+- **engine** 包含与姿态估计任务相关的运行时组件
+
+ - **hooks** 提供运行时的各种钩子
+
+- **evaluation** 提供各种评估模型性能的指标
+
+- **visualization** 用于可视化关键点骨架和热力图等信息
+
+## 如何使用本指南
+
+针对不同类型的用户,我们准备了详细的指南:
+
+1. 安装说明:
+
+ - [安装](./installation.md)
+
+2. MMPose 的基本使用方法:
+
+ - [20 分钟上手教程](./guide_to_framework.md)
+ - [Demos](./demos.md)
+ - [模型推理](./user_guides/inference.md)
+ - [配置文件](./user_guides/configs.md)
+ - [准备数据集](./user_guides/prepare_datasets.md)
+ - [训练与测试](./user_guides/train_and_test.md)
+
+3. 对于希望基于 MMPose 进行开发的研究者和开发者:
+
+ - [编解码器](./advanced_guides/codecs.md)
+ - [数据流](./advanced_guides/dataflow.md)
+ - [实现新模型](./advanced_guides/implement_new_models.md)
+ - [自定义数据集](./advanced_guides/customize_datasets.md)
+ - [自定义数据变换](./advanced_guides/customize_transforms.md)
+ - [自定义优化器](./advanced_guides/customize_optimizer.md)
+ - [自定义日志](./advanced_guides/customize_logging.md)
+ - [模型部署](./advanced_guides/how_to_deploy.md)
+ - [模型分析工具](./advanced_guides/model_analysis.md)
+ - [迁移指南](./migration.md)
+
+4. 对于希望加入开源社区,向 MMPose 贡献代码的研究者和开发者:
+
+ - [参与贡献代码](./contribution_guide.md)
+
+5. 对于使用过程中的常见问题:
+
+ - [FAQ](./faq.md)
diff --git a/internlm_langchain/knowledge_base/MMPose/content/prepare_datasets.md b/internlm_langchain/knowledge_base/MMPose/content/prepare_datasets.md
new file mode 100644
index 00000000..8b7d651e
--- /dev/null
+++ b/internlm_langchain/knowledge_base/MMPose/content/prepare_datasets.md
@@ -0,0 +1,221 @@
+# 准备数据集
+
+这份文档将指导你如何为 MMPose 准备数据集,包括使用内置数据集、创建自定义数据集、结合数据集进行训练、浏览和下载数据集。
+
+## 使用内置数据集
+
+**步骤一**: 准备数据
+
+MMPose 支持多种任务和相应的数据集。你可以在 [数据集仓库](https://mmpose.readthedocs.io/en/latest/dataset_zoo.html) 中找到它们。为了正确准备你的数据,请按照你选择的数据集的指南进行操作。
+
+**步骤二**: 在配置文件中进行数据集设置
+
+在开始训练或评估模型之前,你必须配置数据集设置。以 [`td-hm_hrnet-w32_8xb64-210e_coco-256x192.py`](/configs/body_2d_keypoint/topdown_heatmap/coco/td-hm_hrnet-w32_8xb64-210e_coco-256x192.py) 为例,它可以用于在 COCO 数据集上训练或评估 HRNet 姿态估计器。下面我们浏览一下数据集配置:
+
+- 基础数据集参数
+
+ ```python
+ # base dataset settings
+ dataset_type = 'CocoDataset'
+ data_mode = 'topdown'
+ data_root = 'data/coco/'
+ ```
+
+ - `dataset_type` 指定数据集的类名。用户可以参考 [数据集 API](https://mmpose.readthedocs.io/en/latest/api.html#datasets) 来找到他们想要的数据集的类名。
+ - `data_mode` 决定了数据集的输出格式,有两个选项可用:`'topdown'` 和 `'bottomup'`。如果 `data_mode='topdown'`,数据元素表示一个实例及其姿态;否则,一个数据元素代表一张图像,包含多个实例和姿态。
+ - `data_root` 指定数据集的根目录。
+
+- 数据处理流程
+
+ ```python
+ # pipelines
+ train_pipeline = [
+ dict(type='LoadImage'),
+ dict(type='GetBBoxCenterScale'),
+ dict(type='RandomFlip', direction='horizontal'),
+ dict(type='RandomHalfBody'),
+ dict(type='RandomBBoxTransform'),
+ dict(type='TopdownAffine', input_size=codec['input_size']),
+ dict(type='GenerateTarget', encoder=codec),
+ dict(type='PackPoseInputs')
+ ]
+ val_pipeline = [
+ dict(type='LoadImage'),
+ dict(type='GetBBoxCenterScale'),
+ dict(type='TopdownAffine', input_size=codec['input_size']),
+ dict(type='PackPoseInputs')
+ ]
+ ```
+
+ `train_pipeline` 和 `val_pipeline` 分别定义了训练和评估阶段处理数据元素的步骤。除了加载图像和打包输入之外,`train_pipeline` 主要包含数据增强技术和目标生成器,而 `val_pipeline` 则专注于将数据元素转换为统一的格式。
+
+- 数据加载器
+
+ ```python
+ # data loaders
+ train_dataloader = dict(
+ batch_size=64,
+ num_workers=2,
+ persistent_workers=True,
+ sampler=dict(type='DefaultSampler', shuffle=True),
+ dataset=dict(
+ type=dataset_type,
+ data_root=data_root,
+ data_mode=data_mode,
+ ann_file='annotations/person_keypoints_train2017.json',
+ data_prefix=dict(img='train2017/'),
+ pipeline=train_pipeline,
+ ))
+ val_dataloader = dict(
+ batch_size=32,
+ num_workers=2,
+ persistent_workers=True,
+ drop_last=False,
+ sampler=dict(type='DefaultSampler', shuffle=False, round_up=False),
+ dataset=dict(
+ type=dataset_type,
+ data_root=data_root,
+ data_mode=data_mode,
+ ann_file='annotations/person_keypoints_val2017.json',
+ bbox_file='data/coco/person_detection_results/'
+ 'COCO_val2017_detections_AP_H_56_person.json',
+ data_prefix=dict(img='val2017/'),
+ test_mode=True,
+ pipeline=val_pipeline,
+ ))
+ test_dataloader = val_dataloader
+ ```
+
+ 这个部分是配置数据集的关键。除了前面讨论过的基础数据集参数和数据处理流程之外,这里还定义了其他重要的参数。`batch_size` 决定了每个 GPU 的 batch size;`ann_file` 指定了数据集的注释文件;`data_prefix` 指定了图像文件夹。`bbox_file` 仅在 top-down 数据集的 val/test 数据加载器中使用,用于提供检测到的边界框信息。
+
+我们推荐从使用相同数据集的配置文件中复制数据集配置,而不是从头开始编写,以最小化潜在的错误。通过这样做,用户可以根据需要进行必要的修改,从而确保更可靠和高效的设置过程。
+
+## 使用自定义数据集
+
+[自定义数据集](../advanced_guides/customize_datasets.md) 指南提供了如何构建自定义数据集的详细信息。在本节中,我们将强调一些使用和配置自定义数据集的关键技巧。
+
+- 确定数据集类名。如果你将数据集重组为 COCO 格式,你可以简单地使用 `CocoDataset` 作为 `dataset_type` 的值。否则,你将需要使用你添加的自定义数据集类的名称。
+
+- 指定元信息配置文件。MMPose 1.x 采用了与 MMPose 0.x 不同的策略来指定元信息。在 MMPose 1.x 中,用户可以按照以下方式指定元信息配置文件:
+
+ ```python
+ train_dataloader = dict(
+ ...
+ dataset=dict(
+ type=dataset_type,
+ data_root='root/of/your/train/data',
+ ann_file='path/to/your/train/json',
+ data_prefix=dict(img='path/to/your/train/img'),
+ # specify dataset meta information
+ metainfo=dict(from_file='configs/_base_/datasets/custom.py'),
+ ...),
+ )
+ ```
+
+ 注意,`metainfo` 参数必须在 val/test 数据加载器中指定。
+
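+类似地,下面给出在 val/test 数据加载器中指定 `metainfo` 的一个示意(路径均为占位,请替换为实际数据):
+
+```python
+val_dataloader = dict(
+    dataset=dict(
+        type=dataset_type,
+        data_root='root/of/your/val/data',
+        ann_file='path/to/your/val/json',
+        data_prefix=dict(img='path/to/your/val/img'),
+        # 与训练集保持一致的元信息配置
+        metainfo=dict(from_file='configs/_base_/datasets/custom.py'),
+        test_mode=True,
+    ),
+)
+```
+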
+## 使用混合数据集进行训练
+
+MMPose 提供了一个方便且多功能的解决方案,用于训练混合数据集。请参考[混合数据集训练](./mixed_datasets.md)。
+
+## 浏览数据集
+
+`tools/misc/browse_dataset.py` 帮助用户可视化地浏览姿态数据集,或将图像保存到指定的目录。
+
+```shell
+python tools/misc/browse_dataset.py ${CONFIG} [-h] [--output-dir ${OUTPUT_DIR}] [--not-show] [--phase ${PHASE}] [--mode ${MODE}] [--show-interval ${SHOW_INTERVAL}]
+```
+
+| ARGS | Description |
+| -------------------------------- | ---------------------------------------------------------------------------------------------------------- |
+| `CONFIG` | 配置文件的路径 |
+| `--output-dir OUTPUT_DIR` | 保存可视化结果的目标文件夹。如果不指定,可视化的结果将不会被保存 |
+| `--not-show` | 不使用外部窗口显示可视化的结果 |
+| `--phase {train, val, test}` | 数据集选项 |
+| `--mode {original, transformed}` | 指定可视化图片类型。 `original` 为不使用数据增强的原始图片及标注可视化; `transformed` 为经过增强后的可视化 |
+| `--show-interval SHOW_INTERVAL` | 显示图片的时间间隔 |
+
+例如,用户想要可视化 COCO 数据集中的图像和标注,可以使用:
+
+```shell
+python tools/misc/browse_dataset.py configs/body_2d_keypoint/topdown_heatmap/coco/td-hm_hrnet-w32_8xb64-210e_coco-256x192.py --mode original
+```
+
+检测框和关键点将被绘制在原始图像上。下面是一个例子:
+
+
+原始图像在被输入模型之前需要被处理。为了可视化预处理后的图像和标注,用户需要将参数 `mode` 修改为 `transformed`。例如:
+
+```shell
+python tools/misc/browse_dataset.py configs/body_2d_keypoint/topdown_heatmap/coco/td-hm_hrnet-w32_8xb64-210e_coco-256x192.py --mode transformed
+```
+
+这是一个处理后的样本:
+
+
+
+如果热图目标是在 pipeline 中生成的,它将被一同可视化。
+
+## 用 MIM 下载数据集
+
+通过使用 [OpenDataLab](https://opendatalab.com/),您可以直接下载开源数据集。借助平台的搜索功能,您可以快速轻松地找到所需的数据集。使用平台上的格式化数据集,您可以高效地跨数据集执行任务。
+
+如果您使用 MIM 下载,请确保版本大于 v0.3.8。您可以使用以下命令进行更新、安装、登录和数据集下载:
+
+```shell
+# upgrade your MIM
+pip install -U openmim
+
+# install OpenDataLab CLI tools
+pip install -U opendatalab
+# log in OpenDataLab, registry
+odl login
+
+# download coco2017 and preprocess by MIM
+mim download mmpose --dataset coco2017
+```
+
+### 已支持的数据集
+
+下面是支持的数据集列表,更多数据集将在之后持续更新:
+
+#### 人体数据集
+
+| Dataset name | Download command |
+| ------------- | ----------------------------------------- |
+| COCO 2017 | `mim download mmpose --dataset coco2017` |
+| MPII | `mim download mmpose --dataset mpii` |
+| AI Challenger | `mim download mmpose --dataset aic` |
+| CrowdPose | `mim download mmpose --dataset crowdpose` |
+
+#### 人脸数据集
+
+| Dataset name | Download command |
+| ------------ | ------------------------------------ |
+| LaPa | `mim download mmpose --dataset lapa` |
+| 300W | `mim download mmpose --dataset 300w` |
+| WFLW | `mim download mmpose --dataset wflw` |
+
+#### 手部数据集
+
+| Dataset name | Download command |
+| ------------ | ------------------------------------------ |
+| OneHand10K | `mim download mmpose --dataset onehand10k` |
+| FreiHand | `mim download mmpose --dataset freihand` |
+| HaGRID | `mim download mmpose --dataset hagrid` |
+
+#### 全身数据集
+
+| Dataset name | Download command |
+| ------------ | ------------------------------------- |
+| Halpe | `mim download mmpose --dataset halpe` |
+
+#### 动物数据集
+
+| Dataset name | Download command |
+| ------------ | ------------------------------------- |
+| AP-10K | `mim download mmpose --dataset ap10k` |
+
+#### 服装数据集
+
+Coming Soon
diff --git a/internlm_langchain/knowledge_base/MMPose/content/projects.md b/internlm_langchain/knowledge_base/MMPose/content/projects.md
new file mode 100644
index 00000000..460d8583
--- /dev/null
+++ b/internlm_langchain/knowledge_base/MMPose/content/projects.md
@@ -0,0 +1,20 @@
+# Projects based on MMPose
+
+There are many projects built upon MMPose. We list some of them as examples of how to extend MMPose for your own projects. As the page might not be complete, please feel free to create a PR to update this page.
+
+## Projects as an extension
+
+Some projects extend the boundary of MMPose for deployment or other research fields. They reveal the potential of what MMPose can do. We list several of them as below.
+
+- [Anime Face Detector](https://github.com/hysts/anime-face-detector): An anime face landmark detection toolbox.
+- [PosePipeline](https://github.com/peabody124/PosePipeline): Open-Source Human Pose Estimation Pipeline for Clinical Research
+
+## Projects of papers
+
+There are also projects released with papers. Some of the papers are published in top-tier conferences (CVPR, ICCV, and ECCV), the others are also highly influential. We list some of these works as a reference for the community to develop and compare new pose estimation algorithms. Methods already supported and maintained by MMPose are not listed.
+
+- Pose for Everything: Towards Category-Agnostic Pose Estimation, ECCV 2022. [\[paper\]](https://arxiv.org/abs/2207.10387)[\[github\]](https://github.com/luminxu/Pose-for-Everything)
+- UniFormer: Unified Transformer for Efficient Spatiotemporal Representation Learning, ICLR 2022. [\[paper\]](https://arxiv.org/abs/2201.04676)[\[github\]](https://github.com/Sense-X/UniFormer)
+- Poseur: Direct Human Pose Regression with Transformers, ECCV 2022. [\[paper\]](https://arxiv.org/abs/2201.07412)[\[github\]](https://github.com/aim-uofa/Poseur)
+- ViTAEv2: Vision Transformer Advanced by Exploring Inductive Bias for Image Recognition and Beyond, NeurIPS 2022. [\[paper\]](https://arxiv.org/abs/2106.03348)[\[github\]](https://github.com/ViTAE-Transformer/ViTAE-Transformer)
+- Dite-HRNet: Dynamic Lightweight High-Resolution Network for Human Pose Estimation, IJCAI-ECAI 2022. [\[paper\]](https://arxiv.org/abs/2204.10762)[\[github\]](https://github.com/ZiyiZhang27/Dite-HRNet)
diff --git a/internlm_langchain/knowledge_base/MMPose/content/pytorch_2.md b/internlm_langchain/knowledge_base/MMPose/content/pytorch_2.md
new file mode 100644
index 00000000..4892e554
--- /dev/null
+++ b/internlm_langchain/knowledge_base/MMPose/content/pytorch_2.md
@@ -0,0 +1,14 @@
+# PyTorch 2.0 Compatibility and Benchmarks
+
+MMPose 1.0.0 is now compatible with PyTorch 2.0, ensuring that users can leverage the latest features and performance improvements offered by the PyTorch 2.0 framework when using MMPose. With the integration of inductor, users can expect faster model speeds. The table below shows several example models:
+
+| Model | Training Speed | Memory |
+| :-------- | :---------------------: | :-----------: |
+| ViTPose-B | 29.6% ↑ (0.931 → 0.655) | 10586 → 10663 |
+| ViTPose-S | 33.7% ↑ (0.563 → 0.373) | 6091 → 6170 |
+| HRNet-w32 | 12.8% ↑ (0.553 → 0.482) | 9849 → 10145 |
+| HRNet-w48 | 37.1% ↑ (0.437 → 0.275) | 7319 → 7394 |
+| RTMPose-t | 6.3% ↑ (1.533 → 1.437) | 6292 → 6489 |
+| RTMPose-s | 13.1% ↑ (1.645 → 1.430) | 9013 → 9208 |
+
+- Pytorch 2.0 test, add projects doc and refactor by @LareinaM in [PR#2136](https://github.com/open-mmlab/mmpose/pull/2136)
diff --git a/internlm_langchain/knowledge_base/MMPose/content/quick_run.md b/internlm_langchain/knowledge_base/MMPose/content/quick_run.md
new file mode 100644
index 00000000..55c2d63b
--- /dev/null
+++ b/internlm_langchain/knowledge_base/MMPose/content/quick_run.md
@@ -0,0 +1,188 @@
+# 快速上手
+
+在这一章里,我们将带领你走过MMPose工作流程中关键的七个步骤,帮助你快速上手:
+
+1. 使用预训练模型进行推理
+2. 准备数据集
+3. 准备配置文件
+4. 可视化训练图片
+5. 训练
+6. 测试
+7. 可视化
+
+## 安装
+
+请查看[安装指南](./installation.md),以了解完整步骤。
+
+## 快速开始
+
+### 使用预训练模型进行推理
+
+你可以通过以下命令来使用预训练模型对单张图片进行识别:
+
+```Bash
+python demo/image_demo.py \
+ tests/data/coco/000000000785.jpg \
+ configs/body_2d_keypoint/topdown_regression/coco/td-reg_res50_rle-8xb64-210e_coco-256x192.py\
+ https://download.openmmlab.com/mmpose/top_down/deeppose/deeppose_res50_coco_256x192_rle-2ea9bb4a_20220616.pth
+```
+
+该命令中用到了测试图片、完整的配置文件、预训练模型,如果MMPose安装无误,将会弹出一个新窗口,对检测结果进行可视化显示:
+
+
+
+更多演示脚本的详细参数说明可以在 [模型推理](./user_guides/inference.md) 中找到。
+
+### 准备数据集
+
+MMPose支持各种不同的任务,我们提供了对应的数据集准备教程。
+
+- [2D人体关键点](./dataset_zoo/2d_body_keypoint.md)
+
+- [3D人体关键点](./dataset_zoo/3d_body_keypoint.md)
+
+- [2D人手关键点](./dataset_zoo/2d_hand_keypoint.md)
+
+- [3D人手关键点](./dataset_zoo/3d_hand_keypoint.md)
+
+- [2D人脸关键点](./dataset_zoo/2d_face_keypoint.md)
+
+- [2D全身人体关键点](./dataset_zoo/2d_wholebody_keypoint.md)
+
+- [2D服饰关键点](./dataset_zoo/2d_fashion_landmark.md)
+
+- [2D动物关键点](./dataset_zoo/2d_animal_keypoint.md)
+
+你可以在【2D人体关键点数据集】>【COCO】下找到COCO数据集的准备教程,并按照教程完成数据集的下载和整理。
+
+```{note}
+在MMPose中,我们建议将COCO数据集存放到新建的 `$MMPOSE/data` 目录下。
+```
+
+### 准备配置文件
+
+MMPose拥有一套强大的配置系统,用于管理训练所需的一系列必要参数:
+
+- **通用**:环境、Hook、Checkpoint、Logger、Timer等
+
+- **数据**:Dataset、Dataloader、数据增强等
+
+- **训练**:优化器、学习率调整等
+
+- **模型**:Backbone、Neck、Head、损失函数等
+
+- **评测**:Metrics
+
+在`$MMPOSE/configs`目录下,我们提供了大量前沿论文方法的配置文件,可供直接使用和参考。
+
+要在COCO数据集上训练基于ResNet50的RLE模型时,所需的配置文件为:
+
+```Bash
+$MMPOSE/configs/body_2d_keypoint/topdown_regression/coco/td-reg_res50_rle-8xb64-210e_coco-256x192.py
+```
+
+我们需要将配置文件中的 data_root 变量修改为COCO数据集存放路径:
+
+```Python
+data_root = 'data/coco'
+```
+
+```{note}
+感兴趣的读者也可以查阅 [配置文件](./user_guides/configs.md) 来进一步学习MMPose所使用的配置系统。
+```
+
+### 可视化训练图片
+
+在开始训练之前,我们还可以对训练图片进行可视化,检查训练图片是否正确进行了数据增强。
+
+我们提供了相应的可视化脚本:
+
+```Bash
+python tools/misc/browse_dataset.py \
+ configs/body_2d_keypoint/topdown_regression/coco/td-reg_res50_rle-8xb64-210e_coco-256x192.py \
+ --mode transformed
+```
+
+
+
+### 训练
+
+确定数据无误后,运行以下命令启动训练:
+
+```Bash
+python tools/train.py configs/body_2d_keypoint/topdown_regression/coco/td-reg_res50_rle-8xb64-210e_coco-256x192.py
+```
+
+```{note}
+MMPose中集成了大量实用训练trick和功能:
+
+- 学习率warmup和scheduling
+
+- ImageNet预训练权重
+
+- 自动学习率缩放、自动batch size缩放
+
+- CPU训练、多机多卡训练、集群训练
+
+- HardDisk、LMDB、Petrel、HTTP等不同数据后端
+
+- 混合精度浮点训练
+
+- TensorBoard
+```
+
+### 测试
+
+在不指定额外参数时,训练的权重和日志信息会默认存储到`$MMPOSE/work_dirs`目录下,最优的模型权重存放在`$MMPOSE/work_dir/best_coco`目录下。
+
+我们可以通过如下指令测试模型在COCO验证集上的精度:
+
+```Bash
+python tools/test.py \
+ configs/body_2d_keypoint/topdown_regression/coco/td-reg_res50_rle-8xb64-210e_coco-256x192.py \
+ work_dir/best_coco/AP_epoch_20.pth
+```
+
+在COCO验证集上评测结果样例如下:
+
+```Bash
+ Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets= 20 ] = 0.704
+ Average Precision (AP) @[ IoU=0.50 | area= all | maxDets= 20 ] = 0.883
+ Average Precision (AP) @[ IoU=0.75 | area= all | maxDets= 20 ] = 0.777
+ Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets= 20 ] = 0.667
+ Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets= 20 ] = 0.769
+ Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 20 ] = 0.751
+ Average Recall (AR) @[ IoU=0.50 | area= all | maxDets= 20 ] = 0.920
+ Average Recall (AR) @[ IoU=0.75 | area= all | maxDets= 20 ] = 0.815
+ Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets= 20 ] = 0.709
+ Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets= 20 ] = 0.811
+08/23 12:04:42 - mmengine - INFO - Epoch(test) [3254/3254] coco/AP: 0.704168 coco/AP .5: 0.883134 coco/AP .75: 0.777015 coco/AP (M): 0.667207 coco/AP (L): 0.768644 coco/AR: 0.750913 coco/AR .5: 0.919710 coco/AR .75: 0.815334 coco/AR (M): 0.709232 coco/AR (L): 0.811334
+```
+
+```{note}
+如果需要测试模型在其他数据集上的表现,可以前往 [训练与测试](./user_guides/train_and_test.md) 查看。
+```
+
+### 可视化
+
+除了对关键点骨架的可视化以外,我们还支持对热度图进行可视化,你只需要在配置文件中设置`output_heatmaps=True`:
+
+```Python
+model = dict(
+ ## 内容省略
+ test_cfg = dict(
+ ## 内容省略
+ output_heatmaps=True
+ )
+)
+```
+
+或在命令行中添加`--cfg-options='model.test_cfg.output_heatmaps=True'`。
+
+可视化效果如下:
+
+
+
+```{note}
+如果你希望深入地学习MMPose,将其应用到自己的项目当中,我们准备了一份详细的 [迁移指南](./migration.md) 。
+```
diff --git a/internlm_langchain/knowledge_base/MMPose/content/train_and_test.md b/internlm_langchain/knowledge_base/MMPose/content/train_and_test.md
new file mode 100644
index 00000000..452eddc9
--- /dev/null
+++ b/internlm_langchain/knowledge_base/MMPose/content/train_and_test.md
@@ -0,0 +1,5 @@
+# 训练与测试
+
+中文内容建设中,暂时请查阅[英文版文档](../../en/user_guides/train_and_test.md)
+
+如果您愿意参与中文文档的翻译与维护,我们团队将十分感谢您的贡献!欢迎加入我们的社区群与我们取得联系,或直接按照 [如何给 MMPose 贡献代码](../contribution_guide.md) 在 GitHub 上提交 Pull Request。
diff --git a/internlm_langchain/knowledge_base/MMPose/content/useful_tools.md b/internlm_langchain/knowledge_base/MMPose/content/useful_tools.md
new file mode 100644
index 00000000..f2ceb771
--- /dev/null
+++ b/internlm_langchain/knowledge_base/MMPose/content/useful_tools.md
@@ -0,0 +1,5 @@
+# 常用工具
+
+中文内容建设中,暂时请查阅[英文版文档](../../en/user_guides/useful_tools.md)
+
+如果您愿意参与中文文档的翻译与维护,我们团队将十分感谢您的贡献!欢迎加入我们的社区群与我们取得联系,或直接按照 [如何给 MMPose 贡献代码](../contribution_guide.md) 在 GitHub 上提交 Pull Request。
diff --git a/internlm_langchain/knowledge_base/MMPose/content/visualization.md b/internlm_langchain/knowledge_base/MMPose/content/visualization.md
new file mode 100644
index 00000000..a584eb45
--- /dev/null
+++ b/internlm_langchain/knowledge_base/MMPose/content/visualization.md
@@ -0,0 +1,5 @@
+# 可视化
+
+中文内容建设中,暂时请查阅[英文版文档](../../en/user_guides/visualization.md)
+
+如果您愿意参与中文文档的翻译与维护,我们团队将十分感谢您的贡献!欢迎加入我们的社区群与我们取得联系,或直接按照 [如何给 MMPose 贡献代码](../contribution_guide.md) 在 GitHub 上提交 Pull Request。
diff --git a/internlm_langchain/knowledge_base/MMPose/vector_store/index.faiss b/internlm_langchain/knowledge_base/MMPose/vector_store/index.faiss
new file mode 100644
index 00000000..844a8a0b
Binary files /dev/null and b/internlm_langchain/knowledge_base/MMPose/vector_store/index.faiss differ
diff --git a/internlm_langchain/knowledge_base/MMPreTrain/content/cam_visualization.md b/internlm_langchain/knowledge_base/MMPreTrain/content/cam_visualization.md
new file mode 100644
index 00000000..94d5ed17
--- /dev/null
+++ b/internlm_langchain/knowledge_base/MMPreTrain/content/cam_visualization.md
@@ -0,0 +1,164 @@
+# 类别激活图(CAM)可视化
+
+## 类别激活图可视化工具介绍
+
+MMPretrain 提供 `tools/visualization/vis_cam.py` 工具来可视化类别激活图。请使用 `pip install "grad-cam>=1.3.6"` 安装依赖的 [pytorch-grad-cam](https://github.com/jacobgil/pytorch-grad-cam)。
+
+目前支持的方法有:
+
+| Method | What it does |
+| :----------: | :-----------------------------------------------------------------------------------------------: |
+| GradCAM | 使用平均梯度对 2D 激活进行加权 |
+| GradCAM++ | 类似 GradCAM,但使用了二阶梯度 |
+| XGradCAM | 类似 GradCAM,但通过归一化的激活对梯度进行了加权 |
+| EigenCAM | 使用 2D 激活的第一主成分(无法区分类别,但效果似乎不错) |
+| EigenGradCAM | 类似 EigenCAM,但支持类别区分,使用了激活 * 梯度的第一主成分,看起来和 GradCAM 差不多,但是更干净 |
+| LayerCAM | 使用正梯度对激活进行空间加权,对于浅层有更好的效果 |
+
+也可以使用新版本 `pytorch-grad-cam` 支持的更多 CAM 方法,但我们尚未验证可用性。
+
+**命令行**:
+
+```bash
+python tools/visualization/vis_cam.py \
+ ${IMG} \
+ ${CONFIG_FILE} \
+ ${CHECKPOINT} \
+ [--target-layers ${TARGET-LAYERS}] \
+ [--preview-model] \
+ [--method ${METHOD}] \
+ [--target-category ${TARGET-CATEGORY}] \
+ [--save-path ${SAVE_PATH}] \
+ [--vit-like] \
+ [--num-extra-tokens ${NUM-EXTRA-TOKENS}] \
+ [--aug_smooth] \
+ [--eigen_smooth] \
+ [--device ${DEVICE}] \
+ [--cfg-options ${CFG-OPTIONS}]
+```
+
+**所有参数的说明**:
+
+- `img`:目标图片路径。
+- `config`:模型配置文件的路径。
+- `checkpoint`:权重路径。
+- `--target-layers`:所查看的网络层名称,可输入一个或者多个网络层,如果不设置,将使用最后一个`block`中的`norm`层。
+- `--preview-model`:是否查看模型所有网络层。
+- `--method`:类别激活图可视化的方法,目前支持 `GradCAM`, `GradCAM++`, `XGradCAM`, `EigenCAM`, `EigenGradCAM`, `LayerCAM`,不区分大小写。如果不设置,默认为 `GradCAM`。
+- `--target-category`:查看的目标类别,如果不设置,使用模型检测出来的类别做为目标类别。
+- `--save-path`:保存的可视化图片的路径,默认不保存。
+- `--eigen-smooth`:是否使用主成分降低噪音,默认不开启。
+- `--vit-like`: 是否为 `ViT` 类似的 Transformer-based 网络
+- `--num-extra-tokens`: `ViT` 类网络的额外的 tokens 通道数,默认使用主干网络的 `num_extra_tokens`。
+- `--aug-smooth`:是否使用测试时增强
+- `--device`:使用的计算设备,如果不设置,默认为'cpu'。
+- `--cfg-options`:对配置文件的修改,参考[学习配置文件](../user_guides/config.md)。
+
+```{note}
+在指定 `--target-layers` 时,如果不知道模型有哪些网络层,可使用命令行添加 `--preview-model` 查看所有网络层名称;
+```
+
+## 如何可视化 CNN 网络的类别激活图(如 ResNet-50)
+
+`--target-layers` 在 `Resnet-50` 中的一些示例如下:
+
+- `'backbone.layer4'`,表示第四个 `ResLayer` 层的输出。
+- `'backbone.layer4.2'` 表示第四个 `ResLayer` 层中第三个 `BottleNeck` 块的输出。
+- `'backbone.layer4.2.conv1'` 表示上述 `BottleNeck` 块中 `conv1` 层的输出。
+
+1. 使用不同方法可视化 `ResNet50`,默认 `target-category` 为模型检测的结果,使用默认推导的 `target-layers`。
+
+ ```shell
+ python tools/visualization/vis_cam.py \
+ demo/bird.JPEG \
+ configs/resnet/resnet50_8xb32_in1k.py \
+ https://download.openmmlab.com/mmclassification/v0/resnet/resnet50_batch256_imagenet_20200708-cfb998bf.pth \
+ --method GradCAM
+ # GradCAM++, XGradCAM, EigenCAM, EigenGradCAM, LayerCAM
+ ```
+
+ | Image | GradCAM | GradCAM++ | EigenGradCAM | LayerCAM |
+ | ------------------------------------ | --------------------------------------- | ----------------------------------------- | -------------------------------------------- | ---------------------------------------- |
diff --git a/internlm_langchain/knowledge_base/MMPreTrain/content/test.md b/internlm_langchain/knowledge_base/MMPreTrain/content/test.md
new file mode 100644
index 00000000..054e1e41
--- /dev/null
+++ b/internlm_langchain/knowledge_base/MMPreTrain/content/test.md
@@ -0,0 +1,117 @@
+# 测试
+
+## 单机单卡测试
+
+你可以使用 `tools/test.py` 在电脑上用 CPU 或是 GPU 进行模型的测试。
+
+以下是测试脚本的完整用法:
+
+```shell
+python tools/test.py ${CONFIG_FILE} ${CHECKPOINT_FILE} [ARGS]
+```
+
+````{note}
+默认情况下,MMPretrain 会自动调用你的 GPU 进行测试。如果你有 GPU 但仍想使用 CPU 进行测试,请设置环境变量 `CUDA_VISIBLE_DEVICES` 为空或者 -1 来禁用 GPU。
+
+```bash
+CUDA_VISIBLE_DEVICES=-1 python tools/test.py ${CONFIG_FILE} ${CHECKPOINT_FILE} [ARGS]
+```
+````
+
+| 参数 | 描述 |
+| ------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| `CONFIG_FILE` | 配置文件的路径。 |
+| `CHECKPOINT_FILE` | 权重文件路径(支持 http 链接,你可以在[这里](https://mmpretrain.readthedocs.io/en/latest/modelzoo_statistics.html)寻找需要的权重文件)。 |
+| `--work-dir WORK_DIR` | 用来保存测试指标结果的文件夹。 |
+| `--out OUT` | 用来保存测试输出的文件。 |
+| `--out-item OUT_ITEM` | 指定测试输出文件的内容,可以为 "pred" 或 "metrics",其中 "pred" 表示保存所有模型输出,这些数据可以用于离线测评;"metrics" 表示输出测试指标。默认为 "pred"。 |
+| `--cfg-options CFG_OPTIONS` | 重载配置文件中的一些设置。使用类似 `xxx=yyy` 的键值对形式指定,这些设置会被融合入从配置文件读取的配置。你可以使用 `key="[a,b]"` 或者 `key=a,b` 的格式来指定列表格式的值,且支持嵌套,例如 `key="[(a,b),(c,d)]"`,这里的引号是不可省略的。另外每个重载项内部不可出现空格。 |
+| `--show-dir SHOW_DIR` | 用于保存可视化预测结果图像的文件夹。 |
+| `--show` | 在窗口中显示预测结果图像。 |
+| `--interval INTERVAL` | 每隔多少样本进行一次预测结果可视化。 |
+| `--wait-time WAIT_TIME` | 每个窗口的显示时间(单位为秒)。 |
+| `--no-pin-memory` | 是否在 dataloaders 中关闭 `pin_memory` 选项 |
+| `--tta` | 是否开启 Test-Time-Aug (TTA). 如果配置文件有 `tta_pipeline` 和 `tta_model`,将使用这些配置指定 TTA transforms,并且决定如何融合 TTA 的结果。 否则,通过平均分类分数使用 flip TTA。 |
+| `--launcher {none,pytorch,slurm,mpi}` | 启动器,默认为 "none"。 |
+
+## 单机多卡测试
+
+我们提供了一个 shell 脚本,可以使用 `torch.distributed.launch` 启动多 GPU 任务。
+
+```shell
+bash ./tools/dist_test.sh ${CONFIG_FILE} ${CHECKPOINT_FILE} ${GPU_NUM} [PY_ARGS]
+```
+
+| 参数 | 描述 |
+| ----------------- | ---------------------------------------------------------------------------------------------------------------------------------------- |
+| `CONFIG_FILE` | 配置文件的路径。 |
+| `CHECKPOINT_FILE` | 权重文件路径(支持 http 链接,你可以在[这里](https://mmpretrain.readthedocs.io/en/latest/modelzoo_statistics.html)寻找需要的权重文件)。 |
+| `GPU_NUM` | 使用的 GPU 数量。 |
+| `[PY_ARGS]` | `tools/test.py` 支持的其他可选参数,参见[上文](#单机单卡测试)。 |
+
+你还可以使用环境变量来指定启动器的额外参数,比如用如下命令将启动器的通讯端口变更为 29666:
+
+```shell
+PORT=29666 bash ./tools/dist_test.sh ${CONFIG_FILE} ${CHECKPOINT_FILE} ${GPU_NUM} [PY_ARGS]
+```
+
+如果你希望使用不同的 GPU 进行多项测试任务,可以在启动时指定不同的通讯端口和不同的可用设备。
+
+```shell
+CUDA_VISIBLE_DEVICES=0,1,2,3 PORT=29500 bash ./tools/dist_test.sh ${CONFIG_FILE1} ${CHECKPOINT_FILE} 4 [PY_ARGS]
+CUDA_VISIBLE_DEVICES=4,5,6,7 PORT=29501 bash ./tools/dist_test.sh ${CONFIG_FILE2} ${CHECKPOINT_FILE} 4 [PY_ARGS]
+```
+
+## 多机测试
+
+### 同一网络下的多机
+
+如果你希望使用同一局域网下连接的多台电脑进行一个测试任务,可以使用如下命令:
+
+在第一台机器上:
+
+```shell
+NNODES=2 NODE_RANK=0 PORT=$MASTER_PORT MASTER_ADDR=$MASTER_ADDR bash tools/dist_test.sh $CONFIG $CHECKPOINT_FILE $GPUS
+```
+
+在第二台机器上:
+
+```shell
+NNODES=2 NODE_RANK=1 PORT=$MASTER_PORT MASTER_ADDR=$MASTER_ADDR bash tools/dist_test.sh $CONFIG $CHECKPOINT_FILE $GPUS
+```
+
+和单机多卡相比,你需要指定一些额外的环境变量:
+
+| 环境变量 | 描述 |
+| ------------- | ---------------------------------------------- |
+| `NNODES` | 机器总数。 |
+| `NODE_RANK` | 本机的序号 |
+| `PORT` | 通讯端口,它在所有机器上都应当是一致的。 |
+| `MASTER_ADDR` | 主机的 IP 地址,它在所有机器上都应当是一致的。 |
+
+### Slurm 管理下的多机集群
+
+如果你在 [slurm](https://slurm.schedmd.com/) 集群上,可以使用 `tools/slurm_test.sh` 脚本启动任务。
+
+```shell
+[ENV_VARS] ./tools/slurm_test.sh ${PARTITION} ${JOB_NAME} ${CONFIG_FILE} ${CHECKPOINT_FILE} [PY_ARGS]
+```
+
+这里是该脚本的一些参数:
+
+| 参数 | 描述 |
+| ----------------- | ---------------------------------------------------------------------------------------------------------------------------------------- |
+| `PARTITION` | 使用的集群分区。 |
+| `JOB_NAME` | 任务的名称,你可以随意起一个名字。 |
+| `CONFIG_FILE` | 配置文件路径。 |
+| `CHECKPOINT_FILE` | 权重文件路径(支持 http 链接,你可以在[这里](https://mmpretrain.readthedocs.io/en/latest/modelzoo_statistics.html)寻找需要的权重文件)。 |
+| `[PY_ARGS]` | `tools/test.py` 支持的其他可选参数,参见[上文](#单机单卡测试)。 |
+
+这里是一些你可以用来配置 slurm 任务的环境变量:
+
+| 环境变量 | 描述 |
+| --------------- | ------------------------------------------------------------------------------------------ |
+| `GPUS` | 使用的 GPU 总数,默认为 8。 |
+| `GPUS_PER_NODE` | 每个节点分配的 GPU 数,你可以根据节点情况指定。默认为 8。 |
+| `CPUS_PER_TASK` | 每个任务分配的 CPU 数(通常一个 GPU 对应一个任务)。默认为 5。 |
+| `SRUN_ARGS` | `srun` 命令支持的其他参数。可用的选项参见[官方文档](https://slurm.schedmd.com/srun.html)。 |
diff --git a/internlm_langchain/knowledge_base/MMPreTrain/content/train.md b/internlm_langchain/knowledge_base/MMPreTrain/content/train.md
new file mode 100644
index 00000000..841edabb
--- /dev/null
+++ b/internlm_langchain/knowledge_base/MMPreTrain/content/train.md
@@ -0,0 +1,118 @@
+# 训练
+
+在本教程中,我们将介绍如何使用 MMPretrain 中提供的脚本启动训练任务。
+如果你需要了解一些具体的训练例子,可以查阅 [如何在自定义数据集上进行模型预训练](../notes/pretrain_custom_dataset.md) 和 [如何在自定义数据集上微调模型](../notes/finetune_custom_dataset.md).
+
+## 单机单卡训练
+
+你可以使用 `tools/train.py` 在电脑上用 CPU 或是 GPU 进行模型的训练。
+
+以下是训练脚本的完整用法:
+
+```shell
+python tools/train.py ${CONFIG_FILE} [ARGS]
+```
+
+````{note}
+默认情况下,MMPretrain 会自动调用你的 GPU 进行训练。如果你有 GPU 但仍想使用 CPU 进行训练,请设置环境变量 `CUDA_VISIBLE_DEVICES` 为空或者 -1 来禁用 GPU。
+
+```bash
+CUDA_VISIBLE_DEVICES=-1 python tools/train.py ${CONFIG_FILE} [ARGS]
+```
+````
+
+| 参数 | 描述 |
+| ------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| `CONFIG_FILE` | 配置文件的路径。 |
+| `--work-dir WORK_DIR` | 用来保存训练日志和权重文件的文件夹,默认是 `./work_dirs` 目录下,与配置文件同名的文件夹。 |
+| `--resume [RESUME]` | 恢复训练。如果指定了权重文件路径,则从指定的权重文件恢复;如果没有指定,则尝试从最新的权重文件进行恢复。 |
+| `--amp` | 启用混合精度训练。 |
+| `--no-validate` | **不建议** 在训练过程中不进行验证集上的精度验证。 |
+| `--auto-scale-lr` | 自动根据实际的批次大小(batch size)和预设的批次大小对学习率进行缩放。 |
+| `--no-pin-memory` | 是否在 dataloaders 中关闭 `pin_memory` 选项 |
+| `--no-persistent-workers` | 是否在 dataloaders 中关闭 `persistent_workers` 选项 |
+| `--cfg-options CFG_OPTIONS` | 重载配置文件中的一些设置。使用类似 `xxx=yyy` 的键值对形式指定,这些设置会被融合入从配置文件读取的配置。你可以使用 `key="[a,b]"` 或者 `key=a,b` 的格式来指定列表格式的值,且支持嵌套,例如 `key="[(a,b),(c,d)]"`,这里的引号是不可省略的。另外每个重载项内部不可出现空格。 |
+| `--launcher {none,pytorch,slurm,mpi}` | 启动器,默认为 "none"。 |
+
+## 单机多卡训练
+
+我们提供了一个 shell 脚本,可以使用 `torch.distributed.launch` 启动多 GPU 任务。
+
+```shell
+bash ./tools/dist_train.sh ${CONFIG_FILE} ${GPU_NUM} [PY_ARGS]
+```
+
+| 参数 | 描述 |
+| ------------- | ---------------------------------------------------------------- |
+| `CONFIG_FILE` | 配置文件的路径。 |
+| `GPU_NUM` | 使用的 GPU 数量。 |
+| `[PY_ARGS]` | `tools/train.py` 支持的其他可选参数,参见[上文](#单机单卡训练)。 |
+
+你还可以使用环境变量来指定启动器的额外参数,比如用如下命令将启动器的通讯端口变更为 29666:
+
+```shell
+PORT=29666 bash ./tools/dist_train.sh ${CONFIG_FILE} ${GPU_NUM} [PY_ARGS]
+```
+
+如果你希望使用不同的 GPU 进行多项训练任务,可以在启动时指定不同的通讯端口和不同的可用设备。
+
+```shell
+CUDA_VISIBLE_DEVICES=0,1,2,3 PORT=29500 bash ./tools/dist_train.sh ${CONFIG_FILE1} 4 [PY_ARGS]
+CUDA_VISIBLE_DEVICES=4,5,6,7 PORT=29501 bash ./tools/dist_train.sh ${CONFIG_FILE2} 4 [PY_ARGS]
+```
+
+## 多机训练
+
+### 同一网络下的多机
+
+如果你希望使用同一局域网下连接的多台电脑进行一个训练任务,可以使用如下命令:
+
+在第一台机器上:
+
+```shell
+NNODES=2 NODE_RANK=0 PORT=$MASTER_PORT MASTER_ADDR=$MASTER_ADDR bash tools/dist_train.sh $CONFIG $GPUS
+```
+
+在第二台机器上:
+
+```shell
+NNODES=2 NODE_RANK=1 PORT=$MASTER_PORT MASTER_ADDR=$MASTER_ADDR bash tools/dist_train.sh $CONFIG $GPUS
+```
+
+和单机多卡相比,你需要指定一些额外的环境变量:
+
+| 环境变量 | 描述 |
+| ------------- | ---------------------------------------------- |
+| `NNODES` | 机器总数。 |
+| `NODE_RANK` | 本机的序号 |
+| `PORT` | 通讯端口,它在所有机器上都应当是一致的。 |
+| `MASTER_ADDR` | 主机的 IP 地址,它在所有机器上都应当是一致的。 |
+
+通常来说,如果这几台机器之间不是高速网络连接,训练速度会非常慢。
+
+### Slurm 管理下的多机集群
+
+如果你在 [slurm](https://slurm.schedmd.com/) 集群上,可以使用 `tools/slurm_train.sh` 脚本启动任务。
+
+```shell
+[ENV_VARS] ./tools/slurm_train.sh ${PARTITION} ${JOB_NAME} ${CONFIG_FILE} ${WORK_DIR} [PY_ARGS]
+```
+
+这里是该脚本的一些参数:
+
+| 参数 | 描述 |
+| ------------- | ---------------------------------------------------------------- |
+| `PARTITION` | 使用的集群分区。 |
+| `JOB_NAME` | 任务的名称,你可以随意起一个名字。 |
+| `CONFIG_FILE` | 配置文件路径。 |
+| `WORK_DIR` | 用以保存日志和权重文件的文件夹。 |
+| `[PY_ARGS]` | `tools/train.py` 支持的其他可选参数,参见[上文](#单机单卡训练)。 |
+
+这里是一些你可以用来配置 slurm 任务的环境变量:
+
+| 环境变量 | 描述 |
+| --------------- | ------------------------------------------------------------------------------------------ |
+| `GPUS` | 使用的 GPU 总数,默认为 8。 |
+| `GPUS_PER_NODE` | 每个节点分配的 GPU 数,你可以根据节点情况指定。默认为 8。 |
+| `CPUS_PER_TASK` | 每个任务分配的 CPU 数(通常一个 GPU 对应一个任务)。默认为 5。 |
+| `SRUN_ARGS` | `srun` 命令支持的其他参数。可用的选项参见[官方文档](https://slurm.schedmd.com/srun.html)。 |
diff --git a/internlm_langchain/knowledge_base/MMPreTrain/content/verify_dataset.md b/internlm_langchain/knowledge_base/MMPreTrain/content/verify_dataset.md
new file mode 100644
index 00000000..655ce977
--- /dev/null
+++ b/internlm_langchain/knowledge_base/MMPreTrain/content/verify_dataset.md
@@ -0,0 +1,28 @@
+# 数据集验证
+
+在 MMPretrain 中,`tools/misc/verify_dataset.py` 脚本会检查数据集的所有图片,查看是否有**已经损坏**的图片。
+
+## 工具介绍
+
+```shell
+python tools/misc/verify_dataset.py \
+ ${CONFIG} \
+ [--out-path ${OUT-PATH}] \
+ [--phase ${PHASE}] \
+ [--num-process ${NUM-PROCESS}]
+ [--cfg-options ${CFG_OPTIONS}]
+```
+
+**所有参数说明**:
+
+- `config` : 配置文件的路径。
+- `--out-path` : 输出结果路径,默认为 `brokenfiles.log`。
+- `--phase` : 检查哪个阶段的数据集,可用值为 `train`、`test` 或者 `val`,默认为 `train`。
+- `--num-process` : 指定的进程数,默认为 1。
+- `--cfg-options`: 额外的配置选项,会被合入配置文件,参考[教程 1:如何编写配置文件](https://mmpretrain.readthedocs.io/zh_CN/latest/tutorials/config.html)。
+
+## 示例:
+
+```shell
+python tools/misc/verify_dataset.py configs/t2t_vit/t2t-vit-t-14_8xb64_in1k.py --out-path broken_imgs.log --phase val --num-process 8
+```
diff --git a/internlm_langchain/knowledge_base/MMPreTrain/vector_store/index.faiss b/internlm_langchain/knowledge_base/MMPreTrain/vector_store/index.faiss
new file mode 100644
index 00000000..3c0ec00c
Binary files /dev/null and b/internlm_langchain/knowledge_base/MMPreTrain/vector_store/index.faiss differ
diff --git a/internlm_langchain/knowledge_base/MMRazor/content/1_learn_about_config.md b/internlm_langchain/knowledge_base/MMRazor/content/1_learn_about_config.md
new file mode 100644
index 00000000..05cfd254
--- /dev/null
+++ b/internlm_langchain/knowledge_base/MMRazor/content/1_learn_about_config.md
@@ -0,0 +1,17 @@
+# Learn about Configs
+
+## Directory structure of configs in mmrazor
+
+
+
+`mmxxx`: task repositories of OpenMMLab, such as mmcls, mmdet, mmseg and so on.
+
+`_base_`: includes configs of datasets, experiment settings and model architectures.
+
+`distill`/`nas`/`pruning`: model compression algorithms.
+
+`vanilla`: task models owned by mmrazor.
+
+## More about config
+
+Please refer to [config](https://github.com/open-mmlab/mmengine/blob/main/docs/zh_cn/tutorials/config.md) in mmengine.
diff --git a/internlm_langchain/knowledge_base/MMRazor/content/2_train_different_types_algorithms.md b/internlm_langchain/knowledge_base/MMRazor/content/2_train_different_types_algorithms.md
new file mode 100644
index 00000000..06c68970
--- /dev/null
+++ b/internlm_langchain/knowledge_base/MMRazor/content/2_train_different_types_algorithms.md
@@ -0,0 +1,107 @@
+# Train different types of algorithms
+
+**Before running our algorithms, you may need to prepare the datasets according to the instructions in the corresponding document.**
+
+**Note**:
+
+- With the help of MMEngine, MMRazor unifies the entry interfaces for various tasks, so in theory our algorithms can adapt to all OpenMMLab upstream repos.
+
+- We dynamically pass arguments via `--cfg-options` (e.g., `mutable_cfg` in NAS algorithms or `channel_cfg` in pruning algorithms) to **avoid the need for a config for each subnet or checkpoint**. If you want to specify different subnets for retraining or testing, you just need to change this argument.
+
+### NAS
+
+Here we take SPOS (Single Path One-Shot) as an example. There are three steps to run neural architecture search (NAS): **supernet pre-training**, **search for subnet on the trained supernet** and **subnet retraining**.
+
+#### Supernet Pre-training
+
+```Python
+python tools/train.py ${CONFIG_FILE} [optional arguments]
+```
+
+The usage of optional arguments is the same as in the corresponding task repos, such as mmclassification, mmdetection and mmsegmentation.
+
+For example,
+
+```Python
+python ./tools/train.py \
+  configs/nas/mmcls/spos/spos_shufflenet_supernet_8xb128_in1k.py \
+  --work-dir $WORK_DIR
+```
+
+#### Search for Subnet on The Trained Supernet
+
+```Python
+python tools/train.py ${CONFIG_FILE} --cfg-options load_from=${CHECKPOINT_PATH} [optional arguments]
+```
+
+For example,
+
+```Python
+python ./tools/train.py \
+ configs/nas/mmcls/spos/spos_shufflenet_search_8xb128_in1k.py \
+ --cfg-options load_from=$STEP1_CKPT \
+ --work-dir $WORK_DIR
+```
+
+#### Subnet Retraining
+
+```Python
+python tools/train.py ${CONFIG_FILE} \
+ --cfg-options algorithm.fix_subnet=${MUTABLE_CFG_PATH} [optional arguments]
+```
+
+- `MUTABLE_CFG_PATH`: Path of `fix_subnet`. `fix_subnet` represents **config for mutable of the subnet searched out**, used to specify different subnets for retraining. An example for `fix_subnet` can be found [here](https://github.com/open-mmlab/mmrazor/blob/master/configs/nas/spos/SPOS_SHUFFLENETV2_330M_IN1k_PAPER.yaml), and the usage can be found [here](https://github.com/open-mmlab/mmrazor/blob/master/configs/nas/spos/README.md#subnet-retraining-on-imagenet).
+
+For example,
+
+```Python
+python ./tools/train.py \
+ configs/nas/mmcls/spos/spos_shufflenet_subnet_8xb128_in1k.py \
+ --work-dir $WORK_DIR \
+ --cfg-options algorithm.fix_subnet=$YAML_FILE_BY_STEP2
+```
+
+We note that instead of using `--cfg-options`, you can also directly modify `configs/nas/mmcls/spos/spos_shufflenet_subnet_8xb128_in1k.py` like this:
+
+```Python
+fix_subnet = 'configs/nas/mmcls/spos/SPOS_SHUFFLENETV2_330M_IN1k_PAPER.yaml'
+model = dict(fix_subnet=fix_subnet)
+```
+
+### Pruning
+
+Pruning has three steps, including **supernet pre-training**, **search for subnet on the trained supernet** and **subnet retraining**. The commands of the first two steps are similar to NAS, except that we need to use `CONFIG_FILE` of Pruning here. The commands of the **subnet retraining** are as follows.
+
+#### Subnet Retraining
+
+```Python
+python tools/train.py ${CONFIG_FILE} --cfg-options model._channel_cfg_paths=${CHANNEL_CFG_PATH} [optional arguments]
+```
+
+Different from NAS, the argument that needs to be specified here is `model._channel_cfg_paths`.
+
+- `CHANNEL_CFG_PATH`: Path of `channel_cfg`. `channel_cfg` represents the **config for the channels of the subnet searched out**, used to specify different subnets for testing.
+
+For example, the default `_channel_cfg_paths` is already set in the config used below.
+
+```Python
+python ./tools/train.py \
+ configs/pruning/mmcls/autoslim/autoslim_mbv2_1.5x_subnet_8xb256_in1k_flops-530M.py \
+ --work-dir your_work_dir
+```
+
+### Distillation
+
+There is only one step to start knowledge distillation.
+
+```Python
+python tools/train.py ${CONFIG_FILE} [optional arguments]
+```
+
+For example,
+
+```Python
+python ./tools/train.py \
+ configs/distill/mmcls/kd/kd_logits_r34_r18_8xb32_in1k.py \
+ --work-dir your_work_dir
+```
diff --git a/internlm_langchain/knowledge_base/MMRazor/content/3_train_with_different_devices.md b/internlm_langchain/knowledge_base/MMRazor/content/3_train_with_different_devices.md
new file mode 100644
index 00000000..aa24654e
--- /dev/null
+++ b/internlm_langchain/knowledge_base/MMRazor/content/3_train_with_different_devices.md
@@ -0,0 +1,93 @@
+# Train with different devices
+
+**Note**: The default learning rate in config files is for 8 GPUs. If you use a different number of GPUs, the total batch size changes in proportion, so you have to scale the learning rate following `new_lr = old_lr * new_ngpus / old_ngpus`. We recommend using `tools/dist_train.sh` even with 1 GPU, since some methods do not support non-distributed training.
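+
+The linear scaling rule above is plain arithmetic; a minimal sketch (no MMRazor API involved) might look like this:
+
+```Python
+def scale_lr(old_lr: float, old_ngpus: int, new_ngpus: int) -> float:
+    """Scale the learning rate linearly with the number of GPUs."""
+    return old_lr * new_ngpus / old_ngpus
+
+
+# Example: a config tuned for 8 GPUs with lr=0.1, now trained on 2 GPUs.
+print(scale_lr(0.1, 8, 2))  # 0.025
+```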
+
+### Training with CPU
+
+```Python
+export CUDA_VISIBLE_DEVICES=-1
+python tools/train.py ${CONFIG_FILE}
+```
+
+**Note**: We do not recommend using CPU for training because it is too slow and some algorithms use `SyncBN`, which requires distributed training. We support this feature to allow users to debug on machines without GPU for convenience.
+
+### Train with single/multiple GPUs
+
+```Python
+sh tools/dist_train.sh ${CONFIG_FILE} ${GPUS} --work_dir ${YOUR_WORK_DIR} [optional arguments]
+```
+
+**Note**: During training, checkpoints and logs are saved in the same folder structure as the config file under `work_dirs/`. Custom work directory is not recommended since evaluation scripts infer work directories from the config file name. If you want to save your weights somewhere else, please use symlink, for example:
+
+```Python
+ln -s ${YOUR_WORK_DIRS} ${MMRAZOR}/work_dirs
+```
+
+Alternatively, if you run MMRazor on a cluster managed with [slurm](https://slurm.schedmd.com/):
+
+```Python
+GPUS_PER_NODE=${GPUS_PER_NODE} GPUS=${GPUS} SRUN_ARGS=${SRUN_ARGS} sh tools/xxx/slurm_train.sh ${PARTITION} ${JOB_NAME} ${CONFIG_FILE} ${YOUR_WORK_DIR} [optional arguments]
+```
+
+### Train with multiple machines
+
+If you launch with multiple machines simply connected with ethernet, you can run the following commands:
+
+On the first machine:
+
+```Python
+NNODES=2 NODE_RANK=0 PORT=$MASTER_PORT MASTER_ADDR=$MASTER_ADDR sh tools/dist_train.sh $CONFIG $GPUS
+```
+
+On the second machine:
+
+```Python
+NNODES=2 NODE_RANK=1 PORT=$MASTER_PORT MASTER_ADDR=$MASTER_ADDR sh tools/dist_train.sh $CONFIG $GPUS
+```
+
+Usually it is slow if you do not have high speed networking like InfiniBand.
+
+If you launch with slurm, the command is the same as that on a single machine described above, but you need to refer to [slurm_train.sh](https://github.com/open-mmlab/mmselfsup/blob/master/tools/slurm_train.sh) to set appropriate parameters and environment variables.
+
+### Launch multiple jobs on a single machine
+
+If you launch multiple jobs on a single machine, e.g., 2 jobs of 4-GPU training on a machine with 8 GPUs, you need to specify different ports (29500 by default) for each job to avoid communication conflict.
+
+If you use `dist_train.sh` to launch training jobs:
+
+```Python
+CUDA_VISIBLE_DEVICES=0,1,2,3 PORT=29500 sh tools/xxx/dist_train.sh ${CONFIG_FILE} 4 --work_dir tmp_work_dir_1
+CUDA_VISIBLE_DEVICES=4,5,6,7 PORT=29501 sh tools/xxx/dist_train.sh ${CONFIG_FILE} 4 --work_dir tmp_work_dir_2
+```
+
+If you launch training jobs with slurm, you have two options to set different communication ports:
+
+Option 1:
+
+In `config1.py`:
+
+```Python
+dist_params = dict(backend='nccl', port=29500)
+```
+
+In `config2.py`:
+
+```Python
+dist_params = dict(backend='nccl', port=29501)
+```
+
+Then you can launch two jobs with config1.py and config2.py.
+
+```Python
+CUDA_VISIBLE_DEVICES=0,1,2,3 GPUS=4 sh tools/slurm_train.sh ${PARTITION} ${JOB_NAME} config1.py tmp_work_dir_1
+CUDA_VISIBLE_DEVICES=4,5,6,7 GPUS=4 sh tools/slurm_train.sh ${PARTITION} ${JOB_NAME} config2.py tmp_work_dir_2
+```
+
+Option 2:
+
+You can set different communication ports without modifying the configuration file, but you have to set `--cfg-options` to override the default port in the configuration file.
+
+```Python
+CUDA_VISIBLE_DEVICES=0,1,2,3 GPUS=4 sh tools/slurm_train.sh ${PARTITION} ${JOB_NAME} config1.py tmp_work_dir_1 --cfg-options dist_params.port=29500
+CUDA_VISIBLE_DEVICES=4,5,6,7 GPUS=4 sh tools/slurm_train.sh ${PARTITION} ${JOB_NAME} config2.py tmp_work_dir_2 --cfg-options dist_params.port=29501
+```
diff --git a/internlm_langchain/knowledge_base/MMRazor/content/4_test_a_model.md b/internlm_langchain/knowledge_base/MMRazor/content/4_test_a_model.md
new file mode 100644
index 00000000..152b5dc4
--- /dev/null
+++ b/internlm_langchain/knowledge_base/MMRazor/content/4_test_a_model.md
@@ -0,0 +1,79 @@
+# Test a model
+
+### NAS
+
+To test a NAS method, you can use the following command.
+
+```Python
+python tools/test.py ${CONFIG_FILE} ${CHECKPOINT_PATH} --cfg-options algorithm.fix_subnet=${FIX_SUBNET_PATH} [optional arguments]
+```
+
+- `FIX_SUBNET_PATH`: Path of `fix_subnet`. `fix_subnet` represents **config for mutable of the subnet searched out**, used to specify different subnets for testing. An example for `fix_subnet` can be found [here](https://github.com/open-mmlab/mmrazor/blob/master/configs/nas/spos/SPOS_SHUFFLENETV2_330M_IN1k_PAPER.yaml).
+
+The usage of optional arguments is the same as in the corresponding task repos, such as mmclassification, mmdetection and mmsegmentation.
+
+For example,
+
+```Python
+python tools/test.py \
+ configs/nas/mmcls/spos/spos_subnet_shufflenetv2_8xb128_in1k.py \
+ your_subnet_checkpoint_path \
+ --cfg-options algorithm.fix_subnet=configs/nas/mmcls/spos/SPOS_SHUFFLENETV2_330M_IN1k_PAPER.yaml
+```
+
+### Pruning
+
+#### Split Checkpoint(Optional)
+
+If you train a slimmable model during retraining, checkpoints of different subnets are actually fused into only one checkpoint. You can split this checkpoint into multiple independent checkpoints by using the following command
+
+```Python
+python tools/model_converters/split_checkpoint.py ${CONFIG_FILE} ${CHECKPOINT_PATH} --channel-cfgs ${CHANNEL_CFG_PATH} [optional arguments]
+```
+
+- `CHANNEL_CFG_PATH`: A list of paths of `channel_cfg`. For example, when you retrain a slimmable model, your command will be like `--cfg-options algorithm.channel_cfg=cfg1,cfg2,cfg3`. And your command here should be `--channel-cfgs cfg1 cfg2 cfg3`. The order of them should be the same.
+
+For example,
+
+```Python
+python tools/model_converters/split_checkpoint.py \
+ configs/pruning/autoslim/autoslim_mbv2_subnet_8xb256_in1k.py \
+ your_retraining_checkpoint_path \
+ --channel-cfgs configs/pruning/autoslim/AUTOSLIM_MBV2_530M_OFFICIAL.yaml configs/pruning/autoslim/AUTOSLIM_MBV2_320M_OFFICIAL.yaml configs/pruning/autoslim/AUTOSLIM_MBV2_220M_OFFICIAL.yaml
+```
+
+#### Test
+
+To test a pruning method, you can use the following command
+
+```Python
+python tools/test.py ${CONFIG_FILE} ${CHECKPOINT_PATH} --cfg-options model._channel_cfg_paths=${CHANNEL_CFG_PATH} [optional arguments]
+```
+
+- `task`: one of `mmcls`, `mmdet` and `mmseg`
+
+- `CHANNEL_CFG_PATH`: Path of `channel_cfg`. `channel_cfg` represents **config for channel of the subnet searched out**, used to specify different subnets for testing. An example for `channel_cfg` can be found [here](https://github.com/open-mmlab/mmrazor/blob/master/configs/pruning/autoslim/AUTOSLIM_MBV2_220M_OFFICIAL.yaml), and the usage can be found [here](https://github.com/open-mmlab/mmrazor/blob/master/configs/pruning/autoslim/README.md#test-a-subnet).
+
+For example,
+
+```Python
+python ./tools/test.py \
+ configs/pruning/mmcls/autoslim/autoslim_mbv2__1.5x_subnet_8xb256_in1k-530M.py \
+ your_splitted_checkpoint_path --metrics accuracy
+```
+
+### Distillation
+
+To test the distillation method, you can use the following command
+
+```Python
+python tools/test.py ${CONFIG_FILE} ${CHECKPOINT_PATH} [optional arguments]
+```
+
+For example,
+
+```Python
+python ./tools/test.py \
+ configs/distill/mmseg/cwd/cwd_logits_pspnet_r101_d8_pspnet_r18_d8_512x1024_cityscapes_80k.py \
+ your_splitted_checkpoint_path --show
+```
diff --git a/internlm_langchain/knowledge_base/MMRazor/content/algorithm.md b/internlm_langchain/knowledge_base/MMRazor/content/algorithm.md
new file mode 100644
index 00000000..ae632db6
--- /dev/null
+++ b/internlm_langchain/knowledge_base/MMRazor/content/algorithm.md
@@ -0,0 +1,273 @@
+# Algorithm
+
+## Introduction
+
+### What is algorithm in MMRazor
+
+MMRazor is a model compression toolkit, which includes 4 mainstream technologies:
+
+- Neural Architecture Search (NAS)
+- Pruning
+- Knowledge Distillation (KD)
+- Quantization (coming soon)
+
+And in MMRazor, `algorithm` is a general term for these technologies. For example, in NAS,
+
+[SPOS](https://github.com/open-mmlab/mmrazor/blob/master/configs/nas/spos) ([paper](https://arxiv.org/abs/1904.00420)) is an `algorithm`, and [CWD](https://github.com/open-mmlab/mmrazor/blob/master/configs/distill/cwd) is also an `algorithm` of knowledge distillation.
+
+`algorithm` is the entrance of `mmrazor/models`. Its role in MMRazor is the same as that of `classifier` in [MMClassification](https://github.com/open-mmlab/mmclassification) and `detector` in [MMDetection](https://github.com/open-mmlab/mmdetection).
+
+### About base algorithm
+
+In the directory of `models/algorithms`, all model compression algorithms are divided into 4 subdirectories: nas / pruning / distill / quantization. These algorithms must inherit from `BaseAlgorithm`, whose definition is as below.
+
+```Python
+from typing import Dict, List, Optional, Tuple, Union
+
+import torch
+import torch.nn as nn
+from mmengine.model import BaseModel
+from mmengine.structures import BaseDataElement
+
+from mmrazor.registry import MODELS
+
+LossResults = Dict[str, torch.Tensor]
+TensorResults = Union[Tuple[torch.Tensor], torch.Tensor]
+PredictResults = List[BaseDataElement]
+ForwardResults = Union[LossResults, TensorResults, PredictResults]
+
+@MODELS.register_module()
+class BaseAlgorithm(BaseModel):
+
+ def __init__(self,
+ architecture: Union[BaseModel, Dict],
+ data_preprocessor: Optional[Union[Dict, nn.Module]] = None,
+ init_cfg: Optional[Dict] = None):
+
+ ......
+
+ super().__init__(data_preprocessor, init_cfg)
+ self.architecture = architecture
+
+ def forward(self,
+ batch_inputs: torch.Tensor,
+ data_samples: Optional[List[BaseDataElement]] = None,
+ mode: str = 'tensor') -> ForwardResults:
+
+ if mode == 'loss':
+ return self.loss(batch_inputs, data_samples)
+ elif mode == 'tensor':
+ return self._forward(batch_inputs, data_samples)
+ elif mode == 'predict':
+ return self._predict(batch_inputs, data_samples)
+ else:
+ raise RuntimeError(f'Invalid mode "{mode}". '
+ 'Only supports loss, predict and tensor mode')
+
+ def loss(
+ self,
+ batch_inputs: torch.Tensor,
+ data_samples: Optional[List[BaseDataElement]] = None,
+ ) -> LossResults:
+ """Calculate losses from a batch of inputs and data samples."""
+ return self.architecture(batch_inputs, data_samples, mode='loss')
+
+ def _forward(
+ self,
+ batch_inputs: torch.Tensor,
+ data_samples: Optional[List[BaseDataElement]] = None,
+ ) -> TensorResults:
+ """Network forward process."""
+ return self.architecture(batch_inputs, data_samples, mode='tensor')
+
+ def _predict(
+ self,
+ batch_inputs: torch.Tensor,
+ data_samples: Optional[List[BaseDataElement]] = None,
+ ) -> PredictResults:
+ """Predict results from a batch of inputs and data samples with post-
+ processing."""
+ return self.architecture(batch_inputs, data_samples, mode='predict')
+```
+
+As you can see from above, `BaseAlgorithm` inherits from `BaseModel` of MMEngine. `BaseModel` implements the basic functions of the algorithmic model, such as weight initialization, batch input preprocessing (see the `BaseDataPreprocessor` class of MMEngine for more information), loss parsing, and model parameter updating. For more details of `BaseModel`, please see the MMEngine docs for `BaseModel`.
+
+`BaseAlgorithm`'s forward is just a wrapper of `BaseModel`'s forward. Subclasses that inherit from `BaseAlgorithm` only need to override the `loss` method, which implements the logic to calculate loss, so that various algorithms can be trained in the runner.
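+
+As a hedged illustration of this pattern, a subclass might override `loss` like the sketch below. The class name `ExampleAlgorithm` and the extra regularization term are hypothetical and purely for illustration; only `BaseAlgorithm` and the `mode='loss'` call come from the definition above.
+
+```Python
+from mmrazor.models.algorithms import BaseAlgorithm
+from mmrazor.registry import MODELS
+
+
+@MODELS.register_module()
+class ExampleAlgorithm(BaseAlgorithm):
+    """Hypothetical algorithm: adds an L2 penalty on top of the architecture's losses."""
+
+    def __init__(self, architecture, reg_weight: float = 1e-4, **kwargs):
+        super().__init__(architecture, **kwargs)
+        self.reg_weight = reg_weight
+
+    def loss(self, batch_inputs, data_samples=None):
+        # Reuse the wrapped architecture to compute its own losses.
+        losses = self.architecture(batch_inputs, data_samples, mode='loss')
+        # Add a simple extra loss term specific to this (hypothetical) algorithm.
+        reg = sum(p.pow(2).sum() for p in self.architecture.parameters())
+        losses['loss_reg'] = self.reg_weight * reg
+        return losses
+```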
+
+## How to use existing algorithms in MMRazor
+
+1. Configure your architecture that will be slimmed
+
+- Use the model config of other repos of OpenMMLab directly as below, which is an example of setting Faster-RCNN as our architecture.
+
+```Python
+_base_ = [
+ 'mmdet::_base_/models/faster_rcnn_r50_fpn.py',
+]
+
+architecture = _base_.model
+```
+
+- Use your customized model as below, which is an example of defining a VGG model as our architecture.
+
+```{note}
+How to customize architectures can refer to our tutorial: [Customize Architectures](https://mmrazor.readthedocs.io/en/main/advanced_guides/customize_architectures.html).
+```
+
+```Python
+default_scope='mmcls'
+architecture = dict(
+ type='ImageClassifier',
+ backbone=dict(type='VGG', depth=11, num_classes=1000),
+ neck=None,
+ head=dict(
+ type='ClsHead',
+ loss=dict(type='CrossEntropyLoss', loss_weight=1.0),
+ topk=(1, 5),
+ ))
+```
+
+2. Apply the registered algorithm to your architecture.
+
+```{note}
+The arg name of `algorithm` in config is **model** rather than **algorithm** in order to get better support from MMCV and MMEngine.
+```
+
+Depending on the algorithm used, more args may need to be set in `model`.
+
+```Python
+model = dict(
+ type='BaseAlgorithm',
+ architecture=architecture)
+```
+
+```{note}
+About the usage of `Config`, refer to [config.md](https://github.com/open-mmlab/mmengine/blob/main/docs/zh_cn/tutorials/config.md) please.
+```
+
+3. Apply some custom hooks or loops to your algorithm. (optional)
+
+- Custom hooks
+
+```Python
+custom_hooks = [
+ dict(type='NaiveVisualizationHook', priority='LOWEST'),
+]
+```
+
+- Custom loops
+
+```Python
+_base_ = ['./spos_shufflenet_supernet_8xb128_in1k.py']
+
+# Choose from ['train_cfg', 'val_cfg', 'test_cfg'] based on your loop type
+train_cfg = dict(
+ _delete_=True,
+ type='mmrazor.EvolutionSearchLoop',
+ dataloader=_base_.val_dataloader,
+ evaluator=_base_.val_evaluator)
+
+val_cfg = dict()
+test_cfg = dict()
+```
+
+## How to customize your algorithm
+
+### Common pipeline
+
+1. Register a new algorithm
+
+Create a new file `mmrazor/models/algorithms/{subdirectory}/xxx.py`
+
+```Python
+from mmrazor.models.algorithms import BaseAlgorithm
+from mmrazor.registry import MODELS
+
+@MODELS.register_module()
+class XXX(BaseAlgorithm):
+ def __init__(self, architecture):
+ super().__init__(architecture)
+ pass
+
+ def loss(self, batch_inputs):
+ pass
+```
+
+2. Rewrite its `loss` method.
+
+```Python
+from mmrazor.models.algorithms import BaseAlgorithm
+from mmrazor.registry import MODELS
+
+@MODELS.register_module()
+class XXX(BaseAlgorithm):
+ def __init__(self, architecture):
+ super().__init__(architecture)
+ ......
+
+ def loss(self, batch_inputs):
+ ......
+ return LossResults
+```
+
+3. Add the remaining functions of the algorithm
+
+```{note}
+This step is special because of the diversity of algorithms. Some functions of the algorithm may also be implemented in other files.
+```
+
+```Python
+from mmrazor.models.algorithms import BaseAlgorithm
+from mmrazor.registry import MODELS
+
+@MODELS.register_module()
+class XXX(BaseAlgorithm):
+ def __init__(self, architecture):
+ super().__init__(architecture)
+ ......
+
+ def loss(self, batch_inputs):
+ ......
+ return LossResults
+
+ def aaa(self):
+ ......
+
+ def bbb(self):
+ ......
+```
+
+4. Import the class
+
+You can add the following line to `mmrazor/models/algorithms/{subdirectory}/__init__.py`
+
+```Python
+from .xxx import XXX
+
+__all__ = ['XXX']
+```
+
+In addition, import XXX in `mmrazor/models/algorithms/__init__.py`
+
+5. Use the algorithm in your config file.
+
+Please refer to the previous section on how to use existing algorithms in MMRazor.
+
+```Python
+model = dict(
+ type='XXX',
+ architecture=architecture)
+```
+
+### Pipelines for different algorithms
+
+Please refer to our tutorials about how to customize different algorithms for more details as below.
+
+1. NAS
+
+[Customize NAS algorithms](https://mmrazor.readthedocs.io/en/main/advanced_guides/customize_nas_algorithms.html)
+
+2. Pruning
+
+[Customize Pruning algorithms](https://mmrazor.readthedocs.io/en/main/advanced_guides/customize_pruning_algorithms.html)
+
+3. Distill
+
+[Customize KD algorithms](https://mmrazor.readthedocs.io/en/main/advanced_guides/customize_kd_algorithms.html)
diff --git a/internlm_langchain/knowledge_base/MMRazor/content/apply_existing_algorithms_to_new_tasks.md b/internlm_langchain/knowledge_base/MMRazor/content/apply_existing_algorithms_to_new_tasks.md
new file mode 100644
index 00000000..119d7a6c
--- /dev/null
+++ b/internlm_langchain/knowledge_base/MMRazor/content/apply_existing_algorithms_to_new_tasks.md
@@ -0,0 +1,83 @@
+# Apply existing algorithms to new tasks
+
+Here we show how to apply existing algorithms to other tasks with an example of [SPOS](https://github.com/open-mmlab/mmrazor/tree/main/configs/nas/mmcls/spos) & [DetNAS](https://github.com/open-mmlab/mmrazor/tree/main/configs/nas/mmdet/detnas).
+
+> SPOS: Single Path One-Shot NAS for classification
+>
+> DetNAS: Single Path One-Shot NAS for detection
+
+**You just need to configure the existing algorithms in your config by replacing the architecture of mmcls with mmdet's.**
+
+If the specifics of the new task prevent direct application, you can quickly implement a new algorithm by inheriting from the existing one.
+
+SPOS config vs. DetNAS config:
+
+- SPOS
+
+```Python
+_base_ = [
+ 'mmrazor::_base_/settings/imagenet_bs1024_spos.py',
+ 'mmrazor::_base_/nas_backbones/spos_shufflenet_supernet.py',
+ 'mmcls::_base_/default_runtime.py',
+]
+
+# model
+supernet = dict(
+ type='ImageClassifier',
+ data_preprocessor=_base_.preprocess_cfg,
+ backbone=_base_.nas_backbone,
+ neck=dict(type='GlobalAveragePooling'),
+ head=dict(
+ type='LinearClsHead',
+ num_classes=1000,
+ in_channels=1024,
+ loss=dict(
+ type='LabelSmoothLoss',
+ num_classes=1000,
+ label_smooth_val=0.1,
+ mode='original',
+ loss_weight=1.0),
+ topk=(1, 5)))
+
+model = dict(
+ type='mmrazor.SPOS',
+ architecture=supernet,
+ mutator=dict(type='mmrazor.OneShotModuleMutator'))
+
+find_unused_parameters = True
+```
+
+- DetNAS
+
+```Python
+_base_ = [
+ 'mmdet::_base_/models/faster-rcnn_r50_fpn.py',
+ 'mmdet::_base_/datasets/coco_detection.py',
+ 'mmdet::_base_/schedules/schedule_1x.py',
+ 'mmdet::_base_/default_runtime.py',
+ 'mmrazor::_base_/nas_backbones/spos_shufflenet_supernet.py'
+]
+
+norm_cfg = dict(type='SyncBN', requires_grad=True)
+
+supernet = _base_.model
+
+supernet.backbone = _base_.nas_backbone
+supernet.backbone.norm_cfg = norm_cfg
+supernet.backbone.out_indices = (0, 1, 2, 3)
+supernet.backbone.with_last_layer = False
+
+supernet.neck.norm_cfg = norm_cfg
+supernet.neck.in_channels = [64, 160, 320, 640]
+
+supernet.roi_head.bbox_head.norm_cfg = norm_cfg
+supernet.roi_head.bbox_head.type = 'Shared4Conv1FCBBoxHead'
+
+model = dict(
+ _delete_=True,
+ type='mmrazor.SPOS',
+ architecture=supernet,
+ mutator=dict(type='mmrazor.OneShotModuleMutator'))
+
+find_unused_parameters = True
+```
diff --git a/internlm_langchain/knowledge_base/MMRazor/content/changelog.md b/internlm_langchain/knowledge_base/MMRazor/content/changelog.md
new file mode 100644
index 00000000..338c9425
--- /dev/null
+++ b/internlm_langchain/knowledge_base/MMRazor/content/changelog.md
@@ -0,0 +1,278 @@
+# Changelog of v1.x
+
+## v1.0.0 (24/04/2023)
+
+We are excited to announce the first official release of MMRazor 1.0.
+
+### Highlights
+
+- MMRazor quantization is released, covering both task models and model deployment. With its help, we can quickly quantize pre-trained models in OpenMMLab and deploy them to specified backends.
+
+### New Features & Improvements
+
+#### NAS
+
+- Update searchable model. (https://github.com/open-mmlab/mmrazor/pull/438)
+- Update NasMutator to build search_space in NAS. (https://github.com/open-mmlab/mmrazor/pull/426)
+
+#### Pruning
+
+- Add a new pruning algorithm named GroupFisher. We support the full pipeline for GroupFisher, including pruning, finetuning and deployment.(https://github.com/open-mmlab/mmrazor/pull/459)
+
+#### KD
+
+- Support stopping distillation after a certain epoch. (https://github.com/open-mmlab/mmrazor/pull/455)
+- Support distilling rtmdet with mmrazor, refer to here. (https://github.com/open-mmlab/mmyolo/pull/544)
+- Add mask channel in MGD Loss. (https://github.com/open-mmlab/mmrazor/pull/461)
+
+#### Quantization
+
+- Support two quantization types: QAT and PTQ (https://github.com/open-mmlab/mmrazor/pull/513)
+- Support various quantization bits. (https://github.com/open-mmlab/mmrazor/pull/513)
+- Support various quantization methods, such as per_tensor / per_channel, symmetry / asymmetry and so on. (https://github.com/open-mmlab/mmrazor/pull/513)
+- Support deploy quantized models to multiple backends, such as OpenVINO, TensorRT and so on. (https://github.com/open-mmlab/mmrazor/pull/513)
+- Support applying quantization algorithms to multiple task repos directly, such as mmcls, mmdet and so on. (https://github.com/open-mmlab/mmrazor/pull/513)
+
+### Bug Fixes
+
+- Fix split in Darts config. (https://github.com/open-mmlab/mmrazor/pull/451)
+- Fix a bug in Recorders. (https://github.com/open-mmlab/mmrazor/pull/446)
+- Fix a bug when using get_channel_unit.py. (https://github.com/open-mmlab/mmrazor/pull/432)
+- Fix a bug when deploying a pruned model to cuda. (https://github.com/open-mmlab/mmrazor/pull/495)
+
+### Contributors
+
+A total of 10 developers contributed to this release.
+Thanks @415905716 @gaoyang07 @humu789 @LKJacky @HIT-cwh @aptsunny @cape-zck @vansin @twmht @wm901115nwpu
+
+## v1.0.0rc2 (06/01/2023)
+
+We are excited to announce the release of MMRazor 1.0.0rc2.
+
+### New Features
+
+#### NAS
+
+- Add Performance Predictor: Support 4 performance predictors with 4 basic machine learning algorithms, which can be used to directly predict model accuracy without evaluation.(https://github.com/open-mmlab/mmrazor/pull/306)
+
+- Support [Autoformer](https://arxiv.org/pdf/2107.00651.pdf), a one-shot architecture search algorithm dedicated to vision transformer search.(https://github.com/open-mmlab/mmrazor/pull/315 )
+
+- Support [BigNAS](https://arxiv.org/pdf/2003.11142), a NAS algorithm which searches the following items in MobileNetV3 with the one-shot paradigm: kernel_sizes, out_channels, expand_ratios, block_depth and input sizes. (https://github.com/open-mmlab/mmrazor/pull/219 )
+
+#### Pruning
+
+- Support [DCFF](https://arxiv.org/abs/2107.06916), a filter channel pruning algorithm dedicated to efficient image classification.(https://github.com/open-mmlab/mmrazor/pull/295)
+
+- We release a powerful tool to automatically analyze channel dependency, named ChannelAnalyzer. Here is an example as shown below.(https://github.com/open-mmlab/mmrazor/pull/371)
+
+Now, ChannelAnalyzer supports most of CNN models in torchvision, mmcls, mmseg and mmdet. We will continue to support more models.
+
+```python
+from mmrazor.models.task_modules import ChannelAnalyzer
+from mmengine.hub import get_model
+import json
+
+model = get_model('mmdet::retinanet/retinanet_r18_fpn_1x_coco.py')
+unit_configs: dict = ChannelAnalyzer().analyze(model)
+unit_config0 = list(unit_configs.values())[0]
+print(json.dumps(unit_config0, indent=4))
+# # short version of the config
+# {
+# "channels": {
+# "input_related": [
+# {"name": "backbone.layer2.0.bn1"},
+# {"name": "backbone.layer2.0.conv2"}
+# ],
+# "output_related": [
+# {"name": "backbone.layer2.0.conv1"},
+# {"name": "backbone.layer2.0.bn1"}
+# ]
+# },
+#}
+```
+
+#### KD
+
+- Support [MGD](https://arxiv.org/abs/2205.01529), a detection distillation algorithm.(https://github.com/open-mmlab/mmrazor/pull/381)
+
+### Bug Fixes
+
+- Fix `FpnTeacherDistll` teacher forward from `backbone + neck + head` to `backbone + neck`. (#387)
+- Fix some expired configs and checkpoints. (#373 #372 #422)
+
+### Ongoing Changes
+
+We will release Quantization in next version(1.0.0rc3)!
+
+### Contributors
+
+A total of 11 developers contributed to this release: @wutongshenqiu @sunnyxiaohu @aptsunny @humu789 @TinyTigerPan @FreakieHuang @LKJacky @wilxy @gaoyang07 @spynccat @yivona08.
+
+## v1.0.0rc1 (27/10/2022)
+
+We are excited to announce the release of MMRazor 1.0.0rc1.
+
+### Highlights
+
+- **New Pruning Framework**: We have systematically refactored the Pruning module. The new Pruning module can more automatically resolve the dependencies between channels and cover more corner cases.
+
+### New Features
+
+#### Pruning
+
+- A new pruning framework is released in this release. (#311, #313)
+ It consists of five core modules, including Algorithm, `ChannelMutator`, `MutableChannelUnit`, `MutableChannel` and `DynamicOp`.
+
+- MutableChannelUnit is introduced for the first time. Each MutableChannelUnit manages all channels with channel dependency.
+
+ ```python
+ from mmrazor.registry import MODELS
+
+ ARCHITECTURE_CFG = dict(
+ _scope_='mmcls',
+ type='ImageClassifier',
+ backbone=dict(type='MobileNetV2', widen_factor=1.5),
+ neck=dict(type='GlobalAveragePooling'),
+ head=dict(type='mmcls.LinearClsHead', num_classes=1000, in_channels=1920))
+ model = MODELS.build(ARCHITECTURE_CFG)
+ from mmrazor.models.mutators import ChannelMutator
+
+ channel_mutator = ChannelMutator()
+ channel_mutator.prepare_from_supernet(model)
+ units = channel_mutator.mutable_units
+ print(units[0])
+ # SequentialMutableChannelUnit(
+ # name=backbone.conv1.conv_(0, 48)_48
+ # (output_related): ModuleList(
+ # (0): Channel(backbone.conv1.conv, index=(0, 48), is_output_channel=true, expand_ratio=1)
+ # (1): Channel(backbone.conv1.bn, index=(0, 48), is_output_channel=true, expand_ratio=1)
+ # (2): Channel(backbone.layer1.0.conv.0.conv, index=(0, 48), is_output_channel=true, expand_ratio=1)
+ # (3): Channel(backbone.layer1.0.conv.0.bn, index=(0, 48), is_output_channel=true, expand_ratio=1)
+ # )
+ # (input_related): ModuleList(
+ # (0): Channel(backbone.conv1.bn, index=(0, 48), is_output_channel=false, expand_ratio=1)
+ # (1): Channel(backbone.layer1.0.conv.0.conv, index=(0, 48), is_output_channel=false, expand_ratio=1)
+ # (2): Channel(backbone.layer1.0.conv.0.bn, index=(0, 48), is_output_channel=false, expand_ratio=1)
+ # (3): Channel(backbone.layer1.0.conv.1.conv, index=(0, 48), is_output_channel=false, expand_ratio=1)
+ # )
+ # (mutable_channel): SquentialMutableChannel(num_channels=48, activated_channels=48)
+ # )
+ ```
+
+Our new pruning framework can help you develop pruning algorithms more fluently. Please refer to our documents (PruningUserGuide) for more details.
+
+#### Distillation
+
+- Support [CRD](https://arxiv.org/abs/1910.10699), a distillation algorithm based on contrastive representation learning. (#281)
+
+- Support [PKD](https://arxiv.org/abs/2207.02039), a distillation algorithm that can be used in `MMDetection` and `MMDetection3D`. #304
+
+- Support [DEIT](https://arxiv.org/abs/2012.12877), a classic **Transformer** distillation algorithm.(#332)
+
+- Add a more powerful baseline setting for [KD](https://arxiv.org/abs/1503.02531). (#305)
+
+- Add `MethodInputsRecorder` and `FuncInputsRecorder` to record the input of a class method or a function.(#320)
+
+#### NAS
+
+- Support [DSNAS](https://arxiv.org/pdf/2002.09128.pdf), a nas algorithm that does not require retraining. (#226 )
+
+#### Tools
+
+- Support configurable intermediate feature map visualization. (#293)
+  A useful tool is supported in this release to visualize the intermediate features of a neural network. Please refer to our documents [VisualizationUserGuide](./docs/zh_cn/user_guides/visualization.md) for more details.
+
+### Bug Fixes
+
+- Fix the bug that `FunctionXXRecorder` and `FunctionXXDelivery` can not be pickled. (#320)
+
+### Ongoing changes
+
+- Quantization: We are developing the basic interface of PTQ and QAT. RFC(Request for Comments) will be released soon.
+- AutoSlim: AutoSlim is not yet available and is being refactored.
+- Fx Pruning Tracer: Currently, the model topology can only be resolved through the backward tracer. In the future, both backward tracer and fx tracer will be supported.
+- More Algorithms: BigNAS, AutoFormer, GreedyNAS and Resrep will be released in the next few versions.
+- Documentation: we will add more design docs, tutorials, and migration guidance so that the community can deep dive into our new design, participate in the future development, and smoothly migrate downstream libraries to MMRazor 1.x.
+
+### Contributors
+
+A total of 12 developers contributed to this release.
+Thanks @FreakieHuang @gaoyang07 @HIT-cwh @humu789 @LKJacky @pppppM @pprp @spynccat @sunnyxiaohu @wilxy @kitecats @SheffieldCao
+
+## v1.0.0rc0 (31/8/2022)
+
+We are excited to announce the release of MMRazor 1.0.0rc0.
+MMRazor 1.0.0rc0 is the first version of MMRazor 1.x, a part of the OpenMMLab 2.0 projects.
+Built upon the new [training engine](https://github.com/open-mmlab/mmengine),
+MMRazor 1.x simplified the interaction with other OpenMMLab repos, and upgraded the basic APIs of KD / Pruning / NAS.
+It also provides a series of knowledge distillation algorithms.
+
+### Highlights
+
+- **New engines**. MMRazor 1.x is based on [MMEngine](https://github.com/open-mmlab/mmengine), which provides a general and powerful runner that allows more flexible customizations and significantly simplifies the entrypoints of high-level interfaces.
+
+- **Unified interfaces**. As a part of the OpenMMLab 2.0 projects, MMRazor 1.x unifies and refactors the interfaces and internal logic of train, testing, datasets, models, evaluation, and visualization. All the OpenMMLab 2.0 projects share the same design in those interfaces and logic to allow the emergence of multi-task/modality algorithms.
+
+- **More configurable KD**. MMRazor 1.x adds [Recorder](../advanced_guides/recorder.md) to get the data needed for KD more automatically, [Delivery](../advanced_guides/delivery.md) to automatically pass the teacher's intermediate results to the student, and connectors to handle feature dimension mismatches between teacher and student.
+
+- **More kinds of KD algorithms**. Benefitting from the powerful APIs of KD, we have added several categories of KD algorithms, including data-free distillation, self-distillation, and zero-shot distillation.
+
+- **Unify the basic interface of NAS and Pruning**. We refactored [Mutable](../advanced_guides/mutable.md), adding mutable value and mutable channel. Both NAS and Pruning can be developed based on mutables.
+
+- **More documentation and tutorials**. We add a bunch of documentation and tutorials to help users get started more smoothly. Read it [here](https://mmrazor.readthedocs.io/en/1.0.0rc0/).
+
+### Breaking Changes
+
+#### Training and testing
+
+- MMRazor 1.x runs on PyTorch>=1.6. We have deprecated the support of PyTorch 1.5 to embrace the mixed precision training and other new features since PyTorch 1.6. Some models can still run on PyTorch 1.5, but the full functionality of MMRazor 1.x is not guaranteed.
+- MMRazor 1.x uses Runner in [MMEngine](https://github.com/open-mmlab/mmengine) rather than that in MMCV. The new Runner implements and unifies the building logic of dataset, model, evaluation, and visualizer. Therefore, MMRazor 1.x no longer maintains the building logics of those modules in `mmdet.train.apis` and `tools/train.py`. Those code have been migrated into [MMEngine](https://github.com/open-mmlab/mmengine/blob/main/mmengine/runner/runner.py).
+- The Runner in MMEngine also supports testing and validation. The testing scripts are also simplified, which has similar logic as that in training scripts to build the runner.
+
+#### Configs
+
+- The [Runner in MMEngine](https://github.com/open-mmlab/mmengine/blob/main/mmengine/runner/runner.py) uses a different config structure
+- Config and model names
+
+#### Components
+
+- Algorithms
+- Distillers
+- Mutators
+- Mutables
+- Hooks
+
+### Improvements
+
+- Support mixed precision training of all the models. However, some models may get NaN results due to numerical issues. We will update the documentation and list their mixed precision training results (accuracy or failure).
+
+### Bug Fixes
+
+- AutoSlim: Models of different sizes will no longer have the same size checkpoint
+
+### New Features
+
+- Support [Activation Boundaries Loss](https://arxiv.org/pdf/1811.03233.pdf)
+- Support [Be Your Own Teacher](https://arxiv.org/abs/1905.08094)
+- Support [Data-Free Learning of Student Networks](https://doi.org/10.1109/ICCV.2019.00361)
+- Support [Data-Free Adversarial Distillation](https://arxiv.org/pdf/1912.11006.pdf)
+- Support [Decoupled Knowledge Distillation](https://arxiv.org/pdf/2203.08679.pdf)
+- Support [Factor Transfer](https://arxiv.org/abs/1802.04977)
+- Support [FitNets](https://arxiv.org/abs/1412.6550)
+- Support [Distilling the Knowledge in a Neural Network](https://arxiv.org/abs/1503.02531)
+- Support [Overhaul](https://arxiv.org/abs/1904.01866)
+- Support [Zero-shot Knowledge Transfer via Adversarial Belief Matching](https://arxiv.org/abs/1905.09768)
+
+### Ongoing changes
+
+- Quantization: We are developing the basic interface of PTQ and QAT. RFC(Request for Comments) will be released soon.
+- AutoSlim: AutoSlim is not yet available and is being refactored.
+- Fx Pruning Tracer: Currently, the model topology can only be resolved through the backward tracer. In the future, both backward tracer and fx tracer will be supported.
+- More Algorithms: BigNAS, AutoFormer, GreedyNAS and Resrep will be released in the next few versions.
+- Documentation: we will add more design docs, tutorials, and migration guidance so that the community can deep dive into our new design, participate in the future development, and smoothly migrate downstream libraries to MMRazor 1.x.
+
+### Contributors
+
+A total of 13 developers contributed to this release.
+Thanks @FreakieHuang @gaoyang07 @HIT-cwh @humu789 @LKJacky @pppppM @pprp @spynccat @sunnyxiaohu @wilxy @wutongshenqiu @NickYangMin @Hiwyl
+Special thanks to @Davidgzx for his contribution to the data-free distillation algorithms
diff --git a/internlm_langchain/knowledge_base/MMRazor/content/contribution_guide.md b/internlm_langchain/knowledge_base/MMRazor/content/contribution_guide.md
new file mode 100644
index 00000000..1ca3a9af
--- /dev/null
+++ b/internlm_langchain/knowledge_base/MMRazor/content/contribution_guide.md
@@ -0,0 +1,59 @@
+# Contribution Guide
+
+All kinds of contributions are welcome, including but not limited to the following.
+
+- Fix typo or bugs
+- Add documentation or translate the documentation into other languages
+- Add new features and components
+
+### Workflow
+
+1. fork and pull the latest OpenMMLab repository
+2. checkout a new branch (do not use master branch for PRs)
+3. commit your changes
+4. create a PR
+
+```{note}
+If you plan to add some new features that involve large changes, it is encouraged to open an issue for discussion first.
+```
+
+### Code style
+
+#### Python
+
+We adopt [PEP8](https://www.python.org/dev/peps/pep-0008/) as the preferred code style.
+
+We use the following tools for linting and formatting:
+
+- [flake8](https://github.com/PyCQA/flake8): A wrapper around some linter tools.
+- [isort](https://github.com/timothycrosley/isort): A Python utility to sort imports.
+- [yapf](https://github.com/google/yapf): A formatter for Python files.
+- [codespell](https://github.com/codespell-project/codespell): A Python utility to fix common misspellings in text files.
+- [mdformat](https://github.com/executablebooks/mdformat): Mdformat is an opinionated Markdown formatter that can be used to enforce a consistent style in Markdown files.
+- [docformatter](https://github.com/myint/docformatter): A formatter to format docstring.
+
+Style configurations of yapf and isort can be found in [setup.cfg](./setup.cfg).
+
+We use a [pre-commit hook](https://pre-commit.com/) that checks and formats `flake8`, `yapf`, `isort`, `trailing whitespaces`, `markdown files`,
+fixes `end-of-files`, `double-quoted-strings`, `python-encoding-pragma`, `mixed-line-ending`, and sorts `requirements.txt` automatically on every commit.
+The config for a pre-commit hook is stored in [.pre-commit-config](./.pre-commit-config.yaml).
+
+After you clone the repository, you will need to install and initialize the pre-commit hook.
+
+```shell
+pip install -U pre-commit
+```
+
+From the repository folder
+
+```shell
+pre-commit install
+```
+
+After this, the code linters and formatter will be enforced on every commit.
+
+> Before you create a PR, make sure that your code lints and is formatted by yapf.
+
+#### C++ and CUDA
+
+We follow the [Google C++ Style Guide](https://google.github.io/styleguide/cppguide.html).
diff --git a/internlm_langchain/knowledge_base/MMRazor/content/customize_architectures.md b/internlm_langchain/knowledge_base/MMRazor/content/customize_architectures.md
new file mode 100644
index 00000000..d815eb79
--- /dev/null
+++ b/internlm_langchain/knowledge_base/MMRazor/content/customize_architectures.md
@@ -0,0 +1,255 @@
+# Customize Architectures
+
+Different from other tasks, architectures in MMRazor may consist of some special model components, such as **searchable backbones, connectors, dynamic ops**. In MMRazor, you can not only develop some common model components like other codebases of OpenMMLab, but also develop some special model components. Here is how to develop searchable model components and common model components.
+
+## Develop searchable model components
+
+1. Define a new backbone
+
+Create a new file `mmrazor/models/architectures/backbones/searchable_shufflenet_v2.py`, class `SearchableShuffleNetV2` inherits from `BaseBackbone` of mmcls, which is the codebase that you will use to build the model.
+
+```Python
+# Copyright (c) OpenMMLab. All rights reserved.
+import copy
+from typing import Dict, List, Optional, Sequence, Tuple, Union
+
+import torch.nn as nn
+from mmcls.models.backbones.base_backbone import BaseBackbone
+from mmcv.cnn import ConvModule, constant_init, normal_init
+from mmcv.runner import ModuleList, Sequential
+from torch import Tensor
+from torch.nn.modules.batchnorm import _BatchNorm
+
+from mmrazor.registry import MODELS
+
+@MODELS.register_module()
+class SearchableShuffleNetV2(BaseBackbone):
+
+ def __init__(self, ):
+ pass
+
+ def _make_layer(self, out_channels, num_blocks, stage_idx):
+ pass
+
+ def _freeze_stages(self):
+ pass
+
+ def init_weights(self):
+ pass
+
+ def forward(self, x):
+ pass
+
+ def train(self, mode=True):
+ pass
+```
+
+2. Build the architecture of the new backbone based on `arch_setting`
+
+```Python
+@MODELS.register_module()
+class SearchableShuffleNetV2(BaseBackbone):
+ def __init__(self,
+ arch_setting: List[List],
+ stem_multiplier: int = 1,
+ widen_factor: float = 1.0,
+ out_indices: Sequence[int] = (4, ),
+ frozen_stages: int = -1,
+ with_last_layer: bool = True,
+ conv_cfg: Optional[Dict] = None,
+ norm_cfg: Dict = dict(type='BN'),
+ act_cfg: Dict = dict(type='ReLU'),
+ norm_eval: bool = False,
+ with_cp: bool = False,
+ init_cfg: Optional[Union[Dict, List[Dict]]] = None) -> None:
+ layers_nums = 5 if with_last_layer else 4
+ for index in out_indices:
+ if index not in range(0, layers_nums):
+ raise ValueError('the item in out_indices must in '
+ f'range(0, 5). But received {index}')
+
+ self.frozen_stages = frozen_stages
+ if frozen_stages not in range(-1, layers_nums):
+ raise ValueError('frozen_stages must be in range(-1, 5). '
+ f'But received {frozen_stages}')
+
+ super().__init__(init_cfg)
+
+ self.arch_setting = arch_setting
+ self.widen_factor = widen_factor
+ self.out_indices = out_indices
+ self.conv_cfg = conv_cfg
+ self.norm_cfg = norm_cfg
+ self.act_cfg = act_cfg
+ self.norm_eval = norm_eval
+ self.with_cp = with_cp
+
+ last_channels = 1024
+ self.in_channels = 16 * stem_multiplier
+
+ # build the first layer
+ self.conv1 = ConvModule(
+ in_channels=3,
+ out_channels=self.in_channels,
+ kernel_size=3,
+ stride=2,
+ padding=1,
+ conv_cfg=conv_cfg,
+ norm_cfg=norm_cfg,
+ act_cfg=act_cfg)
+
+ # build the middle layers
+ self.layers = ModuleList()
+ for channel, num_blocks, mutable_cfg in arch_setting:
+ out_channels = round(channel * widen_factor)
+ layer = self._make_layer(out_channels, num_blocks,
+ copy.deepcopy(mutable_cfg))
+ self.layers.append(layer)
+
+ # build the last layer
+ if with_last_layer:
+ self.layers.append(
+ ConvModule(
+ in_channels=self.in_channels,
+ out_channels=last_channels,
+ kernel_size=1,
+ conv_cfg=conv_cfg,
+ norm_cfg=norm_cfg,
+ act_cfg=act_cfg))
+```
+
+3. Implement `_make_layer` with `mutable_cfg`
+
+```Python
+@MODELS.register_module()
+class SearchableShuffleNetV2(BaseBackbone):
+
+ ...
+
+ def _make_layer(self, out_channels: int, num_blocks: int,
+ mutable_cfg: Dict) -> Sequential:
+ """Stack mutable blocks to build a layer for ShuffleNet V2.
+ Note:
+ Here we use ``module_kwargs`` to pass dynamic parameters such as
+ ``in_channels``, ``out_channels`` and ``stride``
+ to build the mutable.
+ Args:
+ out_channels (int): out_channels of the block.
+ num_blocks (int): number of blocks.
+ mutable_cfg (dict): Config of mutable.
+ Returns:
+ mmcv.runner.Sequential: The layer made.
+ """
+ layers = []
+ for i in range(num_blocks):
+ stride = 2 if i == 0 else 1
+
+ mutable_cfg.update(
+ module_kwargs=dict(
+ in_channels=self.in_channels,
+ out_channels=out_channels,
+ stride=stride))
+ layers.append(MODELS.build(mutable_cfg))
+ self.in_channels = out_channels
+
+ return Sequential(*layers)
+
+ ...
+```
+
+4. Implement other common methods
+
+You can refer to the implementation of `ShuffleNetV2` in mmcls for finishing other common methods.
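+
+As a rough, self-contained illustration of what these "common methods" usually look like (the class below is a toy `nn.Module`, not the real `SearchableShuffleNetV2`, and all layer shapes are made up):
+
+```Python
+import torch.nn as nn
+
+
+class BackboneSketch(nn.Module):
+    """Toy backbone showing the usual forward / _freeze_stages / train pattern."""
+
+    def __init__(self, out_indices=(4, ), frozen_stages=-1, norm_eval=False):
+        super().__init__()
+        self.conv1 = nn.Conv2d(3, 16, 3, stride=2, padding=1)
+        self.layers = nn.ModuleList(
+            nn.Conv2d(16, 16, 3, padding=1) for _ in range(5))
+        self.out_indices = out_indices
+        self.frozen_stages = frozen_stages
+        self.norm_eval = norm_eval
+
+    def _freeze_stages(self):
+        # Freeze the stem and the first `frozen_stages` layers.
+        if self.frozen_stages >= 0:
+            for param in self.conv1.parameters():
+                param.requires_grad = False
+        for i in range(self.frozen_stages):
+            for param in self.layers[i].parameters():
+                param.requires_grad = False
+
+    def forward(self, x):
+        x = self.conv1(x)
+        outs = []
+        for i, layer in enumerate(self.layers):
+            x = layer(x)
+            if i in self.out_indices:
+                outs.append(x)
+        return tuple(outs)
+
+    def train(self, mode=True):
+        # Keep frozen stages frozen and optionally keep BN layers in eval mode.
+        super().train(mode)
+        self._freeze_stages()
+        if mode and self.norm_eval:
+            for m in self.modules():
+                if isinstance(m, nn.BatchNorm2d):
+                    m.eval()
+```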
+
+5. Import the module
+
+You can either add the following line to `mmrazor/models/architectures/backbones/__init__.py`
+
+```Python
+from .searchable_shufflenet_v2 import SearchableShuffleNetV2
+
+__all__ = ['SearchableShuffleNetV2']
+```
+
+or alternatively add
+
+```Python
+custom_imports = dict(
+ imports=['mmrazor.models.architectures.backbones.searchable_shufflenet_v2'],
+ allow_failed_imports=False)
+```
+
+to the config file to avoid modifying the original code.
+
+6. Use the backbone in your config file
+
+```Python
+architecture = dict(
+ type=xxx,
+ model=dict(
+ ...
+ backbone=dict(
+ type='mmrazor.SearchableShuffleNetV2',
+ arg1=xxx,
+ arg2=xxx),
+ ...
+```
+
+## Develop common model components
+
+Here we show how to add a new backbone with an example of `xxxNet`.
+
+1. Define a new backbone
+
+Create a new file `mmrazor/models/architectures/backbones/xxxnet.py`, then implement the class `xxxNet`.
+
+```Python
+from mmengine.model import BaseModule
+from mmrazor.registry import MODELS
+
+@MODELS.register_module()
+class xxxNet(BaseModule):
+
+ def __init__(self, arg1, arg2, init_cfg=None):
+ super().__init__(init_cfg=init_cfg)
+ pass
+
+ def forward(self, x):
+ pass
+```
+
+2. Import the module
+
+You can either add the following line to `mmrazor/models/architectures/backbones/__init__.py`
+
+```Python
+from .xxxnet import xxxNet
+
+__all__ = ['xxxNet']
+```
+
+or alternatively add
+
+```Python
+custom_imports = dict(
+ imports=['mmrazor.models.architectures.backbones.xxxnet'],
+ allow_failed_imports=False)
+```
+
+to the config file to avoid modifying the original code.
+
+3. Use the backbone in your config file
+
+```Python
+architecture = dict(
+ type=xxx,
+ model=dict(
+ ...
+ backbone=dict(
+ type='xxxNet',
+ arg1=xxx,
+ arg2=xxx),
+ ...
+```
+
+How to add other model components is similar to backbone's. For more details, please refer to other codebases' docs.
diff --git a/internlm_langchain/knowledge_base/MMRazor/content/customize_kd_algorithms.md b/internlm_langchain/knowledge_base/MMRazor/content/customize_kd_algorithms.md
new file mode 100644
index 00000000..dee759c5
--- /dev/null
+++ b/internlm_langchain/knowledge_base/MMRazor/content/customize_kd_algorithms.md
@@ -0,0 +1,124 @@
+# Customize KD algorithms
+
+Here we show how to develop new KD algorithms with an example of `SingleTeacherDistill`.
+
+1. Register a new algorithm
+
+Create a new file `mmrazor/models/algorithms/distill/configurable/single_teacher_distill.py`, class `SingleTeacherDistill` inherits from class `BaseAlgorithm`
+
+```Python
+from mmrazor.registry import MODELS
+from ..base import BaseAlgorithm
+
+@MODELS.register_module()
+class SingleTeacherDistill(BaseAlgorithm):
+    def __init__(self, use_gt, **kwargs):
+        super().__init__(**kwargs)
+        pass
+
+    def train_step(self, data, optimizer):
+        pass
+```
+
+2. Develop connectors (Optional) .
+
+Take ConvModuleConnector as an example.
+
+```python
+from mmrazor.registry import MODELS
+from .base_connector import BaseConnector
+
+@MODELS.register_module()
+class ConvModuleConnector(BaseConnector):
+ def __init__(self, in_channel, out_channel, kernel_size = 1, stride = 1):
+ ...
+
+ def forward_train(self, feature):
+ ...
+```
+
+3. Develop distiller.
+
+Take `ConfigurableDistiller` as an example.
+
+```python
+from .base_distiller import BaseDistiller
+from mmrazor.registry import MODELS
+
+
+@MODELS.register_module()
+class ConfigurableDistiller(BaseDistiller):
+ def __init__(self,
+ student_recorders = None,
+ teacher_recorders = None,
+ distill_deliveries = None,
+ connectors = None,
+ distill_losses = None,
+ loss_forward_mappings = None):
+ ...
+
+ def build_connectors(self, connectors):
+ ...
+
+ def build_distill_losses(self, losses):
+ ...
+
+ def compute_distill_losses(self):
+ ...
+```
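+
+For intuition, a distiller like this is typically driven entirely by config. The snippet below is a hedged sketch of such a config: the top-level keys mirror the `__init__` arguments above, but the concrete recorder/loss types (`ModuleOutputs`, `KLDivergence`) and the module path `head.fc` are assumptions for illustration, not guaranteed names.
+
+```python
+# Hypothetical distiller config; adapt types and module paths to your model.
+distiller = dict(
+    type='ConfigurableDistiller',
+    student_recorders=dict(
+        fc=dict(type='ModuleOutputs', source='head.fc')),
+    teacher_recorders=dict(
+        fc=dict(type='ModuleOutputs', source='head.fc')),
+    distill_losses=dict(
+        loss_kl=dict(type='KLDivergence', tau=1, loss_weight=1.0)),
+    loss_forward_mappings=dict(
+        loss_kl=dict(
+            preds_S=dict(from_student=True, recorder='fc'),
+            preds_T=dict(from_student=False, recorder='fc'))))
+```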
+
+4. Develop custom loss (Optional).
+
+Here we take `L1Loss` as an example. Create a new file in `mmrazor/models/losses/l1_loss.py`.
+
+```python
+from typing import Optional
+
+import torch.nn as nn
+import torch.nn.functional as F
+
+from mmrazor.registry import MODELS
+
+@MODELS.register_module()
+class L1Loss(nn.Module):
+ def __init__(
+ self,
+ loss_weight: float = 1.0,
+ size_average: Optional[bool] = None,
+ reduce: Optional[bool] = None,
+ reduction: str = 'mean',
+ ) -> None:
+ super().__init__()
+ ...
+
+ def forward(self, s_feature, t_feature):
+ loss = F.l1_loss(s_feature, t_feature, self.size_average, self.reduce,
+ self.reduction)
+ return self.loss_weight * loss
+```
+
+5. Import the class
+
+You can either add the following line to `mmrazor/models/algorithms/__init__.py`
+
+```Python
+from .single_teacher_distill import SingleTeacherDistill
+
+__all__ = [..., 'SingleTeacherDistill']
+```
+
+or alternatively add
+
+```Python
+custom_imports = dict(
+ imports=['mmrazor.models.algorithms.distill.configurable.single_teacher_distill'],
+ allow_failed_imports=False)
+```
+
+to the config file to avoid modifying the original code.
+
+6. Use the algorithm in your config file
+
+```Python
+algorithm = dict(
+ type='Distill',
+ distiller=dict(type='SingleTeacherDistill', ...),
+ # you can also use your new algorithm components here
+ ...
+)
+```
diff --git a/internlm_langchain/knowledge_base/MMRazor/content/customize_mixed_algorithms.md b/internlm_langchain/knowledge_base/MMRazor/content/customize_mixed_algorithms.md
new file mode 100644
index 00000000..17b928d1
--- /dev/null
+++ b/internlm_langchain/knowledge_base/MMRazor/content/customize_mixed_algorithms.md
@@ -0,0 +1,164 @@
+# Customize mixed algorithms
+
+Here we show how to customize mixed algorithms with our algorithm components. We take [AutoSlim](https://github.com/open-mmlab/mmrazor/tree/main/configs/pruning/mmcls/autoslim) as an example.
+
+```{note}
+**Why is AutoSlim a mixed algorithm?**
+
+In [AutoSlim](https://github.com/open-mmlab/mmrazor/tree/main/configs/pruning/mmcls/autoslim), the sandwich rule and inplace distillation are introduced to enhance the training process, which is called slimmable training. The sandwich rule means that we train the model at the smallest width, the largest width and (n − 2) random widths, instead of n random widths. Inplace distillation means that we use the predicted label of the model at the largest width as the training label for other widths, while for the largest width we use the ground truth. So both a KD algorithm and a pruning algorithm are used in [AutoSlim](https://github.com/open-mmlab/mmrazor/tree/main/configs/pruning/mmcls/autoslim). A hedged sketch of this training loop is shown right after this note.
+```
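+
+A minimal, hedged sketch of that training recipe in plain PyTorch (the helpers `set_width` and `sample_width` are hypothetical stand-ins for what the mutator provides in MMRazor):
+
+```Python
+import torch.nn.functional as F
+
+
+def slimmable_train_step(model, set_width, sample_width, inputs, labels, n=4):
+    """Sandwich rule + inplace distillation, independent of any MMRazor API."""
+    losses = {}
+
+    # Largest width: trained with the ground-truth labels.
+    set_width(model, 'max')
+    logits_max = model(inputs)
+    losses['max_subnet'] = F.cross_entropy(logits_max, labels)
+    soft_target = logits_max.detach().softmax(dim=1)
+
+    # Smallest width and (n - 2) random widths: distilled from the largest width.
+    widths = ['min'] + [sample_width() for _ in range(n - 2)]
+    for i, width in enumerate(widths):
+        set_width(model, width)
+        logits = model(inputs)
+        losses[f'subnet_{i}'] = F.kl_div(
+            logits.log_softmax(dim=1), soft_target, reduction='batchmean')
+    return losses
+```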
+
+1. Register a new algorithm
+
+Create a new file `mmrazor/models/algorithms/nas/autoslim.py`, class `AutoSlim` inherits from class `BaseAlgorithm`. You need to build the KD algorithm component (distiller) and the pruning algorithm component (mutator) because AutoSlim is a mixed algorithm.
+
+```{note}
+You can also inherit from the existing algorithm instead of `BaseAlgorithm` if your algorithm is similar to the existing algorithm.
+```
+
+```{note}
+You can choose existing algorithm components in MMRazor, such as `OneShotChannelMutator` and `ConfigurableDistiller` in AutoSlim.
+
+If these in MMRazor don't meet your needs, you can customize new algorithm components for your algorithm. Reference is as follows:
+
+[Customize NAS algorithms](https://mmrazor.readthedocs.io/en/main/advanced_guides/customize_nas_algorithms.html)
+[Customize Pruning algorithms](https://mmrazor.readthedocs.io/en/main/advanced_guides/customize_pruning_algorithms.html)
+[Customize KD algorithms](https://mmrazor.readthedocs.io/en/main/advanced_guides/customize_kd_algorithms.html)
+```
+
+```Python
+# Copyright (c) OpenMMLab. All rights reserved.
+from typing import Dict, List, Optional, Union
+
+import torch
+from mmengine.model import BaseModel
+from torch import nn
+
+from mmrazor.models.distillers import ConfigurableDistiller
+from mmrazor.models.mutators import OneShotChannelMutator
+from mmrazor.registry import MODELS
+from ..base import BaseAlgorithm
+
+VALID_MUTATOR_TYPE = Union[OneShotChannelMutator, Dict]
+VALID_DISTILLER_TYPE = Union[ConfigurableDistiller, Dict]
+
+@MODELS.register_module()
+class AutoSlim(BaseAlgorithm):
+ def __init__(self,
+ mutator: VALID_MUTATOR_TYPE,
+ distiller: VALID_DISTILLER_TYPE,
+ architecture: Union[BaseModel, Dict],
+ data_preprocessor: Optional[Union[Dict, nn.Module]] = None,
+ num_random_samples: int = 2,
+ init_cfg: Optional[Dict] = None) -> None:
+ super().__init__(architecture, data_preprocessor, init_cfg)
+ self.mutator = self._build_mutator(mutator)
+ # `prepare_from_supernet` must be called before distiller initialized
+ self.mutator.prepare_from_supernet(self.architecture)
+
+ self.distiller = self._build_distiller(distiller)
+ self.distiller.prepare_from_teacher(self.architecture)
+ self.distiller.prepare_from_student(self.architecture)
+
+ ......
+
+ def _build_mutator(self,
+ mutator: VALID_MUTATOR_TYPE) -> OneShotChannelMutator:
+ """build mutator."""
+ if isinstance(mutator, dict):
+ mutator = MODELS.build(mutator)
+ if not isinstance(mutator, OneShotChannelMutator):
+ raise TypeError('mutator should be a `dict` or '
+ '`OneShotModuleMutator` instance, but got '
+ f'{type(mutator)}')
+
+ return mutator
+
+ def _build_distiller(
+ self, distiller: VALID_DISTILLER_TYPE) -> ConfigurableDistiller:
+ if isinstance(distiller, dict):
+ distiller = MODELS.build(distiller)
+ if not isinstance(distiller, ConfigurableDistiller):
+ raise TypeError('distiller should be a `dict` or '
+ '`ConfigurableDistiller` instance, but got '
+ f'{type(distiller)}')
+
+ return distiller
+```
+
+2. Implement the core logic in `train_step`
+
+In `train_step`, both the `mutator` and the `distiller` play an important role. For example, `sample_subnet`, `set_max_subnet` and `set_min_subnet` are supported by the `mutator`, and the function of`distill_step` is mainly implemented by the `distiller`.
+
+```Python
+@MODELS.register_module()
+class AutoSlim(BaseAlgorithm):
+
+ ......
+
+ def train_step(self, data: List[dict],
+ optim_wrapper: OptimWrapper) -> Dict[str, torch.Tensor]:
+
+ def distill_step(
+ batch_inputs: torch.Tensor, data_samples: List[BaseDataElement]
+ ) -> Dict[str, torch.Tensor]:
+ ......
+
+ ......
+
+ batch_inputs, data_samples = self.data_preprocessor(data, True)
+
+ total_losses = dict()
+ for kind in self.sample_kinds:
+ # update the max subnet loss.
+ if kind == 'max':
+ self.set_max_subnet()
+ ......
+ total_losses.update(add_prefix(max_subnet_losses, 'max_subnet'))
+ # update the min subnet loss.
+ elif kind == 'min':
+ self.set_min_subnet()
+ min_subnet_losses = distill_step(batch_inputs, data_samples)
+ total_losses.update(add_prefix(min_subnet_losses, 'min_subnet'))
+ # update the random subnets loss.
+ elif 'random' in kind:
+ self.set_subnet(self.sample_subnet())
+ random_subnet_losses = distill_step(batch_inputs, data_samples)
+ total_losses.update(
+ add_prefix(random_subnet_losses, f'{kind}_subnet'))
+
+ return total_losses
+```
+
+3. Import the class
+
+You can either add the following line to `mmrazor/models/algorithms/nas/__init__.py`
+
+```Python
+from .autoslim import AutoSlim
+
+__all__ = ['AutoSlim']
+```
+
+or alternatively add
+
+```Python
+custom_imports = dict(
+ imports=['mmrazor.models.algorithms.nas.autoslim'],
+ allow_failed_imports=False)
+```
+
+to the config file to avoid modifying the original code.
+
+4. Use the algorithm in your config file
+
+```Python
+model = dict(
+ type='mmrazor.AutoSlim',
+ architecture=...,
+ mutator=dict(
+ type='OneShotChannelMutator',
+ ...),
+ distiller=dict(
+ type='ConfigurableDistiller',
+ ...),
+ ...)
+```
diff --git a/internlm_langchain/knowledge_base/MMRazor/content/customize_nas_algorithms.md b/internlm_langchain/knowledge_base/MMRazor/content/customize_nas_algorithms.md
new file mode 100644
index 00000000..b8f180c0
--- /dev/null
+++ b/internlm_langchain/knowledge_base/MMRazor/content/customize_nas_algorithms.md
@@ -0,0 +1,136 @@
+# Customize NAS algorithms
+
+Here we show how to develop new NAS algorithms with an example of SPOS.
+
+1. Register a new algorithm
+
+Create a new file `mmrazor/models/algorithms/nas/spos.py`, class `SPOS` inherits from class `BaseAlgorithm`
+
+```Python
+from mmrazor.registry import MODELS
+from ..base import BaseAlgorithm
+
+@MODELS.register_module()
+class SPOS(BaseAlgorithm):
+ def __init__(self, **kwargs):
+ super(SPOS, self).__init__(**kwargs)
+ pass
+
+ def loss(self, batch_inputs, data_samples):
+ pass
+```
+
+2. Develop new algorithm components (optional)
+
+SPOS can directly use class `OneShotModuleMutator` as its core function provider. If the mutators provided in MMRazor don't meet your needs, you can develop new algorithm components for your algorithm. We will take `OneShotModuleMutator` as an example to introduce how to develop a new algorithm component:
+
+a. Create a new file `mmrazor/models/mutators/module_mutator/one_shot_module_mutator.py`, class `OneShotModuleMutator` inherits from class `ModuleMutator`
+
+b. Finish the functions you need in `OneShotModuleMutator`, eg: `sample_choices`, `set_choices` and so on.
+
+```Python
+from typing import Any, Dict
+
+from mmrazor.registry import MODELS
+from ...mutables import OneShotMutableModule
+from .module_mutator import ModuleMutator
+
+
+@MODELS.register_module()
+class OneShotModuleMutator(ModuleMutator):
+
+ def __init__(self, **kwargs):
+ super().__init__(**kwargs)
+
+ def sample_choices(self) -> Dict[int, Any]:
+ pass
+
+ def set_choices(self, choices: Dict[int, Any]) -> None:
+ pass
+
+ @property
+ def mutable_class_type(self):
+ return OneShotMutableModule
+```
+
+c. Import the new mutator
+
+You can either add the following line to `mmrazor/models/mutators/__init__.py`
+
+```Python
+from .module_mutator import OneShotModuleMutator
+```
+
+or alternatively add
+
+```Python
+custom_imports = dict(
+ imports=['mmrazor.models.mutators.module_mutator.one_shot_module_mutator'],
+ allow_failed_imports=False)
+```
+
+to the config file to avoid modifying the original code.
+
+d. Use the algorithm component in your config file
+
+```Python
+mutator=dict(type='mmrazor.OneShotModuleMutator')
+```
+
+Please refer to [Mutator](https://mmrazor.readthedocs.io/en/main/advanced_guides/mutator.html) for more details.
+
+3. Rewrite its `loss` function.
+
+Develop the key logic of your algorithm in the function `loss`. If your algorithm has special optimization steps, you should also rewrite the function `train_step`.
+
+```Python
+@MODELS.register_module()
+class SPOS(BaseAlgorithm):
+ def __init__(self, **kwargs):
+ super(SPOS, self).__init__(**kwargs)
+ pass
+
+ def sample_subnet(self):
+ pass
+
+ def set_subnet(self, subnet):
+ pass
+
+ def loss(self, batch_inputs, data_samples):
+ if self.is_supernet:
+ random_subnet = self.sample_subnet()
+ self.set_subnet(random_subnet)
+ return self.architecture(batch_inputs, data_samples, mode='loss')
+ else:
+ return self.architecture(batch_inputs, data_samples, mode='loss')
+```
+
+4. Add your custom functions (optional)
+
+After finishing your key logic in the function `loss`, if you also need other custom functions, you can add them to class `SPOS`.
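+
+For example, a minimal sketch (not the actual SPOS implementation) could delegate subnet sampling to the mutator, assuming `self.mutator` has been built in `__init__` (e.g. via `MODELS.build`) and prepared with `prepare_from_supernet`, as shown for AutoSlim in the mixed-algorithm tutorial:
+
+```Python
+@MODELS.register_module()
+class SPOS(BaseAlgorithm):
+
+    ......
+
+    def sample_subnet(self):
+        # Delegate random sampling to the mutator.
+        return self.mutator.sample_choices()
+
+    def set_subnet(self, subnet):
+        # Apply the sampled choices to the supernet.
+        self.mutator.set_choices(subnet)
+```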
+
+5. Import the class
+
+You can either add the following line to `mmrazor/models/algorithms/nas/__init__.py`
+
+```Python
+from .spos import SPOS
+
+__all__ = ['SPOS']
+```
+
+or alternatively add
+
+```Python
+custom_imports = dict(
+ imports=['mmrazor.models.algorithms.nas.spos'],
+ allow_failed_imports=False)
+```
+
+to the config file to avoid modifying the original code.
+
+6. Use the algorithm in your config file
+
+```Python
+model = dict(
+ type='mmrazor.SPOS',
+ architecture=supernet,
+ mutator=dict(type='mmrazor.OneShotModuleMutator'))
+```
diff --git a/internlm_langchain/knowledge_base/MMRazor/content/customize_pruning_algorithms.md b/internlm_langchain/knowledge_base/MMRazor/content/customize_pruning_algorithms.md
new file mode 100644
index 00000000..3980dfbd
--- /dev/null
+++ b/internlm_langchain/knowledge_base/MMRazor/content/customize_pruning_algorithms.md
@@ -0,0 +1,155 @@
+# Customize pruning algorithms
+
+Here we show how to develop new Pruning algorithms with an example of AutoSlim.
+
+1. Register a new algorithm
+
+Create a new file `mmrazor/models/algorithms/pruning/autoslim.py`, class `AutoSlim` inherits from class `BaseAlgorithm`.
+
+```Python
+from mmrazor.registry import MODELS
+from .base import BaseAlgorithm
+
+@MODELS.register_module()
+class AutoSlim(BaseAlgorithm):
+ def __init__(self,
+ mutator,
+ distiller,
+ architecture,
+ data_preprocessor,
+ num_random_samples = 2,
+ init_cfg = None) -> None:
+        super().__init__(architecture, data_preprocessor, init_cfg)
+ pass
+
+ def train_step(self, data, optimizer):
+ pass
+```
+
+2. Develop new algorithm components (optional)
+
+AutoSlim can directly use class `OneShotChannelMutator` as its core function provider. If it cannot meet your needs, you can develop new algorithm components for your algorithm like `OneShotChannelMutator`. We will take `OneShotChannelMutator` as an example to introduce how to develop a new algorithm component:
+
+a. Create a new file `mmrazor/models/mutators/channel_mutator/one_shot_channel_mutator.py`, class `OneShotChannelMutator` can inherit from `ChannelMutator`.
+
+b. Finish the functions you need, e.g. `build_search_groups`, `set_choices`, `sample_choices` and so on.
+
+```Python
+from mmrazor.registry import MODELS
+from .channel_mutator import ChannelMutator
+
+
+@MODELS.register_module()
+class OneShotChannelMutator(ChannelMutator):
+ def __init__(self, **kwargs):
+ super().__init__(**kwargs)
+
+ def sample_choices(self):
+ pass
+
+ def set_choices(self, choice_dict):
+ pass
+
+ # supernet is a kind of architecture in `mmrazor/models/architectures/`
+ def build_search_groups(self, supernet):
+ pass
+```
+
+c. Import the module in `mmrazor/models/mutators/channel_mutator/__init__.py`
+
+```Python
+from .one_shot_channel_mutator import OneShotChannelMutator
+
+__all__ = [..., 'OneShotChannelMutator']
+```
+
+3. Rewrite its `train_step`
+
+Develop the key logic of your algorithm in the function `train_step`.
+
+```Python
+from typing import Dict, List
+
+import torch
+from mmengine.optim import OptimWrapper
+from mmengine.structures import BaseDataElement
+
+from mmrazor.models.utils import add_prefix
+from mmrazor.registry import MODELS
+from ..base import BaseAlgorithm
+
+@MODELS.register_module()
+class AutoSlim(BaseAlgorithm):
+ def __init__(self,
+ mutator,
+ distiller,
+ architecture,
+ data_preprocessor,
+ num_random_samples = 2,
+ init_cfg = None) -> None:
+        super().__init__(architecture, data_preprocessor, init_cfg)
+ pass
+
+ def train_step(self, data: List[dict],
+ optim_wrapper: OptimWrapper) -> Dict[str, torch.Tensor]:
+
+ def distill_step(
+ batch_inputs: torch.Tensor, data_samples: List[BaseDataElement]
+ ) -> Dict[str, torch.Tensor]:
+ ...
+ return subnet_losses
+
+ batch_inputs, data_samples = self.data_preprocessor(data, True)
+
+ total_losses = dict()
+ for kind in self.sample_kinds:
+ # update the max subnet loss.
+ if kind == 'max':
+ self.set_max_subnet()
+ with optim_wrapper.optim_context(
+ self), self.distiller.teacher_recorders: # type: ignore
+ max_subnet_losses = self(batch_inputs, data_samples, mode='loss')
+ parsed_max_subnet_losses, _ = self.parse_losses(max_subnet_losses)
+ optim_wrapper.update_params(parsed_max_subnet_losses)
+ total_losses.update(add_prefix(max_subnet_losses, 'max_subnet'))
+ # update the min subnet loss.
+ elif kind == 'min':
+ self.set_min_subnet()
+ min_subnet_losses = distill_step(batch_inputs, data_samples)
+ total_losses.update(add_prefix(min_subnet_losses, 'min_subnet'))
+ # update the random subnets loss.
+ elif 'random' in kind:
+ self.set_subnet(self.sample_subnet())
+ random_subnet_losses = distill_step(batch_inputs, data_samples)
+ total_losses.update(
+ add_prefix(random_subnet_losses, f'{kind}_subnet'))
+
+ return total_losses
+```
+
+4. Add your custom functions (optional)
+
+After finishing your key logic in function `train_step`, if you also need other custom functions, you can add them in class `AutoSlim`.
+
+5. Import the class
+
+You can either add the following line to `mmrazor/models/algorithms/__init__.py`
+
+```Python
+from .pruning import AutoSlim
+
+__all__ = [..., 'AutoSlim']
+```
+
+Or alternatively add
+
+```Python
+custom_imports = dict(
+ imports=['mmrazor.models.algorithms.pruning.autoslim'],
+ allow_failed_imports=False)
+```
+
+to the config file to avoid modifying the original code.
+
+6. Use the algorithm in your config file
+
+```Python
+model = dict(
+ type='AutoSlim',
+ architecture=...,
+ mutator=dict(type='OneShotChannelMutator', ...),
+ )
+```
diff --git a/internlm_langchain/knowledge_base/MMRazor/content/customize_quantization_algorithms.md b/internlm_langchain/knowledge_base/MMRazor/content/customize_quantization_algorithms.md
new file mode 100644
index 00000000..e1dd25ea
--- /dev/null
+++ b/internlm_langchain/knowledge_base/MMRazor/content/customize_quantization_algorithms.md
@@ -0,0 +1,283 @@
+# Customize Quantization algorithms
+
+Here we show how to develop new QAT algorithms with an example of LSQ on OpenVINO backend.
+
+This document is mainly aimed at QAT because the PTQ process is relatively fixed and the components we provide can meet most of the needs. We will first give an overview of the overall required development components, and then introduce the specific implementation step by step.
+
+## Overall
+
+In the mmrazor quantization pipeline, in order to better support the OpenMMLab environment, we have configured most of the code modules for users. You can configure all the components directly in the config file. How to configure them can be found in our [file](https://github.com/open-mmlab/mmrazor/blob/quantize/configs/quantization/qat/minmax_openvino_resnet18_8xb32_in1k.py).
+
+```Python
+global_qconfig = dict(
+ w_observer=dict(),
+ a_observer=dict(),
+ w_fake_quant=dict(),
+ a_fake_quant=dict(),
+ w_qscheme=dict(),
+ a_qscheme=dict(),
+)
+model = dict(
+ type='mmrazor.MMArchitectureQuant',
+ architecture=resnet,
+ quantizer=dict(
+        type='mmrazor.OpenVINOQuantizer',
+ global_qconfig=global_qconfig,
+ tracer=dict()))
+train_cfg = dict(type='mmrazor.LSQEpochBasedLoop')
+```
+
+For `algorithm` and `tracer`, we recommend that you use the default configurations `MMArchitectureQuant` and `CustomTracer` provided by us. These two modules are specially built for the OpenMMLab environment, while for the other modules you can follow the steps below and choose or develop new operators according to your needs.
+
+To adapt to different backends, you need to select a different `quantizer`.
+
+To develop new quantization algorithms, you need to define new `observer` and `fakequant`.
+
+If the existing `loop` does not meet your needs, you may need to make some changes to the existing `loop` based on your algorithm.
+
+## Detailed steps
+
+1. Select a quantization algorithm
+
+We recommend that you directly use the `MMArchitectureQuant` in `mmrazor/models/algorithms/quantization/mm_architecture.py`. The class `MMArchitectureQuant` inherits from class `BaseAlgorithm`.
+
+This structure is built for the model in openmmlab. If you have other requirements, you can also refer to this [document](https://mmrazor.readthedocs.io/en/main/advanced_guides/customize_architectures.html#develop-common-model-components) to design the overall framework.
+
+2. Select quantizer
+
+At present, the quantizers we support are `NativeQuantizer`, `OpenVINOQuantizer`, `TensorRTQuantizer` and `AcademicQuantizer` in `mmrazor/models/quantizers/`. `AcademicQuantizer` and `NativeQuantizer` inherit from class `BaseQuantizer` in `mmrazor/models/quantizers/base.py`:
+
+```Python
+from abc import abstractmethod
+
+from mmengine.model import BaseModule
+
+from mmrazor.registry import TASK_UTILS
+
+class BaseQuantizer(BaseModule):
+
+    def __init__(self, tracer):
+        super().__init__()
+        self.tracer = TASK_UTILS.build(tracer)
+
+    @abstractmethod
+    def prepare(self, model, graph_module):
+        """tmp."""
+        pass
+
+    def swap_ff_with_fxff(self, model):
+        pass
+```
+
+`NativeQuantizer` is the quantizer we developed to adapt to the mmrazor environment according to PyTorch's official quantization logic. `AcademicQuantizer` is a quantizer designed for academic research to give users more room to operate.
+
+The classes `OpenVINOQuantizer` and `TensorRTQuantizer` inherit from class `NativeQuantizer`. They adapt to the `OpenVINO` and `TensorRT` backends respectively. You can also try to develop a quantizer based on other backends according to your own needs.
+
+3. Select tracer
+
+For the tracer, we use `CustomTracer` in `mmrazor/models/task_modules/tracer/fx/custom_tracer.py`. You can inherit this class and customize your own tracer.
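+
+A minimal sketch of such a customized tracer is shown below. The class name `MyCustomTracer` is hypothetical; it assumes tracers are built through the `TASK_UTILS` registry, as `BaseQuantizer` does.
+
+```Python
+from mmrazor.models.task_modules.tracer.fx.custom_tracer import CustomTracer
+from mmrazor.registry import TASK_UTILS
+
+@TASK_UTILS.register_module()
+class MyCustomTracer(CustomTracer):
+    """Hypothetical tracer that reuses `CustomTracer` and tweaks its behaviour."""
+
+    def __init__(self, **kwargs):
+        # `skipped_methods` and other CustomTracer arguments are forwarded as-is.
+        super().__init__(**kwargs)
+
+    # Override tracing hooks here if needed, e.g. which modules should be kept
+    # as single nodes (leaves) in the traced graph.
+```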
+
+4. Develop a new fakequant method (optional)
+
+You can use the fakequants provided by PyTorch in `mmrazor/models/fake_quants/torch_fake_quants.py` as core function providers. If you want to use fakequant methods from other papers, you can also define them yourself. Let's take LSQ as an example:
+
+a. Create a new file `mmrazor/models/fake_quants/lsq.py`, class `LearnableFakeQuantize` inherits from class `FakeQuantizeBase`.
+
+b. Finish the functions you need, eg: `observe_quant_params`, `calculate_qparams` and so on.
+
+```Python
+from mmrazor.registry import MODELS
+from torch.ao.quantization import FakeQuantizeBase
+
+@MODELS.register_module()
+class LearnableFakeQuantize(FakeQuantizeBase):
+ def __init__(self,
+ observer,
+ quant_min=0,
+ quant_max=255,
+ scale=1.,
+ zero_point=0.,
+ use_grad_scaling=True,
+ zero_point_trainable=False,
+ **observer_kwargs):
+ super(LearnableFakeQuantize, self).__init__()
+ pass
+
+ def observe_quant_params(self):
+ pass
+
+ def calculate_qparams(self):
+ pass
+
+ def forward(self, X):
+ pass
+```
+
+c. Import the module in `mmrazor/models/fake_quants/__init__.py`.
+
+```Python
+from .lsq import LearnableFakeQuantize
+
+__all__ = ['LearnableFakeQuantize']
+```
+
+5. Develop a new observer (optional)
+
+You can directly use the observers provided by PyTorch in `mmrazor/models/observers/torch_observers.py` or use observers customized by yourself. Let's take `LSQObserver` as an example:
+
+a. Create a new observer file `mmrazor/models/observers/lsq.py`, class `LSQObserver` inherits from classes `MinMaxObserver` and `LSQObserverMixIn`. These two parent classes compute `zero_point` and `scale`, respectively.
+
+b. Finish the functions you need, e.g. `calculate_qparams` and so on.
+
+```Python
+import math
+
+import torch
+from torch.ao.quantization.observer import MinMaxObserver
+
+from mmrazor.registry import MODELS
+
+class LSQObserverMixIn:
+ def __init__(self):
+ self.tensor_norm = None
+
+ @torch.jit.export
+ def _calculate_scale(self):
+ scale = 2 * self.tensor_norm / math.sqrt(self.quant_max)
+ sync_tensor(scale)
+ return scale
+
+@MODELS.register_module()
+class LSQObserver(MinMaxObserver, LSQObserverMixIn):
+ """LSQ observer.
+ Paper: Learned Step Size Quantization.
+ """
+ def __init__(self, *args, **kwargs):
+ MinMaxObserver.__init__(self, *args, **kwargs)
+ LSQObserverMixIn.__init__(self)
+
+ def forward(self, x_orig):
+ """Records the running minimum, maximum and tensor_norm of ``x``."""
+ if x_orig.numel() == 0:
+ return x_orig
+ x = x_orig.detach() # avoid keeping autograd tape
+ x = x.to(self.min_val.dtype)
+ self.tensor_norm = x.abs().mean()
+ min_val_cur, max_val_cur = torch.aminmax(x)
+ min_val = torch.min(min_val_cur, self.min_val)
+ max_val = torch.max(max_val_cur, self.max_val)
+ self.min_val.copy_(min_val)
+ self.max_val.copy_(max_val)
+ return x_orig
+
+ @torch.jit.export
+ def calculate_qparams(self):
+ """Calculates the quantization parameters."""
+ _, zero_point = MinMaxObserver.calculate_qparams(self)
+ scale = LSQObserverMixIn._calculate_scale(self)
+ return scale, zero_point
+```
+
+c. Import the module in `mmrazor/models/observers/__init__.py`
+
+```Python
+from .lsq import LSQObserver
+
+__all__ = ['LSQObserver']
+```
+
+6. Select loop or develop new loop
+
+At present, the QAT loops we support are `PTQLoop` and `QATEpochBasedLoop`, in `mmrazor/engine/runner/quantization_loops.py`. We can develop a new `LSQEpochBasedLoop` that inherits from class `QATEpochBasedLoop` and finish the functions we need for the LSQ method.
+
+```Python
+from typing import Dict, List, Optional, Tuple, Union
+
+from torch.utils.data import DataLoader
+
+from mmrazor.registry import LOOPS
+
+# Note: `QATEpochBasedLoop` is defined earlier in the same file
+# (mmrazor/engine/runner/quantization_loops.py).
+
+@LOOPS.register_module()
+class LSQEpochBasedLoop(QATEpochBasedLoop):
+ def __init__(
+ self,
+ runner,
+ dataloader: Union[DataLoader, Dict],
+ max_epochs: int,
+ val_begin: int = 1,
+ val_interval: int = 1,
+ freeze_bn_begin: int = -1,
+ dynamic_intervals: Optional[List[Tuple[int, int]]] = None) -> None:
+ super().__init__(
+ runner,
+ dataloader,
+ max_epochs,
+ val_begin,
+ val_interval,
+ freeze_bn_begin=freeze_bn_begin,
+ dynamic_intervals=dynamic_intervals)
+
+ self.is_first_batch = True
+
+ def prepare_for_run_epoch(self):
+ pass
+
+ def prepare_for_val(self):
+ pass
+
+ def run_epoch(self) -> None:
+ pass
+```
+
+Then import the module in `mmrazor/engine/runner/__init__.py`
+
+```Python
+from .quantization_loops import LSQEpochBasedLoop
+
+__all__ = ['LSQEpochBasedLoop']
+```
+
+7. Use the algorithm in your config file
+
+After completing the above steps, we have all the components of the qat algorithm, and now we can combine them in the config file.
+
+a. First, `_base_` stores the location of the model that needs to be quantized.
+
+b. Second, configure the observer, fakequant and qscheme in `global_qconfig` in detail.
+You can configure the required quantization bit width and quantization methods in `qscheme`, such as symmetric quantization or asymmetric quantization.
+
+c. Third, build the whole mmrazor model in `model`.
+
+d. Finally, complete all the remaining required configuration files.
+
+```Python
+_base_ = ['mmcls::resnet/resnet18_8xb16_cifar10.py']
+
+global_qconfig = dict(
+ w_observer=dict(type='mmrazor.LSQPerChannelObserver'),
+ a_observer=dict(type='mmrazor.LSQObserver'),
+ w_fake_quant=dict(type='mmrazor.LearnableFakeQuantize'),
+ a_fake_quant=dict(type='mmrazor.LearnableFakeQuantize'),
+ w_qscheme=dict(
+ qdtype='qint8', bit=8, is_symmetry=True, is_symmetric_range=True),
+ a_qscheme=dict(qdtype='quint8', bit=8, is_symmetry=True),
+)
+
+model = dict(
+ _delete_=True,
+ _scope_='mmrazor',
+ type='MMArchitectureQuant',
+ data_preprocessor=dict(
+ type='mmcls.ClsDataPreprocessor',
+ num_classes=1000,
+ # RGB format normalization parameters
+ mean=[123.675, 116.28, 103.53],
+ std=[58.395, 57.12, 57.375],
+ # convert image from BGR to RGB
+ to_rgb=True),
+ architecture=resnet,
+ float_checkpoint=float_ckpt,
+ quantizer=dict(
+ type='mmrazor.OpenVINOQuantizer',
+ is_qat=True,
+ global_qconfig=global_qconfig,
+ tracer=dict(
+ type='mmrazor.CustomTracer',
+ skipped_methods=[
+ 'mmcls.models.heads.ClsHead._get_loss',
+ 'mmcls.models.heads.ClsHead._get_predictions'
+ ])))
+
+# learning policy
+optim_wrapper = dict()
+param_scheduler = dict()
+model_wrapper_cfg = dict()
+
+# train, val, test setting
+train_cfg = dict(type='mmrazor.LSQEpochBasedLoop')
+val_cfg = dict()
+test_cfg = val_cfg
+```
diff --git a/internlm_langchain/knowledge_base/MMRazor/content/delivery.md b/internlm_langchain/knowledge_base/MMRazor/content/delivery.md
new file mode 100644
index 00000000..b7c842c0
--- /dev/null
+++ b/internlm_langchain/knowledge_base/MMRazor/content/delivery.md
@@ -0,0 +1,217 @@
+# Delivery
+
+## Introduction of Delivery
+
+`Delivery` is a mechanism used in **knowledge distillation**, which is to **align the intermediate results** between the teacher model and the student model by delivering and rewriting these intermediate results between them. As shown in the figure below, deliveries can be used to:
+
+- **Deliver the output of a layer of the teacher model directly to a layer of the student model.** In some knowledge distillation algorithms, we may need to deliver the output of a layer of the teacher model to the student model directly. For example, in [LAD](https://arxiv.org/abs/2108.10520) algorithm, the student model needs to obtain the label assignment of the teacher model directly.
+- **Align the inputs of the teacher model and the student model.** For example, in the MMClassification framework, some widely used data augmentations such as [mixup](https://arxiv.org/abs/1710.09412) and [CutMix](https://arxiv.org/abs/1905.04899) are not implemented in Data Pipelines but in `forward_train`, and due to the randomness of these data augmentation methods, it may lead to a gap between the input of the teacher model and the student model.
+
+
+
+In general, the delivery mechanism allows us to deliver intermediate results between the teacher model and the student model **without adding additional code**, which reduces the hard coding in the source code.
+
+## Usage of Delivery
+
+Currently, we support two deliveries: `FunctionOutputsDelivery` and `MethodOutputsDelivery`, both of which inherit from `DistillDelivery`. These deliveries can be managed by `DistillDeliveryManager` or be used on their own.
+
+Their relationship is shown below.
+
+
+
+### FunctionOutputsDelivery
+
+`FunctionOutputsDelivery` is used to align the **function's** intermediate results between the teacher model and the student model.
+
+```{note}
+When initializing `FunctionOutputsDelivery`, you need to pass `func_path` argument, which requires extra attention. For example,
+`anchor_inside_flags` is a function in mmdetection to check whether the
+anchors are inside the border. This function is in
+`mmdet/core/anchor/utils.py` and used in
+`mmdet/models/dense_heads/anchor_head`. Then the `func_path` should be
+`mmdet.models.dense_heads.anchor_head.anchor_inside_flags` but not
+`mmdet.core.anchor.utils.anchor_inside_flags`.
+```
+
+#### Case 1: Deliver a single function's output from the teacher to the student.
+
+```Python
+import random
+
+from mmrazor.core import FunctionOutputsDelivery
+
+# Note: `toy_func` is assumed to live in a module named `toy_module`, so that
+# it can be referenced below both by `func_path` and as `toy_module.toy_func`.
+def toy_func() -> int:
+    return random.randint(0, 1000000)
+
+delivery = FunctionOutputsDelivery(max_keep_data=1, func_path='toy_module.toy_func')
+
+# override_data is False, which means the data is not overridden with the
+# recorded data. So it will get the original output of toy_func in the teacher
+# model, and the output is also recorded to be delivered to the student.
+delivery.override_data = False
+with delivery:
+ output_teacher = toy_module.toy_func()
+
+# override_data is True, which means the data is overridden with the recorded
+# data, so it will get the recorded output of toy_func from the teacher model
+# rather than a newly computed one for the student.
+delivery.override_data = True
+with delivery:
+ output_student = toy_module.toy_func()
+
+print(output_teacher == output_student)
+```
+
+Out:
+
+```Python
+True
+```
+
+#### Case 2: Deliver multiple function outputs from the teacher to the student.
+
+If a function is executed more than once during the forward of the teacher model, all of its outputs will be used to override the corresponding function outputs of the student model.
+
+```{note}
+Delivery order is first-in first-out.
+```
+
+```Python
+delivery = FunctionOutputsDelivery(
+ max_keep_data=2, func_path='toy_module.toy_func')
+
+delivery.override_data = False
+with delivery:
+ output1_teacher = toy_module.toy_func()
+ output2_teacher = toy_module.toy_func()
+
+delivery.override_data = True
+with delivery:
+ output1_student = toy_module.toy_func()
+ output2_student = toy_module.toy_func()
+
+print(output1_teacher == output1_student and output2_teacher == output2_student)
+```
+
+Out:
+
+```Python
+True
+```
+
+### MethodOutputsDelivery
+
+`MethodOutputsDelivery` is used to align the **method's** intermediate results between the teacher model and the student model.
+
+#### Case: Align the inputs of the teacher model and the student model
+
+Here we use mixup as an example to show how to align the inputs of the teacher model and the student model.
+
+- Without Delivery
+
+```Python
+# main.py
+import torch
+
+from mmcls.models.utils import Augments
+from mmrazor.core import MethodOutputsDelivery
+
+augments_cfg = dict(type='BatchMixup', alpha=1., num_classes=10, prob=1.0)
+augments = Augments(augments_cfg)
+
+imgs = torch.randn(2, 3, 32, 32)
+label = torch.randint(0, 10, (2,))
+
+imgs_teacher, label_teacher = augments(imgs, label)
+imgs_student, label_student = augments(imgs, label)
+
+print(torch.equal(label_teacher, label_student))
+print(torch.equal(imgs_teacher, imgs_student))
+```
+
+Out:
+
+```Python
+False
+False
+```
+
+The results are different due to the randomness of mixup.
+
+- With Delivery
+
+```Python
+delivery = MethodOutputsDelivery(
+ max_keep_data=1, method_path='mmcls.models.utils.Augments.__call__')
+
+delivery.override_data = False
+with delivery:
+ imgs_teacher, label_teacher = augments(imgs, label)
+
+delivery.override_data = True
+with delivery:
+ imgs_student, label_student = augments(imgs, label)
+
+print(torch.equal(label_teacher, label_student))
+print(torch.equal(imgs_teacher, imgs_student))
+```
+
+Out:
+
+```Python
+True
+True
+```
+
+The randomness is eliminated by using `MethodOutputsDelivery`.
+
+### DistillDeliveryManager
+
+`DistillDeliveryManager` is actually a context manager, used to manage deliveries. When entering the `DistillDeliveryManager`, all managed deliveries will be started.
+
+With the help of `DistillDeliveryManager`, we are able to manage several different DistillDeliveries with as little code as possible, thereby reducing the possibility of errors.
+
+#### Case: Manage deliveries with DistillDeliveryManager
+
+```Python
+import torch
+from mmcv.utils import ConfigDict
+
+from mmcls.models.utils import Augments
+from mmrazor.core import DistillDeliveryManager
+
+augments_cfg = dict(type='BatchMixup', alpha=1., num_classes=10, prob=1.0)
+augments = Augments(augments_cfg)
+
+distill_deliveries = [
+ ConfigDict(type='MethodOutputs', max_keep_data=1,
+ method_path='mmcls.models.utils.Augments.__call__')]
+
+# instantiate DistillDeliveryManager
+manager = DistillDeliveryManager(distill_deliveries)
+
+imgs = torch.randn(2, 3, 32, 32)
+label = torch.randint(0, 10, (2,))
+
+manager.override_data = False
+with manager:
+ imgs_teacher, label_teacher = augments(imgs, label)
+
+manager.override_data = True
+with manager:
+ imgs_student, label_student = augments(imgs, label)
+
+print(torch.equal(label_teacher, label_student))
+print(torch.equal(imgs_teacher, imgs_student))
+```
+
+Out:
+
+```Python
+True
+True
+```
+
+## Reference
+
+\[1\] Zhang, Hongyi, et al. "mixup: Beyond empirical risk minimization." *arXiv* abs/1710.09412 (2017).
+
+\[2\] Yun, Sangdoo, et al. "Cutmix: Regularization strategy to train strong classifiers with localizable features." *ICCV* (2019).
+
+\[3\] Nguyen, Chuong H., et al. "Improving object detection by label assignment distillation." *WACV* (2022).
diff --git a/internlm_langchain/knowledge_base/MMRazor/content/faq.md b/internlm_langchain/knowledge_base/MMRazor/content/faq.md
new file mode 100644
index 00000000..318b08dc
--- /dev/null
+++ b/internlm_langchain/knowledge_base/MMRazor/content/faq.md
@@ -0,0 +1 @@
+# Frequently Asked Questions
diff --git a/internlm_langchain/knowledge_base/MMRazor/content/how_to_prune_your_model.md b/internlm_langchain/knowledge_base/MMRazor/content/how_to_prune_your_model.md
new file mode 100644
index 00000000..7a8f6005
--- /dev/null
+++ b/internlm_langchain/knowledge_base/MMRazor/content/how_to_prune_your_model.md
@@ -0,0 +1,134 @@
+# How to prune your model
+
+## Overview
+
+This section will introduce you to pruning your model. Before that, we suggest you read the document [User Guides: Pruning Framework](../../user_guides/pruning_user_guide.md) to have an overview of our pruning framework.
+
+First, we suppose your model is defined and trained using one OpenMMLab repo.
+Our pruning algorithms work as a wrapper around a model. To prune your model, you need to replace your model config with our algorithm config, which has a parameter 'architecture' to store your original model. The pipeline is shown below.
+
+
+
+After this replacement, the algorithm will prune your model during your training process.
+
+## How to Config an Algorithm
+
+All pruning algorithms are defined in mmrazor.models.algorithms.pruning. All algorithms have some shared pruning-related arguments, some specific arguments, and some shared mmengine.BaseModel arguments.
+
+Here we take pruning resnet34 using the l1-norm algorithm as an example. We use "mmcls::resnet/resnet34_8xb32_in1k.py" as a base config. Then we override the model config and use the original model config as the architecture of 'ItePruneAlgorithm'.
+
+```python
+_base_ = ['mmcls::resnet/resnet34_8xb32_in1k.py']
+
+stage_ratio_1 = 0.7
+stage_ratio_2 = 0.7
+stage_ratio_3 = 0.7
+stage_ratio_4 = 1.0
+
+target_pruning_ratio = {
+ 'backbone.layer1.2.conv2_(0, 64)_64': stage_ratio_1,
+ 'backbone.layer1.0.conv1_(0, 64)_64': stage_ratio_1,
+ 'backbone.layer1.1.conv1_(0, 64)_64': stage_ratio_1,
+ 'backbone.layer1.2.conv1_(0, 64)_64': stage_ratio_1,
+ 'backbone.layer2.0.conv1_(0, 128)_128': stage_ratio_2,
+ 'backbone.layer2.3.conv2_(0, 128)_128': stage_ratio_2,
+ 'backbone.layer2.1.conv1_(0, 128)_128': stage_ratio_2,
+ 'backbone.layer2.2.conv1_(0, 128)_128': stage_ratio_2,
+ 'backbone.layer2.3.conv1_(0, 128)_128': stage_ratio_2,
+ 'backbone.layer3.0.conv1_(0, 256)_256': stage_ratio_3,
+ 'backbone.layer3.5.conv2_(0, 256)_256': stage_ratio_3,
+ 'backbone.layer3.1.conv1_(0, 256)_256': stage_ratio_3,
+ 'backbone.layer3.2.conv1_(0, 256)_256': stage_ratio_3,
+ 'backbone.layer3.3.conv1_(0, 256)_256': stage_ratio_3,
+ 'backbone.layer3.4.conv1_(0, 256)_256': stage_ratio_3,
+ 'backbone.layer3.5.conv1_(0, 256)_256': stage_ratio_3,
+ 'backbone.layer4.0.conv1_(0, 512)_512': stage_ratio_4,
+ 'backbone.layer4.2.conv2_(0, 512)_512': stage_ratio_4,
+ 'backbone.layer4.1.conv1_(0, 512)_512': stage_ratio_4,
+ 'backbone.layer4.2.conv1_(0, 512)_512': stage_ratio_4
+}
+
+architecture = _base_.model
+
+model = dict(
+    _scope_='mmrazor',
+    _delete_=True,
+    type='ItePruneAlgorithm',
+    architecture=architecture,
+    mutator_cfg=dict(
+        type='BaseChannelMutator',
+        channel_unit_cfg=dict(
+            type='L1MutableChannelUnit',
+            default_args=dict(choice_mode='ratio')),
+        parse_cfg=dict(
+            type='BackwardTracer',
+            loss_calculator=dict(type='ImageClassifierPseudoLoss'))),
+    target_pruning_ratio=target_pruning_ratio,
+    step_epoch=1,
+    prune_times=1,
+    data_preprocessor=None,
+    init_cfg=None)
+```
+
+**Shared pruning-related arguments**: All pruning algorithms have two shared pruning-related arguments.
+
+- architecture
+  - The architecture defines the model to be pruned. Usually, you need to pass your original model config to this argument.
+- mutator_cfg
+ - The config of a mutator to manage the structure of your model. Usually, each algorithm has a frequently-used mutator. Please refer to the next section for more detail.
+
+**Specific arguments**:
+An algorithm may have its own specific arguments. You need to read its documentation to know how to configure them. Here, we only introduce the specific arguments of `ItePruneAlgorithm`.
+
+- target_pruning_ratio: target_pruning_ratio is a dict that uses the names of units as keys and the choice values as values. It indicates how many channels remain after pruning. You can use `python ./tools/pruning/get_channel_units.py --choice {config_file}` to get the choice template. Please refer to [How to Use our Config Tool for Pruning](./how_to_use_config_tool_of_pruning.md).
+- step_epoch: the epoch interval between two pruning operations.
+- prune_times: the number of pruning operations needed to reach the pruning target. Here, we prune resnet34 once, so we set it to 1.
+
+**Shared BaseModel arguments**:
+Our algorithms inherit from BaseModel, so each algorithm has shared arguments from BaseModel.
+
+- data_preprocessor: Used for pre-processing data sampled by the dataloader to the format accepted by `forward`.
+- init_cfg: Initialization config dict.
+
+## How to Config A Mutator
+
+A mutator is used to manage the structure of a model.
+
+Mutators have two arguments:
+
+- channel_unit_cfg: config of channel units. The config should follow the template below.
+
+ ```python
+ channel_unit_cfg = dict(
+ # type of used MutableChannelUnit
+ type ='XxxMutableChannelUnit',
+ # default args for MutableChananelUnit
+ default_args={},
+ units = {
+ # config of a unit
+ "xxx_unit_name": {
+ "init_args":{},
+ "channels":{},
+ },
+ ...
+ }
+ ),
+ ```
+
+ MutableChannelUnit decides how to generate a channel choice. It's important to choose the right MutableChannelUnit. Here, we choose 'L1MutableChannelUnit' to apply the l1-norm algorithm.
+
+- parse_cfg: parse_cfg defines the method to parse the model and get channel units.
+  There are three ways used in BaseChannelMutator to parse a model and get MutableChannelUnits, as sketched after this list.
+
+  1. Using a tracer. It needs parse_cfg to be the config of a tracer.
+  2. Using config. When parse_cfg\['type'\]='Config', it needs channel_unit_cfg\['units'\]\['xxx_unit_name'\] to have a key 'channels' to indicate channel units.
+  3. Using a model with pre-defined DynamicOps and MutableChannels. When parse_cfg\['type'\]='Predefined', the mutator will parse the dynamic ops in the model and get channel units.
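+
+  The following minimal sketches (the keys follow the descriptions above) illustrate the three options; the tracer variant is the one used in the example config of this page.
+
+  ```python
+  # 1. Using a tracer.
+  parse_cfg = dict(
+      type='BackwardTracer',
+      loss_calculator=dict(type='ImageClassifierPseudoLoss'))
+
+  # 2. Using config: each unit in channel_unit_cfg['units'] must carry 'channels'.
+  parse_cfg = dict(type='Config')
+
+  # 3. Using a model with pre-defined DynamicOps and MutableChannels.
+  parse_cfg = dict(type='Predefined')
+  ```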
+
+In the example above, we directly use a tracer to parse the model.
+We also provide a tool to help you configure the mutator; please refer to [How to Use our Config Tool for Pruning](./how_to_use_config_tool_of_pruning.md).
+Besides, please refer to [ChannelMutator](../../../../mmrazor/models/mutators/channel_mutator/channel_mutator.ipynb) for more details about ChannelMutator.
+
+## End
+
+After configuring the algorithm, you can rerun the config file with a pretrained checkpoint to prune your model.
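+
+For example, a minimal launch command (paths are placeholders) using the standard OpenMMLab training entry point could look like this:
+
+```shell
+# Train with the pruning config; `load_from` points to the pretrained checkpoint
+# of the original model.
+python ./tools/train.py ${PRUNE_CONFIG_FILE} --cfg-options load_from=${PRETRAINED_CKPT}
+```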
diff --git a/internlm_langchain/knowledge_base/MMRazor/content/how_to_use_config_tool_of_pruning.md b/internlm_langchain/knowledge_base/MMRazor/content/how_to_use_config_tool_of_pruning.md
new file mode 100644
index 00000000..413bf4b3
--- /dev/null
+++ b/internlm_langchain/knowledge_base/MMRazor/content/how_to_use_config_tool_of_pruning.md
@@ -0,0 +1,239 @@
+# How to Use our Config Tool for Pruning
+
+## How We Get MutableChannelUnits Automatically
+
+Our pruning framework can automatically parse a model and get MutableChannelUnits.
+It makes it easy to prune new models.
+
+The parsing process is placed in `ChannelMutator.prepare_from_supernet`. We first trace the model and get a graph, then we parse the graph and get MutableChannelUnits.
+
+
+
+## How to Get ChannelUnit Config Template
+
+To make the configuration of ChannelUnit easy, we provide an interface to get the config template: ChannelMutator.config_template(). It returns a config dict. The config\['channel_unit_cfg'\]\['units'\] stores all parsed MutableChannelUnits.
+
+```python
+def config_template(self,
+ only_mutable_units=False,
+ with_unit_init_args=False,
+ with_channels=False):
+ """Config template of the mutator.
+
+ Args:
+ only_mutable_units (bool, optional): Whether only return config of
+ prunable units. It can omit unmutable MutableChannelUnits
+ to decrease the length of the config. Defaults to False.
+            with_unit_init_args (bool, optional): Whether to return the
+                init_args of units. Set it to True when you want to change
+                the init args of units. Defaults to False.
+            with_channels (bool, optional): Whether to return channel info.
+                The channel info can initialize the units without a tracer.
+                Set it to True when you want to prune your model without a
+                tracer next time. Defaults to False.
+
+ Example:
+ dict(
+ channel_unit_cfg = dict(
+ # type of used MutableChannelUnit
+ type ='XxxMutableChannelUnit',
+ # default args for MutableChananelUnit
+ default_args={},
+ # config of units
+ units = {
+ # config of a unit
+ "xxx_unit_name": {
+ 'init_args':{}, # if with_unit_init_args
+ 'channels':{} # if with_channels
+ },
+ ...
+ }
+ ),
+ # config of tracer
+ parse_cfg={}
+ )
+
+
+ About the detail of the config of each unit, please refer to
+ MutableChannelUnit.config_template()
+ """
+```
+
+Note that the name of a unit is generated automatically according to its content; avoid changing the name in the config.
+
+Here, we give an example of getting a config template using code.
+
+```python
+from mmrazor.models.mutators import ChannelMutator
+from torchvision.models import resnet34
+model = resnet34()
+# initialize a ChannelMutator object
+mutator = ChannelMutator(
+ channel_unit_cfg=dict(
+ type='SequentialMutableChannelUnit',
+ default_args=dict(choice_mode='ratio'),
+ units={},
+ ),
+ parse_cfg=dict(
+ type='ChannelAnalyzer',
+ demo_input=(1, 3, 224, 224),
+ tracer_type='BackwardTracer'))
+# init the ChannelMutator object with a model
+mutator.prepare_from_supernet(model)
+config=mutator.config_template(with_unit_init_args=True)
+print(config)
+# {
+# 'type': 'ChannelMutator',
+# 'channel_unit_cfg': {
+# 'type': 'SequentialMutableChannelUnit',
+# 'default_args': {
+# 'choice_mode': 'ratio'
+# },
+# 'units': {
+# 'conv1_(0, 3)_3': {
+# 'init_args': {
+# 'num_channels': 3,
+# 'choice_mode': 'ratio',
+# ...
+# },
+# 'choice': 1.0
+# },
+# ...
+# }
+# },
+# 'parse_cfg': {
+# type='ChannelAnalyzer',
+# demo_input=(1, 3, 224, 224),
+# tracer_type='BackwardTracer'
+# }
+# }
+```
+
+Besides, it's also easy to initialize a new mutator using the config dict.
+
+```python
+# follow the code above
+from mmrazor.registry import MODELS
+mutator2=MODELS.build(config)
+mutator2.prepare_from_supernet(resnet34())
+```
+
+To make your development smoother, we provide a command-line tool to parse a model and return the config template.
+
+```shell
+$ python ./tools/pruning/get_channel_units.py -h
+
+usage: pruning/get_channel_units.py [-h] [-c] [-i] [--choice] [-o OUTPUT_PATH] config
+
+Get channel unit of a model.
+
+positional arguments:
+ config config of the model
+
+optional arguments:
+ -h, --help show this help message and exit
+ -c, --with-channel output with channel config
+ -i, --with-init-args output with init args
+ --choice output choices template. When this flag is activated, -c and -i will be ignored
+ -o OUTPUT_PATH, --output-path OUTPUT_PATH
+ the file path to store channel unit info
+```
+
+Take the algorithm Slimmable Network as an example.
+
+```shell
+python ./tools/pruning/get_channel_units.py ./configs/pruning/mmcls/autoslim/autoslim_mbv2_1.5x_slimmable_subnet_8xb256_in1k.py
+
+# {
+# "type":"SlimmableChannelMutator",
+# "channel_unit_cfg":{
+# "type":"SlimmableChannelUnit",
+# "default_args":{},
+# "units":{
+# "backbone.conv1.conv_(0, 3)_3":{
+# "choice":3
+# },
+# "backbone.conv1.conv_(0, 48)_48":{
+# "choice":32
+# },
+ ...
+# }
+# },
+# "parse_cfg":{
+# type='ChannelAnalyzer',
+# demo_input=(1, 3, 224, 224),
+# tracer_type='BackwardTracer'
+# }
+# }
+# }
+```
+
+The '-i' flag will return the config with the initialization arguments.
+
+```shell
+python ./tools/pruning/get_channel_units.py -i ./configs/pruning/mmcls/autoslim/autoslim_mbv2_1.5x_slimmable_subnet_8xb256_in1k.py
+
+# {
+# "type":"SlimmableChannelMutator",
+# "channel_unit_cfg":{
+# "type":"SlimmableChannelUnit",
+# "default_args":{},
+# "units":{
+# "backbone.conv1.conv_(0, 3)_3":{
+# "init_args":{
+# "num_channels":3,
+# "divisor":1,
+# "min_value":1,
+# "min_ratio":0.9,
+# "candidate_choices":[
+# 3
+# ],
+# "choice_mode":"number"
+# },
+# "choice":3
+# },
+# ...
+# }
+# },
+# "parse_cfg":{
+# type='ChannelAnalyzer',
+# demo_input=(1, 3, 224, 224),
+# tracer_type='BackwardTracer'
+# }
+# }
+# }
+```
+
+With "--choice" flag, it will return the choice template, a dict which uses unit_name as key, and use the choice value as value.
+
+```shell
+python ./tools/pruning/get_channel_units.py -i ./configs/pruning/mmcls/autoslim/autoslim_mbv2_1.5x_slimmable_subnet_8xb256_in1k.py --choice
+
+# {
+# "backbone.conv1.conv_(0, 48)_48":32,
+# "backbone.layer1.0.conv.1.conv_(0, 24)_24":16,
+# "backbone.layer2.0.conv.0.conv_(0, 144)_144":144,
+# "backbone.layer2.0.conv.2.conv_(0, 40)_40":24,
+# "backbone.layer2.1.conv.0.conv_(0, 240)_240":176,
+# "backbone.layer3.0.conv.0.conv_(0, 240)_240":192,
+# "backbone.layer3.0.conv.2.conv_(0, 48)_48":48,
+# "backbone.layer3.1.conv.0.conv_(0, 288)_288":240,
+# "backbone.layer3.2.conv.0.conv_(0, 288)_288":144,
+# "backbone.layer4.0.conv.0.conv_(0, 288)_288":264,
+# "backbone.layer4.0.conv.2.conv_(0, 96)_96":88,
+# "backbone.layer4.1.conv.0.conv_(0, 576)_576":288,
+# "backbone.layer4.2.conv.0.conv_(0, 576)_576":336,
+# "backbone.layer4.3.conv.0.conv_(0, 576)_576":432,
+# "backbone.layer5.0.conv.0.conv_(0, 576)_576":576,
+# "backbone.layer5.0.conv.2.conv_(0, 144)_144":144,
+# "backbone.layer5.1.conv.0.conv_(0, 864)_864":576,
+# "backbone.layer5.2.conv.0.conv_(0, 864)_864":648,
+# "backbone.layer6.0.conv.0.conv_(0, 864)_864":864,
+# "backbone.layer6.0.conv.2.conv_(0, 240)_240":240,
+# "backbone.layer6.1.conv.0.conv_(0, 1440)_1440":1440,
+# "backbone.layer6.2.conv.0.conv_(0, 1440)_1440":1440,
+# "backbone.layer7.0.conv.0.conv_(0, 1440)_1440":1440,
+# "backbone.layer7.0.conv.2.conv_(0, 480)_480":480,
+# "backbone.conv2.conv_(0, 1920)_1920":1920
+# }
+```
diff --git a/internlm_langchain/knowledge_base/MMRazor/content/installation.md b/internlm_langchain/knowledge_base/MMRazor/content/installation.md
new file mode 100644
index 00000000..24550650
--- /dev/null
+++ b/internlm_langchain/knowledge_base/MMRazor/content/installation.md
@@ -0,0 +1,66 @@
+# Installation
+
+## Prerequisites
+
+In this section we demonstrate how to prepare an environment with PyTorch.
+
+MMRazor works on Linux, Windows and macOS. It requires Python 3.6+, CUDA 9.2+ and PyTorch 1.8+.
+
+**Note:**
+If you are experienced with PyTorch and have already installed it, just skip this part and jump to the [next section](#installation). Otherwise, you can follow these steps for the preparation.
+
+**Step 0.** Download and install Miniconda from the [official website](https://docs.conda.io/en/latest/miniconda.html).
+
+**Step 1.** Create a conda environment and activate it.
+
+```shell
+conda create --name openmmlab python=3.8 -y
+conda activate openmmlab
+```
+
+**Step 2.** Install PyTorch following [official instructions](https://pytorch.org/get-started/locally/), e.g.
+
+On GPU platforms:
+
+```shell
+conda install pytorch torchvision -c pytorch
+```
+
+On CPU platforms:
+
+```shell
+conda install pytorch torchvision cpuonly -c pytorch
+```
+
+## Installation
+
+We recommend that users follow our best practices to install MMRazor.
+
+### Best Practices
+
+**Step 0.** Install [MMCV](https://github.com/open-mmlab/mmcv) using [MIM](https://github.com/open-mmlab/mim).
+
+```shell
+pip install -U openmim
+mim install mmengine
+mim install "mmcv>=2.0.0"
+```
+
+**Step 1.** Install MMRazor.
+
+Case a: If you develop and run mmrazor directly, install it from source:
+
+```shell
+git clone -b main https://github.com/open-mmlab/mmrazor.git
+cd mmrazor
+pip install -v -e .
+# '-v' means verbose, or more output
+# '-e' means installing a project in editable mode,
+# thus any local modifications made to the code will take effect without reinstallation.
+```
+
+Case b: If you use mmrazor as a dependency or third-party package, install it with pip:
+
+```shell
+pip install "mmrazor>=1.0.0"
+```
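+
+To verify that MMRazor is installed correctly, you can check its version in a Python interpreter:
+
+```python
+import mmrazor
+print(mmrazor.__version__)
+```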
diff --git a/internlm_langchain/knowledge_base/MMRazor/content/model_zoo.md b/internlm_langchain/knowledge_base/MMRazor/content/model_zoo.md
new file mode 100644
index 00000000..fce86388
--- /dev/null
+++ b/internlm_langchain/knowledge_base/MMRazor/content/model_zoo.md
@@ -0,0 +1,30 @@
+# Model Zoo
+
+## Baselines
+
+| Type | Name | Link |
+| ------------ | :-------------: | :------------------------------------------------------------------------------------------------: |
+| nas | SPOS | [README.md](https://github.com/open-mmlab/mmrazor/tree/main/configs/nas/mmcls/spos) |
+| nas | DARTS | [README.md](https://github.com/open-mmlab/mmrazor/tree/main/configs/nas/mmcls/darts) |
+| nas | DetNAS | [README.md](https://github.com/open-mmlab/mmrazor/tree/main/configs/nas/mmdet/detnas) |
+| pruning | AutoSlim | [README.md](https://github.com/open-mmlab/mmrazor/tree/main/configs/pruning/mmcls/autoslim) |
+| pruning | L1-norm | [README.md](https://github.com/open-mmlab/mmrazor/tree/main//configs/pruning/mmcls/l1-norm) |
+| pruning | Group Fisher | [README.md](https://github.com/open-mmlab/mmrazor/tree/main/configs/pruning/base/group_fisher) |
+| pruning | DMCP | [README.md](https://github.com/open-mmlab/mmrazor/tree/main/configs/pruning/mmcls/dmcp) |
+| distill      |     ABLoss      | [README.md](https://github.com/open-mmlab/mmrazor/tree/main/configs/distill/mmcls/abloss) |
+| distill      |      BYOT       | [README.md](https://github.com/open-mmlab/mmrazor/tree/main/configs/distill/mmcls/byot) |
+| distill      |      DAFL       | [README.md](https://github.com/open-mmlab/mmrazor/tree/main/configs/distill/mmcls/dafl) |
+| distill      |      DFAD       | [README.md](https://github.com/open-mmlab/mmrazor/tree/main/configs/distill/mmcls/dfad) |
+| distill      |       DKD       | [README.md](https://github.com/open-mmlab/mmrazor/tree/main/configs/distill/mmcls/dkd) |
+| distill      | Factor Transfer | [README.md](https://github.com/open-mmlab/mmrazor/tree/main/configs/distill/mmcls/factor_transfer) |
+| distill      |     FitNets     | [README.md](https://github.com/open-mmlab/mmrazor/tree/main/configs/distill/mmcls/fitnets) |
+| distill      |       KD        | [README.md](https://github.com/open-mmlab/mmrazor/tree/main/configs/distill/mmcls/kd) |
+| distill      |       OFD       | [README.md](https://github.com/open-mmlab/mmrazor/tree/main/configs/distill/mmcls/ofd) |
+| distill      |       RKD       | [README.md](https://github.com/open-mmlab/mmrazor/tree/main/configs/distill/mmcls/rkd) |
+| distill      |      WSLD       | [README.md](https://github.com/open-mmlab/mmrazor/tree/main/configs/distill/mmcls/wsld) |
+| distill      |      ZSKT       | [README.md](https://github.com/open-mmlab/mmrazor/tree/main/configs/distill/mmcls/zskt) |
+| distill      |       CWD       | [README.md](https://github.com/open-mmlab/mmrazor/tree/main/configs/distill/mmdet/cwd) |
+| distill      |      FBKD       | [README.md](https://github.com/open-mmlab/mmrazor/tree/main/configs/distill/mmdet/fbkd) |
+| quantization | PTQ | [README.md](https://github.com/open-mmlab/mmrazor/tree/main/configs/quantization/ptq/base) |
+| quantization | QAT | [README.md](https://github.com/open-mmlab/mmrazor/tree/main/configs/quantization/qat/base) |
+| quantization | LSQ | [README.md](https://github.com/open-mmlab/mmrazor/tree/main/configs/quantization/qat/lsq) |
diff --git a/internlm_langchain/knowledge_base/MMRazor/content/mutable.md b/internlm_langchain/knowledge_base/MMRazor/content/mutable.md
new file mode 100644
index 00000000..8d59cfe7
--- /dev/null
+++ b/internlm_langchain/knowledge_base/MMRazor/content/mutable.md
@@ -0,0 +1,401 @@
+# Mutable
+
+## Introduction
+
+### What is Mutable
+
+`Mutable` is one of basic function components in NAS algorithms and some pruning algorithms, which makes supernet searchable by providing optional modules or parameters.
+
+To understand it better, we take the mutable module as an example to explain as follows.
+
+
+
+As shown in the figure above, `Mutable` is a container that holds some candidate operations, thus it can sample candidates to constitute the subnet. `Supernet` usually consists of multiple `Mutable`, therefore, `Supernet` will be searchable with the help of `Mutable`. And all candidate operations in `Mutable` constitute the search space of `SuperNet`.
+
+```{note}
+If you want to know more about the relationship between Mutable and Mutator, please refer to [Mutator](https://mmrazor.readthedocs.io/en/main/advanced_guides/mutator.html)
+```
+
+### Features
+
+#### 1. Support module mutable
+
+It is the common and basic function for NAS algorithms. We can use it to implement some classical one-shot NAS algorithms, such as [SPOS](https://arxiv.org/abs/1904.00420), [DetNAS](https://arxiv.org/abs/1903.10979) and so on.
+
+#### 2. Support parameter mutable
+
+To make it easier to implement more complicated and interesting algorithms, we support making some important parameters searchable, such as input channels, output channels, kernel size and so on.
+
+What is more, we can implement **dynamic op** by using mutable parameters.
+
+#### 3. Support deriving from mutable parameter
+
+Because of the restriction of defined architecture, there may be correlations between some mutable parameters, **such as concat and expand ratio.**
+
+```{note}
+If conv3 = concat (conv1, conv2)
+
+When out_channel (conv1) = 3, out_channel (conv2) = 4
+
+Then in_channel (conv3) must be 7 rather than mutable.
+
+So use derived mutable from conv1 and conv2 to generate in_channel (conv3)
+```
+
+With the help of derived mutable, we can meet these special requirements in some NAS algorithms and pruning algorithms. What is more, it can be used to deal with different granularity between search spaces.
+
+### Supported mutables
+
+
+
+As shown in the figure above.
+
+- **White blocks** stand for the basic classes, which include `BaseMutable` and `DerivedMethodMixin`. `BaseMutable` is the base class for all mutables, which defines the required properties and abstract methods. `DerivedMethodMixin` is a mixin class that provides mutable parameters with some useful methods to derive mutables.
+
+- **Gray blocks** stand for different types of base mutables.
+
+ ```{note}
+ Because there are correlations between channels of some layers, we divide mutable parameters into `MutableChannel` and `MutableValue`, so you can also think `MutableChannel` is a special `MutableValue`.
+ ```
+
+  To support mutable modules and mutable parameters, we provide the base classes `MutableModule`, `MutableChannel` and `MutableValue` to implement the required basic functions. We also add two types based on `MutableModule`, `OneshotMutableModule` and `DiffMutableModule`, to meet the requirements of different kinds of algorithms.
+
+  To support deriving from mutable parameters, we make `MutableChannel` and `MutableValue` inherit from both `BaseMutable` and `DerivedMethodMixin`, so that they can use the deriving functions provided by `DerivedMethodMixin`.
+
+- **Red blocks** and **green blocks** stand for registered classes that implement some specific algorithms, which means that you can use them directly in configs. If they do not meet your requirements, you can also customize your mutable based on our base classes. If you are interested in their realization, please refer to their docstring.
+
+## How to use existing mutables to configure searchable backbones
+
+We will use `OneShotMutableOP` to build a `SearchableShuffleNetV2` backbone as follows.
+
+1. Configure needed mutables
+
+```Python
+# we only use OneShotMutableOP, then take 4 ShuffleOP as its candidates.
+_STAGE_MUTABLE = dict(
+ _scope_='mmrazor',
+ type='OneShotMutableOP',
+ candidates=dict(
+ shuffle_3x3=dict(type='ShuffleBlock', kernel_size=3),
+ shuffle_5x5=dict(type='ShuffleBlock', kernel_size=5),
+ shuffle_7x7=dict(type='ShuffleBlock', kernel_size=7),
+ shuffle_xception=dict(type='ShuffleXception')))
+```
+
+2. Configure the `arch_setting` of `SearchableShuffleNetV2`
+
+```Python
+# Use the _STAGE_MUTABLE in various stages.
+arch_setting = [
+ # Parameters to build layers. 3 parameters are needed to construct a
+ # layer, from left to right: channel, num_blocks, mutable_cfg.
+ [64, 4, _STAGE_MUTABLE],
+ [160, 4, _STAGE_MUTABLE],
+ [320, 8, _STAGE_MUTABLE],
+ [640, 4, _STAGE_MUTABLE]
+]
+```
+
+3. Configure searchable backbone.
+
+```Python
+nas_backbone = dict(
+ _scope_='mmrazor',
+ type='SearchableShuffleNetV2',
+ widen_factor=1.0,
+ arch_setting=arch_setting)
+```
+
+Then you can use it in your architecture, as sketched below. If existing mutables do not meet your needs, you can also customize your own mutable.
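+
+For example, a minimal sketch of plugging the searchable backbone into an image-classifier config; the neck and head settings below are illustrative placeholders rather than values prescribed by MMRazor:
+
+```Python
+supernet = dict(
+    type='ImageClassifier',
+    backbone=nas_backbone,
+    neck=dict(type='GlobalAveragePooling'),
+    head=dict(
+        type='LinearClsHead',
+        num_classes=1000,
+        in_channels=1024,
+        loss=dict(type='CrossEntropyLoss', loss_weight=1.0),
+        topk=(1, 5)))
+```
+
+Such a `supernet` dict is what, for example, the SPOS algorithm config takes as its `architecture`.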
+
+## How to customize your mutable
+
+### About base mutable
+
+Before customizing mutables, we need to know what some base mutables do.
+
+**BaseMutable**
+
+In order to implement the searchable mechanism, mutables need to own some base functions, such as changing status from mutable to fixed, recording the current status and current choice and so on. So in `BaseMutable`, these relevant abstract methods and properties will be defined as follows.
+
+```Python
+# Copyright (c) OpenMMLab. All rights reserved.
+from abc import ABC, abstractmethod
+from typing import Dict, Generic, Optional, TypeVar
+
+from mmengine.model import BaseModule
+
+CHOICE_TYPE = TypeVar('CHOICE_TYPE')
+CHOSEN_TYPE = TypeVar('CHOSEN_TYPE')
+
+class BaseMutable(BaseModule, ABC, Generic[CHOICE_TYPE, CHOSEN_TYPE]):
+
+ def __init__(self,
+ alias: Optional[str] = None,
+ init_cfg: Optional[Dict] = None) -> None:
+ super().__init__(init_cfg=init_cfg)
+
+ self.alias = alias
+ self._is_fixed = False
+ self._current_choice: Optional[CHOICE_TYPE] = None
+
+ @property
+ def current_choice(self) -> Optional[CHOICE_TYPE]:
+ return self._current_choice
+
+ @current_choice.setter
+ def current_choice(self, choice: Optional[CHOICE_TYPE]) -> None:
+ self._current_choice = choice
+
+ @property
+ def is_fixed(self) -> bool:
+ return self._is_fixed
+
+ @is_fixed.setter
+ def is_fixed(self, is_fixed: bool) -> None:
+ ......
+ self._is_fixed = is_fixed
+
+ @abstractmethod
+ def fix_chosen(self, chosen: CHOSEN_TYPE) -> None:
+ pass
+
+ @abstractmethod
+ def dump_chosen(self) -> CHOSEN_TYPE:
+ pass
+
+ @property
+ @abstractmethod
+ def num_choices(self) -> int:
+ pass
+```
+
+**MutableModule**
+
+Inherited from `BaseMutable`, `MutableModule` not only owns its basic functions, but also needs some specialized functions to implement module mutables, such as getting all choices and executing forward computation.
+
+```Python
+# Copyright (c) OpenMMLab. All rights reserved.
+from abc import abstractmethod
+from typing import Any, Dict, List, Optional
+
+from ..base_mutable import CHOICE_TYPE, CHOSEN_TYPE, BaseMutable
+
+class MutableModule(BaseMutable[CHOICE_TYPE, CHOSEN_TYPE]):
+
+ def __init__(self,
+ module_kwargs: Optional[Dict[str, Dict]] = None,
+ **kwargs) -> None:
+ super().__init__(**kwargs)
+
+ self.module_kwargs = module_kwargs
+
+ @property
+ @abstractmethod
+ def choices(self) -> List[CHOICE_TYPE]:
+ """list: all choices. All subclasses must implement this method."""
+
+ @abstractmethod
+ def forward(self, x: Any) -> Any:
+ """Forward computation."""
+
+ @property
+ def num_choices(self) -> int:
+ """Number of choices."""
+ return len(self.choices)
+```
+
+If you want to know more about other types mutables, please refer to their docstring.
+
+### Steps of customizing mutables
+
+There are 4 steps to implement a custom mutable.
+
+1. Register a new mutable
+
+2. Implement abstract methods.
+
+3. Implement other methods.
+
+4. Import the class
+
+Then you can use your customized mutable in configs as in the previous chapter.
+
+Let's use `OneShotMutableOP` as an example for customizing mutable.
+
+#### 1. Register a new mutable
+
+First, you need to determine which type of mutable to implement. Then, you can implement your mutable faster by inheriting from the corresponding base mutable.
+
+Then create a new file `mmrazor/models/mutables/mutable_module/one_shot_mutable_module.py`, class `OneShotMutableOP` inherits from `OneShotMutableModule`.
+
+```Python
+# Copyright (c) OpenMMLab. All rights reserved.
+import random
+from abc import abstractmethod
+from typing import Any, Dict, List, Optional, Union
+
+import numpy as np
+import torch.nn as nn
+from torch import Tensor
+
+from mmrazor.registry import MODELS
+from ..base_mutable import CHOICE_TYPE, CHOSEN_TYPE
+from .mutable_module import MutableModule
+
+# Note: `OneShotMutableModule` (which inherits from `MutableModule`) is
+# defined earlier in this same file and provides the one-shot interface.
+@MODELS.register_module()
+class OneShotMutableOP(OneShotMutableModule[str, str]):
+ ...
+```
+
+#### 2. Implement abstract methods
+
+##### 2.1 Basic abstract methods
+
+These basic abstract methods are mainly from `BaseMutable` and `MutableModule`, such as `fix_chosen`, `dump_chosen`, `choices` and `num_choices`.
+
+```Python
+@MODELS.register_module()
+class OneShotMutableOP(OneShotMutableModule[str, str]):
+ ......
+
+ def fix_chosen(self, chosen: str) -> None:
+ """Fix mutable with subnet config. This operation would convert
+ `unfixed` mode to `fixed` mode. The :attr:`is_fixed` will be set to
+ True and only the selected operations can be retained.
+ Args:
+ chosen (str): the chosen key in ``MUTABLE``. Defaults to None.
+ """
+ if self.is_fixed:
+ raise AttributeError(
+ 'The mode of current MUTABLE is `fixed`. '
+ 'Please do not call `fix_chosen` function again.')
+
+ for c in self.choices:
+ if c != chosen:
+ self._candidates.pop(c)
+
+ self._chosen = chosen
+ self.is_fixed = True
+
+ def dump_chosen(self) -> str:
+ assert self.current_choice is not None
+
+ return self.current_choice
+
+ @property
+ def choices(self) -> List[str]:
+ """list: all choices. """
+ return list(self._candidates.keys())
+
+ @property
+ def num_choices(self):
+ return len(self.choices)
+```
+
+##### 2.2 Specified abstract methods
+
+`OneShotMutableModule` defines additional abstract methods related to sampling and forwarding, such as `sample_choice`, `forward_choice`, `forward_fixed` and `forward_all`, so we also need to implement them.
+
+```Python
+@MODELS.register_module()
+class OneShotMutableOP(OneShotMutableModule[str, str]):
+ ......
+
+ def sample_choice(self) -> str:
+ """uniform sampling."""
+ return np.random.choice(self.choices, 1)[0]
+
+ def forward_fixed(self, x: Any) -> Tensor:
+ """Forward with the `fixed` mutable.
+ Args:
+ x (Any): x could be a Torch.tensor or a tuple of
+ Torch.tensor, containing input data for forward computation.
+ Returns:
+ Tensor: the result of forward the fixed operation.
+ """
+ return self._candidates[self._chosen](x)
+
+ def forward_choice(self, x: Any, choice: str) -> Tensor:
+ """Forward with the `unfixed` mutable and current choice is not None.
+ Args:
+ x (Any): x could be a Torch.tensor or a tuple of
+ Torch.tensor, containing input data for forward computation.
+ choice (str): the chosen key in ``OneShotMutableOP``.
+ Returns:
+ Tensor: the result of forward the ``choice`` operation.
+ """
+ assert isinstance(choice, str) and choice in self.choices
+ return self._candidates[choice](x)
+
+ def forward_all(self, x: Any) -> Tensor:
+ """Forward all choices. Used to calculate FLOPs.
+ Args:
+ x (Any): x could be a Torch.tensor or a tuple of
+ Torch.tensor, containing input data for forward computation.
+ Returns:
+ Tensor: the result of forward all of the ``choice`` operation.
+ """
+ outputs = list()
+ for op in self._candidates.values():
+ outputs.append(op(x))
+ return sum(outputs)
+```
+
+#### 3. Implement other methods
+
+After implementing the required abstract methods, we need to add some special methods of our own, such as `_build_ops`, which is needed to build the candidate operations for sampling.
+
+```Python
+@MODELS.register_module()
+class OneShotMutableOP(OneShotMutableModule[str, str]):
+ ......
+
+ @staticmethod
+ def _build_ops(
+ candidates: Union[Dict[str, Dict], nn.ModuleDict],
+ module_kwargs: Optional[Dict[str, Dict]] = None) -> nn.ModuleDict:
+ """Build candidate operations based on choice configures.
+ Args:
+ candidates (dict[str, dict] | :obj:`nn.ModuleDict`): the configs
+ for the candidate operations or nn.ModuleDict.
+ module_kwargs (dict[str, dict], optional): Module initialization
+ named arguments.
+ Returns:
+ ModuleDict (dict[str, Any], optional): the key of ``ops`` is
+ the name of each choice in configs and the value of ``ops``
+ is the corresponding candidate operation.
+ """
+ if isinstance(candidates, nn.ModuleDict):
+ return candidates
+
+ ops = nn.ModuleDict()
+ for name, op_cfg in candidates.items():
+ assert name not in ops
+ if module_kwargs is not None:
+ op_cfg.update(module_kwargs)
+ ops[name] = MODELS.build(op_cfg)
+ return ops
+```
+
+#### 4. Import the class
+
+You can either add the following line to `mmrazor/models/mutables/mutable_module/__init__.py`
+
+```Python
+from .one_shot_mutable_module import OneShotMutableOP
+
+__all__ = ['OneShotMutableOP']
+```
+
+or alternatively add
+
+```Python
+custom_imports = dict(
+ imports=['mmrazor.models.mutables.mutable_module.one_shot_mutable_module'],
+ allow_failed_imports=False)
+```
+
+to the config file to avoid modifying the original code.
+
+The customization of `OneShotMutableOP` is now complete, and you can use it directly in your algorithm.
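+
+As a quick follow-up, the customized mutable can then be referenced in a config. The snippet below is only a minimal, hypothetical sketch: the candidate operation types (`ShuffleBlock` with different kernel sizes) are assumptions for illustration and must already be registered in `MODELS`.
+
+```Python
+# a minimal, hypothetical config sketch using the customized mutable
+mutable_cfg = dict(
+    type='OneShotMutableOP',
+    candidates=dict(
+        shuffle_3x3=dict(type='ShuffleBlock', kernel_size=3),
+        shuffle_5x5=dict(type='ShuffleBlock', kernel_size=5)))
+```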
diff --git a/internlm_langchain/knowledge_base/MMRazor/content/mutator.md b/internlm_langchain/knowledge_base/MMRazor/content/mutator.md
new file mode 100644
index 00000000..aa28a199
--- /dev/null
+++ b/internlm_langchain/knowledge_base/MMRazor/content/mutator.md
@@ -0,0 +1,243 @@
+# Mutator
+
+## Introduction
+
+### What is Mutator
+
+**Mutator** is one of the algorithm components; it provides useful functions for mutable management, such as sampling choices, setting choices and so on. With the Mutator's help, you can implement some NAS or pruning algorithms quickly.
+
+### What is the relationship between Mutator and Mutable
+
+
+
+In a word, Mutator is the manager of Mutable. Each type of mutable is commonly managed by its own corresponding mutator.
+
+As shown in the figure, Mutable is a component of the supernet, so Mutator can implement functions related to subnets of the supernet by handling Mutables.
+
+### Supported mutators
+
+In MMRazor, we have implemented some mutators; their relationship is shown below.
+
+
+
+`BaseMutator`: Base class for all mutators. It defines some abstract methods that all mutators must support.
+
+`ModuleMutator` / `ChannelMutator`: Two different types of mutators for handling mutable modules and mutable channels respectively.
+
+```{note}
+Please refer to [Mutable](https://mmrazor.readthedocs.io/en/main/advanced_guides/mutable.html) for more details about different types of mutable.
+```
+
+`OneShotModuleMutator` / `DiffModuleMutator`: Inheriting from `ModuleMutator`, they are for implementing different types of algorithms, such as [SPOS](https://arxiv.org/abs/1904.00420), [Darts](https://arxiv.org/abs/1806.09055) and so on.
+
+`OneShotChannelMutator` / `SlimmableChannelMutator`: Inheriting from `ChannelMutator`, they also meet the needs of different types of algorithms, such as [AutoSlim](https://arxiv.org/abs/1903.11728).
+
+## How to use existing mutators
+
+You can just use them directly in configs as below:
+
+```Python
+supernet = dict(
+ ...
+ )
+
+model = dict(
+ type='mmrazor.SPOS',
+ architecture=supernet,
+ mutator=dict(type='mmrazor.OneShotModuleMutator'))
+```
+
+If the existing mutators do not meet your needs, you can also customize your own mutator.
+
+## How to customize your mutator
+
+All mutators need to implement at least the following two interfaces:
+
+- `prepare_from_supernet()`
+
+  - Make some necessary preparations according to the given supernet. These preparations may include, but are not limited to, grouping the search space and initializing the mutator with the parameters it needs.
+
+- `search_groups`
+
+  - The grouped search space.
+
+ - Note that **search groups** and **search space** are two different concepts. The latter defines what choices can be used for searching. The former groups the search space, and searchable blocks that are grouped into the same group will share the same search space and the same sample result.
+
+ - ```Python
+ # Example
+ search_space = {op1, op2, op3, op4}
+ search_group = {0: [op1, op2], 1: [op3, op4]}
+ ```
+
+There are 4 steps to implement a custom mutator.
+
+1. Register a new mutator
+
+2. Implement abstract methods
+
+3. Implement other methods
+
+4. Import the class
+
+Then you can use your customized mutator in configs as in the previous chapter.
+
+Let's use `OneShotModuleMutator` as an example for customizing mutator.
+
+### 1. Register a new mutator
+
+First, you need to determine which type of mutator to implement, so that you can implement your mutator faster by inheriting from the corresponding base mutator.
+
+Then create a new file `mmrazor/models/mutators/module_mutator/one_shot_module_mutator.py`, in which the class `OneShotModuleMutator` inherits from `ModuleMutator`.
+
+```Python
+from mmrazor.registry import MODELS
+from .module_mutator import ModuleMutator
+
+@MODELS.register_module()
+class OneShotModuleMutator(ModuleMutator):
+ ...
+```
+
+### 2. Implement abstract methods
+
+2.1. Rewrite the `mutable_class_type` property
+
+```Python
+@MODELS.register_module()
+class OneShotModuleMutator(ModuleMutator):
+
+ @property
+ def mutable_class_type(self):
+ """One-shot mutable class type.
+ Returns:
+ Type[OneShotMutableModule]: Class type of one-shot mutable.
+ """
+ return OneShotMutableModule
+```
+
+2.2. Rewrite `search_groups` and `prepare_from_supernet()`
+
+As the `prepare_from_supernet()` method and the `search_groups` property are already implemented in `ModuleMutator` and we don't need to add our own logic, the second step is already done.
+
+If you need to implement them yourself, you can refer to the following.
+
+2.3. **Understand** **`search_groups`** **(optional)**
+
+Let's take an example to see what the default `search_groups` does.
+
+```Python
+import torch.nn as nn
+from torch import Tensor
+
+from mmrazor.models import OneShotModuleMutator, OneShotMutableModule
+
+class SearchableModel(nn.Module):
+
+    def __init__(self, one_shot_op_cfg):
+        super().__init__()
+        # assume `OneShotMutableModule` contains 4 choices:
+        # choice1, choice2, choice3 and choice4
+        self.choice_block1 = OneShotMutableModule(**one_shot_op_cfg)
+        self.choice_block2 = OneShotMutableModule(**one_shot_op_cfg)
+        self.choice_block3 = OneShotMutableModule(**one_shot_op_cfg)
+
+    def forward(self, x: Tensor) -> Tensor:
+        x = self.choice_block1(x)
+        x = self.choice_block2(x)
+        x = self.choice_block3(x)
+        return x
+
+# `one_shot_op_cfg` is assumed to be a prepared config dict for the mutable
+supernet = SearchableModel(one_shot_op_cfg)
+mutator1 = OneShotModuleMutator()
+# build mutator1 from the supernet.
+mutator1.prepare_from_supernet(supernet)
+>>> mutator1.search_groups.keys()
+dict_keys([0, 1, 2])
+```
+
+In this case, each `OneShotMutableModule` is assigned to its own group, so there are 3 search groups.
+
+If you want to customize the grouping according to your requirements, you can do so by passing the `custom_group` argument.
+
+```Python
+custom_group = [
+ ['op1', 'op2'],
+ ['op3']
+]
+mutator2 = OneShotModuleMutator(custom_group)
+mutator2.prepare_from_supernet(supernet)
+```
+
+Then `choice_block1` and `choice_block2` will share the same search space and the same sampling result, while `choice_block3` will have its own independent search space. Thus, there are only 2 search groups.
+
+```Python
+>>> mutator2.search_groups.keys()
+dict_keys([0, 1])
+```
+
+### 3. Implement other methods
+
+After implementing the required methods, we need to add some special methods, such as `sample_choices` and `set_choices`.
+
+```Python
+from typing import Any, Dict
+
+from mmrazor.registry import MODELS
+from ...mutables import OneShotMutableModule
+from .module_mutator import ModuleMutator
+
+@MODELS.register_module()
+class OneShotModuleMutator(ModuleMutator):
+
+ def sample_choices(self) -> Dict[int, Any]:
+ """Sampling by search groups.
+ The sampling result of the first mutable of each group is the sampling
+ result of this group.
+ Returns:
+ Dict[int, Any]: Random choices dict.
+ """
+ random_choices = dict()
+ for group_id, modules in self.search_groups.items():
+ random_choices[group_id] = modules[0].sample_choice()
+
+ return random_choices
+
+ def set_choices(self, choices: Dict[int, Any]) -> None:
+ """Set mutables' current choice according to choices sample by
+ :func:`sample_choices`.
+ Args:
+ choices (Dict[int, Any]): Choices dict. The key is group_id in
+ search groups, and the value is the sampling results
+ corresponding to this group.
+ """
+ for group_id, modules in self.search_groups.items():
+ choice = choices[group_id]
+ for module in modules:
+ module.current_choice = choice
+
+ @property
+ def mutable_class_type(self):
+ """One-shot mutable class type.
+ Returns:
+ Type[OneShotMutableModule]: Class type of one-shot mutable.
+ """
+ return OneShotMutableModule
+```
+
+### 4. Import the class
+
+You can either add the following line to `mmrazor/models/mutators/module_mutator/__init__.py`
+
+```Python
+from .one_shot_module_mutator import OneShotModuleMutator
+
+__all__ = ['OneShotModuleMutator']
+```
+
+or alternatively add
+
+```Python
+custom_imports = dict(
+ imports=['mmrazor.models.mutators.module_mutator.one_shot_module_mutator'],
+ allow_failed_imports=False)
+```
+
+to the config file to avoid modifying the original code.
+
+The customization of `OneShotModuleMutator` is now complete, and you can use it directly in your algorithm.
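+
+As a short usage sketch built only on the methods implemented above, and assuming `supernet` is a searchable model such as the `SearchableModel` defined earlier, the customized mutator can be driven like this:
+
+```Python
+# hedged usage sketch of the customized mutator
+mutator = OneShotModuleMutator()
+mutator.prepare_from_supernet(supernet)  # group the search space
+
+choices = mutator.sample_choices()       # {group_id: choice}
+mutator.set_choices(choices)             # apply the sampled subnet
+```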
diff --git a/internlm_langchain/knowledge_base/MMRazor/content/overview.md b/internlm_langchain/knowledge_base/MMRazor/content/overview.md
new file mode 100644
index 00000000..4249192f
--- /dev/null
+++ b/internlm_langchain/knowledge_base/MMRazor/content/overview.md
@@ -0,0 +1,105 @@
+# Overview
+
+## Why MMRazor
+
+MMRazor is a model compression toolkit for model slimming, which includes 4 mainstream technologies:
+
+- Neural Architecture Search (NAS)
+- Pruning
+- Knowledge Distillation (KD)
+- Quantization
+
+It is a part of the [OpenMMLab](https://openmmlab.com/) project. If you want to use it now, please refer to [Installation](https://mmrazor.readthedocs.io/en/main/get_started/installation.html).
+
+### Major features:
+
+- **Compatibility**
+
+MMRazor can be easily applied to various projects in OpenMMLab, due to the similar architecture design of OpenMMLab as well as the decoupling of slimming algorithms and vision tasks.
+
+- **Flexibility**
+
+Different algorithms, e.g., NAS, pruning and KD, can be incorporated in a plug-n-play manner to build a more powerful system.
+
+- **Convenience**
+
+With better modular design, developers can implement new model compression algorithms with only a few codes, or even by simply modifying config files.
+
+## Design and Implement
+
+
+
+### Design
+
+There are 3 layers (**Application** / **Algorithm** / **Component**) in the overall design. MMRazor mainly covers **Component** and **Algorithm**, while **Application** consists of some OpenMMLab upstream repos, such as MMClassification, MMDetection, MMSegmentation and so on.
+
+**Component** provides many useful functions for quickly implementing **Algorithm**. And thanks to OpenMMLab's powerful and highly flexible config mode and registry mechanism, **Algorithm** can be conveniently applied to **Application**.
+
+How can our lightweight algorithms be applied to upstream tasks? Please refer to the section below.
+
+### Implement
+
+In OpenMMLab, implementing vision tasks commonly involves 3 parts (model / dataset / schedule). Likewise, implementing a lightweight model in MMRazor also involves 3 parts (algorithm / dataset / schedule).
+
+`Algorithm` consists of `architecture` and `components`.
+
+`Architecture` is similar to `model` in the upstream repos. You can choose to directly use the original `model` or customize a new `model` as your architecture according to different tasks. For example, you can directly use ResNet-34 and ResNet-18 from MMClassification to implement some KD algorithms, but in NAS you may need to customize a searchable model.
+
+`Components` consist of various special functions for supporting different lightweight algorithms. They can be used directly in configs because they are registered into MMEngine. Thus, you can pick the components you need to quickly implement your algorithm. For example, you may need `mutator` / `mutable` / `searchable backbone` if you want to implement a NAS algorithm, and you can pick from `distill loss` / `recorder` / `delivery` / `connector` if you need a KD algorithm.
+
+Please refer to the next section for more details about **Implement**.
+
+```{note}
+The arg name of `algorithm` in configs is **model** rather than **algorithm**, in order to get better support from MMCV and MMEngine.
+```
+
+## Key concepts
+
+For better understanding and using MMRazor, it is highly recommended to read the following user documents according to your own needs.
+
+**Global**
+
+- [Algorithm](https://mmrazor.readthedocs.io/en/main/advanced_guides/algorithm.html)
+
+**NAS & Pruning**
+
+- [Mutator](https://mmrazor.readthedocs.io/en/main/advanced_guides/mutator.html)
+- [Mutable](https://mmrazor.readthedocs.io/en/main/advanced_guides/mutable.html)
+
+**KD**
+
+- [Delivery](https://mmrazor.readthedocs.io/en/main/advanced_guides/delivery.html)
+- [Recorder](https://mmrazor.readthedocs.io/en/main/advanced_guides/recorder.html)
+
+## User guide
+
+If you want to get started with MMRazor quickly, you can refer to the documents below.
+
+- [Learn about Configs](https://mmrazor.readthedocs.io/en/main/user_guides/1_learn_about_config.html)
+- [Train different types algorithms](https://mmrazor.readthedocs.io/en/main/user_guides/2_train_different_types_algorithms.html)
+- [Train with different devices](https://mmrazor.readthedocs.io/en/main/user_guides/3_train_with_different_devices.html)
+- [Test a model](https://mmrazor.readthedocs.io/en/main/user_guides/4_test_a_model.html)
+
+## Tutorials
+
+We provide the following general tutorials according to some typical requirements. If you want to further use MMRazor, you can refer to our source code and API Reference.
+
+**Tutorial list**
+
+- [Customize Architectures](https://mmrazor.readthedocs.io/en/main/advanced_guides/customize_architectures.html)
+- [Customize NAS algorithms](https://mmrazor.readthedocs.io/en/main/advanced_guides/customize_nas_algorithms.html)
+- [Customize Pruning algorithms](https://mmrazor.readthedocs.io/en/main/advanced_guides/customize_pruning_algorithms.html)
+- [Customize KD algorithms](https://mmrazor.readthedocs.io/en/main/advanced_guides/customize_kd_algorithms.html)
+- [Customize mixed algorithms](https://mmrazor.readthedocs.io/en/main/advanced_guides/customize_mixed_algorithms.html)
+- [Apply existing algorithms to new tasks](https://mmrazor.readthedocs.io/en/main/advanced_guides/apply_existing_algorithms_to_new_tasks.html)
+
+## FAQ
+
+If you encounter trouble using MMRazor, you can check whether your question has already been answered in the [FAQ](https://mmrazor.readthedocs.io/en/main/notes/faq.html). If not, you are welcome to open a [GitHub issue](https://github.com/open-mmlab/mmrazor/issues) for support, and we will reply as soon as possible.
+
+## Get support and contribute back
+
+MMRazor is maintained on the [MMRazor Github repository](https://github.com/open-mmlab/mmrazor). We collect feedback and new proposals/ideas on Github. You can:
+
+- Open a [GitHub issue](https://github.com/open-mmlab/mmrazor/issues) for bugs and feature requests.
+- Open a [pull request](https://github.com/open-mmlab/mmrazor/pulls) to contribute code (make sure to read the [contribution guide](https://mmrazor.readthedocs.io/en/main/notes/contribution_guide.html) before doing this).
diff --git a/internlm_langchain/knowledge_base/MMRazor/content/pruning_user_guide.md b/internlm_langchain/knowledge_base/MMRazor/content/pruning_user_guide.md
new file mode 100644
index 00000000..a49b9de4
--- /dev/null
+++ b/internlm_langchain/knowledge_base/MMRazor/content/pruning_user_guide.md
@@ -0,0 +1,150 @@
+# User Guides: Pruning Framework
+
+## Background
+
+// TODO
+
+## Pruning Framework
+
+This document introduces the pruning framework in mmrazor. Our pruning framework can help you prune a model automatically, making it easy to extend new algorithms.
+
+The pruning framework consists of five modules: Algorithm, ChannelMutator, MutableChannelUnit, MutableChannel, and DynamicOp. Their main features are detailed below:
+
+| Module             | Features                                                                  |
+| ------------------ | ------------------------------------------------------------------------- |
+| Algorithm          | Controls the training process.                                            |
+| ChannelMutator     | Manages the pruning structure of the model.                               |
+| MutableChannelUnit | Makes pruning decisions.                                                  |
+| MutableChannel     | Manages a channel mask.                                                   |
+| DynamicOp          | Forwards with a mutable number of channels, and exports pruned modules.   |
+
+
+
+
+
+## Algorithm
+
+
+
+
+
+Algorithms inherit from BaseAlgorithm. They control the training process, such as deciding when to prune the model during training/finetuning.
+
+For example, IteAlgorithm prunes the model iteratively every few epochs.
+
+Here is an example of how to use `IteAlgorithm`:
+
+```python
+from mmrazor.models.algorithms import IteAlgorithm
+from mmengine.model import BaseModel
+import torch.nn as nn
+
+class Model(BaseModel):
+ def __init__(self):
+ super().__init__()
+ self.conv = nn.Conv2d(3, 8, 3, 1, 1)
+
+ def forward(self, x):
+ return self.conv(x)
+
+model = Model()
+algorithm = IteAlgorithm(model,
+                         mutator_cfg=dict(
+                             type='ChannelMutator',
+                             channel_unit_cfg=dict(type='L1ChannelUnit')))
+print(algorithm)
+# IteAlgorithm(
+# (data_preprocessor): BaseDataPreprocessor()
+# (architecture): Model(
+# (data_preprocessor): BaseDataPreprocessor()
+# (conv): DynamicConv2d(
+# 3, 8, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1)
+# (mutable_attrs): ModuleDict(
+# (in_channels): MutableChannelContainer(name=, num_channels=3, activated_channels: 3
+# (out_channels): MutableChannelContainer(name=, num_channels=8, activated_channels: 8
+# )
+# )
+# )
+# (mutator): BaseChannelMutator()
+# )
+
+```
+
+## ChannelMutator
+
+
+
+A ChannelMutator controls the pruning structure of a model. In other words, the ChannelMutator decides how many channels each layer prunes. Usually, given a pruning target, such as a FLOPs, latency, or pruning-ratio target, the ChannelMutator will output a pruning structure for the model. The representation of the pruning structure is configurable: by default it is defined as the ratio of remaining channels, and it can easily be extended to the number of channels or channel buckets.
+
+As some layers' channels are related, the related layers share one pruning decision. We put these associated layers into a MutableChannelUnit. Therefore, the ChannelMutator directly decides the pruning ratio of each MutableChannelUnit.
+
+```python
+from mmrazor.models.mutators import BaseChannelMutator
+from mmengine.model import BaseModel
+import torch.nn as nn
+
+class Model(BaseModel):
+ def __init__(self):
+ super().__init__()
+ self.feature = nn.Sequential(
+ nn.Conv2d(3, 8, 3, 2, 1),
+ nn.Conv2d(8, 16, 3, 2, 1)
+ )
+ self.pool = nn.AdaptiveAvgPool2d(1)
+ self.head = nn.Linear(16, 1000)
+
+ def forward(self, x):
+ x_ = self.pool(self.feature(x))
+ return self.head(x_.flatten(1))
+
+model = Model()
+mutator = BaseChannelMutator()
+mutator.prepare_from_supernet(model)
+print(mutator.sample_choices())
+# {
+# 'feature.0_(0, 8)_out_1_in_1': 0.5,
+# 'feature.1_(0, 16)_out_1_in_1': 0.5625
+# }
+```
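+
+Having sampled a pruning structure, it can be applied back to the supernet. The following is a minimal sketch under the assumption that `set_choices` accepts the dict returned by `sample_choices`:
+
+```python
+# hedged sketch: apply a sampled pruning structure to the model
+choices = mutator.sample_choices()
+mutator.set_choices(choices)  # assumed to accept the sampled {unit_name: ratio} dict
+```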
+
+Please refer to [ChannelMutator](../../../mmrazor/models/mutators/channel_mutator/channel_mutator.ipynb) for more details.
+
+## MutableChannelUnit
+
+
+
+Because some layers' channels are related, the related layers are collected and put in a MutableChannelUnit.
+
+Each MutableChannelUnit accepts a pruning ratio and generates a channel mask for all related layers.
+
+All related layers are divided into two types: output_related and input_related.
+
+- The output channels of output-related layers are in the MutableChannelUnit.
+- The input channels of input-related layers are in the MutableChannelUnit.
+
+Please refer to [MutableChannelUnit](../../../mmrazor/models/mutables/mutable_channel/units/mutable_channel_unit.ipynb) for more details.
+
+Besides, basic PyTorch modules are converted to DynamicOps, which can deal with a mutable number of channels with MutableChannels.
+
+## DynamicOP && MutableChannel
+
+
+
+**MutableChannel**: Each MutableChannel manages a channel mask for a model. They help DynamicOps to deal with mutable numbers of channels. Please refer to [MutableChannel](../../../mmrazor/models/mutables/mutable_channel/MutableChannel.md) for more details.
+
+**DynamicOp**: DynamicOps inherit from basic torch modules, like nn.Conv2d or nn.Linear. They can forward with mutable numbers of channels and export pruned torch modules.
+Compared with basic torch modules, each DynamicOp has two MutableChannel modules, which control the input and output channels.
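+
+As a rough sketch of the export behaviour described above (the `to_static_op` method name is an assumption here, not a guaranteed API), a pruned static module could be obtained from a dynamic op roughly like this:
+
+```python
+# hedged sketch: export a plain torch module from a DynamicOp after pruning
+dynamic_conv = algorithm.architecture.conv   # DynamicConv2d from the earlier example
+static_conv = dynamic_conv.to_static_op()    # assumed export method -> nn.Conv2d
+print(type(static_conv))
+```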
+
+## More Documents about Pruning
+
+Please refer to the following documents for more details.
+
+- Development tutorials
+ - [How to prune your model](../advanced_guides/tutorials/how_to_prune_your_model.md)
+ - [How to use config tool of pruning](../advanced_guides/tutorials/how_to_use_config_tool_of_pruning.md)
+- READMEs
+ - [MutableChannel](../../../mmrazor/models/mutables/mutable_channel/MutableChannel.md)
+ - [ChannelMutator](../../../mmrazor/models/mutables/mutable_channel/units/mutable_channel_unit.ipynb)
+ - [MutableChannelUnit](../../../mmrazor/models/mutators/channel_mutator/channel_mutator.ipynb)
+- Demos
+ - [Config pruning](../../../demo/config_pruning.ipynb)
diff --git a/internlm_langchain/knowledge_base/MMRazor/content/quantization_user_guide.md b/internlm_langchain/knowledge_base/MMRazor/content/quantization_user_guide.md
new file mode 100644
index 00000000..35680630
--- /dev/null
+++ b/internlm_langchain/knowledge_base/MMRazor/content/quantization_user_guide.md
@@ -0,0 +1,238 @@
+# Quantization
+
+## Introduction
+
+MMRazor's quantization is OpenMMLab's quantization toolkit, which covers both task models and model deployment. With its help, we can quickly quantize and deploy pre-trained OpenMMLab models to specified backends. It also makes it easier to implement custom quantization algorithms.
+
+### Major features
+
+- **Ease of use**. Benefiting from PyTorch fx, we can quantize our model with a user-friendly config, without modifying the original model code.
+- **Multiple backend deployment support**. Because of the specificity of each backend, a performance gap usually exists between before and after deployment. We provide deployment support for some common backends to reduce this gap as much as possible.
+- **Multiple task repo support.** Benefiting from OpenMMLab 2.0, our quantization can support all task repos of OpenMMLab without extra code.
+- **Compatibility with PyTorch's core quantization modules**. Some core modules in PyTorch can be used directly in mmrazor, such as `Observer`, `FakeQuantize`, `BackendConfig` and so on.
+
+## Quick run
+
+```{note}
+MMRazor's quantization is based on `torch==1.13`. Other requirements are the same as MMRazor's
+```
+
+Model quantization lives in mmrazor, while quantized model deployment lives in mmdeploy. So if we need to deploy our quantized model, we also need to use the following branch:
+
+mmdeploy: https://github.com/open-mmlab/mmdeploy/tree/for_mmrazor
+
+```{note}
+If you try to compress mmdet's models and have used `dense_heads`, you can use this branch:
+https://github.com/HIT-cwh/mmdetection/tree/for_mmrazor to avoid the problem that some code can not be traced by `torch.fx.tracer`.
+```
+
+1. Quantize the float model in mmrazor.
+
+```Shell
+# For QAT (Quantization Aware Training)
+python tools/train.py ${CONFIG_PATH} [optional arguments]
+
+# For PTQ (Post-training quantization)
+python tools/ptq.py ${CONFIG_PATH} [optional arguments]
+```
+
+2. Evaluate the quantized model. (optional)
+
+```Shell
+python tools/test.py ${CONFIG_PATH} ${CHECKPOINT_PATH}
+```
+
+3. Export quantized model to a specific backend in mmdeploy. (required by model deployment)
+
+```Shell
+# MODEL_CFG_PATH is the used config in mmrazor.
+python ./tools/deploy.py \
+ ${DEPLOY_CFG_PATH} \
+ ${MODEL_CFG_PATH} \
+ ${MODEL_CHECKPOINT_PATH} \
+ ${INPUT_IMG} \
+ [optional arguments]
+```
+
+This step is the same as how to export an OpenMMLab model to a specific backend. For more details, please refer to [How to convert model](https://github.com/open-mmlab/mmdeploy/blob/master/docs/en/02-how-to-run/convert_model.md)
+
+4. Evaluate the quantized backend model. (optional)
+
+```Shell
+python tools/test.py \
+ ${DEPLOY_CFG} \
+ ${MODEL_CFG} \
+ --model ${BACKEND_MODEL_FILES} \
+ [optional arguments]
+```
+
+This step is the same as evaluating backend models. For more details, please refer to [How to evaluate model](https://github.com/open-mmlab/mmdeploy/blob/master/docs/en/02-how-to-run/profile_model.md)
+
+## How to quantize your own model quickly
+
+If you want to try to quantize your own model quickly, you just need to learn how to modify our provided configs.
+
+**Case 1: If the model you want to quantize is in our provided configs.**
+
+You can refer to the previous chapter Quick Run.
+
+**Case 2: If the model you want to quantize is not in our provided configs.**
+
+Let us take `resnet50` as an example to show how to handle case 2.
+
+```Python
+_base_ = [
+ 'mmcls::resnet/resnet18_8xb32_in1k.py',
+ '../../deploy_cfgs/mmcls/classification_openvino_dynamic-224x224.py'
+]
+
+val_dataloader = dict(batch_size=32)
+
+test_cfg = dict(
+ type='mmrazor.PTQLoop',
+ calibrate_dataloader=val_dataloader,
+ calibrate_steps=32,
+)
+
+global_qconfig = dict(
+ w_observer=dict(type='mmrazor.PerChannelMinMaxObserver'),
+ a_observer=dict(type='mmrazor.MovingAverageMinMaxObserver'),
+ w_fake_quant=dict(type='mmrazor.FakeQuantize'),
+ a_fake_quant=dict(type='mmrazor.FakeQuantize'),
+ w_qscheme=dict(
+ qdtype='qint8', bit=8, is_symmetry=True, is_symmetric_range=True),
+ a_qscheme=dict(
+ qdtype='quint8', bit=8, is_symmetry=True, averaging_constant=0.1),
+)
+
+float_checkpoint = 'https://download.openmmlab.com/mmclassification/v0/resnet/resnet18_8xb32_in1k_20210831-fbbb1da6.pth' # noqa: E501
+
+model = dict(
+ _delete_=True,
+ type='mmrazor.MMArchitectureQuant',
+ data_preprocessor=dict(
+ type='mmcls.ClsDataPreprocessor',
+ num_classes=1000,
+ # RGB format normalization parameters
+ mean=[123.675, 116.28, 103.53],
+ std=[58.395, 57.12, 57.375],
+ # convert image from BGR to RGB
+ to_rgb=True),
+ architecture=_base_.model,
+ deploy_cfg=_base_.deploy_cfg,
+ float_checkpoint=float_checkpoint,
+ quantizer=dict(
+ type='mmrazor.OpenVINOQuantizer',
+ global_qconfig=global_qconfig,
+ tracer=dict(
+ type='mmrazor.CustomTracer',
+ skipped_methods=[
+ 'mmcls.models.heads.ClsHead._get_loss',
+ 'mmcls.models.heads.ClsHead._get_predictions'
+ ])))
+
+model_wrapper_cfg = dict(type='mmrazor.MMArchitectureQuantDDP', )
+```
+
+This is a config that quantizes `resnet18` with the OpenVINO backend. You just need to modify two args: `_base_` and `float_checkpoint`.
+
+```Python
+# before
+_base_ = ['mmcls::resnet/resnet18_8xb32_in1k.py']
+float_checkpoint = 'https://download.openmmlab.com/mmclassification/v0/resnet/resnet18_8xb32_in1k_20210831-fbbb1da6.pth'
+
+# after
+_base_ = ['mmcls::resnet/resnet50_8xb32_in1k.py']
+float_checkpoint = 'https://download.openmmlab.com/mmclassification/v0/resnet/resnet50_8xb32_in1k_20210831-ea4938fc.pth'
+```
+
+- `_base_` will be resolved from mmcls by mmengine, so you can just use the configs provided by mmcls directly. Other repos are similar.
+- `float_checkpoint` is a pre-trained floating-point checkpoint provided by OpenMMLab. You can find it in the corresponding repo.
+
+After modifying the required config, we can use it in the same way as case 1.
+
+## How to improve your quantization performance
+
+If you are not satisfied with the quantization performance obtained by applying our provided configs to your own model, you can try to improve it with the various quantization schemes we provide by modifying `global_qconfig`.
+
+```Python
+global_qconfig = dict(
+ w_observer=dict(type='mmrazor.PerChannelMinMaxObserver'),
+ a_observer=dict(type='mmrazor.MovingAverageMinMaxObserver'),
+ w_fake_quant=dict(type='mmrazor.FakeQuantize'),
+ a_fake_quant=dict(type='mmrazor.FakeQuantize'),
+ w_qscheme=dict(
+ qdtype='qint8', bit=8, is_symmetry=True, is_symmetric_range=True),
+ a_qscheme=dict(
+ qdtype='quint8', bit=8, is_symmetry=True, averaging_constant=0.1),
+)
+```
+
+As shown above, `global_qconfig` contains several common core args as follows:
+
+- Observers
+
+In `forward`, they will update the statistics of the observed Tensor. And they should provide a `calculate_qparams` function that computes the quantization parameters given the collected statistics.
+
+```{note}
+Whether it is per channel quantization depends on whether `PerChannel` is in the observer name.
+```
+
+Because mmrazor's quantization has been compatible with PyTorch's observers, we can use observers in PyTorch and our custom observers.
+
+Supported observers in PyTorch:
+
+```Python
+FixedQParamsObserver
+HistogramObserver
+MinMaxObserver
+MovingAverageMinMaxObserver
+MovingAveragePerChannelMinMaxObserver
+NoopObserver
+ObserverBase
+PerChannelMinMaxObserver
+PlaceholderObserver
+RecordingObserver
+ReuseInputObserver
+UniformQuantizationObserverBase
+```
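+
+For instance, a PyTorch observer can be exercised on its own to see this behaviour. The snippet below is only a sketch using `MinMaxObserver`; the exact import path may differ across torch versions:
+
+```Python
+import torch
+from torch.ao.quantization.observer import MinMaxObserver
+
+# collect min/max statistics in forward, then derive scale / zero_point
+obs = MinMaxObserver(dtype=torch.quint8)
+obs(torch.randn(4, 8))
+scale, zero_point = obs.calculate_qparams()
+print(scale, zero_point)
+```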
+
+- Fake quants
+
+In `forward`, they will update the statistics of the observed Tensor and fake quantize the input. They should also provide a `calculate_qparams` function that computes the quantization parameters given the collected statistics.
+
+Because mmrazor's quantization has been compatible with PyTorch's fakequants, we can use fakequants in PyTorch and our custom fakequants.
+
+Supported fake quants in PyTorch:
+
+```Python
+FakeQuantize
+FakeQuantizeBase
+FixedQParamsFakeQuantize
+FusedMovingAvgObsFakeQuantize
+```
+
+- Qschemes
+
+Qschemes include some basic quantization configurations.
+
+`qdtype`: specifies whether the quantized data type is signed or unsigned. It can be chosen from \['qint8', 'quint8'\].
+
+```{note}
+If your model need to be deployed, `qdtype` must be consistent with the dtype in the corresponding backendconfig. Otherwise fakequant will not be inserted in front of the specified OPs.
+
+backendconfigs dir:
+mmrazor/mmrazor/structures/quantization/backend_config
+```
+
+`bit`: specifies the quantization bit width. It can be chosen from \[1 ~ 16\].
+
+`is_symmetry`: specifies whether to use symmetric quantization. It can be chosen from \[True, False\].
+
+The specified qscheme is actually implemented by the observers, so how to configure the other args, such as `is_symmetric_range` and `averaging_constant`, depends on the given observers.
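+
+For example, one possible edit is to switch the weight observer from per-channel to per-tensor quantization. The snippet below is only an illustrative sketch of how `global_qconfig` can be modified, not a recommended setting:
+
+```Python
+global_qconfig = dict(
+    # per-tensor weight statistics instead of per-channel
+    w_observer=dict(type='mmrazor.MinMaxObserver'),
+    a_observer=dict(type='mmrazor.MovingAverageMinMaxObserver'),
+    w_fake_quant=dict(type='mmrazor.FakeQuantize'),
+    a_fake_quant=dict(type='mmrazor.FakeQuantize'),
+    w_qscheme=dict(
+        qdtype='qint8', bit=8, is_symmetry=True, is_symmetric_range=True),
+    a_qscheme=dict(
+        qdtype='quint8', bit=8, is_symmetry=True, averaging_constant=0.1),
+)
+```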
+
+## How to customize your quantization algorithm
+
+If you try to customize your quantization algorithm, you can refer to the following link for more details.
+
+[Customize Quantization algorithms](https://github.com/open-mmlab/mmrazor/blob/quantize/docs/en/advanced_guides/customize_quantization_algorithms.md)
diff --git a/internlm_langchain/knowledge_base/MMRazor/content/recorder.md b/internlm_langchain/knowledge_base/MMRazor/content/recorder.md
new file mode 100644
index 00000000..df528f21
--- /dev/null
+++ b/internlm_langchain/knowledge_base/MMRazor/content/recorder.md
@@ -0,0 +1,347 @@
+# Recorder
+
+## Introduction of Recorder
+
+`Recorder` is a context manager used to record various intermediate results during the model forward. It can help `Delivery` finish data delivering by recording source data in some distillation algorithms. And it can also be used to obtain some specific data for visual analysis or other functions you want.
+
+To adapt to more requirements, we implement multiple types of recorders to obtain different types of intermediate results in MMRazor. What is more, they can be used in combination with the `RecorderManager`.
+
+In general, `Recorder` will help us expand more functions in implementing algorithms by recording various intermediate results.
+
+## Usage of Recorder
+
+Currently, we support five types of `Recorder`, as shown in the following table:
+
+| Recorder name | Description |
+| ----------------------- | ------------------------------------------- |
+| FunctionOutputsRecorder | Record output results of some functions |
+| MethodOutputsRecorder | Record output results of some methods |
+| ModuleInputsRecorder | Record input results of nn.Module |
+| ModuleOutputsRecorder | Record output results of nn.Module |
+| ParameterRecorder | Record intermediate parameters of nn.Module |
+
+All of the recorders inherit from `BaseRecorder`. And these recorders can be managed by `RecorderManager` or just be used on their own.
+
+Their relationship is shown below.
+
+
+
+### FunctionOutputsRecorder
+
+`FunctionOutputsRecorder` is used to record the output results of intermediate **function**.
+
+```{note}
+When instantiating `FunctionOutputsRecorder`, you need to pass the `source` argument, which requires extra attention. For example,
+`anchor_inside_flags` is a function in mmdetection to check whether the
+anchors are inside the border. This function is in
+`mmdet/core/anchor/utils.py` and used in
+`mmdet/models/dense_heads/anchor_head`. Then the `source` argument should be
+`mmdet.models.dense_heads.anchor_head.anchor_inside_flags` but not
+`mmdet.core.anchor.utils.anchor_inside_flags`.
+```
+
+#### Example
+
+Suppose there is a toy function named `toy_func` in toy_module.py.
+
+```Python
+import toy_module  # the module that defines `toy_func`, as shown below
+from mmrazor.structures import FunctionOutputsRecorder
+
+# content of toy_module.py:
+#
+#   import random
+#
+#   def toy_func() -> int:
+#       return random.randint(0, 1000000)
+
+# instantiate with the import path of the function to be recorded
+r1 = FunctionOutputsRecorder('toy_module.toy_func')
+
+# initialize registers a customized hook so that the specified
+# function can be recorded.
+r1.initialize()
+with r1:
+    out1 = toy_module.toy_func()
+    out2 = toy_module.toy_func()
+    out3 = toy_module.toy_func()
+
+# check the recorded data
+print(r1.data_buffer)
+```
+
+Out:
+
+```Python
+[75486, 641059, 119729]
+```
+
+Test the correctness of the recorded results:
+
+```Python
+data_buffer = r1.data_buffer
+print(data_buffer[0] == out1 and data_buffer[1] == out2 and data_buffer[2] == out3)
+```
+
+Out:
+
+```Python
+True
+```
+
+To get a specific recorded result, use `get_record_data`:
+
+```Python
+print(r1.get_record_data(record_idx=2))
+```
+
+Out:
+
+```Python
+119729
+```
+
+### MethodOutputsRecorder
+
+`MethodOutputsRecorder` is used to record the output results of intermediate **method**.
+
+#### Example
+
+Suppose there is a toy class `Toy` and it has a toy method `toy_func` in toy_module.py.
+
+```Python
+import random
+from mmrazor.core import MethodOutputsRecorder
+
+# suppose `Toy` below is the class defined in toy_module.py
+class Toy():
+    def toy_func(self):
+        return random.randint(0, 1000000)
+
+toy = Toy()
+
+# instantiate with specifying used path
+r1 = MethodOutputsRecorder('toy_module.Toy.toy_func')
+# initialize is to make specified module can be recorded by
+# registering customized forward hook.
+r1.initialize()
+
+with r1:
+ out1 = toy.toy_func()
+ out2 = toy.toy_func()
+ out3 = toy.toy_func()
+
+# check recorded data
+print(r1.data_buffer)
+```
+
+Out:
+
+```Python
+[217832, 353057, 387699]
+```
+
+Test the correctness of the recorded results:
+
+```Python
+data_buffer = r1.data_buffer
+print(data_buffer[0] == out1 and data_buffer[1] == out2 and data_buffer[2] == out3)
+```
+
+Out:
+
+```Python
+True
+```
+
+To get a specific recorded result, use `get_record_data`:
+
+```Python
+print(r1.get_record_data(record_idx=2))
+```
+
+Out:
+
+```Python
+387699
+```
+
+### ModuleOutputsRecorder and ModuleInputsRecorder
+
+`ModuleOutputsRecorder`'s usage is similar to `ModuleInputsRecorder`'s, so we will take the former as an example to introduce their usage.
+
+#### Example
+
+```{note}
+Different from `MethodOutputsRecorder` and `FunctionOutputsRecorder`, `ModuleOutputsRecorder` and `ModuleInputsRecorder` are instantiated with a module name rather than an import path, and their `initialize` method requires the `model` argument, so that they can locate the actual module to be recorded.
+```
+
+Suppose there is a toy module `ToyModel` in toy_module.py.
+
+```Python
+import torch
+from torch import nn
+from mmrazor.core import ModuleOutputsRecorder
+
+class ToyModel(nn.Module):
+ def __init__(self):
+ super().__init__()
+ self.conv1 = nn.Conv2d(1, 1, 1)
+ self.conv2 = nn.Conv2d(1, 1, 1)
+
+ def forward(self, x):
+ x1 = self.conv1(x)
+ x2 = self.conv1(x + 1)
+ return self.conv2(x1 + x2)
+
+model = ToyModel()
+# instantiate with specifying module name.
+r1 = ModuleOutputsRecorder('conv1')
+
+# initialize is to make specified module can be recorded by
+# registering customized forward hook.
+r1.initialize(model)
+
+x = torch.randn(1, 1, 1, 1)
+with r1:
+ out = model(x)
+
+print(r1.data_buffer)
+```
+
+Out:
+
+```Python
+[tensor([[[[0.0820]]]], grad_fn=<...>), tensor([[[[-0.0894]]]], grad_fn=<...>)]
+```
+
+Test the correctness of the recorded results:
+
+```Python
+print(torch.equal(r1.data_buffer[0], model.conv1(x)))
+print(torch.equal(r1.data_buffer[1], model.conv1(x + 1)))
+```
+
+Out:
+
+```Python
+True
+True
+```
+
+### ParameterRecorder
+
+`ParameterRecorder` is used to record the intermediate parameters of `nn.Module`. Its usage is similar to that of `ModuleOutputsRecorder` and `ModuleInputsRecorder`, but it is instantiated with a parameter name instead of a module name.
+
+#### Example
+
+Suppose there is a toy module `ToyModel` in toy_module.py.
+
+```Python
+from torch import nn
+import torch
+from mmrazor.core import ParameterRecorder
+
+class ToyModel(nn.Module):
+ def __init__(self):
+ super().__init__()
+ self.toy_conv = nn.Conv2d(1, 1, 1)
+
+ def forward(self, x):
+ return self.toy_conv(x)
+
+model = ToyModel()
+# instantiate with specifying parameter name.
+r1 = ParameterRecorder('toy_conv.weight')
+# initialize is to make specified module can be recorded by
+# registering customized forward hook.
+r1.initialize(model)
+
+print(r1.data_buffer)
+```
+
+Out:
+
+```Python
+[Parameter containing: tensor([[[[0.2971]]]], requires_grad=True)]
+```
+
+Test the correctness of the recorded results:
+
+```Python
+print(torch.equal(r1.data_buffer[0], model.toy_conv.weight))
+```
+
+Out:
+
+```Python
+True
+```
+
+### RecorderManager
+
+`RecorderManager` is actually a context manager, which can be used to manage various types of recorders.
+
+With the help of `RecorderManager`, we can manage several different recorders with as little code as possible, which reduces the possibility of errors.
+
+#### Example
+
+Suppose there is a toy class `Toy` that has a toy method `toy_func` in toy_module.py.
+
+```Python
+import random
+
+import torch
+from torch import nn
+
+from mmengine.config import ConfigDict
+from mmrazor.core import RecorderManager
+
+class Toy():
+ def toy_func(self):
+ return random.randint(0, 1000000)
+
+class ToyModel(nn.Module):
+ def __init__(self):
+ super().__init__()
+ self.conv1 = nn.Conv2d(1, 1, 1)
+ self.conv2 = nn.Conv2d(1, 1, 1)
+ self.toy = Toy()
+
+ def forward(self, x):
+ return self.conv2(self.conv1(x)) + self.toy.toy_func()
+
+# configure multi-recorders
+conv1_rec = ConfigDict(type='ModuleOutputs', source='conv1')
+conv2_rec = ConfigDict(type='ModuleOutputs', source='conv2')
+func_rec = ConfigDict(type='MethodOutputs', source='toy_module.Toy.toy_func')
+# instantiate RecorderManager with a dict that contains recorders' configs,
+# you can customize their keys.
+manager = RecorderManager(
+ {'conv1_rec': conv1_rec,
+ 'conv2_rec': conv2_rec,
+ 'func_rec': func_rec})
+
+model = ToyModel()
+# initialize is to make specified module can be recorded by
+# registering customized forward hook.
+manager.initialize(model)
+
+x = torch.rand(1, 1, 1, 1)
+with manager:
+ out = model(x)
+
+conv2_out = manager.get_recorder('conv2_rec').get_record_data()
+print(conv2_out)
+```
+
+Out:
+
+```Python
+tensor([[[[0.5543]]]], grad_fn=<...>)
+```
+
+Display output of `toy_func`
+
+```Python
+func_out = manager.get_recorder('func_rec').get_record_data()
+print(func_out)
+```
+
+Out:
+
+```Python
+313167
+```
diff --git a/internlm_langchain/knowledge_base/MMRazor/content/visualization.md b/internlm_langchain/knowledge_base/MMRazor/content/visualization.md
new file mode 100644
index 00000000..4642139d
--- /dev/null
+++ b/internlm_langchain/knowledge_base/MMRazor/content/visualization.md
@@ -0,0 +1,158 @@
+## Visualization
+
+## Feature Map Visualization
+
+
+
+```shell
+wget https://user-images.githubusercontent.com/24582831/189833109-eddad58f-f777-4fc0-b98a-6bd429143b06.png --output-document aachen_000000_000019_leftImg8bit.png
+wget https://user-images.githubusercontent.com/24582831/189833143-15f60f8a-4d1e-4cbb-a6e7-5e2233869fac.png --output-document aachen_000000_000019_gtFine_labelTrainIds.png
+
+wget https://download.openmmlab.com/mmsegmentation/v0.5/ann/ann_r50-d8_512x1024_40k_cityscapes/ann_r50-d8_512x1024_40k_cityscapes_20200605_095211-049fc292.pth
+
+```
+
+```python
+# Copyright (c) OpenMMLab. All rights reserved.
+from argparse import ArgumentParser
+from typing import Type
+
+import mmcv
+import torch
+import torch.nn as nn
+
+from mmengine.model import revert_sync_batchnorm
+from mmengine.structures import PixelData
+from mmseg.apis import inference_model, init_model
+from mmseg.structures import SegDataSample
+from mmseg.utils import register_all_modules
+from mmseg.visualization import SegLocalVisualizer
+
+
+class Recorder:
+ """record the forward output feature map and save to data_buffer."""
+
+ def __init__(self) -> None:
+ self.data_buffer = list()
+
+    def __enter__(self, ):
+        # reset the buffer at the start of each recording context
+        self.data_buffer = list()
+
+ def record_data_hook(self, model: nn.Module, input: Type, output: Type):
+ self.data_buffer.append(output)
+
+ def __exit__(self, *args, **kwargs):
+ pass
+
+
+def visualize(args, model, recorder, result):
+ seg_visualizer = SegLocalVisualizer(
+ vis_backends=[dict(type='WandbVisBackend')],
+ save_dir='temp_dir',
+ alpha=0.5)
+ seg_visualizer.dataset_meta = dict(
+ classes=model.dataset_meta['classes'],
+ palette=model.dataset_meta['palette'])
+
+ image = mmcv.imread(args.img, 'color')
+
+ seg_visualizer.add_datasample(
+ name='predict',
+ image=image,
+ data_sample=result,
+ draw_gt=False,
+ draw_pred=True,
+ wait_time=0,
+ out_file=None,
+ show=False)
+
+ # add feature map to wandb visualizer
+ for i in range(len(recorder.data_buffer)):
+ feature = recorder.data_buffer[i][0] # remove the batch
+ drawn_img = seg_visualizer.draw_featmap(
+ feature, image, channel_reduction='select_max')
+ seg_visualizer.add_image(f'feature_map{i}', drawn_img)
+
+ if args.gt_mask:
+ sem_seg = mmcv.imread(args.gt_mask, 'unchanged')
+ sem_seg = torch.from_numpy(sem_seg)
+ gt_mask = dict(data=sem_seg)
+ gt_mask = PixelData(**gt_mask)
+ data_sample = SegDataSample()
+ data_sample.gt_sem_seg = gt_mask
+
+ seg_visualizer.add_datasample(
+ name='gt_mask',
+ image=image,
+ data_sample=data_sample,
+ draw_gt=True,
+ draw_pred=False,
+ wait_time=0,
+ out_file=None,
+ show=False)
+
+ seg_visualizer.add_image('image', image)
+
+
+def main():
+ parser = ArgumentParser(
+ description='Draw the Feature Map During Inference')
+ parser.add_argument('img', help='Image file')
+ parser.add_argument('config', help='Config file')
+ parser.add_argument('checkpoint', help='Checkpoint file')
+ parser.add_argument('--gt_mask', default=None, help='Path of gt mask file')
+ parser.add_argument('--out-file', default=None, help='Path to output file')
+ parser.add_argument(
+ '--device', default='cuda:0', help='Device used for inference')
+ parser.add_argument(
+ '--opacity',
+ type=float,
+ default=0.5,
+ help='Opacity of painted segmentation map. In (0, 1] range.')
+ parser.add_argument(
+ '--title', default='result', help='The image identifier.')
+ args = parser.parse_args()
+
+ register_all_modules()
+
+ # build the model from a config file and a checkpoint file
+ model = init_model(args.config, args.checkpoint, device=args.device)
+ if args.device == 'cpu':
+ model = revert_sync_batchnorm(model)
+
+ # show all named module in the model and use it in source list below
+ for name, module in model.named_modules():
+ print(name)
+
+ source = [
+ 'decode_head.fusion.stages.0.query_project.activate',
+ 'decode_head.context.stages.0.key_project.activate',
+ 'decode_head.context.bottleneck.activate'
+ ]
+ source = dict.fromkeys(source)
+
+ count = 0
+ recorder = Recorder()
+    # register the forward hook
+ for name, module in model.named_modules():
+ if name in source:
+ count += 1
+ module.register_forward_hook(recorder.record_data_hook)
+ if count == len(source):
+ break
+
+ with recorder:
+ # test a single image, and record feature map to data_buffer
+ result = inference_model(model, args.img)
+
+ visualize(args, model, recorder, result)
+
+
+if __name__ == '__main__':
+ main()
+
+```
+
+Save the above code as feature_map_visual.py and run the following command in a terminal:
+
+```shell
+python feature_map_visual.py ${IMAGE_FILE} ${CONFIG_FILE} ${CHECKPOINT_FILE} [optional arguments]
+```
+
+Example:
+
+```shell
+python feature_map_visual.py \
+aachen_000000_000019_leftImg8bit.png \
+configs/ann/ann_r50-d8_4xb2-40k_cityscapes-512x1024.py \
+ann_r50-d8_512x1024_40k_cityscapes_20200605_095211-049fc292.pth \
+--gt_mask aachen_000000_000019_gtFine_labelTrainIds.png
+```
+
+The visualized image results and the corresponding feature map images will appear in your wandb account.
+
+
+
+
diff --git a/internlm_langchain/knowledge_base/MMSegmentation/vector_store/index.faiss b/internlm_langchain/knowledge_base/MMSegmentation/vector_store/index.faiss
new file mode 100644
index 00000000..05f74a44
Binary files /dev/null and b/internlm_langchain/knowledge_base/MMSegmentation/vector_store/index.faiss differ
diff --git a/internlm_langchain/knowledge_base/MMSkeleton/content/CONTRIBUTING.md b/internlm_langchain/knowledge_base/MMSkeleton/content/CONTRIBUTING.md
new file mode 100644
index 00000000..32d9604f
--- /dev/null
+++ b/internlm_langchain/knowledge_base/MMSkeleton/content/CONTRIBUTING.md
@@ -0,0 +1,27 @@
+# Contributing to MMSkeleton
+
+All kinds of contributions are welcome, including but not limited to the following.
+
+- Fixes (typo, bugs)
+- New features and components
+
+## Workflow
+
+1. fork and pull the latest mmskeleton
+2. checkout a new branch (do not use master branch for PRs)
+3. commit your changes
+4. create a PR
+
+Note
+- If you plan to add some new features that involve large changes, it is encouraged to open an issue for discussion first.
+- If you are the author of some papers and would like to include your method in mmskeleton,
+please contact Sijie Yan (yysijie@gmail). We would much appreciate your contribution.
+
+## Code style
+
+### Python
+We adopt [PEP8](https://www.python.org/dev/peps/pep-0008/) as the preferred code style.
+We use [flake8](http://flake8.pycqa.org/en/latest/) as the linter and [yapf](https://github.com/google/yapf) as the formatter.
+Please upgrade to the latest yapf (>=0.27.0) and refer to the [configuration](../.style.yapf).
+
+>Before you create a PR, make sure that your code lints and is formatted by yapf.
\ No newline at end of file
diff --git a/internlm_langchain/knowledge_base/MMSkeleton/content/CREATE_APPLICATION.md b/internlm_langchain/knowledge_base/MMSkeleton/content/CREATE_APPLICATION.md
new file mode 100644
index 00000000..0b56cb97
--- /dev/null
+++ b/internlm_langchain/knowledge_base/MMSkeleton/content/CREATE_APPLICATION.md
@@ -0,0 +1,44 @@
+## Create an MMSkeleton Application
+
+MMSkeleton provides various models, datasets, APIs and operators for various applications,
+such as pose estimation, human detection, action recognition and dataset building.
+The workflow of an application is defined by a **processor**, which is usually a python function.
+
+In MMSkeleton, an application is defined in a configuration file.
+It is a `.json`, `.yaml` or `.py` file including `processor_cfg` field.
+Here is an example:
+
+```yaml
+# yaml
+
+processor_cfg:
+ type:
+ dataset:
+ type:
+ data_path: ./data
+ #more arguments for processor function...
+
+argparse_cfg:
+ data:
+ bind_to: processor_cfg.dataset.data_path
+ help: the path of data
+ #more option arguments for command line...
+```
+
+The `processor_cfg` specifies a processor function and its dataset module.
+In addition, the `data_path` argument of the dataset is "./data".
+The `argparse_cfg` creates an option argument `data` which is bound to `data_path`.
+
+Note that mmskeleton will import the processor function or modules according to the given path, with the priority of `local directory > system python path > mmskeleton`.
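+
+Equivalently, since configuration files can also be `.py`, the same application could be described with a Python-style config sketch like the one below (the empty `type` fields are placeholders, just as in the yaml example above):
+
+```python
+# python-style configuration sketch equivalent to the yaml example above
+processor_cfg = dict(
+    type='',                  # path of the processor function (placeholder)
+    dataset=dict(
+        type='',              # path of the dataset module (placeholder)
+        data_path='./data'))
+
+argparse_cfg = dict(
+    data=dict(
+        bind_to='processor_cfg.dataset.data_path',
+        help='the path of data'))
+```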
+
+
+
+With this configuration, the application can be started by:
+```shell
+mmskl $CONFIG_FILE [--data $DATA_PATH]
+```
+
+
+
+
+
diff --git a/internlm_langchain/knowledge_base/MMSkeleton/content/CUSTOM_DATASET.md b/internlm_langchain/knowledge_base/MMSkeleton/content/CUSTOM_DATASET.md
new file mode 100644
index 00000000..ab3f2363
--- /dev/null
+++ b/internlm_langchain/knowledge_base/MMSkeleton/content/CUSTOM_DATASET.md
@@ -0,0 +1,99 @@
+## Custom Skeleton-based Dataset
+mmskeleton accepts two formats of skeleton data, `.npy` and `.json`.
+We recommend that users store their datasets as `.json` files.
+Please follow the example below to build a custom dataset.
+
+### Build dataset from videos
+
+We have prepared a mini video set including three 10-second clips in `resource/data_example` with the following structure:
+
+ resource/dataset_example
+ ├── skateboarding.mp4
+ ├── clean_and_jerk.mp4
+ └── ta_chi.mp4
+
+
+Run this command for building skeleton-based dataset for them:
+```
+mmskl configs/utils/build_dataset_example.yaml [--gpus $GPUS]
+```
+mmskeleton extracts a skeleton sequence for each video by performing **person detection** and **pose estimation** on all frames.
+A few `.json` files will be stored under `data/dataset_example` if you did not change the default arguments. The directory layout is shown here:
+
+ data/dataset_example
+ ├── skateboarding.json
+ ├── clean_and_jerk.json
+ └── ta_chi.json
+
+All annotations share the same basic data structure like below:
+```javascript
+{
+ "info":
+ {
+ "video_name": "skateboarding.mp4",
+ "resolution": [340, 256],
+ "num_frame": 300,
+ "num_keypoints": 17,
+ "keypoint_channels": ["x", "y", "score"],
+ "version": "1.0"
+ },
+ "annotations":
+ [
+ {
+ "frame_index": 0,
+ "id": 0,
+ "person_id": null,
+ "keypoints": [[x, y, score], [x, y, score], ...]
+ },
+ ...
+ ],
+ "category_id": 0,
+}
+```
+
+After that, train the st-gcn model by:
+```
+mmskl configs/recognition/st_gcn/dataset_example/train.yaml
+```
+and test the model by:
+```
+mmskl configs/recognition/st_gcn/dataset_example/test.yaml --checkpoint $CHECKPOINT_PATH
+```
+
+### Build your own dataset
+
+If you want to use mmskeleton on your own **skeleton-based data**, the simplest method is to reformat
+your data into `.json` files with the basic structure mentioned above, as sketched below.
+Alternatively, you can design another data feeder to replace [our data feeder](../mmskeleton/datasets/recognition.py),
+and specify it by changing `processor_cfg.dataset_cfg.name` in your training configuration file.
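+
+A minimal converter sketch (the file names and keypoint values below are placeholders, not real data) that dumps one video's skeletons into this format with the standard library could look like:
+
+```python
+import json
+
+# hypothetical example: one video's skeleton annotation in mmskeleton's format
+annotation = {
+    'info': {
+        'video_name': 'my_video.mp4',
+        'resolution': [340, 256],
+        'num_frame': 300,
+        'num_keypoints': 17,
+        'keypoint_channels': ['x', 'y', 'score'],
+        'version': '1.0'
+    },
+    'annotations': [
+        {'frame_index': 0, 'id': 0, 'person_id': None,
+         'keypoints': [[0.5, 0.5, 0.9]] * 17},
+    ],
+    'category_id': 0,
+}
+
+with open('data/my_dataset/my_video.json', 'w') as f:
+    json.dump(annotation, f)
+```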
+
+If you want to use mmskeleton on your own **video dataset**,
+just follow the above tutorial to build the skeleton-based dataset for videos.
+Note that, in the above example, the groundtruth of `category_id` is from [another annotation file](../resource/category_annotation_example.json) with the structure:
+```javascript
+{
+ "categories": [
+ "skateboarding",
+ "clean_and_jerk",
+ "ta_chi"
+ ],
+ "annotations": {
+ "clean_and_jerk.mp4": {
+ "category_id": 1
+ },
+ "skateboarding.mp4": {
+ "category_id": 0
+ },
+ "ta_chi.mp4": {
+ "category_id": 2
+ }
+ }
+}
+```
+The `category_id` will be set to `-1` if the category annotation is missing.
+
+You can build dataset by:
+```
+mmskl configs/utils/build_dataset_example.yaml --video_dir $VIDEO_DIR --category_annotation $VIDEO_ANNOTATION --out_dir $OUT_DIR [--gpus $GPUS]
+```
+To change the person detector, pose estimator or other arguments, modify the `build_dataset_example.yaml`.
diff --git a/internlm_langchain/knowledge_base/MMSkeleton/content/GETTING_STARTED.md b/internlm_langchain/knowledge_base/MMSkeleton/content/GETTING_STARTED.md
new file mode 100644
index 00000000..6e9248af
--- /dev/null
+++ b/internlm_langchain/knowledge_base/MMSkeleton/content/GETTING_STARTED.md
@@ -0,0 +1,87 @@
+## Getting Started
+
+### Installation
+
+a. [Optional] Create a [conda](www.anaconda.com/distribution/) virtual environment and activate it:
+
+``` shell
+conda create -n open-mmlab python=3.7 -y
+conda activate open-mmlab
+```
+
+b. Install PyTorch and torchvision (CUDA is required):
+``` shell
+# CUDA 9.2
+conda install pytorch==1.2.0 torchvision==0.4.0 cudatoolkit=9.2 -c pytorch
+
+# CUDA 10.0
+conda install pytorch==1.2.0 torchvision==0.4.0 cudatoolkit=10.0 -c pytorch
+```
+Higher versions are not covered by our tests.
+
+c. Clone mmskeleton from github:
+
+``` shell
+git clone https://github.com/open-mmlab/mmskeleton.git
+cd mmskeleton
+```
+
+d. Install mmskeleton:
+
+``` shell
+python setup.py develop
+```
+
+e. Install nms for person estimation:
+``` shell
+cd mmskeleton/ops/nms/
+python setup_linux.py develop
+cd ../../../
+```
+
+f. [Optional] Install mmdetection for person detection:
+
+``` shell
+python setup.py develop --mmdet
+```
+If the installation fails, please install [mmdetection](https://github.com/open-mmlab/mmdetection/blob/master/docs/INSTALL.md) manually.
+
+g. To verify that mmskeleton and mmdetection are installed correctly, run:
+```shell
+python mmskl.py pose_demo [--gpus $GPUS]
+# or "python mmskl.py pose_demo_HD [--gpus $GPUS]" for a higher accuracy
+```
+A generated video like the one below will be saved under the prompted path.
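+
+As an additional quick check, the short sketch below (an assumption-based example, not an official test) simply verifies that the packages can be imported from the repository root:
+
+```python
+# Verify that the core package and the optional mmdetection dependency import cleanly.
+import mmskeleton
+print('mmskeleton imported from', mmskeleton.__file__)
+
+try:
+    import mmdet
+    print('mmdet imported from', mmdet.__file__)
+except ImportError:
+    print('mmdetection is not installed; person detection features are unavailable.')
+```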
+
+
+
+
+
+
+
+### Basic usage:
+
+Any application in mmskeleton is described by a configuration file and can be started with a uniform command:
+``` shell
+python mmskl.py $CONFIG_FILE [--options $OPTIONS]
+```
+which is equivalent to
+```
+mmskl $CONFIG_FILE [--options $OPTIONS]
+```
+The optional arguments `options` are defined in the configuration file.
+You can check them via:
+``` shell
+mmskl $CONFIG_FILE -h
+```
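+
+If you prefer inspecting a configuration programmatically, the sketch below loads it with `mmcv.Config` (assuming your mmcv version supports YAML configs and the path exists); the `processor_cfg` and `argparse_cfg` keys reflect the usual layout of mmskeleton configs.
+
+```python
+import mmcv
+
+# Load an mmskeleton configuration file.
+cfg = mmcv.Config.fromfile('configs/recognition/st_gcn/dataset_example/train.yaml')
+
+# Inspect the processor configuration and the optional command-line arguments.
+print(cfg.processor_cfg)
+print(cfg.get('argparse_cfg', 'no optional arguments defined'))
+```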
+
+### Example:
+
+See [START_RECOGNITION.md](../doc/START_RECOGNITION.md) to learn how to train a model for skeleton-based action recognition.
+
+See [CUSTOM_DATASET](../doc/CUSTOM_DATASET.md) for building your own skeleton-based dataset.
+
+See [CREATE_APPLICATION](../doc/CREATE_APPLICATION.md) for creating your own mmskeleton application.
+
+
+
diff --git a/internlm_langchain/knowledge_base/MMSkeleton/content/MODEL_ZOO.md b/internlm_langchain/knowledge_base/MMSkeleton/content/MODEL_ZOO.md
new file mode 100644
index 00000000..32196b21
--- /dev/null
+++ b/internlm_langchain/knowledge_base/MMSkeleton/content/MODEL_ZOO.md
@@ -0,0 +1,12 @@
+## MODEL ZOO
+
+
+
+
+MMSkeleton usually downloads the necessary models from AWS automatically at runtime.
+
+We also maintain a mirror backup on [GoogleDrive](https://drive.google.com/open?id=1zC9ptIQTUoT7RvRM9Ec651cF5Xe7pty0)
+and [BaiduYun](https://pan.baidu.com/s/1iqOoQmIywuDQckgmehQ8HQ).
+As a fallback, you can manually download the models and put them into the checkpoint cache folder of PyTorch.
+The folder defaults to ``~/.cache/torch/checkpoints`` in the Linux filesystem layout.
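+
+If you go this route, a quick sanity check like the sketch below (the checkpoint filename is a placeholder, not a real model name) confirms that a manually downloaded file is readable from the cache folder:
+
+```python
+import os
+
+import torch
+
+# Path of the PyTorch checkpoint cache on Linux; replace the filename with
+# the model you downloaded from the mirrors above.
+cache_dir = os.path.expanduser('~/.cache/torch/checkpoints')
+ckpt_path = os.path.join(cache_dir, 'some_model_checkpoint.pth')
+
+state = torch.load(ckpt_path, map_location='cpu')
+print('loaded {} with {} entries'.format(ckpt_path, len(state)))
+```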
\ No newline at end of file
diff --git a/internlm_langchain/knowledge_base/MMSkeleton/content/SKELETON_DATA.md b/internlm_langchain/knowledge_base/MMSkeleton/content/SKELETON_DATA.md
new file mode 100644
index 00000000..32e65bee
--- /dev/null
+++ b/internlm_langchain/knowledge_base/MMSkeleton/content/SKELETON_DATA.md
@@ -0,0 +1,21 @@
+## Skeleton-based Dataset Processing
+
+ST-GCN was evaluated on two skeleton-based action recognition datasets: **Kinetics-skeleton** and **NTU RGB+D**.
+The raw data should be converted to the proper format before training and testing, following the steps below. Alternatively, you can download
+the processed data directly from [GoogleDrive](https://drive.google.com/open?id=103NOL9YYZSW1hLoWmYnv5Fs8mK-Ij7qb).
+
+#### Kinetics-skeleton
+[Kinetics](https://deepmind.com/research/open-source/open-source-datasets/kinetics/) is a video-based dataset for action recognition which only provides raw video clips without skeleton data; it contains around 300,000 video clips covering 400 action classes. To obtain the joint locations, we first resized all videos to a resolution of 340x256 and converted the frame rate to 30 fps. Then, we extracted skeletons from each frame with [Openpose](https://github.com/CMU-Perceptual-Computing-Lab/openpose). The extracted skeleton data, which we call **Kinetics-skeleton** (7.5GB), can be downloaded from [GoogleDrive](https://drive.google.com/open?id=1SPQ6FmFsjGg3f59uCWfdUWI-5HJM_YhZ) or [BaiduYun](https://pan.baidu.com/s/1dwKG2TLvG-R1qeIiE4MjeA#list/path=%2FShare%2FAAAI18%2Fkinetics-skeleton&parentPath=%2FShare).
+
+After uncompressing, build the database for mmskeleton by this command:
+```
+python deprecated/tools/data_processing/kinetics_gendata.py --data_path <path to kinetics-skeleton>
+```
+
+#### NTU RGB+D
+NTU RGB+D can be downloaded from [their website](http://rose1.ntu.edu.sg/datasets/actionrecognition.asp).
+Only the **3D skeletons** (5.8GB) modality is required for our experiments. After downloading, use the following command to build the database for training or evaluation on mmskeleton:
+```
+python deprecated/tools/data_processing/ntu_gendata.py --data_path <path to nturgbd_skeletons>
+```
+where `<path to nturgbd_skeletons>` is the directory containing the 3D skeleton annotations of the NTU RGB+D dataset you downloaded.
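+
+After generation, a rough sanity check like the sketch below can confirm the output is readable; it assumes the scripts write ST-GCN style `*_data.npy` / `*_label.pkl` pairs, so adjust the paths to whatever actually appears in your output directory.
+
+```python
+import pickle
+
+import numpy as np
+
+# Hypothetical output paths; change them to match your generated database.
+data = np.load('data/NTU-RGB-D/xsub/train_data.npy', mmap_mode='r')
+with open('data/NTU-RGB-D/xsub/train_label.pkl', 'rb') as f:
+    sample_names, labels = pickle.load(f)
+
+print('data shape (N, C, T, V, M):', data.shape)
+print('number of labeled samples:', len(labels))
+```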
diff --git a/internlm_langchain/knowledge_base/MMSkeleton/content/START_POSE_ESTIMATION.md b/internlm_langchain/knowledge_base/MMSkeleton/content/START_POSE_ESTIMATION.md
new file mode 100644
index 00000000..bbf16eec
--- /dev/null
+++ b/internlm_langchain/knowledge_base/MMSkeleton/content/START_POSE_ESTIMATION.md
@@ -0,0 +1,64 @@
+## Start Pose Estimation
+
+We provide a demo for video-based pose estimation:
+```shell
+mmskl pose_demo [--video $VIDEO_PATH] [--gpus $GPUS] [--$MORE_OPTIONS]
+```
+This demo predicts pose sequences by sequentially feeding frames into the image-based human detector and the pose estimator. By default, they are **cascade-rcnn** [1] and **hrnet** [2] respectively.
+We tested our demo on 8 TITAN X GPUs and achieved real-time speed (27.1 fps). To check the full usage, please run `mmskl pose_demo -h`. You can also refer to [pose_demo.yaml](../configs/pose_estimation/hrnet/pose_demo.yaml) for detailed configurations.
+
+We also provide another demo `pose_demo_HD` with a slower but more powerful detector **htc** [3]. Similarly, run:
+```shell
+mmskl pose_demo_HD [--video $VIDEO_PATH] [--gpus $GPUS] [--$MORE_OPTIONS]
+```
+
+
+### High-level APIs for testing images
+
+Here is an example of building the pose estimator and testing it on given images (here, the frames of a video).
+```python
+import mmcv
+from mmskeleton.apis import init_pose_estimator, inference_pose_estimator
+
+cfg = mmcv.Config.fromfile('configs/apis/pose_estimator.cascade_rcnn+hrnet.yaml')
+video = mmcv.VideoReader('resource/data_example/skateboarding.mp4')
+
+model = init_pose_estimator(**cfg, device=0)
+for i, frame in enumerate(video):
+ result = inference_pose_estimator(model, frame)
+    print('Processing frame {}'.format(i))
+
+    # process the result here, e.g. visualize or save the predicted keypoints
+
+```
+
+### Training and Test a Pose Estimator
+Coming soon...
+
+
+
+### Reference
+```
+@inproceedings{cai2018cascade,
+ title={Cascade r-cnn: Delving into high quality object detection},
+ author={Cai, Zhaowei and Vasconcelos, Nuno},
+ booktitle={Proceedings of the IEEE conference on computer vision and pattern recognition},
+ pages={6154--6162},
+ year={2018}
+}
+
+@article{sun2019deep,
+ title={Deep high-resolution representation learning for human pose estimation},
+ author={Sun, Ke and Xiao, Bin and Liu, Dong and Wang, Jingdong},
+ journal={arXiv preprint arXiv:1902.09212},
+ year={2019}
+}
+
+@inproceedings{chen2019hybrid,
+ title={Hybrid task cascade for instance segmentation},
+ author={Chen, Kai and Pang, Jiangmiao and Wang, Jiaqi and Xiong, Yu and Li, Xiaoxiao and Sun, Shuyang and Feng, Wansen and Liu, Ziwei and Shi, Jianping and Ouyang, Wanli and others},
+ booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
+ pages={4974--4983},
+ year={2019}
+}
+```
\ No newline at end of file
diff --git a/internlm_langchain/knowledge_base/MMSkeleton/content/START_RECOGNITION.md b/internlm_langchain/knowledge_base/MMSkeleton/content/START_RECOGNITION.md
new file mode 100644
index 00000000..4bbdce4d
--- /dev/null
+++ b/internlm_langchain/knowledge_base/MMSkeleton/content/START_RECOGNITION.md
@@ -0,0 +1,62 @@
+## Start Action Recognition Using ST-GCN
+
+This repository holds the codebase for the paper:
+
+**Spatial Temporal Graph Convolutional Networks for Skeleton-Based Action Recognition**, Sijie Yan, Yuanjun Xiong and Dahua Lin, AAAI 2018. [[Arxiv Preprint]](https://arxiv.org/abs/1801.07455)
+
+