
Commit 240c158

Commit message: update

1 parent: 7d81438

File tree: 3 files changed, +3 −1 lines changed


README.md

Lines changed: 1 addition & 0 deletions
@@ -103,6 +103,7 @@
 </table>

 ## 🎇 Recent Updates
+- 【2025.4】[ThinkLLM](https://github.com/aJupyter/ThinkLLM/tree/main/LLM) is a lightweight, efficient repository of large language model algorithm implementations; it provides a BPE training guide (with EmoLLM support).
 - 【2025.3】[EmoLLM (GGUF format, fp16 precision)](https://huggingface.co/collections/L0ve1ace/psychology-llm-gguf-67cc766eaf0a3f01c6e39aa6), fully fine-tuned from InternLM2.5-7B-chat, has been released; usage instructions will follow. @Rycen7822 @Slipstream-Max
 - 【2025.2】Updated the [first mental health R1 distillation dataset](./datasets/psychology-10k-Deepseek-R1-zh.json) @Kedreamix
 - 【2024.09.14】Open-sourced the LoRA fine-tuned model based on Qwen2-7B-Instruct. Fine-tuning configuration file: [Qwen2-7B-Instruct_lora.py](./xtuner_config/Qwen2-7B-Instruct_lora.py); model weights: [ModelScope](https://www.modelscope.cn/models/aJupyter/EmoLLM_Qwen2-7B-Instruct_lora/)
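The ThinkLLM entry added above centres on a BPE training guide. As a rough, generic illustration of what BPE training involves (not ThinkLLM's own code), the sketch below trains a byte-level BPE tokenizer with the Hugging Face `tokenizers` library; the corpus path, vocabulary size, and special tokens are placeholder assumptions.

```python
# Minimal BPE tokenizer training sketch using the Hugging Face `tokenizers` library.
# This is NOT ThinkLLM's implementation; corpus path, vocab size, and special tokens are hypothetical.
from tokenizers import Tokenizer, models, pre_tokenizers, trainers

tokenizer = Tokenizer(models.BPE(unk_token="[UNK]"))   # start from an empty BPE model
tokenizer.pre_tokenizer = pre_tokenizers.ByteLevel()   # byte-level pre-tokenization

trainer = trainers.BpeTrainer(
    vocab_size=8000,                                   # placeholder vocabulary size
    special_tokens=["[UNK]", "[PAD]", "[BOS]", "[EOS]"],
)
tokenizer.train(files=["corpus.txt"], trainer=trainer)  # corpus.txt is a placeholder path
tokenizer.save("bpe_tokenizer.json")

print(tokenizer.encode("今天心情不太好").tokens)        # quick sanity check on a sample sentence
```

The saved `bpe_tokenizer.json` can later be reloaded with `Tokenizer.from_file("bpe_tokenizer.json")`.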

README_EN.md

Lines changed: 1 addition & 0 deletions
@@ -107,6 +107,7 @@ The Model aims to fully understand and promote the mental health of individuals,
 </table>

 ## Recent Updates
+- [2025.4] [ThinkLLM](https://github.com/aJupyter/ThinkLLM/tree/main/LLM) is a lightweight, efficient repository of large language model algorithm implementations, providing a BPE training guide (with EmoLLM support).
 - [2025.3] Based on full fine-tuning of InternLM2.5-7B-chat, [EmoLLM (GGUF format, fp16 precision)](https://huggingface.co/collections/L0ve1ace/psychology-llm-gguf-67cc766eaf0a3f01c6e39aa6) has been released; usage instructions will follow. @Rycen7822 @Slipstream-Max
 - [2025.2] Updated the [first mental health R1 distillation dataset](./datasets/psychology-10k-Deepseek-R1-zh.json) @Kedreamix
 - [2024.09.14] The LoRA fine-tuned model based on Qwen2-7B-Instruct is open-sourced. Fine-tuning configuration file: [Qwen2-7B-Instruct_lora.py](./xtuner_config/Qwen2-7B-Instruct_lora.py); model weights: [ModelScope](https://www.modelscope.cn/models/aJupyter/EmoLLM_Qwen2-7B-Instruct_lora/)
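For the 2024.09.14 entry, the LoRA weights live on ModelScope. A minimal loading sketch, assuming the `modelscope` `snapshot_download` helper together with `transformers` and `peft` (the base-model ID, device settings, and prompt are illustrative and not taken from the EmoLLM docs):

```python
# Sketch: download the EmoLLM LoRA adapter from ModelScope and attach it to the base model.
# Assumes modelscope, transformers, peft (and accelerate for device_map) are installed.
from modelscope import snapshot_download
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Repo ID taken from the README's ModelScope link.
adapter_dir = snapshot_download("aJupyter/EmoLLM_Qwen2-7B-Instruct_lora")

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2-7B-Instruct", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-7B-Instruct")
model = PeftModel.from_pretrained(base, adapter_dir)   # merge-free LoRA inference

# Example prompt (illustrative only).
inputs = tokenizer("最近压力很大，睡不好。", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

If a standalone checkpoint is preferred, `model.merge_and_unload()` folds the adapter back into the base weights.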

README_JP.md

Lines changed: 1 addition & 1 deletion
@@ -104,7 +104,7 @@
 </table>

 ## Recent Updates
-
+- 【2025.4】[ThinkLLM](https://github.com/aJupyter/ThinkLLM/tree/main/LLM) is a lightweight, efficient repository of large language model algorithm implementations, providing a BPE training guide (with EmoLLM support).
 - 【2025.3】[EmoLLM (GGUF format, fp16 precision)](https://huggingface.co/collections/L0ve1ace/psychology-llm-gguf-67cc766eaf0a3f01c6e39aa6), fully fine-tuned from InternLM2.5-7B-chat, has been released; usage instructions will follow. @Rycen7822 @Slipstream-Max
 - 【2025.2】Updated the first mental health R1 distillation dataset: [psychology-10k-Deepseek-R1-zh.json](./datasets/psychology-10k-Deepseek-R1-zh.json) @Kedreamix
 - 【2024.09.14】The LoRA fine-tuned model based on Qwen2-7B-Instruct has been open-sourced. Fine-tuning configuration file: [Qwen2-7B-Instruct_lora.py](./xtuner_config/Qwen2-7B-Instruct_lora.py); model weights: [ModelScope](https://www.modelscope.cn/models/aJupyter/EmoLLM_Qwen2-7B-Instruct_lora/)
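The GGUF release mentioned in the 2025.3 entry is packaged for llama.cpp-style runtimes; official usage notes are promised later, so the sketch below is only a generic `llama-cpp-python` example, and the local model file name is a placeholder rather than the actual artifact name in the Hugging Face collection.

```python
# Generic GGUF inference sketch with llama-cpp-python.
# The model file name is a PLACEHOLDER; check the Hugging Face collection for the real artifact.
from llama_cpp import Llama

llm = Llama(
    model_path="./EmoLLM-internlm2_5-7b-chat-fp16.gguf",  # hypothetical local path
    n_ctx=4096,                                           # context window
    n_gpu_layers=-1,                                      # offload all layers to GPU if available
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "最近总是焦虑，怎么办？"}],  # illustrative prompt
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```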

0 commit comments