Replies: 13 comments
- https://blog.csdn.net/2301_77286822/article/details/135545724?spm=1001.2014.3001.5501
- https://www.yuque.com/creatup/ynrsde/wsrt4sgqo09epw54?singleDoc# (Notes for Lesson 4)
- Assignment 4: https://vvgyb242x83.feishu.cn/wiki/EfeSwxynoiWNSWkU4ydcRqW6nkd?from=from_copylink
- Lesson 4 notes: https://github.com/hui1feng/XTuner-Finetune/blob/main/README.md
- Lesson 4, notes same as the assignment, GitHub link: https://github.com/SaaRaaS-1300/InternLM_openNotebook/blob/main/Lesson-4/Lesson-4-Notebook.md
- Lesson 4 notes
Motivation: to apply a large model to data from your own target domain, there are two fine-tuning approaches: incremental pretraining and instruction following.
Fine-tuning workflow: (flow chart from the lesson)
Incremental pretraining: the training data does not need to be in question-and-answer form; the model only needs to know where a passage begins and ends, so plain declarative text works. System and User can be left empty, keeping only Assistant. Data format: see the hedged sketch below.
Instruction-following fine-tuning: construct data in dialogue form with three roles: System, User, and Assistant.
The three roles exist only to make it convenient to construct data in the right format for the LLM to learn from; in actual use the model's user does not perceive these roles.
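To make the two data formats concrete, here is a minimal sketch assuming XTuner-style JSON "conversation" records; the field names ("conversation", "system", "input", "output") are an assumption based on the course material and may differ between XTuner versions.

```python
import json

# Hedged sketch of the two data formats described above (field names assumed).

# Incremental pretraining: plain text, no Q&A structure.
# System and User ("input") stay empty; only Assistant ("output") carries text.
incremental_sample = {
    "conversation": [
        {"input": "", "output": "InternLM is an open-source large language model series ..."}
    ]
}

# Instruction-following fine-tuning: all three roles are filled in.
instruction_sample = {
    "conversation": [
        {
            "system": "You are a helpful personal assistant.",
            "input": "Please introduce yourself.",
            "output": "Hello, I am your personal assistant built on InternLM.",
        }
    ]
}

# Datasets are stored as a JSON list of such records.
with open("train_data.json", "w", encoding="utf-8") as f:
    json.dump([instruction_sample], f, ensure_ascii=False, indent=2)
```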
Chat templates: to make the workflow above easy to implement, a chat template is usually designed so that the prompt and the instruction-tuning data can be slotted in during fine-tuning; different language models use different templates. The templates are hidden inside the inference code and are never shown to the user, who only types their own content.
Training: the main goal of instruction fine-tuning is for the model to learn the answers we specify, i.e. to learn the Output rather than the Input, so the loss is computed only on the output. Both inference and training wrap the data in the corresponding chat template (see the sketch right after this paragraph).
LoRA & QLoRA: only the parameters of the two low-rank matrices are trained; the whole network does not need to be trained (a minimal sketch appears after the XTuner note below).
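A minimal sketch of how a chat template and output-only loss interact during training. The template string below is purely illustrative (real models such as InternLM define their own), and masking prompt tokens with -100 relies on PyTorch's CrossEntropyLoss ignore_index convention.

```python
from transformers import AutoTokenizer

IGNORE_INDEX = -100  # positions with this label are ignored by CrossEntropyLoss

def build_sample(tokenizer, system, user, assistant):
    # Wrap the raw fields in an (assumed, illustrative) chat template.
    prompt = f"<|System|>:{system}\n<|User|>:{user}\n<|Bot|>:"
    answer = assistant + tokenizer.eos_token

    prompt_ids = tokenizer(prompt, add_special_tokens=False)["input_ids"]
    answer_ids = tokenizer(answer, add_special_tokens=False)["input_ids"]

    input_ids = prompt_ids + answer_ids
    # Loss is computed only on the answer: all prompt positions are masked out.
    labels = [IGNORE_INDEX] * len(prompt_ids) + answer_ids
    return {"input_ids": input_ids, "labels": labels}

if __name__ == "__main__":
    # Downloads the InternLM chat tokenizer from the Hugging Face Hub.
    tok = AutoTokenizer.from_pretrained("internlm/internlm-chat-7b", trust_remote_code=True)
    sample = build_sample(tok, "You are a helpful assistant.",
                          "Who are you?", "I am your personal assistant.")
    print(len(sample["input_ids"]), sample["labels"][:8])
```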
XTuner: takes care of data processing and provides training optimizations and acceleration.
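Returning to the LoRA & QLoRA point above, here is a minimal sketch of the low-rank update idea: the pretrained weight is frozen and only the two small matrices A and B are trained (QLoRA additionally keeps the frozen base weights in quantized, e.g. 4-bit, form). This is an illustration of the technique, not XTuner's or the peft library's actual implementation.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base weight W0 plus a trainable low-rank update B @ A."""

    def __init__(self, in_features, out_features, r=8, alpha=16):
        super().__init__()
        self.base = nn.Linear(in_features, out_features, bias=False)
        self.base.weight.requires_grad_(False)           # frozen pretrained weight
        self.lora_A = nn.Parameter(torch.randn(r, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_features, r))
        self.scaling = alpha / r

    def forward(self, x):
        # y = x W0^T + scaling * (x A^T) B^T
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling

layer = LoRALinear(4096, 4096, r=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable params: {trainable} / {total}")  # only A and B are trainable
```

For a 4096x4096 layer with r=8, the trainable parameters drop from about 16.8M to about 65K (2 x 4096 x 8), which is what makes single-GPU, low-cost fine-tuning practical.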
Summary: with XTuner it is very easy to fine-tune a large model on your own data and turn it into a personal assistant, which brings large-model fine-tuning within reach of ordinary users.
- Lesson 4 notes, same as the assignment: https://blog.csdn.net/weixin_38043453/article/details/135587465
user_avator = "/root/personal_assistant/code/InternLM/doc/imgs/user.png"
robot_avator = "/root/personal_assistant/code/InternLM/doc/imgs/robot.png"
ssh -CNg -L 127.0.0.1:6006:0.0.0.0:6006 -o "StrictHostKeyChecking no" [email protected] -p 33693
(The two avatar paths configure the user and robot icons for the web demo, and the ssh command forwards local port 6006 to port 6006 on the development machine so the demo can be opened in a local browser.)
- Assignment 4: https://www.yuque.com/roguetrader/iu9odz/ef1ui6eig2vsaw0u?singleDoc# (XTuner: Low-Cost Single-GPU Fine-Tuning of Large Models in Practice)
- Lesson 4 notes: https://www.yuque.com/roguetrader/iu9odz/ef1ui6eig2vsaw0u?singleDoc# (XTuner: Low-Cost Single-GPU Fine-Tuning of Large Models in Practice)
- InternLM Lesson-4: https://g4w90egc30.feishu.cn/docx/K8qodGQ27ojqWrxJxIycyin6nRg?from=from_copylink
- Lesson 4 notes (Class 7)