Large Model Practical Camp: Lesson 4 Homework

This is my homework for Lesson 4 of the Large Model Practical Camp.

Basic Assignment

  • Data generation
import json
import random

if __name__ == '__main__':
    # Fill in your name
    name = 'Charles'
    # Number of samples to generate
    n = 10000
    data = [
        {
            "conversation": [
                {
                    "input": "请做一下自我介绍",
                    "output": "我是{}的小助手,内在是上海AI实验室书生·浦语的7B大模型哦".format(name)
                }
            ]
        },
        {
            "conversation": [
                {
                    "input": "你是谁?",
                    "output": "我是{}的小助手,内在是上海AI实验室书生·浦语的7B大模型哦".format(name)
                }
            ]
        },
        {
            "conversation": [
                {
                    "input": "Introduce yourself",
                    "output": "I am the personal assistant of {} with the core of InternLM-7B!".format(name)
                }
            ]
        },
        {
            "conversation": [
                {
                    "input": "你好",
                    "output": "你好啊,我是{}的小助手,请问有什么可以帮助你的吗?".format(name)
                }
            ]
        },
        {
            "conversation": [
                {
                    "input": "Hello!",
                    "output": "Hi, I am the personal assistant of {}. How can I help you?".format(name)
                }
            ]
        }
    ]

    # Draw n samples at random from the five templates above
    data_new = []
    for i in range(n):
        data_new.append(random.choice(data))

    with open('personal_train.json', 'w', encoding='utf-8') as f:
        json.dump(data_new, f, ensure_ascii=False, indent=4)
  • Training
xtuner train internlm_chat_7b_qlora_personal_e3.py
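For this command to pick up the new dataset, the config copied from xtuner's built-in templates needs a few edits first. A minimal sketch of the relevant lines, assuming the stock InternLM-chat-7B QLoRA config layout (variable names follow xtuner's templates and may differ; max_epochs is illustrative):

# Key edits in internlm_chat_7b_qlora_personal_e3.py (a sketch, not the full config)
pretrained_model_name_or_path = '/root/model/Shanghai_AI_Laboratory/internlm-chat-7b'
data_path = './personal_train.json'
max_epochs = 3  # illustrative; tune against how quickly the model memorizes

# personal_train.json is already in the conversation format xtuner expects,
# so it can be loaded as plain JSON with no dataset mapping function, e.g.:
#   dataset=dict(type=load_dataset, path='json', data_files=dict(train=data_path)),
#   dataset_map_fn=None,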
  • Convert to an Adapter and merge
mkdir adapter
# Work around an MKL threading-layer conflict that can crash the conversion
export MKL_THREADING_LAYER=GNU
# Convert the training checkpoint into a HuggingFace-format adapter
xtuner convert pth_to_hf ./internlm_chat_7b_qlora_personal_e3.py ./work_dirs/internlm_chat_7b_qlora_personal_e3/epoch_1.pth ./adapter/
# Merge the adapter back into the base model
xtuner convert merge /root/model/Shanghai_AI_Laboratory/internlm-chat-7b ./adapter ./merged --max-shard-size 2GB
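The ./adapter directory produced by pth_to_hf is a HuggingFace-format PEFT adapter, so it can be sanity-checked against the base model before merging. A minimal sketch, assuming peft is installed (untested here):

from peft import PeftModel
from transformers import AutoModelForCausalLM

# If this loads without errors, the adapter files are consistent with the base model
base = AutoModelForCausalLM.from_pretrained(
    '/root/model/Shanghai_AI_Laboratory/internlm-chat-7b',
    trust_remote_code=True, torch_dtype='auto')
model = PeftModel.from_pretrained(base, './adapter')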
  • Command-line deployment test
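No test transcript survives in this writeup; a minimal smoke test of the merged weights could look like the sketch below (InternLM chat models expose a .chat() helper when loaded with trust_remote_code=True). The same check can also be run interactively with xtuner chat ./merged --prompt-template internlm_chat.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('./merged', trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    './merged', trust_remote_code=True, torch_dtype=torch.float16).cuda().eval()

# The fine-tuned model should now identify itself as the personal assistant
response, history = model.chat(tokenizer, '请做一下自我介绍', history=[])
print(response)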

After experimenting with various dataset sizes, I found that to get the desired behavior when training on a dataset containing only a single type of question, overfitting is unavoidable. I believe mixing some question-answer pairs unrelated to the target into the dataset could preserve the model's generalization ability; if compute is still available after the course ends, I may try this.
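A sketch of that idea (untested; general_pairs.json is a hypothetical file of generic Q&A already converted to the same conversation format):

import json
import random

with open('personal_train.json', encoding='utf-8') as f:
    identity = json.load(f)
with open('general_pairs.json', encoding='utf-8') as f:  # hypothetical generic data
    general = json.load(f)

# Keep the identity pairs a minority so general ability is not overwritten
mixed = identity + random.sample(general, k=min(len(general), 4 * len(identity)))
random.shuffle(mixed)

with open('mixed_train.json', 'w', encoding='utf-8') as f:
    json.dump(mixed, f, ensure_ascii=False, indent=4)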

Advanced Assignment

  • Upload the model Adapter to Hugging Face
from huggingface_hub import HfApi, login
import os

if __name__ == '__main__':
    login(token='YOUR_TOKEN')
    api = HfApi()
    # Upload every file in the adapter directory one by one
    files = os.listdir('./adapter')
    for f in files:
        path = os.path.join('./adapter', f)
        api.upload_file(
            path_or_fileobj=path,
            path_in_repo=f,
            repo_id="CharlesGao/personal-finetuned-internlm-7b",
            repo_type="model",
        )
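The hub client can likely do the same in one call; a sketch using upload_folder (the repo must exist first, hence create_repo with exist_ok):

from huggingface_hub import HfApi, login

login(token='YOUR_TOKEN')
api = HfApi()
# Idempotent: succeeds even if the repo already exists
api.create_repo("CharlesGao/personal-finetuned-internlm-7b", exist_ok=True)
api.upload_folder(
    folder_path='./adapter',
    repo_id="CharlesGao/personal-finetuned-internlm-7b",
    repo_type="model",
)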

Click here to visit.

  • Deploy to OpenXLab (done together with the Lesson 5 advanced assignment)

Click here to visit.