https://github.com/CLUEbenchmark/ZeroCLUE
Human evaluation score: three different annotators first studied the training set, then took the test; the final answers were decided by majority vote (the voting step is sketched below the table). ZeroCLUE.M
0
|   | C | E | N |
|---|---|---|---|
| C | 0.0 | 0.0 | 0.0 |
| E | 0.0 | 0.0 | 0.0 |
| N | 0.0 | 0.0 | 0.0 |
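A minimal sketch of the voting procedure described in the human-evaluation entry above. The three-annotator setup comes from the description; the tie-breaking behaviour is an assumption, since the original does not specify one.

```python
from collections import Counter

def majority_vote(labels: list[str]) -> str:
    # Pick the label chosen by the most annotators; ties fall back to
    # whichever label was seen first (an assumption -- the entry above
    # does not say how ties are broken).
    return Counter(labels).most_common(1)[0][0]

# Three annotators, each labeling the same test example with one of
# the C/E/N classes used in the tables on this page.
print(majority_vote(["E", "E", "N"]))  # -> "E"
```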
https://github.com/OpenBMB/CPM-Live
ZeroCLUE.M | CPM-Bee, the second-phase model of the CPM-Live project
10B
|   | C | E | N |
|---|---|---|---|
| C | 0.0 | 0.0 | 0.0 |
| E | 0.0 | 0.0 | 0.0 |
| N | 0.0 | 0.0 | 0.0 |
https://github.com/dbiir/UER-py/
ZeroCLUE.M: We employed the strategy of a unified multiple-choice perspective (sketched below the table).
More than 1B.
|   | C | E | N |
|---|---|---|---|
| C | 0.0 | 0.0 | 0.0 |
| E | 0.0 | 0.0 | 0.0 |
| N | 0.0 | 0.0 | 0.0 |
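The UER-py entry above describes a unified multiple-choice perspective. The sketch below shows the generic idea of that perspective, i.e. rewriting any label set into one shared multiple-choice format; the template and helper function are illustrative assumptions, not the team's actual code.

```python
def to_multiple_choice(text: str, labels: list[str]) -> str:
    # Render an arbitrary classification instance as a multiple-choice
    # question, so one model and one format cover every task.
    options = "\n".join(
        f"({chr(ord('A') + i)}) {label}" for i, label in enumerate(labels)
    )
    return f"{text}\nWhich option fits best?\n{options}"

prompt = to_multiple_choice(
    "The special effects in this movie are stunning.",
    ["positive", "negative"],
)
print(prompt)
# The model scores each option and predicts the highest-scoring one.
```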
ZeroCLUE.M PaddleNLP-UTC (usage sketched below the table)
ZeroCLUE.M
|   | C | E | N |
|---|---|---|---|
| C | 0.0 | 0.0 | 0.0 |
| E | 0.0 | 0.0 | 0.0 |
| N | 0.0 | 0.0 | 0.0 |
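A hedged usage sketch for the PaddleNLP-UTC entry above: recent PaddleNLP releases expose UTC through the Taskflow zero-shot text classification task. The task name and `schema` argument below follow that interface, but treat them as assumptions; they may differ across versions.

```python
from paddlenlp import Taskflow

# Zero-shot classification with a user-supplied label schema; no
# task-specific fine-tuning is involved.
classifier = Taskflow(
    "zero_shot_text_classification",
    schema=["positive", "negative"],  # candidate labels for this task
)
print(classifier("这家餐厅的味道很不错"))
```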
https://github.com/IDEA-CCNL/Fengshenbang-LM
We propose a new tuning framework that lets encoder models such as BERT unify all classification tasks without adding any parameters. Based on this framework, we pre-trained the model on 40 supervised datasets. Technical details will be released soon at https://github.com/IDEA-CCNL/Fengshenbang-LM (a generic sketch of the idea follows the table). ZeroCLUE.M
1.3B
|   | C | E | N |
|---|---|---|---|
| C | 0.0 | 0.0 | 0.0 |
| E | 0.0 | 0.0 | 0.0 |
| N | 0.0 | 0.0 | 0.0 |
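The entry above says the framework adds no parameters, and its technical details were unpublished at the time of submission. The sketch below is therefore only a generic reconstruction of the "no new parameters" idea: reuse BERT's pretrained MLM head to score label words at a [MASK] position, so any classification task becomes masked-token prediction. The checkpoint, prompt, and label words are illustrative assumptions.

```python
import torch
from transformers import BertForMaskedLM, BertTokenizer

name = "bert-base-chinese"  # illustrative checkpoint, not the team's model
tok = BertTokenizer.from_pretrained(name)
model = BertForMaskedLM.from_pretrained(name).eval()

def classify(text: str, label_words: list[str]) -> str:
    # Classification via masked-token prediction: the MLM head already
    # exists in the pretrained checkpoint, so no parameters are added.
    prompt = f"{text}。这句话的情感是[MASK]面。"  # single-char label slot
    inputs = tok(prompt, return_tensors="pt")
    mask_pos = (inputs.input_ids == tok.mask_token_id).nonzero()[0, 1]
    with torch.no_grad():
        logits = model(**inputs).logits[0, mask_pos]
    ids = [tok.convert_tokens_to_ids(w) for w in label_words]
    return label_words[int(torch.argmax(logits[ids]))]

print(classify("这部电影太好看了", ["正", "负"]))  # expect "正"
```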
https://github.com/alibaba/EasyTransfer
ZeroCLUE.M
1.3B+MoE
|   | C | E | N |
|---|---|---|---|
| C | 0.0 | 0.0 | 0.0 |
| E | 0.0 | 0.0 | 0.0 |
| N | 0.0 | 0.0 | 0.0 |
https://github.com/IDEA-CCNL/Fengshenbang-LM/
ZeroCLUE.M
0.7B parameters
|   | C | E | N |
|---|---|---|---|
| C | 0.0 | 0.0 | 0.0 |
| E | 0.0 | 0.0 | 0.0 |
| N | 0.0 | 0.0 | 0.0 |
https://github.com/Langboat/mengzi-zero-shot
Built on the Mengzi-T5 model, with multi-task training on 27 additional datasets and 301 prompts (the prompt formatting is sketched below the table). ZeroCLUE.M
0.22B
|   | C | E | N |
|---|---|---|---|
| C | 0.0 | 0.0 | 0.0 |
| E | 0.0 | 0.0 | 0.0 |
| N | 0.0 | 0.0 | 0.0 |
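The Mengzi entry above mentions multi-task training over 27 extra datasets and 301 prompts. A hedged sketch of the prompt-expansion step follows; the templates, task names, and fields are illustrative stand-ins, not the actual 301 prompts.

```python
# Each labeled example is expanded through several natural-language
# templates; the resulting (input, target) pairs from all datasets are
# mixed to train one seq2seq model such as Mengzi-T5.
TEMPLATES = {
    "sentiment": [
        "Review: {text} Is the sentiment positive or negative?",
        "Decide the sentiment of this review: {text}",
    ],
    "nli": [
        "Premise: {premise} Hypothesis: {hypothesis} "
        "Entailment, neutral, or contradiction?",
    ],
}

def render(task: str, example: dict, target: str) -> list[tuple[str, str]]:
    # One (input, target) pair per template registered for the task.
    return [(tpl.format(**example), target) for tpl in TEMPLATES[task]]

for src, tgt in render("sentiment",
                       {"text": "Fast shipping and great quality."},
                       "positive"):
    print(src, "->", tgt)
```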
asdf
ZeroCLUE.M
10B
|   | C | E | N |
|---|---|---|---|
| C | 0.0 | 0.0 | 0.0 |
| E | 0.0 | 0.0 | 0.0 |
| N | 0.0 | 0.0 | 0.0 |
https://github.com/OpenBMB/CPM-Live
ZeroCLUE.M CPM-Bee, the second-phase model of the CPM-Live project. Demo: https://live.openbmb.org/models/bee
10B
|   | C | E | N |
|---|---|---|---|
| C | 0.0 | 0.0 | 0.0 |
| E | 0.0 | 0.0 | 0.0 |
| N | 0.0 | 0.0 | 0.0 |
https://live.openbmb.org/models/bee
ZeroCLUE.M Dialogue model
10B
|   | C | E | N |
|---|---|---|---|
| C | 0.0 | 0.0 | 0.0 |
| E | 0.0 | 0.0 | 0.0 |
| N | 0.0 | 0.0 | 0.0 |
XXXXXX.com
XXXX-Test, ZeroCLUE.M
330M
|   | C | E | N |
|---|---|---|---|
| C | 0.0 | 0.0 | 0.0 |
| E | 0.0 | 0.0 | 0.0 |
| N | 0.0 | 0.0 | 0.0 |
https://github.com/IDEA-CCNL/Fengshenbang-LM
Erlangshen-MRC (二郎神-MRC) was obtained by continuing to pre-train Erlangshen on machine reading comprehension data. To give the Erlangshen model zero-shot ability, labeled machine reading comprehension datasets were added in the second stage of pre-training. To guarantee the model had never seen the same type of task (and thus meets the zero-shot requirement), datasets similar to, or of the same type as, the evaluation tasks were excluded from pre-training (the MRC recasting is sketched below the table). ZeroCLUE.M
1.3B
|   | C | E | N |
|---|---|---|---|
| C | 0.0 | 0.0 | 0.0 |
| E | 0.0 | 0.0 | 0.0 |
| N | 0.0 | 0.0 | 0.0 |
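The Erlangshen-MRC entry above recasts other tasks as machine reading comprehension during continued pre-training. The sketch below shows the generic shape of that recasting; the field names and the example are illustrative assumptions.

```python
def to_mrc(text: str, question: str, labels: list[str]) -> dict:
    # The input text becomes the passage, the task becomes a question,
    # and the label set becomes the candidate answers to select from.
    return {"context": text, "question": question, "candidates": labels}

example = to_mrc(
    "The battery life is average, but the camera is excellent.",
    "What is the sentiment of this review?",
    ["positive", "negative", "neutral"],
)
print(example)
# Per the entry above, MRC datasets resembling the evaluation tasks are
# excluded from pre-training so the zero-shot condition still holds.
```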
abtest03
abtest03, ZeroCLUE.M
330M
|   | C | E | N |
|---|---|---|---|
| C | 0.0 | 0.0 | 0.0 |
| E | 0.0 | 0.0 | 0.0 |
| N | 0.0 | 0.0 | 0.0 |
https://github.com/OpenBMB/CPM-Live
ZeroCLUE.M. CPM-Bee is an open-source bilingual pre-trained language model with 10B parameters, more than ten native capabilities, strong general language ability, and support for structured input and output (sketched below the table).
10B
|   | C | E | N |
|---|---|---|---|
| C | 0.0 | 0.0 | 0.0 |
| E | 0.0 | 0.0 | 0.0 |
| N | 0.0 | 0.0 | 0.0 |
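The CPM-Bee entry above highlights structured input and output. A hedged sketch of that interface follows: the OpenBMB/CPM-Live repository describes JSON-style requests with an `"<ans>"` placeholder the model fills in; the exact field names may differ between releases.

```python
import json

# Structured request: ordinary fields carry the context and question,
# and the "<ans>" slot marks where the model should write its answer.
request = {
    "input": "北京是中国的首都。",
    "question": "中国的首都是哪里?",
    "<ans>": "",  # filled by the model
}
print(json.dumps(request, ensure_ascii=False))
# A completed response keeps the same structure with the "<ans>" field
# populated, e.g. request["<ans>"] == "北京".
```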
https://live.openbmb.org/models/bee
ZeroCLUE.M
10B
|   | C | E | N |
|---|---|---|---|
| C | 0.0 | 0.0 | 0.0 |
| E | 0.0 | 0.0 | 0.0 |
| N | 0.0 | 0.0 | 0.0 |
XXXXXX.com
XXXXRoberta, ZeroCLUE.M
330M
|   | C | E | N |
|---|---|---|---|
| C | 0.0 | 0.0 | 0.0 |
| E | 0.0 | 0.0 | 0.0 |
| N | 0.0 | 0.0 | 0.0 |
https://github.com/wxj630/Chinese_NLP_With_Transformers
ZeroCLUE.M
0.7B parameters
|   | C | E | N |
|---|---|---|---|
| C | 0.0 | 0.0 | 0.0 |
| E | 0.0 | 0.0 | 0.0 |
| N | 0.0 | 0.0 | 0.0 |
XXXXXX.com
UniCU, ZeroCLUE.M
330M
|   | C | E | N |
|---|---|---|---|
| C | 0.0 | 0.0 | 0.0 |
| E | 0.0 | 0.0 | 0.0 |
| N | 0.0 | 0.0 | 0.0 |