Models

DecentralGPT Network encompasses various GPT models, including both open-source and closed-source models.

Users can select different models to perform tasks as needed.
Model developers can also submit their models to the DecentralGPT Network.
All user data is encrypted and stored on a decentralized storage network, making it inaccessible to unauthorized parties.
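Per-task model selection can be sketched as an OpenAI-style chat request. This is a hypothetical illustration: the endpoint URL and model identifier strings below are placeholders, not DecentralGPT’s documented API.

```python
import json

# Hypothetical sketch of selecting a model per task. The endpoint and the
# model name strings are illustrative placeholders, not a documented API.
API_URL = "https://api.example.decentralgpt/v1/chat/completions"  # placeholder

def build_chat_request(model: str, prompt: str) -> str:
    """Serialize an OpenAI-style chat request that pins a specific model."""
    payload = {
        "model": model,  # e.g. "Llama-3.1-405B", "Qwen2-72B", "Codestral"
        "messages": [{"role": "user", "content": prompt}],
    }
    return json.dumps(payload)

# A coding task might target Codestral, a reasoning task Llama 3.1 405B, etc.
body = build_chat_request("Llama-3.1-405B", "Summarize decentralized storage.")
```

The same request shape works for any model in the catalogue below; only the `model` field changes.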
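The encrypted-storage flow can be illustrated with a minimal client-side sketch. Nothing here reflects DecentralGPT’s actual scheme: the SHAKE-256 keystream cipher and SHA-256 content ID below are stand-ins for a production AEAD (such as AES-GCM) and the storage network’s real content-addressing format.

```python
import hashlib
import secrets

# Illustrative sketch only: data is encrypted client-side, so the storage
# network holds only ciphertext, addressed by its hash. Assumed scheme, not
# DecentralGPT's actual implementation.

def xor_encrypt(key: bytes, nonce: bytes, data: bytes) -> bytes:
    """Stream cipher: XOR with a SHAKE-256 keystream (decrypts the same way)."""
    keystream = hashlib.shake_256(key + nonce).digest(len(data))
    return bytes(d ^ k for d, k in zip(data, keystream))

def content_id(blob: bytes) -> str:
    """Decentralized stores typically address blobs by hash (a CID-like key)."""
    return hashlib.sha256(blob).hexdigest()

key = secrets.token_bytes(32)      # stays with the user, never uploaded
nonce = secrets.token_bytes(16)
ciphertext = xor_encrypt(key, nonce, b"user conversation data")
cid = content_id(ciphertext)       # what the network sees and stores
plaintext = xor_encrypt(key, nonce, ciphertext)  # only the key holder recovers this
```

Because only the ciphertext and its hash leave the client, nodes storing the blob cannot read it without the user’s key.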

We already support Llama 3.1 405B, the world’s most capable openly available LLM:

| Category | Benchmark | Llama 3.1 405B | Claude 3.5 Sonnet | GPT-4 (0125) | GPT-4 Omni |
| --- | --- | --- | --- | --- | --- |
| General | MMLU (0-shot, CoT) | 88.6 | 88.3 | 85.4 | 88.7 |
| | MMLU PRO (5-shot, CoT) | 73.3 | 77.0 | 64.8 | 74.0 |
| | IFEval | 88.6 | 88.0 | 84.3 | 85.6 |
| Code | HumanEval (0-shot) | 89.0 | 92.0 | 86.6 | 90.2 |
| | MBPP EvalPlus (base) (0-shot) | 88.6 | 90.5 | 83.6 | 87.8 |
| Math | GSM8K (8-shot, CoT) | 96.8 | 96.4 | 94.2 | 96.1 |
| | MATH (0-shot, CoT) | 73.8 | 71.1 | 64.5 | 76.6 |
| Reasoning | ARC Challenge (0-shot) | 96.9 | 96.7 | 96.4 | 96.7 |
| | GPQA (0-shot, CoT) | 51.1 | 59.4 | 41.4 | 53.6 |
| Tool Use | BFCL | 88.5 | 90.2 | 88.3 | 80.5 |
| | Nexus | 58.7 | 45.7 | 50.3 | 56.1 |
| Long context | ZeroSCROLLS/QuALITY | 95.2 | 90.5 | 95.2 | 90.5 |
| | InfiniteBench/En.MC | 83.4 | — | 72.1 | 82.5 |
| | NIH/Multi-needle | 98.1 | 90.8 | 100.0 | 100.0 |
| Multilingual | Multilingual MGSM (0-shot) | 91.6 | 91.6 | 85.9 | 90.5 |

Meta Llama 3.1 405B

The world’s largest and most capable openly available foundation model.

  • Development Team: Meta
  • Launch Date: 2024.7
  • Model Parameters: 405B
  • Features: The best open-source model in the world.

Mistral Large 2-123B

Mistral’s new-generation flagship model. Compared to its predecessor, Mistral Large 2 is significantly more capable in code generation, mathematics, and reasoning.

  • Development Team: Mistral
  • Launch Date: 2024.7
  • Model Parameters: 123B
  • Features: Excellent code generation and math reasoning capabilities, support for over 80 programming languages, high-performance benchmark results.

Qwen2-72B

An open-source large language model trained by Tongyi Qianwen with 72B parameters.

  • Development Team: Tongyi Qianwen (Alibaba Cloud)
  • Launch Date: 2024.5
  • Model Parameters: 72B
  • Features: Support for 27 languages, long contexts of up to 128K tokens, high memory utilization and optimization, user-friendly model interfaces.

Gemma2-27B

A lightweight, high-performance open large language model launched by Google, specifically designed for efficient operation on resource-constrained devices.

  • Development Team: Google
  • Launch Date: 2024.6
  • Model Parameters: 27B
  • Features: Outstanding performance, efficient operation, ultra-fast inference, easy deployment, and wide compatibility.

Codestral

An open-source code-generation model trained by Mistral AI with 22B parameters.

  • Development Team: Mistral AI
  • Launch Date: 2024.5
  • Model Parameters: 22B
  • Features: Code completion and generation, error detection and repair, multi-language compatibility.