Models
The DecentralGPT Network supports a variety of GPT models, both open-source and closed-source.
Users can select the model best suited to each task.
Model developers can also submit their own models to the DecentralGPT Network.
All user data is encrypted and stored on a decentralized storage network, making it inaccessible to unauthorized parties.
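To illustrate the model-selection flow described above, here is a minimal sketch of building a task request that targets a specific model. The endpoint shape, payload fields, and function name are assumptions for illustration only, not the project's documented API.

```python
import json

# Hypothetical sketch: the field names below ("model", "prompt", "max_tokens")
# are assumptions, not DecentralGPT's documented request schema.
def build_completion_request(model: str, prompt: str, max_tokens: int = 256) -> str:
    """Build a JSON payload that selects a specific model for a task."""
    payload = {
        "model": model,          # e.g. "Llama3.1-405B" or "Qwen2.5-72B"
        "prompt": prompt,
        "max_tokens": max_tokens,
    }
    return json.dumps(payload)

# Usage: the same request shape, pointed at a different model as needed.
request_body = build_completion_request("Llama3.1-405B", "Summarize this text: ...")
print(request_body)
```

Swapping the `model` value is all that changes between requests, which matches the catalog below: each entry is a selectable backend for the same task interface.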
We already support the world’s most powerful LLM: Llama 3.1 405B.
Meta Llama3.1 405B
The world’s largest and most capable openly available foundation model.
- Development Team: Meta
- Launch Date: 2024.7
- Model Parameters: 405B
- Features: The best open-source model in the world.
Qwen2.5-72B
An open-source large language model with 72B parameters, trained by Tongyi Qianwen.
- Development Team: Tongyi Qianwen (Alibaba Cloud)
- Launch Date: 2024.9
- Model Parameters: 72B
- Features: Supports 27 languages and long texts of up to 128K tokens; high memory utilization and optimization; user-friendly model interfaces.
Nemotron 70B
One of Nvidia’s largest LLMs.
- Development Team: Nvidia
- Launch Date: 2024.10
- Model Parameters: 70B
- Features: Innovative technical architecture, efficient training data, and promoting sustainable development of the AI ecosystem.
NVLM-D-72B
Nvidia’s multimodal LLM.
- Development Team: Nvidia
- Launch Date: 2024.10
- Model Parameters: 72B
- Features: Multimodal capabilities, exceptional text processing abilities, and outstanding mathematical reasoning skills.
DeepSeek-Coder-V2
An open-source Mixture-of-Experts code language model that achieves performance comparable to GPT-4 Turbo on code-specific tasks.
- Development Team: DeepSeek
- Launch Date: 2024.6
- Model Parameters: 236B total (21B active)
- Features: Intelligent Code Completion, Automatic Code Review, Interactive Development, Real-time Collaboration, and Documentation Platform.
Qwen2.5-Coder-32B
Qwen2.5-Coder is the latest series of code-specific Qwen large language models.
- Development Team: Tongyi Qianwen (Alibaba Cloud)
- Launch Date: 2024.9
- Model Parameters: 32B
- Features: Context-Aware Suggestions, Automatic Code Review, Real-Time Collaboration.