Model Library
We provide top-tier open-source models of various types and support your diverse deployment needs for flexible, high-performance AI applications.
Featured Models
CHAT · LLAMA 3.3 70B INSTRUCT · 70B parameters · 128K context
CHAT · GEMMA 3 27B · 27B parameters · 32K context
CHAT · GPT-OSS 120B · 120B parameters · 128K context
All Models
CHAT · LLAMA 3.3 8B INSTRUCT · 8B parameters · 128K context
CHAT · LLAMA 3.3 70B INSTRUCT · 70B parameters · 128K context
CHAT · GEMMA 3 27B · 27B parameters · 32K context
CHAT · GPT-OSS 120B · 120B parameters · 128K context
CHAT · DEEPSEEK V3.2 EXP · 685B parameters · 128K context
CHAT · QWEN 2.5 7B INSTRUCT · 7B parameters · 128K context
CHAT · MIXTRAL 8X22B INSTRUCT · 141B parameters · 64K context
CHAT · GPT-OSS 20B · 20B parameters · 128K context
CHAT · PHI-3 MEDIUM INSTRUCT · 14B parameters · 128K context
Which deployment fits your needs?

Serverless Endpoints
Canopy Wave gives you instant access to the most popular OSS models, optimized for cost, speed, and quality on the fastest AI cloud. (See the example request after this comparison.)
Simplest setup
Highest flexibility
Provides the most popular models on the market
Pay per token

Dedicated Endpoints
Canopy Wave allows you to create on-demand deployments of GPU clusters that are reserved for your own use.
No hard rate limits
Predictable performance
Custom large models can be deployed
Pay for GPU runtime
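
For illustration, below is a minimal sketch of calling a serverless endpoint from Python. It assumes an OpenAI-compatible chat completions API; the base URL, environment variable names, and model identifier are placeholders for illustration rather than confirmed details of the Canopy Wave API, so substitute the values shown in your console.

import os
import requests

# Hypothetical values for illustration only: replace the base URL, API key
# variable, and model identifier with the ones shown in your Canopy Wave console.
BASE_URL = os.environ.get("CANOPYWAVE_BASE_URL", "https://api.example.com/v1")
API_KEY = os.environ["CANOPYWAVE_API_KEY"]

# The request shape assumes an OpenAI-compatible /chat/completions endpoint.
response = requests.post(
    f"{BASE_URL}/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "llama-3.3-70b-instruct",  # a model from the library above
        "messages": [
            {"role": "user", "content": "What does a 128K context window allow?"}
        ],
        "max_tokens": 200,
    },
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])

On a dedicated endpoint the request shape would typically stay the same; the difference described above is in billing (GPU runtime rather than per token) and in having reserved capacity with no hard rate limits.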