DeepSeek-R1-Distill-Qwen-32B API


All You Need To Know About DeepSeek-R1-Distill-Qwen-32B API

Overview

Model Provider: DeepSeek
Model Type: Code
State: Ready

Key Specs

Quantization: BF16
Parameters: 33B
Context: 128K
Pricing: $0.24 input / $0.24 output

Introduction

DeepSeek-R1-Distill-Qwen-32B is a distilled large language model based on Qwen 2.5 32B and fine-tuned on reasoning outputs generated by DeepSeek R1. It outperforms OpenAI's o1-mini across various benchmarks, achieving new state-of-the-art results for dense models. Representative benchmark results include:

AIME 2024 pass@1: 72.6
MATH-500 pass@1: 94.3
CodeForces rating: 1691

By distilling DeepSeek R1's outputs into a smaller dense model, it delivers reasoning performance competitive with much larger frontier models.

DeepSeek-R1-Distill-Qwen-32B API Usage

Model: deepseek-ai/DeepSeek-R1-Distill-Qwen-32B
Endpoint: https://api.canopywave.io/v1


curl -X POST https://api.canopywave.io/v1 \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $CANOPYWAVE_API_KEY" \
  -d '{
    "model": "deepseek-ai/DeepSeek-R1-Distill-Qwen-32B",
    "messages": [
      {"role": "user", "content": "tell me a story"}
    ],
    "max_tokens": 1000,
    "temperature": 0.7
  }'
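The request body above follows the OpenAI chat-completions format, so the same call can also be made from Python. The sketch below is a minimal example under that assumption; the base URL and the CANOPYWAVE_API_KEY environment variable are taken directly from the curl example, not from separate documentation.

# Minimal Python sketch, assuming the endpoint is OpenAI-compatible.
# base_url and the CANOPYWAVE_API_KEY variable mirror the curl example above.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.canopywave.io/v1",
    api_key=os.environ["CANOPYWAVE_API_KEY"],
)

response = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-R1-Distill-Qwen-32B",
    messages=[{"role": "user", "content": "tell me a story"}],
    max_tokens=1000,
    temperature=0.7,
)

# R1-distilled models typically emit their reasoning inside <think>...</think>
# tags before the final answer, so the returned text may include that block.
print(response.choices[0].message.content)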