DeepSeek V3.1 API

All You Need To Know About DeepSeek V3.1 API

Overview

Model Provider: DeepSeek
Model Type: LLM / Reasoning
State: Ready

Key Specs

Quantization: FP8
Parameters: 671B (37B activated)
Context: 128K
Pricing: $0.27 input / $1.00 output

Introduction

DeepSeek-V3.1 is a hybrid reasoning model that supports both a "thinking" and a "non-thinking" mode, switched through its chat template.

Built on the DeepSeek-V3 architecture with 37 billion active parameters (out of 671B total), it has been post-trained to extend the context window to 128K tokens. This release brings significant improvements in tool calling, agentic tasks, and reasoning efficiency: in thinking mode it reaches answer quality comparable to DeepSeek-R1-0528 while responding faster. These strengths make it well suited to complex coding, research, and agentic workflows.
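Because the request format mirrors OpenAI-style chat completions (see the curl example in the Usage section below), tool calling is typically exercised by adding a tools array to the request body. The sketch below is illustrative only: get_weather and its parameter schema are hypothetical, and whether this deployment forwards the tools field to the model is an assumption.

curl -X POST https://inference.canopywave.io/v1 \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $CANOPYWAVE_API_KEY" \
  -d '{
    "model": "deepseek/deepseek-chat-v3.1",
    "messages": [
      {"role": "user", "content": "What is the weather in Paris?"}
    ],
    "tools": [
      {
        "type": "function",
        "function": {
          "name": "get_weather",
          "description": "Get the current weather for a city",
          "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"]
          }
        }
      }
    ]
  }'

In OpenAI-compatible APIs, a model that decides to use the function typically returns a tool_calls entry whose arguments you execute yourself and send back in a follow-up message.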

DeepSeek V3.1 API Usage

Model endpoint: deepseek/deepseek-chat-v3.1


curl -X POST https://inference.canopywave.io/v1 \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $CANOPYWAVE_API_KEY" \
  -d '{
    "model": "deepseek/deepseek-chat-v3.1",
    "messages": [
      {"role": "system", "content": "You are a helpful assistant."},
      {"role": "user", "content": "Please tell me a story."}
    ]
  }'
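DeepSeek-V3.1 switches between thinking and non-thinking mode through its chat template rather than a dedicated sampling parameter. Some OpenAI-compatible inference servers expose chat-template switches through a chat_template_kwargs field in the request body; whether this deployment forwards that field (and the stream flag) is an assumption, so treat the request below as a sketch rather than a confirmed interface.

curl -X POST https://inference.canopywave.io/v1 \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $CANOPYWAVE_API_KEY" \
  -d '{
    "model": "deepseek/deepseek-chat-v3.1",
    "messages": [
      {"role": "user", "content": "How many primes are there below 100?"}
    ],
    "chat_template_kwargs": {"thinking": true},
    "stream": true
  }'

If the deployment rejects or ignores these extra fields, remove them; the request then behaves like the basic example above.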