MiniMax-M2.5 API
All You Need To Know About MiniMax-M2.5 API

Overview

Model Provider: MiniMax
Model Type: CODE/LLM
State: Ready

Key Specs

Quantization: FP8
Parameters:
Context: 205K
Pricing: $0.27 input / $1.08 output / $0.03 cache
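To relate the listed prices to an actual bill, a per-request cost can be estimated from token counts. This is a minimal sketch; it assumes the prices are quoted per one million tokens and that cached input tokens are billed at the cache rate instead of the input rate (both assumptions, not confirmed by this page), and `estimate_cost` is a hypothetical helper.

```python
# Hypothetical cost estimator; assumes listed prices are USD per 1M tokens.
PRICE_PER_M = {"input": 0.27, "output": 1.08, "cache": 0.03}

def estimate_cost(input_tokens: int, output_tokens: int, cached_tokens: int = 0) -> float:
    """Estimate request cost in USD; cached tokens replace input-rate billing."""
    uncached = input_tokens - cached_tokens
    total = (uncached * PRICE_PER_M["input"]
             + output_tokens * PRICE_PER_M["output"]
             + cached_tokens * PRICE_PER_M["cache"])
    return total / 1_000_000

# 100K input tokens (40K served from cache) plus 20K output tokens:
cost = estimate_cost(100_000, 20_000, cached_tokens=40_000)  # 0.039 USD
```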

Introduction

MiniMax-M2.5 is a SOTA large language model designed for real-world productivity. Trained in a diverse range of complex real-world digital working environments, M2.5 builds on the coding expertise of M2.1 and extends into general office work: it is fluent in generating and operating Word, Excel, and PowerPoint files, switching context between diverse software environments, and working across different agent and human teams.

Scoring 80.2% on SWE-Bench Verified, 51.3% on Multi-SWE-Bench, and 76.3% on BrowseComp, M2.5 is also more token-efficient than previous generations, having been trained to optimize its actions and output through planning.

MiniMax-M2.5 API Usage

Model endpoint: minimax/minimax-m2.5


    curl -X POST https://inference.canopywave.io/v1/chat/completions \
      -H "Content-Type: application/json" \
      -H "Authorization: Bearer $CANOPYWAVE_API_KEY" \
      -d '{
        "model": "minimax/minimax-m2.5",
        "messages": [
          {"role": "user", "content": "tell me a story"}
        ],
        "max_tokens": 1000,
        "temperature": 0.7
      }'