MiniMax M2 API

All You Need To Know About This Model

Overview

Model Provider: MiniMax
Model Type: Code
State: Ready

Key Specs

Quantization: FP8
Parameters: 230B total (10B activated)
Context: 128K tokens
Pricing: $0.25 input / $1.00 output

Introduction

MiniMax-M2 is a compact, high-efficiency Mixture-of-Experts (MoE) model optimized for elite performance in coding and agentic workflows.

With only 10 billion active parameters (out of 230B total), it delivers near-frontier intelligence in areas like end-to-end tool use, multi-step task execution, and complex code generation, matching the performance of much larger models. Its small activation footprint enables fast inference, low latency, and cost-effective deployment, making it ideal for large-scale agents and responsive, reasoning-driven developer assistants.
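To make the agentic side of this concrete, the sketch below runs one end-to-end tool-use round trip: the model is offered a function, requests a call, and then composes its answer from the tool's result. It assumes the Canopywave endpoint shown in the usage section below is OpenAI-compatible, including the standard tools / tool_calls schema; the get_weather function and its stubbed result are purely illustrative, not part of this API's documentation.

# A minimal sketch of an end-to-end tool-use round trip with MiniMax-M2.
# Assumptions (not documented above): the endpoint is OpenAI-compatible,
# including the standard tools / tool_calls schema; get_weather is a
# stand-in for a real tool and its result is stubbed.
import json
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://inference.canopywave.io/v1",  # endpoint from the usage section below
    api_key=os.environ["CANOPYWAVE_API_KEY"],
)

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

messages = [{"role": "user", "content": "What's the weather in Berlin right now?"}]

# Step 1: the model decides whether it needs the tool.
reply = client.chat.completions.create(
    model="minimax/minimax-m2", messages=messages, tools=tools
).choices[0].message

if reply.tool_calls:
    # Step 2: run each requested call (here a stub) and return the results.
    messages.append(reply)
    for call in reply.tool_calls:
        args = json.loads(call.function.arguments)
        result = {"city": args["city"], "temperature_c": 18}  # stubbed tool output
        messages.append({"role": "tool", "tool_call_id": call.id,
                         "content": json.dumps(result)})
    # Step 3: the model writes its final answer from the tool output.
    final = client.chat.completions.create(
        model="minimax/minimax-m2", messages=messages, tools=tools
    )
    print(final.choices[0].message.content)
else:
    print(reply.content)

Because only 10B of the 230B parameters are active per token, a multi-call loop like this stays responsive even when an agent issues many tool calls in sequence.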

MiniMax M2 API Usage

Model: minimax/minimax-m2
Endpoint: https://inference.canopywave.io/v1

curl -X POST https://inference.canopywave.io/v1 \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $CANOPYWAVE_API_KEY" \
  -d '{
    "model": "minimax/minimax-m2",
    "messages": [
      {"role": "system", "content": "You are a helpful assistant."},
      {"role": "user", "content": "please tell me a story."}
    ]
  }'