
GPT-OSS 120B API

All You Need To Know About GPT-OSS 120B API

Overview

Model Provider: OpenAI
Model Type: Chat
State: Ready

Key Specs

Quantization: BF16
Parameters: 120B
Context: 128k
Pricing: $0.10 input / $0.40 output

Introduction

GPT-OSS 120B is an open-weight, 117B-parameter Mixture-of-Experts (MoE) language model from OpenAI, designed for reasoning-heavy, agentic, and general-purpose production use cases. Thanks to its efficient MoE architecture, it activates just 5.1B parameters per forward pass and is optimized to run on a single H100 GPU with native MXFP4 quantization.

Released under the Apache 2.0 license, the model provides full transparency with configurable chain-of-thought reasoning and native tool use (including function calling and browsing), empowering organizations with the customization and complete data control required for secure, private deployment.
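
The native function calling mentioned above can be exercised through the same chat-completions endpoint covered in the Usage section below. The request below is a sketch that assumes the Canopy Wave API accepts the standard OpenAI-style "tools" and "tool_choice" fields; the get_weather function and its schema are purely illustrative, not a real API.

        # Hypothetical tool-calling request; get_weather is an illustrative function definition
        curl -X POST https://api.canopywave.io/v1/chat/completions \
          -H "Content-Type: application/json" \
          -H "Authorization: Bearer $CANOPYWAVE_API_KEY" \
          -d '{
            "model": "openai/gpt-oss-120b",
            "messages": [
              {"role": "user", "content": "What is the weather in Paris right now?"}
            ],
            "tools": [
              {
                "type": "function",
                "function": {
                  "name": "get_weather",
                  "description": "Look up the current weather for a city",
                  "parameters": {
                    "type": "object",
                    "properties": {
                      "city": {"type": "string", "description": "City name"}
                    },
                    "required": ["city"]
                  }
                }
              }
            ],
            "tool_choice": "auto"
          }'

If the model decides to call the tool, the response should contain a tool call with the function name and JSON arguments rather than a plain text reply; your application then executes the function and returns the result in a follow-up message.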

GPT-OSS 120B API Usage

Model: openai/gpt-oss-120b

Endpoint: https://api.canopywave.io/v1/chat/completions


        curl -X POST https://api.canopywave.io/v1/chat/completions \
          -H "Content-Type: application/json" \
          -H "Authorization: Bearer $CANOPYWAVE_API_KEY" \
          -d '{
            "model": "openai/gpt-oss-120b",
            "messages": [
              {"role": "user", "content": "tell me a story"}
            ],
            "max_tokens": 300,
            "temperature": 0.7
          }'
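
Assuming the response follows the standard OpenAI chat-completions schema (a reasonable expectation given the /v1/chat/completions path, though not confirmed here), the generated text can be pulled out of the returned JSON with a tool such as jq:

        # Pipe the response through jq to print only the assistant's reply
        curl -s -X POST https://api.canopywave.io/v1/chat/completions \
          -H "Content-Type: application/json" \
          -H "Authorization: Bearer $CANOPYWAVE_API_KEY" \
          -d '{
            "model": "openai/gpt-oss-120b",
            "messages": [{"role": "user", "content": "tell me a story"}],
            "max_tokens": 300
          }' | jq -r '.choices[0].message.content'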