GPT-OSS 120B API

All You Need To Know About GPT-OSS 120B API

Overview

Model Provider: OpenAI
Model Type: CHAT
State: Ready

Key Specs

Quantization: BF16
Parameters: 120B
Context: 128k
Pricing: $0.10 input / $0.40 output

Introduction

The GPT-OSS 120B API provides access to an open-weight, 117B-parameter Mixture-of-Experts (MoE) language model from OpenAI, built for high-reasoning tasks, agentic workflows, and general-purpose production use cases. Its efficient MoE design activates only 5.1B parameters per forward pass, and the model is optimized to run on a single H100 GPU using native MXFP4 quantization for exceptional performance and cost efficiency.

Released under the Apache 2.0 license, the GPT-OSS 120B API delivers full model transparency with configurable chain-of-thought reasoning and native tool use, including function calling and browsing. This enables organizations to customize behavior while maintaining complete data control for secure, private, and compliant deployments.
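As a concrete illustration of the function-calling interface, the request below adds a tools array to a chat-completions call. This is a minimal sketch that assumes the endpoint follows the standard OpenAI chat-completions tool-calling schema; the get_weather function, its parameters, and the prompt are hypothetical placeholders for your own tools. The basic request format is shown in the Usage section that follows.

# Hypothetical tool-calling request (assumes OpenAI-compatible "tools" support)
curl -X POST https://api.canopywave.io/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $CANOPYWAVE_API_KEY" \
  -d '{
    "model": "openai/gpt-oss-120b",
    "messages": [
      {"role": "user", "content": "What is the weather in Berlin right now?"}
    ],
    "tools": [
      {
        "type": "function",
        "function": {
          "name": "get_weather",
          "description": "Look up the current weather for a given city",
          "parameters": {
            "type": "object",
            "properties": {
              "city": {"type": "string", "description": "City name, e.g. Berlin"}
            },
            "required": ["city"]
          }
        }
      }
    ],
    "tool_choice": "auto"
  }'

If the model decides to call the tool, the response's choices[0].message.tool_calls entry (in the standard schema) carries the function name and JSON-encoded arguments; your code executes the function and returns the result in a follow-up message with role "tool" so the model can produce its final answer.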

GPT-OSS 120B API Usage

Model: openai/gpt-oss-120b
Endpoint: https://api.canopywave.io/v1/chat/completions


curl -X POST https://api.canopywave.io/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $CANOPYWAVE_API_KEY" \
  -d '{
    "model": "openai/gpt-oss-120b",
    "messages": [
      {"role": "user", "content": "tell me a story"}
    ],
    "max_tokens": 300,
    "temperature": 0.7
  }'
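Assuming the endpoint returns the standard chat-completions response shape (a choices array whose first element carries a message object), the assistant's reply can be extracted on the command line with jq:

# Same request as above, piped through jq to print only the reply text
# (assumes the standard chat-completions response shape and that jq is installed)
curl -s -X POST https://api.canopywave.io/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $CANOPYWAVE_API_KEY" \
  -d '{
    "model": "openai/gpt-oss-120b",
    "messages": [{"role": "user", "content": "tell me a story"}],
    "max_tokens": 300,
    "temperature": 0.7
  }' | jq -r '.choices[0].message.content'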