
OpenAI Compatibility

Introduction

The CanopyWave API is compatible with OpenAI's libraries, enabling seamless integration with your existing applications using the OpenAI client library. Experience our open-source model capabilities with zero barriers to entry.

Our API endpoints are fully compatible with OpenAI's API. This means you can migrate your application to the CanopyWave platform without modifying your existing code logic: simply adjust your API configuration settings to access our open-source model ecosystem.

How to Use CanopyWave's API

To use the CanopyWave API through the OpenAI client library, simply set api_key to your CanopyWave API key and set base_url to https://inference.canopywave.io/v1.

Python

import os
import openai

client = openai.OpenAI(
    api_key=os.environ.get("CANOPYWAVE_API_KEY"),
    base_url="https://inference.canopywave.io/v1",
)

TypeScript

import OpenAI from "openai";

const client = new OpenAI({
  apiKey: process.env.CANOPYWAVE_API_KEY,
  baseURL: "https://inference.canopywave.io/v1",
});

Obtain your CANOPYWAVE_API_KEY from your CanopyWave account. If you don't have an account yet, you can register for free.

Getting Started with Large Language Models

After completing the above configuration, your OpenAI client is connected to CanopyWave and ready to run inference against our open-source models. For example, try GLM-4.6.

Python

import os
import openai

client = openai.OpenAI(
    api_key=os.environ.get("CANOPYWAVE_API_KEY"),
    base_url="https://inference.canopywave.io/v1",
)

response = client.chat.completions.create(
    model="zai/glm-4.6",
    messages=[
        {
            "role": "system",
            "content": "You are a cook agent.",
        },
        {
            "role": "user",
            "content": "Tell me how to make a hamburger",
        },
    ],
)

print(response.choices[0].message.content)

TypeScript

import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: process.env.CANOPYWAVE_API_KEY,
  baseURL: 'https://inference.canopywave.io/v1',
});

const response = await client.chat.completions.create({
  model: 'zai/glm-4.6',
  messages: [
    { role: 'user', content: 'What are some fun things to do in New York?' },
  ],
});

console.log(response.choices[0].message.content);

Structured Outputs

You can use JSON schema to get structured outputs from the model.

Python

from pydantic import BaseModel
from openai import OpenAI
import os, json 

client = OpenAI(
    api_key=os.environ.get("CANOPYWAVE_API_KEY"),
    base_url="https://inference.canopywave.io/v1",
) 

class CalendarEvent(BaseModel):
    name: str
    date: str
    participants: list[str] 

completion = client.chat.completions.create(
    model="zai/glm-4.6",
    messages=[
        {"role": "system", "content": "Extract the event information."},
        {
            "role": "user",
            "content": "Timmy and Gavin are planning to cook a big meal on Sunday.. Answer in JSON.",
        },
    ],
    response_format={
        "type": "json_schema",
        "json_schema": {
            "name": "calendar_event",
            "schema": CalendarEvent.model_json_schema(),
        },
    }, 
) 

output = json.loads(completion.choices[0].message.content)
print(json.dumps(output, indent=2)) 

Output
{
    "name": "Big Meal Cooking",
    "date": "Sunday",
    "participants": [
        "Timmy",
        "Gavin"
    ]
}
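Since the response schema came from a Pydantic model, you can validate the model's reply against it directly. A minimal sketch using the sample output above (model_validate_json is Pydantic v2):

```python
from pydantic import BaseModel

class CalendarEvent(BaseModel):
    name: str
    date: str
    participants: list[str]

# The JSON string returned by the model (the sample output above).
raw = '{"name": "Big Meal Cooking", "date": "Sunday", "participants": ["Timmy", "Gavin"]}'

# Parse and type-check in one step; raises ValidationError on a mismatch.
event = CalendarEvent.model_validate_json(raw)
print(event.name, event.participants)
```

Validating gives you typed attribute access instead of a raw dict, and surfaces any schema drift immediately.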

Function Calling

You can use function calling to get structured outputs from the model and call external functions.

Python

from openai import OpenAI
import os, json 

client = OpenAI(
    api_key=os.environ.get("CANOPYWAVE_API_KEY"),
    base_url="https://inference.canopywave.io/v1",
) 

tools = [
    {
        "type": "function",
        "function": {
            "name": "calculate_tip",
            "description": "Calculate the tip amount for a restaurant bill.",
            "parameters": {
                "type": "object",
                "properties": {
                    "bill_amount": {
                        "type": "number",
                        "description": "The total bill amount in USD",
                    },
                    "tip_percentage": {
                        "type": "number",
                        "description": "The percentage of tip to leave (e.g., 15, 18, 20)",
                    }
                },
                "required": ["bill_amount", "tip_percentage"],
                "additionalProperties": False,
            },
            "strict": True,
        },
    }
] 

completion = client.chat.completions.create(
    model="zai/glm-4.6",
    messages=[
        {"role": "user", "content": "Calculate a 20% tip for a $50 bill"}
    ],
    tools=tools,
    tool_choice="auto",
) 

print(
    json.dumps(
        completion.choices[0].message.model_dump()["tool_calls"], indent=2
    )
) 

Output
[
    {
        "id": "call_7b04c99a5ca84b94b710ef40",
        "function": {
            "arguments": "{"bill_amount": 50, "tip_percentage": 20}",
            "name": "calculate_tip"
        },
        "type": "function",
        "index": -1
    }
]
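The model only proposes the call; your code is responsible for executing it. A minimal sketch of that step, using the arguments string from the output above (the calculate_tip implementation here is a hypothetical example):

```python
import json

def calculate_tip(bill_amount: float, tip_percentage: float) -> float:
    """Hypothetical implementation of the calculate_tip tool declared above."""
    return round(bill_amount * tip_percentage / 100, 2)

# The arguments field of a tool call is a JSON-encoded string.
raw_arguments = '{"bill_amount": 50, "tip_percentage": 20}'
args = json.loads(raw_arguments)

tip = calculate_tip(**args)
print(tip)  # 10.0
```

To let the model phrase a final answer, append a message with role "tool", the tool call's id as tool_call_id, and str(tip) as content, then call chat.completions.create again with the updated message list.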

Get Help

If you have any questions while using our API, you can reach our support team through the following channels:

Discord

Join our community discussion

X

Stay tuned for our latest updates

If you have questions, please contact our support team at support@canopywave.com, and we will gladly get back to you!
