
Careers

Our growing, dynamic team is always looking for fresh talent. Explore our in-office, hybrid, and remote opportunities across tech roles, and apply to the one that aligns with your skills and goals.

Why Choose Us

We are building a global network of AI data centers to power next-generation cloud services. With hundreds of deployments planned worldwide—strategically located near end users and enterprises—we are positioned to deliver low-latency, high-performance AI infrastructure at scale. Backed by visionary leadership, deep industry expertise, and strong funding, our team is guided by pioneers shaping the future of AI computing.

Healthcare

Our employees have access to high-quality medical, dental, and vision coverage.

Insurance

We offer top-tier life insurance as well as short-term and long-term disability insurance to our employees.

Rapid Growth Platform

Gain access to our core business and strategies through a comprehensive training system designed for rapid career development.

Environment

Join a high-growth, innovative, and international team that offers boundless opportunities for professional development.

Open positions

Experience comes in many forms, many skills are transferable, and passion goes a long way. If your experience is close to what we're looking for, consider applying. We know that diversity of thought makes for the best problem-solving and creative thinking, which is why we're dedicated to adding new perspectives to the team, and we encourage everyone to apply.

AI Infra QA Engineer

Core Objective: Lead end-to-end quality assurance for large-scale model inference platforms, ensure rock-solid stability of inference engines and scheduling systems through automated testing frameworks and shift-left CI/CD practices, and guarantee seamless deployment of production-grade AI services at scale.

Senior AI Infra Engineer (LLM Inference Platform)

Core Objective: Lead the construction of industry-leading, high-concurrency LLM inference infrastructure, optimize throughput and latency for 100B+ parameter models through deep framework customization and hardware-software co-design, and deliver extreme performance experiences for enterprise-scale API services.

AI Infra Platform Engineer (SRE)

Core Objective: Architect full-stack observability systems for GPU clusters and token pipelines, achieve sub-second anomaly detection and automatic recovery through SLO-driven monitoring and intelligent alerting, and serve as the backbone of 24/7 AI infrastructure stability and operational excellence.

Cloud-Native AI Infra Engineer (LLM Inference Optimization)

Core Objective: Optimize cloud-native deployment of large-scale inference services, maximize GPU utilization and concurrency through Kubernetes-based scheduling and advanced quantization techniques, and deliver cost-efficient, high-performance AI infrastructure for mission-critical production workloads.

Work with us

Ready to shape the future? Join Canopy Wave and help drive the future of technology with a team redefining what's possible.