Our Models

The right model for every workload

Radium offers three model tiers for reasoning, general-purpose use, and high-efficiency workloads.

Our Technology

Hal 1.0

Maximum Capability

Comparable to Anthropic Opus 4.6 or OpenAI GPT-5.4

Clarke 1.0

Balanced Performance

Comparable to Anthropic Sonnet 4.6 or OpenAI GPT-4o

Tycho 1.0

High-efficiency Scale

Comparable to Anthropic Haiku 4.5 or OpenAI GPT-4o mini

Model transparency

Built on the best open-source models. Delivered through infrastructure only Radium can provide.

Each Radium model is built on a leading open-source foundation, optimized through our proprietary orchestration, kernel, and network stack to deliver up to [X]% lower cost at equivalent quality. We continuously select and update base models so you always run the best available architecture.

The economics of enterprise AI

Understanding the hidden costs of AI

Tokenomics

Switching from OpenAI or Anthropic

Compare


Improve unit economics at production scale

Case Studies

Radium is used by teams shipping AI into real-world systems

Square’s R&D team used Radium to prototype early (pre-Gen AI) text-to-video and text-to-speech applications.

EQTY Lab used Radium to train a state-of-the-art climate model that was presented at COP28, the United Nations climate change conference.

Realbotix uses Radium to power low-latency, real-time AI interactions on its humanoid robotics platform. Radium enables responsive inference at the speed required for live human–AI interaction.

A leader in generative AI for law, Alexi used Radium to train domain-specific retrieval models. Alexi’s advanced AI platform generates legal memos, arguments, and answers to general litigation queries.

Get Started

One line of code to switch.
A different class of performance.

Swap OpenAI for Radium in your API call. That’s it.
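A minimal sketch of what that swap looks like, assuming an OpenAI-compatible endpoint. The base URL (`api.radium.example`) and model id (`clarke-1.0`) are placeholders for illustration; use the values from your Radium account.

```python
import json
import urllib.request

# Assumed placeholder endpoint; substitute your real Radium base URL.
RADIUM_BASE = "https://api.radium.example/v1"

def chat_request(base_url: str, api_key: str, model: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-compatible chat completion request.

    Because the request schema is identical, switching providers means
    changing only the base URL (and the model name) -- nothing else.
    """
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        f"{base_url}/chat/completions",
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Same payload and headers as an OpenAI call; only the host differs.
req = chat_request(RADIUM_BASE, "YOUR_RADIUM_API_KEY", "clarke-1.0", "Hello")
```

If you use the official OpenAI Python SDK, the equivalent change is passing a different `base_url` (and your Radium key) when constructing the client; the rest of your code is unchanged.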