The right model for every workload
Radium offers three model tiers for reasoning, general-purpose use, and high-efficiency workloads.
Hal 1.0: comparable to Anthropic Opus 4.6 or OpenAI GPT-5.4
Clarke 1.0: comparable to Anthropic Sonnet 4.6 or OpenAI GPT-4o
Tycho 1.0: comparable to Anthropic Haiku 4.5 or OpenAI GPT-4o mini
Built on the best open-source models. Delivered through infrastructure only Radium can provide.
Each Radium model is powered by leading open-source foundations, optimized through our proprietary orchestration, kernel, and network stack to deliver up to [X]% lower cost at equivalent quality. We select and update base models continuously to ensure you always run the best available architecture.
Understanding the hidden costs of AI
Switching from OpenAI or Anthropic
Our Technology
Improve unit economics at production scale
Radium is used by teams shipping AI into real-world systems
Square’s R&D team used Radium to prototype early (pre-Gen AI) text-to-video and text-to-speech applications.
EQTY Lab used Radium to train a state-of-the-art climate model that was presented at COP28, the United Nations climate change conference.

Realbotix uses Radium to power low-latency, real-time AI interactions on its humanoid robotics platform. Radium enables responsive inference at the speed required for live human–AI interaction.
Alexi, a leader in generative AI for law, used Radium to train domain-specific retrieval models. Its advanced AI platform generates legal memos, arguments, and answers to general litigation queries.
One line of code to switch.
A different class of performance.
Swap OpenAI for Radium in your API call. That's it.
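In practice, "one line" means pointing an OpenAI-compatible client at Radium's endpoint instead of OpenAI's. A minimal stdlib sketch of that swap is below; the Radium base URL (`https://api.radium.example/v1`) and model identifier (`hal-1.0`) are placeholders, not documented values, so substitute the endpoint and model names from your Radium dashboard.

```python
import json
import urllib.request

def build_chat_request(base_url, api_key, model, messages):
    """Build an OpenAI-format chat-completions request against any base URL.

    Because the request format is identical, switching providers means
    changing only base_url (and the model name)."""
    payload = {"model": model, "messages": messages}
    return urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Before: the call as you would make it against OpenAI.
req = build_chat_request(
    "https://api.openai.com/v1", "OPENAI_API_KEY",
    "gpt-4o", [{"role": "user", "content": "Hello"}],
)

# After: same call, Radium endpoint. Base URL and model name here are
# assumed placeholders; use the values from your Radium account.
req = build_chat_request(
    "https://api.radium.example/v1", "RADIUM_API_KEY",
    "hal-1.0", [{"role": "user", "content": "Hello"}],
)
# urllib.request.urlopen(req) would send it; omitted here to stay offline.
```

The same swap works with the official OpenAI SDKs, which accept a `base_url` argument at client construction, so existing application code and message formats carry over unchanged.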