World Model
This is our moonshot — an ambitious, partially trained world model that’s already showing strong early performance on 3D medical images.
At its core, the idea is two‑fold:
- Multimodal generation: A GPT‑style model built on a Mixture of Experts, capable of generating across any modality — images, medical scans, sound, text, CAD models, CFD and FEA simulations, video — treating them all as a unified language.
- Scaling up: We need more GPUs to unlock its full potential.
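To make the Mixture-of-Experts idea concrete, here is a minimal sketch of a top-k gated MoE layer in NumPy (forward pass only). Every name and dimension here is illustrative, not taken from our actual architecture: a small router scores each token, and only the top-k expert MLPs run for that token, which is what lets a single model scale to many modalities without paying for every expert on every token.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

class MoELayer:
    """Toy top-k gated Mixture-of-Experts layer (forward pass only).

    Illustrative sketch, not the production architecture: a linear router
    picks k experts per token; each expert is a tiny two-layer ReLU MLP.
    """
    def __init__(self, d_model, d_hidden, n_experts, k=2):
        self.k = k
        self.gate = rng.standard_normal((d_model, n_experts)) * 0.02
        self.w1 = rng.standard_normal((n_experts, d_model, d_hidden)) * 0.02
        self.w2 = rng.standard_normal((n_experts, d_hidden, d_model)) * 0.02

    def __call__(self, x):                        # x: (n_tokens, d_model)
        scores = softmax(x @ self.gate)           # router probabilities
        topk = np.argsort(-scores, axis=-1)[:, :self.k]
        out = np.zeros_like(x)
        for t in range(x.shape[0]):               # route each token
            for e in topk[t]:                     # only k experts fire
                h = np.maximum(x[t] @ self.w1[e], 0.0)
                out[t] += scores[t, e] * (h @ self.w2[e])
        return out

layer = MoELayer(d_model=16, d_hidden=32, n_experts=4, k=2)
tokens = rng.standard_normal((5, 16))             # 5 tokens, any modality
y = layer(tokens)
print(y.shape)  # (5, 16)
```

The key design point the sketch shows: compute per token stays roughly constant (k experts) while total capacity grows with the number of experts — which is exactly why scaling such a model is GPU-bound.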
This isn’t just about creating a model that can generate in every modality. It also builds a graph‑based long‑term memory, integrating knowledge across those modalities. This enables it to act as a partner for solving complex problems — from medicine to engineering — with context and continuity.
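A minimal sketch of what "graph-based long-term memory" can mean, under our own simplifying assumptions (the class and method names below are hypothetical, not our real API): facts from different modalities become nodes, relations become edges, and recall walks the neighborhood so that, say, a CT scan can pull in its report and a related simulation.

```python
from collections import defaultdict

class GraphMemory:
    """Toy graph-based long-term memory: nodes are facts tagged with a
    modality, edges link related facts, recall walks the neighborhood."""
    def __init__(self):
        self.nodes = {}                    # id -> (modality, payload)
        self.edges = defaultdict(set)      # id -> linked ids (undirected)

    def add(self, node_id, modality, payload):
        self.nodes[node_id] = (modality, payload)

    def link(self, a, b):
        self.edges[a].add(b)
        self.edges[b].add(a)

    def recall(self, node_id, depth=1):
        """Return all facts reachable within `depth` hops (BFS)."""
        seen, frontier = {node_id}, {node_id}
        for _ in range(depth):
            frontier = {n for f in frontier for n in self.edges[f]} - seen
            seen |= frontier
        return {n: self.nodes[n] for n in seen if n in self.nodes}

mem = GraphMemory()
mem.add("scan_42", "image", "3D CT scan")
mem.add("report_42", "text", "radiology report")
mem.add("sim_7", "cfd", "airflow simulation")
mem.link("scan_42", "report_42")
mem.link("report_42", "sim_7")
print(sorted(mem.recall("scan_42", depth=2)))
# ['report_42', 'scan_42', 'sim_7']
```

The point of the graph structure is continuity: a query about one artifact retrieves its cross-modal context in hops, rather than relying on a single flat embedding lookup.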
We envision this model being used in medicine, pharmaceutical science, biological research, and engineering design. Imagine it generating custom components, running computational fluid dynamics or finite element analyses in seconds rather than days, or assisting in the creation of new music, art, and more.
More updates soon — fresh GPUs are on the way. This is our boldest project yet 🚀.