Tiny‑LLM: A Cantankerous AI for the Edge

My work‑in‑progress: a small but feisty language model built from scratch using Rust, Burn, WASM, and WebGPU. The goal is twofold: to experiment with model personality design, and to show that a capable assistant can run entirely on your own hardware, with no servers involved.

This model isn’t trained on the usual internet slurry. Instead, it’s learning from 500 million lines of text across 75,000 books — mostly from the 18th and 19th centuries. This rich, older corpus gives it a uniquely contrarian, cantankerous voice — think of it as a snarky Victorian pen‑pal in your pocket.

Why Rust & Burn?

Rust offers speed, safety, and portability, making it a perfect choice for building models that can run anywhere. Burn, a Rust‑native deep learning framework, provides the flexibility to implement custom architectures while targeting WebGPU for execution in browsers or on edge devices. No servers, no big GPUs — just smart code running where you need it.
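To give a feel for the kind of lightweight inference code this enables, here is a minimal sketch of temperature scaling and greedy decoding over raw logits, using only the Rust standard library. This is not the project's actual code: in practice the logits would come from a Burn model running on a WebGPU backend, and the values below are hard‑coded purely for illustration.

```rust
/// Softmax with temperature: lower temperature sharpens the distribution,
/// making the model's choices more deterministic (and more cantankerous?).
fn softmax_with_temperature(logits: &[f32], temperature: f32) -> Vec<f32> {
    let scaled: Vec<f32> = logits.iter().map(|&l| l / temperature).collect();
    // Subtract the max before exponentiating for numerical stability.
    let max = scaled.iter().cloned().fold(f32::NEG_INFINITY, f32::max);
    let exps: Vec<f32> = scaled.iter().map(|&l| (l - max).exp()).collect();
    let sum: f32 = exps.iter().sum();
    exps.iter().map(|&e| e / sum).collect()
}

/// Greedy decoding: pick the index of the highest-probability token.
fn argmax(probs: &[f32]) -> usize {
    probs
        .iter()
        .enumerate()
        .max_by(|a, b| a.1.partial_cmp(b.1).unwrap())
        .map(|(i, _)| i)
        .unwrap()
}

fn main() {
    // Hypothetical logits for a three-token vocabulary.
    let logits = [1.0_f32, 3.0, 0.5];
    let probs = softmax_with_temperature(&logits, 0.8);
    println!("probs: {:?}, picked token {}", probs, argmax(&probs));
}
```

Because this uses no allocator tricks or platform APIs, the same code compiles unchanged to native targets or to WASM, which is exactly the portability argument for Rust here.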

Where is this going?

Besides being a fun experiment in model personality design, this tiny‑LLM could also serve as the backbone of a local search engine or an offline assistant, running securely on your own hardware. Whether it’s chatting in its cantankerous 19th‑century tone or quietly indexing your documents, it’s designed to be both lightweight and capable.
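As a rough illustration of the "local search" idea, here is a tiny in‑memory inverted index built with only the Rust standard library. The real project may index documents very differently; this hypothetical sketch just shows the kind of offline, server‑free lookup an edge assistant could perform.

```rust
use std::collections::{HashMap, HashSet};

/// Maps each lowercase word to the set of document ids that contain it.
struct InvertedIndex {
    postings: HashMap<String, HashSet<usize>>,
}

impl InvertedIndex {
    fn new() -> Self {
        Self { postings: HashMap::new() }
    }

    /// Tokenize on whitespace and record which document each word appears in.
    fn add_document(&mut self, id: usize, text: &str) {
        for word in text.split_whitespace() {
            self.postings
                .entry(word.to_lowercase())
                .or_insert_with(HashSet::new)
                .insert(id);
        }
    }

    /// Return the ids of documents containing the query word, if any.
    fn search(&self, word: &str) -> Option<&HashSet<usize>> {
        self.postings.get(&word.to_lowercase())
    }
}

fn main() {
    let mut index = InvertedIndex::new();
    index.add_document(0, "A snarky Victorian pen-pal");
    index.add_document(1, "Victorian novels and essays");
    if let Some(ids) = index.search("victorian") {
        println!("'victorian' appears in {} documents", ids.len());
    }
}
```

Everything stays in process memory on the user's machine, so no query ever leaves the device.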

What’s next?

Training is ongoing. Once complete, the model will be instruction‑tuned and released as a fully open‑source edge‑ready AI. Expect more demos — and plenty of sass — soon.