Small models.
Absolute precision.

71 languages. 26 scripts. Three tokenizers. Two model families arriving.

AENEA is a family of small language models designed from first principles for factual determinism. Prelude-1 proved the thesis. Prelude-4 is surpassing it. The Factual Crystallisation Hypothesis is rewriting how we understand when small models achieve reliable factual recall.

The Journey

From first commit to launch

AENEA began in August 2025 with a simple question: what happens when you treat data quality as architecture, not preprocessing? Six months later, we have our answer.

August 2025
Project Genesis
First line of code. The hypothesis: a 284M-parameter model trained on surgically clean data can outperform models 10× its size on factual recall tasks. We start building the data infrastructure.
September 2025
The Quartz Pipeline
Wiki Ultra-Clean v1 through v4. We learn that Wikipedia is 40% noise by volume — tables, infoboxes, navigation templates, census boilerplate. Each version gets more ruthless. The pipeline becomes a scalpel.
October 2025
The Overture Architecture
d=1024 embedding geometry. 16 attention heads. Rotary position embeddings. The model architecture crystallises around a single principle: every dimension of the latent space must carry factual signal.
November 2025
Pipeline v6 — Sub-8-Hour
The breakthrough. Parallel decompression, regex XML splitting, pre-computed MinHash signatures. Full English Wikipedia processed in under 8 hours. The Quartz data stack reaches production quality.
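Pre-computed MinHash signatures are what make deduplication cheap: each article is reduced once to a short signature, and near-duplicates are then found by comparing signatures instead of full texts. The sketch below is an illustrative miniature of the idea, not the Quartz pipeline's actual code; the shingle size and signature length are assumptions.

```python
import hashlib

NUM_HASHES = 64  # signature length; illustrative, not the pipeline's setting

def shingles(text, k=5):
    """Overlapping k-word shingles of a document."""
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(max(1, len(words) - k + 1))}

def minhash_signature(text, num_hashes=NUM_HASHES):
    """One minimum per seeded hash function over the shingle set."""
    sig = []
    for seed in range(num_hashes):
        sig.append(min(
            int.from_bytes(
                hashlib.blake2b(f"{seed}:{s}".encode(), digest_size=8).digest(),
                "big",
            )
            for s in shingles(text)
        ))
    return sig

def estimated_jaccard(sig_a, sig_b):
    """Fraction of matching slots approximates Jaccard similarity of shingle sets."""
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)
```

Because signatures are computed once per article, a full-corpus dedup pass only ever touches the short signatures, which is what makes a sub-8-hour run over all of English Wikipedia plausible.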
December 2025
Training Begins
Prelude-1 enters training on Quartz-cleaned Wikipedia. Loss drops steadily. The anomaly detector catches micro-batch outliers — the model is learning clean factual representations.
January 2026
Stack Exchange Corpus
The SE Ultra-Clean pipeline goes live. 23 Stack Exchange sites transformed into instruct-format Q&A pairs. The model's training data now spans both declarative knowledge and procedural reasoning.
February 2026
Convergence — Loss 2.807
All-time sustained best. EMA loss reaches 2.807, perplexity 16.6. Prelude-1 returns single-sentence factual completions with sub-second latency. The model approaches its theoretical floor.
March 2026
QT_V.2 Tokenizer Family — Three Sizes
The tokenizer family ships. Three variants: 64K (smallest embedding), 96K (best all-round), and 114K Code (multilingual coding). 71 languages across 26 script families. Validated on FLORES-200 across 204 languages — fewest total tokens and 4× more equitable than Llama 3.
March 2026
QT V.3 32K UltraLingo
The third-generation tokenizer. 32,000 vocabulary covering 71 languages across 26 writing systems. Outperforms Llama 3's 128K vocabulary on 48 FLORES-200 languages. 3× better cross-lingual equity at one-quarter the vocabulary size.
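Cross-lingual equity can be made concrete once per-language token counts exist for a parallel corpus such as FLORES-200. One plausible formulation, shown below, is the ratio of the most to the least token-hungry language; this is our illustrative assumption, and the QT evaluation may define equity differently.

```python
def tokens_per_language(tokenize, parallel_corpus):
    """Total token count per language over aligned parallel sentences."""
    return {
        lang: sum(len(tokenize(lang, sentence)) for sentence in sentences)
        for lang, sentences in parallel_corpus.items()
    }

def equity_ratio(counts):
    """max/min token cost across languages; 1.0 means perfectly equitable."""
    return max(counts.values()) / min(counts.values())
```

On a parallel corpus, every language expresses the same content, so a lower ratio means no script pays a systematic token penalty.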
March 2026
Prelude-4 Surpasses Previous Generation
Prelude-4 (276M parameters, d=1024, 16 layers, GQA 4:1) exceeds Prelude-1's factual recall, reaching its target loss of 2.734 and a gradient norm of 0.267 significantly faster. Its training dynamics contribute directly to the Factual Crystallisation Hypothesis.
The Models

AENEA Model Family

Prelude-1 proved the thesis. Prelude-4 is surpassing it on the QT V.3 32K UltraLingo tokenizer. Overture-1 (planned) will add advanced reasoning and code generation.

Prelude-1

aenea-prelude-1-1024 · QT_32k_IV · v1.0
First model in the AENEA family. Trained exclusively on Quartz-cleaned corpora. Designed for single-sentence factual completion.
RELEASED
Parameters
284M
d=1024 · 16 layers · GQA · RoPE
Training Data
6.4B
tokens (Quartz-cleaned)
Tokenizer
QT-32K
ByteBPE v4
Best EMA Loss
2.807

Prelude-4

aenea-prelude-4 · QT V.3 32K · v4.0
Fourth generation. Surpasses Prelude-1 on factual recall. Validates the Factual Crystallisation Hypothesis — gradient norm predicts factual emergence.
IN TRAINING
Parameters
276M
d=1024 · 16 layers · GQA 4:1
Tokenizer
QT V.3 32K
UltraLingo SuperBPE
Best Loss
2.734
Surpasses Prelude-1 (2.807)
Grad Norm
0.267
Crystallisation zone

Overture-1

QT_V.2 Code 114K · Advanced Reasoning & Code
Advanced multilingual reasoning and code generation. Currently in the design phase — building upon training insights from the Prelude series.
PLANNED
Tokenizer
QT_V.2 Code
114K · 71 langs + 15 code
Focus
Reasoning & Code
Advanced multilingual
Approach

Why smaller models can think bigger

Most parameters in large models are wasted — compensating for noisy data, fragmented representations, and training regimes that fight themselves. We start from the opposite premise.

Ultra-Clean Data

The Quartz v7.3 pipeline removes encoding artefacts, vandalism, and noise across 71 languages and 26 script families. Every malformed token is a wrinkle in the loss landscape — we iron them out before training begins.
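A cleaning pass of this kind can be approximated with a handful of targeted filters. The sketch below is a hypothetical miniature, not the Quartz v7.3 code: it strips the wiki tables, templates, and reference markup mentioned above before any text reaches the tokenizer.

```python
import re

# Patterns for common MediaWiki noise (a small illustrative subset).
WIKI_NOISE = [
    re.compile(r"\{\|.*?\|\}", re.DOTALL),          # tables
    re.compile(r"\{\{.*?\}\}", re.DOTALL),          # templates / infoboxes
    re.compile(r"<ref[^>]*>.*?</ref>", re.DOTALL),  # inline references
    re.compile(r"\[\[(?:File|Image):.*?\]\]"),      # media links
]

def clean_article(text: str) -> str:
    """Remove markup noise, then keep only non-empty prose lines."""
    for pattern in WIKI_NOISE:
        text = pattern.sub("", text)
    lines = [ln.strip() for ln in text.splitlines()]
    return "\n".join(ln for ln in lines if ln)
```

Note that non-greedy regexes do not handle nested templates; a production pipeline would need a proper parser for those, which is part of why the real cleaning stack took multiple versions to get right.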

Coherent Geometry

Architectures designed so representations built during one training phase remain geometrically compatible with the next. Knowledge encodes cleanly and manifests back into language without distortion.

Multi-Epoch Depth

Three passes over curated data. The first epoch builds the map. The second smooths its creases. The third polishes the routes between internal representation and fluent generation.

Factual Crystallisation

Our research has identified that gradient norm, not loss, predicts the onset of factual recall in language models. When gradient norm drops to approximately 0.27, the model transitions from memorisation to genuine factual crystallisation. This hypothesis challenges conventional training metrics and provides a principled framework for predicting when small models achieve reliable factual recall.
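In practice, watching for this transition is a few lines of training-loop instrumentation. The sketch below is illustrative rather than AENEA's actual monitoring code (the smoothing factor and class names are assumptions): it computes the global gradient norm after each backward pass and flags when the smoothed value enters the ~0.27 zone.

```python
import math

CRYSTALLISATION_THRESHOLD = 0.27  # approximate zone from the hypothesis
EMA_BETA = 0.98                   # smoothing factor; an assumption

def global_grad_norm(grads):
    """L2 norm over all gradient tensors, flattened together."""
    return math.sqrt(sum(g * g for tensor in grads for g in tensor))

class CrystallisationMonitor:
    """Tracks a smoothed gradient norm and reports entry into the
    crystallisation zone after each training step."""

    def __init__(self, threshold=CRYSTALLISATION_THRESHOLD, beta=EMA_BETA):
        self.threshold = threshold
        self.beta = beta
        self.ema = None

    def update(self, grads):
        norm = global_grad_norm(grads)
        if self.ema is None:
            self.ema = norm
        else:
            self.ema = self.beta * self.ema + (1 - self.beta) * norm
        return self.ema <= self.threshold  # True inside the crystallisation zone
```

The key design choice is smoothing: raw per-step gradient norms are noisy, so an exponential moving average is what makes a single threshold usable as a signal.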

Developer Experience

Simple to use, powerful underneath

AENEA models ship as standard checkpoints compatible with common inference frameworks. Load it, prompt it, generate — the engineering complexity is in the training, not the interface.

Prelude-1 is released. Prelude-4 is in training. All AENEA models support the QT tokenizer family, providing efficient encoding across every script.

python — inference
# Load model
from aenea import AENEA

model = AENEA.load("aenea-prelude-1")

# Generate (any of 71 languages)
output = model.generate(
    prompt="Die Geschichte des",
    max_tokens=256,
    temperature=0.1,
)
▸ QT V.3 32K UltraLingo · 71 languages · 26 scripts
Roadmap

What's coming

Prelude-1 is released. The QT_V.2 tokenizer family is live. Now we're building the next generation of models.

February 2026 · Complete

Prelude-1 Base

284M parameter base model. Three-epoch training on ultra-clean Wikipedia. Open weights and full training logs.
March 2026 · Complete

QT_V.2 Tokenizer Family

Three tokenizers: 64K, 96K, and 114K Code. 71 languages, 26 script families. Validated on FLORES-200 (204 languages) — fewest total tokens and 4× more equitable than Llama 3. Published on HuggingFace.
March 2026 · Complete

QT V.3 32K UltraLingo

Third-generation SuperBPE tokenizer. 32K vocabulary covering 71 languages across 26 writing systems. Outperforms Llama 3's 128K on 48 FLORES-200 languages at one-quarter the vocabulary size. Published on HuggingFace.
In Training

Prelude-4

276M parameters on QT V.3 32K UltraLingo. Surpassing Prelude-1 on factual recall. Loss 2.734, grad norm 0.267. Validating the Factual Crystallisation Hypothesis.
Planned

Overture-1

Advanced multilingual reasoning and code generation on the QT_V.2 Code 114K tokenizer. Currently in the design phase — building upon training insights established by the Prelude series.

The future is multilingual

Prelude-1 proved that precision beats parameter count. Prelude-4 is surpassing it. The Factual Crystallisation Hypothesis is rewriting the playbook.