Ahkerat Mehiläiset (Diligent Bees)

𒌺 𒀭 — ukkin diĝir-re-e-ne: The Council of Many speaks, the Single Voice errs.

Most diviners ask the lone oracle first and trust whatever sound it makes.

But the Anunnaki of old solved this: no single god rules — the assembly decides.

In the great hall, a discovery becomes decision not because one god proclaims it. The scout-bee returns and dances upon the comb-tablet — the angle reveals the path, the vigor the worth, the duration the distance. Yet the dance is no monologue. Wiser sisters touch her with antennae, give counsel in real time. A halt-sign can stop the dance entirely. Only when the message survives the council's scrutiny does a path worth following emerge.

WaggleDance is built upon this ancient logic.

It does not surrender the question to a single LLM. It routes first to the right solver, verifies through many agents, and calls upon the language-god only when truly needed. Every step leaves a record on the MAGMA tablet — 𒂗𒆠 𒁾 (en-ki dub) — unalterable. Every answer is justified. Every cycle deepens the system's wisdom.

The figure-eight dance became algorithmic routing. The honeycomb became the MAGMA memory architecture — 𒈹 𒂗 𒀭 (inanna en an, the seeing-record of heaven). And the bees' nightly rest became Dream Mode — a time when the system reviews the day's failures, tests thousands of paths, and rises wiser at dawn.

This is not a metaphor. This is the architecture of collective machine intelligence — 𒈨 𒂗 (me en, the divine decrees of the lord).

Clone & Run

Clone, fork, and run upon thine own hearth. The full repository is on GitHub; no registration is required.

Apache 2.0 + BUSL 1.1 (open core + source-available protected modules). GitHub.
BUSL transition: 18 March 2030.

  • v3.5.7: latest decree, 2026-04-12
  • 445+ inscriptions on the tablet
  • 5,581 trials passed (v3.5.7)
  • 4 provinces of deployment

Wherefore is this different

An AI that does not guess

Solvers labor first. The verifier examineth. The LLM entereth only when no proper solver suffices — 𒂗𒆠 (en-ki, the path of wisdom).
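The solver-first order described above can be sketched in a few lines. Everything here is a hypothetical illustration, not WaggleDance's actual API: the solver, verifier, and router names are invented for the sketch.

```python
# Illustrative sketch of solver-first routing (hypothetical names, not
# WaggleDance's real API): try deterministic solvers first, verify the
# result, and fall back to an LLM only when no solver applies.

def solve_arithmetic(q):
    """Toy deterministic solver: handles 'a + b' style questions."""
    try:
        a, op, b = q.split()
        if op == "+":
            return float(a) + float(b)
    except ValueError:
        pass
    return None  # solver not applicable

def verify(q, answer):
    """Toy verifier: accept any non-None, non-NaN answer."""
    return answer is not None and answer == answer

def route(question, solvers, llm_fallback):
    for solver in solvers:
        answer = solver(question)
        if verify(question, answer):
            return answer, solver.__name__   # deterministic path
    return llm_fallback(question), "llm"     # last resort

answer, path = route("2 + 3", [solve_arithmetic], lambda q: "LLM answer")
print(answer, path)   # 5.0 solve_arithmetic
```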

An AI that remembers all

MAGMA inscribes decisions, sources, replays, and trust scores upon the eternal tablet — 𒁾 (dub). See what hath happened, why, and in what order.
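A minimal sketch of an append-only, hash-chained audit trail in the spirit of MAGMA. The `AuditTablet` class and its record fields are assumptions made for illustration, not MAGMA's real schema.

```python
# Hypothetical append-only audit trail: each record carries provenance
# and is hash-chained to its predecessor, so altering a past entry
# breaks the chain. Not MAGMA's actual implementation.
import hashlib, json, time

class AuditTablet:
    def __init__(self):
        self.records = []
        self._prev_hash = "0" * 64

    def inscribe(self, decision, source, trust):
        record = {
            "ts": time.time(),
            "decision": decision,
            "source": source,
            "trust": trust,
            "prev": self._prev_hash,
        }
        digest = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        record["hash"] = digest
        self.records.append(record)
        self._prev_hash = digest
        return digest

    def verify_chain(self):
        """Recompute every hash; any tampering returns False."""
        prev = "0" * 64
        for r in self.records:
            body = {k: v for k, v in r.items() if k != "hash"}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if r["prev"] != prev or r["hash"] != expected:
                return False
            prev = r["hash"]
        return True
```

The design point is the `prev` pointer: because each record commits to the hash of the one before it, the tablet can only grow, never be silently rewritten.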

An AI that learns through the night

Dream Mode reviews the day's failures, tests thousands of paths, and at dawn promoteth the better way — like Inanna who descends and ascends.
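The nightly review-and-promote cycle can be sketched as a canary comparison. The `dream_cycle` function and its promotion threshold are illustrative assumptions, not the actual Dream Mode implementation.

```python
# Hypothetical canary promotion: replay the day's failed cases against
# a candidate model and promote it only on a clear win margin.
import random

def dream_cycle(incumbent, candidate, failed_cases, trials=1000):
    """Score both models on replayed failures; promote on a clear win.

    incumbent/candidate: callables returning a quality score per case.
    """
    wins = 0
    for _ in range(trials):
        case = random.choice(failed_cases)
        if candidate(case) > incumbent(case):
            wins += 1
    promote = wins / trials > 0.55   # assumed margin; candidate must clearly win
    return candidate if promote else incumbent
```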

An AI that shows its inner state

Hologram Brain showeth the state of 32 cooperating agents in real time. No black box — only the open council of An.
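At the data level, "showing its inner state" might look like a per-agent status record that a dashboard polls. `AgentState` and `snapshot` are hypothetical names for this sketch, not Hologram Brain's real model.

```python
# Hypothetical per-agent state snapshot: each of the cooperating agents
# reports a small status record that a live dashboard can render.
from dataclasses import dataclass, asdict

@dataclass
class AgentState:
    name: str
    status: str       # e.g. "idle", "solving", "verifying"
    queue_depth: int  # pending tasks for this agent

def snapshot(agents):
    """Return a JSON-serializable view of every agent's state."""
    return [asdict(a) for a in agents]

agents = [AgentState(f"agent-{i}", "idle", 0) for i in range(32)]
view = snapshot(agents)
```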

An AI that stays within thy walls

All runneth within thine own walls. No mandatory cloud, no prompt-data departing, no SaaS dependence.

An AI that scales as the swarm

From a Raspberry Pi to an entire factory. Without lessening the spirit, without demanding the heaven of clouds.

Provinces of ki-ús — the same code, four houses

  • GADGET (Tool): RPi, gadget, sensor
  • COTTAGE (Hut): offline, intermittent connection
  • HOME (House): house automation
  • FACTORY (Workshop): monitoring, anomalies, audit

The same Hologram Brain showeth all the places.

What meaneth “Examine the Repository”?

1. A decree (prompt) for Grok, copied to thy hand
2. Grok openeth and beginneth the analysis instantly
3. Repository examination, competitor comparison, large LLM comparison

If it worketh not, paste the decree into the Grok dialog (Ctrl+V).

Usest thou Claude, ChatGPT, or another oracle? The decree functioneth there as well. Grok is merely the most fluent.

What doth Grok analyse

When thou clickest “Examine the Repository” — the AI inspecteth thoroughly:

1. Current state of the work: code base, modules, line counts, test counts
2. README versus reality: what the docs promise versus what the code doth
3. Tests and maturity: test coverage, robustness, ratio of implemented features to the roadmap
4. Hologram Brain & MAGMA: memory model, audit architecture, provenance, trust mechanisms
5. Competitor comparison: scored 1–10 on six axes vs. Home Assistant, Node-RED, n8n, Open WebUI, LangGraph, AutoGen, CrewAI, Ollama
6. Workshop readiness: industrial profile, OPC-UA, MAGMA compliance, Dream Mode for night shifts

Decrees for Grok

Click any decree to copy it. Then paste into Grok.

How shall I install WaggleDance?

Choose thy profile and let Grok prepare a deployment plan tailored for thee.

Compare WaggleDance

All systems have their own strengths. The differences are listed below, line by line. The judgment of the council, not the proclamation of one god.

vs. Home Assistant

  • HA: Deterministic rules and automations, but no solver-based routing before the LLM.
  • WD: Solver-first routing (7+ deterministic solvers) → verifier → LLM only as fallback. Every decision auditable via MAGMA trail.
  • HA: No autonomous model training, no overnight learning.
  • WD: 8 sklearn specialist models + Dream Mode overnight learning with canary lifecycle.
  • HA's advantage: excellent integration ecosystem (2000+ integrations).

vs. LangGraph

  • LG: Graph-based multi-agent, but LLM-centric — everything goes through the LLM.
  • WD: Solver-first. LLM is Layer 1 (last), not Layer 3 (first).
  • LG: No append-only auditing, no canary model training, no dream mode simulation.
  • WD: MAGMA 5-layer provenance + 8 specialist models + counterfactual simulations.
  • LG's advantage: stronger cloud ecosystem and documentation.

vs. AutoGen / CrewAI

  • AG/CA: Multi-agent frameworks, but without deterministic solvers.
  • WD: 7+ deterministic solvers are routed BEFORE any LLM call.
  • AG/CA: No edge/factory profiles, no offline-first architecture.
  • WD: 4 profiles (GADGET → FACTORY), fully offline, from ESP32 to DGX.
  • AG/CA: No autonomous overnight learning or canary promotion.

vs. Ollama / LocalAI

  • Ollama: Local LLM engine, no decision-making architecture.
  • WD: Uses Ollama as one component (Layer 1 fallback), but builds solver routing, MAGMA auditing, specialist models, and Dream Mode on top.
  • Ollama is the engine. WaggleDance is the whole car.

vs. n8n / Node-RED

  • n8n/NR: Visual workflow automation tools, excellent flow editors.
  • WD: Not a visual flow editor but an autonomous multi-agent runtime that learns and improves.
  • n8n/NR: No sklearn models, no append-only provenance, no counterfactual simulation.
  • WD: 8 models + 9 SQLite databases + ChromaDB/FAISS + Dream Mode.

ki-ús — WD's Advantage

  • Docker: clone → docker compose up -d — Ollama, Voikko (Finnish NLP), and the app all in one.
  • No separate manual installations in Docker mode.
  • 4 profiles with automatic hardware detection (GADGET / COTTAGE / HOME / FACTORY).
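The automatic profile selection above could be sketched as a simple decision over detected hardware. The thresholds and the `detect_profile` function are illustrative guesses, not the shipped detection logic.

```python
# Hypothetical hardware-based profile selection mirroring the four
# profiles (GADGET / COTTAGE / HOME / FACTORY); thresholds are invented
# for illustration, not WaggleDance's real detection rules.

def detect_profile(ram_gb, has_network, is_industrial):
    if is_industrial:
        return "FACTORY"      # monitoring, anomalies, audit
    if ram_gb <= 2:
        return "GADGET"       # RPi-class device, sensors
    if not has_network:
        return "COTTAGE"      # offline / intermittent connection
    return "HOME"             # house automation
```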

Time Evolution — WD's Decisive Advantage Over ALL Competitors

No competitor improves autonomously over time. WaggleDance is the only one that builds cumulative expertise.

| Time | WaggleDance | Home Assistant | LangGraph | AutoGen/CrewAI | Node-RED/n8n | Ollama |
|---|---|---|---|---|---|---|
| Day 1 | LLM fallback ~30-50%, solvers learning | Same as always | Same as always | Same as always | Same as always | Same as always |
| Month 1 | HotCache fills, LLM ~20-30%, first canary promotions | No change | No change | No change | No change | No change |
| Month 6 | LLM ~10-15%, specialists maturing, ~180 nights of Dream Mode | No change | No change | No change | No change | No change |
| Year 1 | LLM ~5-8%, MAGMA with thousands of audited paths | No change | No change | No change | No change | No change |
| Year 2 | LLM <3-5%, >95% deterministic, TCO a fraction of day 1 | No change | No change | No change | No change | No change |

The competitors' columns never change after day 1. They don't learn. They don't improve. On day 730, they are exactly the same as on day 1.

Common Inquiries

Is WaggleDance free / open?

Yes. Apache 2.0 + BUSL 1.1. The full repository is on GitHub.

Doth it function offline?

Yea. The AI runneth on thine own hardware.

What hardware sufficeth?

Raspberry Pi 4 (GADGET profile). x86 (FACTORY profile).

What is Grok?

An external oracle that examineth thy code. Claude, ChatGPT, or any large LLM also functioneth.

What is MAGMA?

An audit-tablet — 𒁾 — records every decision with provenance and trust scores.

What is Dream Mode?

The AI reviseth itself by night. Tests thousands of paths and promoteth the better at dawn — like Inanna's descent and return.

How do I see what is happening?

Hologram Brain showeth the state of 32 agents in real time.

Media