Busy Bees

Most AI systems make the same mistake.

They ask the language model first and hope the answer sounds right.

Nature solved this problem millions of years ago.

In a beehive, a discovery does not become a decision because one actor says so. A scout returns to the hive and dances a figure-eight on the vertical comb surface — the angle of the straight section tells direction, the duration tells distance, the vigor tells quality. But the dance is not a monologue. More experienced sisters follow the dancer, touch her with their antennae, and give feedback in real time. A stop signal can halt the dance entirely. Only when the message survives community scrutiny does a route worth committing to emerge.

WaggleDance is built on this logic.

It does not hand the problem directly to an LLM. It routes it first to the right solver, verifies the result through multiple agents, and uses a language model only when it genuinely helps. Every step leaves an auditable trace. Every solution is justifiable. Every cycle grows the system’s own expertise.
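The solver-first flow described above can be sketched as a small dispatch loop. This is an illustrative miniature, not WaggleDance's actual API: `route`, `verify`, and `llm_fallback` are names invented here, and a real verifier would be far more than a predicate.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Result:
    answer: str
    source: str    # which solver (or "llm") produced the answer
    verified: bool

def route(task: str,
          solvers: list[Callable[[str], Optional[str]]],
          verify: Callable[[str, str], bool],
          llm_fallback: Callable[[str], str]) -> Result:
    """Try deterministic solvers first; call the LLM only if none verifies."""
    for solver in solvers:
        answer = solver(task)  # None means this solver declines the task
        if answer is not None and verify(task, answer):
            return Result(answer, solver.__name__, True)
    # Every solver declined or failed verification: fall back to the LLM.
    return Result(llm_fallback(task), "llm", False)

# Toy usage: an arithmetic solver handles "2+3"; anything else hits the fallback.
def arith(task: str):
    try:
        return str(eval(task, {"__builtins__": {}}))
    except Exception:
        return None

result = route("2+3", [arith], lambda t, a: a.isdigit(), lambda t: "llm answer")
```

The point of the structure is that the LLM call sits at the very end of the control flow, and every returned `Result` names its source, so the decision is attributable.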

The figure-eight dance became algorithmic routing. The honeycomb became the MAGMA memory architecture. And the bees’ overnight rest became Dream Mode — a simulation where the system reviews the day’s failures, tests thousands of alternative paths, and returns in the morning wiser.

This is not a metaphor. This is an architecture for collective machine intelligence.

Clone & Run

Download, fork, and run locally right away. The entire repo is available on GitHub without registration.

License model: Apache 2.0 + BUSL 1.1 (open core + source-available protected modules). Check the terms on GitHub.
BUSL module change date: March 18, 2030.

v3.5.7 Latest Release 2026-04-12
445+ Commits
5,581 Full Pytests (v3.5.7)
4 Deployment Profiles

Why This Is Different

AI That Doesn’t Guess

Solvers go first. The verifier checks. The LLM joins only when no deterministic solver is enough.

AI That Remembers Everything

MAGMA records decisions, sources, replays, and trust scores. See what happened, why, and in what order.
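An append-only decision trail of the kind MAGMA describes can be sketched with stdlib `sqlite3`. The table layout and column names below are invented for illustration; MAGMA's real schema is not shown on this page.

```python
import json
import sqlite3
import time

def open_trail(path: str = ":memory:") -> sqlite3.Connection:
    """Open an audit trail; rows are only ever inserted, never updated."""
    db = sqlite3.connect(path)
    db.execute("""CREATE TABLE IF NOT EXISTS audit (
        id INTEGER PRIMARY KEY,
        ts REAL NOT NULL,
        agent TEXT NOT NULL,
        decision TEXT NOT NULL,
        sources TEXT NOT NULL,  -- JSON list of provenance references
        trust REAL NOT NULL     -- trust score at decision time
    )""")
    return db

def record(db: sqlite3.Connection, agent: str, decision: str,
           sources: list[str], trust: float) -> int:
    """Append one decision and return its row id."""
    cur = db.execute(
        "INSERT INTO audit (ts, agent, decision, sources, trust) VALUES (?,?,?,?,?)",
        (time.time(), agent, decision, json.dumps(sources), trust))
    db.commit()
    return cur.lastrowid

def replay(db: sqlite3.Connection):
    """Return decisions in the order they were made."""
    return db.execute("SELECT agent, decision, trust FROM audit ORDER BY id").fetchall()
```

Because rows carry a timestamp, a source list, and a trust score, `replay` answers exactly the three questions the feature names: what happened, why, and in what order.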

AI That Learns Overnight

Dream Mode reviews failures, simulates better routes, and builds better models for the next day.
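In miniature, the overnight counterfactual loop might look like the sketch below: replay the day's failures against alternative routes and keep the best performer. The scoring rule and every name here are assumptions for illustration, not Dream Mode's actual mechanics.

```python
import random

def dream_cycle(failures, candidate_routes, simulate, trials: int = 100, seed: int = 0):
    """Replay the day's failures against alternative routes; return the best.

    simulate(route, failure) -> True if the route would have succeeded.
    Returns (best_route, success_rate) over the sampled replays.
    """
    rng = random.Random(seed)
    best_route, best_rate = None, -1.0
    for route in candidate_routes:
        wins = 0
        for _ in range(trials):
            failure = rng.choice(failures)  # resample the day's failure log
            if simulate(route, failure):
                wins += 1
        rate = wins / trials
        if rate > best_rate:
            best_route, best_rate = route, rate
    return best_route, best_rate

# Toy example: route "b" would have fixed timeout failures, route "a" fixes nothing.
failures = [{"kind": "timeout"}] * 3 + [{"kind": "parse"}]
sim = lambda route, f: route == "b" and f["kind"] == "timeout"
best, rate = dream_cycle(failures, ["a", "b"], sim)
```

A promoted route in a real system would then enter something like the canary lifecycle mentioned elsewhere on this page, rather than replacing the old route outright.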

AI That Shows Its State

Hologram Brain makes the state of 32 nodes visible in real time. You’re not watching a black box — you’re watching a working system.

AI That Stays on Your Network

Everything runs in your own environment. No mandatory cloud, no prompt data leaving, no SaaS dependency.

AI That Scales

The same codebase works from Raspberry Pi to factory profile. Not just a demo, not just a framework.

Deployment Profiles — same runtime, four profiles

GADGET: RPi, edge, sensor
COTTAGE: offline, intermittent connection
HOME: local automation
FACTORY: monitoring, anomalies, audit

Dashboard and Hologram Brain are available immediately after startup. First response speed depends on profile, hardware, and whether full or stub mode is used.

What happens when you click “Analyze Repository”?

1. The prompt is sent directly to Grok and copied to your clipboard as a backup.
2. Grok opens in a new tab with the prompt ready.
3. You get a comprehensive analysis of the repo, a scored competitor comparison, and a factory readiness assessment.

If pre-fill doesn't work, paste manually: the prompt is already on your clipboard.

You can also use the same prompt in Claude, ChatGPT, or any other LLM. Grok is the default choice on this page.

What Grok Analyzes

When you click “Analyze Repository”, the AI performs a deep analysis covering:

1. Current Codebase State: main branch, architecture, modules, and latest commits.
2. README vs. Reality: what is implemented vs. what is planned or aspirational.
3. Tests and Maturity: test coverage, practical maturity, and production readiness.
4. Hologram Brain and MAGMA: memory model, audit architecture, provenance, and trust mechanisms.
5. Competitor Comparison: scored 1-10 on six axes vs. Home Assistant, Node-RED, n8n, Open WebUI, LangGraph, AutoGen, CrewAI, and Ollama.
6. Factory Deployment Assessment: industrial use cases, risks, missing integrations, and deployment blockers.

Follow-up Grok Prompts

Click a prompt to copy it. Paste into your Grok session for deeper exploration.

How Do I Connect WaggleDance?

Choose a profile and get a tailored deployment guide from Grok.

How WaggleDance Compares

Each tool below is good at what it does. The comparison is meant to show how WaggleDance’s solver-first architecture differs — not to claim others are bad.

vs. Home Assistant

  • HA: Deterministic rules and automations, but no solver-based routing before the LLM.
  • WD: Solver-first routing (7+ deterministic solvers) → verifier → LLM only as fallback. Every decision auditable via MAGMA trail.
  • HA: No autonomous model training, no overnight learning.
  • WD: 8 sklearn specialist models + Dream Mode overnight learning with canary lifecycle.
  • HA's advantage: excellent integration ecosystem (2000+ integrations).

vs. LangGraph

  • LG: Graph-based multi-agent, but LLM-centric — everything goes through the LLM.
  • WD: Solver-first. LLM is Layer 1 (last), not Layer 3 (first).
  • LG: No append-only auditing, no canary model training, no dream mode simulation.
  • WD: MAGMA 5-layer provenance + 8 specialist models + counterfactual simulations.
  • LG's advantage: stronger cloud ecosystem and documentation.

vs. AutoGen / CrewAI

  • AG/CA: Multi-agent frameworks, but without deterministic solvers.
  • WD: 7+ deterministic solvers are routed BEFORE any LLM call.
  • AG/CA: No edge/factory profiles, no offline-first architecture.
  • WD: 4 profiles (GADGET → FACTORY), fully offline, from ESP32 to DGX.
  • AG/CA: No autonomous overnight learning or canary promotion.

vs. Ollama / LocalAI

  • Ollama: Local LLM engine, no decision-making architecture.
  • WD: Uses Ollama as one component (Layer 1 fallback), but builds solver routing, MAGMA auditing, specialist models, and Dream Mode on top.
  • Ollama is the engine. WaggleDance is the whole car.

vs. n8n / Node-RED

  • n8n/NR: Visual workflow automation tools, excellent flow editors.
  • WD: Not a visual flow editor but an autonomous multi-agent runtime that learns and improves.
  • n8n/NR: No sklearn models, no append-only provenance, no counterfactual simulation.
  • WD: 8 models + 9 SQLite databases + ChromaDB/FAISS + Dream Mode.

Deployment — WD's Advantage

  • Docker: clone → docker compose up -d — Ollama, Voikko (Finnish NLP), and the app all in one.
  • No separate manual installations in Docker mode.
  • 4 profiles with automatic hardware detection (GADGET / COTTAGE / HOME / FACTORY).
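Automatic profile selection could be sketched as a simple mapping from detected hardware to a profile name. The thresholds and the `pick_profile` function are invented for illustration; WaggleDance's actual detection rules are not documented here.

```python
import os

PROFILES = ("GADGET", "COTTAGE", "HOME", "FACTORY")

def pick_profile(ram_gb: float, cpu_count: int, offline: bool = False) -> str:
    """Map detected hardware to a deployment profile (illustrative thresholds)."""
    if offline:
        return "COTTAGE"        # intermittent connectivity outweighs raw power
    if ram_gb <= 4 or cpu_count <= 4:
        return "GADGET"         # RPi-class edge device
    if ram_gb <= 16:
        return "HOME"           # workstation-class local automation
    return "FACTORY"            # server-class monitoring and audit

# Detect the current machine (falls back to 1 CPU if undetectable).
detected = pick_profile(ram_gb=8, cpu_count=os.cpu_count() or 1)
```

The same runtime would then load profile-specific limits (model sizes, database retention, agent counts) keyed by the returned name.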

Time Evolution — WD's Decisive Advantage Over ALL Competitors

No competitor improves autonomously over time. WaggleDance is the only one that builds cumulative expertise.

Time    | WaggleDance                                                  | Home Assistant / LangGraph / AutoGen-CrewAI / Node-RED-n8n / Ollama
Day 1   | LLM fallback ~30-50%, solvers learning                       | Same as always
Month 1 | HotCache fills, LLM ~20-30%, first canary promotions         | No change
Month 6 | LLM ~10-15%, specialists maturing, ~180 nights of Dream Mode | No change
Year 1  | LLM ~5-8%, MAGMA with thousands of audited paths             | No change
Year 2  | LLM <3-5%, >95% deterministic, TCO a fraction of day 1       | No change

The competitors' columns never change after day 1. They don't learn. They don't improve. On day 730, they are exactly the same as on day 1.

Frequently Asked Questions

Is WaggleDance Swarm AI free?

Yes. Download and run immediately. Apache 2.0 parts are freely usable. Non-commercial personal use of BUSL-protected modules is permitted. For commercial use, check the license terms on GitHub.

Does it require an internet connection?

No. WaggleDance is designed to work fully offline on local hardware. Internet is only needed for initial setup and updates.

What hardware is needed?

Minimum: Raspberry Pi 4 or equivalent (GADGET profile). Recommended: modern x86 server for multi-agent orchestration (FACTORY profile).

Why Grok for analysis?

You get a quick second technical opinion on the public repo, documentation, and competitive landscape. You can use the same prompt in Claude, ChatGPT, or any other LLM.

What is MAGMA?

An auditing and provenance framework. Every agent decision is recorded so you get traceability, replay, and trust assessment visibility.

What is Dream Mode?

An overnight learning mode where the system reviews the day's failures, simulates better routes, and builds better models for the next day — automatically without user action.

What happens after first startup?

Dashboard and Hologram Brain are available immediately. First response speed depends on profile and hardware.

Media