Busy Bees

Many AI systems make the same mistake.

They ask a language model first, and then hope the answer looks right.

Nature solved this problem millions of years ago.

Inside the hive, a discovery does not become a decision because a single leader says yes. A scout returns to the hive and performs the waggle dance on the vertical comb — the angle of the straight run encodes direction, its duration encodes distance, and its vigor encodes quality. But the dance is not a one-way broadcast. Worker bees follow the dancer, touch her with their antennae, and give feedback in real time. Stop signals can halt the dance entirely. Only when the message survives this peer review does a route worth following emerge.

WaggleDance is built on this insight.

It does not send a problem straight to an LLM. It first routes the problem to the right solver, verifies the result through multiple agents, and uses language models only when they are genuinely needed. Every step leaves an audit trail. Every problem can be verified. Every run adds to the system's own knowledge.

The waggle dance becomes algorithmic routing. The honeycomb becomes the MAGMA memory system. And the bees' nightly rest becomes Dream Mode — a simulation in which the system reviews the day's failures, replays thousands of alternative routes, and wakes up a little smarter.

This is not a metaphor. It is an architecture for collective machine intelligence.

Clone & Run

Download, fork, and run locally right away. The entire repo is on GitHub, no registration required.

License model: Apache 2.0 + BUSL 1.1 (open core + source-available protected modules). Check the terms on GitHub.
BUSL module change date: March 18, 2030.

v3.5.7 Latest Release 2026-04-12
445+ Commits
5,581 Pytests Passing (v3.5.7)
4 Deployment Profiles

Why this is different

AI that doesn't guess

Solvers run first. A verifier checks the results. The LLM steps in only when no suitable solver is enough.
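The solver-first flow can be sketched in a few lines of Python. This is an illustrative toy, not the real WaggleDance API — `Solver`, `route`, and the verifier signature are invented for the example.

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Solver:
    """A deterministic solver (illustrative; not WaggleDance's real interface)."""
    name: str
    can_handle: Callable[[str], bool]
    solve: Callable[[str], Any]

def route(task, solvers, verify, llm_fallback, audit_log):
    """Try deterministic solvers first; call the LLM only as a last resort."""
    for s in solvers:
        if s.can_handle(task):
            result = s.solve(task)
            if verify(task, result):          # verifier checks before acceptance
                audit_log.append((task, s.name))
                return result
    result = llm_fallback(task)               # no solver sufficed: LLM fallback
    audit_log.append((task, "llm"))
    return result

# Toy example: an arithmetic solver handles "2+3"; everything else goes to the LLM.
arith = Solver("arith",
               can_handle=lambda t: t.replace("+", "").isdigit(),
               solve=lambda t: sum(int(x) for x in t.split("+")))
log = []
print(route("2+3", [arith], lambda t, r: r is not None, lambda t: "llm-answer", log))    # prints 5
print(route("hello", [arith], lambda t, r: r is not None, lambda t: "llm-answer", log))  # prints llm-answer
```

Because every accepted answer passes through the verifier and lands in the audit log, the LLM fallback rate can be measured directly from the log — which is what keeps an "LLM only when needed" claim checkable.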

AI that remembers everything

MAGMA records decisions, sources, replays, and trust scores. See what happened, why, and what followed.
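MAGMA's internals are not shown on this page, so the following is only a minimal sketch of the idea: an append-only decision log in SQLite with provenance and a trust score per row. The table and field names are assumptions for illustration, not the real schema.

```python
import json
import sqlite3
import time

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE decisions (
    id         INTEGER PRIMARY KEY,
    ts         REAL NOT NULL,
    agent      TEXT NOT NULL,
    action     TEXT NOT NULL,
    provenance TEXT NOT NULL,  -- JSON: which solver/inputs produced this
    trust      REAL NOT NULL   -- verifier confidence in [0.0, 1.0]
)""")

def record(agent, action, provenance, trust):
    # Append-only by convention: this code only ever INSERTs.
    con.execute("INSERT INTO decisions (ts, agent, action, provenance, trust) "
                "VALUES (?, ?, ?, ?, ?)",
                (time.time(), agent, action, json.dumps(provenance), trust))
    con.commit()

def replay(agent):
    """Return one agent's decisions in order, for replay and inspection."""
    return con.execute("SELECT action, provenance, trust FROM decisions "
                       "WHERE agent = ? ORDER BY id", (agent,)).fetchall()

record("router", "chose arith solver", {"task": "2+3"}, 0.98)
record("router", "llm fallback", {"task": "summarize"}, 0.61)
print(replay("router"))
```

The ordered `replay` query is what turns a plain log into traceability: any past decision can be re-read with the inputs and confidence it was made with.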

AI that learns at night

Dream Mode reviews failures, simulates better routes, and builds better models for the next day.
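As a rough illustration of the overnight loop (the function names, the toy "model", and the promotion rule are all invented for this sketch): replay the day's failures, train a candidate, and promote it only if it beats the current model on those replays — the essence of a canary lifecycle.

```python
def dream_cycle(failures, current_model, train, score):
    """One night: learn from failures, then canary-test before promotion."""
    canary = train(failures)                          # candidate trained on today's failures
    current_score = sum(score(current_model, f) for f in failures)
    canary_score = sum(score(canary, f) for f in failures)
    if canary_score > current_score:                  # promote only on a measured win
        return canary, "promoted"
    return current_model, "kept"

# Toy example: a "model" is just the set of task types it can solve.
failures = ["parse-date", "unit-convert"]
current = {"parse-date"}
train = lambda fs: current | set(fs)                  # candidate also covers the failed tasks
score = lambda model, f: 1 if f in model else 0
model, status = dream_cycle(failures, current, train, score)
print(status)  # prints promoted
```

The key design point is that promotion is conditional: a canary that does not measurably outperform the incumbent is discarded, so a bad night cannot make the system worse.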

AI that shows its state

Hologram Brain visualizes the state of 32 nodes in real time. You are not watching a black box; you are watching the system work.

AI that stays on your network

Everything runs on your own premises. No forced cloud, no prompt data leaving your network, no SaaS dependency.

AI that scales

The same code runs from a Raspberry Pi to the enterprise profile. Not just a demo, and not just a single setup.

Deployment Profiles — same runtime, four profiles

GADGET (device)

RPi, edge, sensors

COTTAGE (cabin)

Offline, intermittent connectivity

HOME (home)

Local automation

FACTORY (enterprise)

Monitoring, anomalies, auditing

The Dashboard and Hologram Brain are available immediately after startup. First response speed depends on the profile, the hardware, and whether full or stub mode is in use.

What happens when you click “Analyze Repository”?

1

The prompt is sent directly to Grok, and copied to your clipboard as a backup

2

Grok opens in a new tab with the prompt ready

3

You get a full analysis of the repo, a scored competitor comparison, and an enterprise readiness roadmap.

If the pre-fill doesn't work, paste it manually; the prompt is already on your clipboard.

You can also use the same prompt in Claude, ChatGPT, or any other LLM. Grok is the default choice on this page.

What Grok Analyzes

When you click “Analyze Repository”, the AI performs a deep analysis covering:

1
Current state of the codebase

Main branch, workflow, modules, and recent commits

2
README vs reality

What is implemented vs what is planned or aspirational

3
Tests and development

Test coverage, meaningful development activity, and production readiness

4
Hologram Brain and MAGMA

Memory architecture, audit trail, provenance, and trust mechanisms

5
Competitor comparison

Scored 1-10 on six axes against Home Assistant, Node-RED, n8n, Open WebUI, LangGraph, AutoGen, CrewAI, and Ollama

6
Enterprise deployment assessment

Enterprise use cases, risks, missing integrations, and deployment blockers

Follow-up Grok prompts

Click a prompt to copy it. Paste it into your Grok session for deeper research.

How Do I Set Up WaggleDance?

Choose a profile and get tailored deployment instructions from Grok.

How WaggleDance compares to others

Every tool below is good at what it does. The comparison shows how WaggleDance's solver-first architecture differs, not that the others are bad.

vs. Home Assistant

  • HA: Deterministic rules and automations, but no solver-based routing before the LLM.
  • WD: Solver-first routing (7+ deterministic solvers) → verifier → LLM only as fallback. Every decision auditable via MAGMA trail.
  • HA: No autonomous model training, no overnight learning.
  • WD: 8 sklearn specialist models + Dream Mode overnight learning with canary lifecycle.
  • HA's advantage: excellent integration ecosystem (2000+ integrations).

vs. LangGraph

  • LG: Graph-based multi-agent, but LLM-centric — everything goes through the LLM.
  • WD: Solver-first. LLM is Layer 1 (last), not Layer 3 (first).
  • LG: No append-only auditing, no canary model training, no dream mode simulation.
  • WD: MAGMA 5-layer provenance + 8 specialist models + counterfactual simulations.
  • LG's advantage: stronger cloud ecosystem and documentation.

vs. AutoGen / CrewAI

  • AG/CA: Multi-agent frameworks, but without deterministic solvers.
  • WD: 7+ deterministic solvers are routed BEFORE any LLM call.
  • AG/CA: No edge/factory profiles, no offline-first architecture.
  • WD: 4 profiles (GADGET → FACTORY), fully offline, from ESP32 to DGX.
  • AG/CA: No autonomous overnight learning or canary promotion.

vs. Ollama / LocalAI

  • Ollama: Local LLM engine, no decision-making architecture.
  • WD: Uses Ollama as one component (Layer 1 fallback), but builds solver routing, MAGMA auditing, specialist models, and Dream Mode on top.
  • Ollama is the engine. WaggleDance is the whole car.

vs. n8n / Node-RED

  • n8n/NR: Visual workflow automation tools, excellent flow editors.
  • WD: Not a visual flow editor but an autonomous multi-agent runtime that learns and improves.
  • n8n/NR: No sklearn models, no append-only provenance, no counterfactual simulation.
  • WD: 8 models + 9 SQLite databases + ChromaDB/FAISS + Dream Mode.

Deployment — WD's Advantage

  • Docker: clone → docker compose up -d — Ollama, Voikko (Finnish NLP), and the app all in one.
  • No separate manual installations in Docker mode.
  • 4 profiles with automatic hardware detection (GADGET / COTTAGE / HOME / FACTORY).
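Automatic profile selection presumably inspects the host hardware. A minimal sketch of the idea follows; the thresholds and the function name are assumptions for illustration, not WaggleDance's actual detection logic.

```python
def detect_profile(cpu_count: int, ram_gb: float) -> str:
    """Pick a deployment profile from host resources (illustrative thresholds)."""
    if ram_gb < 2:
        return "GADGET"   # RPi / edge-class hardware
    if ram_gb < 8:
        return "COTTAGE"  # small, possibly offline box
    if cpu_count < 16:
        return "HOME"     # local automation server
    return "FACTORY"      # enterprise-class hardware

print(detect_profile(4, 1))     # prints GADGET
print(detect_profile(32, 128))  # prints FACTORY
```

A cascade like this keeps the same runtime portable: the code path never changes, only which profile's limits and services are enabled.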

Time Evolution — WD's Decisive Advantage Over ALL Competitors

No competitor improves autonomously over time. WaggleDance is the only one that builds cumulative expertise.

Time    | WaggleDance                                                  | Home Assistant / LangGraph / AutoGen/CrewAI / Node-RED/n8n / Ollama
Day 1   | LLM fallback ~30-50%, solvers learning                       | Same as always
Month 1 | HotCache fills, LLM ~20-30%, first canary promotions         | No change
Month 6 | LLM ~10-15%, specialists maturing, ~180 nights of Dream Mode | No change
Year 1  | LLM ~5-8%, MAGMA with thousands of audited paths             | No change
Year 2  | LLM <3-5%, >95% deterministic, TCO a fraction of day 1       | No change

The competitor columns never change after day 1. They don't learn. They don't improve. On day 730, they are exactly the same as on day 1.

Frequently asked questions

Is WaggleDance Swarm AI free?

Yes. Download and run immediately. Apache 2.0 parts are freely usable. Non-commercial personal use of BUSL-protected modules is permitted. For commercial use, check the license terms on GitHub.

Does it require an internet connection?

No. WaggleDance is designed to work fully offline on local hardware. Internet is only needed for initial setup and updates.

What hardware is needed?

Minimum: Raspberry Pi 4 or equivalent (GADGET profile). Recommended: modern x86 server for multi-agent orchestration (FACTORY profile).

Why Grok for analysis?

You get a quick second technical opinion on the public repo, documentation, and competitive landscape. You can use the same prompt in Claude, ChatGPT, or any other LLM.

What is MAGMA?

An auditing and provenance framework. Every agent decision is recorded so you get traceability, replay, and trust assessment visibility.

What is Dream Mode?

An overnight learning mode where the system reviews the day's failures, simulates better routes, and builds better models for the next day — automatically without user action.

What happens after first startup?

Dashboard and Hologram Brain are available immediately. First response speed depends on profile and hardware.

Media