I’ve published a new Hugging Face Space: Minimal Self Awareness Demo. It implements the Rendered Frame Theory (RFT) Minimal Self agent in a 3×3 world, showing the minimal requirements for self-awareness through reinforcement learning, exploration, obstacle stress, and social mimicry.
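To give a quick feel for the setup, here's a stripped-down sketch of the kind of loop the agent runs. This is illustrative only: the obstacle layout, rewards, and hyperparameters below are placeholders, not the Space's actual values; see the repo for the real implementation.

```python
import random

# Minimal tabular Q-learning sketch of a 3x3 gridworld agent.
# Placeholder values throughout -- NOT the demo's actual parameters.

ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # right, left, down, up
GOAL = (2, 2)
OBSTACLE = (1, 1)  # "obstacle stress": a blocked cell that penalizes contact

def step(state, action):
    nx = min(max(state[0] + action[0], 0), 2)  # clamp to the 3x3 grid
    ny = min(max(state[1] + action[1], 0), 2)
    nxt = (nx, ny)
    if nxt == OBSTACLE:
        return state, -1.0, False   # bounce off the obstacle, take a penalty
    if nxt == GOAL:
        return nxt, 1.0, True
    return nxt, -0.05, False        # small step cost encourages efficient paths

q = {((x, y), a): 0.0 for x in range(3) for y in range(3) for a in range(4)}
alpha, gamma, eps = 0.5, 0.9, 0.1   # learning rate, discount, exploration rate

for episode in range(500):
    s, done = (0, 0), False
    for _ in range(50):             # cap episode length
        if done:
            break
        # epsilon-greedy: explore sometimes, otherwise exploit current Q
        a = random.randrange(4) if random.random() < eps else max(range(4), key=lambda i: q[(s, i)])
        s2, r, done = step(s, ACTIONS[a])
        best_next = 0.0 if done else max(q[(s2, b)] for b in range(4))
        q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
        s = s2
```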
Access points:
- Live demo: Hugging Face Space (link above)
- Source code: GitHub repo
- Archived record: Zenodo, DOI 10.5281/zenodo.17714387
I’d love to hear your thoughts on:
- How clear the interface feels
- Suggestions for additional experiments or visualizations
- Ideas for improving reproducibility and accessibility
Thanks for checking it out — looking forward to your feedback!
Appreciate the emphasis on reproducibility and publishing the full code path.
One distinction I’m curious about is how you’re separating detectable state transitions from semantic attribution.
Quantifying thresholds and observing phase changes are valuable. In my experience, the hardest problem isn't detecting that something changed: it's preventing observers (human or system) from collapsing that change into meaning too early.
The most robust verification systems I've worked with enforce a strict boundary between:
- detection: recording that a state transition or threshold crossing occurred, and
- attribution: assigning that event a semantic label (e.g. calling it "self-awareness").
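As a rough sketch of that pattern (the names and numbers here are hypothetical, not drawn from your code), the key property is that the event log never contains a semantic claim:

```python
from dataclasses import dataclass
import time

# Hypothetical sketch of the detection/attribution boundary. The detection
# layer emits raw, replayable events; semantic labels exist only in the
# attribution layer.

@dataclass(frozen=True)
class TransitionEvent:
    timestamp: float
    metric: str        # e.g. "mimicry_rate" (placeholder name)
    before: float
    after: float
    threshold: float   # the crossing that triggered the event

def detect(metric: str, before: float, after: float, threshold: float):
    """Detection only: record THAT a threshold was crossed, nothing more."""
    if before < threshold <= after:
        return TransitionEvent(time.time(), metric, before, after, threshold)
    return None

def attribute(event: TransitionEvent) -> str:
    """Attribution, kept separate so the raw event log stays auditable."""
    # Hedged on purpose: an observation, not a guarantee.
    return (f"Observed {event.metric} cross {event.threshold} "
            f"({event.before:.2f} -> {event.after:.2f}); consistent with, "
            f"but not proof of, a behavioral phase change.")
```

The payoff is that downstream consumers can replay the TransitionEvents and dispute the attribution without disputing the detection.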
I’d be interested in how you’re handling that boundary in practice, especially around claims that could be misread as guarantees rather than observations.