Give any AI human-like memory.
An intelligent memory system inspired by celestial structures — five orbital zones, emotion-aware, self-learning.
Designed from first principles to mirror the way human memory actually works — layered, emotional, and self-organizing.
Memories are organized across five concentric zones — Core, Inner, Outer, Belt, and Cloud — each with distinct retention behavior, priority, and access speed. Just like the solar system.
Attaches emotional valence and intensity to every memory. High-emotion events are retained longer and recalled with higher priority — exactly like human episodic memory.
The system monitors its own memory state — detecting gaps, contradictions, and knowledge boundaries. It knows what it knows, and what it doesn't.
Continuously refines its own memory organization based on usage patterns. Frequently accessed memories migrate inward; rarely used ones drift to the outer zones or are pruned.
Store and recall not just text, but structured data, code snippets, conversation history, and references to external documents — all within a unified memory graph.
Before responding, the system reasons over its stored memories — connecting relevant facts, detecting temporal relationships, and synthesizing coherent context for the AI.
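As a rough illustration of how the zone, emotion, and recency signals described above could combine into a single ranking score, consider the following sketch. The weights, function name, and formula are illustrative assumptions for explanation, not the library's actual scoring:

```python
import math
from datetime import datetime, timedelta

# Hypothetical zone priorities (illustrative values, not library defaults).
ZONE_WEIGHT = {"core": 1.0, "inner": 0.8, "outer": 0.6, "belt": 0.4, "cloud": 0.2}

def score(similarity: float, zone: str, intensity: float,
          last_access: datetime, half_life_days: float = 30.0) -> float:
    """Combine semantic similarity, zone priority, emotional intensity,
    and recency into one ranking score (conceptual sketch only)."""
    age_days = (datetime.now() - last_access).days
    recency = math.exp(-math.log(2) * age_days / half_life_days)  # exponential decay
    emotion_boost = 1.0 + intensity  # high-emotion memories rank higher
    return similarity * ZONE_WEIGHT[zone] * emotion_boost * recency

# A fresh, emotional core memory outranks an old, neutral cloud memory,
# even at equal semantic similarity.
fresh = score(0.7, "core", 0.6, datetime.now())
stale = score(0.7, "cloud", 0.0, datetime.now() - timedelta(days=90))
assert fresh > stale
```

The key design idea this models is that relevance is never raw similarity alone: zone placement, emotional intensity, and time since last access all scale the final ranking.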
Three operations. Infinite memory depth.
Feed any memory — conversations, facts, experiences — into the orbital system. The engine classifies zone placement, assigns emotional weight, and generates embeddings automatically.
The system autonomously consolidates, prunes, and reorganizes memories over time. You can also manually promote, archive, or link memories to build knowledge graphs.
Query with natural language or structured filters. The retrieval engine traverses orbital zones, applies emotional and temporal weighting, and returns the most contextually relevant memories.
from stellar_memory import StellarMemory

# Initialize the memory system
memory = StellarMemory()

# Store a conversational memory
memory.store(
    content="User prefers dark mode and concise answers",
    zone="inner",
    emotion={"valence": "positive", "intensity": 0.6},
    tags=["preference", "ui"]
)

# Auto-classify zone from content alone
memory.store_auto("The project deadline is March 15th, 2026")
from stellar_memory import StellarMemory
memory = StellarMemory()
# Promote a memory to a higher-priority zone
memory.promote(memory_id="mem_abc123", to_zone="core")
# Link two related memories
memory.link("mem_abc123", "mem_def456", relation="caused_by")
# Run autonomous consolidation
report = memory.consolidate()
print(f"Pruned: {report.pruned}, Merged: {report.merged}")
# Archive old cloud-zone memories
memory.archive_zone("cloud", older_than_days=90)
from stellar_memory import StellarMemory
memory = StellarMemory()
# Natural language recall
results = memory.recall(
    query="What are the user's display preferences?",
    top_k=5,
    zones=["core", "inner"]
)

for mem in results:
    print(f"[{mem.zone}] {mem.content}")
    print(f" Score: {mem.relevance:.2f} | Emotion: {mem.emotion}")
# Build full context for an LLM prompt
context = memory.build_context(query="user preferences")
Five concentric orbital zones model memory priority, retention duration, and access frequency — from the blazing core to the distant cloud.
Permanent, highest-priority memories. Identity, critical facts, irreplaceable knowledge. Never pruned.
Frequently accessed working memories. Recent conversations, active project context, user preferences.
Moderately accessed episodic memories. Past sessions, historical context, secondary associations.
Fragmented, rarely accessed memories. Older sessions, peripheral knowledge, low-confidence facts.
Distant, archival memories. Rarely recalled; candidates for consolidation or permanent archival.
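Put concretely, the five zones above can be summarized as a small lookup table. The priority numbers and the prunable flags for the non-core zones are illustrative assumptions, not the library's documented defaults:

```python
# Conceptual summary of the five orbital zones.
# Priorities and prunable flags (except core's) are illustrative assumptions.
ZONES = {
    "core":  {"priority": 5, "prunable": False},  # permanent, never pruned
    "inner": {"priority": 4, "prunable": True},   # frequently accessed working memory
    "outer": {"priority": 3, "prunable": True},   # moderately accessed episodic memory
    "belt":  {"priority": 2, "prunable": True},   # fragmented, rarely accessed
    "cloud": {"priority": 1, "prunable": True},   # archival, consolidation candidates
}

def recall_order(zones=ZONES):
    """Zones traversed highest-priority first, from core outward."""
    return sorted(zones, key=lambda z: -zones[z]["priority"])

print(recall_order())  # ['core', 'inner', 'outer', 'belt', 'cloud']
```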
Install from PyPI, initialize the system, and your AI has persistent, structured memory from the very first interaction.
No external database required — SQLite by default, PostgreSQL in production
Drop-in compatible with OpenAI, Anthropic, LangChain, and any LLM
MCP server mode for Claude Code and Cursor integration
REST API with Docker support for scalable deployment
# 1. Install
# pip install stellar-memory
from stellar_memory import StellarMemory
from openai import OpenAI
# 2. Initialize
memory = StellarMemory(
    storage="sqlite:///my_ai.db",
    embedding_model="text-embedding-3-small"
)
client = OpenAI()

def chat_with_memory(user_message: str) -> str:
    # 3. Recall relevant memories
    context = memory.build_context(user_message)

    # 4. Use memories as system context
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": context},
            {"role": "user", "content": user_message}
        ]
    )
    answer = response.choices[0].message.content

    # 5. Store the new exchange
    memory.store_auto(f"Q: {user_message}\nA: {answer}")
    return answer
# That's it. Your AI now remembers.
print(chat_with_memory("What did we discuss last week?"))
Drop Stellar Memory into any AI workflow. Native support for the tools you already use.
# Pull and run the Stellar Memory REST server
docker pull sangjun0000/stellar-memory:latest
docker run -d \
  -p 8080:8080 \
  -v stellar_data:/data \
  -e OPENAI_API_KEY=your_key \
  sangjun0000/stellar-memory:latest
# Health check
curl http://localhost:8080/health
Start free. Scale when you need to.