This scenario shows how to use Cursor with YoMemo MCP so the AI proactively and securely saves important decisions and preferences—tech stack, business rules, coding habits—with end-to-end encryption. In new chats or when starting a new feature, the AI loads relevant memories automatically so you don’t have to repeat yourself.
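Registering the MCP with Cursor is usually done through an entry in Cursor's MCP configuration file. The sketch below shows the general shape only; the server name, command, package invocation, and environment variable names are placeholders (assumptions, not YoMemo's documented values), so copy the real values from the yomemoai-mcp setup instructions:

```json
{
  "mcpServers": {
    "yomemo": {
      "command": "uvx",
      "args": ["yomemoai-mcp"],
      "env": {
        "YOMEMO_API_KEY": "<your-api-key>",
        "YOMEMO_PRIVATE_KEY_PATH": "~/.yomemo/private_key.pem"
      }
    }
  }
}
```

Once the server appears in Cursor's MCP list, verify that `save_memory` and `load_memories` are callable before adding the rule below.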
The AI calls `save_memory` when it detects important information and replies with a ✓ after saving.

**Prerequisites:** the YoMemo MCP (`yomemoai-mcp`) is configured in Cursor with your API key and RSA private key path, and `save_memory` and `load_memories` work correctly.

Add a rule in Cursor that tells the AI when to save and when to load memories. There are two options: a global rule in Cursor's settings, or a project-level rule file. In both cases, the rule text is the same:
You are equipped with Yomemo.ai MCP.

## When to use `save_memory`:

- **Tech Stack**: When we decide on a specific library or version.
- **Business Logic**: When I explain a complex internal rule.
- **Preferences**: If I tell you "I prefer using early returns in Go".

## When to use `load_memories`:

- At the start of a new feature implementation, check if there's relevant context in the 'coding' or 'project-name' handle.

## Feedback:

- After saving, just add a ✓ in your response. No need for a long confirmation.

With this rule in place, the AI uses `save_memory` / `load_memories` proactively. To apply the rule only in the current project:
1. Create `.cursor/rules/` in the project root (if it doesn't exist).
2. Save the rule as `yomemo-memory.mdc` in that directory.

The AI will then use this proactive memory rule only when Cursor is opened in this project.
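The two steps can be sketched in the shell. The YAML frontmatter fields (`description`, `alwaysApply`) follow Cursor's `.mdc` rule-file format; the rule body is the text from above, shortened here for brevity:

```shell
# Step 1: create the project-level rules directory.
mkdir -p .cursor/rules

# Step 2: write the rule file. Frontmatter is Cursor's .mdc format;
# the body is an abbreviated version of the rule text above.
cat > .cursor/rules/yomemo-memory.mdc <<'EOF'
---
description: Proactively save and load YoMemo memories
alwaysApply: true
---
You are equipped with Yomemo.ai MCP.

## When to use `save_memory`:
- **Tech Stack**: When we decide on a specific library or version.

## When to use `load_memories`:
- At the start of a new feature implementation, check the 'coding' handle.

## Feedback:
- After saving, just add a ✓ in your response.
EOF
```

Restart or reload the Cursor window afterwards so the new rule is picked up.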
| Scenario | Behavior |
|---|---|
| Tech Stack | When you agree on a library, framework, or version (e.g. "we'll use React 19", "Go 1.24"), the AI should call `save_memory` with that decision, e.g. under the handle `coding` or the project name. |
| Business / internal rules | When you explain a complex internal rule (e.g. "orders unpaid after 3 days are auto-cancelled"), the AI should save it for later implementation or debugging. |
| Preferences | When you state a coding or tool preference (e.g. "I prefer early returns in Go", "use Tailwind, no inline styles"), the AI should save it so future replies follow it. |
| Start of new feature / new chat | Before implementing, the AI should call `load_memories` and check handles like `coding` or the project handle for relevant tech stack, rules, and preferences. |
| After saving | After a successful save, the AI adds a ✓ to its reply; no long confirmation, so the conversation stays smooth. |
- **Proactive save**: The AI calls `save_memory` when it recognizes preferences, decisions, or reusable logic, so there's no need to say "remember this" every time.
- **Low-friction feedback**: After saving, the AI just adds a ✓ so the flow isn't interrupted.
- **Auto-load in new chats**: When starting a new chat or feature, the AI calls `load_memories` and uses that context in its answers.
- **End-to-end encryption**: All memories are encrypted locally before upload; the server cannot decrypt them. See Security.
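The end-to-end property can be illustrated with a generic hybrid-encryption sketch in Python using the third-party `cryptography` package. YoMemo's actual scheme, key sizes, and wire format are not specified here, so treat this purely as a conceptual model: a symmetric (Fernet/AES) key encrypts the memory text, the RSA public key wraps that symmetric key, and only the holder of the RSA private key can recover anything.

```python
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Conceptual model only: YoMemo's real scheme may differ.
# The user holds the RSA private key; only the public key is needed to save.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

memory = b"We decided on Go 1.24 for the backend"

# Encrypt the memory with a fresh symmetric key, then wrap that key
# with RSA-OAEP. The server would store (ciphertext, wrapped_key) but
# has no private key, so it cannot recover the plaintext.
data_key = Fernet.generate_key()
ciphertext = Fernet(data_key).encrypt(memory)
oaep = padding.OAEP(
    mgf=padding.MGF1(algorithm=hashes.SHA256()),
    algorithm=hashes.SHA256(),
    label=None,
)
wrapped_key = public_key.encrypt(data_key, oaep)

# Client-side decryption: unwrap the symmetric key, then decrypt.
recovered_key = private_key.decrypt(wrapped_key, oaep)
assert Fernet(recovered_key).decrypt(ciphertext) == memory
```

This is why losing the RSA private key means losing access to the stored memories: nothing server-side can reverse the encryption.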
This scenario depends on the Python MCP integration (`yomemoai-mcp`). The rule only defines *when* the AI calls `save_memory` / `load_memories`; encryption, storage, and retrieval are handled by the MCP and the YoMemo backend. If the MCP isn't set up yet, follow the MCP configuration in Getting Started first.
For more details, see `CURSOR_RULES.md` in the `python-yomemo-mcp` repo.
Use consistent handles (e.g. `coding`, the project name) so memories are grouped by project or purpose.