Search memories

Query a namespace by meaning. Results are ranked by similarity to the query (embedding similarity). You can optionally filter by metadata.
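Ranking by embedding similarity typically means comparing the query's embedding vector against each stored memory's vector, for example with cosine similarity. A minimal client-side sketch of that idea (the `cosineSimilarity` and `rank` helpers below are illustrative, not part of the SDK, which does this server-side):

```typescript
// Cosine similarity: dot(a, b) / (|a| * |b|). 1 means same direction.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Score every stored embedding against the query embedding,
// then sort highest-similarity first — the order search results come back in.
function rank(query: number[], entries: { id: string; embedding: number[] }[]) {
  return entries
    .map((e) => ({ id: e.id, score: cosineSimilarity(query, e.embedding) }))
    .sort((x, y) => y.score - x.score);
}
```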

SDK

import { Foundry } from '@withfoundry/sdk';

const foundry = new Foundry({ apiKey: process.env.FOUNDRY_API_KEY });

const { data } = await foundry.memory.search({
  namespace: 'my-app',
  query: 'How does the user want the theme?',
  limit: 5,
});

// data = [{ id, content, metadata, score? }, ...]
data.forEach((entry) => {
  console.log(entry.content, entry.metadata, entry.score);
});

Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| namespace | string | Required. Namespace to search. |
| query | string | Natural-language query. |
| limit | number | Max results (e.g. 5–20). |
| metadataFilter | object | Optional. Key-value filters on metadata (exact match). |
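The exact-match semantics of metadataFilter can be pictured as a post-filter over results: an entry matches only if every key in the filter equals the corresponding metadata value. A client-side sketch of that behavior (illustrative only; the service applies the filter server-side):

```typescript
type Memory = { id: string; metadata: Record<string, string> };

// Keep only entries whose metadata matches every filter key exactly.
function applyMetadataFilter(
  entries: Memory[],
  filter: Record<string, string>
): Memory[] {
  return entries.filter((e) =>
    Object.entries(filter).every(([key, value]) => e.metadata[key] === value)
  );
}
```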

With metadata filter

const { data } = await foundry.memory.search({
  namespace: 'my-app',
  query: 'user preferences',
  limit: 10,
  metadataFilter: { type: 'preferences' },
});

REST API

POST /v1/memory/search

Headers:
  Authorization: Bearer key_...

Body:
{
  "namespace": "my-app",
  "query": "How does the user want the theme?",
  "limit": 5,
  "metadataFilter": { "type": "preferences" }
}
Response:
{
  "success": true,
  "data": [
    {
      "id": "mem_abc123",
      "content": "User preferences: theme=dark, timezone=UTC.",
      "metadata": { "type": "preferences", "userId": "u_123" },
      "score": 0.92
    }
  ],
  "meta": { "requestId": "req_xyz" }
}
score is a similarity score (e.g. 0–1); higher means more relevant.
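The same request can be made from any HTTP client. A sketch using fetch (note: BASE_URL is a placeholder assumption, since this page does not give the API host; the fetchFn parameter is only there so the helper is easy to test):

```typescript
const BASE_URL = "https://api.example.com"; // placeholder: substitute the real API host

type SearchBody = {
  namespace: string;
  query: string;
  limit?: number;
  metadataFilter?: Record<string, string>;
};

// POST the search body to /v1/memory/search with a Bearer key,
// then return the ranked entries from the response envelope.
async function searchMemories(
  apiKey: string,
  body: SearchBody,
  fetchFn: typeof fetch = fetch // injectable for testing
) {
  const res = await fetchFn(`${BASE_URL}/v1/memory/search`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(body),
  });
  const json = await res.json();
  if (!json.success) throw new Error("memory search failed");
  return json.data; // [{ id, content, metadata, score }, ...]
}
```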

Use cases

  • RAG: Search for relevant docs or past Q&A, then inject into the prompt.
  • User context: Store and search user preferences or history.
  • Codebase context: Store summaries or code snippets, then search them when answering questions.
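The RAG pattern in the first bullet amounts to: search, then concatenate the hits into the prompt above the user's question. A minimal sketch (the prompt layout here is illustrative, not a prescribed format):

```typescript
type Hit = { content: string; score?: number };

// Build an LLM prompt that injects retrieved memories as numbered
// context lines above the question.
function buildRagPrompt(question: string, hits: Hit[]): string {
  const context = hits.map((h, i) => `[${i + 1}] ${h.content}`).join("\n");
  return `Context:\n${context}\n\nQuestion: ${question}`;
}
```

In practice you would pass the `data` array returned by `foundry.memory.search` as `hits`, then send the resulting string to your LLM.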
Next: see Context to get a single, ready-made context string for LLM prompts.