The SDK is agent-native by design but does not yet ship server-side AI features (recommendations, summarization, agentic workflows). What it ships today is everything an external agent needs to consume the API correctly.

## Documentation Index
Fetch the complete documentation index at: https://docs.sdk.anghami.com/llms.txt
Use this file to discover all available pages before exploring further.
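Discovery can be as simple as pulling that index and extracting the page links. A minimal sketch, assuming the index follows the common `llms.txt` convention of markdown links grouped under sections — the sample content below is illustrative, not what the real index contains:

```python
import re

# Illustrative stand-in for the body of https://docs.sdk.anghami.com/llms.txt
# (assumed format: markdown links, per the llms.txt convention).
SAMPLE_INDEX = """\
# Anghami SDK

> Agent-native SDK documentation.

## Docs

- [Pagination](https://docs.sdk.anghami.com/pagination.md): cursor-based listing
- [Errors](https://docs.sdk.anghami.com/errors.md): structured error model
"""

def discover_pages(index_text: str) -> dict[str, str]:
    """Extract {page title: URL} pairs from an llms.txt index."""
    links = re.findall(r"\[([^\]]+)\]\((https?://[^)\s]+)\)", index_text)
    return dict(links)

pages = discover_pages(SAMPLE_INDEX)
# e.g. pages["Pagination"] -> "https://docs.sdk.anghami.com/pagination.md"
```

In a live agent you would fetch the URL first and feed the body to the same parser; the point is that one GET yields the full page map.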
## What you get today

### Self-describing API

- Bundled OpenAPI 3.1 (`/api/anghami-sdk.openapi.yaml`) — every service, every operation, every type, in one document. Every field has a description (the proto's COMMENTS lint rule enforces this), so codegen and LLM grounding both work without manual annotation.
- `llms.txt` at `/.well-known/llms.txt` — concise briefing for LLM context windows. See llms.txt.
- OAuth metadata at `https://sdk.anghami.com/.well-known/oauth-authorization-server` (RFC 8414) — agents configure auth from one fetch.
- Mintlify Ask AI (this site) — documentation chat grounded in the MDX prose plus the OpenAPI bundle.
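The "one fetch" claim for auth comes from RFC 8414: the metadata document names every endpoint the client needs. A sketch of consuming it — the field names below are the standard RFC 8414 ones, but the response body is stubbed with illustrative values rather than what the Anghami endpoint actually returns:

```python
import json

# Stubbed response body for the .well-known/oauth-authorization-server fetch.
# Field names follow RFC 8414; the values here are placeholders.
SAMPLE_METADATA = json.loads("""{
  "issuer": "https://sdk.anghami.com",
  "authorization_endpoint": "https://sdk.anghami.com/oauth/authorize",
  "token_endpoint": "https://sdk.anghami.com/oauth/token",
  "grant_types_supported": ["authorization_code", "client_credentials"]
}""")

def configure_auth(metadata: dict) -> dict:
    """Pull the endpoints an agent's OAuth client needs from one metadata doc."""
    return {
        "authorize_url": metadata["authorization_endpoint"],
        "token_url": metadata["token_endpoint"],
        "grant_types": metadata.get("grant_types_supported", []),
    }

auth = configure_auth(SAMPLE_METADATA)
```

No hardcoded endpoint URLs survive in the agent's config; if the server moves an endpoint, the next metadata fetch picks it up.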
## Patterns suited to agents

- Cursor pagination only. Agents iterating a catalog never deal with offset math or shifting result sets. See Pagination.
- Typed IDs. `SongID` vs `AlbumID` vs `EpisodeID` — agents that mix entity kinds get type errors at codegen time, not runtime surprises.
- Batch endpoints. `BatchGetSongs`, `BatchGetEpisodes`, etc. — let an agent fetch the data behind a list with one tool call instead of N.
- Idempotent reads. All catalog/search/browse RPCs are safe to retry. Agents don't need to track "did this call succeed."
- Structured errors. `ErrorCode` enum + `field_violations` with paths — agents can self-correct on `INVALID_REQUEST` without parsing free-form messages.
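The cursor-pagination pattern above reduces to one loop shape: pass the opaque cursor back until the server stops returning one. A minimal sketch with an in-memory stand-in for the list endpoint — `list_songs` and its field names are hypothetical, not the SDK's actual API surface:

```python
from typing import Optional

# Hypothetical in-memory catalog standing in for the server.
CATALOG = [f"song-{i}" for i in range(1, 8)]

def list_songs(cursor: Optional[str] = None, page_size: int = 3) -> dict:
    """Stubbed list endpoint: returns a page plus an opaque next_cursor."""
    start = int(cursor or 0)
    page = CATALOG[start:start + page_size]
    next_start = start + page_size
    next_cursor = str(next_start) if next_start < len(CATALOG) else None
    return {"songs": page, "next_cursor": next_cursor}

def fetch_all_songs() -> list[str]:
    """The whole client-side pagination contract: loop until no cursor."""
    songs: list[str] = []
    cursor: Optional[str] = None
    while True:
        resp = list_songs(cursor=cursor)
        songs.extend(resp["songs"])
        cursor = resp["next_cursor"]
        if cursor is None:  # server signals the end; no offset math anywhere
            return songs

all_songs = fetch_all_songs()  # 7 songs across 3 pages
```

Because reads are idempotent, a failed iteration can simply retry with the same cursor; the agent never reasons about partial progress.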
## What's coming

- Native MCP server. A first-class MCP endpoint that exposes the same operations as tools, with `/.well-known/mcp/server-card.json` discovery. Tracked alongside the server implementation, not blocking SDK consumers today.
- `api-catalog` linkset. RFC 9727 discovery linking OpenAPI, MCP card, OAuth metadata, and `llms.txt` from one fetch.
- Agent skills. Pre-packaged recipes for common workflows ("get a soundtrack for a movie", "list a user's most-played artists last 30 days") that agents can ingest without reading the OpenAPI.
What we’re explicitly not building (yet)
- Server-side recommendation or summarization endpoints. The SDK is the data layer; if you want to summarize a user’s library or generate a playlist with an LLM, you do that on your side using your own model.
- Hosted prompts or hosted tools. Bring your own model and prompt; we’ll keep the data clean and the discovery surface honest.
## TL;DR

If you're integrating an agent today, the SDK already has what you need: a bundled OpenAPI, an `llms.txt`, and an OAuth flow. If you're waiting for native MCP — it's coming with the server, and when it ships, you won't need to redo your integration: the same operations and the same data, just exposed as MCP tools.