
🚀 Autoplay in one breath

Autoplay streams what users are doing in your product into your agents as clean, LLM-ready context — so copilots can help before someone asks, and stay quiet when they shouldn't interrupt. We run the connector pipeline (extract, normalise, optionally summarise); you wire up callbacks and RAG.

⚡ What Autoplay handles for you

  • Real-time events at scale — browser activity becomes normalised, typed payloads your model can read
  • Context tooling — buffers, summarisers, and models so volume does not blow your context window
  • Golden paths — record ideal journeys with the Autoplay Chrome extension or dashboard
  • Workflow completion — per-user mastery, in-progress, and gaps across sessions so suggestions stay relevant

🔗 How it fits in a RAG system

autoplay-sdk consumes your connector over SSE (or push webhook) and hands you typed Python objects; you embed, upsert, and retrieve like any other RAG source.
```
Real-time user events
      ↓
Autoplay connector pipeline
  (extract → normalise → summarise)
      ↓
SSE stream  ←──── autoplay-sdk ────→  your callback
                                           ↓
                                     embed + upsert
                                           ↓
                                      vector store
                                           ↓
                                    RAG retrieval / chatbot
```
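The "embed + upsert" step in the diagram above is just a callback you own. Here is a minimal sketch of that shape; the toy `embed` function and in-memory `VectorStore` are illustrative stand-ins, not part of autoplay-sdk — in production you would call your embedding model and vector database instead:

```python
from dataclasses import dataclass, field

def embed(text: str) -> list[float]:
    # Deterministic dummy embedding (character frequencies), for illustration only.
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    return vec

@dataclass
class VectorStore:
    rows: dict = field(default_factory=dict)

    def upsert(self, doc_id: str, text: str) -> None:
        # Insert-or-update keyed by document id, as a real vector DB upsert would.
        self.rows[doc_id] = (embed(text), text)

store = VectorStore()

def on_event(doc_id: str, text: str) -> None:
    """Callback shape: receive LLM-ready text, embed it, upsert it."""
    store.upsert(doc_id, text)

on_event("session-1", "User opened the billing page and clicked Export")
```

Whatever client or delivery mode you choose below, this callback is where your RAG pipeline takes over.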

📦 Two event types

actions

A batch of UI interactions extracted from a user session — page views, clicks, inputs. Best for granular, per-session embeddings.

summary

An LLM-generated prose summary of a session, replacing the raw action list. Best for compact context windows in RAG pipelines.
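A typical callback dispatches on which of the two payloads it received. The classes below are minimal stand-ins written for this sketch, not the real `ActionsPayload` / `SummaryPayload` from autoplay_sdk; the point is the routing pattern:

```python
from dataclasses import dataclass

# Stand-in payload shapes for illustration; import the real classes from autoplay_sdk.
@dataclass
class FakeActionsPayload:
    actions: list[str]

@dataclass
class FakeSummaryPayload:
    summary: str

def payload_to_text(payload) -> str:
    # Route by event type: granular numbered action list vs. compact prose summary.
    if isinstance(payload, FakeActionsPayload):
        return "\n".join(f"{i}. {a}" for i, a in enumerate(payload.actions, 1))
    if isinstance(payload, FakeSummaryPayload):
        return payload.summary
    raise TypeError(f"unknown payload: {type(payload).__name__}")

text = payload_to_text(FakeActionsPayload(actions=["Viewed /pricing", "Clicked Upgrade"]))
```

Embedding the `actions` text gives per-session granularity; embedding `summary` text keeps retrieval context compact.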

🏷️ Typed models

All callbacks receive typed dataclass instances — no raw dict parsing in your code.

| Class            | Event type | `.to_text()` output                        |
|------------------|------------|--------------------------------------------|
| `ActionsPayload` | `actions`  | Numbered list of actions, embedding-ready  |
| `SummaryPayload` | `summary`  | Prose summary string directly              |
| `SlimAction`     | —          | `"{description} — {canonical_url}"`        |
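The `SlimAction` text format described above can be reproduced in a few lines. This is an illustrative re-implementation to show what `.to_text()` produces, not the SDK's actual class:

```python
from dataclasses import dataclass

@dataclass
class SlimActionSketch:
    """Illustrative stand-in for autoplay_sdk's SlimAction."""
    description: str
    canonical_url: str

    def to_text(self) -> str:
        # One embedding-ready line per action: "{description} — {canonical_url}".
        return f"{self.description} — {self.canonical_url}"

line = SlimActionSketch("Clicked Export", "https://app.example.com/billing").to_text()
```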

🔌 Two clients

| Client                 | Use when                                                  |
|------------------------|-----------------------------------------------------------|
| `ConnectorClient`      | Your pipeline is synchronous (no async/await)             |
| `AsyncConnectorClient` | Your pipeline uses asyncio — LangChain, LlamaIndex, FastAPI |

📡 Two delivery modes

| Mode         | How it works                                 | SDK class                                |
|--------------|----------------------------------------------|------------------------------------------|
| SSE          | Your service connects to the connector stream | `AsyncConnectorClient` / `ConnectorClient` |
| Push webhook | Connector POSTs to your endpoint              | `WebhookReceiver`                          |
Both modes deliver the same typed ActionsPayload / SummaryPayload objects.
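In push mode, your endpoint receives a POST body and turns it into a typed object. A minimal parsing sketch follows; the JSON field names (`type`, `session_id`, `summary`) and the `SummarySketch` class are assumptions for illustration, not the connector's documented schema:

```python
import json
from dataclasses import dataclass

@dataclass
class SummarySketch:
    """Hypothetical typed result of parsing a push-webhook body."""
    session_id: str
    summary: str

def handle_webhook(body: bytes) -> SummarySketch:
    # Parse the POST body and validate the event type before constructing the object.
    data = json.loads(body)
    if data.get("type") != "summary":
        raise ValueError(f"unexpected event type: {data.get('type')!r}")
    return SummarySketch(session_id=data["session_id"], summary=data["summary"])

payload = handle_webhook(
    b'{"type": "summary", "session_id": "s1", "summary": "User set up billing."}'
)
```

With the SDK's `WebhookReceiver`, this parsing and typing is handled for you; the sketch only shows the shape of the work it does.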

πŸ—„οΈ Two buffer options

BufferStorageUse when
EventBufferIn-memoryDevelopment, single process
RedisEventBufferRedis ZSET (sliding window)Production, multiple pods

Install

```shell
pip install autoplay-sdk
```

```python
from autoplay_sdk import (
    # Clients
    ConnectorClient,
    AsyncConnectorClient,
    # Typed models
    ActionsPayload,
    SummaryPayload,
    SlimAction,
    # Buffers
    EventBuffer,
    RedisEventBuffer,
    BufferBackend,
    # Push webhook receiver
    WebhookReceiver,
)
```