ConnectorClient is for synchronous pipelines (no async/await). The SSE stream reader and your callback run on separate threads, so slow calls (vector store writes, embedding APIs, databases) never stall the stream. For async pipelines, use AsyncConnectorClient instead.
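The two-thread handoff described above can be sketched with the standard library alone (the queue, worker, and sentinel here are illustrative, not part of the SDK):

```python
import queue
import threading

events = queue.Queue(maxsize=500)   # reader-to-worker buffer, like max_queue_size
results = []

def worker():
    # Runs callbacks off the reader thread, so slow blocking I/O
    # in a callback never stalls the stream reader.
    while True:
        item = events.get()
        if item is None:              # sentinel: shut down
            break
        results.append(item.upper())  # stand-in for a slow callback

t = threading.Thread(target=worker, daemon=True)
t.start()
for ev in ["click", "scroll"]:
    events.put(ev)                  # the reader only enqueues; it never does I/O
events.put(None)
t.join()
print(results)  # ['CLICK', 'SCROLL']
```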

Constructor

from autoplay_sdk import ConnectorClient

client = ConnectorClient(
    url="https://your-connector.onrender.com/stream/YOUR_PRODUCT_ID",
    token="uk_live_...",
    max_queue_size=500,
)
url
string
required
Full URL to GET /stream/{product_id} on the connector.
token
string
default:""
Unkey API key (uk_live_...). Leave empty to connect without authentication.
max_queue_size
int
default:"500"
Maximum events buffered between the stream reader and worker threads. When full, new events are dropped and counted in dropped_count.
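The drop-on-full behaviour can be illustrated with a plain queue.Queue (a sketch of the semantics, not the SDK's internals):

```python
import queue

q = queue.Queue(maxsize=2)          # stands in for max_queue_size=2
dropped_count = 0

for ev in ["a", "b", "c", "d"]:
    try:
        q.put_nowait(ev)            # the reader thread never blocks...
    except queue.Full:
        dropped_count += 1          # ...a full queue means the event is dropped

print(dropped_count)  # 2
```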

Callbacks

All registration methods return self for chaining.
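Returning self is what makes the fluent style work; a minimal illustration of the pattern (the Registrar class is hypothetical, not part of the SDK):

```python
class Registrar:
    def __init__(self):
        self.handlers = {}

    def on(self, name, fn):
        self.handlers[name] = fn
        return self                  # returning self enables chaining

r = Registrar().on("actions", print).on("summary", print)
print(list(r.handlers))  # ['actions', 'summary']
```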

.on_actions(fn)

from autoplay_sdk import ActionsPayload

def handle_actions(payload: ActionsPayload):
    text = payload.to_text()
    # embed, store, process — blocking I/O is safe here

client.on_actions(handle_actions)
fn
Callable[[ActionsPayload], None]
required
Called on a worker thread every time a batch of UI actions arrives.

.on_summary(fn)

from autoplay_sdk import SummaryPayload

def handle_summary(payload: SummaryPayload):
    text = payload.to_text()

client.on_summary(handle_summary)
fn
Callable[[SummaryPayload], None]
required
Called on a worker thread every time a session summary arrives.

.on_drop(fn)

def handle_drop(payload: dict, total_dropped: int):
    print(f"Dropped — total: {total_dropped}")

client.on_drop(handle_drop)
fn
Callable[[dict, int], None]
required
Called on a worker thread when events are dropped because the queue is full (backpressure). Receives the raw dropped payload and the running total of dropped events.

Lifecycle

.run()

Connect and block until stopped. Reconnects with exponential backoff (1 s → 30 s).
client.on_actions(handle_actions).run()
HTTP 401, 403, and 404 are not retried — they raise immediately. Check your url and token.
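Assuming the usual doubling schedule (the docs state only the 1 s floor and 30 s cap; any jitter is not documented), the reconnect delays would look like:

```python
def backoff_delays(attempts, base=1.0, cap=30.0):
    # Exponential backoff: double each attempt, capped at `cap` seconds.
    return [min(base * 2**i, cap) for i in range(attempts)]

print(backoff_delays(7))  # [1.0, 2.0, 4.0, 8.0, 16.0, 30.0, 30.0]
```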

.run_in_background()

Start on a daemon thread and return immediately.
thread = client.run_in_background()
do_other_work()
client.stop()
Returns a threading.Thread.

.stop()

Signal the client to stop after the current event.

Observability

client.dropped_count   # int — total events dropped (queue full)
client.queue_size      # int — events waiting in the queue right now
A non-zero dropped_count means your callback is slower than the event rate. Increase max_queue_size or optimise the callback.
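One way to act on these counters is a small polling thread; FakeClient and watch below are illustrative stand-ins, not SDK APIs:

```python
import threading
import time

class FakeClient:
    # Stand-in for a ConnectorClient; only the two stats attributes matter here.
    queue_size = 0
    dropped_count = 0

def watch(client, stop, interval=0.01, log=print):
    # Poll backpressure stats until asked to stop; a real deployment
    # might ship these to a metrics system instead of printing.
    while not stop.is_set():
        log(f"queue={client.queue_size} dropped={client.dropped_count}")
        stop.wait(interval)

stop = threading.Event()
lines = []
t = threading.Thread(target=watch, args=(FakeClient(), stop),
                     kwargs={"log": lines.append})
t.start()
while not lines:        # wait until at least one sample is taken
    time.sleep(0.001)
stop.set()
t.join()
print(lines[0])  # queue=0 dropped=0
```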

Context manager

with ConnectorClient(url=URL, token=TOKEN) as client:
    client.on_actions(handle_actions).run_in_background()
    do_other_work()
# stop() called automatically
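The guarantee comes from the context-manager protocol: __exit__ runs even when the body raises. A minimal sketch of the pattern (ManagedClient is illustrative, not the SDK class):

```python
class ManagedClient:
    def __init__(self):
        self.stopped = False

    def stop(self):
        self.stopped = True

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        self.stop()          # cleanup runs even if the body raised
        return False         # don't swallow exceptions

with ManagedClient() as c:
    pass
print(c.stopped)  # True
```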

Chained setup

ConnectorClient(url=URL, token=TOKEN) \
    .on_actions(handle_actions) \
    .on_summary(handle_summary) \
    .on_drop(handle_drop) \
    .run()