
# TypeScript SDK

`@logstitch/sdk` is a zero-dependency TypeScript client for the LogStitch API. It weighs 3.2 KB minified and ships as both ESM and CJS.

## Installation

```shell
npm install @logstitch/sdk
```

## Quick Start

```typescript
import { LogStitch } from '@logstitch/sdk';

const logstitch = new LogStitch({
  projectKey: 'pk_your_key_here',
});

// Log an event (fire-and-forget by default)
logstitch.log({
  action: 'user.signed_up',
  category: 'auth',
  actor: { id: 'user_123', type: 'user', name: 'Alice' },
  tenant_id: 'acme_corp',
  metadata: { plan: 'pro' },
});

// Always close on shutdown to flush pending events
await logstitch.close();
```

## Constructor Options

Pass an options object to the `LogStitch` constructor.

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| `projectKey` | `string` | Required | Your project API key (`pk_` prefix). |
| `baseUrl` | `string` | Optional | API base URL. Default: `https://logstitch.io` |
| `batchSize` | `number` | Optional | Number of events per batch before auto-flush. Default: `10` |
| `flushInterval` | `number` | Optional | Auto-flush interval in milliseconds. Default: `5000` |
| `maxQueueSize` | `number` | Optional | Maximum number of events that can be queued. Default: `1000` |
| `strict` | `boolean` | Optional | Throw errors instead of swallowing them. Default: `false` |
| `onError` | `(error: Error) => void` | Optional | Error callback for non-strict mode. Called when an API request fails. |

## Methods

### log(event)

Queue a single event for batched delivery. Fire-and-forget by default — the method returns immediately and the event is sent in the background.

**EventInput fields**

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| `action` | `string` | Required | What happened. Past tense, dot-namespaced (e.g. `user.created`, `document.shared`). |
| `category` | `string` | Required | One of: `auth`, `access`, `mutation`, `admin`, `security`, `system`. |
| `actor` | `{ id, type, name?, email? }` | Required | The user or service that performed the action. |
| `tenant_id` | `string` | Required | Identifies which of your customers this event belongs to. |
| `target` | `{ id, type, name? }` | Optional | The resource the action was performed on. |
| `metadata` | `Record<string, unknown>` | Optional | Arbitrary key-value data attached to the event. |
| `context` | `{ ip_address?, user_agent?, location?, session_id? }` | Optional | Request context such as IP address, user agent, and location. |
| `changes` | `{ field, before, after }[]` | Optional | Array of field changes describing what was modified. |
| `idempotency_key` | `string` | Optional | Unique key for deduplication. Auto-generated by the SDK. |
| `occurred_at` | `string` (ISO 8601) | Optional | When the event happened. Defaults to server receive time. |

```typescript
logstitch.log({
  action: 'document.shared',
  category: 'access',
  actor: { id: 'user_123', type: 'user', name: 'Alice' },
  tenant_id: 'acme_corp',
  target: { id: 'doc_456', type: 'document', name: 'Q4 Report' },
  metadata: { permission: 'view' },
});
```

### logBatch(events)

Send an array of events immediately, bypassing the internal queue. Returns an IngestResponse with the created event IDs and the number of redacted fields.

```typescript
const response = await logstitch.logBatch([
  {
    action: 'user.created',
    category: 'mutation',
    tenant_id: 'acme_corp',
    actor: { id: 'user_1', type: 'user' },
  },
  {
    action: 'user.created',
    category: 'mutation',
    tenant_id: 'acme_corp',
    actor: { id: 'user_2', type: 'user' },
  },
]);

// response.ids — array of created event IDs
// response.redacted_count — number of fields redacted by PII rules
```

### flush()

Flush the internal event queue immediately. Call this before process shutdown if you are not calling `close()`.

```typescript
await logstitch.flush();
```

### close()

Flush all pending events and stop the auto-flush timer. Always call this when your process is shutting down to avoid losing events.

```typescript
// Graceful shutdown
process.on('SIGTERM', async () => {
  await logstitch.close();
  process.exit(0);
});
```

### events.list(params)

Query events with filters. Returns a paginated list using cursor-based pagination.

**Query parameters**

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| `tenant_id` | `string` | Optional | Filter events by tenant. |
| `action` | `string` | Optional | Filter by action name. |
| `category` | `string` | Optional | Filter by category. |
| `actor_id` | `string` | Optional | Filter by actor ID. |
| `target_id` | `string` | Optional | Filter by target ID. |
| `actor_type` | `string` | Optional | Filter by actor type (`user`, `api_key`, `service`, `system`). |
| `target_type` | `string` | Optional | Filter by target type. |
| `start_date` | `string` (ISO 8601) | Optional | Return events after this timestamp. |
| `end_date` | `string` (ISO 8601) | Optional | Return events before this timestamp. |
| `search` | `string` | Optional | Full-text search across action, actor name, and target name. |
| `limit` | `number` | Optional | Max events to return (1-200). Default: `50` |
| `cursor` | `string` | Optional | Cursor for the next page of results. |

```typescript
const { events, cursor } = await logstitch.events.list({
  tenant_id: 'acme_corp',
  action: 'user.created',
  limit: 25,
});

// Fetch next page
if (cursor) {
  const nextPage = await logstitch.events.list({
    tenant_id: 'acme_corp',
    cursor,
  });
}
```
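To process every matching event, loop until the response no longer returns a cursor. A minimal sketch, assuming `events.list` resolves to the `{ events, cursor }` shape shown above (the `listAllEvents` helper and its client parameter type are illustrative, not part of the SDK):

```typescript
// Illustrative helper: drain all pages of a cursor-paginated listing.
type ListParams = { tenant_id: string; cursor?: string };
type ListPage = { events: unknown[]; cursor?: string };
type ListClient = { events: { list: (p: ListParams) => Promise<ListPage> } };

async function listAllEvents(client: ListClient, tenant_id: string) {
  const all: unknown[] = [];
  let cursor: string | undefined;
  do {
    const page = await client.events.list({ tenant_id, cursor });
    all.push(...page.events);
    cursor = page.cursor; // absent on the last page
  } while (cursor);
  return all;
}
```

Each request passes the previous page's cursor back unchanged; the loop ends when the server omits it.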

### viewerTokens.create(params)

Create a short-lived viewer token for the embeddable log viewer. Must be called server-side with a project key.

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| `tenant_id` | `string` | Required | The tenant whose events the viewer token grants access to. |
| `tier` | `string` | Optional | Visibility tier to apply. Limits which events and fields are visible. |
| `expires_in` | `number` | Optional | Token lifetime in seconds. Default: `3600` (1h). Max: `86400` (24h). |

```typescript
const { token } = await logstitch.viewerTokens.create({
  tenant_id: 'acme_corp',
  expires_in: 3600,
});

// Pass the token to your frontend for use with <LogViewer />
```

## Batching

Events sent via log() are queued internally and sent in batches. A batch is flushed when any of the following occurs:

- The queue reaches `batchSize` events (default 10).
- The `flushInterval` timer fires (default every 5 seconds).
- You explicitly call `flush()` or `close()`.
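
The three flush triggers above can be sketched as a minimal size- and time-based queue. This is an illustrative model only, not the SDK's actual internals; the `BatchQueue` class and its `send` callback are assumptions for the sketch:

```typescript
// Illustrative sketch of the three flush triggers: size, timer, explicit.
type QueuedEvent = Record<string, unknown>;

class BatchQueue {
  private queue: QueuedEvent[] = [];
  private timer: ReturnType<typeof setInterval>;

  constructor(
    private send: (batch: QueuedEvent[]) => void,
    private batchSize = 10,
    flushInterval = 5000,
  ) {
    // Time-based trigger: flush whatever is queued every interval.
    this.timer = setInterval(() => this.flush(), flushInterval);
  }

  log(event: QueuedEvent): void {
    this.queue.push(event);
    // Size-based trigger: flush once the queue reaches batchSize.
    if (this.queue.length >= this.batchSize) this.flush();
  }

  flush(): void {
    if (this.queue.length === 0) return;
    const batch = this.queue;
    this.queue = [];
    this.send(batch);
  }

  close(): void {
    // Explicit trigger: final flush, then stop the timer.
    this.flush();
    clearInterval(this.timer);
  }
}
```

The real SDK layers retries and error handling on top of this basic shape.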

**Idempotency**

Every event is automatically assigned an idempotency key via crypto.randomUUID(). If a batch is retried due to a network failure, the server deduplicates events so you never get double-writes.
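
A sketch of how key stamping and server-side deduplication interact. The `withIdempotencyKey` helper and the in-memory `ingest` function are illustrative assumptions, not the SDK's or server's actual code:

```typescript
import { randomUUID } from 'node:crypto';

// Illustrative: the SDK stamps each event once; a retried batch
// resends the same keys rather than generating new ones.
function withIdempotencyKey<T extends object>(event: T) {
  return { idempotency_key: randomUUID(), ...event };
}

// Illustrative server-side dedup: events whose key was already
// written are skipped, so retries never double-write.
const seen = new Set<string>();
function ingest(events: { idempotency_key: string }[]): number {
  let written = 0;
  for (const e of events) {
    if (seen.has(e.idempotency_key)) continue; // duplicate — skip
    seen.add(e.idempotency_key);
    written++;
  }
  return written;
}
```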

## Retry

The SDK automatically retries failed requests with exponential backoff and jitter.

- 4xx errors are not retried — these indicate a client error (bad input, invalid key, rate limit).
- 5xx errors and network failures are retried up to 2 times (3 total attempts).
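
The policy above can be sketched as two small functions. These are illustrative (the base delay of 250 ms, the full-jitter formula, and the use of status `0` for a network failure are assumptions, not documented SDK behavior):

```typescript
// Illustrative retry policy: exponential backoff with full jitter,
// at most 2 retries, and no retry on 4xx client errors.
function retryDelayMs(attempt: number, baseMs = 250): number {
  // attempt 0 → up to 250 ms, attempt 1 → up to 500 ms, ...
  const cap = baseMs * 2 ** attempt;
  return Math.random() * cap; // full jitter
}

function shouldRetry(status: number, attempt: number, maxRetries = 2): boolean {
  if (attempt >= maxRetries) return false;            // 3 attempts total
  if (status >= 400 && status < 500) return false;    // 4xx: client error
  return status >= 500 || status === 0;               // 5xx or network failure
}
```

Jitter spreads retries out so that many clients recovering from the same outage do not stampede the API at once.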

## Error Handling

The SDK supports two error-handling modes.

### Fire-and-forget (default)

Errors are silently swallowed. Use the onError callback to handle them without throwing.

```typescript
const logstitch = new LogStitch({
  projectKey: 'pk_your_key_here',
  onError: (error) => {
    // Log to your observability stack
    console.error('LogStitch error:', error.message);
  },
});

// This will not throw even if the API is unreachable
logstitch.log({
  action: 'user.signed_up',
  category: 'auth',
  tenant_id: 'acme_corp',
  actor: { id: 'user_123', type: 'user' },
});
```

### Strict mode

Enable strict: true to throw a LogStitchError on failure. The error includes status, code, and requestId for debugging.

```typescript
import { LogStitch, LogStitchError } from '@logstitch/sdk';

const logstitch = new LogStitch({
  projectKey: 'pk_your_key_here',
  strict: true,
});

try {
  const response = await logstitch.logBatch([
    {
      action: 'invoice.paid',
      category: 'mutation',
      tenant_id: 'acme_corp',
      actor: { id: 'system', type: 'service' },
    },
  ]);
} catch (err) {
  if (err instanceof LogStitchError) {
    console.error(err.status);    // 429
    console.error(err.code);      // "daily_event_limit_reached"
    console.error(err.requestId); // "req_01HZ..."
  }
}
```

**Which mode should I use?**

Use fire-and-forget for non-critical logging where you never want audit logs to break your application. Use strict mode when you need confirmation that events were persisted, for example in compliance workflows.

## Stream Mode

Start sending events immediately — no signup, API key, or project required. The SDK generates a random claim token and sends events to an unauthenticated endpoint. Sign up later and claim your stream to bind it to a project.

```typescript
import { LogStitch } from '@logstitch/sdk';

// No API key needed — generates a claim token automatically
const stream = LogStitch.stream();

// Console prints: "LogStitch Stream Mode — Claim at: https://logstitch.io/claim?token=<uuid>"
console.log('Claim token:', stream.token);

// Log events exactly the same way
stream.log({
  action: 'user.signed_up',
  category: 'auth',
  actor: { id: 'user_123', type: 'user', name: 'Alice' },
  tenant_id: 'acme_corp',
});

await stream.close();
```

### StreamOptions

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| `token` | `string` | Optional | Existing claim token to resume a stream. If omitted, a new UUIDv4 is generated. |
| `baseUrl` | `string` | Optional | API base URL. Default: `https://logstitch.io` |
| `batchSize` | `number` | Optional | Events per batch before auto-flush. Default: `10` |
| `flushInterval` | `number` | Optional | Auto-flush interval in ms. Default: `5000` |
| `maxQueueSize` | `number` | Optional | Maximum queued events. Default: `1000` |
| `strict` | `boolean` | Optional | Throw on errors instead of swallowing. Default: `false` |
| `onError` | `(error: Error) => void` | Optional | Error callback for non-strict mode. |

### Claiming a stream

Once you're ready, sign up for LogStitch and claim your stream from the dashboard or via the Streams API. All events are atomically migrated into your project.

**Stream limits**

Streams are rate-limited: 100 events/day, 500 events lifetime, and unclaimed streams expire after 7 days. Use authenticated mode for production workloads.

**Restricted methods**

Stream-mode instances only support log(), logBatch(), flush(), and close(). Querying events and creating viewer tokens require an authenticated client.