Product changelog

Track new features, improvements, and updates across the Calljmp platform, from agent runtime APIs and observability tools to CLI enhancements and REST endpoints.

March 2026

REST API: Live WebSocket Updates

Stream real-time agent execution state to connected clients with WebSocket subscriptions.

  • GET /target/v1/live/{key} — connect to live execution updates
  • Receive execution state changes as JSON messages
  • Auto-reconnect with exponential backoff
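The exact reconnect policy is not documented here, but the exponential-backoff idea behind auto-reconnect can be sketched as a simple delay schedule (base delay and cap values below are illustrative, not the SDK's):

```typescript
// Sketch of an exponential backoff schedule for WebSocket reconnects.
// baseMs and capMs are illustrative defaults, not documented SDK values.
function backoffDelayMs(attempt: number, baseMs = 500, capMs = 30_000): number {
  // Delay doubles per attempt: 500, 1000, 2000, ... capped at capMs.
  return Math.min(capMs, baseMs * 2 ** attempt);
}
```

A client would wait `backoffDelayMs(n)` milliseconds before the n-th reconnect attempt, so transient outages retry quickly while sustained outages back off toward the cap.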

REST API: Agent Cancellation

Cancel running or pending agent executions via REST API.

  • DELETE /target/v1/agent/{runId} — cancel execution
  • Updates status to canceled
  • Returns HTTP 200 on success

Web Scraping API

Extract content from web pages with text, HTML, or structured field extraction.

  • web.scrape(url, format) supports text and HTML formats
  • CSS selector extraction with attribute and text filters
  • Automatic consent banner detection and handling
  • Full page scrolling with human-like variation

@calljmp/web v0.0.2 Released

WebSocket-based Agent client for embedding real-time agent connections in web applications.

  • Calljmp, Agents, Agent classes for client initialization
  • WebSocket auto-reconnect with up to 10 retry attempts
  • Typed message protocol for send/receive
  • Development mode with custom endpoint support

Portals: Agent Web Apps

Share agents as chat-based web applications with custom subdomains and access controls.

  • Create portals from dashboard — link multiple agents
  • Custom subdomain support (agent-name.portal.calljmp.com)
  • Chat interface for end users to interact with agents
  • Access control and form input configuration

February 2026

Slack Integration

Post messages and rich content to Slack channels from agents.

  • integrations.slack.postMessage(channel, text, blocks)
  • Support for Slack Block Kit rich formatting
  • Setup via dashboard OAuth flow
  • Automated channel posting without manual intervention
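The `blocks` argument follows Slack's standard Block Kit format. A minimal payload might look like this (the block structure is Slack's documented shape; how the integration forwards it is assumed):

```typescript
// A small Block Kit payload: a formatted section, a divider, and a
// context line. These block types come from Slack's Block Kit spec.
const blocks = [
  { type: "section", text: { type: "mrkdwn", text: "*Deploy finished* :rocket:" } },
  { type: "divider" },
  { type: "context", elements: [{ type: "mrkdwn", text: "Posted by a Calljmp agent" }] },
];
```

Passing `blocks` alongside `text` lets Slack fall back to the plain text in notifications while rendering the rich layout in-channel.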

Workflow Suspension & Human-in-the-Loop

Pause agent execution to request human approval or input before continuing.

  • workflow.suspend(options) — pause execution with timeout
  • Wait for external resumption before proceeding
  • Optional custom reason message
  • Resumable via CLI or REST API with token

REST API: Agent Resume

Resume suspended agents with new input via REST endpoint.

  • POST /target/v1/agent/{runId}/resume
  • Requires resumption token from suspension
  • Optional input payload for new data
  • Continues execution from suspension point

CLI: Agent Resume Command

Resume suspended agents from the command line.

  • calljmp agent resume --target {RUN_ID} --resumption {TOKEN}
  • --input flag for providing new input payload
  • Real-time execution feedback

January 2026

Short-Term Memory Management

Automatic conversation history management for multi-turn agent interactions.

  • memory.short.context(key) — create memory context
  • Auto-load conversation history on LLM calls
  • Auto-save responses to memory
  • Seamless context passing without manual management
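Conceptually, a memory context is a keyed conversation buffer that is loaded before each LLM call and appended to afterwards. A minimal stand-in (not the Calljmp API) looks like:

```typescript
// Hypothetical sketch of short-term memory: a keyed message buffer.
// `MemoryContext` is an illustrative stand-in, not the Calljmp class.
type Message = { role: "user" | "assistant"; content: string };

class MemoryContext {
  private store = new Map<string, Message[]>();

  // Load the conversation history for a key (empty for new sessions).
  history(key: string): Message[] {
    return this.store.get(key) ?? [];
  }

  // Append new turns after an LLM call completes.
  append(key: string, ...messages: Message[]): void {
    this.store.set(key, [...this.history(key), ...messages]);
  }
}

const memory = new MemoryContext();
memory.append("session-1", { role: "user", content: "Hi" });
memory.append("session-1", { role: "assistant", content: "Hello!" });
```

The platform automates both halves of this loop, so agent code never threads history between turns by hand.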

Prompt Studio

Manage and iterate on LLM prompts directly in the dashboard with instant updates.

  • Create, edit, and version prompts in the dashboard
  • Test prompts with sample inputs
  • Evaluate against real traces from production
  • Instant deployment — no agent redeployment needed
  • Replay historical runs with updated prompts

calljmp typegen Command

Generate TypeScript type definitions for prompts and vault variables.

  • calljmp typegen — generates .calljmp/types/agent.d.ts
  • Type-safe prompt name access in code
  • Type-safe vault variable access
  • Auto-updates when prompts change

December 2025

Structured Output with Zod Schemas

Generate type-safe JSON responses from LLMs using Zod schema validation.

  • llm.generate({ responseSchema: z.object(...) })
  • Automatic JSON parsing and validation
  • Type-safe response objects
  • Fail fast on schema mismatch
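Zod itself aside, the parse-then-validate contract can be sketched with a hand-rolled checker (the `Invoice` shape and function name are illustrative):

```typescript
// Fail-fast JSON validation sketch. The platform uses Zod schemas; this
// hand-rolled checker only illustrates the parse-then-validate contract.
type Invoice = { id: string; total: number };

function parseInvoice(raw: string): Invoice {
  const data = JSON.parse(raw);
  if (typeof data?.id !== "string" || typeof data?.total !== "number") {
    // Fail fast: reject responses that do not match the schema.
    throw new Error("LLM response does not match Invoice schema");
  }
  return data as Invoice;
}
```

Failing at the parse boundary means downstream code only ever sees correctly typed objects, instead of discovering malformed LLM output deep in business logic.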

Tool Calling API

Define TypeScript functions as tools that LLMs can call during generation.

  • llm.tool({ name, description, parameters, execute })
  • Zod schema for tool parameters
  • LLM middleware calls tools automatically
  • Support for multi-step function calls
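The dispatch half of this loop can be sketched as a registry keyed by tool name: the LLM emits a tool name plus JSON arguments, and the runtime routes the call to the matching `execute` function. Names and shapes below are illustrative, not the Calljmp signatures:

```typescript
// Sketch of tool-call dispatch: the LLM names a tool and supplies JSON
// arguments; the runtime looks it up and runs its execute function.
type Tool = {
  name: string;
  description: string;
  execute: (args: Record<string, unknown>) => unknown;
};

const tools = new Map<string, Tool>();

function registerTool(tool: Tool): void {
  tools.set(tool.name, tool);
}

function dispatch(call: { name: string; args: Record<string, unknown> }): unknown {
  const tool = tools.get(call.name);
  if (!tool) throw new Error(`Unknown tool: ${call.name}`);
  return tool.execute(call.args);
}

registerTool({
  name: "add",
  description: "Add two numbers",
  execute: (args) => (args.a as number) + (args.b as number),
});
```

In a multi-step generation, each tool result is fed back to the model, which may then issue further calls until it produces a final answer.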

Observability: Execution Traces

Full visibility into agent execution with detailed traces, phases, and logs.

  • View complete run timeline with phase breakdowns
  • See inputs and outputs for each step
  • Inspect tool calls and LLM responses
  • Error logs and stack traces for failures
  • Duration and cost tracking per phase

Dataset Reranking & Reciprocal Rank Fusion

Improved RAG relevance scoring with advanced reranking algorithms.

  • Reciprocal Rank Fusion for hybrid search
  • Better ranking of multi-query results
  • Configurable reranking strategies
  • Improved semantic search accuracy
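Reciprocal Rank Fusion is a well-known algorithm: each ranked list contributes `1 / (k + rank)` per document, and scores are summed across lists. The sketch below uses k = 60 from the original RRF paper; Calljmp's configured constant is not documented here:

```typescript
// Reciprocal Rank Fusion over several ranked result lists. A document's
// fused score is the sum of 1 / (k + rank) across every list it appears in.
function rrf(rankings: string[][], k = 60): string[] {
  const scores = new Map<string, number>();
  for (const ranking of rankings) {
    ranking.forEach((doc, i) => {
      // Rank is 1-based, so position i contributes 1 / (k + i + 1).
      scores.set(doc, (scores.get(doc) ?? 0) + 1 / (k + i + 1));
    });
  }
  return [...scores.entries()]
    .sort((a, b) => b[1] - a[1])
    .map(([doc]) => doc);
}
```

Because scores depend only on rank positions, RRF fuses keyword and vector result lists without having to normalize their incompatible raw scores, which is what makes it a natural fit for hybrid search.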

November 2025

Workflow Phases

Named execution steps with automatic status tracking and logging.

  • workflow.phase(name, block) — define named steps
  • Automatic timing and status updates
  • Visible in Observability with step breakdown
  • Parallel execution within a phase

Parallel Execution

Run multiple tasks concurrently with concurrency control.

  • workflow.parallel(options, tasks) — run tasks in parallel
  • Configurable concurrency limit
  • Wait for all tasks or fail fast
  • Automatic error handling per task
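A concurrency limit is typically implemented by spawning a fixed pool of workers that pull tasks from a shared cursor. The sketch below is a hypothetical stand-in for the behavior described above, not the `workflow.parallel` implementation:

```typescript
// Concurrency-limited parallel runner: at most `limit` tasks in flight,
// results returned in the original task order.
async function parallel<T>(
  limit: number,
  tasks: (() => Promise<T>)[],
): Promise<T[]> {
  const results: T[] = new Array(tasks.length);
  let next = 0;
  async function worker(): Promise<void> {
    while (next < tasks.length) {
      const i = next++; // claim the next task index
      results[i] = await tasks[i]();
    }
  }
  // Spawn at most `limit` workers draining the shared task queue.
  await Promise.all(
    Array.from({ length: Math.min(limit, tasks.length) }, worker),
  );
  return results;
}
```

Because JavaScript is single-threaded between awaits, the `next++` claim needs no locking; each worker simply takes the next unclaimed index.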

Retry Logic with Backoff

Automatically retry failed operations with configurable exponential backoff.

  • workflow.retry(options, result) — wrap operations
  • Configurable retry count (default 3)
  • Exponential backoff between retries
  • Customizable delay and backoff strategy
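The retry-with-backoff pattern can be sketched as a wrapper that re-invokes the operation after an exponentially growing delay (option names below are illustrative, not the documented `workflow.retry` options):

```typescript
// Retry wrapper sketch: re-run a failing async operation up to `retries`
// times, doubling the delay between attempts.
async function retry<T>(
  fn: () => Promise<T>,
  { retries = 3, baseMs = 100 } = {},
): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt >= retries) throw err; // out of attempts, surface the error
      // Exponential backoff: baseMs, 2*baseMs, 4*baseMs, ...
      await new Promise((resolve) => setTimeout(resolve, baseMs * 2 ** attempt));
    }
  }
}
```

The default of three retries matches the changelog entry; the growing delay gives transient failures (rate limits, flaky upstreams) time to clear before the next attempt.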

Datasets & RAG API

Semantic search over uploaded documents for retrieval-augmented generation.

  • datasets.query(prompt, topK, minScore) — semantic search
  • Returns ranked segments with relevance scores
  • Metadata and source information included
  • Configurable result limits and score thresholds
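The query contract (score, filter by minScore, keep topK) can be illustrated with cosine similarity over toy vectors; the actual embedding model and scoring pipeline behind `datasets.query` are not documented here:

```typescript
// Illustrative top-K semantic query: score each segment against the
// prompt vector, drop anything below minScore, return the best topK.
type Segment = { text: string; vector: number[] };

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

function query(promptVec: number[], segments: Segment[], topK: number, minScore: number) {
  return segments
    .map((s) => ({ text: s.text, score: cosine(promptVec, s.vector) }))
    .filter((r) => r.score >= minScore)
    .sort((a, b) => b.score - a.score)
    .slice(0, topK);
}
```

`minScore` trims weakly related segments before they reach the prompt, while `topK` bounds context size, the two knobs the API exposes.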

Datasets Upload & Management

Upload PDFs and documents for RAG ingestion directly from the dashboard.

  • Drag & drop PDF uploads in Datasets section
  • 100 MB file size limit
  • Automatic text extraction and embedding
  • Progress tracking and status indicators

October 2025

Vault: Secrets Management CLI

Create and manage project secrets from the command line.

  • calljmp vault add --name KEY --value SECRET
  • calljmp vault list — view all vault entries
  • calljmp vault delete --name KEY
  • --sensitive flag marks values as encrypted

REST API Launch

Standalone agent invocation and polling via HTTP REST API.

  • POST /target/v1/agent/run — start agent execution
  • GET /target/v1/agent/{runId}/status — poll execution status
  • Returns unique runId for async tracking
  • Bearer token authentication with inv_ keys
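A run request can be assembled as below; the Bearer scheme and endpoint path come from the entry above, while the base URL and request body shape are assumptions for illustration:

```typescript
// Builds a fetch-style request for POST /target/v1/agent/run.
// The body shape ({ input }) is an assumed convention, not a documented one.
function buildRunRequest(baseUrl: string, apiKey: string, input: unknown) {
  return {
    url: `${baseUrl}/target/v1/agent/run`,
    init: {
      method: "POST",
      headers: {
        Authorization: `Bearer ${apiKey}`, // inv_ invocation key
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ input }),
    },
  };
}
```

The response's `runId` is then polled via `GET /target/v1/agent/{runId}/status` until the execution completes.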

Webhook Invocations

Configure webhook endpoints to trigger agents from external systems.

  • Set webhook URLs per agent in dashboard
  • POST to webhook triggers agent execution
  • Request body becomes agent input
  • Webhook responses include run ID for polling

Usage & Billing Dashboard

Track AI spend and usage metrics per agent and project.

  • Per-agent usage statistics and costs
  • Per-project aggregated spend
  • Historical usage trends with date range filtering
  • LLM invocations, tokens, and inference costs