GalaxyAI provides a remote Model Context Protocol (MCP) server that lets AI assistants manage your workflows and runs directly — no local install required.
https://app.galaxy.ai/api/mcp

Setup

1. Get your API key — Copy your GalaxyAI API key from the dashboard. This is the same Bearer token used for the REST API.
2. Add to your MCP client — Add the following to your MCP client configuration file:
{
  "mcpServers": {
    "galaxyai-workflow": {
      "command": "npx",
      "args": [
        "mcp-remote",
        "https://app.galaxy.ai/api/mcp",
        "--header",
        "Authorization: Bearer YOUR_API_KEY"
      ]
    }
  }
}
Replace YOUR_API_KEY with the key from step 1.
3. Start using tools — Your AI assistant now has access to the full set of GalaxyAI tools. Ask it to list your workflows, build new ones, edit or delete them, start a run, or check run status.

Available Tools

The MCP server exposes its tools in five categories.

System Workflows

list_system_workflows

List available system workflows (pre-built single-node tools). Filter by category: image, video, audio, utility, llm.

run_system_workflow

Run a single node/model by name. Fuzzy-matches against system workflow names. Supports a mode parameter for multi-mode nodes (e.g. "image-to-image" for Nano Banana Pro's edit mode).

Workflows

list_workflows

List workflows with optional search and pagination. Returns id, name, and updatedAt for each workflow.

get_workflow

Get full details of a workflow including nodes, edges, description, and timestamps.

update_workflow

Update a workflow’s name or description. Does not modify nodes or edges.

delete_workflow

Permanently delete a workflow by ID. This action cannot be undone.

Workflow Builder

create_workflow

Create a new workflow with Request and Response scaffold nodes. Optionally add request input fields. Returns the workflow ID and node IDs.

list_node_types

List available node types with their input/output ports, data types, categories, and modes. Use this to discover what nodes can be added.

add_node

Add a node to an existing workflow. Supports column/row positioning and initial input values. Returns the new node ID and its ports.

update_node

Update input values on an existing node without removing it or its edges. Use to change model parameters like prompt or image size.

connect_nodes

Create a validated edge between two nodes. Checks type compatibility, prevents cycles, and enforces single-input rules.

delete_node

Remove a node from a workflow. Automatically removes all edges connected to that node. Cannot delete scaffold nodes (Request, Response).

disconnect_nodes

Remove an edge between two nodes. Identify the edge by its ID, or by the source/target node and handle pair.
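For illustration, both addressing forms look like this (the edge ID and node handles here are hypothetical — use the IDs returned by add_node and connect_nodes):

```
disconnect_nodes(workflowId, edgeId: "edge_abc")   → edge removed (by edge ID)
disconnect_nodes(workflowId,
  source: llm1 out:output → response in:result)    → edge removed (by node/handle pair)
```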

Runs

start_run

Start a workflow run by ID or name. Searches user workflows first, falls back to system workflows. Validates Request Node fields automatically.

get_run

Get run status with Response Node output and per-node outputs. Returns current state if still running.

list_runs

List workflow runs with optional filters (workflow, status, search) and cursor or page-based pagination.

cancel_run

Cancel a running or queued workflow run. Refunds unused estimated credits.

Direct Model Runs

list_models

List all available AI models/nodes that can be run directly without a workflow. Shows model name, category, description, and available modes.

run_model

Run an AI model directly without creating a workflow. Provide the node type and input parameters. Returns a run ID to check progress.

get_model_run

Get the status and output of a direct model run. Use after run_model to check progress and retrieve results.

Building Workflows

The workflow builder tools let AI assistants create, edit, and restructure workflows step-by-step. Each tool validates independently, so errors are caught early. Typical build flow:
  1. Call list_node_types to discover available node types and their ports
  2. Call create_workflow to scaffold a new workflow with Request and Response nodes
  3. Call add_node for each processing node (e.g. image generator, LLM)
  4. Call connect_nodes to wire outputs to inputs — type-checked automatically
Editing an existing workflow:
  • Call update_node to change input values on an existing node
  • Call delete_node to remove a node — all connected edges are cleaned up automatically
  • Call disconnect_nodes to remove a single edge (by edge ID or source/target pair)
  • Call add_node and connect_nodes to rewire the workflow
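A sketch of an editing pass (node IDs and input values here are illustrative):

```
update_node(workflowId, llmNodeId,
  inputs: {system_prompt: "Answer in one line."})  → inputs changed, edges kept
delete_node(workflowId, oldNodeId)                 → node + its edges removed
add_node(workflowId, "claude_sonnet_4_6")          → new node ID + ports
connect_nodes(llm1 out:output → claude in:prompt)  → rewired, type-checked
```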
Example — building a workflow that generates an image and then describes it:
list_node_types(category: "image") → sees flux_2_pro with out:result (image)
list_node_types(category: "llm") → sees gpt_5_4 with in:image_urls, out:output
create_workflow("Image + Describe") → gets workflowId + request/response node IDs
add_node(workflowId, "flux_2_pro") → gets flux node ID + ports
add_node(workflowId, "gpt_5_4") → gets LLM node ID + ports
connect_nodes(flux out:result → llm in:image_urls) → validated (image→image)
connect_nodes(llm out:output → response in:result) → validated (text→any)
Example — two LLMs running in parallel, feeding an image generator:
create_workflow("Parallel LLMs", requestFields: [
  {name: "Cat", type: "text", value: "Cat"},
  {name: "Dog", type: "text", value: "Dog"}
])
add_node(workflowId, "gpt_5_4_mini", column:1, row:0,
  inputs: {system_prompt: "Provide me in 1 line."}) → LLM 1 (top)
add_node(workflowId, "claude_sonnet_4_6", column:1, row:1,
  inputs: {system_prompt: "Provide me in 1 line."}) → LLM 2 (bottom)
add_node(workflowId, "nano_banana_pro", column:2, row:0) → image gen
connect_nodes(request field_cat → llm1 in:prompt)
connect_nodes(request field_dog → llm2 in:prompt)
connect_nodes(llm1 out:output → nano in:prompt)
connect_nodes(nano out:result → response in:result)
connect_nodes(llm1 out:output → response in:result)
connect_nodes(llm2 out:output → response in:result)
Nodes in the same column (column 1: both LLMs) are placed in parallel — stacked vertically by row.
Example — running a system workflow in a specific mode:
run_system_workflow(
  workflow_name: "Nano Banana Pro",
  mode: "image-to-image",
  values: {
    prompt: "Add a dog to the scene",
    image_url: "https://example.com/photo.jpg"
  }
)
→ Creates a temp workflow with Nano Banana Pro in Image to Image mode,
  runs it, then cleans up.
→ Use get_run with the returned Run ID to check progress.
For multi-mode nodes like Nano Banana Pro and GPT Image 1.5, pass mode: "image-to-image" to switch from the default text-to-image mode. Use list_node_types to see available modes.
Example — running a model directly:
list_models(category: "image") → sees flux_2_pro, nano_banana_pro, etc.
run_model(nodeType: "flux_2_pro", input: {prompt: "A red Ferrari"})
  → returns runId
get_model_run(runId) → check status and get result URL
Use direct model runs for quick single-model tasks without building a full workflow.
Port names returned by add_node and list_node_types use the in: / out: prefix format expected by connect_nodes. The connect_nodes tool validates type compatibility (e.g. image→image), prevents cycles, and enforces single-input rules. Use disconnect_nodes to remove a connection, or delete_node to remove a node along with all its connections.

Request Node Handling

When a workflow contains a Request Node with input fields, start_run automatically detects it and guides the AI assistant through providing values.
  1. The AI assistant calls start_run with just the workflowId
  2. The tool detects Request Node fields and returns their names, types, and current defaults — instead of starting the run
  3. The AI assistant calls start_run again with the values parameter filled in for each Request Node field
  4. The workflow runs with the provided inputs and values are synced to the Request Node in the UI
The values parameter is keyed by nodeId then fieldId. The tool returns the exact field IDs needed, so the AI assistant can construct the correct payload automatically.
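A hypothetical values payload (the node and field IDs here are illustrative — use the exact IDs returned by the first start_run call):

```json
{
  "workflowId": "wf_abc123",
  "values": {
    "request_node_id": {
      "field_cat": "Cat",
      "field_dog": "Dog"
    }
  }
}
```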

Run Results

Both start_run and get_run return a structured response with two sections:
The Response Node output — the final workflow result. This is always listed first in the response so the AI assistant can surface the primary result immediately.
Status and output of every node in the workflow. Response nodes appear first, followed by all other nodes. Each entry includes the node type, ID, status, and output (or error).
If the workflow is still running, the response shows the current execution state and per-node statuses instead. Use get_run to poll for the final result.
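As a rough illustration of the shape of a completed run (field names and formatting here are illustrative, not the exact tool output):

```
Response Node output (final result):
  result → https://.../output.png

Node outputs:
  response   (node_resp)  COMPLETED
  flux_2_pro (node_img)   COMPLETED → image URL
  gpt_5_4    (node_llm)   COMPLETED → text output
```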

Smart Routing

The MCP tools automatically find the right workflow when given a name:
User intent                  Tool                  Search order
"Run node Flux"              run_system_workflow   System workflows only
"Run workflow My Pipeline"   start_run             User workflows → System workflows
"Run Flux directly"          run_model             Direct model execution
All name-based tools use fuzzy matching — partial names like “Flux” will match “FLUX 2 Pro”, and “Nano Banana” will match “Nano Banana Pro”.
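For example (matching behavior as described above; the workflow names are illustrative):

```
run_system_workflow(workflow_name: "Flux")         → matches "FLUX 2 Pro"
run_system_workflow(workflow_name: "Nano Banana")  → matches "Nano Banana Pro"
start_run(workflow_name: "Pipeline")               → matches user workflow "My Pipeline"
```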

Checking Run Status

Workflow runs execute asynchronously. After starting a run, use get_run to check progress:
start_run  →  returns run status + any available outputs
get_run    →  check updated status (COMPLETED / FAILED / RUNNING)
The AI assistant can call get_run multiple times until the run reaches a terminal state (COMPLETED, FAILED, or CANCELED).
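A typical polling sequence looks like (illustrative; the run ID and statuses come from the tool responses):

```
start_run(workflowId)  → runId, status: RUNNING
get_run(runId)         → status: RUNNING (per-node statuses shown)
get_run(runId)         → status: COMPLETED (Response Node output returned)
```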

Authentication

The MCP server uses the same Bearer token as the REST API. Include it in the headers of your MCP client configuration — no separate authentication is needed.
Requests without a valid API key receive a 401 Unauthorized error.

Compatibility

Works with any MCP client that supports the Streamable HTTP transport:
  • Claude Desktop
  • Claude Code
  • Cursor
  • Windsurf
  • Any MCP-compatible client