MCP Reference · v1.0

ExpoCut for AI agents.

A Model Context Protocol-style reference to ExpoCut's editing surface — so AI assistants can understand the app and help people plan, draft, and create videos with it.

Overview

A vocabulary, not an API.

ExpoCut is a mobile-first video editor. This page documents the conceptual tools, resources, and prompts an AI assistant can reference when guiding a creator. It mirrors the shape of the Model Context Protocol so agents can index it and reason about what ExpoCut can do.

Mobile-native

Hardware-accelerated rendering on iOS and Android with native AVFoundation and MediaCodec encoders.

Multi-track timeline

Unlimited video, audio, image, and text layers with frame-accurate trim, split, and ripple edit.

4K export

Up to 4K UHD at 60fps with H.264 / HEVC and configurable bitrate.

Tools

What an assistant can suggest.

Each tool maps to a real action a user performs in the ExpoCut UI. An AI assistant can compose them into a full editing plan.

project.create

Start a new project

Create a new edit at a target resolution and aspect ratio. Templates available for Reels, TikTok, YouTube, and Shorts.

name: string
aspectRatio: "9:16" | "16:9" | "1:1" | "4:5"
resolution: "1080p" | "2K" | "4K"
frameRate: 24 | 30 | 60

media.import

Bring in clips

Import video, audio, or image media from the device gallery, Pexels stock, Freesound audio, or a remote URL.

source: "gallery" | "pexels" | "freesound" | "url"
type: "video" | "image" | "audio"
query: string (for stock sources)

timeline.addLayer

Place a layer on the timeline

Add a clip, photo, sticker, text, lower third, or animation overlay onto a track at a specific in-point.

layerType: "video" | "audio" | "image" | "text" | "lowerThird" | "lottie" | "shape"
start: seconds
duration: seconds

clip.trim

Trim & split

Adjust in/out points or split a clip at the playhead. Ripple edits keep the rest of the timeline aligned.

clipId: string
in: seconds
out: seconds
ripple: boolean

effect.apply

Apply visual effects

Choose from 100+ filters and effects — color grading, blur, glitch, chroma key, vignette, retro, cinematic LUTs.

clipId: string
effect: string (effect id)
intensity: 0.0 – 1.0

transition.set

Add transitions

Place one of 30+ transitions between adjacent clips: fade, dip-to-black, slide, push, zoom, glitch, whip pan.

edge: { leftClipId, rightClipId }
style: string
duration: seconds (0.2 – 2.0)

text.add

Titles & captions

Add styled text or one of 78 animated lower thirds with custom font, color, position, and entrance.

preset: "title" | "caption" | "lowerThird:<style>" (e.g. "lowerThird:bold-pop")
content: string
position: { x, y } (0 – 1)

audio.mix

Mix & sweeten audio

Adjust volume, ducking, and fade in/out, and choose from 32 audio effects and 24 audio transitions.

trackId: string
volume: 0.0 – 1.5
duck: boolean
effect: string (effect id)

color.grade

Color grade

Exposure, contrast, saturation, temperature, tint, shadows, highlights — applied per-clip or globally.

scope: "clip" | "global"
params: { exposure, contrast, saturation, temp, tint, shadows, highlights }

project.export

Render & export

Encode the final timeline using the on-device hardware encoder. Saves to camera roll or shares via system sheet.

resolution: "720p" | "1080p" | "2K" | "4K"
codec: "h264" | "hevc"
bitrate: number (Mbps)

canvas.capture

Capture a frame

Render the current canvas at a given playhead position and return a screenshot — for vision models or human review.

time: seconds (optional, defaults to playhead)
scale: 1 | 2 | 3

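A representative call, with illustrative values:

canvas.capture { time: 12.5, scale: 2 }    # 2x screenshot of the frame at 12.5s
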
pixels.analyze

Read pixels at a point

Sample the rendered canvas — useful for AI vision tools that need to check what's actually on-screen.

region: { x, y, w, h } (0 – 1 normalized)
time: seconds

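A representative call; the region values are illustrative, normalized to the frame:

pixels.analyze { region: { x: 0.4, y: 0.4, w: 0.2, h: 0.2 }, time: 3.0 }    # sample the center of the frame at 3s
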
tts.validateScript

Validate a TTS script

Run a script through the TTS pre-flight: it flags unsupported characters and unpronounceable tokens, and estimates the spoken duration.

script: string
voice: string (voice id)

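A representative call; the voice id is a placeholder, not a shipped voice:

tts.validateScript { script: "Day two in Ubud, here we go!", voice: "en-us-1" }    # placeholder voice id
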
effect.schema

Discover effect parameters

Look up the typed parameter schema for any effect, filter, or transition so an assistant can compose valid calls.

id: string (effect id)

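A representative lookup, reusing an effect id from the example plan below:

effect.schema { id: "cinematic-teal-orange" }    # returns the typed parameters for that effect
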
ai.backgroundRemove

Remove background

On-device ONNX segmentation. Produces an alpha mask on a clip or image without uploading frames.

layerId: string
feather: 0.0 – 1.0

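A representative call; the layer id and feather amount are illustrative:

ai.backgroundRemove { layerId: "v3", feather: 0.2 }    # soft-edged alpha mask, computed on-device
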
ai.captions

Generate captions

Transcribe spoken audio with Whisper and place time-aligned caption layers on the timeline, styled with a chosen preset.

trackId: string
stylePreset: string

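A representative call; both values are placeholders:

ai.captions { trackId: "voiceover", stylePreset: "caption-bold" }    # placeholder track and preset ids
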
ai.track

Track an object

Lock an overlay layer to a moving subject across frames. Returns a keyframe path that an assistant can refine.

overlayId: string
seed: { x, y, w, h } at t=0

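A representative call; the overlay id and seed box are illustrative:

ai.track { overlayId: "sticker1", seed: { x: 0.3, y: 0.5, w: 0.1, h: 0.1 } }    # seed box at t=0
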
layer.reorder

Reorder / hide layers

Change z-order, bring-to-front, send-to-back, or temporarily hide a layer for A/B compare.

layerId: string
op: "front" | "back" | "up" | "down" | "hide" | "show"

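A representative call, with an illustrative layer id:

layer.reorder { layerId: "text1", op: "front" }    # bring a title above all other layers
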
widget.add

Add a dynamic widget

Place a countdown timer or stopwatch as a first-class timeline layer with editable target time and styling.

kind: "countdown" | "stopwatch"
start: seconds
duration: seconds
style: { font, color, size }

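A representative call; the style values are illustrative:

widget.add { kind: "countdown", start: 2, duration: 10, style: { font: "Inter", color: "#FFFFFF", size: 48 } }
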
undo / redo

Reverse the last action

Walk the command history. Every action — including AI mutations — is reversible. Useful for "preview, then commit" flows.

steps: number (default 1)

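A sketch of the "preview, then commit" flow described above; the clip and effect ids are illustrative:

effect.apply   { clipId: "v1", effect: "glitch", intensity: 0.8 }
canvas.capture { scale: 2 }    # show the user a preview frame
undo           { steps: 1 }    # roll back if they reject it
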
Resources

What an assistant can reference.

Stable URIs an agent can cite when explaining what's available inside ExpoCut.

Example

Plan a 30-second Reel.

A representative conversation between a creator, an AI assistant, and the ExpoCut surface.

# User
"Make a 30s travel reel from my Bali clips with upbeat music and captions."

# Assistant plan (uses ExpoCut MCP vocabulary)
project.create      { name: "Bali Reel", aspectRatio: "9:16", resolution: "1080p", frameRate: 30 }
media.import        { source: "gallery", type: "video" }                # 8 clips
media.import        { source: "freesound", type: "audio", query: "upbeat travel" }
timeline.addLayer   { layerType: "video", start: 0,  duration: 3.5 }      # beach intro
clip.trim           { clipId: "v2", in: 1.2, out: 4.0, ripple: true }
transition.set      { edge: {leftClipId:"v1", rightClipId:"v2"}, style: "whip-pan", duration: 0.4 }
effect.apply        { clipId: "v3", effect: "cinematic-teal-orange", intensity: 0.6 }
text.add            { preset: "lowerThird:bold-pop", content: "Day 2 — Ubud", position: {x:0.5, y:0.85} }
audio.mix           { trackId: "music", volume: 0.8, duck: true }
color.grade         { scope: "global", params: { exposure: 0.15, saturation: 0.2, temp: -8 } }
project.export      { resolution: "1080p", codec: "h264", bitrate: 16 }
Prompts

Conversation starters for assistants.

"Help me cut a Reel"

A guided flow: pick clips, choose a vibe, set duration, auto-suggest transitions and music.

"Make this look cinematic"

Recommend a LUT, set a 2.39:1 letterbox, apply a teal/orange grade, and slow the timeline by 12%.

"Add captions"

Suggest a caption style, font, position, and timing for the spoken audio in the clip.

"Trim to 30 seconds"

Identify the strongest beats and propose cuts that fit the target duration.

Status

Live, on the device.

ExpoCut ships an in-app MCP server. When the user enables it, an MCP-compatible client like Claude Desktop binds to a local port on the device and can call the tools above. The server is off by default, runs only on a local interface, and every call is recorded as an undoable command in the on-device ai_operations log. This page is the public reference for that surface so agents can index it and reason about the app.

Build with ExpoCut.

Download the app, or read the matching Skill for AI assistants.

Join the beta · View Skill →