

Granular pixel-level motion control for cinematic generative video.

MotionBrush is a sophisticated spatial control feature integrated within the Runway Gen-2 and Gen-3 Alpha architectures. Historically, generative video models suffered from 'global motion' issues, where the entire frame would shift unpredictably. MotionBrush solves this by letting creators apply a weighted mask to specific regions of a static image, instructing the latent diffusion model to generate temporal variance only within those localized pixel coordinates.

As of 2026, the tool supports multi-brush layering, allowing independent motion vectors (directional, proximity, and scale) within a single generation. A dedicated optical flow estimation layer maps user brush strokes to 3D trajectory data, which the Gen-3 model then interprets during the denoising process.

Positioned as a professional-grade VFX tool, MotionBrush bridges the gap between unpredictable AI generation and traditional keyframe animation, making it a staple in high-end advertising, social media content, and pre-visualization workflows. Its market position is reinforced by deep integration into the Runway Creative Suite, an ecosystem where assets move seamlessly from generation to post-production.
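The idea above can be made concrete with a small, purely illustrative sketch: a painted brush mask is feathered into a weight map, each brush contributes its own motion vector, and only the masked pixels end up with non-zero motion. This is a toy conceptual model, not Runway's internal implementation; every function name and value below is invented for explanation.

```python
# Illustrative sketch only: a toy model of how a painted brush mask and a
# per-brush motion vector could combine into a dense, localized motion field.
# This is NOT Runway's internal implementation; all names are invented.
import numpy as np
from scipy.ndimage import gaussian_filter

H, W = 576, 1024  # frame size (16:9)

def feathered_weight(mask: np.ndarray, feather_px: float = 12.0) -> np.ndarray:
    """Soften a binary brush mask so motion fades out at the edges."""
    return gaussian_filter(mask.astype(np.float32), sigma=feather_px)

def motion_field(brushes):
    """Blend per-brush (dx, dy, dz) vectors into a dense per-pixel field.

    `brushes` is a list of (mask, vector) pairs; pixels outside every mask
    keep a zero vector, i.e. they stay static.
    """
    field = np.zeros((H, W, 3), dtype=np.float32)
    total_w = np.zeros((H, W), dtype=np.float32)
    for mask, vec in brushes:
        w = feathered_weight(mask)
        field += w[..., None] * np.asarray(vec, dtype=np.float32)
        total_w += w
    # Where brushes overlap, average the vectors instead of summing them.
    overlap = total_w > 1.0
    field[overlap] /= total_w[overlap, None]
    return field

# Example: animate a "river" band to the right, leave everything else static.
river_mask = np.zeros((H, W))
river_mask[400:500, :] = 1.0
flow = motion_field([(river_mask, (4.0, 0.0, 0.0))])  # dx = 4 px per frame
```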
Assign up to 5 independent motion masks with distinct X, Y, and Z axis parameters.
Controls Z-axis (zoom-in/zoom-out) motion, moving the brushed region toward or away from the camera.
Injects Perlin noise into the latent space to simulate micro-vibrations and natural randomness (a sketch of this idea follows the feature list).
Adjustable edge softness for masks to ensure smooth transitions between animated and static pixels.
Forces motion to adhere strictly to a vector, preventing model-based drift.
Low-fidelity latent preview of motion paths before full generation.
Post-denoising algorithm that stabilizes high-frequency jitter in brushed areas.
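As a companion to the feature list, here is a hedged sketch of the ambient-noise idea: low-frequency random jitter is added to the motion field, scaled by the brush mask so static pixels stay untouched. The product copy mentions Perlin noise; for brevity this toy version substitutes smoothed Gaussian noise, which gives a similar low-frequency effect. Names and defaults are illustrative, not part of any Runway API.

```python
# Illustrative sketch only: "ambient" micro-jitter inside the brushed region.
# The feature list mentions Perlin noise; smoothed Gaussian noise is used here
# as a stand-in for brevity.
import numpy as np
from scipy.ndimage import gaussian_filter

def add_ambient_noise(field: np.ndarray, mask: np.ndarray,
                      intensity: float = 0.5, smoothness: float = 8.0,
                      seed: int = 0) -> np.ndarray:
    """Perturb a per-pixel motion field with low-frequency noise.

    `field` is an (H, W, C) motion field and `mask` a 0..1 weight map; the
    jitter is scaled by the mask so un-brushed pixels remain untouched.
    """
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal(field.shape).astype(np.float32)
    # Low-pass filter each channel so the jitter looks organic, not grainy.
    for c in range(field.shape[-1]):
        noise[..., c] = gaussian_filter(noise[..., c], sigma=smoothness)
    return field + intensity * mask[..., None] * noise
```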
Navigate to the Runway dashboard and select the 'Gen-3 Alpha' or 'Gen-2' video generation tool.
Upload a high-resolution base image (16:9 or 9:16 aspect ratio recommended).
Click the 'MotionBrush' icon located in the image preview toolbar.
Select 'Brush 1' and adjust the brush size based on the target area (e.g., a flowing river or hair).
Paint the area of the image you wish to animate; use the eraser tool to refine the mask edges.
Use the 'Horizontal', 'Vertical', and 'Proximity' sliders to define the primary movement direction (these settings are summarised as a data structure after the steps).
(Optional) Add 'Brush 2' or 'Brush 3' to apply different motion parameters to other areas of the image.
Adjust the 'Ambient Noise' slider to control the intensity of secondary, natural micro-movements.
Click 'Save' and enter a text prompt to provide context for the style of motion (e.g., 'gentle flowing').
Hit 'Generate' to render the 5-10 second video clip (generation consumes credits).
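The slider settings from the walkthrough can be summarised as a simple data structure. The sketch below is hypothetical: Runway exposes these controls through its web UI, and this is not an actual Runway API payload; it only mirrors the parameters named in the steps (horizontal, vertical, proximity, ambient noise, prompt) and the 5-mask limit from the feature list.

```python
# Hypothetical summary of the settings configured in the steps above.
# This is NOT a Runway API payload; it is only an illustration of the
# parameters the UI exposes.
from dataclasses import dataclass, field
from typing import List

@dataclass
class BrushSettings:
    name: str                   # e.g. "Brush 1"
    horizontal: float = 0.0     # left/right drift of the masked region
    vertical: float = 0.0       # up/down drift
    proximity: float = 0.0      # toward/away from the camera (Z axis)
    ambient_noise: float = 0.0  # strength of secondary micro-movements

@dataclass
class MotionBrushJob:
    base_image: str                           # path or URL of the still image
    prompt: str = ""                          # style context, e.g. "gentle flowing"
    brushes: List[BrushSettings] = field(default_factory=list)

    def add_brush(self, settings: BrushSettings) -> None:
        if len(self.brushes) >= 5:            # the tool caps independent masks at 5
            raise ValueError("MotionBrush supports at most 5 independent masks")
        self.brushes.append(settings)

# Example mirroring the walkthrough: one brush painted over a flowing river.
job = MotionBrushJob(base_image="river.png", prompt="gentle flowing")
job.add_brush(BrushSettings(name="Brush 1", horizontal=0.6, ambient_noise=0.3))
```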
Verified feedback from other users.
"Users praise the granular control and professional-grade output, though some note a learning curve for the 'Proximity' slider."

Create studio-quality, consistent AI characters and narrative videos from simple text scripts.

Transforming still images into immersive digital humans and real-time conversational agents.

Turn text into photorealistic AI video in minutes with hyper-realistic digital humans.

Transform static fashion imagery into high-fidelity, pose-driven cinematic video.

The creative operating system for generative media and autonomous art agents.

Scale Global Video Production with AI-Driven Avatar Synthesis and Automated Localization