Adaptive Post-Production Product Placement

Traditional product placement in movies is static, generalized, and often irrelevant to global audiences. We built a post-production system that dynamically replaces branded objects in videos based on viewer context—without reshooting or altering the storyline.

Instead of changing the movie, we change the ads inside the movie.

How It Works

1. Video Input & User Control

Users can submit full videos for automatic detection, or draw an on-screen selection to target a specific region manually. The result is a lightweight video-editing workflow that combines explicit user intent with automated scene understanding.
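A submission therefore carries the video plus the user's targeting intent. A minimal sketch of that job payload (the `ReplacementJob` shape, field names, and mode strings are illustrative, not the actual API):

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class ReplacementJob:
    """One submitted video plus the user's targeting intent (illustrative shape)."""
    video_path: str
    mode: str                                   # "auto" (full detection) or "manual"
    target_brand: Optional[str] = None          # e.g. "Starbucks"; None = detect all
    region: Optional[Tuple[int, int, int, int]] = None  # (x, y, w, h) for manual mode

    def validate(self) -> None:
        if self.mode not in ("auto", "manual"):
            raise ValueError(f"unknown mode: {self.mode}")
        if self.mode == "manual" and self.region is None:
            raise ValueError("manual mode requires an on-screen region")

job = ReplacementJob("movie.mp4", mode="manual",
                     target_brand="Starbucks", region=(120, 80, 200, 160))
job.validate()  # raises ValueError if the request is inconsistent
```

Validating up front keeps the downstream detection and segmentation stages from receiving ambiguous requests.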

2. Video Understanding with Gemini

Gemini analyzes the video end-to-end, identifies the frames that contain relevant branded objects, and filters for semantically meaningful scenes based on the user's request and any manual selections.
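One way to turn per-frame detections into scenes is to merge nearby hits into contiguous time spans. A sketch, assuming the model's output has already been parsed into `(timestamp_seconds, brand, confidence)` tuples (that tuple shape, the thresholds, and the gap heuristic are assumptions, not Gemini's actual response format):

```python
def frames_to_scenes(detections, target_brand, min_conf=0.6, max_gap=1.0):
    """Merge per-frame brand detections into contiguous scene spans.

    detections: iterable of (timestamp_seconds, brand, confidence).
    Returns a list of (start, end) spans where the target brand is visible.
    """
    hits = sorted(t for t, brand, conf in detections
                  if brand == target_brand and conf >= min_conf)
    scenes = []
    for t in hits:
        if scenes and t - scenes[-1][1] <= max_gap:
            scenes[-1][1] = t          # close enough: extend the current scene
        else:
            scenes.append([t, t])      # otherwise open a new scene
    return [(start, end) for start, end in scenes]

detections = [(1.0, "Starbucks", 0.9), (1.5, "Starbucks", 0.8),
              (2.0, "Coke", 0.95), (8.0, "Starbucks", 0.7)]
frames_to_scenes(detections, "Starbucks")  # → [(1.0, 1.5), (8.0, 8.0)]
```

Only the resulting spans need to be passed to the segmentation stage, which keeps per-pixel work proportional to the scenes that actually matter.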

3. Object Segmentation with SAM 3

SAM 3 (Segment Anything Model 3) produces precise, pixel-accurate segmentation masks for the selected objects, tracking them across frames through occlusion and perspective changes.
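The masks then gate exactly which pixels the replacement stage may touch. A toy sketch of that gating (single-channel nested lists stand in for real H x W x 3 frames, and the mask is assumed to be the binary mask SAM 3 would emit):

```python
def composite(frame, mask, replacement):
    """Paste `replacement` pixels into `frame` wherever the binary
    segmentation mask is 1; everywhere else the original frame survives.
    All inputs are H x W grids of equal shape."""
    return [[replacement[y][x] if mask[y][x] else frame[y][x]
             for x in range(len(frame[0]))]
            for y in range(len(frame))]

frame       = [[10, 10, 10], [10, 10, 10]]
mask        = [[0, 1, 1], [0, 0, 1]]      # 1 = pixels belonging to the object
replacement = [[99, 99, 99], [99, 99, 99]]
composite(frame, mask, replacement)
# → [[10, 99, 99], [10, 10, 99]]
```

In production the same per-pixel gating runs per frame over the scene spans, so the rest of the shot is guaranteed untouched.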

4. Brand Replacement

Using the generated masks, we inpaint and composite replacement products with ComfyUI, choosing context-aware alternatives based on the viewer's location, local brand availability, and relevance.
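ComfyUI jobs are submitted as a workflow graph posted to its `/prompt` endpoint. A sketch of assembling that request; the node ids and `class_type` names below are placeholders (the real graph would come from the project's exported ComfyUI workflow JSON):

```python
import uuid

def build_inpaint_payload(frame_path, mask_path, prompt_text):
    """Assemble a ComfyUI /prompt request body.

    Node ids and class_type names are placeholders for whatever nodes the
    exported workflow actually uses; ["1", 0] wires node 1's first output
    into another node's input, which is how ComfyUI graphs reference edges.
    """
    workflow = {
        "1": {"class_type": "LoadImage", "inputs": {"image": frame_path}},
        "2": {"class_type": "LoadImageMask", "inputs": {"image": mask_path}},
        "3": {"class_type": "InpaintNode",  # placeholder inpainting node
              "inputs": {"image": ["1", 0], "mask": ["2", 0],
                         "prompt": prompt_text}},
    }
    return {"prompt": workflow, "client_id": str(uuid.uuid4())}

payload = build_inpaint_payload("frame_0042.png", "mask_0042.png",
                                "a Milo can, matching the scene's lighting")
# Submitting is then an HTTP POST of this JSON to the ComfyUI server,
# e.g. http://127.0.0.1:8188/prompt (host and port depend on deployment).
```

Keeping the payload builder pure makes it easy to batch one request per scene span without touching the server until all masks are ready.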

Tech Stack

AI Models: Gemini & SAM 3

Backend: Python & FastAPI

Frontend: Next.js & TypeScript

Deep Learning: PyTorch & ComfyUI

Infrastructure: Vast.ai GPU Compute

Styling: Tailwind CSS

Example Use Cases

iPhones in U.S. releases → Huawei for China

Starbucks cups → Milo in regions without Starbucks

Ford F-150 → VW Golf for viewers in Germany

Coke cans → Pepsi in local markets
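The use cases above reduce to a lookup keyed by detected brand and viewer region, falling back to the original brand when no local replacement is sold. A minimal sketch (the map entries and region codes are illustrative; real mappings would come from ad inventory):

```python
# Illustrative market-aware brand map: (detected brand, viewer region) -> replacement.
BRAND_MAP = {
    ("iPhone", "CN"): "Huawei",
    ("Starbucks", "MY"): "Milo",
    ("Ford F-150", "DE"): "VW Golf",
    ("Coca-Cola", "US"): "Pepsi",
}

def pick_replacement(detected_brand, viewer_region):
    """Keep the original brand when no regional replacement exists."""
    return BRAND_MAP.get((detected_brand, viewer_region), detected_brand)

pick_replacement("Starbucks", "MY")  # → "Milo"
pick_replacement("Starbucks", "FR")  # → "Starbucks" (no mapping; keep original)
```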

Why It Matters

Instead of advertising at everyone, we advertise to the right audience. This system turns static product placement into a dynamic, personalized advertising layer without breaking immersion: no reshoots, post-production only, and scalable to long-form content.