
AI Safety at CreatorKit

Hyperrealistic video, built responsibly.

Why it matters

At CreatorKit we know how powerful hyperrealistic, AI-generated video can be. Our goal is to put that power in every storyteller’s hands without compromising trust or safety. The principles below guide how we build, deploy, and govern our technology, and they reflect a dual mission:
1. Empower every brand with studio-quality synthetic actors.

2. Protect public trust with verifiable, abuse-resistant tech.

Acceptable Use Only

Certain content can cause real-world harm and has no place on CreatorKit. Our Acceptable-Use & Moderation Policy bans:
Violence, harassment, hate, or discrimination
Sexually explicit or pornographic material
Third-party intellectual-property infringement
Illegal activity or instructions for wrongdoing
Spam, scams, or other fraudulent content
Harmful misinformation
Uploads are screened automatically by our abuse-detection engine and, when necessary, routed to our Trust & Safety team before publication. Violations trigger account suspension.

Transparency & Control

Every CreatorKit export ships with an invisible, tamper-resistant watermark plus a signed C2PA manifest. The watermark hides inside every frame and survives crops, re-renders, and social-media compression, while the C2PA certificate travels alongside the file as a human- and machine-readable log of who created the clip, when, and with which transformations. Anyone—from a newsroom fact-checker to an end-viewer—can verify authenticity in seconds.
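To make the provenance idea concrete, here is a minimal, hypothetical sketch of a signed side-car manifest. A real C2PA manifest uses X.509 certificates and COSE signatures rather than a shared-secret HMAC, and the field names below are invented for illustration; the point is only how a signature binds a file’s hash to its creation log so that any re-encode or edit is detectable.

```python
import hashlib
import hmac
import json

# Toy stand-in for a signing certificate; real C2PA uses asymmetric keys.
SIGNING_KEY = b"creatorkit-demo-key"

def sign_manifest(video_bytes: bytes, creator: str, actions: list) -> dict:
    """Build a side-car manifest binding the file hash to its creation log."""
    manifest = {
        "asset_sha256": hashlib.sha256(video_bytes).hexdigest(),
        "creator": creator,
        "actions": actions,  # e.g. ["generated", "color-graded", "exported"]
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, "sha256").hexdigest()
    return manifest

def verify_manifest(video_bytes: bytes, manifest: dict) -> bool:
    """Check the signature AND that the hash matches this exact file."""
    claimed = dict(manifest)
    sig = claimed.pop("signature")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, "sha256").hexdigest()
    return (hmac.compare_digest(sig, expected)
            and claimed["asset_sha256"] == hashlib.sha256(video_bytes).hexdigest())

video = b"\x00\x01fake-mp4-bytes"
m = sign_manifest(video, "studio@example.com", ["generated", "exported"])
assert verify_manifest(video, m)             # untouched file verifies
assert not verify_manifest(video + b"x", m)  # any byte-level change breaks it
```

Because the signature covers both the content hash and the edit history, neither can be altered after the fact without the verification step failing.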
Creators stay in charge of their data. Projects are private by default, and we use your footage to improve our models only when you opt in.

A one-click “Authenticity Badge” lets you surface proof-of-origin in the player, and downloadable audit logs give legal and compliance teams a clear chain of custody whenever they need it.

Finally, transparency extends beyond our walls: CreatorKit is an active member of the Content Authenticity Initiative (CAI) and helps shape open standards so that authenticity signals generated here are recognised everywhere: across creative suites, social platforms, and fact-checking tools, reinforcing trust at every step of the content journey.
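The chain-of-custody property of an audit log can be illustrated with a standard hash-chaining technique: each entry’s hash covers the previous entry, so inserting, editing, or deleting any record breaks the chain. This is a generic sketch of the concept, not CreatorKit’s actual log format; all field names are hypothetical.

```python
import hashlib
import json

def append_entry(log: list, event: dict) -> None:
    """Append an event whose hash commits to the previous entry."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    record = {"event": event, "prev": prev_hash}
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    log.append(record)

def verify_chain(log: list) -> bool:
    """Recompute every hash and link; any tampering surfaces as a mismatch."""
    prev_hash = "0" * 64
    for record in log:
        body = {"event": record["event"], "prev": record["prev"]}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if record["prev"] != prev_hash or record["hash"] != expected:
            return False
        prev_hash = record["hash"]
    return True

log = []
append_entry(log, {"action": "render", "user": "alice"})
append_entry(log, {"action": "export", "user": "alice"})
assert verify_chain(log)
log[0]["event"]["user"] = "mallory"  # rewrite history...
assert not verify_chain(log)         # ...and the chain no longer verifies
```

A compliance team re-running the verification at any time can confirm that no entry has been silently altered since it was written.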

Built-in safety technology

At the heart of CreatorKit, and in every frame we render, is a safety-first engineering stack designed to keep hyperrealistic AI video both powerful and trustworthy.

Invisible Watermark

Tamper-resistant bits baked into every frame that survive re-renders, crops, and social-media compression. Retrieval accuracy stays above 98% even at 144p.
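Why does retrieval stay accurate after lossy transforms? Redundancy. The toy sketch below embeds each payload bit into many pixels and recovers it by majority vote; a production watermark uses far more sophisticated spread-spectrum or frequency-domain techniques, so treat this purely as an illustration of the redundancy principle.

```python
import random

def embed(pixels: list, bits: list) -> list:
    """Write each watermark bit into the LSB of many pixels (repetition code)."""
    out = list(pixels)
    for i in range(len(out)):
        bit = bits[i % len(bits)]          # cycle the payload across the frame
        out[i] = (out[i] & ~1) | bit
    return out

def extract(pixels: list, n_bits: int) -> list:
    """Recover each bit by majority vote over all pixels that carried it."""
    votes = [0] * n_bits
    counts = [0] * n_bits
    for i, px in enumerate(pixels):
        votes[i % n_bits] += px & 1
        counts[i % n_bits] += 1
    return [1 if votes[k] * 2 > counts[k] else 0 for k in range(n_bits)]

random.seed(0)
frame = [random.randrange(256) for _ in range(10_000)]
payload = [1, 0, 1, 1, 0, 0, 1, 0]
marked = embed(frame, payload)

# Simulate compression noise: flip the carrier bit in 20% of pixels at random.
noisy = [(px ^ 1) if random.random() < 0.2 else px for px in marked]
assert extract(noisy, len(payload)) == payload  # majority vote still recovers it
```

With roughly 1,250 pixels voting per bit, a 20% corruption rate leaves each majority vote correct with overwhelming probability, which is the intuition behind robustness at low resolutions.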

C2PA Provenance Manifest

Every file ships with a standards-based side-car that logs the full creation chain. Anyone—from newsroom to regulator—can confirm the video’s origin in seconds.

Automatic AI-Content Detection

Before a render leaves our servers, it is scanned by an abuse-detection engine (Hive + Reality Defender). Suspicious content is blocked or routed to manual review.

Human-in-the-Loop Review

Our Trust & Safety analysts handle flagged videos (political spots, face swaps, borderline content) within SLA. Every decision is logged, auditable, and fed back to improve the models.

Content Authenticity Initiative member

CreatorKit is a proud member of the Content Authenticity Initiative (CAI)—a global consortium spearheaded by Adobe and joined by industry leaders such as Nvidia, Microsoft, Nikon, BBC, Intel, and The New York Times. CAI’s mission is simple: restore trust in digital media by standardizing how provenance and edits are recorded.
Open provenance standard. Together with CAI partners we co-develop C2PA, the open specification that embeds tamper-resistant metadata—who created the asset, when, and how—directly inside every file.
End-to-end integrity. CAI guidelines inform our own watermarking pipeline and help guarantee that any CreatorKit video can be independently verified, whether it’s on a brand’s website or shared across social platforms.
Ecosystem adoption. By aligning with Adobe, Nvidia, Microsoft and dozens of media outlets, we ensure the authenticity signal travels with your content everywhere—from creative tools to distribution channels and fact-checking services.
10,000+ misuse attempts stopped already

AI video detector

We built a tool that verifies whether a video was AI-generated, and whether it was generated with CreatorKit. It sits at the heart of our home page, just as AI safety sits at the heart of our product.
Go to AI detector tool
