Three months ago, AI image detection was inconsistent. Some platforms checked metadata, others ignored it. Some AI tools embedded provenance data, others didn’t. Whether your AI-assisted work got flagged was largely a matter of which platform you uploaded to and which generator you used. That’s changed. Fast. Between the C2PA standard becoming ISO/IEC 22144, the EU AI Act enforcement clock starting in August 2026, California’s SB 942 already in effect, and almost every major AI generator now embedding cryptographic provenance data by default, the landscape for creators using AI in their workflow has fundamentally shifted.
Content Credentials, the cryptographically signed provenance data we covered in our earlier post on C2PA, graduated from an industry coalition specification to a formal ISO standard in 2025: the current C2PA 2.1 specification has been ratified as ISO/IEC 22144.
This matters because ISO standards have legal and procurement weight. Government agencies, large corporations, news organisations, and regulated industries can now reference C2PA in policies, contracts, and compliance frameworks without it being seen as a vendor-specific tool. Adoption follows standardisation.
The result: C2PA membership has grown to over 6,000 members and affiliates as of early 2026, including Google, Meta, OpenAI, Sony, Nikon, Leica, Samsung, and Adobe. The standard now spans cameras, smartphones, AI generators, editing tools, and increasingly, distribution platforms.
This is the biggest practical change. As of early 2026, every image you generate from a major AI tool carries embedded C2PA content credentials identifying it as AI-generated:
Google’s Gemini and Imagen image generators embed trainedAlgorithmicMedia tags via Google’s C2PA Core Generator Library. Adobe Firefly signs every generated image with Adobe Inc. credentials. OpenAI’s DALL-E and ChatGPT image generation include full C2PA manifests. Midjourney has adopted C2PA across its image outputs.
Twelve months ago, this was opt-in or inconsistent. Now it’s default behaviour across the entire ecosystem. If you generated an image with any major tool in 2026, that image almost certainly carries cryptographic markers identifying it as AI-generated — whether you knew it or not.
It goes further than just AI tools. Camera manufacturers including Sony, Nikon, Leica, Canon, and Samsung are now signing photos at the moment of capture with hardware-rooted keys. Google Pixel and iPhone cameras are doing the same. The absence of credentials is starting to become its own signal.
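If you want to verify this for yourself, the quickest check is to look for the APP11 marker segments where JPEG stores C2PA's JUMBF boxes. Below is a minimal Python sketch of that check. It assumes a baseline JPEG and simply looks for the 'c2pa' box label rather than parsing the full JUMBF structure, so treat it as a heuristic; the official open-source c2patool does this properly, including signature validation.

```python
"""Heuristic check: does this JPEG carry a C2PA manifest?

A minimal sketch, assuming a baseline JPEG. C2PA manifests in JPEG
are stored in APP11 (0xFFEB) marker segments as JUMBF boxes; rather
than parsing the box structure, we walk the segment list and look
for the 'c2pa' box label. Use c2patool for real verification.
"""
import struct
import sys

def has_c2pa_manifest(path: str) -> bool:
    with open(path, "rb") as f:
        data = f.read()
    if data[:2] != b"\xff\xd8":        # missing SOI marker: not a JPEG
        return False
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:            # lost segment sync; give up
            break
        marker = data[i + 1]
        if marker == 0xDA:             # start of scan: header segments end here
            break
        length = struct.unpack(">H", data[i + 2:i + 4])[0]
        segment = data[i + 4:i + 2 + length]
        if marker == 0xEB and b"c2pa" in segment:   # APP11 JUMBF with C2PA label
            return True
        i += 2 + length
    return False

if __name__ == "__main__":
    found = has_c2pa_manifest(sys.argv[1])
    print("C2PA manifest found" if found else "no C2PA manifest found")
```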
This is the part most creators haven’t fully registered yet.
Article 50 of the EU AI Act — the European Union’s comprehensive AI regulation — establishes transparency obligations for AI-generated content. The article requires providers of AI systems generating synthetic content (images, audio, video, text) to ensure that content is marked in a machine-readable format and detectable as artificially generated.
Enforcement begins August 2, 2026. That’s three months from now.
For creators, this means EU-based AI tools and any AI tool serving EU users will be legally required to embed detectable provenance markers. Platforms operating in the EU will have corresponding obligations to handle that content appropriately. The EU’s preferred technical mechanism for compliance is — unsurprisingly — C2PA Content Credentials.
California has already moved. SB 942 took effect January 1, 2026, requiring large AI providers to offer detection tools and embed disclosures in AI-generated content. More state and national regulations are in the pipeline.
If you’re using AI in commercial work, the regulatory framework around disclosure and provenance is no longer hypothetical. It’s law, and it’s being implemented now.
The detection capability now exists. The question is what platforms do when they detect AI content.
Stock photography: Getty Images, iStock, and Adobe Stock are using C2PA verification in their submission pipelines. Getty bans AI-generated content entirely. Adobe Stock requires disclosure. Shutterstock has integrated content credential checking. If your work goes through these platforms, AI metadata can mean automatic rejection.
Social platforms: Meta displays “AI Generated” labels on Instagram and Facebook for content with AI markers. TikTok has similar labelling. LinkedIn supports content credentials and is rolling out provenance indicators. The labels don’t necessarily reduce reach, but they do change how viewers perceive the content.
Search engines: Google has integrated C2PA into its “About this image” feature in search results. Google Search can now display provenance information for images, telling users whether an image was AI-generated, what tool created it, and what edits were applied.
News organisations: AP, Reuters, the BBC, the New York Times, and others are members of the Content Authenticity Initiative and are implementing credential verification in editorial workflows. Content without verifiable provenance is increasingly viewed with suspicion in journalism contexts.
The trajectory is clear: in 2026, AI metadata is checked, used, and acted upon across most of the systems creators rely on to distribute their work.
Beyond cryptographic provenance, pixel-level detection has matured significantly.
Steganographic watermarks, invisible patterns embedded directly into image pixels, are now the second layer. Google's SynthID is the most widely deployed, embedded in Imagen and Gemini-generated content. Meta has its own implementation, Stable Signature, aimed at open-source latent diffusion models. OpenAI has been testing invisible watermarks across its image outputs.
These watermarks persist where metadata does not. Screenshots, re-encoding, format conversion, and mild editing don't remove them. They require dedicated detection algorithms but are increasingly integrated into platform pipelines.
The combination of metadata-based detection (fast, easy, but removable) and pixel-based detection (slower, more expensive, but durable) means that systems checking for AI content now have multiple independent signals to work with.
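To make that asymmetry concrete, here is a small Python sketch, assuming Pillow is installed and a hypothetical input file named generated.jpg. Decoding an image to pixels and re-encoding it produces a fresh JPEG with no C2PA segments, while the pixel data, and any watermark hidden in it, comes through intact.

```python
"""Why the metadata layer is fragile: one re-encode and it's gone.

A minimal sketch, assuming Pillow is installed and a hypothetical
input file named generated.jpg. Re-saving decodes the image to pixels
and writes a fresh JPEG, so marker segments like the APP11/JUMBF
boxes holding C2PA manifests are not carried over. The pixels, and
any watermark hidden in them, survive the round trip.
"""
from PIL import Image

def reencode(src: str, dst: str, quality: int = 90) -> None:
    # Decode to pixels, then encode a brand-new JPEG from scratch.
    with Image.open(src) as im:
        im.convert("RGB").save(dst, "JPEG", quality=quality)

def mentions_c2pa(path: str) -> bool:
    # Crude byte-level check for the 'c2pa' JUMBF label (see earlier sketch).
    with open(path, "rb") as f:
        return b"c2pa" in f.read()

if __name__ == "__main__":
    reencode("generated.jpg", "reencoded.jpg")
    print("original mentions c2pa: ", mentions_c2pa("generated.jpg"))
    print("re-encode mentions c2pa:", mentions_c2pa("reencoded.jpg"))
```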
A few practical considerations:
Your AI-generated work is detectable by default. Every modern AI tool embeds markers identifying its output. If you’re using AI in commercial work, in journalism, in stock photography submissions, in any context where AI use matters, assume the platforms can detect it. They probably can.
Removing metadata is one layer, not the only layer. Tools like MetaStrip remove C2PA manifests, XMP fields, IPTC markers, and embedded generation parameters: the most common and easily detectable AI identification signals (the sketch after this list shows what those markers look like on disk). This is meaningful and effective against most automated metadata-based detection systems. But it doesn't remove pixel-level steganographic watermarks where those are present, and it doesn't help against semantic detection, where a classifier judges whether the image content itself looks AI-generated.
The legal framework is shifting. Removing AI provenance data from your own AI-generated content is generally legal. Removing it from AI-generated content where regulations require disclosure may not be. The EU AI Act, California SB 942, and similar frameworks are creating contexts where AI disclosure is a legal obligation rather than a platform preference. Pay attention to where your content is going and what regulations apply.
Transparency has practical value too. In some contexts — clearly labelled creative AI work, transparent AI-assisted journalism, AI art with attribution — keeping provenance data adds credibility rather than reducing it. Audiences increasingly value clear labelling over ambiguity.
The detection arms race favours detection. Provenance standards, watermarking technology, regulatory frameworks, and platform infrastructure are all advancing together. Strategies that worked six months ago may not work in six months. Building your work on the assumption that AI use will remain undetectable is a losing position over time.
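As promised above, here is a sketch of what those removable metadata signals look like at the byte level. The marker list is illustrative, not exhaustive, and a raw byte hit is only a heuristic, but these strings are exactly the kind of thing automated metadata checks key on.

```python
"""Inventory the removable signals in an image file.

A minimal sketch that greps raw bytes for well-known machine-readable
AI markers. The list is illustrative, not exhaustive, and byte hits
are only a heuristic; it also says nothing about pixel watermarks.
"""
import sys

AI_MARKERS = {
    b"digitalsourcetype/trainedAlgorithmicMedia": "IPTC DigitalSourceType (AI-generated)",
    b"c2pa": "C2PA manifest bytes (JUMBF label)",
    b"<?xpacket": "XMP packet (may carry AI fields)",
}

def scan(path: str) -> None:
    with open(path, "rb") as f:
        data = f.read()
    for needle, label in AI_MARKERS.items():
        print(f"{label:40} {'present' if needle in data else 'absent'}")

if __name__ == "__main__":
    scan(sys.argv[1])
```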
MetaStrip handles the metadata layer comprehensively — C2PA manifests, XMP AI fields, IPTC DigitalSourceType markers, and the broader EXIF/metadata footprint that accompanies AI-generated content. For creators who have legitimate reasons to remove provenance data from their own work — privacy, professional discretion, avoiding algorithmic demotion, or simply controlling what’s embedded in your own files — it’s the most thorough metadata removal you can do client-side without any file ever leaving your browser.
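For a sense of the mechanics (and only that: this is a toy sketch, not MetaStrip's implementation, which runs entirely in your browser), stripping amounts to rewriting the file while omitting the marker segments that carry metadata:

```python
"""The shape of metadata stripping, as a toy.

Not MetaStrip's implementation (which runs entirely in the browser);
just a sketch of the underlying operation for a baseline JPEG: copy
the file while omitting the marker segments that carry metadata.
Pixel-level watermarks, if any, are untouched by this.
"""
import struct

DROP = {0xE1, 0xEB}   # APP1 (EXIF/XMP) and APP11 (JUMBF/C2PA) segments

def strip_jpeg(src: str, dst: str) -> None:
    with open(src, "rb") as f:
        data = f.read()
    assert data[:2] == b"\xff\xd8", "not a JPEG"
    out = bytearray(b"\xff\xd8")
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:
            break                              # lost sync; stop rewriting
        marker = data[i + 1]
        if marker == 0xDA:                     # start of scan: copy the rest verbatim
            out += data[i:]
            break
        length = struct.unpack(">H", data[i + 2:i + 4])[0]
        if marker not in DROP:                 # keep all other segments as-is
            out += data[i:i + 2 + length]
        i += 2 + length
    with open(dst, "wb") as f:
        f.write(bytes(out))
```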
We don’t make ethical judgments about why you’re removing metadata. We provide the tool and trust you to use it responsibly. We do encourage transparency where it matters — particularly in journalism, academic work, and contexts where authenticity is a reasonable expectation. And we recommend understanding the regulatory landscape that applies to your work, especially with EU AI Act enforcement beginning in August 2026.
The regulatory clock is the most important variable. EU AI Act enforcement in August 2026 will reshape how AI tools handle disclosure for anyone serving the EU market. Other jurisdictions are likely to follow with their own frameworks.
C2PA 2.1’s introduction of redactable assertions and zero-knowledge identity proofs hints at where the standard is heading: provenance that can be verified without exposing identity, useful for sensitive journalism and whistleblowing. This is a meaningful evolution that addresses one of C2PA’s biggest limitations — the privacy implications of cryptographically signing every piece of content with identifiable information.
For creators, the practical takeaway hasn’t changed: understand what’s embedded in your files, make intentional decisions about what you share, and use tools that give you visibility and control over your own metadata. The choice of what to remove, what to keep, and what to disclose should be yours — informed, intentional, and consistent with the contexts where your work appears.
The detection systems are real and getting better. The regulatory framework is real and getting stricter. The tools to manage your own metadata are also better than they’ve ever been. The era of “AI metadata doesn’t really matter” is over.
Free for single files. No account, no upload, no tracking.
Open MetaStrip →