Google’s Gemini Adds Video Verification and “GenTabs” — New Tools to Fight Deepfakes and Tame Browser Chaos
Late December brought two distinct but related moves from Google: a practical expansion of Gemini’s content‑verification toolkit to videos, and an experimental browser concept — Disco — whose flagship feature, GenTabs, aims to turn tab overload into usable micro‑apps. Both are design responses to problems that technology promised to solve and often exacerbated: misinformation at scale, and the cognitive friction of modern web work.
Video verification: what changed (and what it actually does)
On December 18, 2025, Google extended Gemini’s verification features so users can upload short videos and ask whether they were created or edited with Google’s AI tools. Gemini inspects both the visual frames and the audio track for Google’s SynthID watermark and returns segment‑level indicators (for example, “SynthID detected in audio between 10–20s; no SynthID in visuals”). The upload limits are modest: files up to 100 MB and 90 seconds long. (Google blog)
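The 100 MB / 90-second limits are the kind of thing worth checking client-side before a round trip. A minimal sketch, using only the limits quoted above; the function name and interface are illustrative and not part of any Gemini SDK:

```python
# Limits quoted in Google's December 2025 announcement.
MAX_BYTES = 100 * 1024 * 1024   # 100 MB
MAX_SECONDS = 90                # 90-second clip ceiling

def check_upload_limits(size_bytes: int, duration_seconds: float) -> list[str]:
    """Return reasons a clip would be rejected (empty list if it fits).

    Duration is supplied by the caller (e.g. read with ffprobe) so the
    helper stays dependency-free.
    """
    problems = []
    if size_bytes > MAX_BYTES:
        problems.append(f"file is {size_bytes / 1e6:.1f} MB; limit is 100 MB")
    if duration_seconds > MAX_SECONDS:
        problems.append(f"clip is {duration_seconds:.0f}s; limit is 90s")
    return problems
```

Running the check before upload gives users an immediate, specific error instead of a server-side rejection after the transfer.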
Why the limitation matters
The verification is useful but partial. If a video is edited with a third‑party model that doesn’t apply SynthID (or any interoperable provenance standard), Gemini won’t flag it. That means the tool reduces a specific class of uncertainty — “Was this made with Google AI?” — but doesn’t solve the broader provenance problem across the internet’s fragmented AI ecosystem. For investigative workflows, it’s an accelerant; for blanket trust, it’s not a substitute for cross‑platform standards and independent forensics. (Android Authority)
GenTabs and Disco: reconceiving what a browser can do
Earlier in December (announcement coverage appeared December 11, 2025), Google Labs introduced Disco, an experimental Chromium‑based browser that leans heavily on Gemini 3. Its headline feature, GenTabs, watches the set of open pages and your chat context, then auto‑generates small, interactive web apps tailored to the task you’re working on — whether that’s planning a trip, synthesizing research, or building a meal plan. You can refine those apps with plain language prompts; generative outputs are linked back to original sources. Disco is currently an experiment available to a waitlist and early testers. (TechCrunch)
Practical implications — from user workflows to platform policy
- For investigators and journalists: Gemini’s video verification is a practical triage tool. Use it when your source material may have been touched by Google tools; corroborate with independent forensics for broader claims. (Google blog)
- For everyday users: GenTabs could meaningfully reduce context‑switching. If Disco nails the UX, turning research piles into a coherent app is a genuine productivity win — but watch for telemetry defaults: the experiment logs browsing and chat activity by design. (Android Central)
- For engineers and product teams: SynthID’s efficacy is proportional to ecosystem uptake. If you’re building generation pipelines, add provenance hooks sooner rather than later — interoperability beats unilateral solutions. (Android Authority)
- For policy makers and platform operators: This moment highlights the need for standards (C2PA, interoperable watermarking, uniform disclosure). Tools that only see one vendor’s signal are helpful, but insufficient at scale. (Google blog)
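The "provenance hooks" advice above can be made concrete with a toy sketch. This is emphatically not C2PA or SynthID (real standards use signed manifests and steganographic watermarks); it only shows where such a hook sits in a generation pipeline, and all names are hypothetical:

```python
import hashlib
from datetime import datetime, timezone

def make_provenance_record(content: bytes, model_name: str) -> dict:
    # Toy record: a content hash, the generator name, and a UTC timestamp.
    # A real implementation would emit a signed C2PA manifest or embed a
    # watermark rather than return loose metadata.
    return {
        "sha256": hashlib.sha256(content).hexdigest(),
        "generator": model_name,
        "created_at": datetime.now(timezone.utc).isoformat(),
    }

def generate_with_provenance(generate, prompt: str, model_name: str):
    """Wrap any generate(prompt) -> bytes callable so every output
    leaves the pipeline with a provenance record attached."""
    content = generate(prompt)
    return content, make_provenance_record(content, model_name)
```

The point of the wrapper shape is that provenance is attached at the moment of generation, not bolted on later by whichever downstream system happens to remember.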
A pragmatic judgement
Both moves are iterative and realistic. Video verification is narrow but operational — a real capability that reduces a real kind of risk now, not sometime later. Disco and GenTabs are exploratory: they acknowledge that the browser is no longer just a viewer; it can be the low‑friction surface where apps are generated from intent and context. That shift is promising, but it raises familiar trade‑offs between convenience, control, and privacy. (Google blog)
In short: Google’s step on the verification side improves the signal for one part of the provenance problem, and GenTabs shows one plausible future of browsing in which AI reduces cognitive load. Both are tools worth paying attention to, not as final answers but as progress on the hard work of making AI both useful and accountable.

