I think that in the future, the main thing distinguishing a real video from a fake will be the reputation of the publisher, same as for photos and quotes now.
It's as trivial to sign fake content as it is to sign real content, and people are really, really, really bad at anything close to due diligence for signature verification and revocation (which is an objectively hard problem).
Furthermore, there's the catering to the lowest common denominator. Whatever signing capability is available on a random cheap, widespread third-world smartphone used for TikTok videos will be treated as good enough to assume the content is genuine; so if a determined attacker wanting to create a misinformation campaign with fake videos can circumvent the security of that signing process (e.g. obtain a bunch of valid keys indistinguishable from those phones'), then fake videos will carry signatures just as good as real videos'.
We are talking about the future here. Easy-to-use certificate authorities would be available if it came to this. It would be part of the software ecosystem, part of mobile operating systems for example. Your cert would be registered with Google and, for example, a camera app would use an OS-level API to sign an image when it is taken. Anyone who wants to verify that it's your image could check it against your public key registered with Google or some other well-known, secure, trusted service.
If a random poor third world person can easily obtain a valid certificate, then any attacker can also do so - identity theft and wholly fake identities are a thing despite all our best efforts, and it's naive to assume that this will be magically solved in the future. And if it is not easy for a random poor third world person to obtain one, then they won't, all their real, valid user-generated content (e.g. real cell phone videos from conflict zones) will go unsigned, and society will keep treating unsigned videos as valid.
Furthermore, any attacker with the desire and resources to create a disinformation campaign can simply recruit a new real person to sign each deepfake campaign, just as criminals now hire money-laundering mules (that expense would be smaller than the actual effort of creating the media). And any intelligence agency using deepfakes for propaganda can literally create new valid identities by fiat (just issue real passports/birth certificates/whatever for nonexistent people) that are indistinguishable from real people as far as Google or anyone else abroad can verify.
In essence, what you describe would work if and only if we had a global, trusted database of all people worldwide that doesn't allow for fake entries. We're very far from that, and the obstacles aren't technological - this is definitely not something Google can solve.
I get the feeling that in the future it will be necessary to sign videos with cryptographic keys derived from scans of your iris or other secure biomarkers if you want anyone to know they're actually real. Mannerisms are much too imitable ;)
What would prevent me from signing a fake video with a cryptographic key derived from a scan of your biomarker, once I obtain that scan somehow? Biomarkers can't be changed or revoked, so once they leak, you either have to replace the whole system or stop trusting your content forever.
(The details of obtaining the marker aren't relevant; I think you'll agree that one way or another, leaks, cracked phones, or some other avenue will eventually expose biomarkers for at least some of the population.)