YouTube expanded its likeness detection technology to all creators in its Partner Program in September 2025. Creators upload a face image, and the system flags AI-generated content using their likeness without permission. That’s helpful for catching deepfakes after they’re posted.

India’s parliamentary standing committee on communications and information technology recommended exploring a licensing regime for AI content creators and compulsory labeling of AI-generated videos and content. These recommendations aim to curb misinformation, but their implications extend beyond that.

Both responses focus on content that has already been created and published. It's a start, but it's reactive by design. To make a real difference, synthetic identities need to be caught before they do damage, not after.

You can spot a bad deepfake. You don’t need any kind of special training. The eyes look dead, the mouth moves incorrectly, there are mysterious extra limbs, or the audio lags behind the video by a fraction of a second. Detection tools have gotten good at finding these glitches, and most people are decent at it, too.

Bots are similar — they behave in ways that don’t look human. They post too often, respond too quickly, use obviously unnatural language, or connect with networks in ways that real people don’t.
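Those bot tells are simple enough to sketch as a toy filter. Everything here, the thresholds, the field names, is illustrative, not drawn from any real platform's detection logic:

```python
from statistics import mean

def looks_like_bot(posts_per_hour, reply_latencies_s,
                   min_latency=2.0, max_rate=30):
    """Toy heuristic: flag accounts that post too often or reply
    faster than a human plausibly could. Thresholds are made up."""
    too_fast = mean(reply_latencies_s) < min_latency  # machine-speed replies
    too_busy = posts_per_hour > max_rate              # inhuman posting volume
    return too_fast or too_busy

# A human-ish account: 4 posts/hour, replies in 45-90 seconds
print(looks_like_bot(4, [45.0, 60.0, 90.0]))   # False
# A bot-ish account: 120 posts/hour, sub-second replies
print(looks_like_bot(120, [0.5, 0.4, 0.6]))    # True
```

Crude heuristics like these work on bots precisely because bots fail them in obvious ways. As the next sections argue, synthetic identities don't.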

Synthetic identities work differently. They're constructed from legitimate data points: real addresses, real employment histories, real social connections, mixed and matched into a convincing persona. Each data point checks out. The combination is fabricated.

If that sounds a little creepy, good. It should.

Fraud detection systems look for known “bad” patterns. Banned information. Previously flagged credentials. Velocity checks. And that works well when attackers repeatedly reuse the same fraudulent information. However, synthetic identities are generally not reused. Each one is built from a massive pool of personal data circulating online and in breach databases. By the time you’ve flagged one, the attacker has already moved on to the next.
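To see why that pattern-matching approach fails here, consider a minimal sketch of rules-based screening: a banned list plus a velocity check. The identifiers, limits, and field names below are illustrative assumptions, not any vendor's actual logic:

```python
from collections import defaultdict

# Previously flagged credentials (illustrative).
BANNED_SSNS = {"000-00-0000"}
applications_per_ssn = defaultdict(int)

def passes_rules(ssn, velocity_limit=3):
    """Rules-based screening: reject known-bad or over-reused credentials."""
    applications_per_ssn[ssn] += 1
    if ssn in BANNED_SSNS:
        return False   # matches a known-bad signature
    if applications_per_ssn[ssn] > velocity_limit:
        return False   # same credential reused too many times
    return True

# A synthetic identity built from fresh breach data is, by construction,
# neither banned nor reused -- so it sails through both checks.
print(passes_rules("123-45-6789"))  # True
```

Both rules only fire on repetition. An attacker who mints a fresh identity per attempt never trips either one, which is exactly the gap synthetic identities exploit.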

Fake an identity document? Doable. Fake credentials? Sure. Fake a whole social network to make the identity look legit? Attackers do it all the time. But faking consistent human behavior over time? Okay, now that’s hard.

Real humans have patterns. How they type. When they’re active. Which systems they access and in what order. How they react when something unexpected happens. There’s a rhythm to how people work. (You might call it…natural?)

Synthetic identities — even the good ones — have gaps. The behavior feels off. Access patterns don’t match the role. Activity spikes at weird hours. When challenged with a routine verification, the response feels scripted. Still, it’s not as obvious as a bad deepfake, and it’s really, really easy to fall for it.

AI models trained on human behavior can catch these gaps. Not by matching against a list of known bad signatures, but by noticing when something just doesn’t look like a real person.

When proactive security professionals design security controls, they consider them from two perspectives: how customers will use them and how bad actors might misuse them. That adversary's-eye view should shape the design from the start.

Identity verification needs the same treatment. Stop asking only whether an identity has valid credentials; start asking whether it behaves like the person it claims to be.

That’s a sweeping statement, I know. So, what does that look like in practice?

Build baselines. Model how legitimate users interact with systems over time, broken down by role, function, geography, etc.

Threat detection should look for deviation, not just known threats. Behavior that doesn't fit human patterns should raise flags even when it matches no known attack signature.

Consider intent. What would a legitimate user be trying to accomplish? Does this activity align with that, or does it suggest something else?

Critically, security teams and their models must continue to learn. Static rules go stale. Behavioral models can evolve as attackers change tactics.
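The steps above can be sketched in a few lines: build a per-role baseline, score how far new activity deviates from it, and fold normal activity back in so the baseline keeps learning. The role, the feature (daily logins), and the threshold are all illustrative assumptions:

```python
from statistics import mean, stdev

class RoleBaseline:
    """Toy behavioral baseline for one role (e.g. logins per day)."""

    def __init__(self, history):
        self.history = list(history)

    def z_score(self, value):
        # How many standard deviations from this role's norm?
        mu, sigma = mean(self.history), stdev(self.history)
        return abs(value - mu) / sigma if sigma else 0.0

    def flag(self, value, threshold=3.0):
        """Flag behavior far outside the role's norm, whether or not
        it matches a known attack signature."""
        suspicious = self.z_score(value) > threshold
        if not suspicious:
            self.history.append(value)  # keep learning: baseline evolves
        return suspicious

# Typical daily login counts observed for one role (illustrative).
accountants = RoleBaseline([8, 9, 10, 9, 8, 11, 10])
print(accountants.flag(9))    # False: fits the pattern, absorbed into baseline
print(accountants.flag(140))  # True: an activity spike no human in this role shows
```

A real deployment would track many features at once (typing cadence, access order, active hours) and use a proper anomaly model rather than a z-score, but the shape is the same: model normal, flag deviation, keep updating.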

Most organizations still rely on credential-based authentication and rules-based fraud detection. Check the username and password. Verify the identity document isn’t on a banned list. Call it secure and move on.

That does not hold up against synthetic identities built to pass exactly those checks.

Moving to behavior-based detection takes investment. We're talking about fundamentally rethinking how identity verification works and accepting that credentials alone don't prove someone is who they claim to be. We need to assess how they behave over time. That's more work up front, but this is one of those situations where an ounce of prevention is worth many, many pounds of cure.

Pick the systems where a synthetic identity would hurt you most — your highest-risk access points. Figure out what normal user behavior looks like on those systems by role, by function, by location. Then flag the deviations.

Will you catch everything? No. Attackers who know what you’re looking for will adapt. But you’ll catch a lot more than you would with credential-checking alone. And every synthetic identity you catch makes the next one more expensive to create. Joke’s on the threat actors.

I'm aware of the paradox here: the same AI that creates synthetic identities can be used to detect them. Large language models are great at generating fake personas. They're also good at analyzing behavioral patterns and spotting when something looks off.

Both sides have access to these tools. For security professionals, it’s fighting fire with, I guess, smarter fire. Better-planned fire. Better-executed fire. You must get there first.