How the Tools Used for Celebrity Deepfakes Can Steal Your Identity, Too

There’s a longstanding piece of advice that says: “You can’t believe anything you hear, and only half of what you see.”  There’s always been an element of truth to that, even in the days before modern high-powered computing.  Now, with the rise of digital media, it has become startlingly — and worryingly — easy to falsify audio and video. 

“Deepfake” is the popular term for a realistic re-creation of someone’s face or voice, usually (so far) a celebrity or politician.  They’ve mostly been created as viral videos or proofs of concept, but it’s unlikely to stop there.  Security professionals fear that criminals will take the same technology currently used to create a celebrity deepfake and turn it against ordinary people for identity theft. 

Deepfakes in the Commercial World

Even if you spend little time on social media, or the sketchy websites where fake videos have begun to proliferate, you’ve probably seen deepfakes that were created by some of the world’s leading digital-imaging specialists.  One early example was this 2014 chocolate ad, which used live models and digital trickery to recreate the late Audrey Hepburn in all her youthful 1950s beauty. 

More recently the Star Wars movie “Rogue One” used similar technology to bring back the late Peter Cushing and to de-age the late Carrie Fisher, as it’s set before the original 1977 film.  Director Martin Scorsese also used digital de-aging in his film “The Irishman,” allowing established stars such as Robert De Niro and Joe Pesci to play both younger and older versions of their characters. 

Fans didn’t necessarily like the end result, due to the “uncanny valley” effect, which was just noticeable enough to be jarring in the context of a big-screen film in high resolution.  One tech-savvy fan was miffed enough to take things into his own hands, and went viral when he showed that he could de-age De Niro better than Scorsese had.  It’s a bravura piece of fan art, but it also demonstrates exactly why deepfakes are keeping security professionals awake at night. 

Deepfakes as a Service

It’s commonplace to point out that today’s smartwatch or fitness tracker contains a more powerful computer than the ones NASA used a generation ago to put astronauts on the moon.  The processing power available to ordinary people with run-of-the-mill computers would have been unimaginable not so long ago, and software has kept pace.  That’s why deepfakes could become more prevalent than you might think. 

Deepfakes begin with a form of artificial intelligence (AI) called “machine learning.”  Programmers train their algorithms to recognize realistic patterns of movement (like facial expressions) by feeding them hundreds of hours of video.  With time and repetition the software develops rules for re-creating those patterns, then applies them to images of a specific real person. 

It’s a process that requires an investment of time and money, and it’s faster on high-powered equipment, but — this is important — once that software model is created, it can be used on ordinary, less-powerful computers.  Off-the-shelf software can run a finished model, and there’s a niche within the criminal underworld that creates these models for resale to enterprising but less sophisticated scammers. 
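The paragraph above describes the core economics: training is expensive, but the finished model is cheap to use.  A deliberately toy sketch (a one-variable linear model in plain Python, standing in for a deep neural network that would take days of GPU time) illustrates the point that once the learning step is done, the model is just portable data:

```python
import pickle

# Toy stand-in for the expensive "training" phase: learn a simple
# linear rule (y = w*x + b) from examples via gradient descent.
# Real deepfake models are deep neural networks trained on hours
# of video, but the economics are the same.
def train(examples, epochs=200, lr=0.1):
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in examples:
            err = (w * x + b) - y
            w -= lr * err * x
            b -= lr * err
    return {"w": w, "b": b}

# The costly step, done once (ideally on powerful hardware).
model = train([(0, 1), (1, 3), (2, 5)])  # learns y = 2x + 1

# The finished model is just data: it can be saved, sold and
# reused on any modest machine, which is exactly the property
# that model resellers exploit.
blob = pickle.dumps(model)

def apply_model(blob, x):
    m = pickle.loads(blob)
    return m["w"] * x + m["b"]
```

Note the asymmetry: `train` loops over every example many times, while `apply_model` is a single cheap calculation any laptop can do.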

Where Is Deepfake Technology Headed?

What does that mean for you?  In practical terms, it means that if there are photos of you available anywhere online — your social media accounts, your company’s website, the cloud backups of all your friends’ and family’s photos — they can be scraped or hacked and used to create images or video that are every bit as believable as the celebrity deepfakes created by TV satirists and YouTube hobbyists. 

One sadly predictable outcome is that pornographic deepfakes have become a hazard of online life for many women.  Women in public life, like high-profile politician Alexandria Ocasio-Cortez, are especially popular targets, but it can happen to anyone, often in the form of “revenge porn” created by bitter exes. 

There are less obvious ways that this technology can be used to disrupt your life.  Suppose you’re locked in a custody battle, and a deepfake falsely portrays you engaging in illegal activity or in verbal abuse of your kids?  Similarly, an unscrupulous business rival might concoct a video that damages your reputation, or a co-worker might use one to undercut you in the race for a big promotion.  The possibilities are endless. 

Deepfakes and Identity Theft

While politicians and celebrities tend to be the ones whose images and voices are impersonated (yes, audio deepfakes are a thing as well), deepfakes hold tremendous potential for conventional identity theft and fraud, too. 

The prospect has law enforcement and security specialists spooked.  In 2020, for example, the Carnegie Endowment for International Peace released a research paper detailing 10 ways deepfakes could be used to target everyone from individuals to entire financial markets.  Believably faked voices or face-swapped videos can be used to impersonate C-suite executives or key employees, for example, in order to “approve” fraudulent payments and transfers or gain access to privileged information. 

The same technologies could be used to manipulate stocks, cause a run on a bank or even potentially crash the economy temporarily by causing a panic sell-off in the major markets.  For most of us that sort of thing is “above our pay grade,” but the technology is rapidly becoming cheap enough to be used against anyone. 

Deepfakes and You

Suppose a friend or family member reached out to you in a video call and asked for an emergency loan.  You’d probably help out if you could, wouldn’t you?  Similarly, if you were in a Zoom meeting with your boss and she said you needed to click a link in the message she was sending you, you probably would, right? 

The phone numbers, emails and social media accounts we use to initiate that kind of contact can be hacked, and scammers already have ways to spoof an email address or phone number if they can’t get access to the real thing.  A deepfaked “you” could be used similarly to defraud your friends and family or the company you work for. 

The video doesn’t need to be production quality or even especially good.  We’re not viewing it on a movie screen, after all, but on our devices in videoconferencing or video chat apps.  The telltale signs that make faked videos easy to spot in high-res — lack of detail, video that’s out of sync with the audio, blocky pixelation — are all things we’ve learned to expect in chat.

Deepfakes and Next-Level Identity Theft

Your voice or image could also be used to bring an existing threat, synthetic identity theft, to a new level.  In a synthetic identity scheme, criminals combine real personal data (a stolen Social Security number, say) with fabricated details to build a convincing fake persona.  With deepfakes, that fake persona can now look or sound like anyone on calls and video chat. 

It’s a whole new frontier.  Many common social media scams and romance scams start with a falsified account (a “sock puppet”).  Deepfake software makes them more convincing, giving scammers the ability to add videos to their social media timeline or (in the not too distant future) the ability to video chat with their targets in real time, with a believable voice and face that can’t easily be traced. 

Another alarming aspect to all of this is that many companies, especially financial institutions, have begun turning to sophisticated biometric authentication (facial recognition, voice recognition) to fight existing forms of fraud and identity theft.  If scammers can create a realistic synthetic identity, they can use it to open fraudulent accounts. If they can realistically impersonate you, they can beat a facial-recognition or voice-recognition algorithm to take over your account. 

What You Can Do About Deepfakes

Meeting the threat of deepfakes is a major challenge, and most of it is out of your hands.  A lot of the heavy work will need to be done by security professionals, financial institutions and government agencies (especially law enforcement).  Ironically, the most powerful tool these authorities can wield against deepfaking criminals is artificial intelligence, the same tool used to create them.  Fakes can fool the human eye or ear relatively easily, but AI is much harder to beat.

That doesn’t mean you have to make it easy for scammers. You could start by auditing your social media accounts periodically, to make sure your friends and followers are all people you know (or at a minimum, actual people).  Set photo posts and video posts to “private” or “friends only,” wherever possible, to make it harder for scammers to gather up the raw materials for their fakery. 

Deepfakes are most dangerous when they’re combined with existing forms of identity theft, and — because that’s an established and well-known threat — there’s a lot you can do about that.  Some standard precautions include: 

  • Checking your account statements regularly for any irregularities.
  • Requesting your credit report frequently from the “Big Three” reporting agencies.  They’ll each give you a free one every year, and additional reports aren’t very expensive, so take advantage.  Any unexplained change in your credit, or activity you didn’t initiate, constitutes a big red flag. 
  • Never clicking a link in an unsolicited email or text message, and never downloading an attachment from an unknown source.  Those are often phishing attacks.
  • Staying up to date on common scams, so you’re less likely to fall for one.  The FTC’s consumer-facing Avoiding and Reporting Scams page is a great resource (and so is this humble blog).
  • Checking your phone number, email addresses and passwords regularly to make sure they haven’t been compromised in a data breach.  The best consumer-facing resource for that is a site called Have I Been Pwned?
  • Using strong passwords and not using the same password across multiple sites.  Use a password manager if remembering your passwords is a struggle. 
  • Signing up for Spokeo Protect, our identity protection service.  If your key pieces of personal information are offered up for sale on the sketchy marketplaces of the dark web, we’ll let you know. 
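On the strong-password point above, a password manager will generate passwords for you, but Python’s standard-library `secrets` module works in a pinch (a minimal sketch; the length and character set here are arbitrary choices):

```python
import secrets
import string

def make_password(length: int = 16) -> str:
    """Generate a random password from letters, digits and punctuation
    using the cryptographically secure `secrets` generator (never use
    the `random` module for this)."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))
```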

…or, to wrap all of that up in one simple sentence: know how to check for identity theft, and do it regularly. 
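The Have I Been Pwned check mentioned in the list can even be scripted.  The site’s public Pwned Passwords “range” API is built on k-anonymity: you send only the first five characters of your password’s SHA-1 hash and check the returned candidates locally, so the password itself never leaves your machine.  A minimal sketch:

```python
import hashlib

def sha1_prefix_suffix(password: str):
    """Split the uppercase SHA-1 hex digest into the 5-char prefix
    (the only part ever sent over the network) and the remainder."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def match_count(range_body: str, suffix: str) -> int:
    """Parse the API's 'SUFFIX:COUNT' lines and return how many times
    the password appeared in known breaches (0 = not found)."""
    for line in range_body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate.strip() == suffix:
            return int(count)
    return 0

def pwned_count(password: str) -> int:
    """Query the range endpoint and check the suffixes locally.
    (Requires network access.)"""
    import urllib.request
    prefix, suffix = sha1_prefix_suffix(password)
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as resp:
        return match_count(resp.read().decode("utf-8"), suffix)
```

Any non-zero count means the password has appeared in a known breach and should be retired immediately.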

You can think of deepfakes (and identity theft in general) as being like forest fires:  Preventing them entirely is hard, but taking away the fuel they rely on can minimize the risk. 

References:

  • YouTube – CGI Audrey Hepburn for a Dove Chocolate Commercial
  • Vintage News Daily – Behind the Scenes From the Making of 2014 Dove Chocolate Commercial With Audrey Hepburn
  • IMDb – Rogue One:  A Star Wars Story
  • IMDb – The Irishman
  • Creative Bloq – Spectacular “The Irishman” Deepfake Blows Away the Original
  • Panda Security – Deepfake Fraud:  Security Threats Behind Artificial Faces
  • MIT Technology Review – Deepfake Porn Is Ruining Women’s Lives.  Now the Law May Finally Ban It. 
  • The Lily – A Fake Nude Photo Was Supposed To Silence Alexandria Ocasio-Cortez.  She Turned Up the Volume Instead.
  • Wired – This AI Makes Robert De Niro Perform Lines in Perfect German
  • Carnegie Endowment for International Peace – Deepfakes and Synthetic Media in the Financial System:  Assessing Threat Scenarios
  • CSO Online – Deepfakes and Synthetic Identity:  More Reasons To Worry About Identity Theft
  • U.S. Federal Trade Commission – Avoiding and Reporting Scams


This post first appeared on Spokeo People Search Blog | Famous People News Of The Day; please read the original post here.
