Deep Fakes

Deep fake is the term given to persuasive-looking but false video and audio files. Made using cutting-edge and relatively accessible AI technology, they purport to show a real person doing or saying something they did not.

This can manifest itself in Hollywood films, where a green-screen special effect is used to put an actor into a scene that could not have been filmed in camera. I also remember the film Rogue One recreating Peter Cushing in his Star Wars Episode IV role of Grand Moff Tarkin, the commander of the Death Star. This was almost convincing, but knowing that Peter Cushing died in 1994 made it detectable.

The first time I came across this kind of technology was in the film Forrest Gump, when Tom Hanks' character was placed into various historical settings. These clips were quite crude by today's standards, but as they were grainy monochrome footage, it didn't really matter.

Some Real-World Examples

There is a smartphone app called FaceApp that lets you use AI to touch up your selfies. It allows you to switch backgrounds, add special effects and quite a bit more. As an example, I downloaded a free picture of a man and processed it through FaceApp.

The original photo is on the left; all I did was add a woodland background in the second, and in the third I changed his hairstyle to long hair. This is quite innocuous and a bit of fun, so no harm is done here, and I am being very clear about what I am doing.

This app can do a lot more, including adding makeup, changing facial features, ageing you or making you look younger, and applying filters and some basic lighting effects. There is also another, more extreme makeover effect, shown below:

Original Photo by Albert Dera on Unsplash

This is the gender-swap function. It is the only effect I applied here, and I have to admit it is quite convincing.

Disclaimer: If the owner of this photo finds this use to be offensive, or the original subject objects, I will take it down – just send me a message through our contact form.

Take a look at this video of some notable actors in a discussion forum, which has been faked using this technology.

Video credit: Collider

The thing here is that a lot of the actors' voice characteristics and mannerisms are replicated, and if you didn't know it had been faked, you might well be deceived.

There have been several more insidious applications of this technology. Some have placed politicians in settings, saying words that they never said. Here is an example:

Video credit: BuzzFeed Video

In October 2018, Christie's, the auction house, sold an oil painting generated by an AI for $432,500, nearly 45 times its highest estimate. The painting was created by a group of French students using a dataset of approximately 15,000 portraits from the 14th to the 20th century.

Another example is a recent case in which the voice of a company CEO was faked, resulting in a fraudulent payment to the tune of $243,000.

Other examples are where celebrities have been faked into pornographic scenes. A newer threat is revenge porn, where someone pastes a picture of an innocent victim into a pornographic scene or video. It should be noted that around 97% of all deep fakes online are pornography. The real victims here are women.

I think you can tell where I am going with this!

We are now living in a world where the camera actually does lie, and is quite convincing about it. It is not beyond belief that someone could be put into a compromising situation and then blackmailed with the result. Fake news can cause social unrest and can change the outcome of an election, as many have argued was the case in the 2016 US presidential election. The cyber threat resulting from this technology is frightening.

These deepfake videos are becoming more prevalent on social media and are becoming big business, especially in pornography.

Detecting Deep Fakes

The first thing you can do is engage your brain! Common sense techniques similar to those used to detect phishing emails can be applied here:

  • If it seems unbelievable that someone would actually say that in a public video, then you should challenge its validity
  • Some artefacts in the videos may seem unnatural, e.g. eyes not blinking in unison, lips not moving naturally, a lack of facial expressions, or the head held in a fixed position (a minimal blink-detection sketch follows this list)
  • There are also many of the same tells you would look for in a badly photoshopped picture, for example a reflection in a mirror that is not consistent with what you see in front of it.
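To show what automated checking of the blink artefact might look like, here is a minimal sketch using the well-known eye aspect ratio (EAR) heuristic. This is purely illustrative, not a tool any platform actually uses; it assumes OpenCV, dlib and dlib's standard 68-point landmark model file are available, and the threshold values are just plausible starting points.

```python
# Minimal sketch: flag a video with an unnaturally low blink rate using the
# eye aspect ratio (EAR) heuristic. Assumes a single face per video and that
# OpenCV, dlib and dlib's 68-point landmark model file are available.
# EAR_THRESHOLD and MIN_BLINKS_PER_MINUTE are illustrative values only.
import cv2
import dlib
from scipy.spatial import distance

EAR_THRESHOLD = 0.21        # below this, the eye is treated as closed (assumed value)
MIN_BLINKS_PER_MINUTE = 5   # humans typically blink ~15-20 times/min (assumed floor)

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def eye_aspect_ratio(pts):
    # EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|); drops sharply when the eye closes
    a = distance.euclidean(pts[1], pts[5])
    b = distance.euclidean(pts[2], pts[4])
    c = distance.euclidean(pts[0], pts[3])
    return (a + b) / (2.0 * c)

def blink_rate(video_path):
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25  # fall back to 25 fps if unknown
    blinks, closed, frames = 0, False, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frames += 1
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for face in detector(gray):
            shape = predictor(gray, face)
            # dlib landmarks 36-41 are the left eye, 42-47 the right eye
            left = [(shape.part(i).x, shape.part(i).y) for i in range(36, 42)]
            right = [(shape.part(i).x, shape.part(i).y) for i in range(42, 48)]
            ear = (eye_aspect_ratio(left) + eye_aspect_ratio(right)) / 2.0
            if ear < EAR_THRESHOLD:
                closed = True
            elif closed:        # eye re-opened: count one completed blink
                blinks += 1
                closed = False
    cap.release()
    minutes = frames / fps / 60
    return blinks / minutes if minutes else 0

# Usage: an abnormally low blink rate is one (weak) deepfake signal
if blink_rate("suspect_clip.mp4") < MIN_BLINKS_PER_MINUTE:
    print("Warning: unnaturally low blink rate - treat this video with suspicion")
```

Note that newer deepfakes do blink, so a check like this is only one signal among many, never proof either way.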

There is also a lot of software being developed to analyse videos and faked voices, which companies and social media platforms are using to detect deep fakes. For the average consumer, however, the common sense methods above are often the only techniques available.

What is being done about this issue?

Despite the difficulty of detecting deep fakes, social media platforms (e.g. Facebook, Twitter) are tackling the issue.

In addition, Microsoft, Facebook and a number of universities joined forces in 2019 to sponsor a contest promoting research and development to combat deepfakes, or videos altered through artificial intelligence (AI) to mislead viewers.

Conclusion

The use of AI to enhance our entertainment in films is one thing. To use the same technology to change an election or to blackmail someone is another. Enhancing your selfies with a filter (e.g. FaceApp, Instagram) is relatively harmless, but I have demonstrated that it is easy to doctor these photos to a degree that could cause harm.

We are aware of how photos are modified to give a cleaner image in a fashion magazine, but this technology is now being used by cyber criminals to commit fraud, phishing attacks, sextortion and blackmail.

The general advice I can give is to apply the same common sense rules you would apply to phishing attacks and other social engineering techniques. If it looks unbelievable, then don't act on it, and certainly don't retweet it.

From a company perspective, and certainly in the case of faked voice messages from a senior official in your organisation, or from a client, asking you to transfer funds or perform some other financial transaction, you need to enable business controls to weed out these threats. These could include:

  • Enforce a review of all emailed transaction instructions: verify signatures against a validated sample, and perform a call-back, using a number from a validated call-back list, to confirm the sender of the instruction (a minimal sketch of this control follows the list)
  • Train front-line staff to detect deep fakes, and reinforce the training with periodic tests similar to the usual phishing email tests.
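To make the first control concrete, here is a minimal sketch of how the dual check might be enforced in code. Every name in it (the lookup tables, the Instruction class, the confirmation prompt) is a hypothetical stand-in for whatever your organisation's payment system actually provides.

```python
# Minimal sketch of the dual control described above. All names here are
# hypothetical stand-ins for a real payment system's records and API.
from dataclasses import dataclass

# Both reference sources are maintained out-of-band and are never taken from
# the instruction itself -- a fraudster controls anything in their own message.
SIGNATURE_SAMPLES = {"ceo@example.com": "<validated signature sample>"}
VALIDATED_CALLBACKS = {"ceo@example.com": "+44 20 7946 0000"}

@dataclass
class Instruction:
    sender: str
    signature: str
    amount: float

def signature_matches(inst: Instruction) -> bool:
    """Compare the signature on the instruction against the held sample."""
    sample = SIGNATURE_SAMPLES.get(inst.sender)
    return sample is not None and inst.signature == sample

def callback_confirms(inst: Instruction) -> bool:
    """Operator rings the pre-validated number and confirms the instruction."""
    number = VALIDATED_CALLBACKS.get(inst.sender)
    if number is None:
        return False  # no validated number on file: reject outright
    answer = input(f"Call {number} to confirm payment of {inst.amount:.2f}. Confirmed? [y/n] ")
    return answer.strip().lower() == "y"

def approve_payment(inst: Instruction) -> bool:
    # Both controls must pass; a convincingly faked voice alone is not enough.
    return signature_matches(inst) and callback_confirms(inst)
```

The key design point is that the call-back number comes from a list validated in advance, not from the suspicious message, so a faked voicemail cannot supply its own "confirmation" channel.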

This technology isn't going away any time soon. Like a lot of things we have to contend with in the social engineering arena, we have to be vigilant and, in most cases, apply common sense.

If it looks like a duck and quacks like a duck, is it really a duck, or has it been faked to look and sound like one? You must decide!


Headline image provided by Noé Calderón from Pixabay

Additional media credit to Albert Dera on Unsplash for the photos, and to Collider and BuzzFeed Video for the deepfake video examples.

If anyone would prefer we didn't use these examples, please send us a message through the contact form and they will be removed.
