Week 49: Use of artificial intelligence in fraud attempts

12.12.2023 - The NCSC is observing an increase in the use of artificial intelligence (AI) in phishing and fraud attempts. Below, we look at three examples of how AI is already being used for this purpose.

AI-generated phishing templates on the darknet

Phishing emails can often be identified not only by the sender address and the suspicious link in the message, but also by their language and formatting. As the fraudsters usually do not speak German, they have to rely on translations and copied templates. In the actual execution, they often make mistakes, sometimes even mixing languages.

For this reason, fraudsters are now also increasingly turning to AI: with precisely defined language instructions, templates can be produced that are difficult to identify as fakes.

To counter such abuse, the best-known AI chatbot, ChatGPT, has built-in safeguards that are designed to prevent the creation of fraudulent templates, for example.

However, specialised chatbots are already available for purchase on the darknet. They can, for instance, be used to create genuine-looking and correctly phrased phishing emails and webpages.

Nonetheless, even perfectly phrased messages will still contain a fraudulent link that does not lead to a service of the apparent sender. Taking a quick look at the target of a link before clicking on it therefore remains an important security measure.
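
To illustrate this check, here is a minimal Python sketch that compares a link's hostname with the domain of the apparent sender. It is purely illustrative: the function name, the sender domain mybank.example and the test URLs are invented, and a production-grade check would also need the Public Suffix List and redirect handling.

    from urllib.parse import urlparse

    def link_matches_sender(url: str, sender_domain: str) -> bool:
        """Rough check: does the link's hostname belong to the sender's domain?"""
        hostname = (urlparse(url).hostname or "").lower().rstrip(".")
        sender_domain = sender_domain.lower()
        # Accept the domain itself or a true subdomain; a plain substring
        # test would be fooled by e.g. "mybank.example.attacker.net".
        return hostname == sender_domain or hostname.endswith("." + sender_domain)

    # A message apparently from "mybank.example" linking elsewhere fails the check:
    print(link_matches_sender("https://mybank.example.login-check.net/verify",
                              "mybank.example"))   # False
    print(link_matches_sender("https://www.mybank.example/login",
                              "mybank.example"))   # True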

Using AI to fake images and videos

Tools are now available which can be used to create image material that is hard to identify as fake. If the corresponding AI models are fed with images of real people (e.g. photos from social media), new images can be created that give the impression of genuine snapshots, even though no such photo was ever taken.

It is also possible to create videos of a person from only a small amount of source material. People of whom a large amount of video footage exists online, for example well-known public figures such as federal councillors, are commonly abused for this purpose.

The NCSC has already received reports in which this technique has been used:

  • In adverts for online investment fraud:
    Using the face and voice of a well-known public figure, the viewer is led to believe that a lot of money can be earned on an online platform with just a small initial stake. The fame of the public figure is used to gain the trust of the viewer. However, the money is not really invested, but simply funnelled straight into the fraudsters' pockets.

  • In giveaway scams:
    In a faked video, a well-known public figure explains that bitcoin payments to a specific wallet for charitable purposes will be repaid twofold. Of course, payments to this digital wallet go straight to the fraudsters; everything sent to it will be lost. Such videos are published on well-known platforms and social media channels. For more on this, read our week 17 review: Advertisement using a deepfake video for a giveaway scam

  • In sextortion:
    Using AI-generated naked images, the victim is blackmailed with the threat that the pictures will be published. It is very difficult to tell that only the victim's real face has been used and the rest was added by AI. In some cases, if suitable images are available, a familiar room or background can even be added.

Using AI to fake voices

Recordings of a target person's voice ("voice samples"), which can be obtained from phone calls for example, can be used by AI models to reproduce written or spoken content in a voice that is almost indistinguishable from the target's own.

Such voice content is used for shock calls, for instance, in which a police officer apparently calls the victim and explains that their son or daughter has been involved in an accident and that some kind of deposit must be paid. As proof and a way of exerting pressure, a fabricated recording is played in which the person, in a voice that the victim recognises, makes a dramatic plea for help.

Recommendations:

  • Do not click on links in emails or text messages just because the message asks you to do so.
  • Before clicking on any link, check the target domain to see whether the link matches the apparent sender.
  • Never enter passwords, codes or credit card details on a page that you have opened via a link in an email or text message.
  • Be sceptical about offers or supposed winnings that seem too good to be true.
  • If you receive strange or upsetting phone calls claiming that a family member has a problem, end the call immediately and contact the relative directly via another channel. If in doubt, call the police.
  • If you are blackmailed with compromising images, contact the police, even if you never took such photos. Cease all contact with the fraudsters and do not pay.
  • In general, be careful about posting publicly accessible photos and videos of yourself or other people online.

https://www.ncsc.admin.ch/content/ncsc/en/home/aktuell/im-fokus/2023/wochenrueckblick_49.html