
How to Defend Yourself Against Fake News and Disinformation

It’s been a few years since deepfakes first appeared on the scene. Today the technology is being abused in harmful ways, such as spreading misleading information on social media; more frivolously, it has also been used to insert Nicolas Cage into some really bad movies he never appeared in. Some of you will have been following deepfakes in the news for a while, since they first came to light back in 2017. As we discussed in our previous essay, deepfakes are a cybersecurity threat for 2020. When was the last time you brought the subject up?

The Goalposts Have Moved

Previously, constructing a credible deepfake required hours of source footage showing the target’s face, which meant that only prominent people such as celebrities and politicians made viable targets. Thanks to recent advances in machine learning, a fake can now be constructed from just a photo of the target and five seconds of their speech. People post photographs and videos of themselves on social media all the time these days, and that may be all an attacker needs for a deepfake. Sound ominous? It is. The goalposts have moved: anyone with a social media presence is at risk of being impersonated over the phone, and even in video calls. Let’s take a look at how these attacks operate and what you can do to protect your business and yourself.


Calling a Friend

Voice deepfakes are not a new concept. A few years ago, Adobe demonstrated a piece of software named VoCo that, after listening to roughly 20 minutes of someone’s speech, could convincingly duplicate their voice. Although it was intended for audio-editing specialists, the product appears to have been shelved owing to ethical and security concerns. In more recent years, other companies have picked up where Adobe left off: commercial offerings such as Lyrebird and Descript now imitate or even improve on the concept, and the open-source “Real-Time Voice Cloning” project can create convincing voice snippets from mere seconds of a person’s speech.
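To get a feel for how little audio such systems need, it helps to look at the speaker-encoder stage of that pipeline, which is available on its own as the open-source Resemblyzer package. Below is a minimal sketch (the WAV file names are placeholders of our own) that derives a compact voice embedding from a short clip and compares two voices; it illustrates the encoding step only, not a recipe for cloning anyone.

```python
# pip install resemblyzer
# A rough sketch: the file names below are placeholders, not real recordings.
from pathlib import Path

import numpy as np
from resemblyzer import VoiceEncoder, preprocess_wav

# Load and normalise two short clips of speech (a few seconds is enough).
reference_wav = preprocess_wav(Path("known_voice.wav"))
candidate_wav = preprocess_wav(Path("unknown_voice.wav"))

encoder = VoiceEncoder()

# embed_utterance() returns a 256-dimensional, L2-normalised vector that
# characterises the voice itself rather than the words being spoken.
reference_embed = encoder.embed_utterance(reference_wav)
candidate_embed = encoder.embed_utterance(candidate_wav)

# Because both vectors have unit length, their dot product is the cosine
# similarity: values near 1.0 suggest the clips are from the same speaker.
similarity = float(np.dot(reference_embed, candidate_embed))
print(f"voice similarity: {similarity:.3f}")
```

It is this compact embedding, computed from seconds of speech, that a cloning system conditions its synthesiser on. The same comparison can also be turned around and used defensively, to flag a recording that does not match a known voice.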

Unfortunately, such an attack is no longer just a theory. In 2019, a CEO was duped out of $243,000 by a voice deepfake: he believed he was speaking with the head of the company’s German parent. What swayed him? He recognised his boss’s way of speaking, down to the melody of his voice and the slight German accent. Sounding right gave the attacker the credibility he needed to make off with $243,000. With this technique in an attacker’s inventory, phishing becomes significantly more dangerous; we’ve already discussed how potent vishing attacks can be.

During a Video Conference

Imagine you are now working from home because of COVID-19. You get an email with a link from a colleague with whom you’ve had a few conversations in the past; he wants you to join him on a video conference. Everything on the call goes according to plan: you exchange pleasantries and share some confidential corporate information. Why would you doubt someone’s identity if they look and sound the way they always do? Sadly, in this scenario, the “colleague” is a fraudster out to steal confidential corporate information. While it may seem impossible today, advances in deepfake technology suggest that this kind of attack will be feasible in the not-too-distant future.
