Cyber Security Today, Week in Review for the week ending Friday, February 9, 2024


Welcome to Cyber Security Today. This is the Week in Review for the week ending Friday, February 9th, 2024. I’m Howard Solomon, contributing reporter on cybersecurity.

In a few minutes Terry Cutler of Montreal’s Cyology Labs will be here to discuss recent news. It includes how a deepfake videoconference fooled an employee of a Hong Kong-based company into wiring US$25 million to criminals; the U.S. Federal Trade Commission calling the cybersecurity of an organization “shoddy”; details about a hack on Cloudflare; and promises by some countries to get tougher on commercial spyware.

Before we begin, a quick review of other headlines from this week.

Remember that fake video conference call I said Terry and I would discuss? One way fake content can be spotted is by a label or watermark attesting to its legitimacy. A group of tech companies called the Coalition for Content Provenance and Authenticity is trying to do just that, and in the latest news Google has joined the coalition. The goal is to create tamper-resistant metadata that can be attached to any digital content — a photo, a video or an audio file — that shows how and when the content was created or modified.
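The core idea of tamper-resistant provenance metadata can be sketched in a few lines of Python. This is a toy illustration of the concept only, not the real C2PA manifest format: the metadata is bound to the content by a cryptographic hash, so any edit to the content invalidates the manifest.

```python
import hashlib

def make_manifest(content: bytes, creator: str, tool: str) -> dict:
    """Build a simplified provenance manifest bound to the content.

    Toy sketch of the C2PA idea: the manifest records who made the
    content and with what tool, tied to the bytes by a SHA-256 hash.
    """
    return {
        "creator": creator,
        "tool": tool,
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }

def verify_manifest(content: bytes, manifest: dict) -> bool:
    """Return True only if the content still matches the manifest's hash."""
    return hashlib.sha256(content).hexdigest() == manifest["content_sha256"]

photo = b"...image bytes..."
manifest = make_manifest(photo, creator="newsroom", tool="camera-firmware-1.2")
print(verify_manifest(photo, manifest))            # original content: True
print(verify_manifest(photo + b"edit", manifest))  # tampered content: False
```

The real specification goes much further — the manifest itself is digitally signed so it can’t simply be regenerated after tampering — but the hash binding above is the foundation.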

Remember that I said Terry and I were going to talk about the countries that promised to take action against the misuse of commercial spyware? That spyware is built by developers who find and exploit security holes in software. How serious is this problem? Google released a report last week saying commercial spyware is behind half of the zero-day exploits that have been reported against Google products and Android devices.

Separately, Google said a pilot project is about to start in Singapore that prevents the installation of fraudulent apps on Android devices. If it’s successful the effort could spread to other jurisdictions.

A New York City medical centre will pay US$4.75 million to settle allegations by the U.S. Department of Health and Human Services that data security failures may have allowed an employee to steal health information on 12,000 patients and sell it. The hospital didn’t know about the theft until alerted by police. Problems included failing to monitor and safeguard the hospital’s health information system.

Two big data breach notifications were filed in the U.S. this week: Verizon Communications Inc. said a staffer stole the personal data of more than 63,000 employees in September. And Bayer Heritage Federal Credit Union of West Virginia said a cyber attack last fall stole the personal information of just over 61,000 clients.

Finally, JetBrains, Cisco Systems, Fortinet and VMware released security fixes this week. JetBrains says there is a critical vulnerability that must be patched in TeamCity. The Cisco patches fix critical holes in its Expressway Series for secure remote access. Fortinet released updates to plug holes in its FortiSIEM system event manager. And VMware released patches closing five vulnerabilities in Aria Operations for Networks.

(The following is a transcript of the discussion of one of the four topics. To hear the complete conversation, play the podcast.)

Howard: Topic one: A sophisticated deepfake call led an employee to transfer millions of dollars to criminals.

Hong Kong police say the employee, who worked in the finance department of an unnamed multinational company, was tricked into sending $25 million to crooks by what appeared to be the company’s chief financial officer on a video conference call. The employee received an email asking them to join the call. It was about a confidential transaction. On the video call, the staffer saw the CFO as well as other people he recognized. So he complied with the instructions.

This is a good example of a fake video call that was probably made with artificial intelligence. The big question is, did this company not have any business process rules? Like “transfers over $1 million must have double authorization?”

Terry Cutler: This will require a multifaceted strategy. If you’re dealing with a CFO who is used to transferring these large amounts of money it’s going to be a bit more tricky than just saying, ‘Oh, they didn’t have the proper processes.’ But companies are going to start bringing in more AI-based detection and prevention solutions. Because these deepfakes are so difficult to spot, it’s going to be like having a detection system on steroids. It’s going to come down to ‘My AI bot just beat your AI bot.’ That’s going to get really tricky. You’d think humans are eventually going to lose control because they can’t keep up with what’s going on behind the scenes with AI. But we have to start looking at something more, maybe more advanced authentication and verification methods. Signing payments using digital signature algorithms such as RSA or ECDSA (the Elliptic Curve Digital Signature Algorithm) is one example. All these strategies can help.
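Terry’s idea of signed payment instructions can be sketched briefly. Real systems would use asymmetric signatures like the RSA or ECDSA he mentions, so only the authorized signer holds the private key; since Python’s standard library has neither, this toy example stands in an HMAC shared secret for the signature. The principle is the same: the receiving system rejects any instruction whose signature doesn’t verify, so an attacker can’t alter or forge an instruction without the key.

```python
import hashlib
import hmac
import json

# Hypothetical shared key for this sketch; a production system would use
# an asymmetric private key (RSA/ECDSA) held only by the authorized signer.
SECRET = b"shared-signing-key"

def sign_instruction(instruction: dict, key: bytes = SECRET) -> str:
    """Produce a signature over a canonical encoding of the instruction."""
    payload = json.dumps(instruction, sort_keys=True).encode()
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

def verify_instruction(instruction: dict, signature: str, key: bytes = SECRET) -> bool:
    """Constant-time check that the instruction matches its signature."""
    return hmac.compare_digest(sign_instruction(instruction, key), signature)

order = {"to": "ACME Supplier", "amount_usd": 25_000_000, "ref": "INV-001"}
sig = sign_instruction(order)
print(verify_instruction(order, sig))       # genuine instruction: True

tampered = dict(order, to="Fraudster Ltd")  # payee altered by an attacker
print(verify_instruction(tampered, sig))    # signature no longer matches: False
```

Note the deepfake scam didn’t tamper with a message in transit; it convinced a human to issue a genuine instruction. Signatures close one channel of fraud, which is why Terry pairs them with out-of-band verification and approval rules.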

As for awareness training, we’re seeing a big problem because users are so used to templated training, which is very, very boring. Employees don’t engage with the training. They don’t see a need for cyber security because they think it doesn’t concern them, but they need to understand that this is everyone’s responsibility. So we need other types of training that are more edutainment. That will help educate them on why it’s so important to stay up to date with cyber security, and not just after they’ve been the victim of a scam. We [also] need plans for what to do when something goes wrong. This is especially true with deepfakes, because it’s getting so difficult to spot them. And, of course, companies should be sharing information on how this [scam] occurred so other companies don’t fall victim.

Howard: The email that invited him to this video conference was a tip-off: ‘This is a secret transaction.’ In awareness training one of the things you’re warned about is to look for little signs like ‘Please treat this as confidential’ or ‘This is a matter of urgency and you’ve got to transfer this money quickly.’ To be fair to the employee, according to police he initially was suspicious. But the video call looked real, and all the people on it looked like people he knows.

Terry: That’s what’s going to be tricky. Imagine that you wake up and find your bank account drained. You call your bank, and they tell you this was a legitimate transaction. Your colleagues were on the phone. Voice-verified. It was verified by email signature. Everything was verified — and you’re left with an empty bank account. It’s very very scary what’s coming up.

Howard: I appreciate that this was a big company and presumably was used to transferring large amounts of money — and I assume that the employee was someone who had authorization to transfer large amounts of money. $25 million is a large sum of money. You need verification control.

Terry: I agree, and I think this is something they’re going to put in place now. [Large transfers] are going to have to be signed off by multiple members [of the company]. More than just dual authorization: maybe it’s going to be better to have the other people responsible for the transaction actually on the call, as well as a separate call to make sure it was really them. Implement a hierarchical approval workflow. You could have independent channels you can call to verify. Set transaction limits.
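The approval rules Terry describes can be expressed as a small policy check. This is a minimal sketch with invented thresholds and role names; the point is that the rule engine, not the employee on the call, decides when a transfer may execute.

```python
from dataclasses import dataclass, field

@dataclass
class Transfer:
    amount_usd: float
    approvals: set = field(default_factory=set)  # roles that have signed off

# Hypothetical hierarchical policy: bigger transfers need more sign-offs.
APPROVAL_RULES = [
    # (minimum amount, roles that must ALL approve)
    (1_000_000, {"manager", "cfo"}),
    (10_000_000, {"manager", "cfo", "board_member"}),
]

def required_approvers(amount: float) -> set:
    """Return the set of roles required for a transfer of this size."""
    needed = {"manager"}  # every transfer needs at least one approver
    for threshold, roles in APPROVAL_RULES:
        if amount >= threshold:
            needed = roles
    return needed

def may_execute(t: Transfer) -> bool:
    """A transfer runs only when every required role has approved it."""
    return required_approvers(t.amount_usd) <= t.approvals

t = Transfer(amount_usd=25_000_000, approvals={"manager", "cfo"})
print(may_execute(t))          # False: board sign-off still missing
t.approvals.add("board_member")
print(may_execute(t))          # True: all required roles approved
```

In the Hong Kong case a check like this would have forced at least one more person, contacted through an independent channel, to confirm the US$25 million transfer before it left the company.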

I’ll give you an example. A friend of mine lost $445,000 from his company. Ordinarily he never wired more than $50,000. When he was hacked, the scammers took over his bank account and began wiring large amounts of money to Mexico. The banks didn’t intervene because his accounts had been pre-authorized by the banks for half a billion dollars. Because the threshold was set so high, the [crooks’] transactions went through. I believe [banks] will start to look at transaction limits before approving transactions.

Howard: As you said, this shows the sophistication of fake voices and videos today.

Terry: This is really scary stuff, because it’s very difficult to know if it’s fake. We’re going to need help from third-party vendors, maybe some telecoms that can trace the signal to see where it came from.

Howard: In related news this week, Meta announced it will soon label all AI-generated images posted to Facebook and Instagram to help people identify fake pictures. It won’t matter whether the images were created with Meta’s AI tool or another company’s tool. There will be a label, or watermark. Meta already marks photos created with its own tool on Facebook and Instagram: it says ‘Imagined with AI’ next to the picture. Hopefully there will soon also be the ability to tag videos and audio files as well as AI-generated still images. Meta says that if it determines a digitally altered image or video is likely to mislead the public in an important way, the label it uses may be more prominent. This watermarking wouldn’t have helped in the deepfake video call case we just discussed, because that was a private call. But it shows the industry is thinking about the problem and trying to find a solution.

Terry: It’s going to be interesting to see, because AI is heavily used for marketing as well. Since the rise of ChatGPT, all these so-called marketers have come up with innovative ways to market their products. There’s a heavy reliance on AI. It’ll be interesting to see social media platforms saying, ‘This was created with ChatGPT and is not original.’

Credit: “Cyber Security Today, Week in Review for week ending Friday, Feb. 9, 2024”
