Risks and Scams Using AI

          Since the recent rise in popularity of chatbot AI software like ChatGPT, many people have begun to express concerns about the potential misuses of this technology. ChatGPT, launched in November 2022, has already seen a massive surge in usage among business professionals, students, and casual users alike. This technology can read and respond to prompts or questions in a humanlike manner, which has prompted skepticism among some and excitement among others. After all, the idea of asking a question or giving a prompt about almost anything and receiving an effortless and usually accurate response can be quite enticing.

          However, this technology is opening up opportunities for severe misuse in ways that haven’t been seen before. Students have begun using ChatGPT to write essays or complete homework assignments for them, and some businesses have started to incorporate it and other AI systems into their workflows to generate auto-responses, write emails, and more. Worst of all, scammers are already beginning to abuse AI chatbots like ChatGPT to amplify their scams. It is not yet known just how much of an effect AI will have on our lives, but it is always important to be cautious in the face of new technology like this. Scam artists and criminals are always looking for new technology they can use to prey on their victims more efficiently or effectively, and AI certainly has the capabilities to help them.

          Some scammers are now figuring out how to use AI like ChatGPT to aid them in phishing and vishing scams. Phishing scams typically use fraudulent emails or other electronic messages disguised as communications from reputable companies to deceive individuals into revealing personal information such as usernames and passwords, credit card numbers, or Social Security numbers. Successful phishing attempts often lead to identity theft and/or financial loss. Vishing, short for “voice phishing,” is a type of phishing scam in which the perpetrator defrauds victims over the phone. The goal of almost every social engineering attack like phishing or vishing is to trick the victim into giving up sensitive information or sending money to the attacker. Scammers have recently created a new method of attack that combines phishing, vishing, and the hype surrounding ChatGPT.

New Scam Uses Copycat ChatGPT Platform

          Some scammers have figured out how to exploit the excitement surrounding ChatGPT to trick people. Security researchers at Bitdefender have identified a phishing attack that uses fake ChatGPT marketing emails to lure in victims. The attack begins with a simple, unsolicited email with a flashy subject line such as “ChatGPT: New AI platform has everyone talking to it” or “New ChatGPT: The chatbot that is making everyone crazy.” The email itself offers very little information unless the recipient clicks an embedded link. After clicking the link, the victim is directed to a copycat ChatGPT platform that offers alluring financial opportunities.

          This fake ChatGPT poses as an AI chatbot designed to “work in the financial market.” It tells the user that its job is to “analyze the financial market” and “make profitable transactions in the stocks of global companies.” It claims that it can earn the user $500 a day regardless of experience level, then asks a variety of questions about the user’s income. Next, it asks the user to input an email address, supposedly to verify that they are a real person. After offering even more phony information about the financial opportunities the platform provides, the fake AI asks for additional personal information, including the user’s phone number. The user then receives a call from a “representative” who tells them how they can invest their money. The scammers have effectively turned their phishing attack into a vishing attack.

          The people on the phone will then send follow-up emails, prompt the user to set up a WhatsApp account, and/or ask for valid ID information. At this point in the scam, they are trying to find any way possible to trick you into handing over money or personal information. In essence, they turn the tables: under the pretense of verifying your identity, they collect your information for nefarious use. It is important to remember to always be suspicious of emails, even ones that appear legitimate. Don’t click on links or images without verifying the validity of the email, and don’t hand over financial or other personal information without verifying the validity of the recipient. Also, remember that the real ChatGPT is not a financial investment platform.
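          For readers comfortable with a little scripting, one quick way to vet a suspicious email before clicking anything is to compare what a link claims to be with where it actually points. The sketch below is a minimal, illustrative Python example, not a complete phishing detector; the email body and every domain in it are hypothetical.

# Minimal, illustrative sketch (not a complete phishing detector):
# scan an email's HTML body for links whose visible text shows one
# domain while the underlying href points somewhere else entirely --
# a classic phishing tell. All domains below are hypothetical.
from html.parser import HTMLParser
from urllib.parse import urlparse


class LinkAuditor(HTMLParser):
    """Collects (href, visible text) pairs from <a> tags."""

    def __init__(self):
        super().__init__()
        self.links = []       # list of (href, visible text) tuples
        self._href = None     # href of the <a> tag currently open, if any
        self._text = []       # visible text collected inside that tag

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append((self._href, "".join(self._text).strip()))
            self._href = None


def flag_mismatched_links(html_body):
    """Return links whose visible text names a different domain than the href."""
    auditor = LinkAuditor()
    auditor.feed(html_body)
    suspicious = []
    for href, shown in auditor.links:
        # Only compare when the visible text itself looks like a URL or domain.
        if " " in shown or "." not in shown:
            continue
        actual_domain = urlparse(href).netloc.lower()
        shown_domain = urlparse(shown if "//" in shown else "//" + shown).netloc.lower()
        if shown_domain and actual_domain and shown_domain != actual_domain:
            suspicious.append((shown, href))
    return suspicious


# Hypothetical email body: the link text claims to be ChatGPT's site,
# but the href leads to an unrelated domain.
body = '<p>Try it now: <a href="https://chatgpt-invest.example.net/start">chat.openai.com</a></p>'
for shown, href in flag_mismatched_links(body):
    print(f"Suspicious link: displays '{shown}' but actually points to '{href}'")

          A mismatch like the one above, where the visible text names one site but the link resolves somewhere else, is exactly the kind of red flag these phishing emails count on readers not noticing. Non-programmers can perform the same check by hovering over a link (without clicking) and reading the destination their email client displays.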

          Similarly, in the last couple of months, other fake ChatGPT apps for mobile devices have begun to surface in various app stores. Some charge weekly or monthly subscriptions to use the tool, while others may be malware or spyware disguised with a fraudulent ChatGPT logo. Either way, these fake AI apps trick users into downloading them so that the perpetrator can steal their money or hack their device. Although these apps may appear entirely legitimate in the app store, they are most certainly not. As of this writing, there is no official OpenAI ChatGPT app for iOS or Android; the only way to access ChatGPT is through an internet browser.

Voice Scams Are Being Enhanced by AI

          Along with the phishing/vishing combination scams, other forms of voice scams are being enhanced by new AI. The Federal Trade Commission (FTC) recently issued a consumer alert about scammers using AI to aid in “family emergency schemes.” Family emergency scams occur when a caller impersonates either an authority figure (a doctor, police officer, lawyer, etc.) or a family member. During the call, the scammer claims that a family member is in trouble and needs money from you right now. Even though these types of scams have been around for a long time, AI is now making them more believable. Scammers are figuring out how to use AI to clone voices.

          All a scammer needs is a short clip of your loved one speaking, and they can attempt to create a clone of their voice to use over the phone. These clips could easily come from content that your loved one has posted online. With the voice clone, when the scammer calls claiming an emergency, their voice will sound just like your loved one’s. In these instances, it is important not to immediately trust the caller. If you are worried about a loved one’s well-being, hang up and call a number that you know is theirs; then you can confirm whether it was actually them who called. If you cannot reach them at their own number, try contacting other family members to get connected with them. You could even call police departments or hospitals to make sure your loved one isn’t actually in danger. Additionally, a caller asking for financial information over the phone is a common sign of a scam and a reason not to trust the caller. If a caller asks you to buy gift cards, wire money, or send cryptocurrency, you might be in the middle of a scam. If you do spot a scam, you can report it to the FTC at ReportFraud.ftc.gov.

          It really is a shame that bad actors are already taking this amazing AI technology and using it to hurt others. It is important to remember to be skeptical and cautious when interacting with anything online, especially if you cannot confirm its origin and validity with absolute certainty. Keep your eyes open for fraudulent emails, websites, or phone calls and do your research before giving out any sensitive information.