Experts Warn of New AI Chatbot DeepSeek

Every day, companies around the world continue to refine their own AI software. If you have been online recently, you have probably noticed that AI seems to be almost everywhere, from your Facebook feed to your iPhone messages, and that presence has grown rapidly over the last few years. As these advancements continue at such a rapid pace, more questions have arisen about the ethics, sustainability, and fraud involved in AI. In particular, cybersecurity experts have grown increasingly concerned about AI's potential to aid cyberattacks: with its help, scammers can craft more sophisticated and convincing attacks.

DeepSeek is a relatively new AI application that has garnered a lot of attention. It originated from a Chinese startup and recently climbed to the top of several app download charts around the world, even causing US tech stocks to sink. Last month, the company released its latest AI model, DeepSeek R1, which is said to be comparable to OpenAI's ChatGPT in terms of capabilities. Essentially, it is an AI-powered chatbot that can handle a range of tasks, such as text generation, mathematics, and coding. Though how it actually stacks up against ChatGPT is hotly debated, many say that DeepSeek is more cost-efficient, uses less memory, and is just as powerful. Additionally, DeepSeek is trained to avoid politically charged questions.
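
For readers who want a concrete sense of how easily a chatbot like this can be driven by software rather than by a person typing questions, here is a minimal sketch in Python. It assumes an OpenAI-compatible API endpoint; the base URL, model name, and API key shown are illustrative placeholders, not details taken from this article.

from openai import OpenAI

# Minimal sketch: querying a chatbot through an assumed OpenAI-compatible API.
# The api_key, base_url, and model values below are placeholders for illustration.
client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://api.deepseek.com",
)

response = client.chat.completions.create(
    model="deepseek-chat",
    messages=[
        {"role": "user", "content": "Explain, in plain language, how to spot a phishing email."}
    ],
)

print(response.choices[0].message.content)

A short script like this can generate thousands of tailored messages in minutes, which is why the same convenience that helps developers also lowers the bar for large-scale misuse, as discussed below.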

Some experts have raised concerns about the reliability of DeepSeek's responses, highlighting biases and potential censorship. For example, when asked about the Tiananmen Square massacre that occurred in China in 1989, it did not provide any details. Some of its outputs have even appeared to favor narratives supported by the Chinese Communist Party (CCP), in some cases refusing to address sensitive topics like human rights altogether. This has major implications, not only for the influence these apps can have on users, but also for how scammers might exploit them.

Experts believe that DeepSeek has the potential to be weaponized, meaning it could rapidly spread misinformation or help execute phishing attacks. Tailored propaganda and sophisticated cyberattacks could be generated with an app like DeepSeek, highlighting one of the biggest downsides of AI as a whole. We have already seen AI-based scams in the wild, and widespread AI chatbots are only a few years old.

It is important to be careful when interacting with AI. Depending on what a model is trained on and trained for, it can be heavily biased, prone to spreading misinformation, or simply lacking accurate and complete information. In cases like DeepSeek, AI models can also be subject to censorship and propaganda. And since AI can make cyberattacks more effective, it is more important now than ever to be careful when surfing the web or clicking links in emails. There is a real risk/reward calculation to consider with any use of AI.

Read our previous post here: Why Privileged Access Management Is Important