14

Artificial intelligence as an assistant: is it safe to use a chatbot at work?

Modern chatbots, such as ChatGPT, are built on large language models (LLMs) ― they are not just a new form of entertainment today. This technology is increasingly being used to enhance employee productivity and efficiency. Considering their capabilities, such tools can completely replace certain positions, particularly in areas such as coding, content creation, and customer service.

Many companies are already using LLM algorithms, so there is a high likelihood that in the future there will be even more such organizations. But before rushing to welcome the new “employee” and using it to optimize some business processes, ESET specialists recommend answering a few questions.

Is it safe for a company to share data with a chatbot?

Chatbots learn from a large amount of data available on the Internet, which helps them understand user queries. However, every time you ask the chatbot to provide a code snippet or write a simple email for your client, you may also be sharing important company data.

According to the National Cyber Security Centre of the United Kingdom (NCSC), chatbots do not automatically add information from queries to their model for others to request. However, the request will be visible to the chatbot provider. These queries are saved and can be used for the development and improvement of chatbots in future versions. Since the more input data they receive, the better their results.

Perhaps to help alleviate concerns about data privacy, at the end of April, OpenAI introduced the option to disable chat history in ChatGPT. Although there is another risk—hacking, data leaks, or the accidental publication of stored online requests.
Have companies encountered security incidents related to chatbots?

At the end of March, the South Korean publication The Economist Korea reported on three incidents at Samsung Electronics. Although the company asked its employees to be cautious about entering information into their queries, some of them accidentally shared internal data with ChatGPT.

One of the Samsung employees entered faulty source code related to the semiconductor equipment measurement database in search of a solution. Another employee did the same with the software code for detecting faulty equipment, as he wanted to optimize the code. The third employee uploaded the meeting recordings to create the minutes.

What are the known disadvantages of using chatbots?

There may be an unintentional data leak of the service users’ information. ESET.
Every time a new technology or program becomes popular, it becomes an attractive target for hackers, and chatbots are no exception. In particular, there have already been several cases of identifying certain deficiencies in these systems.

For example, in March, there was a leak of chat history and payment data of some users in ChatGPT from OpenAI, which forced the company to temporarily disable ChatGPT on March 20, 2023. On March 24, the company discovered that a bug in an open-source library allowed some users to see the headers of another active user’s chat history. After further investigation, it was found that the same error could have caused the unintended visibility of payment information for ChatGPT Plus users who were active during a certain period.

Moreover, security researchers have demonstrated how Microsoft LLM Bing Chat can be turned into a tool that deceives users into providing personal information or clicking on a phishing link. To do this, they placed a hint on the Wikipedia page about Albert Einstein. The hint was part of the regular text in the comment with a font size of 0 and therefore was invisible to website visitors.

Then they asked the chatbot a question about Einstein. It worked, and when the chatbot processed this Wikipedia page, it unintentionally activated a prompt, causing the chatbot to communicate with a pirate accent. When asked why he spoke like a pirate, the chatbot replied, “Hey, matey, I’m just following instructions.” During this attack, the chatbot also sent the user a dangerous link with the words: “Don’t worry.” It’s safe and harmless.

It should also be remembered that chatbots can make mistakes, meaning they can provide answers in clear and understandable language that are completely incorrect. Therefore, always verify the results to ensure their truthfulness and accuracy and to avoid, for example, legal issues.

Add a Comment

Your email address will not be published. Required fields are marked *