If you thought ChatGPT was just the latest craze in AI capability, you may do a double take at the news that cybercrime has gained a new dimension because of it.
So let’s give you a quick run-through of ChatGPT, its rise to worldwide prominence and how it works.
ChatGPT – the what, why, who, when, where and how of it
The owner and producer of ChatGPT, OpenAI, explains: ‘We’ve trained a model called ChatGPT which interacts in a conversational way. The dialogue format makes it possible for ChatGPT to answer follow-up questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests.’ Launched on 30 November 2022, the service racked up an estimated 100 million users by the end of January 2023.
Chatbots have been around for decades, but ChatGPT’s intelligence, born of its extensive training, makes it a cut above the rest. Mashable wrote in a guide to ChatGPT: “ChatGPT hasn’t been put through a thorough evaluation with the Turing test, a test of a machine’s ability to behave like a human. But some researchers believe it has passed the test, nonetheless.” If you ask ChatGPT whether it is alive, it replies: ‘No, I am not alive. I am an artificial intelligence language model developed by OpenAI, I do not have consciousness or feelings. I am just a computer program designed to respond to text inputs and generate outputs based on patterns in the data I was trained on.’ https://mashable.com/article/what-is-chatgpt
The link between cybercrime and ChatGPT
Cybersecurity firm Darktrace has reported an increase in criminals using artificial intelligence to create more sophisticated scams, hack into businesses and deceive employees. The text created by applications like GPT-3 is far superior to the copy used in the average phishing message, making it exceedingly difficult for end users and many email security solutions to detect. – idagent.com
Researchers at Check Point found that, using Codex (a natural-language-to-code system also developed by OpenAI), ChatGPT can be used to develop malicious code. “Our researchers built a full malware infection chain starting from a phishing email to an Excel document that has malicious VBA [Visual Basic for Applications] code. We can compile the whole malware to an executable file and run it in a machine,” says Sergey Shykevich, threat intelligence group manager at Check Point Software. The general consensus across research houses is that ChatGPT can write far more convincing copy for phishing and impersonation emails than we are currently receiving.
Shykevich has explained that the first iteration of the malicious code isn’t perfect and requires slight tweaking by a human. Even so, it gives the user a heightened ability to produce components for a cyberattack far more quickly than in the past. Researchers have also identified ChatGPT being used to develop information stealers and dark-web marketplace applications.
A further concern is the lack of any signature in text generated by ChatGPT. OpenAI released a tool at the end of January 2023 that it said could help distinguish text written by its platform from text written by a human, but it has cautioned users not to rely on the tool alone to determine the authenticity of a piece of content. The tool also works best on content of 1,000 words or more, further narrowing its ability to separate AI-generated from human-generated text. This is another angle to consider when verifying emails that security tools might have difficulty tracking.
Latest update, 14 March 2023: release of GPT-4 – a multimodal model that can also interpret images
OpenAI says in the release on its website, “GPT-4 can solve difficult problems with greater accuracy, thanks to its broader general knowledge and problem solving abilities.” https://openai.com/product/gpt-4
“GPT-4 surpasses ChatGPT in its advanced reasoning capabilities.”
“GPT-4 outperforms ChatGPT by scoring in higher approximate percentiles among test-takers.”
OpenAI assures users that the solution has been tested for safety: “We spent 6 months making GPT-4 safer and more aligned. GPT-4 is 82% less likely to respond to requests for disallowed content and 40% more likely to produce factual responses than GPT-3.5 on our internal evaluations.”
About Nihka Technology Group
The Nihka Technology Group is a South African technology company based in Johannesburg. The Group is focused on bringing the digital future to both the private and public sectors, locally and globally, by delivering innovative, integrated technologies and intelligent solutions. Nihka offers end-to-end, multi-dimensional consulting with an emphasis on integrating human potential – bringing EQ into AI.
www.nihka.co.za