Key Takeaways
- ChatGPT and similar applications raise IT security concerns because they can be used to automate cybercrime activities such as malware creation.
- The use of AI-powered chatbots has led to instances of AI-assisted plagiarism, resulting in bans by some educational institutions and prompting a need to differentiate between AI-generated and human-generated work.
- In the legal field, AI is more likely to augment lawyers on complex tasks than replace them entirely.
The basics of AI and ChatGPT
In November 2022, OpenAI publicly launched ChatGPT, showcasing its ability to process large amounts of data and generate human-like responses. The language model has taken the world by storm and prompted questions about the likely effects of AI across all sectors.
So, how does ChatGPT actually work?
ChatGPT is a large language model. It is trained on massive amounts of text and generates responses by repeatedly predicting which word should come next.
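To make that prediction step concrete, here is a minimal sketch of next-word prediction using a toy bigram model. Real large language models use neural networks over subword tokens rather than simple word counts, but the predict-the-next-word loop is the same basic idea.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the massive text an LLM is trained on.
corpus = "the model predicts the next word and the next word follows the last".split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` during training."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

# Generate text one word at a time, much as an LLM extends a prompt.
word, output = "the", ["the"]
for _ in range(5):
    word = predict_next(word)
    if word is None:
        break
    output.append(word)

print(" ".join(output))  # prints "the next word and the next"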
A casual exchange with ChatGPT may give the impression of human conversation, but ChatGPT and applications like it cannot think or reason the way a human can. It also cannot evaluate whether the information it provides is correct. If you ask ChatGPT to explain Porter’s Five Forces or the types of inflation, it may give you a coherent and believable response, but the information it provides could be incorrect.
Although the tool’s accuracy and efficiency depend on the data it has been trained on, ChatGPT can automate a range of tasks, such as writing essays, answering queries and generating text content. Given the right data, ChatGPT can even imitate individual writing styles through its ability to process and contextualize information.
Although ChatGPT’s launch has been hugely successful, many have raised concerns about its reliability, potential for misuse, and data privacy risks. These concerns are not unfounded, especially given how quickly individuals and businesses have added ChatGPT to their workflow.
It certainly looks like AI-powered chatbots are here to stay, with Microsoft investing USD $10 billion in OpenAI and using ChatGPT's GPT-3.5 technology to enhance Bing search results, and Google announcing its own AI chatbot, Bard.
While the buzz around GPT-3.5 is warranted, the model's output improves as it is trained on more data, making the upcoming release of GPT-4 highly anticipated.
AI chatbots have the potential to offer significant benefits to a range of industries, but it’s important to use them responsibly and with a clear understanding of their limitations. Given the potential risks associated with AI systems, organizations must have robust governance and safeguards in place to inspire confidence and promote responsible use.
What does AI mean for cybersecurity?
IT security is crucial in today’s tech-saturated world, as the frequency and complexity of cyberattacks continue to rise. Effective IT security must be able to detect and respond to security threats in real time, as well as predict and prevent future attacks.
Applying AI to cybersecurity can reduce the impact of hacking by analyzing large amounts of log and event data for signs of malicious activity, similar to the way spam messages are filtered.
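As a rough illustration of that idea (not any vendor's actual product), the sketch below uses scikit-learn's IsolationForest, an anomaly-detection technique, to flag unusual log events. The feature choices and synthetic numbers are assumptions for demonstration only.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row is one log event reduced to simple numeric features:
# [requests_per_minute, failed_logins, bytes_transferred_kb]
normal_traffic = np.random.default_rng(0).normal(
    loc=[60, 1, 500], scale=[10, 1, 100], size=(500, 3)
)
suspicious = np.array([
    [400, 25, 9000],   # burst of requests with many failed logins
    [350, 30, 12000],  # large, unusual data transfer
])

# Train on (mostly) normal activity, then score new events.
model = IsolationForest(contamination=0.01, random_state=42)
model.fit(normal_traffic)

events = np.vstack([normal_traffic[:3], suspicious])
flags = model.predict(events)  # 1 = looks normal, -1 = anomalous

for event, flag in zip(events, flags):
    label = "ALERT" if flag == -1 else "ok"
    print(f"{label}: {event.round(1)}")
```

Production systems work on far richer features and feedback loops, but the principle is the same: learn what normal activity looks like, then surface deviations for human review.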
While some organizations may be wary of implementing AI technology, major tech firms and cybercriminals alike are leveraging its capabilities, making it a critical aspect of tech security in the future.
For example, Israeli security firm Check Point released a report in January 2023 indicating that cybercriminals had begun to use ChatGPT to create basic malware and hacking tools. With applications like ChatGPT set to grow in number and sophistication, companies need to invest in robust cybersecurity measures, as the cost of a data breach is expected to rise with the automation of cyberweapons.
The average global cost of data breaches in 2022 was USD $4.35 million. In the United States, the average cost is almost double that figure. Data breaches are becoming more frequent and costly, with phishing and business email compromise being the most damaging hacking methods. AI and automation are fast becoming the most effective cost-saving solutions for detecting and preventing breaches.
AI cybersecurity technology can provide a concise summary of incidents and their impact on the network, allowing personnel to quickly understand and respond to threats. Rapid identification is a crucial component of reducing the risk of breaches and ensuring cost savings.
Nevertheless, this technology is not a silver bullet against cyberattacks and requires supplemental oversight and training awareness, as systems and programs can still be manipulated and bypassed.
The risk of manipulation and bypassing isn’t the only concern around AI cybersecurity. AI models are trained on data, and if the data is biased, so is the model. These biases could lead to incorrect decisions in security scenarios.
For example, if the model is trained on data from a specific type of network infrastructure, it may not perform well in detecting intrusions in other network types, resulting in false positives (incorrectly flagging an insignificant event) or false negatives (missing an actual intrusion).
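To pin down those two error rates, here is a short sketch of how they are computed from a detector's predictions; the labels below are made up for illustration.

```python
from sklearn.metrics import confusion_matrix

# Ground truth for 10 network events: 1 = real intrusion, 0 = benign.
actual    = [0, 0, 0, 0, 0, 0, 1, 1, 1, 1]
# What a detector trained on a *different* network type predicted.
predicted = [0, 0, 1, 1, 0, 0, 1, 0, 0, 1]

tn, fp, fn, tp = confusion_matrix(actual, predicted).ravel()

print(f"False positive rate: {fp / (fp + tn):.0%}")  # benign events flagged
print(f"False negative rate: {fn / (fn + tp):.0%}")  # intrusions missed
```

Here the detector flags a third of benign events and misses half of the real intrusions, which is exactly the kind of degradation a mismatch between training data and deployment environment can cause.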
How will AI and ChatGPT affect education?
The rapid uptake of ChatGPT has sparked discussions in the education sector about the ethics of using AI in the classroom. When used appropriately, this technology can potentially enhance the student learning experience.
For example, students struggling to grasp a concept can ask ChatGPT for clear and concise explanations, which may help them master the material more effectively. As chatbots improve and become more reliable, teachers may also be able to use the technology to quickly process students’ work.
Additionally, AI-powered chatbots make adaptive learning practices more viable. Feeding the AI relevant data would allow the model to meet the needs of individual students and enhance their understanding of the material.
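As a hypothetical sketch of what that could look like, the snippet below passes a made-up student profile to OpenAI's chat API (using the API style current in early 2023) so the explanation is tailored to the student's level. The profile fields and prompt wording are illustrative assumptions, not any real product's design.

```python
import openai  # pip install openai (0.27.x, the early-2023 API style)

openai.api_key = "YOUR_API_KEY"  # placeholder

# Hypothetical per-student data an adaptive learning system might hold.
student = {
    "level": "year 9",
    "struggles_with": "fractions with unlike denominators",
    "learning_style": "worked examples",
}

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system",
         "content": (f"You are a tutor for a {student['level']} student who "
                     f"struggles with {student['struggles_with']}. "
                     f"Prefer {student['learning_style']}.")},
        {"role": "user", "content": "Explain how to add 1/3 and 1/4."},
    ],
    temperature=0.3,  # keep explanations consistent rather than creative
)

print(response["choices"][0]["message"]["content"])
```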
In February 2023, Genius Group, a prominent provider of IT educational services, announced the Genius AI Educator Suite. This solution leverages the capabilities of OpenAI’s GPT-3 and upcoming GPT-4 technologies to create individualized learning paths for students. The suite also integrates student assessments and course progress, allowing educators to augment their expertise with AI-generated content to develop the best possible educational experiences.
Tools to detect the use of AI – and bans to prevent its use in the first place – have become part of the conversation around generative AI almost as quickly as the technology has spread. Some education institutions have banned the use of AI chatbots amid concerns that students would use ChatGPT and applications like it to produce their work. Meanwhile, tools to detect generative AI output, such as watermarking, are also being trialed.
Some critics of ChatGPT have voiced concerns about the broader effects of widespread use. Many are worried that by using ChatGPT to explain concepts and generate ideas, students will miss key opportunities to develop critical thinking and creative skills.
For example, if a student uses ChatGPT to draft a paper and submits it without making substantial changes or adding their own insights, little is achieved. Neither individual students nor wider society benefits from heavy reliance on AI, as students may fail to develop the abilities needed to communicate effectively or complete complex tasks that require original thought and analysis.
Nevertheless, students will need to learn to evaluate the shortcomings and best uses of AI-generated content, which will be a useful skill in itself as the technology continues to spread.
Simply put, AI is here to stay, and education institutions face the challenging task of finding a balance between AI technology, such as ChatGPT, and traditional teaching methods. Much like the introduction of the calculator or the computer, students will use AI regardless of school policies.
In many cases, they will need to learn how best to use these tools to prepare for life outside of school, as industries increasingly adopt AI-powered technology. By incorporating different methods of testing students’ abilities, educators can leverage the benefits of AI to enhance the learning experience for both students and teachers.
How does AI fit into legislation and the legal field?
AI technology is still in its infancy, and most jurisdictions lack consistent definitions of AI and legislation to regulate its use. In the absence of comprehensive AI regulation, some jurisdictions, including the EU, China, the US and Australia, have proposed laws or ethical principles governing the use of AI. The design and structure of chatbots will be critical in navigating the evolving regulations around AI and its implementation in the legal field.
The arrival of ChatGPT has sparked discussion about the impacts of AI on the role of lawyers. The legal sector stands to benefit greatly from the power of AI to improve document management and analysis.
AI technology can streamline the process of reviewing contracts by identifying risks, renewal/expiration dates, and legal obligations, making it a cost-effective and time-saving tool for lawyers. Additionally, the ability of AI to learn from previous interactions suggests that it could be used for drafting legal documents.
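As a simplified illustration of the document-analysis side, the sketch below pulls renewal and expiration dates out of contract text with regular expressions. Real legal-tech tools rely on trained language models rather than fixed patterns, and the clause wording matched here is an assumption for demonstration.

```python
import re

contract = """
This Agreement shall commence on 1 March 2023 and shall expire on
28 February 2025 unless renewed. The renewal date is 1 December 2024,
by which notice of non-renewal must be given.
"""

DATE = r"\d{1,2} \w+ \d{4}"

# Look for dates that appear near renewal/expiration language.
patterns = {
    "expiration": rf"expire[sd]? on\s+({DATE})",
    "renewal":    rf"renewal date is\s+({DATE})",
}

for label, pattern in patterns.items():
    match = re.search(pattern, contract, flags=re.IGNORECASE)
    if match:
        print(f"{label}: {match.group(1)}")
```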
A survey conducted by Linklaters in December 2022 tested ChatGPT’s ability to dispense legal advice by asking it 50 legal questions; the quality of the answers was rated between 2 and 3 on a scale of 1 to 5. Applications such as ChatGPT can produce inaccurate outputs due to flawed inputs or bias embedded in the underlying data, which highlights the risk of negligence inherent in using the technology for legal work.
The use of AI for direct legal advice may present unacceptable risks for clients, due to geographical and jurisdictional limitations and the risk of error.
Since communication with a chatbot is not protected by legal professional privilege, a cornerstone of the legal profession, relying exclusively on AI-generated information for important decisions is potentially negligent.
Coupled with the data privacy and information security risks involved in using applications such as ChatGPT, many aspects of the legal profession will likely remain free of AI automation for the foreseeable future.
Final Word
AI chatbots have the potential to disrupt the education, law and IT sectors in a range of ways, both positive and negative.
In the legal field, AI output is unlikely to supersede human oversight any time soon. Meanwhile, chatbots can be used to enhance, but should not replace, original work in education, and they have the potential to improve access and equity in learning. Monitoring the effects of AI applications such as ChatGPT on cybersecurity and productivity is essential as the technology evolves and regulation tightens.
Distinguishing human-generated work from AI-generated work will be a crucial component of developing regulation, while chatbots that require registration and produce traceable output will be important in preventing misuse.
Ultimately, ChatGPT, and AI in general, is here to stay. As the use of AI chatbots continues to expand, it is critical for companies and institutions to engage with caution and invest in measures that minimize potential risks.
Sign up to our newsletter and follow Alfabank-Adres on LinkedIn to keep up to date with our latest insights and market research guides.