Extract from Epiq’s article “Nothing to Fear Here – Using AI Tools Responsibly Breeds Success”
The news is flooded with stories about artificial intelligence (AI) tools — some shining a light on the benefits, others intended to provoke fear and panic. While the latter has bred some skepticism about AI usage in business, it is time to take a step back and look at what drives the errors. Oftentimes, it is not the tool itself but the human behind the tech. The reality is that many organizations are using AI highly successfully and without mishap.
Every organization – regardless of the industry in which it operates – should understand how to utilize AI in a safe and responsible manner. Most are likely already aware of the usual risks present in AI tools and other emerging technologies: inherent bias, cybersecurity gaps, and lack of transparency, to name a few. However, many forget to factor in the human component. It is crucial to understand the benefits and risks of both the human and the technological contributions to changing workflows. This helps teams build strategies and systems that bring out the best in both while effectively managing potential downsides.
AI in the News
A notable AI story in the media recently concerned a New York lawyer who used ChatGPT to help draft a brief that went south. The output included convincingly cited cases that did not in fact exist. Opposing counsel discovered the false citations and brought the issue to the court. When the lawyer asked ChatGPT to verify the accuracy of the decisions, the tool fabricated further details and attributed the existence of the cases to legal research search engines. The lawyer responded that this was his first time using ChatGPT as a supplement to legal research and that he had been unaware the tool could create false information.