
AI21 Labs debuts anti-hallucination feature for GPT chatbots


Contextual Answers is designed for enterprise but could have far-reaching implications for the generative AI sector.

AI21 Labs recently launched “Contextual Answers,” a question-answering engine for large language models (LLMs). 

When connected to an LLM, the new engine allows users to upload their own data libraries in order to restrict the model’s outputs to specific information.
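For illustration, the sketch below shows how an application might query a contextual question-answering service that is restricted to user-supplied text. The endpoint URL, request fields and response shape are assumptions made for this example and may not match AI21's actual API.

```python
# Hypothetical sketch: ask a contextual-answers style endpoint a question that
# must be answered only from the supplied document text. The endpoint, field
# names and response shape are assumptions, not AI21's documented API.
import requests

API_KEY = "YOUR_AI21_API_KEY"                         # placeholder credential
ENDPOINT = "https://api.ai21.com/studio/v1/answer"    # assumed endpoint

document_text = (
    "Q2 revenue was $4.2M, up 18% year over year. "
    "Headcount at the end of the quarter was 63."
)

response = requests.post(
    ENDPOINT,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "context": document_text,            # the user-provided document library
        "question": "What was Q2 revenue?",  # query answered only from that context
    },
    timeout=30,
)
response.raise_for_status()
print(response.json())  # expected to contain the grounded answer, if any
```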

The launch of ChatGPT and similar artificial intelligence (AI) products has been paradigm-shifting for the AI industry, but a lack of trustworthiness makes adoption a difficult prospect for many businesses.

According to research, employees spend nearly half of their workdays searching for information. This presents a huge opportunity for chatbots capable of performing search functions; however, most chatbots aren’t geared toward enterprise.

AI21 developed Contextual Answers to address the gap between chatbots designed for general use and enterprise-level question-answering services by giving users the ability to pipeline their own data and document libraries.

According to a blog post from AI21, Contextual Answers allows users to steer AI answers without retraining models, thus mitigating some of the biggest impediments to adoption:

“Most businesses struggle to adopt [AI], citing cost, complexity and lack of the models’ specialization in their organizational data, leading to responses that are incorrect, ‘hallucinated’ or inappropriate for the context.”

One of the outstanding challenges related to the development of useful LLMs, such as OpenAI’s ChatGPT or Google’s Bard, is teaching them to express a lack of confidence.

Typically, when a user queries a chatbot, it will output a response even if its data set lacks the information needed to answer accurately. In these cases, rather than give a low-confidence answer such as “I don’t know,” an LLM will often fabricate a response with no factual basis.

Researchers dub these outputs “hallucinations” because the machines generate information that seemingly doesn’t exist in their data sets, like humans who see things that aren’t really there.

According to AI21, Contextual Answers should mitigate the hallucination problem by outputting information only when it’s relevant to user-provided documentation, or by outputting nothing at all.
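That “answer or abstain” behavior can be expressed as a simple guard in application code: only surface an answer the engine marked as grounded in the supplied documents, and otherwise return an explicit “not found” message. The response fields used below are hypothetical names for illustration, not a confirmed part of AI21’s response schema.

```python
# Minimal sketch of the "answer or abstain" behavior described above.
# The fields "answer" and "answer_in_context" are hypothetical names;
# the real response schema may differ.
from typing import Optional


def grounded_answer(api_response: dict) -> Optional[str]:
    """Return the answer only if the engine found it in the provided documents."""
    if api_response.get("answer_in_context") and api_response.get("answer"):
        return api_response["answer"]
    # Abstain instead of hallucinating: signal that the documents
    # do not contain the requested information.
    return None


result = grounded_answer({"answer": None, "answer_in_context": False})
print(result or "Answer not found in the provided documents.")
```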

In sectors where accuracy is more important than automation, such as finance and law, the onset of generative pretrained transformer (GPT) systems has had varying results.

Experts continue to recommend caution when using GPT systems in finance due to their tendency to hallucinate or conflate information, even when connected to the internet and able to link to sources. And in the legal sector, a lawyer now faces fines and sanctions after relying on outputs generated by ChatGPT during a case.

By front-loading AI systems with relevant data and intervening before the system can hallucinate non-factual information, AI21 appears to have demonstrated a mitigation for the hallucination problem.

This could result in mass adoption, especially in the fintech arena, where traditional financial institutions have been reluctant to embrace GPT tech and the cryptocurrency and blockchain communities have had, at best, mixed success employing chatbots.


Related: OpenAI launches ‘custom instructions’ for ChatGPT so users don’t have to repeat themselves in every prompt

