Council Post: The Real Value Of ChatGPT: A Front Door To A Graph Language Model


The author is the chief scientist at the graph database platform Neo4j and Visiting Professor at Newcastle University.

It’s time to look at the true meaning of generative AI.

To be clear, ChatGPT is not sentient or intelligent in the human sense. The generative AI it is based on allows it to produce good-quality prose (or source code) at the level of a well-informed human. But even the well-informed humans whose writing it draws on to craft its answers sometimes make mistakes, and ChatGPT makes those mistakes hard to detect because of the certainty of its tone and the complexity of the underlying model.

It’s time for CIOs to take a sober look at the best way to move forward with the technology that powers generative artificial intelligence: large language models (LLMs). Even though Italy has just banned ChatGPT, and a group of tech leaders and researchers, including Elon Musk, has called for a pause in the development of anything more advanced than GPT-4, the technical community continues to find great value in the approach.

In systems where compliance or security is important, we simply cannot take mechanical omniscience at face value, nor can we ignore the potential benefits of generative AI. Our response should combine the best of ChatGPT with other means of ensuring rigor and transparency.

To that end, the first area where I think enterprise IT leaders can realize the real potential of ChatGPT is developer productivity. After all, computer code is just another form of prose, and generative AI has scoured the internet to build a model that predicts the most effective words with which to answer queries. That means the technology has also assimilated numerous examples of computer code, making it a valuable resource for improving productivity.

As a result, ChatGPT is proving to be a way to help create and refine programs. It has been estimated that up to 80% of what software engineers do could be automated with a large language model. A lot of software creation involves repetition or boilerplate code, and ChatGPT can produce that code very efficiently. We shouldn’t be that surprised: we’ve already seen productivity gains for developers from GitHub’s Copilot, which has an LLM at its heart.

Data analysis and manipulation

A different use case for ChatGPT is as a natural language processing engine for creating databases that can be analyzed for quality and compliance before deployment. Consider the complex real-world problem of attempting to standardize the representation of biomedical knowledge. This has historically been hard because it is difficult to get people to agree on common definitions, which has made it nearly impossible to build analytics on top of the data.

However, LLMs present an opportunity to overcome these challenges, because an LLM can ingest large amounts of text and make sense of it.

Some researchers have made a real breakthrough with the creation of BioCypher, a FAIR (Findable, Accessible, Interoperable, Reusable) framework that transparently builds biomedical “knowledge graphs” while preserving all links to the original data.

Knowledge graphs, as defined by the Turing Institute, “organize data from multiple sources, capture information about entities of interest in a given domain or task (such as people, places, or events) and create connections between them.”

The team responsible for BioCypher accomplished just that by taking a large body of medical research papers, building a large language model around them, and then deriving a knowledge graph from the model. This approach has allowed researchers to examine and work with a mass of previously unstructured data far more efficiently, now that it is well organized and well structured. And because the data exists in an explicit knowledge graph, it is transparent, and the answers derived from it can be reasoned about.

Nothing stops you from doing the same. You can collect a significant body of text and, by using an LLM to perform the difficult task of natural language ingestion, create a knowledge graph that helps you make sense of the data.
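To make that concrete, here is a minimal sketch of the pattern, not BioCypher itself. It assumes a hypothetical extract_triples() helper that wraps whichever LLM you choose, plus placeholder connection details; the graph loading uses the official Neo4j Python driver.

```python
# Minimal sketch: LLM-assisted text-to-knowledge-graph ingestion.
# extract_triples(text) is a hypothetical helper that calls your LLM of
# choice and returns (subject, relation, object) tuples; the exact prompt
# and model are up to you. The Neo4j connection details are placeholders.
from neo4j import GraphDatabase

def extract_triples(text: str) -> list[tuple[str, str, str]]:
    """Placeholder for an LLM call that turns prose into triples,
    e.g. ("aspirin", "TREATS", "headache")."""
    raise NotImplementedError("wire up your LLM of choice here")

def load_into_graph(driver, triples):
    # MERGE keeps the load idempotent: re-running it won't duplicate nodes.
    query = (
        "MERGE (s:Entity {name: $subj}) "
        "MERGE (o:Entity {name: $obj}) "
        "MERGE (s)-[r:RELATES {type: $rel}]->(o)"
    )
    with driver.session() as session:
        for subj, rel, obj in triples:
            session.run(query, subj=subj, rel=rel, obj=obj)

if __name__ == "__main__":
    driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))
    documents = ["...your corpus of unstructured text..."]
    for doc in documents:
        load_into_graph(driver, extract_triples(doc))
    driver.close()
```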

The reverse is also true – you can take control of what your LLM learns by giving it a knowledge graph. This allows you to control the input to the model, resulting in a responsive natural language interface on top of your graph that is easy to examine.
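A sketch of that reverse direction, under the same assumptions (placeholder connection details and a hypothetical ask_llm() wrapper for your model of choice): fetch relevant facts from the graph first, then hand only those curated facts to the model as the context for its answer.

```python
# Minimal sketch of grounding an LLM's answers in the knowledge graph.
# ask_llm() is a hypothetical wrapper around whichever model you use; the
# Cypher query and connection details are illustrative placeholders.
from neo4j import GraphDatabase

def ask_llm(prompt: str) -> str:
    """Placeholder for a call to your LLM of choice."""
    raise NotImplementedError("wire up your LLM here")

def facts_about(driver, entity_name: str) -> list[str]:
    """Fetch the graph's known relationships for an entity as plain-text facts."""
    query = (
        "MATCH (s:Entity {name: $name})-[r:RELATES]->(o:Entity) "
        "RETURN s.name AS subj, r.type AS rel, o.name AS obj"
    )
    with driver.session() as session:
        return [f"{rec['subj']} {rec['rel']} {rec['obj']}"
                for rec in session.run(query, name=entity_name)]

def grounded_answer(driver, question: str, entity_name: str) -> str:
    # Only facts the graph can vouch for go into the prompt, which is what
    # makes the eventual answer inspectable.
    facts = facts_about(driver, entity_name)
    prompt = ("Answer the question using only these facts:\n"
              + "\n".join(facts)
              + f"\n\nQuestion: {question}")
    return ask_llm(prompt)

# Example usage:
# driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))
# print(grounded_answer(driver, "What treats headaches?", "aspirin"))
```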

The rise of SLMs

This kind of small language model (SLM) radically reduces the types of errors that occur with ChatGPT. LLMs trained on the open internet “hallucinate” because they consume vast amounts of text and have to try to make sense of it in order to synthesize what appears to be knowledge.

But at least some of that text is contradictory, so ChatGPT sometimes goes off on a tangent. By training a model on curated, high-quality, structured data – which is, after all, what a knowledge graph is – you can significantly reduce the risk of errors and “hallucinations.”

While this may limit the range of answers the model can generate, because it is trained on far less data than an LLM that consumes the internet wholesale, it also means that the answers it does generate will be more reliable.

SLMs are now starting to appear in real systems. Imagine what Amazon could do with an SLM: pulling all of its product documentation from its databases, loading it into a ChatGPT-style model, and offering customers a conversational interface across all that complexity.

It’s important to note that you can’t achieve these results simply by connecting ChatGPT to a cache of documents. The necessary smart element is a knowledge graph, which is what enables explainability.

Knowledge Graphs: The Way Forward for ChatGPT?

If CIOs want to start tapping into the hidden knowledge and untapped potential in their internal data warehouses by applying LLMs to them, then building and refining knowledge graphs using proven graph science algorithms is the way to go.
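As one small illustration of what “proven graph science algorithms” can mean in practice, the sketch below ranks the entities in the graph built earlier by PageRank so the most central concepts surface first. It assumes the Neo4j Graph Data Science (GDS) plugin is installed; the procedure names follow GDS 2.x conventions, and the connection details are placeholders.

```python
# Sketch: ranking entities in the knowledge graph by PageRank so the most
# connected concepts surface first. Assumes the Neo4j Graph Data Science
# (GDS) plugin is installed; procedure names follow GDS 2.x conventions.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

with driver.session() as session:
    # Project the Entity/RELATES graph into GDS's in-memory catalog.
    session.run("CALL gds.graph.project('entities', 'Entity', 'RELATES')")
    # Stream PageRank scores and keep the ten most central entities.
    result = session.run(
        "CALL gds.pageRank.stream('entities') "
        "YIELD nodeId, score "
        "RETURN gds.util.asNode(nodeId).name AS name, score "
        "ORDER BY score DESC LIMIT 10"
    )
    for record in result:
        print(record["name"], round(record["score"], 3))

driver.close()
```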

ChatGPT can provide significant benefits to businesses. Generative technologies are not there to replace human workers but to support you and your business.

