TechTalk Daily

7 Reasons AI Needs Knowledge Management

By: Daniel W. Rasmus for Serious Insights

Much of the discussion about generative AI and the future of work focuses on the potential for AI to displace workers. Vague references often suggest that AI will create new jobs, just as the automation of the past did. As with previous mechanical and digital automation, there are populations of experts who already understand the human labor required to make automation efficient and sustainable. Computers without programmers could do nothing. Grain combines without mechanics to clean and maintain their mechanisms would soon stall in the vast wheat fields they were intended to harvest.

Generative AI certainly has its inventors, those who know the secrets instilled in its algorithms, as much as the algorithms permit. They know, at least more than the users of generative AI do, what mixes of data were used for training.

Many believe that, once trained, generative AI is akin to a mythical beast that rises from an abyss. In this case, that abyss is the Internet, both light and dark. Generative AI follows the proclivities of all those who populated those vast repositories with facts, opinions, grievances, poetry, and many other outputs of humanity, written and visual.

To move beyond experiment, to prompt adoption and use by individuals and businesses, generative AI must be trusted. An AI that randomly curses, reflects discriminatory positions, or devolves into nonsense raises doubt about the efficacy of AI as a legitimate partner in business or in life.

So, the developers of Large Language Models (LLMs) institute various remediations for aberrant AI behavior, reining in its most outlandish assertions. Rather than reflect all of the Internet, LLM makers hone output to avoid culturally sensitive areas, adopt a shareable context, and attempt to curtail the most unconventional outbursts. They do this response management with manual overrides that sit between the LLM and the people or systems making queries.

The key word here is manual. While there may be some automation involved in writing what have become known as guardrails, the choices about what needs to be guardrailed are purely human. No generative AI polices its own aberrant or offensive behavior.
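
As a concrete illustration, a manual guardrail layer might look something like the following minimal Python sketch. Everything in it, the blocked patterns, the call_llm stub, and the wrapper, is a hypothetical illustration rather than any vendor's actual implementation.

```python
import re

# A minimal sketch of a manual guardrail layer sitting between users and
# an LLM. Patterns and functions here are hypothetical illustrations.
BLOCKED_PATTERNS = [
    # Humans, not the model, decide what belongs on this list.
    re.compile(r"\b(build|make)\s+a\s+weapon\b", re.IGNORECASE),
]

def call_llm(prompt: str) -> str:
    # Stand-in for a real model call.
    return f"Model response to: {prompt}"

def guarded_query(prompt: str) -> str:
    # The manual override: screen the request before the model sees it.
    for pattern in BLOCKED_PATTERNS:
        if pattern.search(prompt):
            return "Request declined by policy."
    return call_llm(prompt)

print(guarded_query("How do I bake bread?"))
```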

The manual nature of guardrails requires knowledge management. Which guardrails are in place, and what their content and context are, may need to be known to offer comfort to buyers, or to act as a baseline as expectations, politics, or other contexts change.

Guardrails are not the only area where knowledge management needs to be applied to generative AI.

The following list outlines the most important areas where organizations need to apply knowledge management principles to generative AI development and deployment.

  1. Guardrails. Guardrails need to be documented and searchable. They will likely become complex. Some commercial services may already have guardrails in place. Customers will need to understand those parameters so they do not write new guardrails that conflict with those already wrapped around a generative AI service. Organizations will need to include guardrails in their change management practices, as guardrails may become irrelevant or require augmentation as business situations change, such as acquiring a new firm or opening a branch office in a new country. (A sketch of a searchable guardrail registry follows this list.)
  2. LLM Metadata. Anyone who has sifted through the dozens of models available for download through LM Studio will realize that the most important parameters in that search concern a model's ability to run on the target hardware. Neptune.ai is all about machine learning metadata. They understand that organizations need to know how models were built and evaluated, what dataset versions were used, and test metrics, to name just a few attributes of metadata that are crucial to understanding the nature and history of models, the skills required to maintain them, and their best use cases. Model repository Hugging Face offers some search filters for sifting its repositories, but they don't include the kinds of parameters Neptune.ai tracks. Hugging Face models often list important information in their associated narrative, but that means reading through the release notes of candidate LLMs to see how they were trained and perhaps whether they can be trusted. But first they need to be discovered. Hugging Face and other repositories will need to apply metadata so that they can keep their repositories relevant. (A sketch of such a metadata record follows this list.)
  3. Context models, such as knowledge graphs, are often human-generated. Knowledge graphs are not new. They offer a semantic, often visual, way to represent the relationships between categories of knowledge and their related data. Knowledge graphs are becoming a go-to approach for producing more contextually correct results from generative AI by reducing its tendency to wander through, and incorporate, less relevant data that might skew output and create distrust. Knowledge graphs require human reflection, monitoring, and maintenance. (A small graph sketch follows this list.)
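
For item 1, here is a minimal sketch of what a documented, searchable guardrail registry could look like. Every field name and value is a hypothetical illustration; the point is only that guardrails become records that can be queried and reviewed under change management.

```python
from dataclasses import dataclass, field
from datetime import date

# A hypothetical guardrail registry entry; field names are illustrative.
@dataclass
class GuardrailRecord:
    name: str          # short, searchable identifier
    description: str   # what behavior the guardrail constrains
    scope: str         # e.g., "vendor-supplied" or "customer-written"
    owner: str         # change management contact
    effective: date    # when it took effect
    review_by: date    # trigger for re-evaluation, e.g., entering a new market
    tags: list[str] = field(default_factory=list)

registry: list[GuardrailRecord] = [
    GuardrailRecord(
        name="no-medical-advice",
        description="Blocks responses that read as clinical diagnosis.",
        scope="vendor-supplied",
        owner="ai-governance@example.com",
        effective=date(2024, 1, 15),
        review_by=date(2024, 7, 15),
        tags=["safety", "regulated-domains"],
    ),
]

# Searchability: find guardrails that might conflict with a planned new one.
def find_by_tag(tag: str) -> list[GuardrailRecord]:
    return [g for g in registry if tag in g.tags]

print([g.name for g in find_by_tag("safety")])
```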
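
For item 2, a sketch of the kind of structured model metadata the article argues repositories should expose. The schema and values are invented for illustration; they do not reflect Hugging Face's or Neptune.ai's actual formats.

```python
# A hypothetical model metadata record; all keys and values are illustrative.
model_card = {
    "model_id": "example-org/example-llm-7b",  # hypothetical model
    "parameters_b": 7,
    "quantization": "Q4_K_M",                  # drives hardware fit
    "min_ram_gb": 8,                           # can it run on target hardware?
    "training_data": {
        "dataset_versions": ["webcrawl-v2.1", "code-v1.3"],  # illustrative
        "cutoff_date": "2023-09",
    },
    "evaluation": {
        "mmlu": 0.62,          # example test metrics
        "toxicity_rate": 0.01,
    },
    "license": "apache-2.0",
}

# With structured metadata, discovery becomes a filter rather than a
# read-through of release notes.
def fits_hardware(card: dict, ram_gb: int) -> bool:
    return card["min_ram_gb"] <= ram_gb

print(fits_hardware(model_card, ram_gb=16))  # True
```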
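
For item 3, a small sketch of a knowledge graph as subject-predicate-object triples, with a traversal that gathers only the facts related to an entity. The entities and relations are invented; the idea is that passing this bounded neighborhood to a model as context reduces its tendency to wander into less relevant data.

```python
# A hypothetical knowledge graph as subject-predicate-object triples.
triples = [
    ("Acme Corp", "operates_in", "Germany"),
    ("Acme Corp", "sells", "Turbines"),
    ("Turbines", "regulated_by", "EU Machinery Directive"),
]

# Collect only the facts within a few hops of an entity; this bounded
# neighborhood, not the whole Internet, becomes the model's context.
def neighborhood(entity: str, hops: int = 2) -> set[tuple]:
    frontier, found = {entity}, set()
    for _ in range(hops):
        new = {t for t in triples if t[0] in frontier or t[2] in frontier}
        frontier = {t[0] for t in new} | {t[2] for t in new}
        found |= new
    return found

for s, p, o in sorted(neighborhood("Acme Corp")):
    print(f"{s} --{p}--> {o}")
```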

If you're interested in the remaining 4 areas where organizations need to apply knowledge management principles to generative AI development and deployment, check out the rest of the article on SeriousInsight.com: 7 Reasons AI Needs Knowledge Management.

About the author:

Daniel W. Rasmus, the author of Listening to the Future, is a strategist and industry analyst who has helped clients put their future in context. Rasmus uses scenarios to analyze trends in society, technology, economics, the environment, and politics in order to discover implications used to develop and refine products, services, and experiences. He leverages this work and methodology for content development, workshops, and for professional development.

Interested in AI? Check here to see what TechTalk AI Impact events are happening in your area.