
Ten Questions Every Business Will Be Asking About Generative AI Next Year

April 27, 2023 by Daniel W. Rasmus
 


Which generative AI systems are worth paying for, and how are they different?


Bing, ChatGPT, Bard—the big-company generative AIs—are, to some degree, mostly free. Some are related, and some are competitive. For a concept this big and techniques this opaque, comparisons will be difficult. Unlike enterprise software, where feature lists can be compared and the fit of those features to purpose can be tested, generative AI’s continuous learning and imprecise output make it hard to test against use cases.

Some are free, some are free in a limited way, and some require payment and licenses. Add into the mix companies like Jasper, copy.ai, anyword, Smartwriter.ai and many others, and the market is going to get confused quickly about which generative AIs do what, what their differentiated value propositions are, and whether they all arrive equally trusted.

Can I trust generative AI for critical business processes? 


No, not at first glance, certainly, and not without oversight and supervision. I recently read a suggestion, and I’m sure people are actually thinking about this, that generative AI could be used in medical situations to send notes to patients about their lab results. Let’s say that 999 out of 1,000 responses prove spot-on. One is bad, but it’s really bad: it leads to hospitalization, perhaps death. I don’t know the real numbers, but even a scant chance of error in a medical process should be of concern. Yes, people make mistakes too, but people are accountable.

For critical business processes, people will need to monitor the messages at a minimum. For how long? Likely a long time, because there is no guarantee that a new Large Language Model (LLM) release won’t negatively impact something that appears to be working.

Welcome the AI oversight function. Will people want to do that? What if people become lazy and miss misinformation? When we talk about trust, who will be accountable for generative AI errors in critical business situations that may threaten profits, productivity, or lives?
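
One way to picture that oversight function is a human-in-the-loop gate: nothing the model drafts reaches a patient or customer until a named person approves it, and every decision is logged so accountability stays with people. The sketch below is a hypothetical, minimal illustration in Python, not any vendor’s product; `queue_for_review` and the other names are invented stand-ins for whatever workflow an organization actually builds.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class Draft:
    """An AI-drafted message held until a person reviews it."""
    recipient: str
    text: str
    approved: bool = False
    reviewer: Optional[str] = None
    reviewed_at: Optional[datetime] = None

review_queue: list[Draft] = []
audit_log: list[dict] = []

def queue_for_review(recipient: str, ai_text: str) -> Draft:
    """Every AI draft goes into the queue; nothing is sent automatically."""
    draft = Draft(recipient=recipient, text=ai_text)
    review_queue.append(draft)
    return draft

def review(draft: Draft, reviewer: str, approve: bool) -> None:
    """A named reviewer approves or rejects, and the decision is logged."""
    draft.approved = approve
    draft.reviewer = reviewer
    draft.reviewed_at = datetime.now(timezone.utc)
    audit_log.append({
        "recipient": draft.recipient,
        "approved": approve,
        "reviewer": reviewer,
        "at": draft.reviewed_at.isoformat(),
    })

# Hypothetical usage: an AI-drafted lab-result note waits for a clinician.
note = queue_for_review("patient-1042", "Your lab results are within normal ranges.")
review(note, reviewer="Dr. Lee", approve=True)
if note.approved:
    print(f"Send to {note.recipient}: {note.text}")
```

The design point worth noticing is that the gate sits outside the model: however long the oversight stays in place, the accountability question above is answered by the audit log, not by the AI.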

Do generative AI systems actually know anything? 


No. Generative AI does not know anything in the way that you or I know something. There is no need to worry about generative AI becoming conscious. There is worry about people believing that it is conscious. In scenario planning, technology issues co-exist with social, political, economic and environmental issues. That people believe an AI to be conscious, perhaps threatening, perhaps trustworthy, is a social phenomenon that technologists are likely not prepared for, or even capable of, addressing.

At a fundamental level, large AI models, be they based on language or images, or both, consume enormous amounts of content. When prompted, they offer up the most statistically likely prediction of what someone asking that question would want to know. While some technologists may suggest that this design is inspired by the human mind, a collection of connected neurons, it is a far simpler model and, more importantly, a completely different architecture than the human brain.
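
For readers who want a concrete picture of "the most statistically likely prediction," here is a deliberately tiny sketch in Python. The vocabulary, scores and temperature are invented for illustration; real models work over tens of thousands of tokens and billions of parameters, but the mechanism, score every candidate, convert scores to probabilities, then sample, is the same in spirit.

```python
import math
import random

# Toy illustration of next-token prediction: the model assigns a score
# (logit) to every candidate word given the prompt so far, turns those
# scores into probabilities, and samples a continuation.
# The vocabulary and logits below are invented for illustration only.

def softmax(logits):
    """Convert raw scores into a probability distribution."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [x / total for x in exps]

def next_token(candidates, logits, temperature=1.0):
    """Sample one continuation, weighted by the model's scores."""
    scaled = [x / temperature for x in logits]
    probs = softmax(scaled)
    return random.choices(candidates, weights=probs, k=1)[0]

prompt = "The best way to smoke a brisket is to"
candidates = ["trim", "season", "rush", "boil"]
logits = [2.1, 1.8, -1.0, -2.5]  # higher score = seen more often after similar text

print(prompt, next_token(candidates, logits))
```

Nothing in that loop understands brisket; the scores only reflect which words tended to follow similar text in the training data.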

Large AI models know how to access the data they have stored and navigate the relationships between that data, and to that extent, they “know” what they have learned, but they have no deep concept of the meaning of that content…all of which comes from secondary sources. Large AI models have no first-hand experience with anything they “know.”

Think about the number of blogs that answer questions like, “What’s the best way to smoke a brisket?” If asked for the best way to smoke a brisket, a large language model will return a synthesis of answers related to that question.

The LLM will likely produce a result that will work. It may even offer alternative views. What it won’t produce is an actual opinion on the best way to smoke a brisket—by design, generative AI only offers suggestions, not definitive answers. The AI has never smoked a brisket. It hasn’t touched or smelled a brisket. Generative AI has no common sense derived from first-principle experience; all it can do is find commonality among dozens, hundreds or thousands of inputs, create a reasonable list of recommendations, and share that analysis with the person prompting it.

Generative AI offers none of the nuances of human insight. And not only does it not have knowledge of brisket, but it also has no sensory knowledge other than what it has read or seen (seen as data because an AI interpreting images does not see those images in the same way as a human does). It has no taste, no touch, no hearing.

Sensory input is a crucial factor in human intelligence because we are not just verbal or visual; we synthesize data from all our senses. A human BBQ master might well write a book, but if you enter a BBQ smoke pit, they will also share moments like, “Smell that…” and “Touch it like this, you can feel when it’s ready.” Advice and experiences that can be captured in writing or even on video.

No AI can make that leap, and they don’t have access to that data at this point and won’t likely have access to it anytime soon. And if they do, they will create richer answers based on the data in an artificial context, but they still won’t know, in a human sense, about the topics on which they pontificate.

What can I do with my AI-generated content? 


The lawsuits have just begun, and they will increase. Unfortunately, the legal system doesn’t move at the breakneck speed of technology entrepreneurs, so it may take some time for legal precedent to arrive for generative AI content (or training sets). The best current answer is that if you use AI-generated content to make money, you may be open to legal issues if people or businesses recognize that their intellectual property has been repurposed by AI without permission.

A recent lawsuit from Getty Images against Stable Diffusion (see Getty Images sues AI art generator Stable Diffusion in the US for copyright infringement) claims 12 million images were ingested from its database. One damning example shows a soccer player in an image with a distorted Getty Images watermark still visible in the AI-generated output. There is little doubt that the generative AI incorporated Getty Images’ intellectual property into its learning data set.

This line of challenge also brings into view the role of uncompensated labor. Getty pays photographers when outlets and businesses buy their images. Not everybody will be a Getty Images with the financial resources required to sue another company. Generative AI has consumed a lot of content. What if 7 out of 10 items on a recommendation are exactly the same as the ones in your blog? What do you do?

Inside organizations, for internal support, generative AI is likely safely hidden when it comes to the use of IP gathered from others. A novice event planner likely scrubs through notes, references textbooks and online content, and confers with colleagues as they develop their first event plan. That an AI-generated event plan uses copyrighted material from books and blogs to offer up a suggestion will likely never be discovered, and it probably isn’t any more of an issue than a person using copyrighted material for the same purpose.

Books and blogs, after all, are meant to inform and teach, and they can only serve their purpose if people use them. That said, the AI offering that advice publicly, without attribution, is a very different case.

Can generative AI tell me how it derived its answer? 

Generally, no. Generative AI models are black boxes, meaning that the data and reasoning applied to an answer are opaque to the person submitting the query and to the developers. Machine learning models that fail often need to be abandoned because they can’t be fixed. A new model has to be built with new data that scrubs out whatever data taught the model to behave badly.

If developers do start playing with constraining outputs, it isn’t clear what that investment will look like over time. What happens if ChatGPT somehow gets part of its model entirely wrong? Can that piece be turned off? Microsoft seems to be moderating Bing’s responses, but how it does that isn’t clear. It also isn’t clear whether meddling with outputs eliminates useful responses, because the person tweaking them has introduced a bias against certain outputs, thereby reducing their overall value.

Interestingly, ChatGPT’s “capabilities” pre-query list states: “Trained to decline inappropriate requests.” It does not say that ChatGPT is incapable of answering “inappropriate” requests, just that it has been “trained to decline them.” Who determined “inappropriate”? And what is lost in those inappropriate filters? Biblical passages about stoning? Information about Lolita? Gender coaching? Weapons cleaning instructions? As with much of generative AI, the definitions are vague, and most queries into its models return no results.

Why were we so worried about AI? 


There is a lot of hype and novelty around generative AI. Because it does so much, it has become the new Angry Birds or “What Color Is This Dress” distraction. Unlike those things, it will probably have some role in creating business and personal value, but the amazement with generative AI will likely die down to a murmur at some point as its limits get defined, it fails to live up to promises, or it becomes hidden behind tools, as is already happening with tools like Miro.

In Miro, generative AI can create a response, for instance, to a node on a mind map. Very cool, but a step removed from the wild, ask-ChatGPT-anything world—a feature that looks much more like Excel auto-filling month headings.

Unfortunately, science fiction has primed people to see AI as a big bad. It may also become a scapegoat in the future for any number of political or social movements, which will either bolster its standing or erode it, depending on the outcome.

Why are we accepting mediocrity? 

Because generative AI purposefully avoids taking a position, it offers up lowest-common-denominator responses. In the brisket smoking example above (and see the actual ChatGPT brisket smoking answer below), the responses, because of the statistical mechanism, remove the subtlety of innovation. The inclusion of Hungarian smoked paprika may be a subtle shift that gets missed, as may the use of a non-traditional wood. The general instructions will likely produce an edible smoked brisket, but it is clear that they will also produce a mediocre smoked brisket.

This same principle holds true for any output from ChatGPT. If it is reasonable and functional, it is also probably safe and average.
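
A toy way to see that averaging effect: aggregate many "best brisket rub" lists and keep only what most of them share. The data below is invented for illustration, and real language models do not literally count ingredients, but the statistical pull toward the common middle is similar: the one inspired outlier falls below the frequency threshold and vanishes from the "consensus" answer.

```python
from collections import Counter

# Invented sample: each list is one blogger's "best brisket rub."
rub_lists = [
    ["salt", "black pepper", "garlic powder"],
    ["salt", "black pepper", "paprika", "garlic powder"],
    ["salt", "black pepper", "Hungarian smoked paprika"],  # the innovator
    ["salt", "black pepper", "onion powder"],
    ["salt", "black pepper", "garlic powder", "cayenne"],
]

counts = Counter(ingredient for rub in rub_lists for ingredient in rub)

# Keep only ingredients that appear in a majority of sources,
# roughly what a statistical synthesis favors.
threshold = len(rub_lists) / 2
consensus = [item for item, n in counts.items() if n > threshold]

print(consensus)                           # ['salt', 'black pepper', 'garlic powder']
print(counts["Hungarian smoked paprika"])  # 1 -- the subtle innovation is averaged out
```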

When it comes to grading academic work, ChatGPT is forcing professors to ask if they are asking the right questions. Many papers prove little more than long-form multiple-choice and fill-in-the-blank exercises—they test fact recall, research prowess and organization, all “skills” at which ChatGPT excels.

While becoming a doctor or getting an MBA is a clear accomplishment, many people become doctors, and even more people receive MBAs. It could be argued that competence is more important than excellence, and competence is what these tests look for. Tests aren’t looking for excellence; they identify people who have obtained a shared view of the world and a common knowledge base.

That ChatGPT can answer questions as well as students does not imply it is competent to act, for instance, as a doctor—being a physician is an exercise in being a whole person, with empathy and insight, not just factual knowledge. Residency forces doctors to demonstrate their knowledge and hone their skills, a feat that AI cannot engage in at this time.

Can AI offer business advice of equal quality to a person with an MBA? Yes, to an extent, in the same way that it describes smoking a brisket, with a list of basic ideas that any person with an MBA would offer up. But hopefully, the person with the MBA wouldn’t stop with the list; they would provide advice based on their experience with what works and what doesn’t. Those details and variations, while perhaps available to ChatGPT from a blog, would likely be homogenized out of an initial response, and out of follow-up responses as well.

Why does the AI reflect a bias in its data but no opinion about anything? 

The predictive algorithm delivers the best approximation of an answer that, given the corpus of content the generative AI has digested, makes “sense” within the context of the prompt. Generative AI has no memory and no intent, so it does not have a belief system from which to assert an opinion. It does, however, reflect bias derived from its data set. It is even imaginable that an LLM could be built purposefully to reflect a biased data set.

At this point, however, generative AI systems seek to eliminate bias, but in doing so, they also eliminate all points of view or opinions. 

As most humans know, asking a person for the “best” something usually yields an opinion, but generative AI, when asked about the “best,” produces not a recommendation but the response most aligned with the prompt.

Because the AI has no experience and it statistically smooths its inputs, it is incapable of having an opinion about the best anything—even though its inputs include hundreds or thousands of “best” content items. What the statistics will show is that “the best” may be an outlier. ChatGPT may find commonality among the “best” lists and include those elements in its response, but it will be impossible to differentiate the “best” elements from the merely necessary elements.

While the reflection of human bias and prejudice can be trained into a model, generative AI, even more crucially to the question of “artificial intelligence,” does not hold a point of view on anything. As its developers seek to rid the system of bias from human data, they also remove the most human of features: having a personal point of view.

Is AI going to kill innovation when we spend more time doing the same things with AI guidance than inventing new things for AI to learn? Or…What happens when generative AI starts to cannibalize its own outputs? 

This is a concern for a future point in time, but it is a legitimate concern. LLMs are trained primarily on human-generated content. If the proposition comes true that AI can generate much of future content to enhance human productivity, does the AI then not start processing its own outputs, reinforcing its own mediocrity and averageness and creating, over time, even more average outputs?

This is an area where early scenario work suggests human pushback on generative AI, with creatives, inventors and entrepreneurs purposefully not employing generative AI or perhaps reacting against it.

Is it OK to do my work without using AI? 


Hopefully, the answer in most cases will be “yes.” Skilled people with deep knowledge are not the target for generative AI. They may be the recipients of other “AI” technology, such as learning models that help interpret CT scans or identify cancer cells from a biopsy. 

Those tools use machine learning to improve pattern recognition—they do not recommend treatment but identify causes more effectively because, again, of the data. They have more examples, with more variations, than humans can easily recall—they become specialized instruments that augment memory.

Once an oncologist, for example, identifies cancer, they may use, again, a specialized system to suggest treatments, trained on treatment approaches. I don’t think these systems qualify as “AI” in the way people see ChatGPT; they are memory enhancement tools, a way to enhance retrieval.

The generative AI community might argue that the algorithms used in ChatGPT could consume all of these databases and offer a more generalized diagnostic tool. And I have little doubt that they would be right. But it would be the same data served up through a new algorithm. I don’t believe that an over-generalized ChatGPT that sucks in ever more precise knowledge about specialized domains is the right answer.

And to the point made in the first question above, which commercial tools know what, and how well they know it, will be an ongoing concern. A specialized, purpose-built tool will generate more trust and will likely find FDA approval much easier to obtain than a fraction of a large knowledge base that is difficult to test in isolation.

On the more mundane level, there will be a period of experimentation, some might say distraction, as people attempt to use generative AI to enhance their jobs, for example, to see if it can really do what they do or make them better at what they do. People who work in repetitive process areas, where automation has not only threatened but eliminated jobs over the years, are most likely to find the templatized nature of generative AI responses an alternative to, well, using templates. But these are also the least creative of roles, ones where process and repetition lend themselves to automation.

The promise of generative AI is to help people in jobs where they need to respond to a variety of inputs and use their skills, knowledge and experience to respond to those inputs. Which sounds, not surprisingly, like ChatGPT’s mission. Like most tools, if generative AI offers value and produces consistent, reliable results, then there is no reason not to use it. But that means defining what consistent and reliable mean in context and monitoring for ongoing delivery at acceptable quality levels.

Unlike tools with explicit code that can be checked for errors and corrected when one arises, generative AI will require monitoring much like a new employee—unfortunately, unlike people who can rapidly change their mental models locally, if the generative AI makes a mistake, it may be beyond the ability of the manager to correct it. How generative AI vendors handle catastrophic and even minor errors will be a test of their commercial models.
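
Because a new model release can quietly change behavior, that monitoring can be as simple as re-running a fixed set of known prompts after every update and flagging anything that drops below an agreed quality bar. The sketch below is a hypothetical outline, not a vendor feature; `generate` and `quality_score` stand in for the model being called and for whatever a business decides “acceptable” means in its own context.

```python
# Hypothetical regression check: after each model release, re-run the same
# fixed prompts and flag anything that falls below an agreed quality bar.
# `generate` and `quality_score` are stand-ins for the vendor's API and for
# whatever review process a business defines as "acceptable."

from typing import Callable

def check_release(
    model_version: str,
    eval_prompts: list[str],
    generate: Callable[[str], str],
    quality_score: Callable[[str, str], float],
    acceptable: float = 0.90,
) -> bool:
    """Return True only if every canned prompt still meets the quality bar."""
    failures = []
    for prompt in eval_prompts:
        response = generate(prompt)
        score = quality_score(prompt, response)
        if score < acceptable:
            failures.append((prompt, score))
    if failures:
        print(f"{model_version}: {len(failures)} prompt(s) fell below the bar")
    return not failures
```

Run against the same prompts before and after a release, a check like this at least surfaces the case where something that appeared to be working suddenly stops working.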

It’s all about the data

In a recent New Yorker article, Jaron Lanier makes some astute observations that challenge the “intelligence” part of “artificial intelligence” (see There is no A.I.). He also argues for “data dignity” that would allow generative AI models to reveal their source data, an idea that would have made a lot of sense a few years ago.

Unfortunately, that is not how the current models are built, and it would take a complete re-invention of the underlying models to incorporate that idea. My guess is that at least the current vendors aren’t going to do that as they watch their valuations hit unicorn status without this accommodation to transparency.


The Semantic Web sought to imbue the net with a built-in intelligence driven by metadata. If Internet practitioners had widely adopted the Semantic Web, generative AI developers would have been inclined, if not forced, to incorporate that metadata into their models. Given its sparse implementation, they went with raw content, eschewing any metadata that didn’t become part of their models—models that cannot be queried about their content.

OpenAI’s Sam Altman has suggested (see OpenAI’s CEO Says the Age of Giant AI Models Is Already Over) that LLMs may have reached their limit: “I think we’re at the end of the era where it’s going to be these, like, giant, giant models.” Future “AI,” he suggests, will require new ideas.

Generative AI vendors aren’t likely to pause, but they may also find that the limits of LLMs get exposed just as quickly as the models were adopted.

Perhaps those limits will create a new generation of organizations that apply the lessons of generative AI to create a more open, transparent model. But if Lanier is right, it may take rethinking all existing content, and that probably isn’t going to happen either. So as Altman suggests, we’re going to need some new ideas.

ChatGPT’s response to my query on the “best way to smoke a brisket.”

Smoking a brisket can be a complex process, but the following are some general guidelines that can help you achieve a delicious result:

  1. Choose the right brisket: Look for a well-marbled brisket with a thick layer of fat on one side. The fat will help keep the meat moist during smoking.
  2. Prepare the brisket: Trim any excess fat or silver skin from the brisket, and season it generously with a dry rub or marinade of your choice. You can also inject the brisket with a liquid marinade to add flavor and moisture.
  3. Preheat your smoker: Bring your smoker to a temperature of around 225-250°F (107-121°C) using your preferred fuel source. Many people prefer to use wood chips or chunks for flavor, but you can also use charcoal or a combination of both.
  4. Smoke the brisket: Place the brisket on the smoker with the fat side facing up. Smoke the brisket for several hours, depending on its size and thickness, until the internal temperature reaches around 160°F (71°C).
  5. Wrap the brisket: Once the brisket reaches the desired temperature, wrap it tightly in butcher paper or aluminum foil. This will help keep the meat moist and tender as it finishes cooking.
  6. Finish cooking the brisket: Place the wrapped brisket back on the smoker and continue cooking until the internal temperature reaches around 195-205°F (91-96°C). This can take several more hours, so be patient and keep an eye on the temperature.
  7. Rest the brisket: Once the brisket reaches the desired temperature, remove it from the smoker and let it rest for at least 30 minutes before slicing. This allows the juices to redistribute throughout the meat, resulting in a more flavorful and tender brisket.

Remember, smoking a brisket is a long and slow process that requires patience and attention to detail. But with the right technique and a little practice, you can create a mouth-watering brisket that will impress your friends and family.

Read more at SeriousInsights.net >

Daniel W. Rasmus, the author of Listening to the Future, is a strategist and industry analyst who has helped clients put their future in context. Rasmus uses scenarios to analyze trends in society, technology, economics, the environment, and politics in order to discover implications used to develop and refine products, services and experiences. He leverages this work and methodology for content development, workshops and for professional development.