Marketing Automation Buyer's Guide

Generative AI: The Promise and The Pitfalls

Generative AI The Promise and The Pitfalls

Generative AI The Promise and The Pitfalls

As part of Solutions Review’s Contributed Content Series—a collection of articles written by industry thought leaders in maturing software categories—Neil Serebryany, the CEO and Founder of CalypsoAI, outlines some of the promises and potential pitfalls of generative AI technology.

Generative AI—the ability to generate new audio, images, and text by entering a query or command in a large language model (LLM), such as ChatGPT-4, or other machine learning models, like DALL-E—is a foundational technology. By this, I mean we’re watching in real-time as the power of computing technology produces new, “original” creative works has moved from the “imagine if” stage to a realized product. And not just any product, but a product that is an inflection point marking a fundamental change in how our society works, thinks, and operates at all levels, from broad commercial interactions to one-to-one human relationships.  

I can’t consider it anything other than a profoundly transformative social milestone. It might not rise to the level of humans learning how to create fire, but it arguably rises to the level of the creation of the steam engine. It’s not like the invention of the wheel, but maybe the first automobile assembly line; not written language, but the printing press, not electricity, but the light bulb. 

Only months old, the effects of Generative AI on the social fabric mimic those pivotal inventions. In the arts, sciences, heavy industry, medicine, technology, education, and other fields, the quantity, scope, and scale of new opportunities for creating products and services are impossible to comprehend. Every imaginable type of information is not only available to everyone who has access to the Internet; it’s available instantly. I doubt anyone isn’t awed by the speed of the responses.  

It is already making inroads into the legal profession, as at least one international Biglaw firm has announced it will use Generative AI to assist with writing contracts. In healthcare, it assists with everything from scheduling to diagnostics. In the mental health field, it’s expanding our ability to understand human loneliness and helping people attain autonomy and competencies they might not otherwise have. Clearly, its ability to accelerate positive change is staggering.  

And so, it is no surprise that this mind-bending invention has a dark side. The question that promptly began clouding the happy-path landscape is what to do about opportunities exploited in ways that aren’t positive, legal, or ethical. That version of Pandora’s Box was flung open almost immediately: roughly one month after OpenAI released ChatGPT into the wild, an “evil” alter ego—DAN, which stands for Do Anything Now—was created.

Within a few months, the field became crowded as more LLMs came online. College students began using them to create essays; users began creating original art by riffing and sampling existing works; and copyright holders started claiming blatant unauthorized and uncompensated use of works of all types. And researchers—as well as, more than likely, criminals—could get around built-in safeguards and obtain instructions for nefarious activities.

As a result, a chorus of doom-and-gloomers began to insist the technology couldn’t be controlled now that it was in the wild. What that really means is that it will be challenging to attempt to control it; very difficult. But doing nothing is not an option. We can’t, so to speak, let a nascent technology architect its own path forward. As an industry, we must establish, test, and apply universal controls and then diligently protect those controls and the algorithms enabling them from adversarial interference.

Our path must begin with the basics, meaning we must bypass the awe and acknowledge these systems can’t just be fast. Speed is secondary, at best. I’ll take it a step further and state that Google might agree that the primary trait every Generative AI system must possess to be a socially beneficial tool is trustworthiness. If a user—whether a lawyer, a physician, a commanding general, or a college student—can’t trust the answers provided by their AI system of choice, it doesn’t matter how fast those answers appear.  

Achieving Trusted status doesn’t only mean the information provided is factually accurate, although that is critical. Both users and responsible creators of the technology must be confident that their AI tools are not—overtly or subliminally—creating or reinforcing damaging biases or stereotypes, providing incorrect data, or urging the user to engage in unethical, inappropriate, illegal, or dangerous behavior.  

For an LLM to get to Trusted, it must be tested. The technology is called “generative” AI because it continuously expands the body of knowledge on which it depends. That means testing a Generative AI system cannot be a one-and-done proposition; testing must be rigorous,  exhaustive, independent, and ongoing.  

The entire AI system must also be secure. As it continues to grow its body of resources, bad actors will be determined to thwart every means deployed to maintain its reliability. This is especially important for niche or bespoke systems that serve specific constituencies. Like every other software-based system in use, LLMs must have fortified perimeter and monitoring, detection, and blocking capabilities, at the very least.  

An AI system operating without these minimum safeguards in place would be dangerous, and not just because of what threat actors might do. Every delay in developing and deploying controls means more ground is gained for the growing backlash that already includes calls for measures that overcompensate the current open, unfettered access to information. Much as we are obligated to protect the technology from threat actors, we must also preserve its limitless capabilities from well-intended actors who consider its very existence a threat.   

The path forward through this exhilarating new wilderness is just as chaotic as anyone could have expected, but let’s use that to our advantage. The time to focus on building the guardrails—what they must do, where they must be placed, how strong they can be, and how flexible they must be—is right now.  


 

Latest posts by Neil Serebryany (see all)

Share This

Related Posts

Marketing Automation Buyer's Guide

Marketing Automation Buyer's Guide

%d bloggers like this: