Storytelling

Nuances of AI — How to Simplify the Complex Concepts of AI

November 13, 2023 · 7 min read
Photo by Jeff Sheldon on Unsplash
The future is already here – it's just not very evenly distributed.
William Gibson

“I’m hungry. What can I make with a parsnip?”

Words guaranteed to spike the stress levels of any mother. The child in question: a university student. These words usually trigger a frantic Google search, looking for quick and easy parsnip-based recipes to feed a starving twenty-year-old.

The story is true. The parsnip, irrelevant.

What is relevant is how we use Google. Faced with the same situation, you would type in the search bar, “parsnip recipe.” Google autocomplete might helpfully suggest “parsnip recipe ideas,” “parsnip recipes roasted,” or “parsnip recipes easy.” You click on “parsnip recipes easy,” aware of your son’s culinary skills. If you have used Google autocomplete, congratulations, you’re already using AI.

Meanwhile, back to the parsnip.

You know how the story goes—time whiled away clicking on food blogs, scanning through recipes, only to discover you don’t have Italian Seasoning.

Click. Scan. Repeat. Click. Scan. Repeat. If only there were an easier way. Something magical that could work on our behalf, step in and accurately predict what we need, and guide us on the way.

There is. It’s here. AI.

Magically accurate prediction.

Science-fiction genius Arthur C. Clarke, he of the plummy British accent, horn-rimmed glasses, 2001: A Space Odyssey, and Rendezvous with Rama fame, defined three “laws” of prediction. The most famous, his third law: “Any sufficiently advanced technology is indistinguishable from magic.” For non-experts, magic is a good approximation of AI.

Magic can explain how AI works.

Large Language Models (LLMs) like Google’s Bard, OpenAI’s ChatGPT, or Anthropic’s Claude predict the next token. Picture the token as a magic coin. On one side of the magic coin is a dumb guess; on the other is a really good guess. The mystic ingredient is that the coin, more often than not, lands on the side of the really good guess. But there’s a downside. The coin isn’t that big. A token only has room for a word, or a few characters. When the AI you’re chatting with is “speaking” to you, it’s rapidly flipping its magic coin, one flip per token.

Each flip predictably charmed, an artificial sleight of hand appearing as thought.

As an AI progresses through its training, the coin’s odds of unnervingly accurate prediction increase—to the point where it lands, almost all the time, on the side of a really good guess. A few good guesses turn into intelligent sentences. Strings of them turn into coherent answers.
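To make the coin flip concrete, here is a toy sketch in Python. The probability table is invented for illustration, and each entry looks back only one word; a real LLM learns billions of parameters and weighs thousands of preceding tokens on every flip.

```python
import random

# A toy "magic coin": for each current word, a hand-written table of
# odds over possible next tokens. Real models learn these odds from
# mountains of training data; these numbers are purely illustrative.
NEXT_TOKEN_ODDS = {
    "the":     {"sky": 0.6, "parsnip": 0.3, "banana": 0.1},
    "sky":     {"is": 0.9, "was": 0.1},
    "parsnip": {"is": 0.7, "sings": 0.3},
    "is":      {"blue": 0.8, "falling": 0.2},
}

def flip_magic_coin(word: str) -> str:
    """One coin flip: sample the next token from the odds table."""
    odds = NEXT_TOKEN_ODDS.get(word, {"...": 1.0})
    tokens, weights = zip(*odds.items())
    return random.choices(tokens, weights=weights)[0]

def generate(start: str, flips: int = 3) -> str:
    """String coin flips together into a 'sentence'."""
    words = [start]
    for _ in range(flips):
        words.append(flip_magic_coin(words[-1]))
    return " ".join(words)

print(generate("the"))  # most often: "the sky is blue"
```

Run it a few times. Because the coin mostly lands on the good guess, “the sky is blue” turns up far more often than “the banana …”. That is the whole trick, performed at enormous speed.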

You can see the coin flip in action, and begin to understand how an AI works, by looking at approximated responses to a childlike question.

Why is the sky blue?

That question provides insight into how LLMs are trained.

An AI, in response to “Why is the sky blue?” might answer the question differently, depending on its stage of development.

[Table: approximated answers to “Why is the sky blue?” at successive stages of training.]

*Note: This table and its approximations were produced by ChatGPT-4.

The increasing cost of training explains why, despite all the AI-washing, few companies are large enough or well-financed enough to enter the AI arms race with their own Large Language Models.

Of course, picturing AI as a magic coin is magical thinking.

Reality is more complex. Multi-layered, interconnected neural networks talk to each other—algorithms pass along information—each “coin flip” is better informed by a rich history of past data, complicated calculations, and previous “coin flips.”

Clarke’s magic comes from advanced mathematical algorithms and statistical models.

Wanted: magicians.

ChatGPT hit the hype train like a hurricane.

ChatGPT launched in November 2022 and hit one million users in five days; Netflix took three and a half years to reach that milestone. Within two months, ChatGPT had an estimated one hundred million users, a mark that took a goliath like TikTok nine months.

Eight months later, ChatGPT’s user numbers are in decline.

Why? Concerns about privacy and regulation might be a factor. Getting burnt by hallucinations (AI just making stuff up) may be another. A third reason could be large companies banning the use of AI at work, among them Apple, JPMorgan Chase, Gartner, and Verizon, citing data-leakage concerns. A fourth: after the initial trial period, people are simply back to Google.

If you treat AI like Google, you’re doing it wrong.

My supposition is that we know AI is a thing, and we know it can do things, but most of us haven’t figured out how to make good use of it yet. We’re treating AI like Google: asking it questions and expecting immediate answers.

Which brings us back to the parsnip and leads to prompt engineering.

Asking Google, “What can I make with a parsnip?” brings up over six million results: far too much information to process in your lifetime. At the top, Google has used AI in the background to helpfully show you recipes, like “Simple Parsnips with Oil and Herbs,” or lists, like “25 Best Ways to Cook Parsnips.”

Then you run into the problem that, while you really like the sound of Carrot Parsnip Soup, you don’t have any carrots.

Rohit Chauhan, Executive Vice President of AI at Mastercard, explains why we’re stuck in an old way of thinking.

“It’s helpful to think of computers and AI as a triangle. The corners of the triangle are input, output, and algorithms. With a computer, you define the algorithm, and you feed the computer input. You get output.

An example is a calculator. The algorithm is mathematical; the input is two plus two, and the output you would get is four.

AI changes the triangle. You feed AI input. You define what output you would like. AI gives you the algorithm.”
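Here is Chauhan’s triangle as a minimal sketch. The calculator is the classic triangle; ask_llm is a hypothetical stand-in, not any real library’s API, for whichever chat model you point it at.

```python
# Classic triangle: a human defines the algorithm,
# feeds in input, and gets output.
def calculator(a: float, b: float) -> float:
    return a + b                   # the algorithm is fixed and human-written

print(calculator(2, 2))            # output: 4.0

# AI triangle: you supply the input and describe the output you want;
# the model supplies the algorithm (here, a recipe).
# ask_llm is a hypothetical stand-in for a real chat-model call.
def ask_llm(prompt: str) -> str:
    return "(the model's step-by-step recipe appears here)"

prompt = (
    "Input: a parsnip, pasta, garlic, olive oil, and various herbs. "
    "Desired output: act as an expert chef and give me a quick, tasty "
    "recipe for two that a novice can cook."
)
print(ask_llm(prompt))             # the algorithm comes back to you
```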

In the Google example, the algorithm is already defined: find published information on the Internet based on the keywords provided. You provide the input. What can I make with a parsnip? Google spits out six million results.

Don’t get me wrong. This is a magical leap from the dawn of the Internet, where we had to dial up a modem, click through AOL’s cooking section, and find a recipe. It’s a step up from books.

We are doing less work. But we still have work to do.

With the current iteration of Google, the work is click, scan, repeat. Click. Scan. Repeat. Sorting and sifting through results to get to something you want to eat and can cook.

With AI, it’s different. It’s input and output.

Go to ChatGPT, or Bard, or Claude, and feed it input: “I have a parsnip, some pasta, garlic, olive oil, and various herbs.” Define the output you would like: “Act as an expert chef. Give me something tasty that I can make for two people. I don’t have much time, and I am not a great cook.”

Generative AI will give you the algorithm—the recipe:

A step-by-step guide to Roasted Parsnip and Garlic Herb Pasta.

That’s a step up from Google, which gives you a thousand recipes and leaves you with more work.

To use AI well—to perform the magic—we have to think about input and output and start looking for algorithms.

This flips our thinking.

Try it.

If you haven’t played with them yet, try out Claude 2 by Anthropic, Bard by Google AI, and ChatGPT by OpenAI. They are all slightly different.

Each model has its own strengths and weaknesses. I’ve found Claude and ChatGPT to be the most useful of the big three.

My unofficial synopsis, as of the end of 2023, is that they’re like three tween siblings. Claude is the smart one, ChatGPT is the more versatile one, and Bard tries hard. Oh, and they all lie. “Hallucinating” is the technical term for a Gen AI going off the rails and making stuff up.

You still have to be smarter, more versatile, and try harder.

Try Claude 2 for accurate and unbiased responses, particularly when precision matters. Claude stands out for its focus on reducing bias. It seems slightly brighter than the others. It also has a more recent training-data cutoff, making its output more current.

ChatGPT, to me, is more versatile, especially with its array of plugins. Try ChatGPT for creative writing projects and brainstorming sessions where context and nuance are important. And it has just gotten even better with GPT-4 Turbo, custom GPTs, and enterprise services.

Bard, I think, is competent. It has some neat tricks. It has a double-check “Google it” feature, which can help catch hallucinations. Most useful in the world of distributed work is its ability to share chats and prompts. It’s the clunkiest of the three, but with Google behind it, I imagine it won’t stay that way for long.

To help you on your way, here’s a handy dandy prompt guide.

[Image: AI prompt guide. fassforward]

It’s time to learn some magic tricks with AI.

Gavin McMahon is a founder and Chief Content Officer for fassforward consulting group. He leads Learning Design and Product development across fassforward’s range of services. This crosses diverse topics, including Leadership, Culture, Decision-making, Information design, Storytelling, and Customer Experience. He is also a contributor to Forbes Business Council.

Eugene Yoon is a graphic designer and illustrator at fassforward. She is a crafter of Visual Logic. Eugene is multifaceted and works on various types of projects, including but not limited to product design, UX and web design, data visualization, print design, advertising, and presentation design.
