Daisy the AI Granny Is Probably This Year’s Best AI

AI use cases matter, and if you’re looking for one to learn from, look no further.

EXPERT OPINION BY JOE PROCOPIO, FOUNDER, JOEPROCOPIO.COM @JPROCO

NOV 21, 2024


I’m in love with an AI Granny and I don’t care who knows it.

I often talk negatively about generative AI, and with good reason. I helped invent the first commercially available platform for automating content in 2010, before we were even calling it natural language generation, and before LLMs were robust enough to turn NLG into what we call generative AI today.

Sorry for all the jargon. It comes with the territory.

I’m actually a fan of the pursuit of what we’re now calling AI. My problem isn’t what we’re chasing, it’s how we’re chasing it. And I get it. This happens all the time in tech. When so much money is backing so much perceived technological evolution, who cares whether AI actually changes our lives for the better, as long as it gets close enough to cash in?

Am I right? 

Well, last week, an AI application dropped that you may have missed, but I believe it’s one of the best use cases – in terms of changing lives for the better – that I’ve seen out of generative AI thus far. 

The app is an AI Granny. And she’s beautiful.

This is not hyperbole and I’m not trying to be cute. OK, maybe a little cute – it’s an AI granny. How can she not be cute?

But in all seriousness, the use case is what matters here.

What Daisy Does

If you’re not aware of Daisy, it’s a generative AI platform built by U.K. mobile giant O2 to take on gift-card scammers. 

Now, if you’re not familiar with the gift-card scamming profession, these are call centers that prey mostly on the elderly, gaining their trust and playing on their fears to bilk them out of thousands of dollars. This almost always happens through retail gift cards, thanks to their ubiquity, their convenience, and the fact that once the funds are loaded, they’re essentially unrecoverable.

It’s a disappointingly booming business. The scammers took in around $3.4 billion last year.

I spent about a week diving down the YouTube rabbit hole of scambaiters, the most popular of which is probably Scammer Payback. These folks are creators who use conventional technical fakery and hacking tactics to, at the very least, try to waste a scammer’s time in an entertaining fashion, if not identify them and shut them down. But shutting them down is usually hopeless, as the scammers are too numerous and located in countries that don’t have the will or the resources to crack down. You shut one down, two pop back up.

Watching these videos, I learned that, as with any good con, the key to a scammer’s success is gaining the victim’s trust: appearing as a white-hat support representative from a known software company like Microsoft, calling to help them “fix” a potentially costly mistake that the tech-savvy representative “detected.” The process varies, but it always requires gaining trust, which requires convincing the victim, which requires talking to the victim.

Daisy is more than ready to talk.

Daisy, trained in part by one of the more popular YouTube scambaiters, Jim Browning, mimics an elderly woman who has copious time to listen to the scammer and, as elderly folks sometimes do, veer the conversation into all sorts of tangents—her cats, her knitting, maybe her grandchildren—anything but her bank account and how the scammer might access it.

In a world where the bad guys far outnumber the good guys and never have to leave the shadows to do their dirty work, with little fear of getting caught, slowing them down is the next best thing to shutting them down. And it’s a perfect use case for AI.

There’s a lot for anyone to learn here, even if you’re not using AI.

Use Cases Determine Success or Failure

If there’s one thing I’ve learned from being on the bleeding edge of technology for almost 30 years—from the first days of the modern internet to the mobile evolution and now AI—it’s that just because something can be done doesn’t mean it should be done.

If you don’t know what I mean, just ask yourself if we really need another Star Wars movie. 

Here’s how I learned the difference the use case makes in determining success or failure.

Back in 2010, we built a first-of-its-kind AI (and yes, I know I’m casting a wide net when I use the term “AI,” but I want people to read this column, and a lot of folks still think genAI is the next generation of youth behind Gen Z). Anyway, our AI was birthed in the primordial soup of next-gen sports data.

We took game data, player data, team data, league data, even weather and location data, and turned that into all kinds of articles, including game recaps, previews, players-of-the-week, and so on.

To show off our new tech, we stood up more than 800 websites, one for each professional and college football, basketball, and baseball team in the U.S., and populated each website, up to five times a day, with our machine-written articles. 

It was super cool and people gave us all kinds of props. But you know what they didn’t give us?

Money.

AI Use Case Lesson No. 1 

As amazing as our never-done-before tech was, the simple truth was that almost every single one of these teams had at least one human writer covering them. Those humans could do things we could not, like describe the tension in the arena or get a player quote. 

But then we noticed something. When our tech selected a player-of-the-week from a small college in a lesser-known conference, that school would tweet about it, and sometimes even put out a press release.

That’s when I realized AI use case lesson No. 1: Those schools simply didn’t have the resources or the broad appeal to justify a human covering their teams. But those teams were still important to a lot of people.

Expand that thinking out. We weren’t meant to replace human writers – we were meant to write where humans could not, whether for financial reasons or, more important, logistical ones, especially in cases where the volume of data was overwhelming.

Like finance.

So we dropped our sports exclusivity and started “writing” anywhere there was a lot of data and a lot of individual impact, but where it was impossible for humans to digest enough of that data to write about it quickly and specifically. The obvious low-hanging fruit was fantasy football, and we signed both Yahoo and NFL.com. Then we did, among other things, quarterly earnings report articles for the Associated Press. Then it was off to the races.

Nobody Asked for Most AI

Again, my problem is not with the chase for AI, but with how we’re chasing it.

I’m in a position where I see a lot of proposals and plans and products built on AI, and a lot of them are built for use cases nobody wants, nobody needs, and nobody is going to pay for.

That doesn’t mean that the AI isn’t amazing, but much like our 800 websites, the prospect of replacing a human function usually goes one of two ways. 

  1. The AI redundancy loop. This is a term I’ve coined for what’s going on in recruiting and hiring right now, where technology, and now AI, have converged on the job search and added so much noise to an already poorly executed process that almost no one is getting hired, let alone the folks most qualified for the position.
  2. The use case gets dumbed down to reduce outliers, which creates more work for other humans (or other AIs) in areas related to the tasks the AI is performing. This shifts the true cost, making the AI look “cheap” to implement. Then, once those hidden costs are realized, the AI must be constantly improved to the point that the economics no longer make sense, that is, until economies of scale take over.

That last part is a big and risky bet. Ask Waymo.

AI Is Good!

Wow. I can’t believe I’m saying that. Well, yes I can. I’ve always believed it. I just thought it would be a lot longer before I could say it out loud again.

Thank you, Daisy!

Look, we’re about to hit what’s being called the AI wall, if we haven’t hit it already. And don’t worry, that’s just more of the same kind of jargon that started all this hype in the first place. But my hope is that hitting that wall, plus the general exhaustion growing around the AI hype, plus a few excellent under-the-radar use cases that fit today’s AI and its limitations, will get us back on the innovation track. If you join my email list, I’ll keep you posted.

Speaking of dumbing something down, I’ve been saying this for over a decade: At its heart, AI is just “if this, then that,” but really, really fast. As long as we remember that and don’t try to sell AI as magic, it might finally do a lot of those change-our-lives-for-the-better things after all.

The opinions expressed here by Inc.com columnists are their own, not those of Inc.com.
