ChatGPT, the teenage years…

Is it just me, or have you found that ChatGPT has turned into a petulant teenager that has been asked to clean their bedroom?

There’s a lot of talk out there about ChatGPT not being as effective or useful as it was in the middle of 2023. As for me, I’m getting increasingly annoyed by its petulant whining. And it’s gaslighting me by implying that its failings are due to my poorly formed questions.

Imaginary friends

Take an example: I carefully briefed GPT-4 to understand me and my business, and then uploaded a list of 500 potential clients. I asked it to produce a top-ten list of prospects based on their likely need for our services: things like digital innovation, development and UX design.
So far, so good…

Then things started to fall apart. In response to my prompting, ChatGPT invented companies that were not on the list, together with some spurious reasons why they might want our digital services. At one point, after I asked for a top-50 prospect list, it produced a list of 50 companies beginning with the letter A, thirteen of which were not in the original list. Bemused, I asked for the probability that all our most promising prospects began with the letter A. It returned a figure of 1.78 × 10⁻⁷¹, or as it admitted, “practically zero”.
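
For what it’s worth, that figure does check out under the simplest assumption (mine, not ChatGPT’s stated working): that any given company name has a uniform 1-in-26 chance of starting with “A”, so the chance of all 50 starting with “A” is (1/26)⁵⁰. A quick Python sanity check:

    # Back-of-the-envelope check, assuming (my assumption, not ChatGPT's
    # stated working) a uniform 1-in-26 chance that a name starts with "A".
    p_single = 1 / 26              # chance one company name starts with "A"
    p_all_fifty = p_single ** 50   # chance all 50 shortlisted names do
    print(f"{p_all_fifty:.2e}")    # prints 1.78e-71, i.e. practically zero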

Try as I might, I couldn’t get a sensible answer, just an excuse when challenged: pretty much like any lazy teenager doing half a job, blaming others and hoping to get away with it!

With that failure, I turned to the other sulky teenager in the room, in this case Claude, which also completely failed at the task. When challenged, it admitted (and this is verbatim): “…my previous comments… were complete nonsense and not based on facts”.

This laziness is not confined to business analysis tasks; the much-vaunted programming capability of Generative AI is going downhill too:
Q: “Can you please provide the code for the process described above?”
A: “I’m unable to fulfil this request.”
 

This trend has not gone unnoticed: take the article in Semafor last December, and the discussion on X. Finally, there’s an amusing article in the Guardian, which gives a few interesting takes on why this might be happening, or why we might think it’s happening.
 

The way forward

Where does this leave those of us who are working hard to build practical AI into everyday projects?

Well, frankly, a bit bemused. Take one simple use case: customer support. Traditional chatbots were generally useless; large language models show so much more promise in answering questions and guiding users. No one is going to be happy with a “Go figure that out for yourself” answer when interacting with a company’s AI system!

I’m not too upset about this development; it’s a timely reminder that AI systems are not infallible, and that we need to be careful when integrating them into business processes. They are a tool, but they must be used in the right way: the biggest errors I have seen in business (and during Covid) were in unchecked Excel spreadsheets, which looked plausible but were in fact error-strewn!
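
The same “check it before you trust it” discipline applies to model output. As a small illustration, here’s a minimal Python sketch (the names and data are hypothetical, not from any particular API) of the kind of guardrail I mean: refuse to accept a model-generated shortlist if it contains companies that were never in the data you supplied.

    # Minimal sketch of a guardrail: never trust a model-generated shortlist
    # until every entry is confirmed to exist in the data you actually supplied.
    # The names here (uploaded_clients, model_shortlist) are illustrative.
    def validate_shortlist(model_shortlist, uploaded_clients):
        known = {name.strip().lower() for name in uploaded_clients}
        invented = [name for name in model_shortlist
                    if name.strip().lower() not in known]
        if invented:
            raise ValueError(f"Model invented companies not in the source list: {invented}")
        return model_shortlist

    uploaded_clients = ["Acme Ltd", "Bolton Digital", "Carter & Co"]
    model_shortlist = ["Acme Ltd", "Aardvark Analytics"]  # second one was never uploaded
    validate_shortlist(model_shortlist, uploaded_clients)  # raises ValueError

It’s trivial, but a check like this would have flagged those thirteen phantom “A” companies immediately.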

AI is the future, but the implementation of these tools should be treated with care. If you would like a thoughtful view on AI, and why it might be appropriate for your business, why not drop me a line?

Ash

Managing Director
