TLDR: Sure, ChatGPT is exciting and boasts enormous potential, but let’s put it in its rightful place – a tool. We reckon it’s important you don’t get complacent or shrug it off as the latest fad but instead know how to embrace it and harness its capabilities. While it may change how and when we work, it can’t stand in for human emotion, critical thinking, empathy, taking action, and accountability.

If you’re good at what you do and have a flexible and growth-oriented approach to life, we don’t think you should be worried that it’s going to replace you any time soon.

 

By now you've heard about ChatGPT and may have already asked it questions or had it write code or produce content for you. You’ve probably read LinkedIn posts discussing how powerful this AI chatbot is and how much of a game changer it will be. Others are enjoying it as a fun distraction – getting it to write long poems in the style of a pirate or pen passive-aggressive resignation letters.

On the other end of the spectrum, ChatGPT has successfully completed – and even passed – exams towards an MBA and a US medical licence, getting ethics and plagiarism experts tied up in knots.

The attraction of ChatGPT is that it can answer questions or complete complex tasks within seconds, in a way that sounds eerily like an intelligent human response.

But, like it or loathe it, every technology has its limitations. We don’t believe that ChatGPT is as powerful as it’s currently touted to be.

Here are eight reasons why.

 

1. It’s only as good as the questions you ask it.

If you ask ChatGPT an intelligent question, it’s likely to give you an intelligent (and plausible sounding) answer. But as humans, we have the capacity to hear the question underneath the question – the subtext. And humans are great at reframing or redirecting questions to get to the real root of the problem. Have a read of this previous blog for more information on the importance of not stopping at the surface questions.

We also often ask for clarification when a question doesn’t make sense. ChatGPT doesn’t – it assumes what you meant, and it doesn’t always get it right.

Our human skill of creative and lateral thinking means we also have the potential to read non-verbal cues like body language and tonality, and we can read the room based on context and circumstances. All this additional non-verbal information is vital in strategic decision making.

 

2. Just because you can do something doesn’t mean you should.

ChatGPT is an excellent tool for brainstorming potential solutions to challenges. But it can’t consider whether something is a good idea for your unique situation.

It’s a great generalist. But we all know that just because something is possible doesn’t mean it’s permissible – or the best solution. We need to account for other factors like budgets, timeframes, what’s been tried before, the context, an organisation’s risk appetite, and what resources are available.

Taking permissibility to the extreme, in December 2022 some users managed to bypass ChatGPT’s safety checks and get it to provide instructions for making a Molotov cocktail.

To its credit, ChatGPT is diplomatic and does give you a disclaimer – advising users to be cautious in how they interpret and use the information generated.

 

3. It can’t do the work for you. Or keep you accountable.

As strategist and thought leader Alicia McKay puts it: “ChatGPT is awesome, but it's unable to actually do the work for you – that article still needs checking, tweaking and posting.”

Also, it won’t follow up to see if you have checked it, edited it, and published it. Use ChatGPT as a starting point but not as an excuse to drop the ball.

Don’t forget that, as humans, we can also change our mind and course correct. One of today’s most vital skills is knowing when to stop investing in a solution that’s not working and change tack.

4. Strategists and creatives are safe. Answers can lack finesse and the human touch.

ChatGPT can write an essay or describe art in great detail. But arguably its response should just be a starting point for writers, SEO experts, strategists, and content creators. Yes, it can automate some tasks – but it can’t replace humans.

In one example, when asked to write a response to a question, it started three paragraphs with the filler phrase “it’s important to”, lacking the finesse and style of a human writer. Its answers can also be overly cautious when you don’t want to – or shouldn’t – sit on the fence, and instead need to take a clear and decisive position on a societal, ethical, political, or technological issue.

ChatGPT is not going to replace creatives. It says so itself: “I am designed to work alongside human writers to help them with their creative process.”

But it would be wrong not to mention that the more sophisticated your prompts, the more sophisticated the generated responses will be. This can work to your advantage – or it could put less experienced content creators out of a job.

 

5. Innovation and inspiration often happen by accident in the in-between places.

A ChatGPT server can’t daydream, have a long shower, or take itself paddle boarding. But these repetitive activities allow our prefrontal cortex to slacken the reins, giving us a dopamine hit and our brain a creative shot at forging new connections. Humans can ask, “what if?” and make connections that seem unusual but turn out to be extremely effective.

On the other hand, ChatGPT doesn’t need holiday pay, downtime, or a salary. It raises philosophical questions about the value we place on human creativity and the content we want to produce and consume. Are we going to see even more clickbait content replace the already dwindling deep dives and investigative writing?

 

6. Whose values? Whose knowledge? Answers appear objective but are often biased.

As it stands today, ChatGPT can still spread misinformation and embed existing biases because of the way it gathers and privileges information. It can present knowledge as static and objective, when its sources could be racist, sexist, ableist, transphobic and damaging to other groups. Have a read of this recent article, which discusses gender bias in athlete searches. For this reason, we should be wary of its potential for harm and read its responses with a critical mindset.

You may choose to use ChatGPT as a starting point for research and for developing policies, processes, and directives. But it should not replace the lived experience and representation of different minority groups, or the need for the most up-to-date and, preferably, peer-reviewed data.

 

7. Answers need to be verified as they can be nonsensical and irrelevant.

When I asked ChatGPT to outline its pitfalls, its top answer was this:

“Lack of understanding: ChatGPT is trained on a large dataset of text, but it does not have the ability to understand the context or meaning of the text it generates. This can lead to nonsensical or irrelevant responses.”

We can’t rely on ChatGPT to verify the accuracy of its own responses. The danger is that it writes in a way that sounds credible, but we shouldn’t depend on it to interpret the weight and context of what it produces.

When we use it for problem-solving or brainstorming, it can’t argue which solution is the best because it sits outside of context. But humans can’t escape context – it’s arguably one of the biggest factors in decision making.  

 

8. It can make our lives easier – but can it make the world a better place?

ChatGPT can take over many tasks – helping us draft emails, essays, and other communications faster, assisting us in brainstorming topics and solutions, and providing instant customer support at all hours. In many ways, it can make our lives easier.

Like other tools, it can be used to streamline processes and efforts and allow us to focus on more strategic, high-level projects.

But it can’t make moral or ethical judgements. It can’t replace empathy, critical thinking, or efforts to diversify our thinking and our workplaces.

In the wrong hands, it can be used to plagiarise, spread misinformation, and breach privacy.

 

 

 
