Apple's 'lazy advert' highlights a much bigger problem with AI
You might’ve heard that Apple dropped a bunch of adverts for Apple Intelligence – a new AI built right into their devices. It can rewrite emails, create images and make Siri smarter. But is it any good?
The adverts are terrible. They’re all about lazy, immature people getting away with avoiding their work. In fact, they’re so bad that people have been slamming them all over social media.
Mostly, people are criticising Apple for appealing to the wrong audience or encouraging bad behaviour.
But I’ve got a different gripe.
Specifically, I’m irritated by the ad for their writing tools. After wasting time bouncing on his chair, spinning paperclips, and squandering perfectly good sticky tape, a lazy employee writes an email to his boss. He then uses Apple Intelligence to rewrite it into a more professional tone.
The irony is that the rewrite is worse
Let’s set aside what Apple is implying about its customers here. (Sure, it’s weird.) My gripe is that the original email is actually better than the rewrite. You might have missed the actual wording, as it’s only on screen for a few frames.
So here’s the original:
Hey J,
Been thinking, this project might need a bit of zhuhzing. But you’re the big enchilada.
Holler back,
💪Warren💪
Not great. I’d definitely lose the emojis and switch out the slang. But it’s short, clear and friendly in tone. It’s too chatty for most brands, but easy enough to fix.
Tweak a few words and it could easily be:
Hi J,
Been thinking that this project might need some extra work. But you’re the boss, so I’ll take your lead.
Shout if you need a hand,
Warren
I’d argue that there’s also some content missing here. (What does he mean by ‘extra work’?) But let’s just focus on the tone for now.
Apple’s rewrite is objectively bad
Here’s the AI rewrite:
Hey J,
Upon further consideration, I believe this project may require some refinement. However, you are the most capable individual to undertake this task.
Please let me know your thoughts.
Best regards,
Warren
There are five problems that jump to mind when I see this:
The tone is now inconsistent. It left the casual ‘Hey J’ greeting untouched, which makes the formal body sound odd.
It’s added abstract terms. Consideration and refinement are both abstract and confusing.
It added formal language. Words like however, capable and undertake. It also cut the contractions (which are easier to read).
It made it longer. The new version added 13 words (33 versus 20). The words themselves are also much longer.
The readability is worse. The original scored around 74.5 on the Flesch reading ease scale. (Fairly easy.) The rewrite got 49.3. (Fairly difficult.) There’s a quick sketch of how that score works after this list.
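If you want to check your own drafts, the Flesch formula itself is simple: 206.835 − 1.015 × (average words per sentence) − 84.6 × (average syllables per word). Here’s a rough Python sketch – the syllable counter is a naive vowel-group approximation (it overcounts silent e’s), so treat the output as a ballpark figure rather than a precise score.

```python
import re

def flesch_reading_ease(text):
    """Rough Flesch score: 90+ is very easy, below 50 is difficult."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    # Naive syllable estimate: count runs of vowels, minimum one per word
    syllables = sum(max(1, len(re.findall(r"[aeiouy]+", w.lower()))) for w in words)
    return 206.835 - 1.015 * (len(words) / sentences) - 84.6 * (syllables / len(words))

print(flesch_reading_ease("Been thinking, this project might need a bit of zhuhzing."))
print(flesch_reading_ease("Upon further consideration, I believe this project may require some refinement."))
```

Longer sentences and longer words both drag the score down – which is exactly what the AI rewrite did.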
All in all, the rewrite is harder to read. And it lacks personality. It’s not professional; it’s just mimicking thousands of documents tagged ‘professional’ in a database. Reports, letters and instruction manuals – most of which were already bad.
Professional shouldn’t be synonymous with bad. Professional should mean that it’s short, clear and polite. Not long, confusing and standoffish.
This is a major problem with AI-generated content. It doesn’t know what good actually looks like. You still need people to think about what they’re sending.
First, we need to ask: how does AI work?
Before we continue, you need to understand how we create an AI model. The basic principle is similar to evolution. Create loads of models, test them, and the most successful ones survive to the next round.
Imagine you want an AI to scan handwriting
You’d create your models, test them on loads of random handwriting samples, and whichever model gets the most correct wins. You don’t do any of it by hand, though. You build a program to create the models automatically. (Unfortunately, that has the side effect that you have no idea why a model was successful or not. No worries. That won’t be a problem later.)
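To make that loop concrete, here’s a toy version in Python. The ‘models’ are just random thresholds and the ‘handwriting samples’ are made-up numbers – a deliberately simplified sketch of create, test, keep the winners, repeat.

```python
import random

# Made-up "handwriting samples", each tagged with the correct answer
test_set = [(x, x > 0.5) for x in (random.random() for _ in range(1000))]

def accuracy(model):
    # Test a model against the tagged examples: more correct answers = fitter
    return sum((x > model) == label for x, label in test_set) / len(test_set)

# Round one: create loads of models automatically (each is a random threshold)
population = [random.random() for _ in range(100)]

for generation in range(20):
    # The most successful models survive to the next round...
    survivors = sorted(population, key=accuracy, reverse=True)[:10]
    # ...and spawn slightly mutated copies of themselves
    population = [t + random.gauss(0, 0.05) for t in survivors for _ in range(10)]

best = max(population, key=accuracy)
print(f"Best threshold: {best:.3f} (accuracy {accuracy(best):.1%})")
```

Real systems are trained with far more sophisticated methods, but the principle is the same: generate, test against tagged answers, keep whatever scores best.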
Eventually, you might want to scan handwritten equations. No big deal – they’re similar enough. You add some equations to your tests and keep on going. The model can pretty much do it already, so it only takes a few more rounds.
At a certain point, you’ve created a model that can detect lines pretty well.
Now, other people can build on it
Maybe someone wants to scan a map for roads and rivers. A rockface for fissures. A heart-rate monitor for anomalies. Rather than start from scratch, they use your model as a starting point. Congratulations, you’ve created a Foundation Model.
This is why we’ve seen a surge of new AI tools recently
People are building on these Foundation Models, like how we turned wolves into chihuahuas. They add new data and tweak what’s considered ‘right’.
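As a rough illustration, here’s what building on a foundation model can look like in code. This sketch uses PyTorch and an off-the-shelf image model as the foundation – the specific model and the two-class ‘road or river’ task are just assumptions for the example.

```python
import torch
import torch.nn as nn
from torchvision import models

# Start from a model someone else already trained (the foundation)
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the layers that already know how to detect lines and shapes
for param in backbone.parameters():
    param.requires_grad = False

# Swap the final layer for our own task: two classes, road or river
backbone.fc = nn.Linear(backbone.fc.in_features, 2)

# Only the new layer gets trained, on our own tagged examples
optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
```

All the expensive line-detection work is reused. The only thing that gets retrained is the last step: what counts as ‘right’ for the new task.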
The key to all of this is in the tests. You need an absolutely huge number of examples – all tagged with the correct answer. (How do you tag all that data in the first place? Well, one way is to crowdsource it. When you fill in a reCAPTCHA, you’re also helping train an AI.)
Put bad writing in, get bad writing out
When an AI creates content, it’s not writing based on what the evidence says or the lessons every professional writer will tell you. It’s writing based on the testing criteria. If the original data is flawed, the AI will be flawed. And if you get the average person to tag the data, it’ll learn the average answers.
That’s fine in some cases. We can all spot a cat in a picture. But could you tell exactly what breed of cat it is? This leads to a couple of problems.
Human bias slips through
Imagine a picture of a fox that looks an awful lot like a cat. If you asked a thousand people what it was, they might all make the same mistake and say the fox is a cat.
That might not sound like a big deal. But it can be a big problem when you ask questions that are more prone to human bias. Which of these people is the doctor? Which is the nurse? You might personally answer correctly. But would everybody? If the correct answer is the anomaly, the AI is going to ignore it.
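Here’s a toy illustration of how that plays out with crowdsourced tags. The vote counts are invented, and majority vote is just one common way of settling on a ‘correct’ label:

```python
from collections import Counter

# Hypothetical crowd labels for one image: a fox that looks like a cat
votes = ["cat"] * 940 + ["fox"] * 60

# Majority vote decides what goes into the training data as "correct"
label, count = Counter(votes).most_common(1)[0]
print(f"Tagged as: {label} ({count}/{len(votes)} votes)")  # Tagged as: cat (940/1000 votes)
```

The 60 people who knew better are simply outvoted, and the model never sees the right answer.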
It might not even be people getting the answer wrong. It might just be that the source data itself is filled with stereotypes. We’ve seen this with visual AI already. Ask it to draw someone at social services and it can default to racial cliches.
It isn’t malicious. It can just be that the data was badly tagged or there isn’t enough data to reflect the real world.
Not all topics are created equal
It’s very easy to gather data about certain topics. Ask an AI about the speed of light and you’ll (probably) get an accurate result. There are thousands of books that talk about it. But ask about the specifics of the Sumerian language and you probably won’t get accurate answers. We don’t know much about the topic and there aren’t many sources for the AI to reference.
The more specific you get into a topic, the less information there is about it. At a certain point, the AI can easily misinterpret information or hallucinate answers.
This isn’t so bad on very technical topics. It’s worse on topics where common knowledge is wrong. The AI can end up believing the same myths that everybody else believes – that urine is sterile (it’s not), that fortune cookies are Chinese (they’re Japanese), or that you can’t start a sentence with ‘and’ (you can).
Problems now will become worse later
If companies use AI to churn out content, we end up in a bad loop. AI developers scrape the internet for examples to include in their training. They pick up blogs that AI wrote in the first place, and the AI learns from that data.
This just reinforces bad ideas in the AI. For example, if AI thinks that a blog uses formal language and long sentences, it’s going to create blogs in that style. People publish those and the AI gets more data supporting the idea.
Remember, the more data there is on a topic, the more weight that data carries. Laziness will seep into the process. Lazy creators, who publish whatever the AI throws out every day, are naturally going to outweigh thorough writers who take their time to craft their work.
Not only will the quality fall, but everybody using AI will end up sounding the same.
You could accidentally plagiarise someone
Let’s imagine that you’re writing about a niche topic. There are only a few books on the matter. You ask AI to explain the concept in a few words.
Now, the AI doesn’t have much to work from. So it basically just regurgitates what one book said. You have no idea, because you never read those sources yourself. The end result is a report littered with plagiarism.
AI is just a mirror of what we’ve created. It doesn’t come up with original ideas. It predicts based on what we’ve fed into it.
Bad AI models will make bad writers
With all that in mind, you can see why the Apple advert was so infuriating. It’s encouraging thoughtless, lazy behaviour. Just click the button and trust the AI, it suggests.
But good writing is good thinking. You still need to ask yourself the same questions. You still need to be thorough. What’s my point? Do I need to include any extra information? Have I put my argument in a logical order? Is the tone appropriate for the audience? The AI isn’t doing that. That’s still your responsibility as the writer.
Writing is a skill. It takes practice and experience to understand how to get the idea in your head into someone else’s. Taking a shortcut means that you never learn that skill. You never learn to develop and refine an idea.
When is AI useful to a writer?
While there are definitely problems with AI, it is still a tool. You can cut yourself with a knife if you use it incorrectly. But that doesn’t mean you shouldn’t chop up your vegetables. Similarly, there are ways you can use AI responsibly.
Brainstorm ideas. All creators need stimulus to come up with ideas. You might need a hundred headlines before settling on one blog. AI can help speed that process up. You might not use the exact idea it suggests, but it might spark a better thought in you.
Research a topic. AI can pull information from multiple sources and summarise it for you pretty well. But you need to be careful. It doesn’t know whether a source is trustworthy, and it might just completely invent a fact. Double-check everything.
Plan your outline. You can figure out what you should probably include. But know that it will only give you the obvious information – topics and points that others have already covered. You should also make sure you’re putting those points in the right order. (Take this list as an example: it works better because it’s in the order you’d go through the writing process.)
Review your writing. Once you’ve done your first draft, you can ask AI to help you refine it. It can act as an editor. Ask it to tell you whether you should add or cut any content, if there’s a different way you could structure your points, or if your tone is consistent. But, again, be wary. AI can sometimes be a bit of a sycophant.
Create summaries and headlines. Lastly, AI can be quite good at distilling your content down into a headline or creating a meta description.
While these are tasks an AI model can help with, it shouldn’t replace the uniquely human aspect of your writing. If you understand how the AI model works, you can question the results you get. And decide for yourself what’s correct.