- We analysed 22,845 LinkedIn posts and measured how much each user edited the AI draft before publishing
- Pure AI content performs exactly at the author's baseline — LinkedIn does not suppress it
- Light edits (1–10% changed) produce the highest breakout rate at 22.2%
- Heavy rewrites (51%+ changed) crater performance to just 4.9% — humans ruin everything
- The sweet spot: refine 10%, trust the structure, and hit publish
If I had a nickel for every time some self-appointed LinkedIn “guru” or “growth hacker” proclaimed that AI-generated content is dead on arrival, I’d have enough money to buy Twitter and ruin it all over again.
Every day, my feed is clogged with these self-styled prophets spewing the exact same rhetoric: “The LinkedIn algorithm can detect AI!” or “AI slop is destroying your reach!” or “You must write every single word from the depths of your own bleeding soul if you want to get any leads!”
OK cool bro. But here’s the thing: it’s mathematically, categorically, and unequivocally wrong.
We all need to generate leads and drive growth. And we all know, deep down, that your team’s networks are a massive, untapped goldmine. If you could just get your experts, your sales reps, and your execs to post consistently on LinkedIn, your company’s influence would skyrocket.
But getting them to post is impossible. They lack the time. They lack the inspiration…
… and when you give them standard AI tools to help, they complain that the output sounds “fake” or “robotic.”
So, what do they do? They take the AI draft, and they rewrite the whole damn thing.
And then the post tanks. And they blame the algorithm. Rinse and repeat.
So everyone just assumes LinkedIn nerfs AI content, because “some guy on the internet” said so. But does LinkedIn actually penalise AI content? Or are humans just remarkably good at sabotaging their own success?
At Drumbeat, we love data. So we decided to find out.
Here’s the methodology behind our research
We looked at a massive sample of 22,845 LinkedIn posts generated and published through Drumbeat.
To be clear, these aren’t your standard ChatGPT copy-and-paste jobs. Drumbeat uses our proprietary AuthorDNA to ensure the first draft already sounds uniquely like the author. It captures their tone, their formatting quirks, and their level of professional snark.
But, because we give users the ability to edit these drafts before they hit “publish,” we were able to track exactly how much a user changed the AI’s draft, and correlate that edit intensity with the post’s final performance.
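For the curious: “how much a user changed the draft” boils down to an edit-distance ratio between the AI draft and the published text. Drumbeat’s internal measurement is more involved, but a minimal sketch in Python (using the standard library’s difflib, with hypothetical variable names) looks like this:

```python
import difflib

def edit_intensity(ai_draft: str, published: str) -> float:
    """Approximate the percentage of the draft changed before publishing."""
    # SequenceMatcher.ratio() returns a similarity score in [0, 1];
    # (1 - similarity) approximates the fraction of text that changed.
    similarity = difflib.SequenceMatcher(None, ai_draft, published).ratio()
    return round((1 - similarity) * 100, 1)

# e.g. edit_intensity(draft, final_post) -> 7.3 would land in the "Light" bucket
```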
Nerd alert: here comes the math.
How do you determine if a post is “Good”? You can’t just look at raw impressions, because the CEO with 50,000 followers is almost always going to outperform the junior SDR with 500 followers.
So, we normalised the data. We calculated a within-voice performance z-score for every post.
A z-score simply measures how many standard deviations an observation is from the mean. In human English: we measured every post against that specific author’s typical engagement baseline (impressions, reactions, comments, and reposts), not against everyone else’s.
We defined a post as “Good” when its z-score exceeded +0.5, meaning it meaningfully outperformed that author’s historical average. A positive z-score means the post beat the author’s own baseline; a negative one means it was a dud.
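For the nerds who want the maths as code: here’s a minimal sketch of the within-voice z-score, assuming a pandas DataFrame with one row per post and hypothetical column names (Drumbeat’s actual pipeline combines impressions, reactions, comments, and reposts into a composite engagement score):

```python
import pandas as pd

def add_within_voice_zscore(posts: pd.DataFrame) -> pd.DataFrame:
    """posts has columns 'author_id' and 'engagement' (a composite score)."""
    grouped = posts.groupby("author_id")["engagement"]
    # z = (post's engagement - author's mean) / author's standard deviation
    posts["z_score"] = (posts["engagement"] - grouped.transform("mean")) / grouped.transform("std")
    # "Good" = meaningfully above that specific author's own baseline
    posts["is_good"] = posts["z_score"] > 0.5
    return posts
```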
We then broke the 22,845 posts down into four categories based on how heavily the human edited the AI draft (there’s a quick code sketch of this bucketing right after the list):
- Unedited (0%): Pure AI content. The “set it and forget it” crowd.
- Light (1–10%): A quick tweak. Fixing a phrase, adding a specific stat, or other light edits.
- Moderate (11–50%): Changing a few paragraphs, or restructuring the post.
- Rewrite (51–100%): Taking the AI draft and essentially re-doing the whole thing from scratch.
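In code, the bucketing is nothing fancy. A sketch, reusing the edit_intensity percentage from earlier:

```python
def edit_bucket(pct_changed: float) -> str:
    """Map the percentage of text changed to one of our four categories."""
    if pct_changed == 0:
        return "Unedited (0%)"
    if pct_changed <= 10:
        return "Light (1-10%)"
    if pct_changed <= 50:
        return "Moderate (11-50%)"
    return "Rewrite (51-100%)"
```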
Here is what we found.
Chart: the percentage of posts that meaningfully outperformed the author's own baseline (z-score > +0.5), broken down by how much the user edited the AI draft. Light edits win by a landslide.
The Results: Humans ruin everything
Let’s look at the cold, hard numbers.
| Edit Intensity | Good Rate | Mean Z |
|---|---|---|
| Unedited (0%) | 13.4% | -0.004 |
| Light (1–10%) | 22.2% | +0.090 |
| Moderate (11–50%) | 17.1% | +0.038 |
| Rewrite (51–100%) | 4.9% | -0.085 |
Look at that bottom row. I mean, seriously, look at it.
When you cross into rewrite territory (51%+ of the content changed), your chance of getting a “Good” post craters to a miserable 4.9%, barely a fifth of the light-edit rate. The Mean Z drops to -0.085, meaning heavy rewrites actively perform worse than the person’s baseline average.
In other words: the more human you try to make the AI post by completely rewriting it, the worse it performs.
On the flip side, look at the “Light” edits. Just 1 to 10% of the text changed. This is the Goldilocks zone. These posts have a 22.2% breakout rate and a massive +0.090 Mean Z.
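(Quick aside for anyone reproducing this: given the post-level DataFrame from the z-score sketch above, the whole results table is one groupby. Column names are hypothetical again.)

```python
summary = posts.groupby("edit_bucket").agg(
    good_rate=("is_good", "mean"),  # fraction of posts with z > +0.5
    mean_z=("z_score", "mean"),
)
# good_rate comes out as a fraction, e.g. 0.222 -> 22.2%
```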
So, what is the data actually telling us?
Does LinkedIn penalise AI content? No. It penalises bad content (human and/or AI).
Let’s look at the Unedited (0%) posts. These are pure AI. If LinkedIn’s algorithm was actively hunting down and penalising AI content, you’d expect the Unedited group to be a bloodbath.
But it’s not.
Unedited posts have a 13.4% Good Rate, and a Mean Z of basically zero (-0.004).
What does this mean? It means pure AI content is perfectly, exactly, reliably average. It doesn’t get suppressed by some mythical algorithmic shadow-ban. It performs exactly in line with the author’s normal baseline.
This is because Drumbeat’s AI already knows the mechanics: how to structure a post, pace a sentence, and place line breaks for the feed.
But, pure AI lacks the messy, unpredictable, lived experience that makes people stop scrolling and actually comment.
Chart: how each edit intensity group performs relative to the author's own average (Mean Z). Positive = outperforming baseline. Negative = underperforming. Light edits are the clear winner; heavy rewrites actively hurt you.
Why light editing works the best
When users make small, targeted adjustments to a Drumbeat draft, they get the best results of any group.
Why?
They keep the structural and engagement signals Drumbeat’s models optimised for, while injecting the one thing that can’t be faked: their own two cents, the detail that makes the post “feel right”.
These lightly edited posts have the highest breakout rates because they are the best of both worlds. You get the algorithmic brain of a machine, infused with the unpredictable heart of a human.
It’s like buying a perfectly tailored, Savile Row suit, and then pairing it with your favourite vintage sneakers. The structure is flawless, but the personality is all yours.
When humans think they are smarter than the machine (Spoiler: they aren’t)
So why do the heavy rewrites (51–100%) fail so spectacularly?
Because, and I say this with all the love in my heart for my fellow marketers: most people are terrible at writing for the internet.
Think about it. Before ChatGPT, it’s not like the internet was filled with millions of Shakespeares and Molières. But we seem to have collectively forgotten this, and now we remember 2021 as if we had flying cars and every word on the internet was beautiful and authentic.
But… if anything, the quality of writing on the internet has gotten better since LLMs came into our lives.
The problem is the ideas. They may not be worse than they were, but they’re certainly not better. And a bad idea, whether written like Dickens or like a microwave manual, is still a bad idea.
So here’s what happens when you give a B2B professional an AI draft, and they decide to rewrite it. They don’t make it more engaging. They do the exact opposite:
They sand down the hooks. They remove the punchy, short sentences. They inject corporate jargon, passive voice, and meandering preamble.
The AI might write a hook like: “Stop wasting your Q3 budget on cookie-retargeting.”
Bob from Sales decides that’s too aggressive, so he rewrites it to: “In today’s ever-evolving digital landscape, it is imperative that we holistically evaluate our Q3 advertising expenditures to ensure synergistic ROI…”
Zzzzzzz. I legit fell asleep just typing that.
Heavy rewrites replace the pacing and phrasing patterns that drive engagement with conventional, forgettable language. The result isn’t necessarily “bad” in a grammatical sense, but it is reliably ordinary. It’s white noise. It’s the visual equivalent of beige wallpaper in a dentist’s waiting room.
When you rewrite the whole post, you aren’t out-smarting the algorithm. You are actively sabotaging the structural advantages the AI gave you in the first place.
The data shows that trying to out-think Drumbeat’s models costs you the upside.
The sweet spot: Refine, don’t rewrite.
If you take away one thing from this blog post, let it be this: stop trying to do too much. You’re probably not Hemingway, so don’t pretend to be (you’re good at other stuff, don’t worry!).
Your team is busy. They don’t have the time to sit around agonising over a blinking cursor. And, as the data proves, even when they do find the time, heavy editing will hurt your reach.
Here is the blueprint for B2B employee advocacy that actually works:
- Use smart AI (like Drumbeat): Don’t use generic ChatGPT prompts that make everyone sound like a 19th-century butler. Use a platform that understands AuthorDNA so the first draft is already 90% of the way there.
- Tell your team to spend 60 seconds, max: Tell them to read the draft. If there’s a word they wouldn’t use, change it. If they have a quick story to add, drop it in.
- Change 10%, and hit publish: Trust the machine on the structure. Trust yourself on the soul.
By embracing this workflow, you eliminate the friction that keeps your team from posting, and you get engagement that actually translates into business growth.
If your current strategy of manual, hand-wringing content creation isn’t getting you the reach you deserve, it’s time to follow the data.
And the data is clear: LinkedIn doesn’t penalise AI content. It penalises bad content.
Ready to stop rewriting and start performing?
Book a Demo