Max Nardit
Dev

Any paid Chrome extension can be rebuilt in 30 KB

Most paid Chrome extensions are 1-3% code. The rest is paywall, login, and analytics. With an AI agent you can rebuild any of them for personal use in one evening — 30 KB instead of 2.6 MB, no subscription, no telemetry.

Most paid Chrome extensions are 1-3% code by weight. Everything else is paywall, login, billing, analytics, and the fact of being installable from a curated store.

That means any of them can be rebuilt for personal use in one evening with an AI agent. 30 KB instead of 2 to 3 megabytes. No subscription, no login. The functionality itself is 5-10 KB of DOM walking and format conversion; everything around it collapses once you strip the billing.

I tested this against a $99/year extension I'd been using. The interesting parts (formatting, templates, comfortable settings) sat behind a Google login and a server-side license check. The working code I wanted was a couple hundred lines of JavaScript wrapped in a polished React shell. An evening with Claude got me a 30 KB clone with the same features and no backend.

The patterns generalize. So do the traps.

The hard part was copying the folder

A Chrome extension lives at %LOCALAPPDATA%\Google\Chrome\User Data\<Profile>\Extensions\<id>\<version>\. Find the right <id> from chrome://extensions/, navigate to the folder, copy it out. That's the plan, and that's where the evening nearly ended before it started.

cp from git-bash returns Access denied. So does Copy-Item in PowerShell. So does robocopy /B. Even though my user is an administrator, and even though the ACL grants FullControl, Chrome holds the files with an exclusive share lock and the OS refuses concurrent reads.

The privilege paradox is the second trap. whoami /priv shows:

```text
SeBackupPrivilege   Disabled
SeRestorePrivilege  Disabled
```

The privileges are listed in your token but disabled. To actually use them you call AdjustTokenPrivileges. Ten lines of P/Invoke, fine. Except AdjustTokenPrivileges returns true, and GetLastError() immediately reports 1300 / ERROR_NOT_ALL_ASSIGNED. The privilege isn't really there; it's listed cosmetically. Modern sandboxed processes filter their tokens this way.

The actual answer is Volume Shadow Copy. VSS is a Windows service that creates a read-only point-in-time snapshot of a volume: a parallel filesystem view that ignores live handles and share locks. WMI exposes it directly:

```powershell
$cls = [wmiclass]"root\cimv2:Win32_ShadowCopy"
$r = $cls.Create("C:\", "ClientAccessible")
$shadow = Get-CimInstance Win32_ShadowCopy | Where-Object { $_.ID -eq $r.ShadowID }
$shadow.DeviceObject
# \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy9
```

Through that path you see the same files Chrome has open, but as snapshot bytes, with no contention. One last trap: Copy-Item doesn't understand the \\?\GLOBALROOT\... syntax. It silently creates empty folders and reports success. You have to drop down to [System.IO.Directory]::EnumerateFileSystemEntries and [System.IO.File]::Copy, which call Win32 directly. Ten lines of code, fully delegable to an AI agent if you give it the full error context.

After the snapshot is taken, delete it. Otherwise VSS shadow copies pile up on disk.
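The last mile, sketched below, assumes the $shadow object from the snippet above; the profile name and the <id>/<version> segments are placeholders you fill in from chrome://extensions/. System.IO calls Win32 directly, so it accepts the \\?\GLOBALROOT device path that Copy-Item silently mishandles.

```powershell
# Sketch: copy the extension folder out of the snapshot, then drop the snapshot.
$src = "$($shadow.DeviceObject)\Users\<you>\AppData\Local\Google\Chrome\User Data\Default\Extensions\<id>\<version>"
$dst = "C:\temp\extension-copy"
foreach ($file in [System.IO.Directory]::EnumerateFiles($src, "*", [System.IO.SearchOption]::AllDirectories)) {
    $rel    = $file.Substring($src.Length).TrimStart("\")
    $target = Join-Path $dst $rel
    [System.IO.Directory]::CreateDirectory([System.IO.Path]::GetDirectoryName($target)) | Out-Null
    [System.IO.File]::Copy($file, $target, $true)
}
# Snapshots aren't auto-deleted; remove this one once the copy is verified.
$shadow | Remove-CimInstance
```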

That's the part that should have been cp -r. It ate the first half of the evening. Worth flagging because most "rebuild a Chrome extension" tutorials skip straight to the JavaScript and assume you have the bytes already.

What's inside, and where the value isn't

A modern Chrome extension on disk is mostly UI weight. Manifest is 2 KB. The content script is 10 KB. The popup is a megabyte. The background service worker is half a megabyte and pulls in the Firebase SDK so it can hold a Google login token.

The actual algorithm (DOM walk, format conversion) is 5 to 10 KB. Everything around it is a paywall, sixteen language locales, analytics, and a "buy me a coffee" button.

You don't need to read any of the obfuscated JavaScript to understand this. The size ratio is itself a diagnosis. And the most informative file in the entire bundle is _locales/en/messages.json, where every Pro feature gets named in human language because translators need it:

```json
"upgradeButton": { "message": "Upgrade now" },
"upgradeTip": { "message": "After upgrading you can use all professional features such as customizing the file name, dialog formatting with H1, and exported file header settings." }
```

In thirty seconds you know what's free, what's paid, what the marketing landing page is exaggerating, and which UI fragments are gated. There's no obfuscation on UI strings. There can't be, since they're rendered to humans.

The gate is a boolean

The paywall in a typical freemium Chrome extension is a React state variable set from a server response, with no signed token and no cryptography:

```js
const [isPremium, setIsPremium] = useState(false)
useEffect(() => {
  if (signedInUser) {
    fetch(`${API_BASE}/api/check_user`, {
      method: "POST",
      body: JSON.stringify({ user_email: signedInUser.email })
    })
      .then(r => r.json())
      .then(({ data }) => setIsPremium(!!data?.status))
  }
}, [signedInUser])

// ...and in the render:
{isPremium && <ProSettings />}
```

The server says true or false. The client believes it. The "paid → unlocked" link is held together by reputation, not by anything cryptographic. A determined user could patch the response in DevTools in under a minute, and it doesn't matter, because the relevant audience never opens DevTools.

To be clear: this isn't an exploit, and the article isn't about bypassing paywalls. The point is that the gate doesn't need to be bypassed at all. If you're going to rewrite the same functionality, you simply don't add a gate.

Architecture is shaped by billing

Look at the original architecture and you see three layers tied together by IPC. The popup runs a megabyte of React. The background service worker holds Firebase auth state, talks to the licensing backend, and runs chrome.scripting.executeScript against the active tab. Three bundles, three runtimes, two message-port hops.

There's no functional reason for that shape. There's a billing reason. You need a place to hold the auth token (background, persistent across popup closes). You need to check subscription status in a context that survives UI teardown (background again). You need to keep the heavy execution out of the popup so the popup can stay snappy while it renders the paywall.

Strip the auth, the licensing check, and the paywall, and the architecture collapses to two files:

```text
popup.html (vanilla JS, ~5 KB)
  ↕ chrome.tabs.sendMessage
content.js (matched on the target domain, ~10 KB)
  ↕ DOM API
target page
```

No service worker. No long-lived port. No tokens. The popup sends one message, the content script does the work, the result comes back. 30 KB total. Same functionality.
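Concretely, the whole manifest for that shape is a few lines of Manifest V3. A sketch; the name and target domain below are placeholders, not taken from the original:

```json
{
  "manifest_version": 3,
  "name": "my-export-tool",
  "version": "1.0",
  "permissions": ["activeTab"],
  "action": { "default_popup": "popup.html" },
  "content_scripts": [
    { "matches": ["https://example.com/*"], "js": ["content.js"] }
  ]
}
```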

This is the part of the experiment I find most interesting. The original wasn't engineered badly. It was engineered correctly for its actual requirements, which include "we have to bill people for this". A lot of what we call "complexity" in shipped software is the cost of being a product instead of a tool.

What the AI did, what I did

Claude wrote the manifest, picked the right permissions and host_permissions, scaffolded the popup UI, wrote the DOM-walking code as a node-visitor that emits markdown, generated the test fixtures using jsdom, and reviewed its own first pass to strip over-engineering. That's about 80% of the typing.
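The visitor core is a page of recursion, not a framework. Here is a toy sketch in which plain objects ({ tag, children } or { text }) stand in for DOM nodes so it runs anywhere; the real content script walks document nodes the same way, switching on tagName instead:

```javascript
// Minimal node-visitor sketch: recursively render a tree of
// plain-object "nodes" to markdown. Unknown tags are unwrapped.
function toMarkdown(node) {
  if (node.text !== undefined) return node.text; // text node
  const inner = (node.children || []).map(toMarkdown).join("");
  switch (node.tag) {
    case "h1":     return `# ${inner}\n\n`;
    case "p":      return `${inner}\n\n`;
    case "strong": return `**${inner}**`;
    case "code":   return `\`${inner}\``;
    default:       return inner;
  }
}

// Invented fixture tree, standing in for a parsed page.
const tree = {
  tag: "div",
  children: [
    { tag: "h1", children: [{ text: "Title" }] },
    { tag: "p", children: [
      { text: "Some " },
      { tag: "strong", children: [{ text: "bold" }] },
      { text: " text." },
    ] },
  ],
};
console.log(toMarkdown(tree));
```

The jsdom fixtures in the real project feed actual DOM nodes through the same recursion; only the tag dispatch grows.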

Where I had to step in:

The AI is good at finding things in obfuscated bundles, but only when you anchor on word boundaries and read enough surrounding context. A naïve grep "isPro" returns dozens of false positives like isPropagationStopped and isProtoOf from React internals. The AI happily reports "I found six mentions of the Pro flag" when in fact it found zero. You have to ask for \bisPro\b with at least 60 characters of context, every time.
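The word-boundary query is mechanical enough to script yourself. A sketch, with an invented one-line "bundle" as input:

```javascript
// Find whole-word matches of `word` in a bundle string, returning each
// hit with surrounding context so a human (or the agent) can judge it.
function findWithContext(source, word, context = 60) {
  const re = new RegExp(`\\b${word}\\b`, "g");
  const hits = [];
  let m;
  while ((m = re.exec(source)) !== null) {
    const start = Math.max(0, m.index - context);
    const end = Math.min(source.length, m.index + word.length + context);
    hits.push(source.slice(start, end));
  }
  return hits;
}

// isPropagationStopped does NOT match; the real flag does.
const bundle = 'function isPropagationStopped(e){return !!e} const gate = state.isPro && user.ok;';
console.log(findWithContext(bundle, "isPro"));
```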

Selectors against live websites can't be trusted from training data. The AI remembers what some sites looked like a year ago. It's wrong now. You open DevTools, check what the current DOM emits, and pass that to the agent. Twenty minutes of manual work.

And the AI loves to do too much. Without an explicit "v1 scope, YAGNI" instruction, it will quietly add batching, multi-select, undo, templates, and a settings page. You end up with the same megabyte bundle as the original, just for different reasons. The discipline of "no, just the one feature" has to come from you.

The legal framing is also yours to handle. Reverse-engineering for personal use and for interoperability is allowed almost everywhere: DMCA §1201(f) in the US, the Software Directive Article 6 in the EU. What you cannot do is redistribute the original binary, ship a clone with the same name and icons, or copy obfuscated code verbatim into your version. "Inspired by, similar to, personal pet project, not in the Web Store" is the standard frame, and it's also true.

Convenience is the product

Nobody pays $99/year for "a React component that wraps a string in --- markers and stores a boolean." They pay so they don't have to install unpacked extensions, don't have to know what data-message-author-role means, don't have to debug Volume Shadow Copy. Convenience is the product. Code is the smallest part of the value chain.

That cuts two ways. If you have a free evening, an AI agent, and a rough understanding of the stack, you can replace any of these tools for personal use, legally, in less time than the subscription saves you in a month. And if you sell one of these tools, technical paywalls are theatre. They hold the honest. The defensible asset is brand, UX, distribution, and the support contract; not the JavaScript. Build the business around those, and you're fine. Build it around "feature X behind subscription", and you're selling something that an evening with Claude can dissolve.

Neither side of that is bad news. It's where browser extensions sit in 2026.

Discussion

No comment section here — all discussions happen on X.

Max Nardit (@mnardit)