  Modernizing a Homemade PHP Blog With AI
Mon 23rd March 2026   
Using AI Efficiently

This is the second article in a series about using AI efficiently. In the first article we built tools to measure AI productivity with real session logs.

This time we'll look at a concrete case study: modernizing a 15-year-old homemade PHP blog in a single afternoon, with the help of Claude Code.

The blog you're reading right now was the guinea pig.

The starting point

This blog has been running since 2009. It's a flat-file CMS written in plain PHP — no framework, no database, no dependencies. Articles are text files with a custom markup language, served by about 1,000 lines of procedural PHP.

It worked. The code is basic and no-nonsense, but it's been rock solid for over a decade — which is more than I can say for the WordPress installation it replaced.

The problem is that "working" and "modern" are not the same thing. Over the years, the blog accumulated cruft: deprecated PHP calls, no RSS feed, no social media previews, unreadable URLs, duplicate files, and a syntax highlighting library from 2013 that didn't know what JSON was.

So one Saturday¹, I decided to fix all of it.

Setting up local development

First things first: I needed a way to test changes locally. The blog had always been edited directly on the production server via WinSCP, which is... not ideal.

Me: I was wondering if there was an easy way to run this PHP project on my Windows machine?

Claude's answer: install PHP via winget, then use the built-in development server:

winget install PHP.PHP.8.4
php -S localhost:8000

That's it. No Apache, no Docker, no configuration files. Open http://localhost:8000 in a browser and you're looking at your site.

I knew PHP had a built-in development server, but I assumed setting it up would be more involved than it actually is. In practice, the 8.4 package had a broken download link on winget, so I ended up installing PHP 8.5 instead — which immediately revealed a bunch of deprecation warnings. Good: that meant things to fix.

Time spent: ~10 minutes (mostly wrestling with winget package versions).

Fixing deprecated PHP calls

Running the blog on PHP 8.5 produced a wall of warnings:

Me: The local PHP server works, but there are quite a few errors. Could you check the problems? Ideally we want a fix that still works with the previous versions of PHP.

Claude's answer: identified the problem as an array_map() pattern that ends up passing null to trim(), used 7 times, and fixed all instances in one pass.

Deprecated: trim(): Passing null to parameter #1 ($string) of type string
  is deprecated in articles.php on line 55

The culprit was a pattern used 7 times throughout the code:

// Old pattern — deprecated in PHP 8.1+
list($type, $visual) = array_map('trim', explode("|", $parameters), ['','','']);

The third argument to array_map() was being used as a makeshift default-value array: when explode() returned fewer elements than expected, array_map() padded the missing positions with null, and trim(null) is deprecated in modern PHP.

The fix:

// New pattern — works on PHP 4 through 8.5+
list($type, $visual) = array_pad(array_map('trim', explode("|", $parameters)), 2, '');

array_pad() ensures the array always has the expected number of elements, filled with empty strings. It's been available since PHP 4, so this is fully backwards-compatible.
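
To make the mechanism concrete, here is what happens with a hypothetical one-segment input (an illustration, not code from the blog):

// Hypothetical input: a block declared as just "code", with no "|" separator
$parameters = "code";

// Old: array_map() pads the shorter array with nulls, so trim(null) gets called
list($type, $visual) = array_map('trim', explode("|", $parameters), ['', '', '']);
// deprecation warning on PHP 8.1+, $type = "code", $visual = ""

// New: pad with empty strings up front, so trim() only ever sees strings
list($type, $visual) = array_pad(array_map('trim', explode("|", $parameters)), 2, '');
// no warning, $type = "code", $visual = ""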

Claude found all 7 instances, understood the pattern, and applied the same fix everywhere.

Time spent: ~3 minutes (Claude reading the code + applying 7 fixes).

Adding an RSS feed

Users had been asking for RSS support for a while. I'd never implemented it because... I don't use RSS myself. But it's a standard feature that any blog should have.

Me: Users have been asking for an RSS feed for quite a while. I don't think that's very difficult to do, but I never had the time.

Claude's answer: read the blog engine code, proposed a plan (create rss.php + add autodiscovery link), and implemented it after I said "sounds good."

The result was a new rss.php file (sketched below) that:
  • Scans the articles folder for the 20 most recent posts
  • Strips the custom blog markup to produce plain-text summaries
  • Extracts the first image from each article for preview thumbnails
  • Falls back to the blog logo when no image is found
  • Includes full article content via <content:encoded> for feed readers that support it
  • Filters out work-in-progress and duplicate articles
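
Here is roughly what such a feed generator boils down to. This is a simplified sketch, not the actual rss.php: helper names like load_article(), strip_custom_markup(), first_image() and markup_to_html() stand in for the blog's real parsing code, and the sorting by filename is an assumption.

<?php
// Simplified sketch of a flat-file RSS 2.0 generator
header("Content-Type: application/rss+xml; charset=utf-8");

$base  = "https://blog.defence-force.org/";
$files = glob("articles/*.txt");
rsort($files);                              // assumption: higher article numbers sort last, so reverse sort puts the newest first
$files = array_slice($files, 0, 20);        // keep the 20 most recent posts

echo '<?xml version="1.0" encoding="UTF-8"?>' . "\n";
echo '<rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/">' . "\n";
echo "<channel>\n";
echo "  <title>Blog title</title>\n";
echo "  <link>" . $base . "</link>\n";

foreach ($files as $file)
{
    $article = load_article($file);                  // hypothetical: returns title, ref, body, wip flag
    if ($article['wip']) continue;                   // skip work-in-progress articles

    $summary = strip_custom_markup($article['body']);                      // plain-text teaser
    $image   = first_image($article['body']) ?: $base . "pics/logo.png";   // fall back to the blog logo
    $html    = markup_to_html($article['body']);                           // full article as HTML

    echo "<item>\n";
    echo "  <title>" . htmlspecialchars($article['title']) . "</title>\n";
    echo "  <link>" . $base . "index.php?page=articles&amp;ref=" . $article['ref'] . "</link>\n";
    echo "  <description>" . htmlspecialchars($summary) . "</description>\n";
    echo "  <enclosure url=\"" . $image . "\" type=\"image/png\" length=\"0\"/>\n";   // one way to expose a preview thumbnail
    echo "  <content:encoded><![CDATA[" . $html . "]]></content:encoded>\n";
    echo "</item>\n";
}

echo "</channel>\n</rss>\n";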

The first version was functional in about 5 minutes, but then came the iterative refinements. I asked a friend who'd been requesting the feature to test it, and he immediately spotted issues:
  • Some WIP articles were leaking through (case-sensitive string matching — easy fix)
  • All preview thumbnails showed the blog logo instead of article images
  • He could only see a summary, not the full article content

That last point led to an interesting question: should an RSS feed include the full article or just a teaser? I asked Grok² to research it, and the consensus is clear: full content is reader-friendly and preferred for personal blogs. Truncated feeds are mostly a commercial/advertising-driven choice.

So we added a <content:encoded> block with the full article converted from custom markup to HTML, while keeping the short summary in <description> for list views.

Time spent: ~25 minutes (5 min for the initial version, then 20 min of iterative fixes based on real user feedback).

OpenGraph and social media previews

Right after RSS, a natural follow-up: when someone pastes a blog link on Discord, Twitter, or Bluesky, it should show a rich preview with the article title, a description, and an image — not just a bare URL.

Me: Could this preview image also be implemented for things like when I paste an article link in Discord or Twitter?

Claude's answer: that's done with OpenGraph and Twitter Card meta tags. Created an extract_og_metadata() function to pull article title, description, and image early enough for the HTML head, then added the meta tags in index.php.

Concretely, this means a handful of OpenGraph and Twitter Card meta tags in the HTML head:

<meta property="og:type" content="article"/>
<meta property="og:title" content="Encounter - One year later"/>
<meta property="og:description" content="Encounter was released on December 20th..."/>
<meta property="og:image" content="https://blog.defence-force.org/pics/encounter_tape.png"/>
<meta name="twitter:card" content="summary_large_image"/>

The implementation extracts the first ~200 characters and the first image from the article being viewed, with a fallback to the blog logo for image-less articles.
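
For illustration, such a helper can be as small as the sketch below. The image-markup pattern is an assumption about the blog's custom syntax, and strip_custom_markup() is the same hypothetical helper as in the RSS sketch; the real extract_og_metadata() may differ.

// Minimal sketch of an extract_og_metadata() style helper (not the actual code)
function extract_og_metadata($title, $article_text)
{
    // First ~200 characters of plain text for og:description
    $plain       = trim(strip_custom_markup($article_text));
    $description = substr($plain, 0, 200);

    // First image referenced in the article, with the blog logo as fallback,
    // and backslashes normalized to forward slashes for picky crawlers
    $image = "pics/logo.png";
    if (preg_match('/\[image\|([^\]]+)\]/', $article_text, $match))   // assumed markup syntax
    {
        $image = str_replace("\\", "/", trim($match[1]));
    }

    return array(
        'title'       => $title,
        'description' => $description,
        'image'       => "https://blog.defence-force.org/" . $image,
    );
}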

One gotcha: the previews worked immediately on Discord and Bluesky, but Twitter/X showed nothing. After some investigation, the culprit turned out to be the robots.txt file, which had a Crawl-delay: 25 directive. Twitter's crawler apparently doesn't like waiting 25 seconds. Adding an explicit exception for Twitterbot fixed it instantly.
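
The blog's actual robots.txt isn't reproduced in the article, but the fix amounts to giving Twitterbot its own group without a Crawl-delay, along these lines:

# Twitterbot gets its own group, with no Crawl-delay
User-agent: Twitterbot
Disallow:

# Everyone else keeps the original throttling
User-agent: *
Crawl-delay: 25
Disallow:

Crawlers that honor robots.txt follow the most specific matching group and ignore the rest, so Twitterbot no longer sees the 25-second delay.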

Another gotcha: some article image paths had Windows backslashes instead of forward slashes. Browsers handle both, but OpenGraph crawlers are pickier — those had to be normalized.

Time spent: ~15 minutes (implementation + debugging the Twitter and backslash issues).

Readable URLs with title slugs


Me: There's an important missing feature: articles don't have the actual title in the URL. Instead of ref=ART88 we want something like ref=ART88&title=Encounter_One_Year_Later.

Claude's answer: added an article_url() helper function in setup.php that slugifies titles, then updated every link-generation spot across articles.php and rss.php.

Blog URLs used to look like this:

blog.defence-force.org/index.php?page=articles&ref=ART88

Not very informative when shared in a chat or email. The fix was a small helper function that appends a slugified title to the URL:

blog.defence-force.org/index.php?page=articles&ref=ART88&title=Encounter_One_Year_Later

The title parameter is purely cosmetic — the PHP code only uses ref for the actual article lookup. Old URLs without the title still work perfectly. This means every existing link out there keeps working, while new links are immediately more readable.
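
For illustration, the helper can be as small as this sketch (the real article_url() in setup.php may handle more cases, such as accented characters):

// Minimal sketch of a slug helper: "Encounter - One year later" becomes "Encounter_One_Year_Later"
function article_url($ref, $title)
{
    $slug = ucwords($title);                               // capitalize each word
    $slug = preg_replace('/[^A-Za-z0-9]+/', '_', $slug);   // collapse everything else to "_"
    $slug = trim($slug, '_');
    return 'index.php?page=articles&ref=' . $ref . '&title=' . $slug;
}

Since the lookup still only uses ref, the slug can change whenever a title is edited without breaking a single existing link.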

Time spent: ~10 minutes (writing the helper function + updating all link-generation spots across 3 files).

The big cleanup


Me: Could you go through the entire project, and look at what could be test or temporary files, or old stuff that's not actually used, or articles that were started but not finished, or cloned versions of other articles under a different name?

Claude's answer: scanned every file in the project and produced a categorized report. The findings were... extensive:

Duplicate articles: Four article files (ART85, 86, 87, 88) all contained the exact same Encounter article content. Turns out I'd been using copy-paste to create placeholder files for future articles, then forgot about it.

Test and temp files:
  • test_scandir.php — leftover debugging script
  • index - Copy.php — accidental Windows copy
  • Multiple " - Copy.txt" and " - Copy (2).txt" files in the articles folder
  • A file with a .txttemp extension (malformed filename)
  • An outdated sitemap.xml from 2019
  • Backup and import folders full of abandoned drafts

Unreferenced images: About 90 out of 875 images in the pics/ folder were not referenced by any article, including raw camera photos, PaintShop Pro source files, and images from events that never got written up.
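
The scan itself is conceptually trivial; a rough sketch of the idea, with the file layout assumed from the description above:

// Rough sketch: flag every file in pics/ that no article text ever mentions
$articles = "";
foreach (glob("articles/*.txt") as $file)
{
    $articles .= file_get_contents($file);
}

$unreferenced = array();
foreach (glob("pics/*") as $pic)
{
    if (strpos($articles, basename($pic)) === false)
    {
        $unreferenced[] = $pic;
    }
}

echo count($unreferenced) . " unreferenced images:\n";
print_r($unreferenced);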

Claude identified all the issues, but I made the decisions about what to keep and what to delete. You don't want an AI deleting your files without oversight.

Time spent: ~15 minutes (Claude scanning + me reviewing and deleting).

Syntax highlighting

This one was already mentioned in the previous article, but it fits the narrative here.

The blog was using Google Code Prettify from 2013. It didn't recognize JSON (everything showed up in a single color), and it certainly didn't support 6502 or 68000 assembly language — which is a problem when you blog about retro computing.

We replaced it with Prism.js (version 1.30.0) and added support for all the languages used across the blog: C, C++, JSON, JavaScript, PHP, BASIC, Python, Lua, Bash, 6502 assembly, and a custom language definition for Motorola 68000 assembly — because no existing syntax highlighting library supports it.

Time spent: ~15 minutes (as mentioned in the previous article, this was a side quest during article writing).

Code quality fixes


Me: While we are at it, could you do a general pass on the PHP code to check for egregious errors or dubious programming patterns?

Claude's answer: read all 4 PHP files and produced a categorized report by severity. Beyond the deprecated calls already fixed, it found a couple of actual bugs that had probably been there since the beginning:
  • Double quotes in HTML attributes — the tag cloud and year filter links had an extra closing quote ("" instead of "), which pushed the CSS class outside the tag
  • Tag count off by one — new tags started at 0 instead of 1, making the tag cloud font sizes slightly wrong
  • All the Disqus embed URLs were still using HTTP instead of HTTPS

The kind of accumulated bugs that you never notice because the page looks "close enough" — but that a systematic code review catches immediately.

Time spent: ~10 minutes (Claude reading all PHP files + applying fixes).

Fixing the blog while writing about the blog

Here's a meta one: while writing this very article, I ran into a bug in the blog's own list markup parser.

Me: Oops, looks like the "Test and temp files" section triggered a known limitation of my list system: the - in "index - Copy.php" is creating extra bullet entries.

Claude's answer: found the one-line cause in the list handler and fixed it with a regex that only matches dashes at the start of a line.

The article included a list mentioning files like "index - Copy.php" and " - Copy.txt". The blog's list handler uses - (dash followed by a space) as the bullet point marker. The problem? It was replacing every occurrence of - in the text, not just ones at the start of a line. So "index - Copy.php" was being split into two list items: "index" and "Copy.php".

The original code:

$htmlcode = str_replace("- ", "<li>", $htmlcode);

The fix:

$htmlcode = preg_replace("/^- |\\n- /", "<li>", $htmlcode);

Now - only creates a bullet when it's at the very start of the content or right after a newline. A one-line fix for a bug that had been there since the beginning — but that nobody noticed until an article actually had dashes in the middle of list items.
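
A quick sanity check on the input that triggered the bug (a hypothetical test snippet, not part of the blog engine):

$htmlcode = "- index - Copy.php\n- test_scandir.php";

echo str_replace("- ", "<li>", $htmlcode);
// Old behaviour: <li>index <li>Copy.php <li>test_scandir.php (the mid-line dash becomes a spurious bullet)

echo preg_replace("/^- |\\n- /", "<li>", $htmlcode);
// New behaviour: <li>index - Copy.php<li>test_scandir.php (mid-line dashes are left alone)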

Time spent: ~2 minutes.

The full list

Here's everything that was done:

Change                    | Time    | Impact
Local dev setup           | ~10 min | No more editing on production
PHP deprecated fixes      | ~3 min  | Clean on PHP 8.1 through 8.5
RSS feed                  | ~25 min | Feed readers can follow the blog
OpenGraph + Twitter Cards | ~15 min | Rich previews on Discord, Twitter, Bluesky
URL slugs                 | ~10 min | Shared links are readable
Project cleanup           | ~15 min | No more duplicates and dead files
Syntax highlighting       | ~15 min | Prism.js with 68000 assembly support
Code quality fixes        | ~10 min | Actual bugs found and fixed
HTTPS everywhere          | ~2 min  | No mixed content warnings
Version control           | ~2 min  | Changes are tracked in git
Total                     | ~1h47   |

How the work actually happened

What's interesting is not just what was done, but how.

The session was conversational. I'd describe a problem or a feature I wanted, Claude would read the relevant code, propose a solution, and implement it. When something didn't work — like the WIP articles leaking into the RSS feed — I'd paste the error or describe what I was seeing, and we'd iterate.

Some patterns that emerged:
  • Real-time testing is invaluable — I'd deploy, test, and report back within minutes. The RSS thumbnails showing the wrong image, the Twitter previews not appearing, the HTTP 500 after a refactoring — all caught within seconds because I was testing as we went.
  • The AI is great at finding things — the cleanup audit, the unreferenced images scan, the code review — these are tasks where reading every file is tedious for a human but trivial for the AI.
  • Edge cases come from the real world — the backslash in image paths, the Twitterbot crawl delay, the case-sensitive WIP check — these aren't things you'd think to handle upfront. They surface when you test with real data on real platforms.
  • Not everything should be automated — for the file cleanup, Claude identified the issues but I made the decisions. The AI is a tool, not a decision-maker.

Was it worth it?

About 1 hour and 45 minutes of active work to go from "old PHP blog with deprecation warnings and no RSS" to "the same blog, but now with RSS, social previews, readable URLs, cleaner code, and version control." It's still an antiquated system by today's standards, but it's a much more functional antiquated system.

Could I have done this without AI? Absolutely. Would I have actually done it? Probably not — certainly not in one sitting. Each of these improvements is individually small, but the cumulative friction of reading PHP documentation, figuring out RSS 2.0 specs or finding an adequate PHP implementation, looking up OpenGraph tag formats, and auditing 875 image files kept this firmly in the "someday" pile — where it had already been sitting for years. The RSS feed alone had been requested multiple times.

The AI turned a "someday" project into a "Saturday afternoon" project. That's the real productivity gain.

Next steps

The blog engine is in much better shape now, and this is just the second article in the series — there will be more, though I don't have a fixed plan for what comes next. There's plenty of material to draw from across the various projects I've been using AI on.

If there's anything in particular you'd like me to dig into, feel free to let me know in the comments!

1. March 9th 2026, to be precise. The whole session lasted about 1h30 of active work, spread over a longer afternoon with breaks. Total estimated API cost: $31 — covered by the Claude Max subscription.
2. I use multiple AI tools depending on the context. Grok is useful for quick web searches when I need a general answer.