There are a few comic strips spanning entire decades that I might be interested in reading. Just ten years of a daily comic is about 3,650 strips.[1] I don’t think it’s a stretch to say that reading strip after strip in a row would get pretty boring after about a hundred of them.
I use an RSS reader[2] to keep up to date with some blogs, news sites, and comic strips.[3] The daily delivery of funny pictures is what works for me,[4] and this is how the format was intended to be consumed anyway.
But how do you get into a new strip? Well, maybe the comic you’ve just found out about has only been around for a relatively short time, so you can read it all in an hour or two, add it to your RSS feed, and be up to date. Or maybe the strip’s run ended a long time ago and it’s now on some kind of rerun service, and you happen to time your subscription to the rerun restart.
You can also not care at all and just start reading the strip as you go. This is how the format originally worked, although there was probably also a certain zeitgeist attached to it: basically everyone had seen this or that strip printed in the newspaper at least once and just “knew” about it.
But what if you want to start reading the comic strip from the very beginning? Maybe you want to find out how all the running gags got started. Maybe you don’t feel comfortable leaving a quarter century of strips unknown. Maybe all you have is a collection of images from some blog that originally hosted the run?
The first thing you need to have is a comic archive. I am not going to tell you how to get one of those.
Then you need some kind of service to host the feed. Here’s a little Python script I use for that:
import os
import datetime
import http.server
import socketserver
import xml.etree.ElementTree as ElementTree

# CONFIG
startTime = datetime.datetime( 2025, 1, 28 )
stripsPerDay = 4
stripsFeed = 30 * stripsPerDay
port = 12345
bind = "127.0.0.1"
url = "https://example.com/strips/foobar"
title = "Foobar Feed"
subtitle = "Foobar comic strips"
short = "Foobar"
# END

# Collect all strip files from the data directory, sorted by the date prefix of each filename.
contents = []
for root, dirs, files in os.walk( 'data' ):
    for file in files:
        if not file.startswith( '.' ):
            date = datetime.datetime.strptime( file[:10], "%Y-%m-%d" )
            contents.append( ( date, os.path.join( root, file ) ) )
contents.sort()
print( "Found %d files" % len( contents ) )

# Load an optional favicon from data/.favicon.
favicon = None
try:
    with open( 'data/.favicon', 'rb' ) as f:
        favicon = f.read()
except FileNotFoundError:
    pass

def GetStripRange():
    # Index range of strips that should currently be visible in the feed.
    curTime = datetime.datetime.today()
    diffTime = curTime - startTime
    curStrip = min( len( contents ) - 1, diffTime.days * stripsPerDay + stripsPerDay - 1 )
    firstStrip = max( 0, curStrip - stripsFeed )
    return firstStrip, curStrip

class Handler( http.server.SimpleHTTPRequestHandler ):
    def do_GET( self ):
        if self.path == "/":
            # Serve the Atom feed itself.
            self.send_response( 200 )
            self.send_header( "Cache-Control", "no-cache, no-store, must-revalidate" )
            self.send_header( "Pragma", "no-cache" )
            self.send_header( "Expires", "0" )
            self.send_header( "Content-Type", "text/xml" )
            self.end_headers()
            firstStrip, curStrip = GetStripRange()
            xml = ElementTree.Element( "feed", { "xmlns": "http://www.w3.org/2005/Atom" } )
            ElementTree.SubElement( xml, "title" ).text = title
            ElementTree.SubElement( xml, "subtitle" ).text = subtitle
            ElementTree.SubElement( xml, "id" ).text = url
            ElementTree.SubElement( xml, "updated" ).text = datetime.datetime.now().isoformat()
            ElementTree.SubElement( xml, "link", { "rel": "self", "href": url } )
            ElementTree.SubElement( xml, "icon" ).text = "%sfavicon.png" % url
            for i in range( firstStrip, curStrip + 1 ):
                strip = ElementTree.SubElement( xml, "entry" )
                ElementTree.SubElement( strip, "title" ).text = "%s #%d (%s)" % ( short, i + 1, contents[i][0].strftime( "%Y-%m-%d" ) )
                ElementTree.SubElement( strip, "id" ).text = "%s%d" % ( url, i + 1 )
                ElementTree.SubElement( strip, "updated" ).text = ( startTime + datetime.timedelta( days = i // stripsPerDay ) + datetime.timedelta( seconds = i % stripsPerDay ) ).isoformat()
                ElementTree.SubElement( strip, "link", { "href": "%s%d" % ( url, i + 1 ) } )
                ElementTree.SubElement( strip, "content", { "type": "html" } ).text = '<img src="%s%d" />' % ( url, i + 1 )
            ElementTree.indent( xml )
            self.wfile.write( b'<?xml version="1.0" encoding="utf-8"?>\n' )
            self.wfile.write( ElementTree.tostring( xml ) )
        elif self.path == "/favicon.png":
            if favicon:
                self.send_response( 200 )
                self.send_header( 'Content-Type', 'image/png' )
                self.end_headers()
                self.wfile.write( favicon )
            else:
                self.send_response( 404 )
                self.end_headers()
        else:
            # Any other path is treated as a 1-based strip number.
            try:
                strip = int( self.path[1:] ) - 1
                if strip < 0 or strip >= len( contents ):
                    raise ValueError
                with open( contents[strip][1], 'rb' ) as f:
                    self.send_response( 200 )
                    if contents[strip][1].endswith( '.gif' ):
                        self.send_header( 'Content-Type', 'image/gif' )
                    elif contents[strip][1].endswith( '.png' ):
                        self.send_header( 'Content-Type', 'image/png' )
                    elif contents[strip][1].endswith( '.jpg' ):
                        self.send_header( 'Content-Type', 'image/jpeg' )
                    self.end_headers()
                    self.wfile.write( f.read() )
            except ValueError:
                self.send_response( 404 )
                self.end_headers()

socketserver.TCPServer.allow_reuse_address = True
with socketserver.TCPServer( ( bind, port ), Handler ) as httpd:
    print( "Binding to http://%s:%d" % ( bind, port ) )
    httpd.serve_forever()
You need to set the following things in the config section:
- startTime specifies when the feed should start,
- stripsPerDay is how many strips should be provided each day,
- stripsFeed is the size of the content window that will be present in the feed,
- port and bind set where the HTTP server should bind,
- url is how the feed will be visible to the outside world,
- title is the name of the feed,
- subtitle is its description, and
- short is the name displayed in article titles.
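To get a feel for the pacing, here is a small back-of-the-envelope sketch using the example values from the config above. The archive size is only an assumption (roughly the Garfield figure from the footnotes), so substitute your own numbers:

totalStrips = 17000                     # assumption: roughly a Garfield-sized archive
stripsPerDay = 4                        # same value as in the config section above
stripsFeed = 30 * stripsPerDay

daysToFinish = totalStrips / stripsPerDay
print( "Catching up takes about %.1f years" % ( daysToFinish / 365 ) )               # ~11.6 years
print( "The feed window covers the last %d days" % ( stripsFeed // stripsPerDay ) )  # 30 days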
Comic strip images should be placed in the data directory (the script will recurse into subdirectories), and each filename must start with the date in YYYY-MM-DD format. You can place the favicon PNG image as data/.favicon.[5]
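If you want to check the archive layout before starting the service, a quick standalone sketch along these lines (not part of the script above; it assumes you run it from the same working directory) will flag any filenames the script would choke on:

import os
import datetime

# Walk the data directory the same way the feed script does and report
# any non-hidden file whose name does not start with a YYYY-MM-DD date.
for root, dirs, files in os.walk( 'data' ):
    for file in files:
        if file.startswith( '.' ):
            continue
        try:
            datetime.datetime.strptime( file[:10], "%Y-%m-%d" )
        except ValueError:
            print( "Bad filename: %s" % os.path.join( root, file ) )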
If you are using systemd, you can enable the script as a system service by placing the following file as /etc/systemd/system/foobar.service:
[Unit]
Description=Foobar RSS Feed
After=network.target
[Service]
User=strips
Group=strips
ExecStart=python3 /srv/foobar/foobar.py
WorkingDirectory=/srv/foobar
[Install]
WantedBy=multi-user.target
Then, execute:
systemctl daemon-reload
systemctl start foobar
systemctl enable foobar
The feed’s HTTP endpoint will now run on the configured IP address and port. You can use it locally or expose it to the world through a proxy, but I’m not going to teach you how to configure your HTTP server.
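Before exposing it anywhere, you can sanity-check the service locally. These commands assume the bind address and port from the example config, so adjust them if you changed those values:

systemctl status foobar
curl http://127.0.0.1:12345/                # should print the Atom XML
curl -o strip http://127.0.0.1:12345/1      # should save the first strip image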
[1] Garfield has been running for 46 years now. It’s getting close to 17,000 strips.
[2] My reader of choice is a self-hosted instance of FreshRSS. It works pretty well, has a decent web interface, and most mobile RSS readers can use it as a backend.
[3] I fail to understand why some people think RSS is dead. Most websites have an RSS feed (and the ones that don’t usually aren’t worth your time anyway). There is a wide variety of readers and services out there, a much more vibrant, though less visible, ecosystem of tools compared to the days of Google Reader.[6] I don’t know, maybe it’s the conflation of the thing with the most popular implementation? In the same way that some people don’t know the difference between git and github.com, maybe most people have confused RSS with Google Reader?
[4] For those interested, here are some comic strips I am currently reading:
- Calvin and Hobbes
- Darths & Droids
- Garfield (mostly out of habit, it became boring and tedious)
- Oglaf (nsfw)
- Saturday Morning Breakfast Cereal
- The Perry Bible Fellowship
- xkcd
And for people who can read Polish:
- Boli (nsfw, broken RSS feed)
- Kryzys Wieku
[5] The favicon is present in the feed’s XML data stream according to the spec, but it doesn’t seem to be picked up by FreshRSS. Maybe something is wrong there?
[6] This is a double-edged sword. The last time I checked, it was basically impossible to find a good native desktop RSS reader that could live in the tray. It’s all web now. Sure, you can find an RSS client, but can it sync article read status with a remote server, or is it limited to local-only use, which was fine in the early 2000s, before smartphones were a thing? Or, if it’s recent enough, can it display the feeds and articles in a classic multi-pane view instead of just some “modern” tile-based layout?