Sounds Like a Job for Perl

In which I write a continuous build system for bagpipe music using Perl and a Makefile.

The problem: the band has sheet music that everyone is supposed to memorize, the membership is geographically dispersed and not everyone can make it to every practice, and during practices and workshops and after competitions the music can get edited to reflect feedback and (one hopes) improve the performance. How do we distribute the music to everyone?

The history: one or two of the members would typeset the music in some graphical program like CelticPipes or PiobMaster, print off a zillion copies, and bring them to practice, whereupon we’d all get out our pens and correct any typos and then notate any changes. Repeat forever, recycling all the old sheets.

The recent past: I started setting the music in ABC notation and storing the files in the band’s Box folder. ABC meant that the source files were text, not binary, so anybody with a text editor could edit them. We’d still either import them into CelticPipes or use abcm2ps to generate typeset music, and publish PDFs back to the Box folder. Manual, but better.
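The ABC-to-PDF step lends itself to a Makefile. A minimal sketch of such a rule, using abcm2ps for typesetting and Ghostscript’s ps2pdf for conversion (the directory layout and file names here are illustrative, not the band’s actual build):

```make
# Hypothetical Makefile fragment: typeset each ABC file to PostScript
# with abcm2ps, then convert to PDF with Ghostscript's ps2pdf.
ABC  := $(wildcard tunes/*.abc)
PDFS := $(ABC:tunes/%.abc=pdf/%.pdf)

all: $(PDFS)

pdf/%.pdf: tunes/%.abc | pdf
	abcm2ps -O $(basename $@).ps $<    # ABC -> PostScript
	ps2pdf $(basename $@).ps $@        # PostScript -> PDF
	rm -f $(basename $@).ps            # drop the intermediate file

pdf:
	mkdir -p pdf
```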

Then: I moved the ABC files to a GitHub repository. They’re text, after all, so they diff nicely and compress well. Having them under version control also meant that we could go back (“You know, this change isn’t working, let’s go back to the way we were playing it.”). Generating the PDFs was still a manual process, but since CI servers are a thing nowadays, I figured it shouldn’t be too hard to configure something that would monitor the repo and, when it detected a change, rebuild the PDFs, push them to Box, and, for good measure, notify the band that there was new music available. I poked at a few (Jenkins, Travis, TeamCity) and ultimately set up TeamCity because it confused me the least.

However: TeamCity is a resource pig. I mean, seriously. It requires an external database (to keep track of what’s been built, by whom, where the artifacts are, who the users are, etc., etc.) and ungodly amounts of RAM, to say nothing of the disk footprint. There is precisely no way to run a TeamCity server that does anything on an EC2 free-tier server. The system I got working had an RDS instance and an EC2 small instance and cost about $20 per month to run. Not a lot of money, really, and it was cool to see messages pop up in Slack, telling everyone that the binder had been rebuilt while I was doing something else. Cool, that is, until the small instance ran out of disk space and everything stopped working. This will come as a surprise to no one who has ever worked on/with a CI system in an enterprise environment.

So, now: GitHub sends notifications of changes to a repo via webhooks. I wrote a Perl script, run as a CGI, that accepts the POST data from GitHub and records the branch, commit message(s), and commit id(s) in a text file. Another Perl script looks for those text files and, for each one it finds, pulls the repo and runs the Makefile; on success, it invokes a Slack webhook to tell the band that the music has been updated. I even wrote a third Perl script that generates a section title page PDF, so that the binder has, essentially, chapter title pages. To get the build script to run periodically, I had to create a systemd service and timer, because apparently cron isn’t a thing anymore.
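The webhook receiver could look something like the sketch below. The payload fields (`ref`, `commits`) follow GitHub’s push-event format; the queue directory path and job-file layout are assumptions for illustration, not the actual script:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use CGI;
use JSON qw(decode_json);
use Data::UUID;

my $QUEUE_DIR = $ENV{MUSIC_QUEUE_DIR} // '/var/spool/bandmusic';  # assumed location

# Parse a GitHub push-event payload and write the branch plus
# "id message" lines to a uniquely named job file; return its path.
sub record_push {
    my ($payload, $dir) = @_;
    my $data = decode_json($payload);
    (my $branch = $data->{ref} // '') =~ s{^refs/heads/}{};
    my @lines = ($branch);
    push @lines, "$_->{id} $_->{message}" for @{ $data->{commits} || [] };
    my $path = $dir . '/' . Data::UUID->new->create_str . '.job';
    open my $fh, '>', $path or die "open $path: $!";
    print {$fh} "$_\n" for @lines;
    close $fh or die "close: $!";
    return $path;
}

# Only act as a CGI when actually invoked by the web server.
if ($ENV{GATEWAY_INTERFACE}) {
    my $q = CGI->new;
    record_push(scalar $q->param('POSTDATA'), $QUEUE_DIR);  # CGI.pm exposes raw POST bodies as POSTDATA
    print $q->header('text/plain'), "ok\n";
}
```

Writing one file per push, with a UUID in the name, means the receiver never has to lock anything and the build script can consume jobs at its leisure.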

Ingredients

  • Perl
  • make
  • Ghostscript
  • abcm2ps
  • rclone
  • git
  • lighttpd
  • certbot
  • Perl modules:
    • LWP
    • JSON
    • CGI
    • PDF::API2
    • LWP::Protocol::https (side quest: to install this on Ubuntu, don’t go crazy with CPAN; just apt-get install liblwp-protocol-https-perl)
    • Data::UUID
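The section-title-page generator mentioned above can be sketched with PDF::API2. The font, size, and page geometry here are assumptions, not the actual script:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use PDF::API2;

# Write a one-page PDF with a centered section title.
sub title_page {
    my ($title, $outfile) = @_;
    my $pdf  = PDF::API2->new;
    my $page = $pdf->page;
    $page->mediabox('Letter');                           # 612 x 792 pt
    my $text = $page->text;
    $text->font($pdf->corefont('Helvetica-Bold'), 48);
    $text->translate(306, 500);                          # horizontal center
    $text->text_center($title);
    $pdf->saveas($outfile);
    return $outfile;
}

# e.g. title_page('6/8 Marches', 'marches-title.pdf');
```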

This setup runs on a tiny EC2 instance and doesn’t talk to a database. There is no web interface to manage the system. If it gets stuck, I have to ssh into the server and figure it out. On the other hand, it works and it totally runs in a free-tier environment. Hooray for old school simplicity!
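The build script described earlier might be sketched as follows. The paths, the job-file format (branch on the first line, one “id message” line per commit), and the Slack message wording are assumptions:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use JSON qw(encode_json);
use LWP::UserAgent;

my $QUEUE_DIR = '/var/spool/bandmusic';   # assumed queue location
my $REPO_DIR  = '/srv/bandmusic/music';   # assumed checkout
my $SLACK_URL = $ENV{SLACK_WEBHOOK_URL};  # assumed env var

# Turn a job file into a human-readable Slack announcement.
sub job_message {
    my ($path) = @_;
    open my $fh, '<', $path or die "open $path: $!";
    chomp(my $branch = <$fh>);
    my @commits = <$fh>;
    close $fh;
    return sprintf "The binder has been rebuilt (%s, %d commit%s). New music is on Box!",
        $branch, scalar @commits, @commits == 1 ? '' : 's';
}

# POST a JSON payload to a Slack incoming webhook.
sub notify_slack {
    my ($url, $text) = @_;
    LWP::UserAgent->new->post($url,
        'Content-Type' => 'application/json',
        Content        => encode_json({ text => $text }));
}

for my $job (sort glob "$QUEUE_DIR/*.job") {
    my $ok = system('git', '-C', $REPO_DIR, 'pull', '--ff-only') == 0
          && system('make', '-C', $REPO_DIR) == 0;
    notify_slack($SLACK_URL, job_message($job)) if $ok && $SLACK_URL;
    unlink $job;   # consume the job whether or not the build succeeded
}
```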

Responses

    • Right?! When I started this, I thought, “Surely all I need to do is edit /etc/crontab.” And then, while I was setting up certbot to generate the SSL certificates, I noticed that its script in cron.d said, essentially, “This script is here just in case, but if your system is using systemd, you should be aware that it isn’t going to run. Enable the certbot timer, instead.” The thing I haven’t quite worked out yet is how to get my timer re-enabled when the server restarts. All the how-tos I’ve seen so far are just about starting services (daemons).
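For timers the mechanism is the same as for services: give the timer unit an [Install] section with WantedBy=timers.target, and systemctl enable makes it start at boot. A minimal sketch, with hypothetical unit and interval names:

```ini
# /etc/systemd/system/music-build.timer  (hypothetical name; pairs
# with a music-build.service that runs the build script)
[Unit]
Description=Poll for queued band-music builds

[Timer]
OnBootSec=2min          # first run shortly after boot
OnUnitActiveSec=5min    # then every five minutes

[Install]
WantedBy=timers.target
```

`systemctl enable --now music-build.timer` creates the symlink under timers.target.wants so the timer comes back after a reboot, and starts it immediately.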
