
Deploying from the Desert: Building an Off-Grid CI/CD Pipeline

My motorhome doesn't have an IT department. Here is how I use Hugo and GitHub Actions to push code and publish posts even when I'm parked in a cellular dead zone.

By Matt Weaver
May 5, 2026
15 min read

Most web development tutorials assume a fundamental luxury: a fat, stable pipe to the internet.

When you live in a traditional house, you don’t think twice about running a heavy npm build process locally and dragging a massive folder of unoptimized, high-resolution images via FTP to your server. If it takes three minutes to upload, who cares? You have gigabit fiber.

But when your “house” is a Class A motorhome currently parked in the Nevada desert, the internet is not a utility; it is a precious, finite resource.

If I am working off a single bar of fluctuating LTE, or if my Starlink dish is being partially obstructed by a stubborn pine tree, a dropped packet during a manual deployment isn’t just an annoyance. It means starting over. It means broken websites.

To survive out here, you have to embrace automation. My motorhome doesn’t have an IT department, so I had to build a virtual one. Here is a look at the CI/CD (Continuous Integration / Continuous Deployment) pipeline that keeps The Roam Office running, even when I’m completely off the grid. I’ll also touch on some of the architecture I use to keep this thing running.

The Problem: The “Fat Client” Trap

In the early days of building sites, I did everything locally. I wrote the code, compiled the site, generated the thumbnails, and uploaded the finished product.

This is the “Fat Client” trap. If you are doing the heavy computational work on your laptop, you are entirely responsible for pushing the results of that work over the network. If an article has ten high-res images, I might be trying to push 50MB of data over a cellular connection that is currently measuring its upload speed in kilobits per second.

I needed to flip the script. I needed to send only the absolute bare minimum of raw instructions (text) and let a server in a data center somewhere handle the heavy lifting.

The Foundation: Hugo

The core of this website is Hugo, a remarkably fast static site generator.

I chose Hugo for a few very specific reasons:

  1. No Databases: There is no WordPress backend, no MySQL database to query, and no PHP rendering pages on the fly. Years ago, I ran a personal website built entirely from hand-crafted PHP, a bespoke database schema, and (woefully insecure) user management. It was really cool, but it also got hacked constantly, and fixing it every time was a huge hassle. With Hugo, I don’t have to deal with any of that, since it just generates static files for me. This whole website is just raw HTML, CSS, and JavaScript. It dramatically reduces the attack surface, is wildly cheap to host, and loads instantly.
  2. Markdown Native: I write all of these posts in plain-text Markdown files (a minimal sketch follows this list).
  3. Local Speed: I can run a local Hugo server on my Mac with zero internet connection. If I am parked in a canyon with absolutely no cell service, I can still write, preview, and test the entire website locally.
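
To make "Markdown native" concrete, here's roughly what the top of one of these post files looks like. The front matter sits between --- fences, with the article body in plain Markdown below it; title, date, tags, and description are standard Hugo fields, and the file path is just illustrative.

```yaml
# Top of content/posts/deploying-from-the-desert.md (path illustrative).
title: "Deploying from the Desert"
date: 2026-05-05
tags: [tutorials, tech, web]
description: "Building an off-grid CI/CD pipeline."
```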

I still get the fun of building something completely custom. But instead of writing all that custom logic in PHP, stuffing content into a database (often without a nice interface to make life easy, or worse, through my terrible custom interface that required a full framework for users, authentication, permissions, and so on), and rendering it dynamically at runtime, I create a custom website template, keep all of the content in easily managed static files, and run a single command to combine them into an easily deployable package.

Along the way, I get a lot of nice extras: the ability to easily generate the files for an RSS feed (I actually have two feeds; more on that later), turn large source images into more compact WebP images, generate thumbnails, support content search, and so on. And because it's just static files, with almost no executable runtime logic, it's cheap (or even free) to host.

The Building: Hosting Architecture

I host the website on Amazon Web Services. The static site itself lives in S3, served as a static S3-hosted website. A CloudFront distribution sits in front of that, which enables HTTPS, cuts S3 bandwidth through edge caching, and provides some basic edge functions for the very small amount of run-time computing that happens.

I actually host two complete copies of the website: the live "production" site and a preview version, which includes drafts and future-dated content and can also include new template features.

The website code is all stored in a single private git repo on GitHub.

For managing content, I use a mix of manually editing Markdown files (95% of the time) and the Sveltia CMS. Sveltia gives me a simple admin interface for content on my phone, authenticated via GitHub tokens, which is a secure way to manage things without any of the hassle of rolling my own user management or authentication.
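
Sveltia reads the same config.yml format as Decap CMS, so wiring it up is mostly declarative. Here's a minimal sketch, assuming a hypothetical repo name and content layout:

```yaml
# static/admin/config.yml: a trimmed sketch; the repo, folders, and
# fields are hypothetical stand-ins for the site's real layout.
backend:
  name: github
  repo: mattweaver/roam-office   # hypothetical
  branch: main

media_folder: static/images
public_folder: /images

collections:
  - name: posts
    label: Posts
    folder: content/posts
    create: true
    fields:
      - { label: Title, name: title, widget: string }
      - { label: Date, name: date, widget: datetime }
      - { label: Draft, name: draft, widget: boolean, default: true }
      - { label: Body, name: body, widget: markdown }
```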

This all runs very cheaply at my current traffic level. Typical costs for me are the domain name (around $12/year), Route 53 hosted zone + DNS query charges, and occasional pennies of extra usage. The GitHub portion is easily within the free usage limits, as is my use of Make.com for social media posts (described below).

The Window Dressing: The Hugo Template

While the site itself is just a pile of static files that can be generated with a single command, the result is a pretty fantastic pile of features (if you’ll grant me a minute to toot my own horn). This is basically all defined within my custom Hugo template. The template is simply the structure for the website, without any content. When you generate the site, Hugo looks at all my content files and plugs the appropriate values into the template to generate the actual site files.

In order to keep things manageable, I have the site template defined as an independent theme. It lives right alongside my site content, but I could easily move the entire theme into a new repository and create a whole new Hugo-based website with the same theme within a few minutes. I actually have a hidden “demo” site for this theme, which allows me to test new features out without risking breaking something on the real website (or worse: accidentally posting a demo page).
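
Concretely, pointing a site at the theme is a one-line affair in the Hugo config (the theme name here is hypothetical):

```yaml
# hugo.yaml: the site references the theme by directory name. Moving
# the theme to its own repo would just mean pulling it in as a git
# submodule or Hugo module instead.
theme: roam-office-theme
```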

Some of the nice things my custom theme provides:

  • The overall look and feel of the whole website.
  • Image handling:
    • Thumbnail generation: Every post on this website has an image associated with it, which shows up on the post itself, as well as on the page grids that allow you to browse the site. At build time, Hugo processes that source image (PNG/JPG/WebP), automatically zooms/crops for the thumbnail, and can output WebP thumbnails. When you click on the thumbnail, you’re presented with the full-size image.
    • Image processing: Most content images are converted/optimized during the build pipeline (typically to WebP), which keeps payload sizes down with very little quality loss.
    • Image galleries: A few particularly image-heavy posts get a nice gallery that lets you browse a set of images via thumbnails and/or navigation buttons. Normally I set up a post-level gallery, which is placed automatically at the top of the page, but I can embed multiple galleries in a single post if I want to. To make life easier, post-level galleries also have a setting that automatically collects every image embedded in the post (along with the post's main image, the one its thumbnail comes from) into a gallery, without my having to list each one out.
  • The “hero” carousel: At the top of the website's home page sits the “hero” carousel, a rotating set of “featured” articles. Each one gets some pretty colors, an alternate title, and a short description (distinct from the one you see in the page grid below the carousel). All of this is configured within the featured posts themselves: to add a post to the set, I simply add those settings to the page's content file. That lets me have as many featured posts as I want, and I never have to separate the “hero” summary from the content itself (as a software engineer, I hate splitting closely related things across separate files if I can possibly avoid it). A hedged front-matter sketch follows this list.
  • The audio player widget: The Roam Office is primarily concerned with audio, so it’s really useful to provide a way to show off how different things affect audio. For this, I built a completely custom audio player widget that I can embed into pages. This deceptively complex widget does a lot of neat things. For example, it supports playlists and displays the live frequency spectrum of the audio that is playing. It also has an “A/B” mode, which allows you to switch between items in the playlist without resetting the playhead to the start of the file. This allows you to seamlessly switch between “raw” and “processed” versions of the same audio.
  • Embedded apps (Morse Trainer): When I was messing around and created a tool to learn Morse code, I wanted a way to show it off without making people leave my website entirely. The app is built with React and only needs some CSS (to override its built-in theme) and a very small amount of HTML to embed it into any webpage. I was able to create a simple Hugo “shortcode” (a little snippet I can drop into a post's markdown to embed something more complex), which lets me drop the entire web app into a post, where it works perfectly and matches the look and feel of the rest of the website.
  • The page grid: This is the UI I built to allow users to browse the website. This is what you see on the home page, tag pages, series pages, etc. It’s generated dynamically from the post content.
  • RSS feeds: I actually have two RSS feeds: one is the simple RSS feed you’d expect from any blog. The second one is a simpler feed, containing only posts I want posted to social media sites. This allows me to have fine-grained control over how much spam I’m producing.
  • Link management: Links on my website are handled somewhat uniquely. I have a custom hook for these that allows me to do some cool stuff.
    • For example, to link to another post, I can target the post’s “slug” (the thing you usually see after the date in the URL) instead of having to point to its actual path. This is really useful if I restructure the website content and don’t want to update a bunch of links (which has happened a couple of times already).
    • If I know that a post will exist, but doesn’t exist yet, I can set the link target to future:whatever, and it will look like regular text until whatever actually exists, at which point it automatically becomes a link.
    • Affiliate links: In a lot of cases (especially reviews), I provide links to purchase the thing I’m reviewing. It should go without saying, but I don’t let any affiliate “relationships” (meaning I filled out a form once, and hope that someday I get $0.50 as a result) influence my reviews at all. For these links, I add the affiliate to an affiliates.yaml file that defines the link structure. So for a link going to https://whatever.com/products/super-cool-thing?aid=wowSuchAffiliate, I can simply link to affiliate:whatever/super-cool-thing in my post, allowing me to do things like update the affiliate link parameters without needing to change a bunch of spots.
    • In that same vein, I also have a links.yaml file, which lets me create short link references to whatever I want (including affiliate links). I can define a link in a central location (add some-link to links.yaml, pointing to whatever.com/something) and then use it in 10 posts as ref:some-link. If the target URL later changes to whatever.com/things/some, or if I want to turn it into an affiliate link, it's a single edit for me. (A sketch of both YAML files follows this list.)
  • Analytics: If you’re in the EU/EEA/UK/Switzerland, you’ll see the “are you cool with cookies” dialog that is now a part of daily life online. For my website, this is there because I use Google Analytics to see which posts are useful, popular, etc. It’s also just interesting to see where visitors are coming from (both in the world and on the internet)! The flow is: a country signal is set at the edge, front-end JavaScript checks that value plus stored consent, and then either shows the banner or loads Google Analytics. Outside those privacy regions, analytics is auto-enabled unless you manually reopen the consent banner and change your preference.
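
To give a flavor of how the hero carousel (and the gallery auto-collection) gets configured, here's a hedged front-matter sketch. The custom parameter names below (featured, heroTitle, heroSummary, heroColors, galleryFromPost) are hypothetical stand-ins for whatever the real theme uses; the point is that everything lives in the post's own content file:

```yaml
# Front matter for a featured post (all custom keys hypothetical).
title: "Deploying from the Desert"
date: 2026-05-05
featured: true                  # adds the post to the hero carousel
heroTitle: "CI/CD from a Motorhome"
heroSummary: "How this site deploys itself while I'm offline."
heroColors: ["#c2703d", "#1f2a36"]
galleryFromPost: true           # auto-collect embedded images into a gallery
```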
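
And here's roughly what the two link-management data files might look like. The file locations and key names are my guesses (Hugo convention would put them under data/), but the URLs match the examples above:

```yaml
# data/affiliates.yaml: defines how each affiliate's links are built.
whatever:
  base: "https://whatever.com/products/"
  params: "aid=wowSuchAffiliate"

# data/links.yaml: central registry of short link references.
some-link: "https://whatever.com/something"
```

With these in place, affiliate:whatever/super-cool-thing in a post expands to https://whatever.com/products/super-cool-thing?aid=wowSuchAffiliate, and ref:some-link resolves through links.yaml, so a future URL change is a single edit.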

The Engine: GitHub Actions

This is where the magic happens.

Instead of building the site on my laptop, I use version control (Git) to track my changes. When I finish an article or build a new feature (like the MorseMaster trainer), I commit those raw text files and original images to a repository on GitHub.

Then, GitHub Actions takes over.

I wrote a YAML workflow file that tells GitHub’s servers exactly what to do the moment it receives my code. It looks roughly like this:

  1. Spin up a virtual server: GitHub provisions an Ubuntu Linux machine in the cloud.
  2. Install Hugo: It downloads the specific version of the Hugo compiler I need.
  3. Checkout my code: It pulls in my raw Markdown files and images.
  4. The Heavy Lift (The Build): It runs the hugo command. This is the crucial step. The cloud server processes all of my image shortcodes, taking my massive 5MB source images, resizing them, cropping them, converting them to the next-gen WebP format, and generating tiny thumbnails.
  5. Deploy: Finally, it takes that beautifully optimized, compiled website and pushes it to my hosting provider.
  6. Social Media: This step actually starts just before the deploy: the workflow downloads the live social media RSS feed and compares it to the freshly generated version. If the two differ, it fires off an HTTP request to Make.com, where I have a custom workflow that downloads the feed, looks for new posts, and then creates posts on Facebook and Instagram for anything new.
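
Condensed into an actual workflow file, that sequence looks something like this. This is a trimmed sketch, not my exact file: the Hugo version, bucket name, distribution ID, feed path, and secret names are all hypothetical placeholders, and I've ordered the social-feed snapshot and webhook around the deploy in the way that seems safest.

```yaml
# .github/workflows/deploy.yml (sketch)
name: Build and deploy
on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Install Hugo
        uses: peaceiris/actions-hugo@v3
        with:
          hugo-version: "0.125.4"
          extended: true

      - name: Build the site
        run: hugo --minify

      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-1

      - name: Snapshot the live social feed (pre-deploy)
        run: curl -fsSL "https://example.com/social.xml" -o live.xml || true

      - name: Deploy to S3 and invalidate CloudFront
        run: |
          aws s3 sync public/ s3://example-bucket --delete
          aws cloudfront create-invalidation \
            --distribution-id E123EXAMPLE --paths "/*"

      - name: Ping Make.com if the social feed changed
        run: |
          # If the freshly built feed differs from the pre-deploy
          # snapshot, fire the webhook so Make.com picks up new posts.
          if ! cmp -s live.xml public/social.xml; then
            curl -fsS -X POST "$MAKE_WEBHOOK_URL"
          fi
        env:
          MAKE_WEBHOOK_URL: ${{ secrets.MAKE_WEBHOOK_URL }}
```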

On top of that, I have a second workflow for the preview site. It's nearly identical, except it runs hugo with the -D and -F flags (build drafts and future-dated posts) and overrides the site domain name, so I end up with a fully functional version of the website that includes draft and future posts.
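
The preview build step, for comparison, is just the same hugo invocation with extra flags and an overridden base URL (the preview domain here is a placeholder):

```yaml
# The preview workflow's build step; -D builds drafts, -F builds
# future-dated posts, and --baseURL repoints all generated links.
- name: Build the preview site
  run: hugo -D -F --minify --baseURL "https://preview.example.com/"
```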

The “Burst” Workflow

Because of this pipeline, my travel workflow is entirely decoupled from my internet connection.

I can sit at the dinette while my wife drives us down the highway, completely offline. I can write an entire post, organize the images, and test the layout on my local machine.

When we finally pass through a town or hit the crest of a hill and my phone gets a momentary burst of 5G, I hit git push.

I am often only sending tiny commits (mostly text and maybe a couple of raw images), so the push itself is usually just a few seconds.

The moment that code hits GitHub, my laptop’s job is done. I can slam it shut and lose signal again. In the background, GitHub’s servers (which have backbone connections to the rest of the internet) spend roughly the next minute crunching images, building HTML, and deploying it to the world.

Planning Ahead

Often, it's hard to find the time to sit down and write a post. Other times, it's super easy. To keep a consistent cadence for this side project, I try to maintain a nice backlog of content, normally writing things a month or so out (right now I'm running a bit behind, so I'm writing about three weeks ahead).

My workflow for this is to write the initial draft with the post's draft property set to true and let it stew for a day or two so I can revisit it a few times, figure out images and links, and so on. I also set the post's date to whenever I want it to publish (typically a Tuesday, for no reason other than consistency). Together, these give me two separate safeguards against a post accidentally going live before I'm ready, while making it immediately available on the preview site. The preview site gives me a nice spot to verify that the post renders correctly, links work, and so on.
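
In the front matter, those two safeguards are just a couple of standard Hugo fields (the values here are a sketch):

```yaml
# Front matter for a scheduled post: draft keeps it out of the
# production build entirely, and the future date keeps it out until
# the publish day arrives.
title: "Deploying from the Desert"
date: 2026-05-05T08:00:00-07:00
draft: true
```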

When I'm satisfied, I set the post's draft property to false. At this point, the post still just sits there until its future date arrives. My GitHub Actions CI/CD workflow is set up to run automatically every morning, so when the day comes, the publish date has finally arrived and the build picks the post up: it generates the static files, adds them to the website, updates the RSS feeds, posts to social media, and so on.
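
The scheduled run is just an extra trigger on the same workflow; the cron time here is illustrative:

```yaml
# The deploy workflow fires on pushes and once every morning, so
# future-dated posts publish themselves when their day arrives.
on:
  push:
    branches: [main]
  schedule:
    - cron: "0 14 * * *"   # 14:00 UTC, morning-ish in the western US
```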

It runs so smoothly now that most weeks I actually forget something was scheduled to post until I randomly stumble across it on Instagram or somewhere, which is always a fun surprise.

The backlog and schedule remove a lot of stress and effort; I just need to set aside enough time every once in a while to keep a nice backlog of content ready to go.

Why This Matters

As a software engineer, it is easy to get caught up in the romance of “vibe coding” with AI or arguing over the newest JavaScript framework. But out here, practicality wins every time.

Automation isn’t just about saving time; it is about resilience.

When your physical environment is unpredictable—when the power at the RV park fluctuates, or the desert wind knocks your Starlink out of alignment—you need your digital environment to be bulletproof. By offloading the fragile, bandwidth-heavy tasks to the cloud, I ensure that The Roam Office stays online, even when the actual office is miles away from civilization.

The Good

  • Total offline capability for writing and local testing.
  • Minimal bandwidth required to push updates (perfect for spotty cellular).
  • Zero server maintenance—static files hosted on edge networks.
  • Images are automatically optimized and resized by the CI server.

The Bad

  • Steep learning curve if you aren’t familiar with Git or YAML.
  • No traditional CMS (Content Management System) interface.
  • If the build script fails in the cloud, you still have to get back online to read the error logs to fix it.