Eduardo Trujillo
4 minutes

For a few years now, I’ve been running a small Kubernetes cluster to manage a handful of services, including this blog.

My setup has generally had a strong focus on cost savings and efficient resource usage, so I avoided anything that didn’t have a way to set spending limits or a predictable cost. It is just a hobby project, after all.

After a few iterations of this setup on DigitalOcean and acquiring some dedicated hardware, I eventually moved to hosting the cluster at home.

Setting up and maintaining a Kubernetes cluster is not a trivial task and requires some level of planning. However, once everything is up and running, it’s generally a very pleasant setup to work with.

One part of the puzzle is storage. For clusters hosted on a large provider like AWS or Google Cloud, you generally have a few storage options available like EBS and S3. With my homelab setup, however, I had to look for options that I could run locally.

Rook is a project that I’ve been following and using for a while to accomplish this. It allows you to run a storage layer on your cluster and provides multiple interfaces: Block Storage, Object Storage (i.e. S3-like), and Distributed Filesystems. It does all this by deploying and managing a Ceph cluster for you.

I’ve primarily used it for Block Storage support, but more recently I’ve begun to use its Object Storage gateway (RGW), which allows you to consume storage using an S3-compatible API.

When deployed, the default setup provides you with buckets accessible over a path-based API. However, if you have used an object storage service from a cloud provider, you’ve likely noticed that they generally provide DNS-based bucket access (e.g. {bucket_name}.{service_endpoint}). Similarly, many S3 clients seem to expect this DNS-based API.
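As a quick illustration of the difference, here are the URLs the two addressing styles produce for the same object (the endpoint, bucket, and key names here are hypothetical):

```shell
# Path-style vs. DNS-based (virtual-hosted) bucket addressing.
endpoint="buckets.example.com"
bucket="photos"
key="cat.png"

# Path-style: the bucket appears in the URL path.
echo "https://${endpoint}/${bucket}/${key}"
# → https://buckets.example.com/photos/cat.png

# DNS-based: the bucket becomes a subdomain of the endpoint.
echo "https://${bucket}.${endpoint}/${key}"
# → https://photos.buckets.example.com/cat.png
```

The DNS-based style is what requires the wildcard DNS, certificate, and ingress work described below.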

From reading the documentation of Ceph’s RGW, it seemed that there is some level of support for this, including serving static websites. So I set out to explore what it would take to get this working with Rook.

I eventually got this working using Rook 1.2 and Ceph Nautilus. Below are some of my notes on some of the steps I took.

DNS Records

For the DNS records themselves, I started by picking out a name for my storage service, and created two DNS entries:

  • buckets.chromabits.com: Points to one of my ingress nodes.
  • *.buckets.chromabits.com: A wildcard that also points to my ingress nodes.
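In zone-file terms, the two records look roughly like this (the TTL and IP are placeholders; 203.0.113.0/24 is a documentation-only range):

```
buckets.chromabits.com.    300  IN  A  203.0.113.10
*.buckets.chromabits.com.  300  IN  A  203.0.113.10
```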

Once set up, I verified that visiting random subdomains resolved correctly to my cluster’s ingress endpoints.

Certificates

In order to serve buckets over HTTPS, I needed a wildcard certificate. Fortunately, this is trivial to set up using cert-manager and Let’s Encrypt.

When creating the certificate CRD, I included the wildcard host in the list of dnsNames:

apiVersion: cert-manager.io/v1alpha2
kind: Certificate
metadata:
  name: buckets-chromabits-com
  namespace: rook-ceph
spec:
  secretName: buckets-chromabits-com-tls
  issuerRef:
    kind: ClusterIssuer
    name: letsencrypt-prod
  dnsNames:
    - 'buckets.chromabits.com'
    - '*.buckets.chromabits.com'

Ingress

Next up is routing requests to the RGW service. I had a preexisting ingress setup, so the main challenge was figuring out how to handle requests for a wildcard domain.

Upon some initial reading, it seemed that Kubernetes Ingresses don’t have a way to handle wildcard domains. However, after skimming through some issues on GitHub, I learned that it is possible to configure nginx-ingress to handle this case.

This is done through the server-alias annotation (nginx.ingress.kubernetes.io/server-alias) on the Ingress resource. I added '*.buckets.chromabits.com' to the annotations, and the ingress began handling requests for the subdomains.
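For reference, here is a sketch of what such an Ingress could look like. The object store name (and thus the rook-ceph-rgw-my-store service name), hostnames, and TLS secret are assumptions based on my setup; the server-alias annotation is what makes nginx-ingress answer for the wildcard subdomains:

```yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: rgw-buckets
  namespace: rook-ceph
  annotations:
    kubernetes.io/ingress.class: nginx
    # Accept requests for any bucket subdomain, not just the hosts in
    # spec.rules below.
    nginx.ingress.kubernetes.io/server-alias: '*.buckets.chromabits.com'
spec:
  tls:
    - hosts:
        - buckets.chromabits.com
      secretName: buckets-chromabits-com-tls
  rules:
    - host: buckets.chromabits.com
      http:
        paths:
          - path: /
            backend:
              serviceName: rook-ceph-rgw-my-store
              servicePort: 80
```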

Another option here would be to manually modify the ingress every time a new bucket is created, but that doesn’t scale well and only seems feasible if you plan to have a small number of buckets.

Rook Configuration

The last step is to configure the RGW to handle requests from these domains.

The documentation mentions that a domain can be set via rgw dns name in the daemon’s configuration. However, this didn’t seem like a simple change to implement using Rook, so I looked for alternatives.

I eventually learned that it is possible to specify one or more hostnames per zonegroup on the RGW, without having to mess with global settings. So, adding the hostnames I needed was just a matter of modifying the default zonegroup.

I deployed the Rook Toolbox container and used radosgw-admin zonegroup get default to get the configuration of the default zonegroup.

I stored the output in a JSON file and modified the JSON object to include a new hostname in the hostnames key:

radosgw-admin zonegroup get default > default.json
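After editing, the relevant portion of default.json looked something like this (heavily abridged; only the hostnames key changed, and the hostname is from my setup). Note that the RGW matches subdomains of a listed hostname automatically, so the wildcard itself does not need to be listed:

```json
{
  "name": "default",
  "hostnames": [
    "buckets.chromabits.com"
  ]
}
```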

Once satisfied with the changes, I applied them and restarted the RGW:

radosgw-admin zonegroup set --infile default.json
radosgw-admin period update --commit

After the RGW came back up, buckets began to resolve correctly via DNS (e.g. {bucket_name}.buckets.chromabits.com)!

Eduardo Trujillo
4 minutes

For the past few years, I’ve given this blog a fresh coat of paint every few months, even if I don’t post any new content. Sometimes it’s a major redesign or refactor, involving switching the underlying language or framework. I’ve gone from PHP, to a Node.js server, and back to basics, a static site. Other times, the changes are more subtle and in the background.

As time has allowed in the past few weeks, I’ve been working on modernizing the blog’s project. However, it’s not a departure from using Hakyll; in fact, it’s the opposite. My two main goals this round were to improve the visual appearance of the site and to make the code itself easier to build and deploy.

Improving the site’s appearance wasn’t something I planned thoroughly. After all, I’m a programmer first, designer second. I decided to go with an iterative approach. This worked well with my schedule because I have limited time to work on side projects, and I don’t necessarily work on the same project every time.

Yep, seems to look OK even on an iPhone X in landscape mode

Each iteration began with me looking at the current design and layout, and asking myself questions like “What can I make better?”, “What am I trying to achieve?”, and “How do I optimize the site for that?”. This was an interesting mental exercise because these are not things I always think about.

Next, I would make a few changes to the layout or styles, and play with them for a while. If I felt that it was an overall improvement, the change stayed, otherwise I rolled it back and tried something else.

Now, I realize I’m probably describing what is a generic iterative design process. What’s interesting to me is being able to spend time on it, given that my past redesigns were mostly driven by a change in the stack powering the site. This goes back to my decision to keep things simple and make my blog statically generated.

On the engineering side of things, my sole goal was to keep simplifying. There were also iterations, but different questions were asked (“Can I get rid of that?”). I basically looked at all the “moving pieces” and dependencies of the project used to generate and serve the blog, and tried to figure out what could be removed.

One big dependency of the project was Node.js. Even though the site is generated by Hakyll, I still needed a way to compile my SCSS files, along with a package manager to obtain dependencies like Foundation or Font Awesome. On top of that, the project was a bit stuck in the past. Dependencies were pulled by Bower rather than NPM, and it was using Gulp rather than Webpack.

I first tried to migrate to Webpack, but later decided that it wasn’t helping much. I still relied on Node. As I kept looking, I found two packages on Hackage for building Sass/SCSS projects using just Haskell via libsass bindings: hsass provides a Haskell API on top of these bindings, and hakyll-sass shows how to use this in a Hakyll context.

I added hsass as a dependency and took a similar approach to hakyll-sass without adding another dependency. A simple SCSS compiler for Hakyll can be defined in just a few lines:

import Control.Monad (join)
import Hakyll
import Text.Sass

sassCompiler :: SassOptions -> Compiler (Item String)
sassCompiler options = getResourceBody >>= compileSass options
  where
    compileSass :: SassOptions -> Item String -> Compiler (Item String)
    compileSass options item = join $ unsafeCompiler $ do
      result <- compileFile (toFilePath $ itemIdentifier item) options
      case result of
        Left sassError -> errorMessage sassError >>= fail
        Right result_ -> pure $ makeItem result_

On top of that, I also took a minimalistic approach with dependencies. I dropped most external resources, such as Google Analytics, Typekit, and MathJax, and replaced them with resources served from the site itself. This greatly simplifies CSP policies, reduces the number of requests a browser has to make to read this blog, and is privacy-friendly.

Tracking is just gone, and I don’t plan to add it back, unless the needs of the site change. Right now, it’s just overkill for this blog. Typekit was replaced with Inter UI, a gorgeous open source font. MathJax is now self-hosted rather than pulled using a CDN. Foundation and Font-Awesome were already self-hosted.

Dependency management through NPM was replaced with shallow Git submodules. Rather than cloning the entire repository of each dependency, Git fetches it at the specific version/commit needed.
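As a sketch, marking a submodule as shallow in .gitmodules tells git submodule update --init to clone with --depth 1, fetching only the pinned commit (the path and URL here are examples, not necessarily the blog’s actual dependencies):

```ini
# .gitmodules — the shallow flag makes `git submodule update` clone
# with --depth 1 instead of fetching full history.
[submodule "vendor/font-awesome"]
	path = vendor/font-awesome
	url = https://github.com/FortAwesome/Font-Awesome.git
	shallow = true
```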

Finally, you may have noticed that the sidebar is gone. I thought about it and came to the conclusion that I don’t use my Twitter that actively anymore. Now, there is nothing to steal focus from the core of the blog: the content.

With the sidebar gone, I still wanted a place to highlight content, and place links to other sections of the blog. The home page now has a “leaderboard” component. It’s an experiment. Over time, it may stay or leave. We’ll see.

Anyhow, I still haven’t talked about how all this is deployed. I hope I can get to it on a future post.

Eduardo Trujillo
2 minutes

Phabulous is a server written in Go capable of receiving event notifications from Phabricator, a suite of developer tools, and forwarding them to a Slack community, while also providing additional functionality through a bot interface.

You can interact with Phabulous over chat messages

The project started while I was working at Seller Labs and Phabricator was their repository hosting tool. We mainly wanted to have better integration with Slack, just like GitHub and Bitbucket had.

Over time, Seller Labs migrated to GitHub and other tools, so development on Phabulous slowed down a bit since I wasn’t using it on a daily basis any more.

However, this does not mean the project is dead. I’ve quietly been finding spare time to work on improving Phabulous, and it has received a few contributions through pull requests.

I recently landed a large refactor of the project which should make future contributions and extensions easier. I’ve reorganized how the code is structured to make better use of Go interfaces.

In a perfect world, I would have enough time to write an extensive test suite for the project, but given my limited time, I’ve only been able to cover certain simple parts of the project. The transition to interfaces has allowed me to improve the coverage since dependencies can now be easily mocked.

Another side effect that came naturally from this transition was the increased modularity of the code. Want to implement a connector for a different chat protocol? Or do you want to add a new command? Just implement the interfaces.

While still technically in beta, I’m happy to say that Phabulous has reached v3.0.0. With this new release, you can expect the following new features:

  • Experimental support for IRC: The bot is now able to connect and work over IRC networks. Functionality is almost on-par with what is available on Slack.
  • Modules: Commands and functionality are now split into modules. You can enable/disable them in the configuration file, as well as implement your own modules when forking the project.
  • Improved integration between Slack and Phabricator: Phabricator added a new authentication provider that allows you to sign in with your Slack account. Phabulous makes use of this integration through a new extension, which allows the bot to look up Slack account IDs over the Conduit API. This means the bot can properly mention users in chat using their Slack username rather than their Phabricator username.
  • Summon improvements: The summon command can now expand project members if a project is assigned as a reviewer of a revision. Additionally, the lookup algorithm has been optimized to perform fewer requests to the Conduit API.
  • Many other small fixes and improvements.

You can get the latest version of the bot by using Docker or by downloading the latest release on GitHub.


Copyright © 2015-2021 - Eduardo Trujillo
Except where otherwise noted, content on this site is licensed under an Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0) license.
Site generated using Gatsby.