Eduardo Trujillo
2 minutes

If you have a Linux laptop or desktop with a solid-state drive, and happen to have disk encryption enabled through a LUKS/LVM combo, getting TRIM support working isn’t very straightforward: it has to be enabled at every IO layer.

In their blog post, How to properly activate TRIM for your SSD on Linux: fstrim, lvm and dm-crypt, Carlos Lopez gives a brief introduction on what TRIM is, and explains why it is beneficial to enable it. The article also describes the steps needed to enable this functionality on each IO layer (dm-crypt, LVM, and the filesystem).
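For reference, here is a rough sketch of what the two lower layers involve. The device names are made up, and the exact options are distribution-specific, so follow the guide above for the details:

```
# /etc/crypttab -- the discard option lets dm-crypt pass TRIM requests
# through to the underlying device (hypothetical device name).
cryptroot  /dev/sda2  none  luks,discard

# /etc/lvm/lvm.conf -- tell LVM to issue discards when freeing space,
# for example on lvremove.
devices {
    issue_discards = 1
}
```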

I followed most of this guide on one of my own systems. While I took the author’s advice and avoided enabling the discard flag on the filesystem, I never set up a cron job to run the trim operation periodically, so I found myself executing fstrim by hand every now and then.

This quickly became repetitive, so I began looking into automating it. The guide above had an example setup using cron. However, I never set up a cron daemon on my system, so I wondered if it was possible to achieve the same result using systemd.

After reading some documentation on systemd unit files, I learned that it is possible to set up timers for your service units, which effectively achieves the same result as a cron daemon.

Below I’m including an fstrim service and timer. The service mainly specifies which command to run, and the timer defines how often it should be executed. Note that the service unit does not have a WantedBy option and its Type is oneshot. This means it won’t be executed automatically, and that it is intended to be a one-off command, not a daemon. The timer does have a WantedBy option, which will result in it being started at boot.

I can check the status of the timer by using systemctl list-timers, and also run the operation on demand by starting the service unit: systemctl start fstrim. The logs are stored in the journal, which can be queried with journalctl -u fstrim.
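Assuming the units below are installed as fstrim.service and fstrim.timer, day-to-day interaction looks roughly like this (these commands need a running systemd, so run them on the machine itself):

```shell
# Enable and start the timer so it survives reboots.
systemctl enable --now fstrim.timer

# List active timers, including when fstrim will next run.
systemctl list-timers

# Run the trim operation on demand, without waiting for the timer.
systemctl start fstrim.service

# Inspect past runs in the journal.
journalctl -u fstrim.service
```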


This is the service file. Here you can customize how fstrim is invoked. I use the -a and -v options, which tell fstrim to automatically run on every drive and print verbose output. Additionally, this assumes fstrim is installed at /sbin/fstrim.

[Unit]
Description=Run fstrim on all drives

[Service]
Type=oneshot
ExecStart=/sbin/fstrim -av

In this configuration, the fstrim command is executed by root 15 minutes after booting the machine and weekly afterwards.

[Unit]
Description=Run fstrim on boot and weekly afterwards

[Timer]
OnBootSec=15min
OnUnitActiveSec=1w

[Install]
WantedBy=timers.target



Eduardo Trujillo
3 minutes

In a previous post, I covered how to set up continuous integration for Haskell projects using a combination of Stack, Travis CI, and Docker. But what about documentation?

Sweet auto-generated docs!

If you already have CI set up for your project with Stack and Travis CI, it is actually pretty easy. In fact, you can make use of GitHub’s Pages feature, which serves static sites from the contents of the gh-pages branch, to host your documentation for free.

The following is a bash script I’ve been using on a couple of projects on GitHub. It collects the documentation and coverage reports generated while building the application with Stack’s --haddock and --coverage flags, respectively.


# .travis/
# Make sure you install the Travis CLI and encrypt a GitHub API token with
# access to the repository: `travis encrypt GH_TOKEN=xxxxxxx --add`.
# This script is meant to be run during the `after_success` build step.

# Copy haddocks to a separate directory.
mkdir -p ../gh-pages
cp -R "$(stack path --local-doc-root)" ../gh-pages
cp -R "$(stack path --local-hpc-root)" ../gh-pages
cd ../gh-pages

# Set identity.
git config --global ""
git config --global "Travis"

# Add branch.
git init
git remote add origin https://${GH_TOKEN}@github.com/${TRAVIS_REPO_SLUG}.git > /dev/null
git checkout -B gh-pages

# Push generated files.
git add .
git commit -m "Haddocks updated"
git push origin gh-pages -fq > /dev/null

To do its job, the script relies on some neat tricks/hacks to keep the process as simple as possible:

  • Stack: When the --haddock and --coverage flags are used, Stack places documentation and coverage reports at specific paths, which you can query with stack path. In the script above, we ask for each path individually, so the output of the program is a single line containing just the requested path. This avoids having to guess where the compiler placed the documentation and related files.
  • GitHub API tokens: Managing SSH keys inside a build job is doable, but not easy. Thankfully, GitHub allows you to push commits to a repository using just an API token.
  • Travis CI encrypted variables: These allow us to conveniently store the aforementioned token in a secure manner and easily access it as an environment variable while the job runs. We do have to redirect output to /dev/null in a couple of places so the token is not leaked into the build logs.
  • Bare Git branch: Given that keeping track of history is not a priority, and that old history could break the build process if somebody else pushed to the documentation branch, we simply keep a single commit on the gh-pages branch. This is easily done by initializing a new repository, committing, and force-pushing into the branch.
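The single-commit trick from the last bullet can be sketched in isolation (file names and identity here are placeholders; the real script force-pushes to GitHub instead of stopping locally):

```shell
set -e
# Work in a throwaway directory so the sketch is self-contained.
tmp=$(mktemp -d)
cd "$tmp"

# A freshly initialized repository has no history to preserve.
git init -q
git config user.email "ci@example.invalid"  # placeholder identity
git config user.name "CI"

# Pretend this is the generated documentation.
echo "<html></html>" > index.html

# -B creates (or resets) the branch; the later force-push replaces
# whatever was on gh-pages before.
git checkout -qB gh-pages
git add .
git commit -qm "Haddocks updated"

# The branch always contains exactly one commit.
git rev-list --count gh-pages
```

Because the repository is recreated on every build, the force-push can never conflict with the remote branch.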

If you would like to see a public working example, check out this repository and its build logs on Travis CI. The resulting documentation is available as a GitHub Pages website, and coverage reports can be found under /hpc on the site.

Eduardo Trujillo
2 minutes

This week I’m flying to Colorado along with some co-workers for this year’s LambdaConf, which is one of the largest conferences focused on Functional Programming out there.

Unlike some conferences I’ve been to in the past, LambdaConf has multiple tracks covering multiple levels of experience. Each track has its own set of talks, which means that you won’t be able to attend all talks, but, and perhaps more importantly, you can sort of mix and match to create your own conference schedule!

So that’s exactly what I did. Below you can see which talks I’m hoping to attend during the conference. It’s a fairly balanced mix of all three tracks (beginner, intermediate, advanced) and a couple of alternatives.

I’m especially interested in the Urbit talk. I’ve read about the project in the past, and it almost sounded like something out of (Computer) Science Fiction.

Hands-on and Haskell-related talks also sound like a lot of fun. I’m definitely looking forward to writing my own Lisp interpreter, and learning more about the inner workings of Haskell and the advanced features of its type system.

NOTE: (1) Some talk names are shortened. (2) OR is used to signify parts of the schedule I’m unsure of.


  1. Breakfast
  2. Keynote
  3. Agda from Nothing OR Introduction to Non-Violent Communication
  4. Lunch
  5. Make Your Own Lisp Interpreter (Part 1)
  6. Afternoon Refresh
  7. Make Your Own Lisp Interpreter (Part 2)
  8. Closing Remarks


  1. Breakfast
  2. Keynote
  3. Type Kwon Do
  4. Lunch
  5. Functional Algebra
  6. Urbit
  7. Afternoon Refresh
  8. On The Shoulders of Giants
  9. Interactive Tests and Documentation via QuickCheck-style Declarations
  10. Recursion Schemes
  11. Closing Remarks
  12. Mystery Dinner


  1. Breakfast
  2. Keynote
  3. Functional Web Programming OR Computing vs Computers
  4. Panel: The Functional Front-End OR Computing vs Computers
  5. Who Let Algebra Get Funk with My Data Types?
  6. RankNTypes Ain’t Rank at All
  7. Lunch
  8. The Next Great Functional Programming Language
  9. Manuals are for Suckers
  10. MTL Versus Free
  11. Afternoon Refresh
  12. What Would Happen if REST Were Immutable?
  13. OOP Versus FP
  14. Functional Refactoring
  15. Closing Remarks


This is the unconference day. I’ll probably attend one or two of the earliest events, but I will have to take off later in the day given that I have a plane to catch.


Copyright © 2015-2021 - Eduardo Trujillo
Except where otherwise noted, content on this site is licensed under an Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0) license.
Site generated using Gatsby.