I use git to track cash expenses. (Don’t you?)

No, really. I track cash expenses from my phone by adding empty commits to a git repository, then pushing. (That’s my “sync” feature.) I still haven’t found a better way to automatically timestamp a bunch of simple one-line text entries that I can quickly sync off site. I use git with GitLab.

When I want to collect that information to process it, I need the output in a more convenient format than the default git log format. For that, I combine git log with jq to make magic happen.

$ git log master...offsite-backup/master --format='{"date": "%aI","transaction": "%s"}' | jq --slurp '. | sort_by(.date)' | jq -r '.[] | [.date, .transaction] | @csv' > ~/Downloads/cash-transactions.csv

I can make that easier to read.

$ git log master...offsite-backup/master \
    --format='{"date": "%aI","transaction": "%s"}' \
    | jq --slurp '. | sort_by(.date)' \
    | jq -r '.[] | [.date, .transaction] | @csv' \
    > $HOME/Downloads/cash-transactions.csv

In case not all of this feels familiar to you yet, here’s what each step does:

  • Give me all the commits that differ between my local master and the offsite backup (that’s what git’s triple-dot range selects). Before this, I ran git fetch offsite-backup to download the transactions.
  • Format the commits as a stream of JSON objects (in preparation for jq to process them), with the date in ISO 8601 format (is there any other format?) and using the entire commit comment as the transaction’s description.
  • Slurp the stream of objects into a JSON array, then sort it by date (ascending, by default). Maybe I could have asked git log to emit the commits oldest-first instead (--reverse looks like it would do it), which would eliminate this step, but I haven’t tried.
  • Grab the JSON output again, this time in “raw” mode, so that I can format it as CSV. (Raw mode matters here: without it, jq would wrap each CSV line in another layer of JSON string quoting.) First, map the JSON array back into a stream of rows, then map each object into an array of values, then format as CSV. In order to format as CSV, jq wants a stream of rows, each row as an array of values.
  • Pipe the output to a CSV file so that I can more easily import it into LibreOffice Calc.
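Incidentally, those two jq invocations can be merged into one. Here’s the combined filter run on two made-up sample records standing in for git log’s output (the dates and amounts are invented), so you can see the CSV it produces:

```shell
# One jq call instead of two: slurp the stream (-s), sort it,
# then emit each row as raw (-r) CSV.
printf '%s\n' \
    '{"date": "2024-01-02T10:00:00+00:00", "transaction": "tea 2.00"}' \
    '{"date": "2024-01-01T09:00:00+00:00", "transaction": "coffee 3.50"}' \
    | jq -rs 'sort_by(.date) | .[] | [.date, .transaction] | @csv'
# → "2024-01-01T09:00:00+00:00","coffee 3.50"
# → "2024-01-02T10:00:00+00:00","tea 2.00"
```

Note that @csv double-quotes string values itself, which is exactly what a spreadsheet importer expects.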

There you have a very tiny introduction to jq! I find it very handy: it does the things I used to write little Ruby programs to do. It’s my go-to tool for processing structured data, and I especially love how it turns the loosely structured output of standard shell tools into structured data that’s more suitable for complicated processing.

So… go learn about jq! Set aside an hour. You’ll be glad you did.