Using fswatch to dynamically update Obsidian documents

Although I’m a relative newcomer to Obsidian, I like what I see, especially the templating and data access functionality - both that provided natively and through the Templater and Dataview plugins.

One missing piece is the ability to dynamically update the YAML-formatted metadata in the frontmatter of Obsidian’s Markdown documents. Several threads on both the official support forums and on r/ObsidianMD have addressed this, and there seems to be no real solution.1 None of the proposed solutions - mainly via Dataview inline queries or Templater dynamic commands - works consistently.

The solution proposed here is a proof-of-concept for an entirely different way of addressing the problem. But it requires getting your hands dirty with more command-line programming than many may want to contend with. If you’re game, the basic idea is to watch the vault directories for changes and update the YAML directly, outside of Obsidian.

Use case

I have a YAML field mdate: that holds the date the file was last modified. Whenever the file is touched, I would like the mdate: field updated. Here’s a sample of my frontmatter:

---
uid:	20210517060102
cdate:	2021-05-17 06:01 
mdate:  2022-11-18 20:06
type:	zettel
---

Straightforward, right?

Solution

As I was unable to implement a solution inside Obsidian, I turned to fswatch, a cross-platform filesystem watcher. When certain events occur in a watched directory, it reports them in user space.

#!/bin/bash

FP="path/to/my/vault"

function update_mdate() {
   FILE="$1"
   if uname | grep -q "Darwin"; then
      # BSD stat formats the modification time directly
      MODDATE=$(stat -f "%Sm" -t "%Y-%m-%d %H:%M" "$FILE")
      SED=(sed -i '' -E)   # BSD sed requires an explicit (empty) backup suffix
   else
      # GNU coreutils stat: -c %y prints e.g. "2022-11-18 20:06:01.000000000 -0500",
      # so trim it to "YYYY-MM-DD HH:MM"
      MODDATE=$(stat -c "%y" "$FILE" | cut -c1-16)
      SED=(sed -i -E)      # GNU sed takes -i with no argument
   fi
   "${SED[@]}" "s/(modification date:).*/\1  $MODDATE/g" "$FILE"
   "${SED[@]}" "s/(mdate:).*/\1  $MODDATE/g" "$FILE"
}

/usr/local/bin/fswatch -0 --format="%p|%f" "$FP" | while read -r -d "" event; do
   [[ $event =~ ".DS_Store" ]] && continue
   [[ $event =~ "IsDir" ]] && continue 
   [[ ! $event =~ "Updated" ]] && continue
   
   # ignore anything that's not a Markdown file
   [[ ! $event =~ ".md" ]] && continue 
   
   # ignore file removal events
   [[ $event =~ "Removed" ]] && continue
   # ignore swap file bs
   [[ $event =~ ".swp" ]] && continue
   
   # Ignore what may be swap files that Obsidian uses
   UPDATED_FILE=$(echo "$event" | cut -d "|" -f1)
   [[ ! $event =~ ".!" ]] && update_mdate "$UPDATED_FILE"
done

I’ll try to explain the highlights of the code above. The main loop wraps the fswatch invocation. I won’t go into depth on the --format parameter, but essentially we ask for the file that was altered and the list of events it triggered.
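To make the parsing concrete, here is an illustrative sketch of what one event line might look like and how the script splits it apart. The path and flag names below are made up for the example; the actual flags depend on your platform’s monitor.

```shell
#!/bin/bash
# An event line as emitted by --format="%p|%f": %p is the changed path and
# %f is the list of event flags. (This sample line is illustrative.)
event="/path/to/my/vault/note.md|Updated IsFile"
echo "$event" | cut -d "|" -f1   # → /path/to/my/vault/note.md
echo "$event" | cut -d "|" -f2   # → Updated IsFile
```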

Most of the remaining logic of this loop is to filter out unwanted events, directories, and files. For example:

[[ ! $event =~ "Updated" ]] && continue

ensures that only Updated events will be processed. The rest of these filters are fairly self-explanatory.

One additional feature of these event and file filters does need to be mentioned: it appears Obsidian writes some kind of temporary swap file before committing changes to the main file. These files follow a naming convention - the main filename prefixed with “.!” - so the following logic only processes files that do not match this pattern:

# Ignore what may be swap files that Obsidian uses
UPDATED_FILE=$(echo "$event" | cut -d "|" -f1)
[[ ! $event =~ ".!" ]] && update_mdate "$UPDATED_FILE"

Updating the mdate

The logic for updating the modified date parameter is embedded in the Bash function update_mdate.

function update_mdate() {
   FILE="$1"
   if uname | grep -q "Darwin"; then
      # BSD stat formats the modification time directly
      MODDATE=$(stat -f "%Sm" -t "%Y-%m-%d %H:%M" "$FILE")
      SED=(sed -i '' -E)   # BSD sed requires an explicit (empty) backup suffix
   else
      # GNU coreutils stat: -c %y prints e.g. "2022-11-18 20:06:01.000000000 -0500",
      # so trim it to "YYYY-MM-DD HH:MM"
      MODDATE=$(stat -c "%y" "$FILE" | cut -c1-16)
      SED=(sed -i -E)      # GNU sed takes -i with no argument
   fi
   "${SED[@]}" "s/(modification date:).*/\1  $MODDATE/g" "$FILE"
   "${SED[@]}" "s/(mdate:).*/\1  $MODDATE/g" "$FILE"
}

Most of the complexity in the date-updating function is in handling stat differently depending on the platform: macOS ships the BSD version of stat, while Linux uses the coreutils version. (BSD sed likewise requires an explicit empty backup suffix with -i, where GNU sed takes -i alone.) After parsing the last-modified date from stat, we use sed to splice it into the document. Some of my documents use mdate: and others modification date:, so we handle both. I haven’t had the chance to test the Linux branch thoroughly, but I believe it should work.
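As a quick sanity check, the sed substitution can be tried on a single frontmatter line in isolation (the replacement date here is just an example):

```shell
#!/bin/bash
# The capture group \1 keeps the key; everything after it on the line is
# replaced with the freshly computed modification date.
echo "mdate:  2021-05-17 06:01" \
   | sed -E "s/(mdate:).*/\1  2022-11-18 20:06/"
# → mdate:  2022-11-18 20:06
```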

Then we just have to keep the script running. On my macOS machine, I use LaunchControl, a GUI application for launchd, to set the script up as a User Agent. If you’re comfortable with launchd, you can also write the plist and load it directly.
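For reference, a minimal launchd property list for such a User Agent might look like the following sketch; the label and script path are placeholders you would substitute with your own.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>local.obsidian.mdate-watcher</string>
    <key>ProgramArguments</key>
    <array>
        <string>/bin/bash</string>
        <string>/path/to/watch-vault.sh</string>
    </array>
    <key>RunAtLoad</key>
    <true/>
    <key>KeepAlive</key>
    <true/>
</dict>
</plist>
```

Saved to ~/Library/LaunchAgents/ and loaded with launchctl load, this starts the watcher at login and restarts it if it exits.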


  1. For example, this thread on the official Obsidian forums discusses the issue of using dynamic queries in the YAML. One solution offered was to embed the dynamic command in apostrophes: ‘<%+ tp.file.last_modified_date() %>’. This did not work in my case, nor did it work for at least one other respondent. As of right now, I don’t think there’s a good solution, apart from the approach suggested in this article, if you want the frontmatter YAML to update dynamically. ↩︎

Week functions in Dataview plugin for Obsidian

There are a couple of features of the Dataview plugin for Obsidian that aren’t documented and are potentially useful.

For the start of the week, use date(sow), and for the end of the week, date(eow). Since there’s no documentation as of yet, I’ll venture a guess that they are locale-dependent. For me (in Canada), sow is Monday. Since I do my weekly notes on Saturday, I have to subtract a couple of days to point to them.

`="[[" + dateformat(date(sow) - dur(2 days), "yyyy-MM-dd") + " weekly" + "|Week]]"`

This inline Dataview function will provide a link to my weekly summary document.

Scraping Forvo pronunciations

Most language learners are familiar with Forvo, a site that allows users to download and contribute pronunciations for words and phrases. For my Russian studies, I make daily use of the site. In fact, to facilitate my Anki card-making workflow, I am a paid user of the Forvo API. But that’s where the trouble started.

When the Forvo API works, it works - though often extremely slowly. But lately, it has been down more than up. In an effort to patch my workflow and keep downloading Russian word pronunciations, I wrote this little scraper. I’d prefer to use the API, but experience has now shown that it is slow and unreliable. I’ll keep paying for API access, because I support what the company does. When a company offers a free service, as often as not it’s involved in surveillance capitalism; I’d rather companies offer a reliable product at a reasonable price.

A regex to remove Anki's cloze markup

Recently, someone asked a question on r/Anki about converting an existing cloze-type note to a regular note. Part of the solution involves stripping the cloze markup from the existing cloze’d field. A cloze sentence has the form Play {{c1::stupid}} games. or Play {{c1::stupid::pejorative adj}} games.

To handle both of these cases, the following regular expression will work. Just replace each match with the capture group $1.

\{\{c\d::([^:\}]+)(?:::+[^\}]*)*\}\}

However, the Cloze Anything markup is different: it uses ( and ) instead of curly braces. If we want to flexibly remove both the standard and the Cloze Anything markup, then our pattern would look like:

[\{\(]{2}c\d::([^:\}\)]+)(?:::+[^\}\)]*)*[\}\)]{2}

Anki: Insert the most recent image

I make a lot of Anki cards, so I’m on a constant quest to make the process more efficient. Like a lot of language-learners, I use images on my cards where possible in order to make the word or sentence more memorable.

Process

When I find an image online that I want to use on the card, I download it to ~/Documents/ankibound. A Hazel rule then grabs the image file and converts it to a .webp file with relatively low quality and a maximum horizontal dimension of 200px. The size and low quality allow me to store lots of images without overwhelming storage capacity, or more importantly, resulting in long synchronization times.
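The conversion step of such a rule boils down to a single cwebp invocation. This is a sketch, not my exact rule: the quality setting and filenames are illustrative.

```shell
# Convert the downloaded image to a low-quality WebP no wider than 200px.
# With -resize, a height of 0 preserves the aspect ratio.
cwebp -q 50 -resize 200 0 downloaded-image.jpg -o downloaded-image.webp
```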

Altering Anki's revlog table, or how to recover your streak

Anki users are protective of their streak - the number of consecutive days they’ve done their reviews. Right now, for example, my streak is 621 days. If you miss a day for whatever reason, not only do you have to deal with double the number of reviews, but you also have to deal with the emotional toll of having lost your streak.

You can lose your streak for one of several reasons. You could simply have been lazy. You may have forgotten to do your Anki. Or travel across time zones may have put you in a situation where Anki’s clock and your clock differ. Others have described a procedure for resetting the computer’s clock as a way of recovering a lost streak; it apparently works, though I haven’t tried it. Instead, I’ll focus on a technique that works directly with the Anki database.
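Concretely, the revlog table keys each review by its timestamp in epoch milliseconds (the id column doubles as the primary key), so the trick is inserting a row whose id falls inside the missed day. The sketch below runs against a scratch database; the inserted values (card id, ease, and so on) are placeholders. On a real collection, back up collection.anki2 first.

```shell
#!/bin/bash
# Demonstrate the streak-recovery idea on a throwaway SQLite database
# using the revlog schema from Anki's collection file.
DB=$(mktemp)
sqlite3 "$DB" "CREATE TABLE revlog (
   id integer primary key, cid integer not null, usn integer not null,
   ease integer not null, ivl integer not null, lastIvl integer not null,
   factor integer not null, time integer not null, type integer not null);"
# noon yesterday, in milliseconds (GNU date; the BSD fallback is simply 24h ago)
TS=$(( $(date -d 'yesterday 12:00' +%s 2>/dev/null || date -v-1d +%s) * 1000 ))
# placeholder review row "performed" during the missed day
sqlite3 "$DB" "INSERT INTO revlog VALUES ($TS, 1, -1, 3, 1, 0, 2500, 5000, 0);"
sqlite3 "$DB" "SELECT count(*) FROM revlog WHERE id >= $TS - 43200000;"   # → 1
rm "$DB"
```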

A deep dive into my Anki language learning: Part III (Sentences)

Welcome to Part III of a deep dive into my Anki language learning decks. In Part I, I covered the principles that guide how I set up my decks and the overall deck structure. In the lengthy Part II, I delved into my vocabulary deck. In this installment, we’ll cover my sentence decks.

Principles

First, sentences (and still larger units of language) should eventually take precedence in language study. What help is it to know the word for “tomato” in your L2 if you don’t know how to slice a tomato, how to eat a tomato, how to grow a tomato plant? Focusing on larger units of language increases your success rate in integrating vocabulary into daily use.

A deep dive into my Anki language learning: Part II (Vocabulary)

In Part I of my series on my Anki language-learning setup, I described the philosophy that informs my Anki setup and touched on the deck overview. Now I’ll tackle the largest and most complex deck(s), my vocabulary decks.

First, some FAQs about my vocabulary deck:

  1. Do you organize it as L1 → L2 or as L2 → L1, or both? Actually, it’s both and more. Keep reading.
  2. Do you have separate subdecks by language level, or source, or some other characteristic? No, it’s just a single deck. I’m perpetually confused by how subdecks work; I’d rather they acted as purely organizational, not functional, tools, but other users don’t see it that way. That’s why I use tags rather than subdecks to organize content.1
  3. Do you use frequency lists? No, I extract words from content that I’m reading, words that I encounter when listening to movies or podcasts, or words that my tutor mentions in conversation. That’s what goes into Anki.

Since this is a big topic, I’m going to start with a quick overview of the fields in the main note type that populates my vocabulary deck and then go into each one in more detail and how they fit together in each of my many card types. At the very end of the post, I’ll talk about verb cards, which are similar in most ways to the straight vocabulary cards but which account for the complexities of the Russian verbal system.2

A deep dive into my Anki language learning: Part I (Overview and philosophy)

Although I’ve been writing about Anki for years, it’s been in bits and pieces - solving little problems, creating efficiencies. But I realized that I’ve never taken a top-down approach to my Anki language learning system. So consider this post the launch of that overdue effort.

Caveats

A few caveats at the outset:

  • I’m not a professional language tutor or pedagogue of any sort, really. Much of what I’ve developed, I’ve done through trial and error, some intuition, and some reading on relevant topics.
  • People learn differently and have different goals. This series will be exclusively focused on language-learning. There are similarities between this type of learning and the memorization of bare facts. But there are important differences, too.
  • As I get further and further into the details, more and more of what I discuss will be macOS-specific. I’m not particularly opinionated about operating systems; my preference has more to do with the accumulated weight of what I’m accustomed to and, as a consequence, the potential pain of switching. In the sections that deal with macOS-specific solutions, feel free to skip over that content or read it with a view toward parallel tools on whatever OS you are using.
  • I use Anki almost exclusively for Russian language acquisition and practice. Of necessity, some particularities of the language are going to dictate the specific issues that you need to solve for. For example, if verbs of motion aren’t part of the grammar of your target language (TL), then rather than getting lost in those weeds, think about what unique counterparts your TL does have and how you might adapt the approaches I’m presenting.

With that out of the way, let’s dive in!

A tool for scraping definitions of Russian words from Wiktionary

In my perpetual attempt to make my Anki-based language learning process more efficient, I’ve written a tool to extract English-language definitions of Russian words from Wiktionary. I wrote about the idea previously in Scraping Russian word definitions from Wiktionary: utility for Anki, but that approach relied on the WiktionaryParser module, which is good but misses some important edge cases. So I rolled up my sleeves and crafted my own solution. As with WiktionaryParser, the heavy lifting is done by the Beautiful Soup parser. Much of the logic of this tool is around detecting the edge cases I mentioned. For example, the underlying HTML format changes when we’re dealing with a word that has multiple etymologies versus a single etymology. Whenever you’re doing web scraping, you have to account for those sorts of variations.