Converting Cyrillic UTF-8 text encoded as Latin-1

This may be obvious to some, but recognizing a character encoding at a glance is not always easy.

For example, pronunciation files downloaded from Forvo have filenames that look something like this:

pronunciation_ru_оÑ‚бывание.mp3

How can we extract the actual word from this gibberish? Optimally, the filename should reflect the actual word uttered in the pronunciation file, after all.

Step 1 - Extracting the interesting bits

The gibberish begins after the pronunciation_ru_ prefix and ends before the file extension. Any regex tool can tease that out.

This is what I did in the shell:

echo $fn | perl -CSD -pe 's/pronunciation_ru_(.*)\.mp3/$1/gm;'
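The same extraction can also be sketched in Python with the standard re module (the filename below is a made-up placeholder, not an actual Forvo download):

```python
import re

# Strip the Forvo prefix and the .mp3 extension,
# keeping only the (still-garbled) word in between.
fn = 'pronunciation_ru_slovo.mp3'
m = re.match(r'pronunciation_ru_(.*)\.mp3', fn)
if m:
    print(m.group(1))  # slovo
```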

Now we are left with оÑ‚бывание and the question of what kind of strange encoding this is.

Step 2 - Figuring out character encoding

Obviously, this text is rendered in some Latin character set; since the Russian language does not use one, we have some work to do. The task of unraveling this is easier when you can visualize the hex character codes laid out. A simple Python script makes that easy:

#!/usr/bin/env python3

word = 'оÑ‚бывание'

print(":".join("{:02x}".format(ord(c)) for c in word))

Running this little script, we see d0:be:4e:303:82:d0:b1:4e:303:8b:d0:b2:d0:b0:d0:bd:d0:b8:d0:b5.

Immediately we can begin to discern a pattern - lots of D0 codes followed by something else. It’s beginning to look like Unicode. So, on macOS, I fired up the character viewer from the menu bar and drilled down to the Cyrillic (or Unicode) section. Look up any Cyrillic character, for example ж:

Aha! The Cyrillic range includes characters whose first byte is D0, so now it’s just a matter of lining up two-byte groups and reading them as UTF-8. The first character would be D0 BE - which, according to the table, is a lowercase Cyrillic о.
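We can verify this reading directly in Python - a quick sanity check that decodes the two raw bytes as UTF-8:

```python
# The byte pair D0 BE, read as UTF-8, is the lowercase
# Cyrillic о (U+043E).
print(bytes([0xD0, 0xBE]).decode('utf8'))  # о
```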

However, one complication remains. What is happening where the sequence is broken? There is an interruption in the two-byte reading frame that begins with the sequence 4e:303:82, and it happens again with the sequence 4e:303:8b. The first step is to figure out the common portion 4e:303. Back in the character viewer table, we find that 4E is the Latin capital letter N. So what about 303? Using the search feature of the Character Viewer, we easily see that U+0303 is a combining tilde - a symbol that combines with the character that immediately precedes it. So what we have is not just Cyrillic UTF-8 characters encoded in Latin symbols, but with the additional oddity of a decomposed Ñ character. If we search for that character, we find that it is D1. So the sequence isn’t really interrupted; it’s just an issue of how Ñ is composed.
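As an aside, Unicode normalization can handle this composition for us. A minimal sketch: NFC normalization recombines N (U+004E) + combining tilde (U+0303) into the precomposed Ñ (U+00D1). (The decomposed form likely crept in because macOS file systems store filenames in a decomposed normalization form.)

```python
import unicodedata

# NFC normalization composes N + combining tilde (U+0303)
# into the single precomposed character Ñ (U+00D1).
decomposed = '\u004E\u0303'
composed = unicodedata.normalize('NFC', decomposed)
print(composed)  # Ñ
```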

Step 3 - Reading N + combining tilde as u'\u00D1'

This just requires substituting one UTF-8 sequence for another. In Python, this will work:

# strange issue where Ñ (\u00D1) is intended
# but is encoded as N + tilde. Obviously this 
# is meaningless in terms of UTF-8 encoding
# but we have to deal with it before the decoding
# takes place.
word = word.replace(u'\u004E\u0303',u'\u00D1')

Step 4 - Putting it all together

After correcting the odd Ñ composition, we can simply decode the text as UTF-8, but we have one little twist first. .decode('utf8') is a method on bytes objects, not on str. So we first have to encode the string as 'latin1' - which maps each character in the U+0000–U+00FF range back to a single byte of the same value - then decode those bytes as UTF-8.

tr_word = word.encode('latin1').decode('utf8')
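A quick round-trip check illustrates why this works - the same latin1/utf8 pair that created the mojibake also undoes it (a sketch using a known Russian word):

```python
# Simulate the corruption: UTF-8 bytes mis-decoded as Latin-1...
original = 'отбывание'
mojibake = original.encode('utf8').decode('latin1')

# ...and reverse it: re-encode as Latin-1, decode as UTF-8.
restored = mojibake.encode('latin1').decode('utf8')
print(restored)  # отбывание
```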

rfndecode - a Python script to decode this form of encoding

#!/usr/bin/env python3

# rfndecode
# When downloading files from Forvo, we get file
# names that look like: .коÑ‚.mp3
# This puts the text into ordinary utf-8
# Input: Text to translate as argument or 
#        on stdin
# Output: Re-encoded text

import sys

# accept word as either argument or on stdin
try:
    word = sys.argv[1]
except IndexError:
    word = sys.stdin.readline().rstrip()

# check if this word is in the expected encoding;
# if not, pass it through unchanged
if word.find(u'\u00D0') == -1:
    print(word)
else:
    # strange issue where Ñ (\u00D1) is intended
    # but is encoded as N + tilde. Obviously this
    # is meaningless in terms of UTF-8 encoding
    # but we have to deal with it before the decoding
    # takes place.
    word = word.replace(u'\u004E\u0303', u'\u00D1')

    # convert string to bytes in latin script
    # then decode it as UTF-8
    tr_word = word.encode('latin1').decode('utf8')
    print(tr_word)

Shell script to extract the unencoded text and rename

Now it’s just a matter of connecting all the components, which I did in a small shell script.


#!/bin/bash

# extract the really messed-up name of the
# pronunciation file, passed as an argument
# or piped on stdin
if [ "$#" -gt 0 ]; then
  fn="$1"
else
  read fn
fi

tr_fn=$(echo "$fn" | perl -CSD -pe 's/pronunciation_ru_(.*)\.mp3/$1/gm;' | rfndecode).mp3
tr_fn=$(basename "$tr_fn")
printf "*** tr_fn = %s\n" "$tr_fn" >> "$HOME/wtf.txt"
mv "$fn" "$HOME/Documents/mp3/$tr_fn"

Undoubtedly, the mysterious encoding might have been obvious to some, but for me it was an illustration of how to approach technical problems: take them apart into the smallest discernible pieces, then apply what you know - even if limited in scope - to assemble the pieces into a comprehensive solution.

accentchar: a command-line utility to apply Russian stress marks

I’ve written a lot about applying and removing syllabic stress marks in Russian text because I use it a lot when making Anki cards.

This iteration is a command-line tool for applying the stress mark at a particular character index. The advantage of these little shell tools is that they are composable, integrating into different workflows as the need arises.


#!/bin/bash

while getopts i:w: flag; do
    case "${flag}" in
        i) index=${OPTARG};;
        w) word=${OPTARG};;
    esac
done

# use the word passed with -w, or read it from stdin
if [ -n "$word" ]; then
    temp=$word
else
    read temp
fi

outword=""
for (( i=0; i<${#temp}; i++ )); do
    thischar="${temp:$i:1}"
    if [ "$i" -eq "$index" ]; then
        # append the combining acute accent U+0301
        thischar=$(echo "$thischar" | perl -C -pe 's/(.)/\1\x{301}/g;')
    fi
    outword="$outword$thischar"
done

echo "$outword"

We can use it in a couple of different ways. For example, we can provide all of the arguments in a declarative way:

➜  cli accentchar -i 1 -w 'кошка'

Or we can pipe the word to accentchar and supply only the index as an argument:

➜  cli echo "кошка" | accentchar -i 1
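The same index-based accenting is easy to sketch in Python, should you want it inside a larger script (this accentchar function is a hypothetical helper, not the shell script above):

```python
# Insert the combining acute accent (U+0301) after the
# character at the given index.
def accentchar(word, index):
    return word[:index + 1] + '\u0301' + word[index + 1:]

print(accentchar('кошка', 1))  # ко́шка
```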

sterilize-ng: a command-line URL sterilizer

Introducing sterilize-ng [GitHub link] - a URL sterilizer made to work flexibly on the command line. Background The surveillance capitalist economy is built on the relentless tracking of users. Imagine going about town running errands but everywhere you go, someone is quietly following you. When you pop into the grocery, they examine your receipt. They look into the bags to see what you bought. Then they hop in the car with you and keep careful records of where you go, how fast you drive, whom you talk with on the phone.

Using Perl in Keyboard Maestro macros

One of the things that I love about Keyboard Maestro is the ability to chain together disparate technologies to achieve some automation goal on macOS. In most of my previous posts about Keyboard Maestro macros, I’ve used Python or shell scripts, but I decided to draw on some decades-old experience with Perl to do a little text processing for a specific need. Background I want this text from Wiktionary: to look like this:

Stripping Russian stress marks from text from the command line

Russian text intended for learners sometimes contains marks that indicate the syllabic stress. It is usually rendered as a vowel + a combining diacritical mark, typically the combining acute accent U+0301. Here are a couple of ways of stripping these marks on the command line: First is a version using Perl #!/bin/bash f='покупа́ешья́'; echo $f | perl -C -pe 's/\x{301}//g;' And then another using the sd tool: #!/bin/bash f='покупа́ешья́'; echo $f | sd "\u0301" "" Both rely on finding the combining diacritical mark and removing it with regex.
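For completeness, the same stripping is a one-line replace in Python (a sketch, assuming the stress marks are the combining acute accent U+0301):

```python
# Remove every combining acute accent from the string.
word = 'покупа́ешья́'
stripped = word.replace('\u0301', '')
print(stripped)
```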

Splitting a string on the command line - the search for the one-liner

It seems like the command line is one of those places where you can accomplish crazy efficient things with one-liners. Here’s a perfect use case for a CLI one-liner: In Anki, I often add lists of synonyms and antonyms to my vocabulary cards, but I like them formatted as a bulleted list. My usual route to that involves Markdown. But how to convert this: известный, точный, определённый, достоверный to - `известный` - `точный` - `определённый` - `достоверный` After trying to come up with a single text replacement strategy to make this work, the best I could do was this:

A Keyboard Maestro macro to edit Anki sound files

Often when I import a pronunciation file into Anki, from Forvo for example, the volume isn’t quite right or there’s a lot of background noise; and I want to edit the sound file. How? The solution for me, as is often the case, is a Keyboard Maestro macro. Prerequisites Keyboard Maestro - if you are a macOS power user and don’t have KM, then you’re missing out on a lot. Audacity - the multi-platform FOSS audio editor Outline of the approach Since Keyboard Maestro won’t know the path to our file in Anki’s collection.

Querying the Anki database when the application is running

When the Anki application is open on the desktop, it places a lock on the sqlite3 database such that it can’t be queried by another process. One workaround is to try to open the database and if it fails, then make a temporary copy and query that. Of course, this only works with read-only queries. Here’s the basic strategy: #!/usr/local/bin/python3 # -*- coding: utf-8 -*- # requires python >= 3.8 to run because of anki module from anki import Collection, errors if __name__ == "__main__": try: col = Collection(path_to_anki_db) except (errors.

Normalizing spelling in Russian words containing the letter ё

The Russian letters ё and е have a complex and troubled relationship. The two letters are pronounced differently, but usually appear the same in written text. This presents complications for Russian learners and for text-to-speech systems. In several recent projects, I have needed to normalize the spelling of Russian words. For example, if I have the written word определенно, is the word actually определенно? Or is it определённо?

Scraping Russian word definitions from Wiktionary: utility for Anki

While my Russian Anki deck contains around 27,000 cards, I’m always making more. (There are a lot of words in the Russian language!) Over the years, I’ve become more and more efficient with card production but one of the missing pieces was finding a code-readable source of word definitions. There’s no shortage of dictionary sites, but scraping data from any site is complicated by the ways in which front-end developers spread the semantic content across multiple HTML tags arranged in deep and cryptic hierarchies.