Interlinear glossing
Dealing with punctuation (PUNCT) nodes in interlinear glossing
Still thinking about interlinear glossing for my language learning project. The leipzig.js library is great, but my use case isn’t really what the author had in mind. I really just need to display a unit consisting of the word as it appears in the text, the lemma for that word form, and (possibly) the part of speech. For academic linguistics purposes, what I have in mind is completely non-standard.
The other issue with leipzig.js for my use case is that I need to be able to respond to click events on individual words so that they can be tagged, defined, or otherwise worked with. It’s not straightforward how I could apply id attributes to word-level elements to support that functionality.
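As a sketch of the kind of markup I have in mind, each display unit could be rendered as a word-level element carrying its own id for a click handler to target. This is purely illustrative: the class and id names are my own assumptions, not leipzig.js output.

```python
def gloss_unit(index: int, word: str, lemma: str, pos: str = "") -> str:
    """Render one interlinear unit: surface form, lemma, optional POS.

    The id on the outer element gives a click handler something to
    target. Class and id naming here are illustrative assumptions.
    """
    pos_row = f'<div class="gloss-pos">{pos}</div>' if pos else ""
    return (
        f'<div class="gloss-unit" id="w{index}">'
        f'<div class="gloss-word">{word}</div>'
        f'<div class="gloss-lemma">{lemma}</div>'
        f"{pos_row}"
        "</div>"
    )

print(gloss_unit(0, "столкну́л", "столкнуть", "VERB"))
```

A click listener attached to each `gloss-unit` id would then give per-word access for tagging or defining.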
I have written previously about stripping syllabic stress marks from Russian text using a Perl-based regex tool. But I needed a means of doing it solely in Python, so this just extends that idea.
#!/usr/bin/env python3

def strip_stress_marks(text: str) -> str:
    b = text.encode('utf-8')
    # correct error where Latin accented ó is used
    b = b.replace(b'\xc3\xb3', b'\xd0\xbe')
    # correct error where Latin accented á is used
    b = b.replace(b'\xc3\xa1', b'\xd0\xb0')
    # correct error where Latin accented é is used
    b = b.replace(b'\xc3\xa9', b'\xd0\xb5')
    # correct error where Latin accented ý is used
    b = b.replace(b'\xc3\xbd', b'\xd1\x83')
    # remove combining diacritical mark U+0301
    b = b.replace(b'\xcc\x81', b'')
    return b.decode('utf-8')
text = "Том столкну́л Мэри с трампли́на для прыжко́в в во́ду."
print(strip_stress_marks(text))
# prints "Том столкнул Мэри с трамплина для прыжков в воду."
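For comparison, the combining accent can also be removed at the str level with the standard unicodedata module. This is a sketch of an alternative, not the approach above; note that, unlike the byte-level version, it does not convert stray Latin look-alike vowels into their Cyrillic counterparts — it only strips the accent.

```python
import unicodedata

def strip_stress_nfd(text: str) -> str:
    # NFD decomposition splits any precomposed character into
    # its base character plus combining marks
    decomposed = unicodedata.normalize("NFD", text)
    # drop the combining acute accent (U+0301), then recompose
    return unicodedata.normalize("NFC", decomposed.replace("\u0301", ""))

print(strip_stress_nfd("Том столкну́л Мэри с трампли́на для прыжко́в в во́ду."))
```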
The approach is similar to the Perl-based tool we constructed before, but this time we are working on the bytes object after encoding the string as UTF-8. Since the bytes object has a replace method, we can use that to do all of the work. The first four replacements deal with edge cases where accented Latin characters are used to show the placement of syllabic stress instead of the Cyrillic character plus the combining diacritical mark. In these cases, we just need to substitute the proper Cyrillic character. Then we strip out the “combining acute accent” U+0301 → \xcc\x81 in UTF-8. After these replacements, we just decode the bytes object back to a str.
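To see where those particular byte strings come from, it helps to inspect the encodings directly (a quick illustrative check):

```python
# Cyrillic о (U+043E) followed by the combining acute accent (U+0301)
print("\u043e\u0301".encode("utf-8"))  # b'\xd0\xbe\xcc\x81'

# Latin ó (U+00F3), the look-alike that the first replacement corrects
print("\u00f3".encode("utf-8"))        # b'\xc3\xb3'
```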
Previously, I wrote about adding syllabic stress marks to Russian text. Here’s a method for doing the opposite, that is, removing such marks (ударение, “stress”) from Russian text.
Although there may well be a more sophisticated approach, simple search-and-replace is well-suited to this task. The problem is that each stressed vowel is not a single character: it is the base Cyrillic vowel followed by the combining acute accent (U+0301), so we need to replace each two-character sequence with the bare vowel.
def string_replace(replacements, text):
    # each key is a two-character sequence: a Cyrillic vowel
    # followed by the combining acute accent (U+0301)
    for old, new in replacements.items():
        text = text.replace(old, new)
    return text

replacements = { "а́" : "а", "е́" : "е", "о́" : "о", "у́" : "у",
                 "я́" : "я", "ю́" : "ю", "ы́" : "ы", "и́" : "и",
                 "ё́" : "ё", "А́" : "А", "Е́" : "Е", "О́" : "О",
                 "У́" : "У", "Я́" : "Я", "Ю́" : "Ю", "Ы́" : "Ы",
                 "И́" : "И", "Э́" : "Э", "э́" : "э"
}

print(string_replace(replacements, "Существи́тельные в шве́дском обычно де́лятся на пять склоне́ний."))
This should print: Существительные в шведском обычно делятся на пять склонений.
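A quick check confirms that each dictionary key really is a two-character sequence, which is why a plain single-character translation table would not work here:

```python
key = "\u0438\u0301"  # "и́": Cyrillic и followed by the combining acute accent
print(len(key))                      # 2
print([hex(ord(ch)) for ch in key])  # ['0x438', '0x301']
```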