<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>Localization Engineering on Localization Times</title>
    <link>https://localizationtimes.com/categories/localization-engineering/</link>
    <description>Recent content in Localization Engineering on Localization Times</description>
    <generator>Hugo -- gohugo.io</generator>
    <language>en</language>
    <lastBuildDate>Sat, 05 Jul 2025 00:00:00 +0000</lastBuildDate><atom:link href="https://localizationtimes.com/categories/localization-engineering/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>Building Multilingual Glossaries With AI Support</title>
      <link>https://localizationtimes.com/blogs/building-glossaries-out-of-translation-memories/</link>
      <pubDate>Sat, 05 Jul 2025 00:00:00 +0000</pubDate>
      
      <guid>https://localizationtimes.com/blogs/building-glossaries-out-of-translation-memories/</guid>
      <description>Intro Recently, I was asked to find a way to build multilingual glossaries from translation memory content. My client had translated a full year of educational content into seven languages using professional translators.
The challenge was twofold:
Build all the required glossaries in a relatively short timeframe. Add a final human review step to validate the terms without breaking the bank. In other words, I had to create the glossaries programmatically, using a pre-built list of English terms (the source language), and then look up their translations, if any, in the translation memories following an &amp;ldquo;exact match&amp;rdquo; approach.</description>
    </item>
    
    <item>
      <title>Okapi Rainbow: Real Use Cases in Localization and Internationalization</title>
      <link>https://localizationtimes.com/blogs/okapi-rainbow-for-localization-tasks/</link>
      <pubDate>Thu, 10 Apr 2025 00:00:00 +0000</pubDate>
      
      <guid>https://localizationtimes.com/blogs/okapi-rainbow-for-localization-tasks/</guid>
      <description>Intro Although AI models now dominate our digital ecosystems and help localization folks move faster through production workflows, noble people in the software engineering space continue to build handy tools that take little effort to master and cost zero dollars and little computing power.
Okapi Framework&amp;rsquo;s Rainbow is a great example. In brief, Rainbow runs cascading batch tasks on text-based files. In other words, it gives you a convenient UI to mix and match Java text-manipulation steps.</description>
    </item>
    
  </channel>
</rss>
