Genie in a Plugin: OpenAI Translator Whispering Translation Revolution

Greetings, Translator Troops!

You're probably as excited as we are about the latest update to our OpenAI Translator, which adds new OpenAI language models and connectivity to Azure OpenAI.

What's New?
The enhancements below are readily available through the AppStore integration in Trados Studio or directly from our platform here:
  - Compatibility with Azure OpenAI models, alongside OpenAI (the same models used by ChatGPT)
  - Advanced toolset for managing multiple model configurations
  - New feature allowing adjustment of the "Temperature" of the OpenAI models, for tailored output
  - Added feature to test model configuration connectivity, ensuring optimal performance
  - UI Automation (UIA) support for accessibility
  - Support for Shortcut actions, such as Apply Translation Suggestion and others

With these updates, your translation efforts are destined to become more streamlined. Explore these features today and take your translation work to the next level.


OpenAI Models

Building upon our integrations with GPT-4, GPT-3.5-turbo, and text-davinci-003, we have extended the OpenAI model selection to also include gpt-3.5-turbo-16k and gpt-3.5-turbo-instruct, each with unique capabilities.

"But why on earth would I choose GPT-4 model over GPT-3.5-turbo?!" I hear you muttering. Well, consider this: you might opt for speed and efficiency with tasks that require straight-ahead translations (sprinters GPT-3.5-turbo) or you might need a more nuanced, "deep thinker" model for complex literary pieces (marathoner GPT-4). The running track is yours, and you can choose the best sprinter or marathoner.

"Why would I choose a 16k model over the normal one?" This is rather like asking, "Why would I choose a 16th-century treasure map over a regular one?" By using the 16k model, you're opting for generous capacity and the ability to process larger chunks of text (up to 16,000 tokens, hence the name). This is particularly useful if you often work on wordy, information-dense documents or books, as it enables the model to consider more context when translating. On the other hand, the standard models handle fewer tokens but may speed up processing where efficiency is a priority. Just as with the GPT-4 and GPT-3.5-turbo models, it's all about selecting the best tool for the job.
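If you're curious how those token limits play out in practice, here is a minimal sketch, entirely outside the plugin, of how you might count tokens with the tiktoken package and pick a model accordingly. The 3,500-token threshold is an illustrative assumption, not a plugin setting.

```python
# Illustrative sketch only: estimate whether a text fits comfortably in the
# standard context window or warrants the 16k variant. The 3,500-token
# threshold below is an assumption chosen to leave room for prompt and output.
import tiktoken

def pick_chat_model(text: str) -> str:
    # Count tokens the same way the gpt-3.5-turbo family does.
    enc = tiktoken.encoding_for_model("gpt-3.5-turbo")
    tokens = len(enc.encode(text))
    return "gpt-3.5-turbo" if tokens < 3_500 else "gpt-3.5-turbo-16k"

print(pick_chat_model("A short segment to translate."))
```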

"But why should I switch to the gpt-3.5-turbo-instruct model?" you may ask. The reality is that the goto text-davinci-003 model is taking a well-deserved retirement, and will no longer be accessible from the 4th of January, 2024. Much like entrusting the baton to a reliable successor in a relay race, we recommend transitioning to the gpt-3.5-turbo-instruct. This successor doesn't only carry the baton forward; it runs the race with an amplified stride.

OpenAI provides comprehensive documentation to guide you in selecting the right model here. You should also keep an eye on the models that are being deprecated here.

Azure OpenAI

Next on the improvement docket: we've connected our plugin to Azure OpenAI models, in addition to OpenAI. Oversimplified, it's like having tickets to two top concerts instead of one: double the delight! The benefit? Azure OpenAI models shine whenever you require stronger security or higher volume, and yes, fine-tuning them is simple, like whistling a tune (just slightly more techie)!

To connect to an OpenAI model that you have deployed with Azure, select the provider 'AzureOpenAI' and add the Endpoint & API Key. It's that simple!


Note: The model will be selected automatically once you provide a valid Azure OpenAI deployment endpoint.
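For readers who also call their Azure OpenAI deployment outside Trados Studio, the same endpoint-and-key pair is all the official openai Python package needs. A rough sketch; the endpoint, key, deployment name, and API version below are placeholders for your own resource:

```python
# Illustrative only; the plugin handles this for you once you enter the
# Endpoint & API Key. Replace the placeholders with your own Azure values.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",
    api_key="<your-azure-openai-key>",
    api_version="2023-12-01-preview",
)

response = client.chat.completions.create(
    model="<your-deployment-name>",  # Azure uses the deployment name, not the raw model name
    messages=[{"role": "user", "content": "Translate to German: Good morning!"}],
)
print(response.choices[0].message.content)
```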


Other Feature Updates

Multiple model configurations are now just a fingertip away. We took this step after receiving and analyzing user feedback. Turns out, some of you enjoy change and switching up models more frequently than others. Who knew?

We've also added a feature to test model configuration connectivity, making sure it's smooth sailing – or should I say translating?! Early feedback suggests it helps enormously in setting up model configurations, as if we have equipped you with a compass to navigate the OpenAI islands.
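Under the hood, a connectivity check is conceptually as simple as making one lightweight, authenticated call and confirming it succeeds. A rough sketch of the idea (not the plugin's actual code), using the openai Python package:

```python
# Conceptual sketch of a connectivity test, not the plugin's implementation:
# one cheap authenticated request tells you whether the key and endpoint work.
from openai import OpenAI, OpenAIError

def can_connect() -> bool:
    try:
        client = OpenAI()     # reads OPENAI_API_KEY from the environment
        client.models.list()  # lightweight call; fails fast on a bad configuration
        return True
    except OpenAIError as err:
        print(f"Connection test failed: {err}")
        return False

print("OK" if can_connect() else "Check your configuration")
```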

We've also included a control that lets users modify the Temperature of the OpenAI models. This value controls randomness: lowering the temperature makes the model produce more repetitive and deterministic responses, while increasing it results in more unexpected or creative responses.
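To make the effect concrete, here is a hedged sketch of the same temperature parameter on a raw Chat Completions call (outside the plugin), comparing a conservative and a more adventurous setting; the two values are illustrative, not recommendations:

```python
# Illustrative only: the same "temperature" setting the plugin exposes,
# shown on a direct API call. 0.2 keeps wording consistent between runs;
# 1.2 invites more creative (and less predictable) phrasing.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
prompt = [{"role": "user",
           "content": "Translate to Spanish: The quick brown fox jumps over the lazy dog."}]

for temperature in (0.2, 1.2):
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=prompt,
        temperature=temperature,
    )
    print(f"temperature={temperature}: {response.choices[0].message.content}")
```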

Finally, we are introducing Shortcut actions like Apply Translation Suggestion, Select Next Translation Suggestion, and others. This innovation aims to streamline the translator's workflow by enabling a more natural way of working: you can navigate to and select the appropriate OpenAI translation suggestion without moving focus from the segment in the editor.

To infinity (or the end of your translation project) and beyond! Get the new Trados Studio 2022 plugin update today and join us on this translation revolution!

Happy Translating!