Unleash the Power of Translation with AI Professional

Enter the world of AI-powered translation with AI Professional for Trados Studio. This powerful tool combines the capabilities of Azure and OpenAI's language models to revolutionize your translation projects. With features like Translation Provider for batch tasks automation, the AI Companion for enhanced translation capabilities, and Terminology-aware Translation Suggestions for precision and consistency, AI Professional is your trusted ally in the realm of language. Your very own language wizard without the pointy hat and questionable fashion choices.

AI Professional is already available through the AppStore integration in Trados Studio, or directly from our platform here.

Key Features

Translation Provider - Your Efficiency Ally
Imagine having a personal assistant who can take care of mundane tasks while you sip your coffee and bask in the glory of your superiority. With AI Professional, the Translation Provider integration automates batch tasks like analysis and pre-translation, boosting productivity. Just sit back and let it do its thing while you focus on the important stuff.

AI Companion - Your Sidekick in Translation
Every superhero needs a sidekick, right? Well, meet your trusty translation sidekick - the AI Companion. This little marvel resides in the Editor view and is packed with superpowers to simplify the translation process, letting you customize the Translation Suggestions with specific prompts and settings. Not only can it highlight translated terms, it can also provide a visual comparison of changes, making it easier for you to identify and track modifications made during translation. It's like having a team of linguists brainstorming ideas for you, minus the office drama and coffee stains on your keyboard.

Terminology-aware
The Translation Provider & AI Companion also bring terminology awareness to the table. They can incorporate terminology context with the prompt when requesting translations, resulting in more precise and contextually relevant suggestions. Say goodbye to those cringe-worthy moments when you realize your translation didn't quite hit the mark. This smart integration ensures consistency and precision in all your translated content.

Visualize the Magic
Sometimes, you just need a visual representation to truly grasp the magnitude of the magic happening behind the scenes. The AI Companion's visual display of translated terms and comparison changes helps you track and identify modifications made during the translation process. It's the magical highlighter that illuminates your changes and helps you maintain control over your work. No more getting lost in a sea of text revisions!

Tags and Formatting - Neat and Tidy
We're all familiar with the headache of having to reintroduce tags and formatting after applying translation suggestions. Ever tried to fit a square peg in a round hole? It's worse! AI Professional has your back with full roundtrip support for tags in the XML source/target content. It saves you time and effort by seamlessly preserving your tags and formatting, leaving you with more time to rearrange your sock drawer.
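Tag round-tripping is usually done by masking inline tags with placeholders before the text goes to the language model, then restoring them afterwards. This is a minimal illustrative sketch of that general technique, not the plugin's actual implementation:

```python
import re

# Replace inline tags with numbered placeholders before sending text to
# the LLM, then restore the original tags in the translated result.
TAG_PATTERN = re.compile(r"</?[^<>]+>")

def extract_tags(source: str):
    """Swap each tag for a placeholder like {0}, {1}, ... and keep the originals."""
    tags = []
    def repl(match):
        tags.append(match.group(0))
        return "{%d}" % (len(tags) - 1)
    return TAG_PATTERN.sub(repl, source), tags

def restore_tags(translation: str, tags):
    """Put the original tags back in place of the placeholders."""
    for i, tag in enumerate(tags):
        translation = translation.replace("{%d}" % i, tag)
    return translation

masked, tags = extract_tags("Click <b>Save</b> to continue.")
# masked == "Click {0}Save{1} to continue."
translated = "Fare clic su {0}Salva{1} per continuare."  # hypothetical LLM output
print(restore_tags(translated, tags))
# Fare clic su <b>Salva</b> per continuare.
```

Because the model only ever sees neutral placeholders, it cannot mangle or drop the markup, and the tags come back exactly where the translation puts them.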

We Simplify, You Shine
We've also revamped the content structure included with the prompts to simplify how you interact with the OpenAI technology. We want to make sure that what you communicate is fully understood by our language models, so we've made it easier to refer to elements in the content structure. No more deciphering mind-boggling hieroglyphics.

TransUnit: The translation unit; must contain 1 <Source> element and can contain 1 <Translation> element and/or 1 <Terms> element. 
Source: The source segment content.
Translation: The target segment content.
Terms: A list of terms that are matched against the source segment from the default Terminology Provider that is attached to the project.

<TransUnit>
  <Source Language="en-US">source text</Source>
  <Translation Language="it-IT">translated text</Translation>
  <Terms>
    <Term>
      <Source>source term</Source>
      <Translation>translated term</Translation>
      <Status>Preferred</Status>
    </Term>
  </Terms>
</TransUnit>
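The structure above can be assembled programmatically. This is a hypothetical helper (the element names follow the documented structure; the function itself is not part of the plugin) showing how such a TransUnit might be built:

```python
import xml.etree.ElementTree as ET

def build_trans_unit(source_text, source_lang, target_lang, terms=()):
    """Build the documented <TransUnit> structure as an XML string.

    `terms` is an iterable of (source_term, translated_term, status) tuples
    matched from the project's Terminology Provider.
    """
    unit = ET.Element("TransUnit")
    source = ET.SubElement(unit, "Source", Language=source_lang)
    source.text = source_text
    # Empty <Translation> element: the model fills in the target content.
    ET.SubElement(unit, "Translation", Language=target_lang)
    if terms:
        terms_el = ET.SubElement(unit, "Terms")
        for src_term, tgt_term, status in terms:
            term = ET.SubElement(terms_el, "Term")
            ET.SubElement(term, "Source").text = src_term
            ET.SubElement(term, "Translation").text = tgt_term
            ET.SubElement(term, "Status").text = status
    return ET.tostring(unit, encoding="unicode")

xml = build_trans_unit("source text", "en-US", "it-IT",
                       terms=[("source term", "translated term", "Preferred")])
print(xml)
```

Embedding the matched terms alongside the source segment is what lets the model honour your terminology when it produces the translation.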

Befriending the Rate Limits
It's important to be aware of the rate limits that OpenAI imposes on the number of times a user can access their services within a specified period of time. When it comes to models like gpt-4 and gpt-3.5-turbo, OpenAI has set limits on the number of tokens and requests you can send per minute. These thresholds are in place to maintain optimal performance and prevent overload on the system.

How do these rate limits work?
Rate limits are measured in five ways: RPM (requests per minute), RPD (requests per day), TPM (tokens per minute), TPD (tokens per day), and IPM (images per minute). Rate limits can be hit across any of the options depending on what occurs first. For example, you might send 20 requests with only 100 tokens to the ChatCompletions endpoint and that would fill your limit (if your RPM was 20), even if you did not send 150k tokens (if your TPM limit was 150k) within those 20 requests.
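The worked example above can be expressed as simple arithmetic. This small sketch (a hypothetical helper, shown only to make the per-minute limits concrete) checks which limit a burst of requests trips first:

```python
def first_limit_hit(requests, tokens_per_request, rpm, tpm):
    """Return which per-minute limit a burst of requests trips first."""
    if requests >= rpm:
        return "RPM"
    if requests * tokens_per_request >= tpm:
        return "TPM"
    return "none"

# 20 requests of 100 tokens each: the request limit fills at 20 RPM,
# even though only 2,000 of the 150,000 allowed tokens were used.
print(first_limit_hit(20, 100, rpm=20, tpm=150_000))  # RPM
```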

Other important things worth noting:
  - Rate limits are imposed at the organization level, not user level.
  - Rate limits vary by the model being used.
  - Limits are also placed on the total amount an organization can spend on the API each month. These are also known as "usage limits".
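When a limit is exceeded, the API answers with an HTTP 429 error, and the usual client-side remedy is to retry with exponential backoff. This is a generic sketch of that pattern (`send_request` and `RateLimitError` are stand-ins for whatever client and exception type you actually use, not OpenAI SDK names):

```python
import random
import time

class RateLimitError(Exception):
    """Stand-in for the rate-limit (HTTP 429) error your client raises."""

def with_backoff(send_request, max_retries=5, base_delay=1.0):
    """Call send_request, retrying on rate-limit errors with exponential backoff."""
    for attempt in range(max_retries):
        try:
            return send_request()
        except RateLimitError:
            if attempt == max_retries - 1:
                raise  # give up after the last attempt
            # Wait 1s, 2s, 4s, ... plus a little jitter before retrying.
            time.sleep(base_delay * 2 ** attempt + random.uniform(0, 0.1))
```

Backoff with jitter spreads retries out so a burst of clients hitting the limit at the same moment does not all retry in lockstep and hit it again.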

Rate limits - Tier 1

OpenAI publishes the current per-model Tier 1 thresholds in its rate limits documentation.

It's time to embrace efficiency, accuracy, and the occasional sprinkle of magic in your translations. Say hello to smoother workflows, fewer headaches, and more time to enjoy life's little pleasures. So go ahead, unleash the power of AI Professional and let your translations shine!

  • Setup and connection are OK but unfortunately, Studio crashes while opening a translatable file when this plugin is active.
    Object reference not set to an instance of an object.

    <SDLErrorDetails time="20.05.2024 14:04:59">
    <ErrorMessage>Object reference not set to an instance of an object.</ErrorMessage>
    <Exception>
    <Type>System.NullReferenceException, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089</Type>
    <HelpLink/>
    <Source>AIProfessional</Source>
    <HResult>-2147467261</HResult>
    <StackTrace>
<![CDATA[ at AIProfessional.ViewModel.TranslationViewModel.GetTranslationCacheItem() at AIProfessional.ViewModel.TranslationViewModel.<>c__DisplayClass126_0.<OnChangeSegment>b__0() at System.Windows.Threading.ExceptionWrapper.InternalRealCall(Delegate callback, Object args, Int32 numArgs) at System.Windows.Threading.ExceptionWrapper.TryCatchWhen(Object source, Delegate callback, Object args, Int32 numArgs, Delegate catchHandler) at System.Windows.Threading.DispatcherOperation.InvokeImpl() at System.Windows.Threading.DispatcherOperation.InvokeInSecurityContext(Object state) at MS.Internal.CulturePreservingExecutionContext.CallbackWrapper(Object obj) at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx) at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx) at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state) at MS.Internal.CulturePreservingExecutionContext.Run(CulturePreservingExecutionContext executionContext, ContextCallback callback, Object state) at System.Windows.Threading.DispatcherOperation.Invoke() at System.Windows.Threading.Dispatcher.ProcessQueue() at System.Windows.Threading.Dispatcher.WndProcHook(IntPtr hwnd, Int32 msg, IntPtr wParam, IntPtr lParam, Boolean& handled) at MS.Win32.HwndWrapper.WndProc(IntPtr hwnd, Int32 msg, IntPtr wParam, IntPtr lParam, Boolean& handled) at MS.Win32.HwndSubclass.DispatcherCallbackOperation(Object o) at System.Windows.Threading.ExceptionWrapper.InternalRealCall(Delegate callback, Object args, Int32 numArgs) at System.Windows.Threading.ExceptionWrapper.TryCatchWhen(Object source, Delegate callback, Object args, Int32 numArgs, Delegate catchHandler) at
System.Windows.Threading.Dispatcher.LegacyInvokeImpl(DispatcherPriority priority, TimeSpan timeout, Delegate method, Object args, Int32 numArgs) at MS.Win32.HwndSubclass.SubclassWndProc(IntPtr hwnd, Int32 msg, IntPtr wParam, IntPtr lParam) at System.Windows.Forms.UnsafeNativeMethods.DispatchMessageW(MSG& msg) at System.Windows.Forms.Application.ComponentManager.System.Windows.Forms.UnsafeNativeMethods.IMsoComponentManager.FPushMessageLoop(IntPtr dwComponentID, Int32 reason, Int32 pvLoopData) at System.Windows.Forms.Application.ThreadContext.RunMessageLoopInner(Int32 reason, ApplicationContext context) at System.Windows.Forms.Application.ThreadContext.RunMessageLoop(Int32 reason, ApplicationContext context) at Sdl.TranslationStudio.Application.Launcher.RunApplication() ]]>
    </StackTrace>
    </Exception>
    <Environment>
    <ProductName>Trados Studio</ProductName>
    <ProductVersion>Studio17</ProductVersion>
    <EntryAssemblyFileVersion>17.2.8.18668</EntryAssemblyFileVersion>
    <OperatingSystem>Microsoft Windows 10 Pro</OperatingSystem>
    <ServicePack>NULL</ServicePack>
    <OperatingSystemLanguage>1055</OperatingSystemLanguage>
    <CodePage>1254</CodePage>
    <LoggedOnUser>DESKTOP-I50F5UF\Selçuk</LoggedOnUser>
    <DotNetFrameWork>4.0.30319.42000</DotNetFrameWork>
    <ComputerName>DESKTOP-I50F5UF</ComputerName>
    <ConnectedToNetwork>True</ConnectedToNetwork>
    <PhysicalMemory>16665664 MB</PhysicalMemory>
    </Environment>
    </SDLErrorDetails>
  •   

We're making some significant updates to our AI Professional plugin for Trados Studio 2024 (including the fact there won't be an AI Professional plugin anymore). In Trados Studio 2024, the AI Assistant part of AI Professional will become Trados Copilot AI Assistant. This new feature will support multiple LLM (Large Language Model) providers, not just OpenAI or Azure OpenAI models. This means that developers will be able to create and integrate their own AI providers through separate plugins. This approach gives us the flexibility to offer a wide range of LLMs as they become available. The machine translation component will be delivered as an MT provider, similar to how we currently handle NMT providers. This change will streamline the integration process, benefiting everyone (users and developers) as they will automatically get the ability to use all the AI Assistant features for whichever model they choose to install.

    Our initial release for Trados Studio 2024 will include support for GPT-4o, which is already available to our Beta Community. It's worth noting that while GPT-4o is accessible to ChatGPT free users, delivering it in a plugin is different. Using the API involves more complexity than using the web interface as a chat client and it also requires a subscription. So it won't be free.

    Moving forward, the groundwork we've laid will allow us to quickly add new AI providers. We're looking into supporting LLMs from various sources over the coming months such as Google (Gemini, etc.), Amazon (Anthropic, etc.), and Hugging Face with its extensive library of models.

    These changes bring several benefits. The ability to integrate multiple LLM providers means you're not confined to a single source for AI assistance. This flexibility ensures you can use the most advanced models available, tailored to your specific needs. By adopting a modular approach, we can easily incorporate future advancements in AI technology. As new models and providers emerge, they'll fit seamlessly into the Trados ecosystem.

    With these enhancements in both AI assistance and machine translation, you can expect more refined and contextually accurate translations. The AI feedback loop will be more interactive, offering real-time suggestions and improvements. Plus, the inclusion of diverse LLM providers enriches the AI ecosystem within Trados Studio, fostering innovation and offering you a wider range of tools and resources to enhance your translation projects.

    We believe these changes will enhance your translation experience and continue to position Trados Studio at the forefront of AI integration in the translation industry. We’re looking forward to your feedback when we do release and are committed to continuously improving based on your needs. If you do have ideas I'd recommend using the ideas forum so we can automatically get this into our backlog for consideration:

     

     

If you do have more questions, I'd also recommend you use the forums, as adding them into a blog like this makes it very difficult to follow and respond to:

    https://community.rws.com/product-groups/trados-portfolio/rws-appstore/f