Unleash the Power of Translation with AI Professional

Enter the world of AI-powered translation with AI Professional for Trados Studio. This powerful tool combines the capabilities of Azure and OpenAI's language models to revolutionize your translation projects. With features like Translation Provider for batch tasks automation, the AI Companion for enhanced translation capabilities, and Terminology-aware Translation Suggestions for precision and consistency, AI Professional is your trusted ally in the realm of language. Your very own language wizard without the pointy hat and questionable fashion choices.

AI Professional is already available through the AppStore integration in Trados Studio, or directly from our platform here.

Key Features

Translation Provider - Your Efficiency Ally
Imagine having a personal assistant who can take care of mundane tasks while you sip your coffee and bask in the glory of your superiority. With AI Professional, the Translation Provider integration automates batch tasks like analysis and pre-translation, boosting productivity. Just sit back and let it do its thing while you focus on the important stuff.

AI Companion - Your Sidekick in Translation
Every superhero needs a sidekick, right? Well, meet your trusty translation sidekick - the AI Companion. This little marvel resides in the Editor view and is packed with superpowers that simplify the translation process, letting you customize the Translation Suggestions with specific prompts and settings. Not only can it highlight translated terms, it can also provide a visual representation of comparison changes, making it easier for you to identify and track modifications made during translation. It's like having a team of linguists brainstorming ideas for you, minus the office drama and coffee stains on your keyboard.

Terminology-aware
The Translation Provider & AI Companion also bring terminology awareness to the table. They can incorporate terminology context into the prompt when requesting translations, resulting in more precise and contextually relevant suggestions. Say goodbye to those cringe-worthy moments when you realize your translation didn't quite hit the mark. This smart integration ensures consistency and precision in all your translated content.

Visualize the Magic
Sometimes you just need a visual representation to truly grasp the magnitude of the magic happening behind the scenes. The AI Companion's visual representation of translated terms and comparison changes helps you track and identify modifications made during the translation process. Think of it as a magical highlighter that illuminates your changes and helps you maintain control over your work. No more getting lost in a sea of text revisions!

Tags and Formatting - Neat and Tidy
We're all familiar with the headache of having to reintroduce tags and formatting after applying translation suggestions. Ever tried to fit a square peg in a round hole? It's worse! AI Professional has your back with full roundtrip support for tags in the XML source/target content. It saves you time and effort by seamlessly preserving your tags and formatting, leaving you with more time to rearrange your sock drawer.

We Simplify, You Shine
We've also revamped the content structure included with the prompts to simplify how you interact with the OpenAI technology. We want to make sure that what you communicate is fully understood by our language models, so we've made it easier to refer to elements in the content structure. No more deciphering mind-boggling hieroglyphics.

  - TransUnit: The translation unit; must contain one <Source> element and can contain one <Translation> element and/or one <Terms> element.
  - Source: The source segment content.
  - Translation: The target segment content.
  - Terms: A list of terms matched against the source segment from the default Terminology Provider attached to the project.

<TransUnit>
  <Source Language="en-US">source text</Source>
  <Translation Language="it-IT">translated text</Translation>
  <Terms>
    <Term>
      <Source>source term</Source>
      <Translation>translated term</Translation>
      <Status>Preferred</Status>
    </Term>
  </Terms>
</TransUnit>
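If you'd like a feel for how this structure could be assembled in code, here is a minimal C# sketch using System.Xml.Linq. Only the element and attribute names are taken from the structure above; the TermEntry class and the Build helper are purely illustrative and are not part of the plugin's API.

using System.Collections.Generic;
using System.Xml.Linq;

// Illustrative only: a tiny term entry holding the fields shown above.
public class TermEntry
{
    public string Source { get; set; }
    public string Translation { get; set; }
    public string Status { get; set; }
}

public static class TransUnitBuilder
{
    // Hypothetical helper: assembles the TransUnit structure described above.
    // Only the element and attribute names come from the documentation.
    public static XElement Build(string sourceText, string sourceLang,
        string translatedText, string targetLang, IList<TermEntry> terms)
    {
        var transUnit = new XElement("TransUnit",
            new XElement("Source", new XAttribute("Language", sourceLang), sourceText));

        if (!string.IsNullOrEmpty(translatedText))
        {
            transUnit.Add(new XElement("Translation",
                new XAttribute("Language", targetLang), translatedText));
        }

        if (terms != null && terms.Count > 0)
        {
            var termsElement = new XElement("Terms");
            foreach (var term in terms)
            {
                termsElement.Add(new XElement("Term",
                    new XElement("Source", term.Source),
                    new XElement("Translation", term.Translation),
                    new XElement("Status", term.Status)));
            }
            transUnit.Add(termsElement);
        }

        return transUnit;
    }
}

Calling Build("source text", "en-US", "translated text", "it-IT", terms).ToString() yields the same indented XML shown above.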

Befriending the Rate Limits
It's important to be aware of the rate limits that OpenAI imposes on the number of times a user can access their services within a specified period of time. When it comes to models like gpt-4 and gpt-3.5-turbo, OpenAI has set limits on the number of tokens and requests you can send per minute. These thresholds are in place to maintain optimal performance and prevent overload on the system.

How do these rate limits work?
Rate limits are measured in five ways: RPM (requests per minute), RPD (requests per day), TPM (tokens per minute), TPD (tokens per day), and IPM (images per minute). A rate limit can be hit on any of these measures, whichever threshold is reached first. For example, you might send 20 requests with only 100 tokens each to the ChatCompletions endpoint, and that would exhaust your limit (if your RPM was 20), even though you did not send 150k tokens (if your TPM limit was 150k) within those 20 requests.
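When a limit is exceeded, the API responds with HTTP 429, and the usual remedy is to retry with exponential backoff. Here is a minimal C# sketch under that assumption; the SendWithBackoffAsync helper, the attempt count, and the delays are illustrative and do not describe how AI Professional itself handles throttling.

using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

public static class RateLimitAwareClient
{
    private static readonly HttpClient Http = new HttpClient();

    // Hypothetical helper: posts a chat completion request and retries with
    // exponential backoff whenever the service answers 429 (rate limit hit).
    public static async Task<string> SendWithBackoffAsync(
        string apiKey, string requestJson, int maxAttempts = 5)
    {
        var delay = TimeSpan.FromSeconds(1);

        for (var attempt = 1; attempt <= maxAttempts; attempt++)
        {
            var request = new HttpRequestMessage(HttpMethod.Post,
                "https://api.openai.com/v1/chat/completions");
            request.Headers.Add("Authorization", "Bearer " + apiKey);
            request.Content = new StringContent(requestJson, Encoding.UTF8, "application/json");

            var response = await Http.SendAsync(request);

            if ((int)response.StatusCode != 429)
            {
                response.EnsureSuccessStatusCode();
                return await response.Content.ReadAsStringAsync();
            }

            // Rate limit reached: wait, then try again with a doubled delay.
            await Task.Delay(delay);
            delay = TimeSpan.FromSeconds(delay.TotalSeconds * 2);
        }

        throw new InvalidOperationException(
            "Rate limit still exceeded after " + maxAttempts + " attempts.");
    }
}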

Other important things worth noting:
  - Rate limits are imposed at the organization level, not user level.
  - Rate limits vary by the model being used.
  - Limits are also placed on the total amount an organization can spend on the API each month. These are also known as "usage limits".

Rate limits - Tier 1

The exact Tier 1 thresholds differ per model and change over time, so refer to OpenAI's rate limits documentation for the current requests-per-minute and tokens-per-minute values.
It's time to embrace efficiency, accuracy, and the occasional sprinkle of magic in your translations. Say hello to smoother workflows, fewer headaches, and more time to enjoy life's little pleasures. So go ahead, unleash the power of AI Professional and let your translations shine!

  • Hi all,

    We've been enjoying using this AI Professional plugin for a few weeks now.

    However, we've been encountering the following error message: "Jeton 'm98,72,0 l0, 2' inattendu rencontré à la position '8'." (unexpected token).

    What happens?
    The error message pops up and the whole Trados Studio 2022 application crashes (closes).

    When does it happen?
    When the AI Professional plugin is enabled. 
    It does not necessarily happen when we're using the plugin.
    In fact, it usually happens on projects for which we don't use the plugin at all.
    However, the plugin is enabled.

    Trados version: SDL Trados Studio 2022 Professional

     

    Please find the stack trace below:

    <SDLErrorDetails time="14/08/2024 15:15:24">
    <ErrorMessage>Jeton 'm98,72,0 l0, 2' inattendu rencontré à la position '8'.</ErrorMessage>
    <Exception>
    <Type>System.FormatException, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089</Type>
    <HelpLink />
    <Source>PresentationCore</Source>
    <HResult>-2146233033</HResult>
    <StackTrace><![CDATA[ à MS.Internal.AbbreviatedGeometryParser.ThrowBadToken()
    à MS.Internal.AbbreviatedGeometryParser.ReadNumber(Boolean allowComma)
    à MS.Internal.AbbreviatedGeometryParser.ReadPoint(Char cmd, Boolean allowcomma)
    à MS.Internal.AbbreviatedGeometryParser.ParseToGeometryContext(StreamGeometryContext context, String pathString, Int32 startIndex)
    à MS.Internal.Parsers.ParseStringToStreamGeometryContext(StreamGeometryContext context, String pathString, IFormatProvider formatProvider, FillRule& fillRule)
    à MS.Internal.Parsers.ParseGeometry(String pathString, IFormatProvider formatProvider)
    à System.Windows.Media.Geometry.Parse(String source)
    à AIProfessional.ViewModel.TranslationViewModel.GetTermBorder(String text, Color borderBrush)
    à AIProfessional.ViewModel.TranslationViewModel.CreateSpanFromResults(List`1 comparisonTextUnits, List`1 elements, String note, List`1 termEntries)
    à AIProfessional.ViewModel.TranslationViewModel.GetTranslationResults(ITranslationResponse response, ISegmentPair segmentPair, List`1 termEntries)
    à AIProfessional.ViewModel.TranslationViewModel.<>c__DisplayClass164_1.<Translate>b__0()
    à System.Windows.Threading.ExceptionWrapper.InternalRealCall(Delegate callback, Object args, Int32 numArgs)
    à System.Windows.Threading.ExceptionWrapper.TryCatchWhen(Object source, Delegate callback, Object args, Int32 numArgs, Delegate catchHandler)
    à System.Windows.Threading.DispatcherOperation.InvokeImpl()
    à System.Windows.Threading.DispatcherOperation.InvokeInSecurityContext(Object state)
    à MS.Internal.CulturePreservingExecutionContext.CallbackWrapper(Object obj)
    à System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx)
    à System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx)
    à System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
    à MS.Internal.CulturePreservingExecutionContext.Run(CulturePreservingExecutionContext executionContext, ContextCallback callback, Object state)
    à System.Windows.Threading.DispatcherOperation.Invoke()
    à System.Windows.Threading.Dispatcher.ProcessQueue()
    à System.Windows.Threading.Dispatcher.WndProcHook(IntPtr hwnd, Int32 msg, IntPtr wParam, IntPtr lParam, Boolean& handled)
    à MS.Win32.HwndWrapper.WndProc(IntPtr hwnd, Int32 msg, IntPtr wParam, IntPtr lParam, Boolean& handled)
    à MS.Win32.HwndSubclass.DispatcherCallbackOperation(Object o)
    à System.Windows.Threading.ExceptionWrapper.InternalRealCall(Delegate callback, Object args, Int32 numArgs)
    à System.Windows.Threading.ExceptionWrapper.TryCatchWhen(Object source, Delegate callback, Object args, Int32 numArgs, Delegate catchHandler)
    à System.Windows.Threading.Dispatcher.LegacyInvokeImpl(DispatcherPriority priority, TimeSpan timeout, Delegate method, Object args, Int32 numArgs)
    à MS.Win32.HwndSubclass.SubclassWndProc(IntPtr hwnd, Int32 msg, IntPtr wParam, IntPtr lParam)
    à System.Windows.Forms.UnsafeNativeMethods.DispatchMessageW(MSG& msg)
    à System.Windows.Forms.Application.ComponentManager.System.Windows.Forms.UnsafeNativeMethods.IMsoComponentManager.FPushMessageLoop(IntPtr dwComponentID, Int32 reason, Int32 pvLoopData)
    à System.Windows.Forms.Application.ThreadContext.RunMessageLoopInner(Int32 reason, ApplicationContext context)
    à System.Windows.Forms.Application.ThreadContext.RunMessageLoop(Int32 reason, ApplicationContext context)
    à Sdl.TranslationStudio.Application.Launcher.RunApplication()]]></StackTrace>
    </Exception>
    <Environment>
    <ProductName>Trados Studio</ProductName>
    <ProductVersion>Studio17</ProductVersion>
    <EntryAssemblyFileVersion>17.2.11.19134</EntryAssemblyFileVersion>
    <OperatingSystem>Microsoft Windows 10 Professionnel</OperatingSystem>
    <ServicePack>NULL</ServicePack>
    <OperatingSystemLanguage>1036</OperatingSystemLanguage>
    <CodePage>1252</CodePage>
    <LoggedOnUser>AzureAD\user</LoggedOnUser>
    <DotNetFrameWork>4.0.30319.42000</DotNetFrameWork>
    <ComputerName>laptop</ComputerName>
    <ConnectedToNetwork>True</ConnectedToNetwork>
    <PhysicalMemory>16509732 MB</PhysicalMemory>
    </Environment>
    </SDLErrorDetails>

    Any idea about what happens and how we can solve this?

    Thank you,

    Jonathan

  • Hi, your translation data is not stored or transmitted to the GPT model, ensuring your sensitive information remains confidential. If you're concerned about data privacy, Azure can provide a secure environment for your applications, as it allows you to manage your data without it being shared online. I'd recommend consulting the documentation from OpenAI for more detailed information.

  • Hello, this looks interesting. Perhaps a silly question: is my translation data transmitted to the GPT model (is it stored there)? Is this compatible with a static, on-premises-only version of ChatGPT (or Azure)? We have sensitive data that we don't want to publish on the Internet, but we would be interested in benefiting from such apps. Could you please advise?

    Thanks!

  • Hi, we're struggling with the same issue and the link is dead. Can you please share a new one?

  • Hello

    One .NET library to consume OpenAI, Anthropic, Cohere, Google, Azure, Groq, and self-hosted APIs.

    https://github.com/lofcz/LlmTornado (commit 71a8635da2e5d1f9a36ca0e8c8d30272f31ecfc6)

    ------

    https://github.com/songquanpeng/one-api

    Either of these could be used to integrate multiple LLMs into the AI Professional plugin.