AI Professional does not respect tags in source elements despite appropriate config options?

I switched from "Open AI translator" to "AI Professional". Now I have run into several situations where the tags of the source text are not respected by AI Professional (whereas both machine translation providers I use, DeepL and Language Weaver, work as expected).

My current version of AIP is 1.0.1.2 (from February 1st), but the effect was there in the previous version as well.

Here is example 1:

Screenshot showing a comparison of source text with HTML tags and its translation, highlighting a mismatch in tag handling.

And example 2:

Screenshot displaying a translation example where the icon and object name are correctly translated but the formatting is inconsistent.

I checked the appropriate option in the AIP config (gpt-3.5-turbo shows the same problem, btw, as expected):

Screenshot of AI Professional application settings with options checked for including tags in translation suggestions and enabling terminology-aware suggestions.



  • Hi  ,

Thank you for this feedback. This might be a bug with the TM provider in how the locked tags are interpreted; I'll need to confirm and then circle back.

If your prompt instructs the AI technology to refine an existing translation and that translation does not contain tags (regardless of whether the source contains tags or not), then the translation suggestion from the AI will not contain tags either, as it uses the existing translation as the basis for the request.

Having said that, during my initial investigations/tests, OpenAI occasionally simply ignored instructions to include tags in the translation suggestion.  We need to keep in mind that this is AI technology. OpenAI's GPT models, including gpt-3.5-turbo and gpt-4, are powered by machine learning and do their best to understand and respond appropriately to the prompts given to them. However, understanding HTML-like tags can occasionally be challenging for these models.

The models are trained on a diverse range of internet texts, but they do not know specifically which documents were in their training set, nor do they have access to any proprietary databases. Therefore, they can sometimes overlook or misunderstand complex inputs, like tags.  Additionally, each model may behave a bit differently due to refinements and updates included in newer versions. GPT-4 may present improvements over gpt-3.5-turbo in recognizing and correctly handling these tags, but no model is perfect, and occasionally either could miss the tags.

  • Hi

    thanks for getting back!

    Well, I understand the complexities of ChatGPT and the API models as I wrote a book about ChatGPT (exploring from a poet's and translator's viewpoint) and use the API a little bit myself as a programmer.

    For me the AI Professional add-in is still kind of a black box. Could you share a bit of information about how exactly you are instructing the models?

Do you use one- or few-shot prompting within each accompanying instruction to inform the LLM how it should treat tags, so that the model can reliably recognize all tags?
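    To sketch what I mean by few-shot prompting for tags (this is purely my own illustration of the technique, not a claim about what AIP actually sends to the API):

```python
# Hypothetical few-shot chat messages teaching the model to copy tags verbatim.
# The instruction wording and example sentences are my own invention.
few_shot_messages = [
    {"role": "system",
     "content": "Translate English to German. Copy every <tag>...</tag> pair "
                "from the source into the translation at the matching position."},
    # One worked example (the "shot") demonstrating the desired tag handling:
    {"role": "user", "content": "Press <b>OK</b> to continue."},
    {"role": "assistant", "content": "Drücken Sie <b>OK</b>, um fortzufahren."},
    # The actual segment to translate follows the demonstration:
    {"role": "user", "content": "Open the <i>settings</i> dialog."},
]
```

    Even one such worked example tends to anchor the model's behaviour much better than an abstract instruction alone.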

    And how do you identify tags to the model, including the distinction between tags whose content is to be translated and tags whose content is not, both in the initial translation (based on the hidden "general prompt") and in refined translations using the Companion (based on the customized prompts)? Do you use special prefixes in the XML structure you send to the API? I believe that the way you instruct the model is the problem, but a solvable one. And I guess actual users of the add-in could provide some ideas here.
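    One common technique, for example (again just my own sketch of a possible approach, not a description of the add-in), is to mask tags as numbered placeholders before sending the text, so the model only has to preserve positions rather than parse tag syntax, and to restore the original tags afterwards:

```python
import re

def mask_tags(source: str) -> tuple[str, dict[str, str]]:
    """Replace inline tags with numbered placeholders like {1}, {2},
    recording the original tags so they can be restored later."""
    mapping: dict[str, str] = {}
    def repl(m: re.Match) -> str:
        key = f"{{{len(mapping) + 1}}}"
        mapping[key] = m.group(0)
        return key
    return re.sub(r"<[^>]+>", repl, source), mapping

def unmask_tags(translated: str, mapping: dict[str, str]) -> str:
    """Put the original tags back in place of the placeholders."""
    for key, tag in mapping.items():
        translated = translated.replace(key, tag)
    return translated

masked, tags = mask_tags("Click <b>Save</b> to store the <i>file</i>.")
# masked is now: "Click {1}Save{2} to store the {3}file{4}."
```

    A scheme like this would also make the translate/don't-translate distinction expressible, e.g. by reserving a separate placeholder style for locked content.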

    Are customized prompts (for refining via the Companion) added to the general prompt, or do they replace it? If added, it would be helpful for experienced users to be able to see the general prompt, to better define any added refinements. As it is, the tool is not yet as versatile as it could be.

    I also deeply miss the option to write a general custom prompt which applies to all user-editable prompts. If I want to add an instruction which applies to all my translations, I currently have to repeat it in every one of the prompts, which is a real hassle. (That's why OpenAI added custom instructions to ChatGPT, and now offers GPTs.)
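    What I have in mind is trivial to describe in code (a sketch of the feature I'm asking for; GENERAL_PROMPT and build_prompt are my own hypothetical names, not part of the add-in):

```python
# One user-defined instruction prepended to every task-specific prompt,
# so it never has to be repeated manually in each editable prompt.
GENERAL_PROMPT = "Always keep product names and UI labels in English."

def build_prompt(task_prompt: str) -> str:
    """Combine the global custom instruction with a per-task prompt."""
    return GENERAL_PROMPT + "\n\n" + task_prompt

print(build_prompt("Refine the translation to use formal address."))
```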

    To really yield all the benefits of LLMs, I believe the AI Professional add-in should be more open to deeper customization. There are so many different things you can probably fine-tune here, and the inner workings should be documented.

    But at least the handling of tags – with the mentioned distinction between tags to be translated and tags not to be translated – should work as expected, both in the initial translation and in improved translations in the Companion; otherwise AIP is no real help at all.

    I hope it can live up to its potential, which is clearly there.

    PS: And then there is the question of context. Are there any plans to make AI-aided translation in Trados Studio more context-aware? For example, for larger projects like book translations I nowadays set up special GPTs (customized ChatGPTs) which are informed about more background, and I use them as my own separate "Companion". I'd like to see that as well, as part of the mentioned "custom general prompt".


  •  

    And one more question:

    What is the best order of translation providers in my project settings? I use DeepL and Language Weaver, and I use this order right now:

    DeepL
    Language Weaver
    AI Professional

    Is this the best order and which translation does AI Professional pick to send to the LLM?

  • Hi  , I confirm that there was a bug that prevented the plugin from recognizing locked tags. We worked on this with priority over the weekend and released a new version to resolve this problem.  The latest release is already available from the Integrated AppStore in Trados Studio or directly from the AppStore webpage.

    We value your feedback and suggestions, all aimed at enhancing our offering. I would encourage you to submit these innovative ideas to AppStore Ideas. This platform allows us to systematically track and address all feedback, which is important in our commitment to continuous improvement.

    I can share with you that some of the features you suggested are already on our backlog and scheduled for release in the near future. For example, as you mentioned here, a general assistant prompt & the ability to include additional context such as previous & next segments. We are also exploring more advanced features, and will provide updates on these developments in due course.

    As for the query on our approach to communicating tags to the AI technology, we follow an agnostic strategy. This involves simplifying the information that needs to be interpreted by the model, focusing only on the tag type and its position within the content. Our initial results have reinforced this approach.
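    As a rough illustration of that kind of simplification (a sketch of the general idea only, not our actual implementation), a tagged segment can be reduced to just the tag names and their character positions:

```python
import re

def tag_profile(text: str) -> list[tuple[str, int]]:
    """Reduce a tagged segment to (tag_name, offset) pairs -- only the
    tag type and its position in the content, nothing else."""
    return [(m.group(1), m.start())
            for m in re.finditer(r"</?([A-Za-z]+)[^>]*>", text)]

print(tag_profile("A <b>bold</b> word"))  # [('b', 2), ('b', 9)]
```

    Stripping the tags down this way keeps the payload small and avoids asking the model to reason about markup syntax it was never guaranteed to parse reliably.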

    Regarding your question on the preference of Translation Providers and sequence/order. I'm not a linguist, so probably not the right person to answer your question.  Anyone care to follow up here?

  •  

    What is the best order of translation providers in my project settings?

    I don't believe there is one single answer to this, as it's all opinion and might even vary between projects.  It's going to be language-pair specific, domain specific, paid (trained engines etc.) or free... etc.  So in your case, with a free Language Weaver engine and pretty general content for translation, that order might be best... only you can be the judge of that.

    which translation does AI Professional pick to send to the LLM?

    The prioritised one I believe... so the one at the top.   can confirm that.

    Paul Filkin | RWS Group

    ________________________
    Design your own training!

    You've done the courses and still need to go a little further, or still not clear? 
    Tell us what you need in our Community Solutions Hub
