Presentation on AI Professional plugin at the Elevate conference

Dear colleagues,

It's a pleasure to greet you here. I'm glad to announce that I have prepared a presentation on one of the latest plugins for the Trados Studio desktop app, AI Professional, which integrates the power of LLMs directly into your Editor view.

The presentation runs about 50 minutes and is available on the Bonus Sessions tab of the Elevate conference: https://www.trados.com/events/elevate/. It's called "My favorite AI Professional app prompts". (It's already under revision for further improvements.)

This was possible thanks to a lot of help and support from colleagues on this forum, as well as the RWS App Store development team and the RWS Events team, who answered something like a thousand emails and helped run some tests before the actual recording, including  ,  ,  ,  ,  .

And a big thank you to my Trados Studio guru,  , who first introduced me to the Beta Community and, later, to the wider RWS Community. The presentation is dedicated to her.

Thank you, and if you happen to see it, I'd like to know what you think.

Best,

Martín



Edited the writing
[edited by: Martin Chamorro at 3:45 PM (GMT 0) on 19 Mar 2024]
  • Thank you Martin Chamorro. The demonstration of translation and revision in the Trados Studio editor was fantastic, showing the functions and effects of different prompts very well. Now I have a question: in the editor, the source text is split into segments. So, when using AI Professional for batch pretranslation, is only one request sent to GPT, or is a separate request sent for each segment? Since each request includes both the prompt and the source segment, sending a separate request per segment would significantly increase the number of tokens.
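The token overhead behind this question can be sketched with rough arithmetic. A minimal illustration follows; all numbers are hypothetical, and it does not describe how AI Professional actually batches requests, only why the per-segment strategy costs more:

```python
# Hypothetical sizes (not measured from AI Professional or GPT):
PROMPT_TOKENS = 120    # assumed size of the translation prompt/instructions
SEGMENT_TOKENS = 25    # assumed average tokens per source segment
NUM_SEGMENTS = 200     # assumed number of segments in the document

# Strategy A: one request per segment -> the prompt is re-sent every time.
per_segment_total = NUM_SEGMENTS * (PROMPT_TOKENS + SEGMENT_TOKENS)

# Strategy B: one batched request -> the prompt is sent only once.
batched_total = PROMPT_TOKENS + NUM_SEGMENTS * SEGMENT_TOKENS

print(per_segment_total)  # 200 * 145  = 29000 input tokens
print(batched_total)      # 120 + 5000 = 5120 input tokens
```

Under these assumed numbers, repeating the prompt per segment costs roughly five to six times more input tokens, which is the concern the question raises.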
