MT Enhanced handling of tagged content

I'm using MT Enhanced to connect to our AutoML models and I am finding that the handling of internal tags is really bad. At first glance, it looks like the plugin breaks the sentence up into chunks, sends those chunks to the MT engine, and then concatenates the engine's responses to form the output. If that is true, it cannot lead to quality results.
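
To illustrate what I mean, here is a rough sketch of the two approaches, calling the same AutoML model directly through the Cloud Translation v3 API in Python. The project and model IDs are placeholders, and the chunking is only my guess at what the plugin does; the point is simply to compare "whole tagged segment" against "pieces translated separately and stitched back together".

# Compare "send the tagged segment whole" with "split around the tag,
# translate the pieces separately, and stitch the result back together".
# PROJECT_ID and MODEL_ID are placeholders for your own AutoML setup.
from google.cloud import translate_v3 as translate

PROJECT_ID = "my-project"      # placeholder
MODEL_ID = "TRL0000000000"     # placeholder AutoML Translation model ID
LOCATION = "us-central1"       # AutoML Translation models live in this region

client = translate.TranslationServiceClient()
parent = f"projects/{PROJECT_ID}/locations/{LOCATION}"
model = f"{parent}/models/{MODEL_ID}"

def translate_texts(contents, mime_type):
    response = client.translate_text(
        request={
            "parent": parent,
            "contents": contents,
            "mime_type": mime_type,
            "source_language_code": "en",
            "target_language_code": "fr",
            "model": model,
        }
    )
    return [t.translated_text for t in response.translations]

sentence = "Please <strong>log in</strong> so that we can verify your permissions."

# 1) Whole segment, with the tag passed through as HTML.
whole = translate_texts([sentence], "text/html")[0]

# 2) Chunks translated in isolation, then re-joined around the tag.
chunks = ["Please", "log in", "so that we can verify your permissions."]
parts = translate_texts(chunks, "text/plain")
stitched = f"{parts[0]} <strong>{parts[1]}</strong> {parts[2]}"

print("whole segment:", whole)
print("stitched     :", stitched)

Only the first call is a fair test of the engine itself; the second one is the pattern that, if the plugin really works that way, would explain the drop in quality.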

Digging further, I sent the exact same file through MT with Trados 2021 + MT Enhanced + my Google AutoML engine versus Phrase TMS + Phrase Translate + the same Google AutoML engine. To my surprise, the results are completely different, and clearly in favor of Phrase.

[screenshot: output from Trados 2021 + MT Enhanced + Google AutoML]

vs

[screenshot: output from Phrase TMS + Phrase Translate + Google AutoML]
As you can see, the Phrase connector produces the same translation whether "log in" is surrounded by a strong tag or not. The Trados connector, however, does not, and comes up with nearly nonsensical output in French.


Hence my question: why is Phrase able to handle the internal tags properly when connecting to the same AutoML engine, and why can't the MT Enhanced plugin match those results?


By the way, when using a generic Google or Microsoft engine in MT Enhanced, the results are no better. Only with the DeepL connector do I get good results for this sentence. But of course, I do not want to use a generic DeepL engine; I want to use an AutoML engine trained for my needs.


Here is my HTML file, in case you would like to reproduce the issue:

<html>
<p>Please <strong>log in</strong> so that we can verify your permissions.</p>
<p>Please log in so that we can verify your permissions.</p>
</html>






Thank you.



I let 3 months pass hoping that I might get a fix, but even with the latest patch of Trados 2022 and the latest plugin, the Google plugin still doesn't work on tagged content for me when I use a custom AutoML model. I keep getting the error "Translation failed: The end tag </1> does not have a matching start tag." It works on content without tags.
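
For what it is worth, the "</1>" in the message suggests the plugin replaces inline formatting with numbered placeholder tags before sending the segment, and then complains when the pairs in the returned text no longer match. That is only my reading of the message, but here is a tiny sketch of that kind of pairing check, with a made-up segment in which the opening placeholder has been lost:

import re

# Made-up example of a returned segment where the opening placeholder was lost,
# which is the shape of failure the error message describes.
segment = "Please log in</1> so that we can verify your permissions."

def check_placeholder_tags(text):
    # Walk through numbered placeholder tags like <1> and </1> and report
    # the first one that has no matching partner.
    stack = []
    for match in re.finditer(r"<(/?)(\d+)>", text):
        is_closing, number = match.group(1) == "/", match.group(2)
        if not is_closing:
            stack.append(number)
        elif not stack or stack.pop() != number:
            return f"The end tag </{number}> does not have a matching start tag."
    if stack:
        return f"The start tag <{stack[-1]}> does not have a matching end tag."
    return "All placeholder tags are paired."

print(check_placeholder_tags(segment))
# -> The end tag </1> does not have a matching start tag.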




After checking, I wonder if there is a bug in the model selection. In this new plugin, you no longer pass the model ID yourself; instead, you browse for a model name and the tool autofills the model ID. In my case, the autofilled value appears to be wrong. For example, I pick this model:

[screenshot: model selected by its display name]

but then, when I look at what was selected for the model ID, I see:

[screenshot: value autofilled in the model ID field]

However, that is not the model ID; it is apparently a dataset ID.

This is what the model page shows me in the Google Cloud console:

[screenshot: model details page in the Google Cloud console]

This is what the dataset page shows:

[screenshot: dataset details page in the Google Cloud console]

Could it be the reason why this is not working?
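
In case it helps to confirm this, the AutoML API can list the models in a project and shows, for each one, both the model's resource name (which ends in the model ID) and the dataset it was trained from, so the two values can be compared with whatever the plugin autofills. A quick sketch, with the project ID as a placeholder:

from google.cloud import automl

PROJECT_ID = "my-project"    # placeholder
LOCATION = "us-central1"     # AutoML Translation models live in this region

client = automl.AutoMlClient()
parent = f"projects/{PROJECT_ID}/locations/{LOCATION}"

for model in client.list_models(parent=parent):
    # model.name ends in the model ID (what the plugin should be sending);
    # model.dataset_id is the ID of the dataset the model was trained on.
    print("display name:", model.display_name)
    print("model ID    :", model.name.split("/")[-1])
    print("dataset ID  :", model.dataset_id)
    print()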
