Hello everyone!
Is there any way to display a per-segment quality estimation value on MT segments in Studio, similar to what we see for the fuzzy matches?
Regards,
Luís Costa
I'm afraid not. If you were doing the quality check yourself you could use Post-Edit Compare and generate a report that provides the actual quality in terms of a fuzzy value, but I'm not aware of any tool that can do this automatically just from looking at the MT output... at least not one that really works, although I have seen a few that claim they can do it.
Paul Filkin | RWS Group
Thank you Paul!
I suppose that's valid even when using SDL's own Language Cloud NMT, right?
It's valid for all machine translation. There is no reliable way (yet) to measure its quality in terms of a fuzzy score.
Paul Filkin | RWS Group
As fate would have it, ModelFront launched a self-serve console and API for translation risk prediction right about the time the question and answer were posted.
It's not the first - quality estimation has been researched by Google and Lucia Specia since 2013, launched in production inside Unbabel, Amazon and Microsoft over 2017-2020, and there are features like Memsource QE and KantanQES built into other products.
It's just the first production system available to all players - support for 100+ languages, customizable on data, secure, scalable...
See machinetranslate.org/quality-estimation for more
(Full-disclosure: I'm co-founder and CEO of ModelFront.)
Update 2:
To be clear, getting quality predictions in Trados depends on the TMS, and for high-value use cases, we don't usually recommend just generic scores in the CAT.
Quality prediction should be customized on the workflow data to reflect the quality bar, terminology and style, and integrated into the TMS to actually realize the efficiency gains, by updating segment statuses accordingly before the XLIFF is sent for editing.
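For anyone curious what that last integration step can look like, here is a minimal sketch in Python. It is not the real ModelFront or Trados API: the endpoint URL, the score_segment() helper and the risk threshold are hypothetical placeholders, and it assumes a plain XLIFF 1.2 file rather than an SDLXLIFF. The idea is simply to score each MT target with a QE service and update the segment state so low-risk segments can skip editing.

# Sketch only: score MT segments with a quality-estimation service and
# mark low-risk segments before the XLIFF goes out for editing.
# Assumptions (not the real ModelFront or Trados API):
#   - QE_ENDPOINT and score_segment() are hypothetical placeholders.
#   - Plain XLIFF 1.2 with <trans-unit><source>/<target> elements.
#   - Segment state is stored in the standard XLIFF "state" attribute.

import xml.etree.ElementTree as ET
import requests

QE_ENDPOINT = "https://example.com/qe/score"   # hypothetical endpoint
API_KEY = "YOUR_API_KEY"
RISK_THRESHOLD = 0.2                           # hypothetical quality bar

XLIFF_NS = "urn:oasis:names:tc:xliff:document:1.2"
XLF = "{%s}" % XLIFF_NS
ET.register_namespace("", XLIFF_NS)            # keep clean output prefixes


def score_segment(source: str, target: str) -> float:
    """Ask the (hypothetical) QE service for a risk score between 0 and 1."""
    resp = requests.post(
        QE_ENDPOINT,
        json={"source": source, "target": target},
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["risk"]


def annotate_xliff(path_in: str, path_out: str) -> None:
    tree = ET.parse(path_in)
    for unit in tree.iter(f"{XLF}trans-unit"):
        src_el = unit.find(f"{XLF}source")
        tgt_el = unit.find(f"{XLF}target")
        if src_el is None or tgt_el is None:
            continue  # nothing to score
        target_text = "".join(tgt_el.itertext()).strip()
        if not target_text:
            continue
        risk = score_segment("".join(src_el.itertext()), target_text)
        # Low predicted risk -> mark as translated; otherwise leave for review.
        tgt_el.set(
            "state",
            "translated" if risk < RISK_THRESHOLD else "needs-review-translation",
        )
    tree.write(path_out, encoding="utf-8", xml_declaration=True)


if __name__ == "__main__":
    annotate_xliff("job.xlf", "job.scored.xlf")

In a real deployment the same logic would run inside the TMS workflow itself, with the threshold tuned on the project's post-editing data rather than hard-coded.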
To get a quality prediction in TMSes like Trados Enterprise (or WorldServer, GroupShare or SDL TMS), contact the ModelFront team.
https://modelfront.com/contact
Adam