post-editing standard

Hey, I am a student currently working on my graduation project about MT and post-editing. So I wonder how the industry evaluates the output of post-edited machine translation. Do we have a quantitative standard or something else? Of course, post-edit distance is an indicator, but is the acceptable post-edit distance the same for all kinds of texts? For example, if I use Trados for assessing the output, then what post-editing rate or number can be deemed normal or good? 

  • Hi

    Quality depends on how you want to use raw machine translation. If the raw machine translation output is the starting point for post-editing, then you can assess the value either via the time needed for post-editing or via the changes that need to be made to the output. In either case, in order to get reliable results, evaluations need to be averaged out over a large number of segments.

    If you look at necessary changes to the machine translation (edit distance), then fewer changes indicate that the machine translation was a good starting point. However, there is no widely recognized threshold for edit distance metrics. Different translation providers use different values to indicate the usability of the output.
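
    To make the edit-distance idea concrete, here is a minimal sketch (an illustration only, not how any particular tool or provider computes it) of a character-level edit distance between raw MT output and its post-edited version, normalised per segment and averaged:

    ```python
    # Minimal illustration: character-level edit distance between raw MT output
    # and the post-edited version, normalised per segment and averaged.
    # A sketch for discussion, not the calculation any specific tool uses.

    def levenshtein(a: str, b: str) -> int:
        """Classic dynamic-programming edit distance (insert/delete/substitute)."""
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, start=1):
            curr = [i]
            for j, cb in enumerate(b, start=1):
                cost = 0 if ca == cb else 1
                curr.append(min(prev[j] + 1,          # deletion
                                curr[j - 1] + 1,      # insertion
                                prev[j - 1] + cost))  # substitution
            prev = curr
        return prev[-1]

    def edit_distance_percent(mt: str, post_edited: str) -> float:
        """Normalise so that 0% means no changes and 100% means fully rewritten."""
        if not mt and not post_edited:
            return 0.0
        return 100.0 * levenshtein(mt, post_edited) / max(len(mt), len(post_edited))

    # Hypothetical segments: (raw MT, post-edited result)
    segments = [
        ("The cat sat on the matt.", "The cat sat on the mat."),
        ("He go to school yesterday.", "He went to school yesterday."),
    ]
    scores = [edit_distance_percent(mt, pe) for mt, pe in segments]
    print(f"average edit distance: {sum(scores) / len(scores):.1f}%")
    ```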

    A lower edit distance does not necessarily mean that the translator spends less time during post-editing. When post-editing, you need to understand the meaning of the source and, for example, spend time researching the topic and familiarizing yourself with the client’s requirements. Post-editing entails more than just the typing, which is what edit distance measures. For this reason, time metrics can be used to supplement edit distance metrics.
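
    As a rough sketch of the kind of time metric that could sit alongside edit distance, the example below reports seconds per source word next to the edit distance for each segment (all figures are invented for illustration):

    ```python
    # Sketch: a time metric (seconds per source word) alongside edit distance,
    # per segment. All figures below are invented for illustration only.

    from dataclasses import dataclass

    @dataclass
    class Segment:
        source_words: int         # number of words in the source segment
        edit_distance_pct: float  # e.g. from a function like edit_distance_percent()
        seconds_spent: float      # time the post-editor spent in the segment

    segments = [
        Segment(source_words=12, edit_distance_pct=5.0,  seconds_spent=20.0),
        Segment(source_words=18, edit_distance_pct=2.0,  seconds_spent=95.0),  # few edits, lots of research time
        Segment(source_words=9,  edit_distance_pct=40.0, seconds_spent=30.0),
    ]

    for s in segments:
        print(f"{s.edit_distance_pct:5.1f}% edits, "
              f"{s.seconds_spent / s.source_words:4.1f} s/word")

    total_words = sum(s.source_words for s in segments)
    total_time = sum(s.seconds_spent for s in segments)
    print(f"overall throughput: {total_words / (total_time / 3600):.0f} words/hour")
    ```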

    Have you heard of TQA? It can be used to help evaluate and revise translations objectively, post-editing being one example.

    I hope this helps

    Lydia

    Lydia Simplicio | RWS Group


  • Yeah, your answer makes a lot of sense to me. I can understand using edit distance to measure translation quality, but how can we use time to do the same? I mean, we can only say the time we spent on the output was long or short, can't we? Can it be a specific parameter that makes the evaluation more convincing? 

  • Just for a little discussion on this topic... you asked:

    So I wonder how the industry evaluates the output of post-edited machine translation.

    To answer this, you need to ask yourself why a company would pay for a machine translation capability. Measuring edit distance in this situation is useful for the company because it shows how many changes a translator makes to the original raw output. Did they change more than they needed to for the required use case of the translation? How "perfect" did the translation need to be? Was the original good enough?

    The effect of making too many or unnecessary changes to produce the post-edited translation is that the job takes too long. This is bad for both the company and the translator. Rates are probably lower for post-editing work, so the longer translators take, the less they earn. So, coming back to this question:

    but how can we use time to do the same?

    If jobs take too long, everyone loses money. So when you ask how the industry evaluates an output, time is an important consideration, and in some cases post-editors may be paid on a time basis rather than by wordcount. Handling the post-editing efficiently, in line with the quality requirements and the budget for the job, is therefore essential.
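
    Purely as an invented illustration of the money side, here is a small sketch comparing the effective hourly rate of a per-word post-editing job at different working speeds (none of the rates or speeds are real or recommended):

    ```python
    # Invented example: why the time a post-editing job takes changes what it is worth.
    # None of these rates or speeds reflect any real company's pricing.

    word_rate = 0.04        # hypothetical payment per word for post-editing
    job_words = 3000        # size of the job

    for words_per_hour in (400, 800, 1200):   # slower vs faster post-editing
        hours = job_words / words_per_hour
        pay = job_words * word_rate
        print(f"{words_per_hour:>4} words/hour -> {hours:4.1f} h, "
              f"effective rate {pay / hours:5.2f} per hour")
    ```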

    I mean, we can only say the time we spent on the output was long or short, can't we? Can it be a specific parameter that makes the evaluation more convincing? 

    If you're looking for a measure of time so that you can say, for example, "more than 5 seconds per edit", then it's probably more of an academic exercise than a production one. I believe time is particularly important when considering the cognitive effort involved in post-editing. Some texts, and some MT engines, may require more time to reach an edit distance score similar to that of a previous job, and this won't be captured at all by edit distance techniques. The variance in time won't be linear either, so creating a specific parameter is probably not that useful... in my opinion.

    In practical terms, and I prefer practical thoughts on things, a post-edit analysis is probably what I would want if I were translating. That way I could stick to fuzzy bands where I hopefully already have rates that work for me, and I could be paid on that basis.
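
    As a minimal sketch of what such a post-edit analysis might look like, the example below maps each segment's edit-distance percentage into fuzzy-match-style bands and counts words per band; the band boundaries and per-word rates are made up, not an industry standard:

    ```python
    # Sketch: grouping post-edited segments into fuzzy-match-style bands by how much
    # was changed. Band boundaries and per-word rates are invented for illustration.

    BANDS = [   # (label, max edit distance %, hypothetical per-word rate)
        ("95-100% (almost untouched)", 5.0,   0.01),
        ("85-94%",                     15.0,  0.02),
        ("75-84%",                     25.0,  0.03),
        ("below 75% (heavy editing)",  100.0, 0.05),
    ]

    def band_for(edit_distance_pct: float):
        """Return the (label, rate) of the first band the segment falls into."""
        for label, upper, rate in BANDS:
            if edit_distance_pct <= upper:
                return label, rate
        return BANDS[-1][0], BANDS[-1][2]

    # Hypothetical per-segment results: (words, edit distance %)
    segments = [(12, 2.0), (20, 11.0), (8, 55.0), (15, 4.5)]

    words_per_band = {}
    payable = 0.0
    for words, dist in segments:
        label, rate = band_for(dist)
        words_per_band[label] = words_per_band.get(label, 0) + words
        payable += words * rate

    for label, words in words_per_band.items():
        print(f"{label}: {words} words")
    print(f"hypothetical payable amount: {payable:.2f}")
    ```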

    For example, if I use Trados for assessing the output, then what post-editing rate or number can be deemed normal or good? 

    There is a plugin called Qualitivity that you can install from the AppStore. It can record your keystrokes, track your movement up and down a file, the time spent in each segment and between keystrokes, and also edit distance. I guess you could carry out some interesting tests with this to look at the difficulties in creating a quantitative standard for PEMT.
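
    If you exported per-segment figures from a tool like that, the analysis side could be quite small. Here is a sketch assuming a hypothetical CSV with one row per segment; the file name and column names are made up for illustration, not Qualitivity's actual export format:

    ```python
    # Sketch only: analysing per-segment metrics from a hypothetical CSV export.
    # The file name and columns ("seconds", "edit_distance_pct", "words") are
    # invented here; check the actual export format of whatever tool you use.

    import csv
    import statistics

    rows = []
    with open("segment_metrics.csv", newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            rows.append({
                "seconds": float(row["seconds"]),
                "edit_distance_pct": float(row["edit_distance_pct"]),
                "words": int(row["words"]),
            })

    seconds_per_word = [r["seconds"] / r["words"] for r in rows if r["words"]]
    distances = [r["edit_distance_pct"] for r in rows]

    print(f"segments: {len(rows)}")
    print(f"mean edit distance: {statistics.mean(distances):.1f}%")
    print(f"mean time per word: {statistics.mean(seconds_per_word):.1f} s/word")

    if len(rows) > 2:
        # A simple check of whether time and edit distance move together
        # (statistics.correlation needs Python 3.10+).
        corr = statistics.correlation([r["seconds"] for r in rows], distances)
        print(f"time vs. edit distance correlation: {corr:.2f}")
    ```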

    Paul Filkin | RWS Group


  • Hi Paul, I've got a problem. Every time I click the Stop button of the Activity Tracker, a window like this pops up:

    [Screenshot: error message in Trados Studio saying "Invoke or BeginInvoke cannot be called on a control until the window handle has been created." with an OK button.]

    What happened?

    In addition, when I merge projects in Qualitivity, it also occasionally comes out like this (see below):

    [Screenshot: Qualitivity window in Trados Studio showing a right-click context menu with "Merge Project Activities" highlighted.]

    [Screenshot: error message in Qualitivity saying "Error creating window handle." with an OK button.]

    Did I forget something?

    Look forward to your reply.

    Xiaofeng


