Slow Studio performance -> Will a hardware upgrade help?

The Windows 7 Resource Monitor does not seem to show a bottleneck in my system that would be slowing down Studio 2017 (same problem with previous versions of Studio).

I am willing to upgrade my hardware to speed up processing, but am not sure where to look.

I currently have an Intel i7-950 CPU, 4 cores (8 threads), 3.07 GHz, 18 GB of RAM, and an SSD.

I increased the RAM and added the SSD in the hopes of boosting performance, but there has been little change. I did not focus on the motherboard, because Studio generally uses a steady 12-13% of CPU.

I could be mistaken, however, as during a long wait of 20 to 30 minutes for Studio to filter and display around 20,000 segments, it looked like only one thread was being used at around 90%. Since there are 8 threads possible (2 per core), this would match the 12-13% CPU figure (90%/8 = approx. 11 or 12%). Does Studio really only use 1 of the possible 8 threads?
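
A quick sanity check on that arithmetic, as a minimal sketch (plain arithmetic only, nothing read from Studio itself):

```python
# Does one saturated thread explain the total CPU figure
# Resource Monitor shows? Plain arithmetic, nothing measured.

logical_processors = 8    # 4 cores x 2 threads (i7-950 with Hyper-Threading)
busy_thread_load = 0.90   # the ~90% seen on the single busy thread

# Resource Monitor reports CPU as a share of ALL logical processors,
# so one busy thread can never push the total above 100/8 = 12.5%.
total_cpu = busy_thread_load / logical_processors * 100
print(f"Expected total CPU: {total_cpu:.1f}%")  # 11.3%, matching the 12-13% observed
```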

Here is an example of what I have been experiencing. An SDLXLIFF file with around 40,000 segments (around 250,000 characters) takes several minutes to load into Studio. Studio then uses around 1.2 GB of RAM. (Quite a feat, using 1.2 GB of RAM for a mere 250,000 characters of text.) Filtering, say, 20,000 segments (e.g. "Translated" segments, "Signed off" segments) can take 20 to 30 minutes. What this means is that if I want to, for example, select 20,000 translated segments, change them to approved and then lock them, I could be looking at a waiting time of an hour.
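
Working backwards from those figures gives a sense of the overhead involved; a rough sketch using only the numbers quoted above:

```python
# Rough overhead implied by the figures above (nothing measured
# inside Studio, just the numbers from Resource Monitor).
ram_bytes = 1.2 * 1024**3   # ~1.2 GB reported for the Studio process
segments = 40_000
characters = 250_000

print(f"{ram_bytes / segments / 1024:.0f} KB per segment")   # ~31 KB
print(f"{ram_bytes / characters:.0f} bytes per character")   # ~5,154 bytes
# Even allowing for UTF-16 text, tags and match metadata, that is a
# lot of bookkeeping per character of translatable text.
```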

I am wasting a lot of time. It is also interfering with meeting deadlines, which is why I increased my RAM and added an SSD.

Do other Studio users dealing with large files experience similar delays? If not, do any of the SDL people have an idea of where the problem might be in my system?

I am willing to upgrade my motherboard, but would like to know if there actually is a problem with the hardware.

It could be that Studio is simply slow and nothing can be done to improve things ...

Any information would be very welcome :-)

Regards,

Bruce Campbell

ASAP Language Services

  • Hi Bruce,

    Those PC specs are pretty high-end, so I doubt throwing money at the problem will do any good.
    What types of files are you dealing with?
    Unfortunately, a file of the size you describe is simply too large for Trados...
    I would recommend breaking the file into smaller chunks if possible.
  • Hi Jesse,

    Thanks for your post.

    Though I was hoping not to hear "save the money, Studio is slow" :-(

    The file I just translated was an annual report that only ran to around 150 pages, which is not particularly big for an annual report. It was in IDML format.

    The file was already broken into pieces, but unfortunately one piece had 40,000 segments and I have to work with what I am given. 40,000 segments really doesn't seem like such a big number in this day and age.

    The 1.2 GB of RAM that Studio was using is also not all that big. Chrome is using more than that at the moment; sometimes it uses 2.5 GB. Dragon typically uses around 0.7 GB to load a profile.

    I really wonder what Studio is doing that takes so much time.

    Best regards,
    Bruce Campbell
    ASAP Language Services
  • Hi Bruce,

    I've worked with files around 10,000 segments and it was slow but still manageable. However, 40,000 is definitely beyond what I think Trados Studio can handle.

    There is one tool though you could use to break the sdlxliff file into more manageable chunks:
    appstore.sdl.com/.../
     * However, I noticed this app cannot be downloaded with a Trados Studio 2017 license... Perhaps someone from SDL could comment on that.

    As for RAM usage, that probably isn't the issue; the big bottleneck, I would presume, is saving back to the file. XML files get complex, so this becomes a hassle.
    Since Trados stores everything in SDLXLIFF there are advantages and disadvantages, and I think handling really large files is one of the disadvantages.
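
    If you do go the splitting route, the basic idea is simple even if the real format details are not. A minimal sketch, assuming a simplified XLIFF-like layout with <trans-unit> elements directly under <body> (real SDLXLIFF adds SDL namespaces and nested groups that a proper splitter would have to preserve):

    ```python
    # Minimal splitter sketch: writes chunks of N trans-units each.
    # Assumes a simplified XLIFF-like file; real SDLXLIFF has namespaced
    # elements and nested groups that this does NOT handle.
    import copy
    import xml.etree.ElementTree as ET

    def split_xliff(path, units_per_chunk=5000):
        tree = ET.parse(path)
        root = tree.getroot()
        units = root.find(".//body").findall("trans-unit")

        for i in range(0, len(units), units_per_chunk):
            chunk_root = copy.deepcopy(root)
            chunk_body = chunk_root.find(".//body")
            for child in list(chunk_body):   # empty the copied body...
                chunk_body.remove(child)
            chunk_body.extend(units[i:i + units_per_chunk])  # ...and refill it
            ET.ElementTree(chunk_root).write(
                f"{path}.part{i // units_per_chunk:03d}.xliff",
                encoding="utf-8", xml_declaration=True)

    split_xliff("bigfile.xliff")   # hypothetical file name
    ```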

  • Unknown said:
    There is one tool though you could use to break the sdlxliff file into more manageable chunks:
    appstore.sdl.com/.../
    * However, I noticed this app cannot be downloaded with a Trados Studio 2017 license... Perhaps someone from SDL could comment on that.

    Hi guys,

    This one needs to be updated by the core SDL development team... it's not owned by the community developers.  So once they do this we can make it available on the store.

    Regards

    Paul

    Paul Filkin | RWS Group

  • Hi Jesse,

    I will keep it in mind as a last resort. But cutting a file into pieces makes it difficult to perform global changes or filter on words or phrases to find the right segments to change throughout the entire file.

    I tried another test this morning to see whether the CPU was overburdened.

    While one instance of Studio was analysing one set of files, I opened another instance of Studio to run at the same time and analyse a different set of files.

    One instance of Studio typically takes 12% of the CPU. When I had two instances running, the Resource Monitor showed that each instance was taking 12% of the CPU.

    So, there does not appear to be a shortage of CPU power.

    This could, of course, be a "single thread" limit. If Studio is single-threaded, then each instance would be limited to around 12% of the total CPU (i.e. 100% / 8 threads = 12.5% per thread), which might be what I am seeing in the Resource Monitor.
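
    One way to test that from outside Studio would be something like the sketch below, using the third-party psutil package (the process name is my guess; check Task Manager for the real one):

    ```python
    # Sample Studio's CPU usage from outside to test the single-thread
    # hypothesis. Requires: pip install psutil
    # "SDLTradosStudio.exe" is a guess at the process name.
    import time
    import psutil

    proc = next(p for p in psutil.process_iter(["name"])
                if p.info["name"] == "SDLTradosStudio.exe")

    proc.cpu_percent(None)           # prime the counter
    time.sleep(5)                    # ...while Studio is busy filtering
    total = proc.cpu_percent(None)   # percent of ONE logical CPU

    print(f"Process total: {total:.0f}% of one logical CPU")
    print(f"Threads alive: {proc.num_threads()}")
    # If the total hovers near 100% (one saturated thread) during the
    # filter, the work is effectively single-threaded no matter how many
    # threads the process has created.
    ```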

    I suppose only one of the SDL tech people would know the answer to this ...

    Regards,
    Bruce Campbell
    ASAP Language Services
  • Perhaps it's time SDL thought about producing a 64-bit version of Studio, which could help resolve such problems.
    Unfortunately it's true that Studio can be awfully slow at times. On my computer, with an SSD, an almost fresh Windows 7 installation (one month old), 8 GB of RAM and a five-year-old four-core AMD Athlon processor, it takes exactly 35 seconds for Studio to open to the project view. Deja Vu takes 12 seconds and memoQ 15 seconds. The whole system boots in 33 seconds, so there is your comparison.

    By the way, wouldn't it be great if, while you were editing segment 1, Studio were working in the background on segment 2 (MT queries, TM, termbase and concordance searches), so that the results were ready when you moved to the next segment, without waiting a few seconds?
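
    That kind of prefetching is easy to sketch with a thread pool; every name below is a hypothetical stand-in, not anything from Studio's API:

    ```python
    # Prefetch sketch: start segment N+1's lookups while the translator
    # is still editing segment N. All names are hypothetical stand-ins.
    from concurrent.futures import ThreadPoolExecutor

    def lookup_segment(segment):
        """Placeholder for the TM, MT, termbase and concordance queries."""
        ...

    pool = ThreadPoolExecutor(max_workers=2)
    prefetched = {}

    def open_segment(segments, n):
        # Use the prefetched result if the background lookup already started.
        future = prefetched.pop(n, None) or pool.submit(lookup_segment, segments[n])
        if n + 1 < len(segments):
            prefetched[n + 1] = pool.submit(lookup_segment, segments[n + 1])
        return future.result()   # ready immediately if the prefetch finished
    ```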
  • I am working on another couple of 150-page reports and it almost seems like the design of the Studio UI might be a problem.

    For example, when I have a large file open and merge two segments, Studio freezes.

    Sometimes for a few seconds, sometimes for a few minutes. Sometimes the segment display is messed up so that the source and target no longer align. Sometimes the cursor is thrown back to the start of the file so I no longer know where I was. Sometimes Studio essentially crashes and I have to kill the process.

    One would not think that merging two segments would be such a big deal. Just combine the two segments and adjust the display buffer where the two segments used to be. The actual change in the data structures is trivial.
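
    To illustrate: with segments held in a plain list (obviously not Studio's real internal model), the merge itself is a couple of lines:

    ```python
    # The merge itself, on a plain list of (source, target) pairs.
    # The data change touches two entries -- nothing remotely like
    # re-rendering 40,000 rows.
    def merge_segments(segments, i):
        src_a, tgt_a = segments[i]
        src_b, tgt_b = segments[i + 1]
        segments[i] = (src_a + " " + src_b, tgt_a + " " + tgt_b)
        del segments[i + 1]
    ```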

    So why does Studio threaten to crash when two segments are merged?

    Since the actual merge is not a big deal, I think it must be choking on the re-display part.

    It almost seems like Studio is preparing everything -- the whole file from the first to the last segment -- for display again, throwing away all the formatting it has already performed and starting again from scratch, as if it were performing the same routine it would perform if it had just opened the file.

    And it seems like it does the same thing when you apply a filter.

    When you think about it, once Studio has prepared the segments for display, applying a filter is just a matter of selecting only certain segments to display. That is really not so difficult, since the actual preparation for each individual segment has already been done. (You might have noticed that when you open a large file Studio won't let you hop to the end right away. It seems to be busy preparing everything in the background, and with a large file it can take a while before you can hop to the end of the file.)

    If I were programming the display routines, I might choose a "throw-away" approach if I thought the files would be small. Just junk everything and quickly format it all from scratch again. That way you don't have to program several layers of display buffers, list structures, or whatever they are using. Just a single layer. Whenever there is a change, just rebuild everything from scratch again.

    But if you have 40,000 segments in a file, the display routine would end up thrashing. The Studio window would just freeze while the display routine desperately tries to reformat everything for the last minor change.
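
    To make the contrast concrete, here is a purely illustrative sketch of the two strategies; I have no knowledge of Studio's actual rendering code:

    ```python
    # Two redisplay strategies, purely illustrative.

    # 1) "Throw-away": reformat every segment after any change.
    def redraw_throwaway(segments, render):
        return [render(s) for s in segments]   # 40,000 renders per change

    # 2) Virtualized view: keep the prepared rows, redraw only the visible
    #    window, and treat a filter as nothing more than a list of indices.
    def redraw_virtualized(prepared_rows, filtered_indices, first_visible, page=50):
        visible = filtered_indices[first_visible:first_visible + page]
        return [prepared_rows[i] for i in visible]   # ~50 lookups, any file size
    ```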

    As it is now, it appears that a 150 page report -- which really isn't all that big, I have had reports with three or four hundred pages -- is enough to bring the whole program to its knees, even when you aren't really doing anything of substance.

    So if I am right about the UI thrashing with big files, why is SDL adding things like fragment matching?

    Programming fragment matching is undoubtedly way more fun than updating the UI, and at first glance it is also good marketing ("We have something no one else has!").

    So I can sort of understand it from the programming and marketing point of view. The UI just is not sexy.

    But if you think about it, the next step in growth for translation tools is going to come when the end clients get serious and start dumping big files on the market to take advantage of the cost savings that TMs make possible.

    The volume could soar -- and translators would simply be forced to use a translation tool to handle the volume -- assuming your tool can handle it.

    I don't know what the internal company arguments are. SDL might argue that there aren't that many translators dealing with big files, so from a sales point of view their problems can safely be ignored. Attract customers by focusing instead on lots of bells and whistles for handling small files.

    That would be one way to look at the problem. The other approach would be to focus on the only true way forward for translation tools to realise their ultimate purpose and finally become indispensable: push the boundaries and make it possible for Studio to handle big files.

    I guess you know what's on my wish list for Santa ... a more efficient UI and, while they are at it, why not make the UI Dragon-friendly?

    Should I also ask for a pony?

    Best wishes for the holidays!

    Bruce Campbell
    ASAP Language Services