The 'analyse files' batch task can take considerable time to run for large files across multiple languages. I am looking for ideas on how to speed this task up, so I wondered if anyone had any suggestions please?

So far I have found that server-based translation memories are generally slower for analysis than file-based ones, and that local file-based memories are faster than network file-based memories - no great surprise there, really. Using fewer and smaller memories in the analysis also improves speed. Are there any recommended file/batch sizes (MB or word counts) that would help ensure performance is optimised?

I have also tried increasing the 'batchTaskThreadCount' setting inside SDLTradosStudio.exe.config. Its default is 3; I have increased it but have seen no change in performance so far. I have a well-specced machine, so I would prefer not to go down the new-hardware route.

Any practical ideas to improve this area of Studio performance would be much appreciated.
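For reference, this is roughly what my thread-count change looks like. This is only a sketch: the exact surrounding structure of SDLTradosStudio.exe.config may differ between Studio versions, and the placement of the key under appSettings is an assumption on my part, so adjust it to wherever the key already appears in your copy of the file.

```xml
<!-- Fragment of SDLTradosStudio.exe.config; surrounding elements omitted.
     The appSettings placement is an assumption - edit the existing key
     in place rather than adding a second copy. -->
<configuration>
  <appSettings>
    <!-- Default is 3; raising it has shown no measurable change so far -->
    <add key="batchTaskThreadCount" value="6" />
  </appSettings>
</configuration>
```

Studio needs to be restarted after editing the file for the change to take effect.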