For German readers, the FAZ has an interesting article about the new OpenAI o1 LLM. Especially interesting as it is not a "newer, more expensive, better" model (emphases are mine):
o1 is in many ways a significant break with the LLM trend described above. Inference takes longer with o1. So the model feels slower. Much slower. OpenAI humanises the longer computing time by calling it "thinking". But why is o1 a break? Firstly, the model is not optimised for regular, run-of-the-mill requests such as "rephrase this email in a more professional tone". The "thinking time", longer and more expensive than that of other models, gives o1 new capabilities. It is better at logical tasks, such as maths or programming, than any other model. At the same time, it is no better, and often worse, at text formulation than classic LLMs such as Claude or GPT-4o.
For the first time, o1 is an LLM that can perform complex tasks better than simple ones, even if the user inadvertently treats both kinds of task alike. If you give o1 a simple task, OpenAI warns, the model may "think" too much about the solution and overcomplicate the result. The LLM landscape as a whole is not intuitive, and o1 exacerbates this.
For us as Trados Studio users, this might mean we can be quite relaxed about not having an integration with this new model right away. OpenAI might introduce a more language-oriented variant of this LLM at a later point, just as GPT-3 and GPT-4 grew into families of LLMs with different abilities (and costs).
My personal opinion is that currently the real productivity boosts for translation processes lie less in more powerful models than in the comprehensive application of existing models (not even the most advanced ones) to reduce the "donkey-work" and highlight the areas where human translation and editing skills are most needed.