How to Select a Specific OpenAI Assistant in Trados AI Assistant? Default Assistant Always Used

Hi everyone,

I am using Trados Studio 2024 with the integrated AI Assistant to improve translations and text quality via the OpenAI API. I have set up multiple Assistants on the OpenAI platform, each tailored to my specific needs.

My Goal:

I want Trados Studio to call a specific OpenAI Assistant, rather than always using the default Assistant.

My Current Configuration:

There is no option in the Trados Studio AI Assistant settings to specify an assistant_id. As a result, every request defaults to OpenAI’s standard Assistant instead of my custom-trained one.

What I Have Tried So Far:

  1. Changed the API endpoint to api.openai.com/.../threads → Trados throws a connection error.
  2. Created a dedicated API key for my desired Assistant → Trados still defaults to the standard Assistant.
  3. Tried a system prompt workaround → didn't work.

My Question:

Is there a way to select a specific OpenAI Assistant in Trados Studio 2024, instead of always using the default one?
If not directly, is there a workaround, or could this be achieved through a script or plugin?

Looking forward to your insights and solutions!

Thanks in advance!

  • Hi,

    In the Chat Completions API that the Trados OpenAI provider uses, there is no “assistant_id” parameter or concept of “calling an existing Assistant” the way you might select a conversation or “Assistant” in the ChatGPT web UI. Instead, the provider calls endpoints (e.g. /v1/chat/completions) where you specify a model (and optionally a system prompt) for each request. If you have created a custom “Assistant” on the OpenAI platform with specialized instructions or memory, it is not directly callable through that endpoint by a unique Assistant ID.
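To make the distinction concrete, here is a minimal sketch of the kind of request body the Chat Completions endpoint accepts. The model name and prompt text below are placeholders; the point is that the schema identifies a model by name and has no assistant_id field at all:

```python
def build_chat_completions_payload(model: str, system_prompt: str, user_text: str) -> dict:
    """Build a request body for POST /v1/chat/completions.

    The endpoint selects behaviour via the model name and the messages;
    there is no 'assistant_id' field anywhere in this schema.
    """
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_text},
        ],
    }

payload = build_chat_completions_payload(
    "gpt-4o",                           # any model your API key can access
    "You are a translation reviewer.",  # placeholder instructions
    "Improve this sentence: ...",
)
print("assistant_id" in payload)  # → False
```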

    However, the OpenAI provider for Trados does allow you to specify different OpenAI resources that might get you close to what you want:


    1) Fine-Tuned Models:

    • If your “Assistant” is really a fine-tuned GPT model, then you can simply enter the name of that fine-tuned model in Trados Studio’s OpenAI provider settings (e.g. “ft:gpt-3.5-turbo:my_custom_model”).
    • Ensure that your API key is enabled for that fine-tuned model.
    • This will let you leverage your custom training data without needing a separate “assistant_id.”
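If you go the fine-tuned route, the only thing Studio needs is the model identifier that OpenAI returns once the fine-tuning job succeeds. A small sanity check like the following (a sketch; the identifier shown is a made-up example) can catch typos before you paste the name into the provider settings:

```python
def looks_like_fine_tuned_model(name: str) -> bool:
    """Heuristic check for an OpenAI fine-tuned model identifier.

    Fine-tuned models are named 'ft:<base-model>:<suffix>...',
    e.g. 'ft:gpt-3.5-turbo:my_custom_model' (example, not a real model).
    """
    parts = name.split(":")
    return len(parts) >= 3 and parts[0] == "ft" and all(parts[:3])

print(looks_like_fine_tuned_model("ft:gpt-3.5-turbo:my_custom_model"))  # → True
print(looks_like_fine_tuned_model("gpt-3.5-turbo"))                     # → False
```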



    2) Custom Instructions via User Prompts:

    • If the “assistant” you’ve set up is just a specialized set of instructions or style guidelines rather than a full fine-tuned model, you can replicate it in Studio by including this information in the User Prompt. We are currently working on a feature that will also let users include a system prompt -> TBD
    • While this doesn’t “call an assistant_id,” it reproduces the same effect of instructing the GPT model to assume your desired role, style, or domain knowledge.


    3) Azure OpenAI Integration:

    • If your custom Assistant is hosted via Azure OpenAI, you can use an Azure endpoint and your Azure credentials in the plugin settings.
    • This still works by specifying the relevant Azure model deployment name.
    • There is no support for passing an “assistant_id,” but you can continue to set custom system prompts or fine-tune a model in Azure.
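On the Azure route, the model is selected by the deployment name in the request URL rather than by a field in the request body. As a rough sketch of how an Azure OpenAI Chat Completions URL is addressed (the resource name, deployment name, and API version below are placeholders):

```python
def azure_chat_completions_url(resource: str, deployment: str, api_version: str) -> str:
    """Build an Azure OpenAI Chat Completions URL.

    Azure selects the model via the deployment name in the path;
    again, no 'assistant_id' is involved anywhere.
    """
    return (
        f"https://{resource}.openai.azure.com/openai/deployments/"
        f"{deployment}/chat/completions?api-version={api_version}"
    )

# Example values only; use your own resource and deployment names.
url = azure_chat_completions_url("my-resource", "my-gpt4-deployment", "2024-02-01")
```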



    Example fine-tuned model:

    [Screenshot: a fine-tuning dashboard listing models, with one failed and one succeeded fine-tuning job; the failed entry is highlighted in red.]

    [Screenshot: the provider settings dialog with fields for Name, Provider, Endpoint, API Key, and Model, the Model field containing a fine-tuned model identifier.]


