Paul
I wonder how the OpenAI Translator works internally. I read through the documentation but found no answer to the following question:
- Is every API request to the OpenAI models independent of all prior requests and of the surrounding segments within the current source file? I would guess so, since the API endpoints are completion endpoints?
If my guess is correct, there would be no context awareness in the OpenAI Translator yet?
But one of the big advantages of LLMs is their context awareness.
So maybe a feature could be added that lets us transfer the complete source text of a file as general context, and then (for the duration of an editor session) transmit the individual requests as part of an ongoing "chat" (still via API calls). That way we could (theoretically) have the best of both worlds: context awareness and the segment-wise functionality.
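Just to make the idea concrete: since the chat completions API is itself stateless, the "ongoing chat" would really mean rebuilding the message list on every segment request. A rough sketch of what I imagine (purely hypothetical — the function name and prompt wording are my own invention, not anything from the OpenAI Translator):

```python
# Hypothetical sketch, NOT the actual OpenAI Translator implementation:
# send the whole source file as system-level context and replay the
# session's segment/translation history with every per-segment request.

def build_messages(full_source, history, segment):
    """Build a chat-style message list for translating one segment.

    full_source: the complete source text, sent with every request as context
    history:     list of (source_segment, translation) pairs from this session
    segment:     the segment to translate now
    """
    messages = [{
        "role": "system",
        "content": ("Translate the text segment by segment. "
                    "Full source text for context:\n" + full_source),
    }]
    # Replay the editor session so far as an ongoing "chat".
    for src, tgt in history:
        messages.append({"role": "user", "content": src})
        messages.append({"role": "assistant", "content": tgt})
    # The segment we actually want translated in this request.
    messages.append({"role": "user", "content": segment})
    return messages

# Each per-segment API call would then look something like:
#   client.chat.completions.create(model="...", messages=build_messages(...))
```

Of course, re-sending the full source plus the whole history with every segment costs tokens, so a real implementation would probably have to truncate the context somehow.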
Probably just a nice dream, I know ... :-)