Running Language Weaver Edge on Kubernetes

Language Weaver Edge can easily be deployed on Kubernetes, and the deployment architecture is similar to an on-premises Windows/Linux deployment. Helm charts and sample “values.yaml” files are provided by RWS for easy deployment. Kubernetes GPU nodes are preferred for Training Engines.
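As a rough sketch of an initial deployment, assuming the chart archive and a customised “values.yaml” are already at hand (the release name and namespace below are placeholders, not values required by the chart):

# placeholder release name "lwedge" and namespace "language-weaver"
$ helm install lwedge ./sdl-mtedge-8.6.2.tgz \
    --namespace language-weaver --create-namespace \
    -f values.yaml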
 
Minimum requirements
  1. A Kubernetes cluster whose combined node CPU/RAM meets at least the minimum requirements.
    e.g. AKS, EKS, GKE
  2. Autoscaling enabled in the Kubernetes cluster (optional).
  3. An Ingress controller to access the Kubernetes services.
    e.g. NGINX
  4. Storage classes in Kubernetes that support both RWO and RWX volume types (a quick check of these prerequisites is sketched below).
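A quick way to confirm the prerequisites before installing; the ingress-nginx label below assumes the community NGINX ingress chart, so adjust it for your controller:

# confirm node capacity, available storage classes (RWO and RWX) and the ingress controller
$ kubectl get nodes -o wide
$ kubectl get storageclass
$ kubectl get pods -A -l app.kubernetes.io/name=ingress-nginx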

Architecture
  1. The Edge Controller host is deployed as a pod in Kubernetes, and the Edge UI and API are published as a service.
  2. Job Engines, Translation Engines and Training Engines are deployed as StatefulSets and can either be configured to autoscale based on pod CPU usage or run a predefined number of replica pods.
  3. Language Pairs and the Edge configuration are saved in persistent volumes.
  4. All pods use the same Docker base image.
Deployment
  1. Helm charts and example “values.yaml” files are provided by RWS for easy deployment.
  2. Upgrades and rescaling of the Edge configuration are possible with Helm upgrades.
  3. Training Engines require GPU nodes in the Kubernetes cluster for better performance.
  4. The LW Edge license and default admin accounts are saved as Kubernetes secrets (a generic sketch follows this list).
  5. The persistent volume of the Controller pod can use a standard RWO storage class.
  6. The persistent volume for Language Pairs requires a high-performance, case-sensitive RWX storage class.
  7. Language Pairs are deployed to the persistent volume using Kubernetes Jobs.
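As a generic sketch of how such secrets might be created, using hypothetical secret names, keys and file names (the names the chart actually expects are defined in its “values.yaml”):

# hypothetical secret names and keys -- consult the chart's values.yaml for the real ones
$ kubectl create secret generic lwedge-license -n language-weaver \
    --from-file=license=./edge-license.lic
$ kubectl create secret generic lwedge-admin -n language-weaver \
    --from-literal=username=admin@example.com --from-literal=password='changeme'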
Each Language Pair installation job generally runs for about 2-10 minutes, depending on the size of the Language Pair and the available I/O bandwidth.
While it is possible to deploy the Edge software during the Language Pair installation process, Translation Engines will not start or be usable until all of the above Kubernetes Jobs have completed.
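One way to check that the installation jobs have finished, assuming they run in the placeholder namespace used above:

# list the Language Pair installation jobs and wait for them all to complete
$ kubectl get jobs -n language-weaver
$ kubectl wait --for=condition=complete job --all -n language-weaver --timeout=30m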
The most commonly used parameters are described in the sample “google.yaml” and “azure.yaml” files provided. Consult the “values.yaml” inside the chart “sdl-mtedge-8.6.2.tgz” for a complete list of parameters.
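One way to review that list, assuming Helm 3, is to print the chart’s default “values.yaml” to a file:

$ helm show values ./sdl-mtedge-8.6.2.tgz > default-values.yaml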

Reconfigure
A Language Weaver Edge instance in a Kubernetes cluster can be reconfigured or resized while the Language Weaver Edge service is running. Reconfiguration can be performed with a Helm upgrade or by directly modifying the relevant Kubernetes objects; the simplest approach is to modify “values.yaml” and run a Helm upgrade, as sketched after the list below.

Typical changes that can be made while the system is running are:
  1. Bootstrapping the Language Weaver Edge admin user.
  2. Installing new Language Pairs.
  3. Increasing or decreasing the size of the pool of Job Engines or Training Engines, or adding to/modifying the pool of Translation Engines for a given Language Pair.
  4. Autoscaling the number of Translation Engines depending on the system load.
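A minimal sketch, reusing the placeholder release name and namespace from the installation example above: edit “values.yaml” (for example to add a Language Pair or change replica counts), then apply it with a Helm upgrade.

# apply the modified values.yaml to the running release
$ helm upgrade lwedge ./sdl-mtedge-8.6.2.tgz -n language-weaver -f values.yaml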

 

Environment Variables
Custom environment variables can be set independently for:
  1. The Controller pod.
  2. Job Engine pods - all pods share the same environment variables.
  3. Translation Engine pods - each Language Pair has its own environment variables.
  4. Training Engine pods - all pods share the same environment variables.
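The variables themselves are set through the Helm values described above; to confirm what a running pod actually received, a generic check is (the pod name and namespace below are placeholders):

# print the environment of a running pod
$ kubectl exec -n language-weaver <controller-pod-name> -- env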
Edge API
When an API key is created, it is automatically prefixed with “u_” + username + “_”.
For example, to make a REST API call as user admin@example.com with API key myapikey1234:
$ curl -u u_admin@example.com_myapikey1234: https://myhost.example.com/api/v2/