<?xml version="1.0" encoding="UTF-8" ?>
<?xml-stylesheet type="text/xsl" href="https://community.rws.com/cfs-file/__key/system/syndication/rss.xsl" media="screen"?><rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/"><channel><title>Adaptation of Language Pairs in Kubernetes GPU Nodes</title><link>https://community.rws.com/product-groups/linguistic-ai/edge/w/wiki/6733/adaptation-of-language-pairs-in-kubernetes-gpu-nodes</link><description>Language Weaver Edge Wiki</description><dc:language>en-US</dc:language><generator>Telligent Community 12 Non-Production</generator><item><title>Adaptation of Language Pairs in Kubernetes GPU Nodes</title><link>https://community.rws.com/product-groups/linguistic-ai/edge/w/wiki/6733/adaptation-of-language-pairs-in-kubernetes-gpu-nodes</link><pubDate>Fri, 13 Oct 2023 10:28:38 GMT</pubDate><guid isPermaLink="false">10acfa76-f078-475b-a7ef-fc5b3e8d2934:bf91ecc4-9a23-4ef1-8520-3694fe1ff754</guid><dc:creator>Brian John</dc:creator><comments>https://community.rws.com/product-groups/linguistic-ai/edge/w/wiki/6733/adaptation-of-language-pairs-in-kubernetes-gpu-nodes#comments</comments><description>Current Revision posted to Wiki by Brian John on 10/13/2023 10:28:38 AM&lt;br /&gt;
&lt;div&gt;
&lt;div&gt;&lt;span&gt;Language Weaver Edge is deployed in Kubernetes as pods running on standard CPU nodes. Translation Engine and Training Engine pods can optionally be deployed on nodes with Nvidia Tesla GPUs. The Helm charts provided by RWS will deploy Training Engines and Translation Engines on nodes with Nvidia Tesla GPUs when such nodes are detected and GPU use is specified in the &amp;ldquo;values.yaml&amp;rdquo;. By default, no pods are scheduled on Kubernetes GPU nodes unless this is specified in the &amp;ldquo;values.yaml&amp;rdquo;.&lt;/span&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;div&gt;For Training Engines, GPU deployment offers significant speed gains versus CPU deployment (up to 10x faster).&lt;/div&gt;
&lt;div&gt;
&lt;div&gt;&lt;img style="max-height:600px;max-width:900px;" alt=" " src="/resized-image/__size/1800x1200/__key/communityserver-wikis-components-files/00-00-00-03-31/3264.lwedge_2D00_gpu.png" /&gt;&lt;/div&gt;
&lt;div&gt;
&lt;div&gt;&lt;span&gt; &lt;/span&gt;&lt;/div&gt;
&lt;div&gt;&lt;strong&gt;Supported GPU Platforms&lt;/strong&gt;&lt;/div&gt;
&lt;ul&gt;
&lt;li&gt;&lt;span&gt;The current release of Language Weaver Edge supports only Nvidia Tesla GPUs.&lt;/span&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;div&gt;&lt;/div&gt;
&lt;div&gt;&lt;strong&gt;Supported Nvidia Tesla Drivers&lt;/strong&gt;&lt;/div&gt;
&lt;ul&gt;
&lt;li&gt;&lt;span&gt;&lt;a title="https://confluence.sdl.com/display/aip/about+nvidia+drivers" href="https://confluence.sdl.com/display/AIP/About+NVidia+drivers" rel="noopener noreferrer" target="_blank"&gt;https://confluence.sdl.com/display/AIP/About+NVidia+drivers&lt;/a&gt;&lt;/span&gt;&lt;/li&gt;
&lt;/ul&gt;
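&lt;div&gt;&lt;span&gt;One common way (not specific to Language Weaver Edge) to confirm that the Nvidia drivers and device plugin are working is to check that the GPU nodes advertise a non-zero &amp;ldquo;nvidia.com/gpu&amp;rdquo; allocatable resource to Kubernetes:&lt;/span&gt;&lt;/div&gt;
&lt;div style="padding-left:60px;"&gt;&lt;span style="font-family:&amp;#39;courier new&amp;#39;, courier;"&gt;kubectl get nodes "-o=custom-columns=NAME:.metadata.name,GPU:.status.allocatable.nvidia\.com/gpu"&lt;/span&gt;&lt;/div&gt;
&lt;div&gt;&lt;span&gt;Nodes showing &amp;ldquo;&amp;lt;none&amp;gt;&amp;rdquo; in the GPU column will not be used for GPU scheduling.&lt;/span&gt;&lt;/div&gt;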
&lt;div&gt;&lt;/div&gt;
&lt;div&gt;&lt;br /&gt;
&lt;div&gt;&lt;strong&gt;Configuration&lt;/strong&gt;&lt;/div&gt;
&lt;ul&gt;
&lt;li&gt;&lt;span&gt;Adding the setting below under the Training Engine or Translation Engine StatefulSet configuration in &amp;ldquo;values.yaml&amp;rdquo; forces the Helm deployment to schedule the Training Engine or Translation Engine pods on GPU nodes.&lt;/span&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;div style="padding-left:60px;"&gt;&lt;span style="font-family:&amp;#39;courier new&amp;#39;, courier;"&gt;gpu: true&lt;/span&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;&lt;div style="clear:both;"&gt;&lt;/div&gt;
</description></item></channel></rss>