AWS launches GPU-powered G4 instances for machine learning, graphics rendering



Amazon Web Services Inc. is making its platform more attractive for companies adopting artificial intelligence.

The cloud giant today announced the general availability of the G4 instance family, which consists of six virtual machines optimized for machine learning workloads. They succeed the G3 series that AWS introduced back in 2017. The performance difference is considerable: the new instances run ResNet-50, a popular image recognition model, up to twice as fast.

Under the hood, the G4 instances use Nvidia Corp.’s Tesla T4 graphics card. The chip packs 2,560 general-purpose CUDA cores along with 320 so-called Tensor Cores engineered specifically to accelerate the matrix math that lets AI models crunch data faster.
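
The Tensor Cores generally come into play only when a model runs in reduced precision such as FP16. As an illustrative sketch (the article names no specific framework), the following Python snippet shows how ResNet-50 inference might be run in half precision with PyTorch’s automatic mixed precision so the T4’s Tensor Cores can be used; the batch of random images is a stand-in for real data.

# Illustrative sketch, not from the article: half-precision ResNet-50 inference
# with PyTorch so the T4's Tensor Cores can be used. Assumes PyTorch and
# torchvision are installed on the instance.
import torch
import torchvision.models as models

model = models.resnet50(pretrained=True).eval().cuda()  # load the model onto the GPU
images = torch.randn(32, 3, 224, 224, device="cuda")    # dummy batch of 32 images

with torch.no_grad(), torch.cuda.amp.autocast():        # FP16 math maps onto Tensor Cores
    logits = model(images)

print(logits.argmax(dim=1))                              # predicted class per image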

Five of the six virtual machines in the G4 family come with a single T4 card per instance. The sixth, known as the g4dn.12xlarge, lives up to its name by providing no fewer than four chips plus 192 gigabytes of memory to go along. The Nvidia silicon is paired with Intel Corp. central processing units that handle general computing tasks, freeing up the graphics cards for the AI software running on top.
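
For illustration, here is a minimal sketch of how one of these instances might be launched with boto3, the AWS SDK for Python. The AMI ID, key pair name and region below are placeholders rather than values from the article; substitute your own.

# Minimal sketch: launching a single-GPU G4 instance with boto3.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # pick a region where G4 is offered

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # e.g. a Deep Learning AMI (placeholder ID)
    InstanceType="g4dn.xlarge",       # smallest G4 size, one T4 GPU
    KeyName="my-key-pair",            # placeholder key pair name
    MinCount=1,
    MaxCount=1,
)

instance_id = response["Instances"][0]["InstanceId"]
print(f"Launched G4 instance: {instance_id}")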

Besides machine learning, the new instances also lend themselves to graphics-intensive workloads such as video rendering. That’s partially the result of the underlying T4 chips’ multipurpose architecture. In addition to the 320 Tensor Cores, the graphics card features 40 RT Cores, dedicated ray-tracing units that take over the resource-intensive task of generating light and shadow effects to speed up visual processing.

“The T4 GPUs are ideal for machine learning inferencing, computer vision, video processing, and real-time speech & natural language processing,” AWS chief evangelist Jeff Barr detailed in a blog post.

The instances are currently available in eight of AWS’ 22 global data center clusters. Further down the road, the provider plans to expand support to more regions, as well as add a supersized seventh instance with eight T4 graphics cards, 96 CPU processing cores and 384 gigabytes of memory.

Companies in need of even more computing power can turn to AWS’ P3 instance series. The largest virtual machine in the lineup comes with eight Nvidia Tesla V100 data center chips that each pack more than 5,700 processing cores, of which 640 are Tensor Cores.

Photo: AWS
