Hewlett Packard Enterprise Accelerates AI Journey from POC to Production with New Solution for AI Development and Training at Scale

New HPE Machine Learning Development System eliminates complexity and cost to build and train models with a complete, ready-to-use solution, accelerating time-to-insights from weeks to days

Hewlett Packard Enterprise (NYSE: HPE) today announced that it is removing barriers for enterprises to easily build and train machine learning models at scale, to realize value faster, with the new HPE Machine Learning Development System. The new system, which is purpose-built for AI, is an end-to-end solution that integrates a machine learning software platform, compute, accelerators, and networking to develop and train more accurate AI models faster, and at scale.

The HPE Machine Learning Development System builds on HPE’s strategic investment in acquiring Determined AI to combine its robust machine learning (ML) platform, now formally called the HPE Machine Learning Development Environment, with HPE’s world-leading AI and high performance computing (HPC) offerings. With the new HPE Machine Learning Development System, users can speed up the typical time-to-value to start realizing results from building and training machine learning models, from weeks and months to days.

Early adopter of HPE Machine Learning Development System launches training of giant multimodal AI model in record speed

HPE also announced today that Aleph Alpha, a German AI startup, has adopted the HPE Machine Learning Development System to train its multimodal AI models, which include Natural Language Processing (NLP) and computer vision. By combining image and text processing in five languages with almost human-like context understanding, the models push the boundaries of modern AI for all kinds of language- and image-based transformative use cases, such as AI assistants for the creation of complex texts, summaries that demonstrate higher-level understanding, searching for highly specific information in hundreds of documents, and leveraging specialized knowledge in a conversational context.

By adopting the HPE Machine Learning Development System, Aleph Alpha had the system up and running immediately and began training efficiently in record time, orchestrating and monitoring hundreds of GPUs.

“We are seeing astonishing efficiency and performance of more than 150 teraflops by using the HPE Machine Learning Development System. The system was quickly set up and we began training our models in hours instead of weeks. While running these massive workloads, combined with our ongoing research, being able to rely on an integrated solution for deployment and monitoring makes all the difference.” – Jonas Andrulis, Founder and CEO, Aleph Alpha

“Enterprises seek to incorporate AI and machine learning to differentiate their products and services, but are often confronted with complexity in setting up the infrastructure required to build and train accurate AI models at scale,” said Justin Hotard, executive vice president and general manager, HPC and AI, at HPE. “The HPE Machine Learning Development System combines our proven end-to-end HPC solutions for deep learning with our innovative machine learning software platform into one system, to provide a performant out-of-the-box solution to accelerate time to value and outcomes with AI.”

Removing barriers to realize full potential of AI with complete machine learning solution

Organizations have yet to reach maturity in their AI infrastructure, which, according to IDC, is the most significant and costly investment required for enterprises that want to speed up their experimentation or prototyping phase to develop AI products and services. Typically, adopting AI infrastructure to support model development and training at scale requires a complex, multi-step process involving the purchase, setup, and management of a highly parallel software ecosystem and infrastructure spanning specialized compute, storage, interconnect, and accelerators.

The HPE Machine Learning Development System helps enterprises bypass the high complexity associated with adopting AI infrastructure by offering the only solution that combines software, specialized computing such as accelerators, networking, and services, allowing enterprises to immediately begin efficiently building and training optimized machine learning models at scale.

Gaining accurate models to unlock value faster with the HPE Machine Learning Development System

The system also helps improve model accuracy faster with state-of-the-art distributed training, automated hyperparameter optimization, and neural architecture search, techniques that are key to modern machine learning workflows.
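For readers less familiar with the term, automated hyperparameter optimization simply means searching over training settings (learning rate, batch size, and so on) rather than tuning them by hand. The minimal Python sketch below illustrates the general idea with a random search; it is hypothetical, does not use the HPE Machine Learning Development Environment API, and the training step is a stand-in placeholder for a real training job.

```python
# Illustrative only: a minimal random-search sketch of automated hyperparameter
# optimization. The search space, objective, and trial count are hypothetical.
import math
import random

SEARCH_SPACE = {
    "learning_rate": (1e-5, 1e-2),     # sampled on a log scale below
    "batch_size": [32, 64, 128, 256],
    "dropout": (0.0, 0.5),
}

def sample_config():
    """Draw one candidate hyperparameter configuration from the search space."""
    lr_lo, lr_hi = SEARCH_SPACE["learning_rate"]
    return {
        "learning_rate": 10 ** random.uniform(math.log10(lr_lo), math.log10(lr_hi)),
        "batch_size": random.choice(SEARCH_SPACE["batch_size"]),
        "dropout": random.uniform(*SEARCH_SPACE["dropout"]),
    }

def train_and_validate(config):
    """Placeholder for a real (possibly distributed) training run.

    In practice this would launch a training job with the given configuration
    and report back a validation metric such as accuracy.
    """
    return random.random()

best_config, best_score = None, float("-inf")
for _ in range(20):                      # the number of trials is arbitrary here
    config = sample_config()
    score = train_and_validate(config)
    if score > best_score:
        best_config, best_score = config, score

print(f"best score {best_score:.3f} with config {best_config}")
```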

The HPE Machine Learning Development System delivers optimized compute, accelerated compute, and interconnect, which are key performance drivers to scale models efficiently for a mix of workloads, starting at a small configuration of 32 GPUs and scaling to a larger configuration of 256 GPUs. On a small configuration of 32 GPUs, the HPE Machine Learning Development System delivers approximately 90% scaling efficiency for workloads such as Natural Language Processing (NLP) and computer vision. Additionally, based on internal testing, the HPE Machine Learning Development System with 32 GPUs delivers up to 5.7X faster throughput for an NLP workload compared to another offering containing 32 identical GPUs but with a sub-optimal interconnect.1
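For context on the scaling-efficiency figure, efficiency is conventionally the measured multi-GPU throughput divided by the ideal linear throughput (single-GPU throughput times GPU count). The numbers below are hypothetical placeholders, not HPE’s benchmark data; they only show how such a percentage is derived.

```python
# Hypothetical throughput numbers for illustration only (not HPE benchmark data).
single_gpu_throughput = 100.0        # samples/second on 1 GPU
measured_throughput_32 = 2880.0      # samples/second measured on 32 GPUs

ideal_throughput_32 = 32 * single_gpu_throughput           # perfect linear scaling
scaling_efficiency = measured_throughput_32 / ideal_throughput_32

print(f"Scaling efficiency on 32 GPUs: {scaling_efficiency:.0%}")  # -> 90%
```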

Blog: HPE Machine Learning Development System: Real-world NLP & computer vision model benchmarks, by Evan Sparks, vice president, HPC & AI, at HPE

Speeding up POC to production with ready-to-use, AI model development and training solution

The HPE Machine Learning Development System is offered as one, integrated solution that provides preconfigured, fully installed AI infrastructure for turnkey model development and training at scale. As part of the offering, HPE Pointnext Services will provide onsite installation and software setup, allowing users to immediately implement and train machine learning models for faster and more accurate insights from their data.

The HPE Machine Learning Development System is offered starting in a small building block, with options to scale up. The small configuration starts with the following:

  • Innovative machine learning platform with the HPE Machine Learning Development Environment to enable enterprises to rapidly develop, iterate, and scale high-quality models from POC to production
  • Optimized AI infrastructure using the HPE Apollo 6500 Gen10 Plus system to provide massive, specialized computing capabilities to train and optimize AI models, starting with eight NVIDIA A100 80GB GPUs for accelerated compute
  • Fine-grained, centralized monitoring and management for optimal performance with HPE Performance Cluster Management, a system management software solution
  • Management stack to control and manage system components using HPE ProLiant DL325 servers and a 1GbE Aruba CX 6300 switch
  • High-performance compute and storage communications using the NVIDIA Quantum InfiniBand networking platform

Availability

The HPE Machine Learning Development System is available now worldwide. For more information, please visit: hpe.com/info/machine-learning-development-system

HPE expands AI product portfolio to help customers improve insights and make better decisions

HPE is building on today’s news with additional AI offerings, including the launch of HPE Swarm Learning, the industry’s first privacy-preserving, decentralized machine learning framework for the edge or distributed sites. With HPE Swarm Learning, organizations in industries such as healthcare, banking and financial services, and manufacturing can share learnings from their AI models with other organizations to improve insights, without sharing the actual data.
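Conceptually, decentralized learning of this kind exchanges model parameters (or parameter updates) between participants rather than raw data. The short Python sketch below illustrates that general pattern with plain parameter averaging across sites; it is a simplified, hypothetical example and is not the HPE Swarm Learning API.

```python
# Simplified illustration of decentralized learning by parameter averaging.
# Each site trains on its own private data and shares only model weights;
# raw data never leaves the site. This is NOT the HPE Swarm Learning API.
from typing import Dict, List

def train_locally(site_data: List[float], weights: Dict[str, float]) -> Dict[str, float]:
    """Hypothetical local training step: nudge weights using private site data."""
    mean_signal = sum(site_data) / len(site_data)
    return {name: w + 0.1 * (mean_signal - w) for name, w in weights.items()}

def merge_weights(all_weights: List[Dict[str, float]]) -> Dict[str, float]:
    """Merge by averaging each parameter across participating sites."""
    return {
        name: sum(w[name] for w in all_weights) / len(all_weights)
        for name in all_weights[0]
    }

# Three sites with private data (hypothetical values) and a shared starting model.
private_datasets = [[0.9, 1.1, 1.0], [0.4, 0.5], [1.6, 1.4, 1.5, 1.7]]
global_weights = {"w0": 0.0, "w1": 0.0}

for _ in range(5):                              # a few collaboration rounds
    local_updates = [train_locally(data, global_weights) for data in private_datasets]
    global_weights = merge_weights(local_updates)   # only weights are exchanged

print(global_weights)
```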

Additionally, HPE announced that it is building on its collaboration with Qualcomm Technologies, Inc. to deliver advanced inferencing offerings that support heterogeneous system architectures providing AI inferencing at scale. HPE will offer the HPE Edgeline EL8000 Converged Edge systems, which are compact, ruggedized edge computing solutions optimized for harsh environments outside the datacenter, with the Qualcomm® Cloud AI 100 accelerator to deliver inferencing for datacenters and at the edge. The combined solution delivers high performance at low power for demanding AI inference workloads.2 The offering will be generally available in August 2022.

To learn more about HPE’s AI solutions, please visit: https://www.hpe.com/us/en/solutions/artificial-intelligence.html

About Hewlett Packard Enterprise

Hewlett Packard Enterprise (NYSE: HPE) is the global edge-to-cloud company that helps organizations accelerate outcomes by unlocking value from all of their data, everywhere. Built on decades of reimagining the future and innovating to advance the way people live and work, HPE delivers unique, open and intelligent technology solutions as a service. With offerings spanning Cloud Services, Compute, High Performance Computing & AI, Intelligent Edge, Software, and Storage, HPE provides a consistent experience across all clouds and edges, helping customers develop new business models, engage in new ways, and increase operational performance. For more information, visit: www.hpe.com

  1. Claims based on internal benchmark testing that compares the HPE Machine Learning Development System with another offering using 32 GPUs
  2. Delivers 350 TOPS peak performance at 75W TDP, according to https://www.qualcomm.com/products/technology/processors/cloud-artificial-intelligence/cloud-ai-100

Qualcomm is a trademark or registered trademark of Qualcomm Incorporated. Qualcomm Cloud AI 100 is a product of Qualcomm Technologies, Inc. and/or its subsidiaries.

Contacts
