Red Hat Enterprise Linux AI extends innovation across the hybrid cloud
Red Hat Enterprise Linux (RHEL) AI is Red Hat’s foundation model platform, enabling users to develop, test, and run GenAI models to power enterprise applications.
The platform brings together the open source-licensed Granite LLM family and InstructLab model alignment tools based on the Large-scale Alignment for chatBots (LAB) methodology, packaged as an optimized, bootable RHEL image for individual server deployments across the hybrid cloud.
While GenAI’s promise is immense, the associated costs of procuring, training, and fine-tuning LLMs can be astronomical, with some leading models costing nearly $200 million to train before launch. This figure does not include the cost of aligning a model to a given organization’s specific requirements or data, work that typically requires data scientists or highly specialized developers. Whichever model is selected for a given application, it must still be aligned with company-specific data and processes, making efficiency and agility key for AI in actual production environments.
Red Hat believes that smaller, more efficient, built-to-purpose AI models will form a substantial part of the enterprise IT stack alongside cloud-native applications over the next decade. But to achieve this, GenAI needs to be more accessible and available, from its costs to its contributors to where it can run across the hybrid cloud. For decades, open source communities have helped solve similarly complex software problems through contributions from diverse groups of users; a similar approach can lower the barriers to effectively embracing GenAI.
These are the challenges that RHEL AI intends to address: making GenAI more accessible, more efficient, and more flexible for CIOs and enterprise IT organizations across the hybrid cloud. RHEL AI helps:
- Empower GenAI innovation with enterprise-grade, open source-licensed Granite models aligned with various GenAI use cases.
- Streamline aligning GenAI models to business requirements with InstructLab tooling, making it possible for domain experts and developers within an organization to contribute unique skills and knowledge to their models even without extensive data science skills.
- Train and deploy GenAI anywhere across the hybrid cloud by providing all the tools needed to tune and deploy models on production servers wherever the associated data lives. RHEL AI also provides a ready on-ramp to Red Hat OpenShift AI for training, tuning, and serving these models at scale using the same tooling and concepts.
RHEL AI is generally available today via the Red Hat Customer Portal to run on-premises or to upload to AWS and IBM Cloud as a “bring your own subscription” (BYOS) offering. A BYOS offering on Azure and Google Cloud is planned for Q4 2024, and RHEL AI is also expected to be available on IBM Cloud as a service later this year.