Editor’s take: When it comes to generative AI, the common thinking is that the critical tools necessary for businesses to build their own GenAI-based applications are virtually all cloud-based resources and that organizations are perfectly fine with using them. But what if that really isn’t the case?
What if companies are beginning to see that an alternative approach, using hybrid computing architectures, might be a better choice for building and running GenAI applications? Early signs suggest that this shift in thinking is already underway.
In fact, one of the most surprising findings from my recent survey of 1,000+ U.S.-based companies currently using GenAI (see “The Intelligent Path Forward: Generative AI in the Enterprise” for more) is the significant interest organizations have in running their GenAI-powered applications on-premises. Although only a small single-digit percentage reported they are currently running these applications locally, a striking 80% of respondents expressed interest in doing so.
To be clear, most organizations will continue to run GenAI applications in the cloud, and a majority of those workloads will likely stay there. However, there’s no question that demand is building to move some of these workloads behind corporate firewalls.
The implications of this interest are profound and point to emerging shifts in market dynamics that are set to impact how IT products and services are positioned, marketed, sold, and deployed.
At a basic level, we’re bound to see rapid growth in hybrid AI, where most enterprise environments end up running GenAI-powered applications both in the cloud and on-premises. In some cases, these may be separate applications, each operating in its own environment, but increasingly, I expect to see applications that run concurrently across both.
Long-time IT industry observers may not find this development surprising – it closely parallels the transition from pure cloud initiatives to hybrid cloud models that have now become widespread. However, there’s a crucial difference with GenAI. Rather than taking about a decade, as the move from cloud to hybrid cloud did, the transition to Hybrid GenAI will likely happen within about 10 months. Just as we’ve seen a rapid evolution in GenAI-based foundation models and tools, I expect an equally swift transition to new types of GenAI-powered computing environments and business models.
The reasons for this interest in Hybrid GenAI mirror what organizations cited when they evolved their cloud strategies to hybrid principles. Many companies have been reluctant – or unable, especially in certain regulated industries – to move their most precious data to the cloud. As a result, they’ve recognized the need to run at least some of the applications that use this critical data within their own data centers.
With GenAI, it’s this same critical data that organizations have quickly realized is essential for training and fine-tuning the foundation models they use for GenAI applications. To get the most valuable results, they need to input their most important (often highly confidential and frequently on-premises) data into these models. In other words, data gravity strikes again.
This, in turn, is prompting companies to rethink their GenAI application development strategies. While they commonly choose the cloud for initial proof-of-concept work, there’s a growing awareness that, for full deployment, they want the ability to run tools, platforms, and foundation models within their own environments.
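One common way to keep that PoC-to-production path smooth is to leave the application code unchanged and swap only the model endpoint through configuration, so the same app can call a cloud API during the proof of concept and an on-prem model server at deployment. A minimal sketch in Python; the URLs and the `GENAI_STAGE` environment variable are hypothetical names, not a real provider’s API:

```python
import os

# Hypothetical endpoints: a cloud API for the proof of concept and an
# on-prem model server behind the firewall for production. Any service
# exposing the same API shape could stand in for either one.
CLOUD_URL = "https://api.example-cloud.com/v1"  # assumed PoC endpoint
LOCAL_URL = "http://llm.internal:8000/v1"       # assumed on-prem endpoint

def model_base_url() -> str:
    """Select the model endpoint from the deployment stage.

    The application code stays identical; only the base URL changes
    between the cloud proof of concept and the on-prem deployment.
    """
    stage = os.environ.get("GENAI_STAGE", "poc")
    return CLOUD_URL if stage == "poc" else LOCAL_URL
```

The design choice here is simply that environment (or config-file) switching keeps the hybrid decision out of the code path, which is what makes moving a workload behind the firewall a deployment change rather than a rewrite.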
As a result, I believe that many IT product and service providers will accelerate their hybrid AI offerings. One major challenge holding back on-prem GenAI deployments is that several popular foundation models – notably OpenAI’s GPT family, Amazon’s Titan, and Google’s Gemini – aren’t yet available for on-prem deployment.
Currently, they can only be accessed via the cloud. By this time next year, however, I expect that situation to be very different. Companies that make their tools more readily available on-prem and in hybrid environments are likely to gain a competitive edge. Whether that edge proves to be long-term or short-term remains to be seen, but given the momentum and interest in on-prem deployments, it’s bound to be influential.
Related to this is the increased demand for more corporate infrastructure. After a challenging decade for major corporate server and infrastructure providers like Dell, HPE, Lenovo, Cisco, and others, driven by the shift to cloud-based workloads, it seems the pendulum is now swinging back.
Arguably, the adoption of hybrid cloud architectures already started this process, but the move to running more GenAI-powered applications behind the firewall (or in co-location environments) is likely to accelerate it further. As a Cisco executive summarized at the company’s recent Partner Summit event, “data centers are cool again.”
What makes Hybrid GenAI even more intriguing – and potentially more impactful – than Hybrid Cloud is that with GenAI, there’s an additional layer of hybridization: running workloads directly on devices alongside on-premises or cloud-based resources. The tremendous improvements in on-board processing of GenAI applications on PCs and smartphones – thanks to a combination of semiconductor, system design, and software enhancements – are creating this third layer of potential workload hybridization.
Admittedly, these types of distributed computing architectures are not easy to build or write applications for, but their potential impact is huge. Imagine being able to leverage the fastest, most available, or best-optimized computing resources across the entire three-layer stack of device, datacenter, and cloud – selecting the best tier or combination of tiers for a given application’s needs – and your head starts to swim as you ponder the possibilities.
Exactly how all these developments in Hybrid GenAI will pan out is not at all clear. Understanding the full implications of the three-layer hybridization stack that GenAI applications will soon have is no simple task. Toss in potential x-factors like small language models and how they could reshape the rules on how and where GenAI applications are written and run, and things could get even more confusing.
Still, it seems clear that we’re on the brink of significant, fast-moving changes in how companies think about the computing resources they’ll need to leverage GenAI effectively. In turn, this is likely to reshape the landscape of IT suppliers and their offerings much sooner and more profoundly than many might anticipate.
Source: Hybrid AI will change everything in enterprise computing