VMware Cloud Foundation 9.0 isn’t just a product update; it’s a defining leap forward.
What started as a bundled stack is now a full-spectrum private cloud platform, built for traditional workloads, modern apps, and enterprise AI. With cost-saving innovations, native automation, and built-in AI support, VCF 9.0 sets a new bar for private cloud agility and scale. This is the most significant release in VCF’s history, and here’s why.

From Products to Platform: Why It Matters
For years, VMware customers juggled multiple management planes across vSphere, vSAN, NSX, Aria, and Kubernetes tooling. VCF 9.0 eliminates that sprawl by bringing everything into two unified consoles: VCF Operations for fleet and infrastructure management, and VCF Automation for self-service provisioning and consumption.
Benefit: You save time, reduce human error, and boost team efficiency by managing everything—from deployment to decommission—through a single, cohesive interface.
What’s New in VCF 9.0—and Why It Matters

VMware Cloud Foundation 9.0 introduces powerful new features that enhance infrastructure performance, security, and operational efficiency. Here's a breakdown of what’s new and the real-world impact:
Introduction: Beyond the Prompt
The era of single-turn prompts is over. Enterprise AI teams are now building agentic applications—software that can reason, remember, and act over multiple steps using tools, memory, and context.
But while open-source frameworks like LangChain and lightweight agent runtimes are popular for prototyping, they rarely meet enterprise standards for security, observability, and operational control. Enter VMware Tanzu and Spring AI 1.0: a fully integrated, production-ready framework for deploying agentic AI workflows on a secure Kubernetes platform, backed by VMware Cloud Foundation (VCF).

What Makes an App "Agentic"?
Agentic apps move beyond simple LLM queries. They reason across multiple steps, persist memory between interactions, and act on the outside world by invoking tools and APIs.
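The core of that pattern is a bounded plan-act-observe loop. A minimal sketch in plain Java (no Spring AI dependency; the planner stub and tool names are illustrative stand-ins for a real model call and real integrations):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.function.Function;

// Toy agent loop: plan -> act (call a tool) -> observe -> repeat.
// In a real Spring AI app, plan() would be a ChatClient-backed LLM call.
public class AgentLoopSketch {

    // Tools the agent may invoke, keyed by name (illustrative).
    static Map<String, Function<String, String>> tools = Map.of(
        "calculator", arg -> String.valueOf(Integer.parseInt(arg) * 2),
        "lookup",     arg -> "no entry found for " + arg
    );

    // Stub planner: picks the next action from the goal and memory.
    static String plan(String goal, List<String> memory) {
        if (memory.isEmpty()) return "calculator:21";      // step 1: use a tool
        return "FINISH:" + memory.get(memory.size() - 1);  // step 2: answer
    }

    public static String run(String goal) {
        List<String> memory = new ArrayList<>();   // the agent's working memory
        for (int step = 0; step < 5; step++) {     // bounded loop, no runaways
            String action = plan(goal, memory);
            if (action.startsWith("FINISH:")) return action.substring(7);
            String[] parts = action.split(":", 2); // "tool:argument"
            String observation = tools.get(parts[0]).apply(parts[1]);
            memory.add(observation);               // remember the observation
        }
        return "gave up";
    }

    public static void main(String[] args) {
        System.out.println(run("double 21")); // prints 42
    }
}
```

The loop is deliberately capped at a fixed number of steps; production agent runtimes enforce the same kind of budget so a confused planner cannot spin forever.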
The Model Context Protocol (MCP) is an emerging open standard that defines how AI agents manage and retrieve context across tools like vector databases, LLMs, and business APIs. Together with the Spring AI SDK, MCP allows developers to orchestrate multi-step agentic workflows using familiar Java patterns, delivered securely and observably via Tanzu Platform.

From Cloud-First to Cloud-Smart: What CIOs Are Learning from Real-World Deployments

The cloud revolution promised agility, scalability, and cost savings. For many organizations, adopting a "cloud-first" strategy seemed like the clear path forward. But in 2025, we are witnessing a dramatic shift: CIOs and enterprise architects across industries are embracing a new approach, the "cloud-smart" strategy. Based on real-world lessons, emerging industry surveys, and the evolving demands of AI, security, and cost control, the cloud-smart philosophy is reshaping how we think about digital infrastructure.

From Cloud-First to Cloud-Smart

A cloud-first strategy defaults new workloads to public cloud environments. It favors speed and scale, but often lacks nuanced workload placement, governance, and long-term cost analysis. The result? Cloud sprawl, ballooning costs, compliance headaches, latency challenges, and vendor lock-in.
In contrast, a cloud-smart approach takes a more deliberate path. It asks: "What is the right environment for this workload?" Whether it's public cloud, private cloud, hybrid, or edge, cloud-smart thinking evaluates placement based on security, performance, budget, compliance, and data sovereignty. This approach doesn't reject public cloud; it incorporates it as one option in a diversified portfolio that aligns better with business priorities.

Artificial Intelligence is quickly becoming a staple in every industry, from personalized customer service to autonomous vehicles. But behind the sleek models and intelligent applications lies a critical ingredient: NVIDIA. Just like cocoa beans are essential to making chocolate, whether milk, dark, or white, NVIDIA’s technology is the raw ingredient fueling AI across every major platform. Whether it’s Microsoft’s Copilot, VMware’s Private AI Foundation, or Hugging Face’s model-training stack, chances are NVIDIA is at the core.

The Hardware Layer: From Beans to Silicon

NVIDIA's GPUs are the silicon equivalent of cocoa beans: raw, potent, and necessary for transformation. Products like the A100, H100, and the Grace Hopper Superchip provide the computational horsepower to train and deploy large AI models. The DGX systems and NVIDIA-Certified infrastructure are the AI factories, grinding and refining data into actionable intelligence.
These systems are foundational in hyperscale cloud environments and enterprise data centers alike. Whether you’re processing video analytics in a smart-city deployment or training a custom LLM for financial modeling, it all starts here. NVIDIA hardware is often the first ingredient sourced in any serious AI recipe.

Red Hat Enterprise Linux (RHEL) 10 is a major leap forward for enterprise IT. With modern infrastructure demands, hybrid cloud growth, and the emergence of AI and quantum computing, Red Hat has taken a bold approach with RHEL 10, bringing in container-native workflows, generative AI, enhanced security, and intelligent automation. If you’re a systems engineer, architect, or infrastructure lead, this release deserves your full attention. Here’s what makes RHEL 10 a milestone in the evolution of enterprise Linux.

Image Mode Goes GA: Container-Native System Management

Image Mode, first introduced as a tech preview in RHEL 9.4, is now generally available (GA) in RHEL 10, and it's one of the most impactful changes in how you build and manage Linux systems.
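With Image Mode, the whole operating system is declared in a Containerfile built on a bootc base image. A minimal sketch (the base-image path and package choices here are illustrative; check Red Hat's registry for the exact RHEL 10 bootc image name):

```dockerfile
# Containerfile: the entire OS is declared here, like an app container.
# Base image name is an assumption; verify against registry.redhat.io.
FROM registry.redhat.io/rhel10/rhel-bootc:latest

# Bake packages and configuration directly into the image
RUN dnf -y install httpd && dnf clean all
RUN systemctl enable httpd

# Build and push like any container image, then point a host at it:
#   podman build -t registry.example.com/myos:1.0 .
#   bootc switch registry.example.com/myos:1.0
```

Updates become image rebuilds rather than per-host package transactions, which is what makes the workflow feel like building Docker containers.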
Rather than managing systems through traditional package-by-package installations, Image Mode enables you to define your entire system declaratively using bootc, similar to how you build Docker containers.

As generative AI (GenAI) revolutionizes industries with tools like ChatGPT, Falcon, and MPT, enterprises are asking the big question: how do we embrace AI innovation without compromising data security or compliance? Enter VMware Private AI, a purpose-built framework to bring GenAI safely into enterprise data centers. This post breaks down VMware’s reference architecture for deploying LLMs using VMware Cloud Foundation, Tanzu Kubernetes Grid, and NVIDIA AI Enterprise. Whether you're building AI chatbots or fine-tuning foundation models, VMware Private AI equips your infrastructure for secure, scalable innovation.

Why On-Premises GenAI?

Designing the Future: How Dell’s AI Factory and PowerScale Supercharge Scalable AI Productivity (5/20/2025)

If you're serious about AI and scalability, Dell Technologies is making sure you're not left behind. At Dell Technologies World 2025, I had the chance to sit in on an incredible session titled “Accelerate Productivity Leveraging the Power of AI Factory with PowerScale Storage.” It didn’t just meet my expectations; it redefined how I view scalable AI infrastructure. Here’s a recap of what made this session so powerful.

The AI Factory: Infrastructure with Intent

Dell’s AI Factory is more than marketing buzz; it's a blueprint for delivering production-ready AI. Built using Dell switching and powered by a 400Gbps core fabric with 100Gbps uplinks per node, the environment is engineered for one thing: fast, high-volume AI workloads. This speed is critical when loading large language models (LLMs) across GPUs, and Dell’s architecture ensures that happens with near-zero latency.
Whether you're deploying a chatbot, building digital assistants, or scaling to enterprise RAG (retrieval-augmented generation) agents, Dell’s AI Factory provides the optimized backbone.

PowerScale: Storage That Thinks Fast

PowerScale storage is the unsung hero of this story. It’s not just fast; it’s smart.
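A RAG ingestion pipeline, at its simplest, chunks documents, embeds each chunk as a vector, indexes the vectors, and retrieves the nearest chunk for a query. A toy Java sketch of those stages (the letter-frequency "embedding" is a deliberately crude stand-in for a real embedding model, and the in-memory list stands in for a vector database):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Toy RAG pipeline: chunk -> embed -> index -> retrieve by cosine similarity.
public class RagSketch {

    // Chunking: split a document into fixed-size word windows.
    static List<String> chunk(String doc, int wordsPerChunk) {
        String[] words = doc.split("\\s+");
        List<String> chunks = new ArrayList<>();
        for (int i = 0; i < words.length; i += wordsPerChunk) {
            int end = Math.min(words.length, i + wordsPerChunk);
            chunks.add(String.join(" ", Arrays.copyOfRange(words, i, end)));
        }
        return chunks;
    }

    // Stand-in embedding: a 26-dim letter-frequency vector.
    static double[] embed(String text) {
        double[] v = new double[26];
        for (char c : text.toLowerCase().toCharArray())
            if (c >= 'a' && c <= 'z') v[c - 'a']++;
        return v;
    }

    static double cosine(double[] a, double[] b) {
        double dot = 0, na = 0, nb = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i]; na += a[i] * a[i]; nb += b[i] * b[i];
        }
        return dot / (Math.sqrt(na) * Math.sqrt(nb) + 1e-9);
    }

    // Retrieval: return the indexed chunk nearest the query vector.
    static String retrieve(List<String> chunks, String query) {
        double[] q = embed(query);
        String best = chunks.get(0);
        double bestScore = -1;
        for (String c : chunks) {
            double s = cosine(embed(c), q);
            if (s > bestScore) { bestScore = s; best = c; }
        }
        return best;
    }

    public static void main(String[] args) {
        List<String> chunks = chunk(
            "gpu clusters train models fast storage feeds data to the pipeline", 5);
        System.out.println(retrieve(chunks, "storage data pipeline"));
    }
}
```

At production scale the embedding call is the GPU-bound step and the index lives in a vector database, which is why fast, low-latency storage underneath the pipeline matters so much.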
In this session, we saw real-world examples where massive data sets, like 100,000+ documents from arXiv, were chunked, embedded, and indexed in seconds using vector databases. Thanks to PowerScale’s integration with container storage interfaces (CSI), that data could then be quickly retrieved: 5% faster than comparable block storage options and with much lower latency. For AI workflows where every millisecond counts (think healthcare diagnostics or real-time surveillance), that performance edge is everything.

As more of my customers embrace the transformative potential of artificial intelligence, the demand for robust, secure, and scalable AI infrastructure has surged. Nutanix has taken a pivotal role in addressing these needs with its GPT-in-a-Box 2.0 solution, an enterprise-ready, full-stack AI platform tailored for organizations that require secure, on-premises AI deployments. This offering streamlines AI adoption by providing a comprehensive ecosystem, optimized infrastructure, and extensive partner support, allowing businesses to deploy and manage AI applications at scale.

Simplified AI Deployment with GPT-in-a-Box

Nutanix’s GPT-in-a-Box simplifies the deployment, operation, and scaling of AI workloads. With its 2.0 iteration, the solution includes an integrated inference endpoint and end-to-end features such as GPU and CPU certification, high-performance storage, Kubernetes management, and in-depth telemetry. This design allows organizations to run generative AI (GenAI) models such as LLMs on-premises, providing control over data security and operational flexibility.
GPT-in-a-Box is particularly beneficial for industries with stringent data regulations, such as government and finance, where public cloud alternatives may not meet compliance requirements. By extending Nutanix’s hybrid infrastructure strengths to AI, organizations can now manage AI applications with the same control and resilience they expect from their existing IT environments.

As my customers continue to embrace hybrid cloud environments, the need for efficient and flexible cloud management solutions becomes more critical. VMware Cloud Foundation (VCF) 5.2 introduces several enhancements designed to address these needs, focusing on improving lifecycle management, scalability, security, and flexibility. Let's dive into the key features and updates in VCF 5.2 and see how they can benefit your cloud strategy.

Seamlessly Transition to Cloud Foundation

One of the standout features of VCF 5.2 is the ability to import existing vSphere infrastructure into Cloud Foundation. This capability extends the SDDC Manager's inventory and lifecycle management to your current infrastructure, making the transition smoother and less disruptive. There are two primary use cases:
Flexible Edge Architectures for Diverse Needs

VCF 5.2 offers a range of edge architecture options to cater to various deployment scenarios:
In the current landscape shaped by Broadcom's influence on VMware's trajectory, organizations that plan to stay with VMware may find it prudent to transition to a hybrid cloud setup, and choosing the right infrastructure is paramount for performance and scalability. Among the offerings in the revamped portfolio, VMware Cloud Foundation (VCF) emerges as a favored option thanks to its robust software-defined data center (SDDC) capabilities. Amid Broadcom's streamlined portfolio, featuring VMware vSphere Foundation and VMware Cloud Foundation, loyal VMware customers have a compelling incentive to opt for a dedicated solution.

Combining VCF with Dell VxRail presents an attractive proposition. Not only is VxRail custom-built for VCF, but it also offers the flexibility to integrate third-party storage alongside VMware vSAN. This matters for customers who have already invested in external storage systems, or whose use cases require them. The combination sets itself apart with seamless integration, streamlined management, and enhanced performance. Consequently, deploying VMware Cloud Foundation on Dell VxRail emerges as the prime selection.

Tailored Integration and Optimization