For a long time, there was a rule everyone in modeling followed—whether you were in finance, statistics, or early machine learning:
Keep the model simple.

The reasoning was straightforward. If you added too many parameters, your model would overfit—memorize the past instead of learning something that generalizes. Simpler models were safer. More stable. Easier to trust.

That rule shaped decades of thinking in finance in particular. Factor models stayed small. Linear relationships dominated. Parsimony wasn’t just a preference; it was doctrine.

But something has changed. Recent work in financial machine learning—and increasingly, real-world practice—has revealed a pattern that directly contradicts that intuition: models with more parameters than data points can perform better out of sample.

This isn’t just theory. At the Future Alpha quant event, in a session on Machine Learning, Market Risk, and the Future of Asset Pricing, the message was clear: leading firms are moving away from small, interpretable models toward highly parameterized ones that better reflect the actual structure of markets.
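To make the pattern concrete, here is a minimal sketch of the "double descent" effect using minimum-norm regression on random ReLU features and synthetic data. Every number in it is an illustrative assumption, not the setup of any firm or paper mentioned above:

```python
# Minimal double-descent sketch: minimum-norm least squares on random
# ReLU features. Data, dimensions, and noise level are all synthetic.
import numpy as np

rng = np.random.default_rng(0)
n_train, n_test, d = 100, 2000, 20
w_true = rng.standard_normal(d)

def make_data(n):
    X = rng.standard_normal((n, d))
    y = np.sin(X @ w_true / np.sqrt(d)) + 0.1 * rng.standard_normal(n)
    return X, y

X_tr, y_tr = make_data(n_train)
X_te, y_te = make_data(n_test)

for p in [10, 50, 100, 200, 1000, 5000]:   # number of random features
    W = rng.standard_normal((d, p)) / np.sqrt(d)
    F_tr = np.maximum(X_tr @ W, 0.0)        # random ReLU features
    F_te = np.maximum(X_te @ W, 0.0)
    beta = np.linalg.pinv(F_tr) @ y_tr      # min-norm fit; interpolates once p >= n_train
    mse = np.mean((F_te @ beta - y_te) ** 2)
    print(f"p={p:5d}  test MSE={mse:.4f}")
```

Typically the test error spikes near p ≈ n_train, the classical overfitting regime, and then falls again as p grows far past the number of training points, which is exactly the over-parameterized behavior described above.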
What GTC 2026 Revealed About the Future of AI Infrastructure

We’ve Been Optimizing the Wrong Layer

For the past few years, most conversations around AI infrastructure have centered on one thing: building bigger and faster AI factories. More GPUs. Larger clusters. Faster interconnects. And for a while, that made sense. Training was the bottleneck.

But sitting in this session at GTC 2026, it became clear that the bottleneck has shifted—and most organizations haven’t caught up yet. The real challenge is no longer how we train AI; it is how we serve it. That shift—from training to inference—is not subtle. It fundamentally changes how infrastructure needs to be designed, deployed, and operated.
I created a short video overview of Continuing the Journey Toward Responsible AI.
If you’d rather go deeper into the operational and governance framework, continue reading below.
From Ethical Principles to Operational Governance
Artificial intelligence is scaling faster than any general-purpose technology in modern history.
Since 2012, the compute used to train leading AI systems has increased by an estimated factor of 10 billion (10¹⁰). Training cycles that once required months now iterate in weeks. Recent enterprise benchmarks show that more than 70% of executives cite ethical and regulatory risk as a primary barrier to AI deployment.

AI is no longer experimental. It is infrastructural. And if AI is infrastructure, then responsible AI is not philosophy. It is risk management.
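For a sense of the pace that growth figure implies, a quick calculation (the 2026 endpoint is an assumption; the text gives only the 2012 start):

```python
# Implied doubling time for a 10^10 increase in training compute.
# The 2012-2026 window is an assumption; only the start year is stated.
import math

growth = 1e10
years = 2026 - 2012
doublings = math.log2(growth)            # about 33.2 doublings
months = years * 12 / doublings
print(f"{doublings:.1f} doublings, one every {months:.1f} months")
```

A doubling roughly every five months, under these assumptions, is far faster than typical multi-year hardware and governance cycles, which is part of why the "infrastructural" framing matters.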
Why TFLOPs and VRAM Are the Least Interesting Parts of Production AI

Introduction: The GPU Fallacy

When organizations plan large-scale LLM inference, the conversation almost always starts with hardware: How many GPUs? How many TFLOPs? How much VRAM?
This fixation on raw compute is a textbook example of what I’ve previously called the AI Illusion: the belief that advanced infrastructure automatically produces outcomes. In reality, inference performance is determined far more by the system's behavior than by GPU specs. This article breaks down the hidden bottlenecks that dominate real-world LLM inference and explains why architects who only model TFLOPs and VRAM are consistently surprised in production.
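As one illustration of why spec sheets mislead, here is a hedged back-of-envelope roofline for single-stream decode. The model size, weight precision, bandwidth, and peak FLOP rate below are assumptions picked for round arithmetic, not measurements of any particular GPU:

```python
# Back-of-envelope roofline for batch-size-1 LLM decode.
# All hardware and model numbers are illustrative assumptions.

PARAMS = 70e9             # parameters in an assumed 70B model
BYTES_PER_PARAM = 2       # fp16/bf16 weights
PEAK_FLOPS = 1e15         # assumed peak dense FLOP/s of the accelerator
MEM_BW = 3.35e12          # assumed HBM-class bandwidth, bytes/s

# Each generated token must stream essentially all weights from memory,
# and costs roughly 2 FLOPs per parameter.
bytes_per_token = PARAMS * BYTES_PER_PARAM
flops_per_token = 2 * PARAMS

t_mem = bytes_per_token / MEM_BW       # time per token if bandwidth-bound
t_cmp = flops_per_token / PEAK_FLOPS   # time per token if compute-bound

print(f"bandwidth-bound ceiling: {1 / t_mem:8.1f} tokens/s")
print(f"compute-bound ceiling:   {1 / t_cmp:8.1f} tokens/s")
```

Under these assumed numbers the compute ceiling is roughly 300x higher than the bandwidth ceiling, so at low batch sizes the arithmetic units sit mostly idle; batching, KV-cache behavior, and scheduling decide delivered throughput, not TFLOPs.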
Why accelerating AI output often magnifies problems instead of fixing them.
AI doesn’t automatically improve outcomes; instead, it amplifies existing processes — good or bad.
AI investment has never been higher.
AI capability has never been stronger. Yet across industries, many organizations are quietly frustrated by the results. Projects stall. Adoption plateaus. Confidence erodes. The promised transformation never quite arrives.

This isn’t because AI is ineffective or overhyped. It’s because many organizations fall into what we call the AI Illusion. The illusion is the belief that adding AI automatically improves outcomes. The reality is more uncomfortable: AI amplifies whatever already exists—good or bad. If processes are clear, AI helps. If they’re unclear, AI accelerates the problems.
---

### Watch: The AI Illusion Explained
*In this short video, I break down why AI amplifies existing systems, how organizations fall into the Amplification Trap™, and what leaders can do to design for Decision Gravity™ instead.*
AI success doesn’t begin with hardware or tools — it begins with clarity.
The most effective organizations don’t start with servers or GPUs — they start with outcomes. They focus on why AI matters, not just how it works. And that’s what allows them to align models, infrastructure, and business value from day one.
Watch this quick ~10-minute walkthrough of the blueprint before you dive into the blog details.
Step 1: Inventory Reality — Begin with the Current Environment
Before defining architecture, we first assess what exists today. This determines what can be reused, what must be modernized, and where AI will struggle to scale.
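As a minimal sketch of what that assessment might capture in practice (the record fields and sample entries below are hypothetical illustrations, not a prescribed schema):

```python
# Hypothetical "Inventory Reality" record for Step 1.
# Field names and sample entries are illustrative only.
from dataclasses import dataclass

@dataclass
class InventoryItem:
    name: str                # system or dataset, e.g. "crm-postgres"
    category: str            # "data", "compute", "model", or "integration"
    owner: str               # accountable team
    reusable_for_ai: bool    # usable for AI workloads as-is?
    modernization_note: str  # what must change before it scales, if anything

inventory = [
    InventoryItem("crm-postgres", "data", "sales-eng", True, ""),
    InventoryItem("on-prem-gpu-rack", "compute", "infra", False,
                  "no orchestration layer; batch jobs only"),
]

# Items that are not reusable as-is become modernization work items to
# resolve before any architecture decisions are locked in.
backlog = [item for item in inventory if not item.reusable_for_ai]
print([item.name for item in backlog])
```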
A New Industrial Shift: From Data Centers to AI Factories

“The price of intelligence just dropped by 10x.” With that declaration, Jensen Huang signaled a generational pivot: every conventional data center is now obsolete, replaced by the AI Factory — a purpose-built system designed to mass-produce cognitive work. In the same way the industrial revolution mechanized labor, the AI Factory industrializes thought. The keynote at NVIDIA GTC 2025 outlined not a single product, but an entire economic architecture for manufacturing intelligence at scale.

Intelligence at the Edge: Arc + Nokia = 6G AI on RAN

NVIDIA’s partnership with Nokia brings AI directly to the wireless edge through the new NVIDIA Arc platform.
Why it matters to business leaders:
How Atlassian’s 2025 AI Collaboration Report validates the “5 Pillars” every organization needs to get right.
Over the past two years, artificial intelligence has embedded itself into nearly every corner of the enterprise. From code generation and marketing automation to customer engagement and reporting, AI has become a workplace staple. But despite the hype, most organizations still aren’t seeing the transformational outcomes they were promised.
According to the Atlassian AI Collaboration Report 2025, daily AI usage has doubled in the last year, and employees report being 33% more productive. But here’s the catch: Only 4% of organizations are seeing meaningful improvements in company-wide efficiency, innovation, or work quality.
AI is making individuals faster, but it’s not making teams better. This productivity–collaboration gap is one of the main reasons so many AI projects stall after the pilot stage.
I wrote previously on Why AI Projects Fail: The 5 Pillars That Crumble Without the Right Foundation. Atlassian’s findings reinforce exactly that point: when one or more of those foundational pillars is weak, AI remains a tool, not a transformation. Let’s break this down.

“The highest ethical duty of a Christian … is to love God and love your neighbor.” — Christian Ethics (The Gospel Coalition)

Artificial Intelligence has sparked endless debate over fairness, bias, and governance. But at the root of nearly every ethical discussion lies a deeper question: Who decides what is good? Before we can align AI to “human values,” we must define what values mean — and on what foundation they rest.

The Fragility of Social Morality

Across history, morality defined by social consensus has proven fragile. Consider:
These examples show that while societies often lag in recognizing injustice, Christian ethics has historically offered a corrective authority. Rather than conforming to the cultural status quo, many believers were willing to stand against it, appealing to a higher, unchanging standard of goodness. If AI is trained only on society’s consensus at a given time, it risks freezing injustice into code or amplifying shifts in morality without that higher reference point.

As the Scientific American essay “The Origins of Human Morality” explains, our ethical instincts largely arose from evolutionary interdependence: humans developed norms of fairness and reciprocity to survive in groups (Scientific American). These instincts are descriptive, but they don’t settle what is ultimately right or just.
Enterprise AI is accelerating, and at the center of nearly every platform is NVIDIA’s ecosystem. Its dominance comes from a full-stack approach: purpose-built GPUs, optimized software libraries like CUDA and cuDNN, and a broad set of frameworks and developer tools. This combination has made NVIDIA the standard foundation for enterprise-scale AI infrastructure.
Building on that foundation, Dell and HPE have partnered with NVIDIA to deliver validated, production-ready solutions. These platforms are not direct competitors in the traditional sense but rather different approaches to operationalizing AI at scale. The key question for enterprises is not which vendor is better, but which integration model, governance framework, and consumption strategy best aligns with their workloads and long-term goals.





