Building an AI-Ready Data Foundation: Flexibility, Trust, and Sovereignty in a Rapidly Changing Landscape
A Six Five Media Research Brief Inspired by Conversations at AWS re:Invent 2025
AI is advancing so quickly that traditional planning cycles are becoming obsolete. Six months no longer feels forward-looking—it feels outdated. New capabilities now arrive in weeks, not quarters. Boards want results now, and operational leaders are expected to scale AI initiatives at a pace unmatched by any prior technology wave.
And behind that urgency lies an even harder truth: most organizations are still encumbered by legacy data challenges such as silos, inconsistent governance, regional compliance constraints, and platform lock-in that limits their ability to adapt. Put simply, CIOs are being asked to build the future on foundations that were never designed for this pace of change.
Industry data reinforces this reality. A Moor Insights & Strategy white paper cites a startling statistic: 83% to 92% of new AI projects fail, often due to poor data quality or insufficient data preparation—problems that trace back to underlying data architecture. Meanwhile, Futurum’s recent survey on enterprise AI reports that 29% of data organizations now cite “building AI capabilities” as their single most important objective, yet 20% identify data quality, trust, and governance as their top source of dissatisfaction with their current data stack. AI ambition is rising, but foundational readiness still lags.
Informed by discussions with Qlik’s leadership at AWS re:Invent 2025, and grounded in the latest industry data, this brief details the architectural principles and operational strategies required to keep pace with AI’s accelerating trajectory.
I. Vendor Lock-In: The Silent Threat to AI Agility
As AI evolves, CIOs increasingly fear committing to a data strategy that becomes obsolete within months. With innovation cycles compressing, the risk isn’t just inefficiency; it’s being unable to adopt transformative capabilities because the underlying data infrastructure cannot flex.
This concern is amplified by a global regulatory landscape that grows more complex each quarter:
- Data residency laws now span nearly every continent.
- Use-based restrictions dictate how data can participate in AI inference or training.
- Sector-specific requirements add additional layers of compliance.
This environment forces CIOs to navigate a double bind: moving fast enough to deliver AI impact while designing systems capable of absorbing constant change. Locking into a single vendor or architectural pattern is no longer viable.
Organizations need choice: the ability to run different workloads across multiple environments and direct data to whichever models, engines, or platforms best serve the moment. Qlik’s data platform strategy mirrors this requirement, supporting a substrate-based architecture that allows organizations to reuse data flexibly across varied targets without re-engineering pipelines. As AI creates ever more diverse workloads, this level of adaptability becomes essential.
II. Data Quality: The Defining Variable in AI Success
The industry’s enthusiasm for AI often masks a harsh reality: the majority of AI failures stem not from model performance but from poor data foundations.
Consider two data points:
- Only 29% of technology leaders say their enterprise data meets quality, accessibility, and security requirements to scale generative AI.
- Nearly 90% of chief compliance officers now oversee data governance, model ethics, and AI risk management—roles that barely existed three years ago.
AI amplifies the consequences of unreliable data. For this reason, Qlik’s leadership stresses the importance of trusted data products—curated, governed, auditable datasets designed specifically for AI consumption. These products incorporate:
- Full lineage and provenance
- Quality and completeness checks
- Access controls linked to roles, applications, and agents
- Contextual metadata that ensures AI systems understand not just what the data is, but why it matters
This goes well beyond semantic layers. It establishes a trusted intelligence layer that supports explainability, evaluation, and governance—capabilities that AI agents increasingly require.
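To make the idea concrete, the sketch below models a trusted data product as a lightweight contract in Python. The names (DataProductContract, customer_churn_features, retention_agent) are illustrative assumptions rather than any Qlik API; the point is that lineage, quality checks, access controls, and contextual metadata travel with the dataset instead of living in tribal knowledge.

```python
from dataclasses import dataclass, field
from typing import Callable

import pandas as pd


@dataclass
class DataProductContract:
    """Minimal contract for a trusted data product (illustrative only)."""
    name: str
    owner: str
    source_lineage: list[str]            # upstream systems and pipelines
    allowed_consumers: set[str]          # roles, applications, or agent IDs
    description: str                     # contextual metadata: why the data matters
    quality_checks: list[Callable[[pd.DataFrame], bool]] = field(default_factory=list)

    def validate(self, df: pd.DataFrame) -> bool:
        """Run every registered quality and completeness check against a dataframe."""
        return all(check(df) for check in self.quality_checks)

    def authorize(self, consumer_id: str) -> bool:
        """Access control: only named roles, applications, or agents may consume."""
        return consumer_id in self.allowed_consumers


# Example: a churn-features product consumed by a single retention agent.
churn_product = DataProductContract(
    name="customer_churn_features",
    owner="data-platform-team",
    source_lineage=["crm.accounts", "billing.invoices"],
    allowed_consumers={"retention_agent"},
    description="Monthly churn features used to prioritize retention outreach.",
    quality_checks=[
        lambda df: df["customer_id"].notna().all(),        # completeness
        lambda df: df["churn_score"].between(0, 1).all(),  # validity
    ],
)
```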
III. Agentic AI: From Monoliths to Microservices
AI is undergoing its own architectural evolution. Early enterprise deployments focused on large, centralized generative models. But the next phase of agentic AI will move toward smaller, specialized agents that operate within tightly defined contexts.
Qlik’s perspective reflects this shift:
- Agents will increasingly resemble microservices—lightweight, context-bound, and highly task-specific.
- Each agent will map to a precise data product, ensuring it has the right information (and only the right information) to perform its function.
- This bounded approach improves performance, simplifies evaluations, and hardens governance.
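A minimal sketch of that bounded pattern, using hypothetical names throughout (BoundedAgent, invoice_matcher, finance.invoice_po_matches): the agent is constructed with exactly one data product and refuses to read anything else, which keeps its behavior easy to evaluate and its data access easy to audit.

```python
from dataclasses import dataclass


@dataclass
class BoundedAgent:
    """An agent scoped to one task and one data product (illustrative only)."""
    agent_id: str
    task: str
    data_product: str  # the single trusted data product this agent may read

    def fetch_context(self, product: str, query: str) -> str:
        # Governance boundary: the agent only sees its mapped data product.
        if product != self.data_product:
            raise PermissionError(f"{self.agent_id} is not authorized to read {product}")
        # A real implementation would call the platform's governed data API here.
        return f"rows from {product} matching {query!r}"


# One agent, one data product, one narrowly defined job.
invoice_agent = BoundedAgent(
    agent_id="invoice_matcher",
    task="match supplier invoices to purchase orders",
    data_product="finance.invoice_po_matches",
)

print(invoice_agent.fetch_context("finance.invoice_po_matches", "status = 'unmatched'"))
```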
This shift also introduces new risks. Anyone can sign up for an AI agent tool with a credit card. CIOs must therefore return to a role many have not held since the rise of SaaS: balancing innovation with governance, ensuring employees experiment safely without fracturing compliance posture.
An agent-driven world demands a data platform that supports rapid onboarding of new models and pipelines while maintaining consistent controls across all of them.
IV. The Open Lakehouse: A Response to AI’s Need for Flexibility
The open lakehouse has emerged as a leading architectural answer to AI-era requirements. According to a 2024 industry survey:
- 86% of organizations plan to unify their data estate.
- 70% expect data lakehouses to power at least half of analytics workloads within three years.
This shift is fueled by the need to decouple data storage from processing engines. Technologies such as Apache Iceberg have become core to this movement, offering:
- ACID transactions
- Schema evolution
- Broad interoperability
- Historical snapshot queries
- Near-infinite scalability on cloud object storage
Within this model, data lives in one place, typically in low-cost object storage such as Amazon S3, and multiple engines can access it through shared table formats. Qlik’s substrate approach extends this model by making data reusable across diverse engines, applications, and AI targets without duplication or heavy re-engineering.
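As a rough illustration of what this looks like in practice, the PyIceberg sketch below reads a single Iceberg table held in object storage, filters it at scan time, and then re-reads it as of an earlier snapshot. The catalog endpoint, bucket, and table names are assumptions for the example; any engine that understands the Iceberg format could query the same underlying files.

```python
# Minimal PyIceberg sketch: one copy of data in object storage, readable by any
# engine that speaks the Iceberg table format. Endpoints and names are illustrative.
from pyiceberg.catalog import load_catalog

catalog = load_catalog(
    "analytics",
    **{
        "uri": "https://iceberg-catalog.example.com",  # hypothetical REST catalog
        "warehouse": "s3://acme-lakehouse/warehouse",   # hypothetical bucket
    },
)

orders = catalog.load_table("sales.orders")

# Current view of the table, filtered and projected at scan time.
recent = orders.scan(
    row_filter="order_date >= '2025-01-01'",
    selected_fields=("order_id", "region", "amount"),
).to_pandas()

# Historical snapshot query ("time travel"): read the table as it existed earlier,
# for example to reproduce exactly what a model was trained on.
snapshots = orders.snapshots()  # assumes the table already has committed snapshots
as_of_training = orders.scan(snapshot_id=snapshots[0].snapshot_id).to_pandas()
```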
This is how enterprises keep pace with AI’s speed: architect for change by default.
V. Increasing Regulatory Pressure: Sovereignty as a Strategic Imperative
Data sovereignty has moved from a compliance topic to a geopolitical priority.
The EU AI Act, adopted in 2024, established the world’s first comprehensive framework for AI governance, with requirements phasing in through 2027. Yet Europe is only one piece of a worldwide trend. From Asia-Pacific to South America to the Middle East, governments are enacting or accelerating data residency mandates.
The message is clear:
Organizations will not be given a choice about where data lives. Regulations will dictate it.
For global enterprises, this means:
- Establishing region-specific deployments
- Maintaining jurisdictional segmentation
- Implementing encryption, monitoring, and BYOK policies
- Ensuring auditability across all environments
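What those requirements translate to on AWS can be sketched with boto3: pin a bucket to a single region and enforce default encryption with a customer-managed KMS key. The bucket name, region, and key settings below are illustrative assumptions, and a production deployment would layer bucket policies, monitoring, and audit logging on top.

```python
# Minimal boto3 sketch of sovereignty controls on AWS: region-pinned storage plus
# default encryption with a customer-managed KMS key (BYOK-style). Names are illustrative.
import boto3

REGION = "eu-central-1"  # jurisdiction the data must not leave
s3 = boto3.client("s3", region_name=REGION)
kms = boto3.client("kms", region_name=REGION)

# Customer-managed key; importing external key material (Origin="EXTERNAL") makes this true BYOK.
key = kms.create_key(Description="EU data-product key", Origin="AWS_KMS")
key_id = key["KeyMetadata"]["KeyId"]

# Region-pinned bucket for EU-resident data products.
s3.create_bucket(
    Bucket="acme-eu-data-products",
    CreateBucketConfiguration={"LocationConstraint": REGION},
)

# Default encryption: every object written is encrypted with the customer-managed key.
s3.put_bucket_encryption(
    Bucket="acme-eu-data-products",
    ServerSideEncryptionConfiguration={
        "Rules": [{
            "ApplyServerSideEncryptionByDefault": {
                "SSEAlgorithm": "aws:kms",
                "KMSMasterKeyID": key_id,
            }
        }]
    },
)
```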
Beyond compliance, data has become a national strategic asset, or “the new platinum,” as Qlik’s Brendan Grady puts it. Sovereign cloud participation and multi-region capabilities are no longer differentiators. They’re requirements.
VI. Scaling AI Efficiently: The Cost Flexibility Mandate
Running AI is expensive, and costs continue to rise. Global AI spending hit $154 billion in 2024, and enterprise infrastructure costs are rising at 35% annually.
But CIOs face a contradictory mandate: increase margins and accelerate AI adoption.
Open architectures offer a path through this tension:
- Store data in cost-effective object storage
- Spin up compute engines only when needed
- Move workloads to the most cost-effective environment without replatforming
- Meter compute to align with SLAs instead of over-provisioning
The balance is delicate. Under-invest in data, and AI agents underperform. Overspend on compute or pipelines, and budgets evaporate.
The enterprises that succeed will adopt continuous optimization strategies such as monitoring data movement, minimizing redundant workloads, and tuning AI pipelines to the needs of each model and use case.
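A back-of-the-envelope sketch of the first two levers: the same hourly compute rate produces very different monthly bills depending on whether a cluster idles around the clock or is spun up only for the hours pipelines and inference actually run. All figures are illustrative assumptions, not benchmarks.

```python
# Back-of-the-envelope comparison: always-on compute vs. compute spun up on demand.
# Rates and hours are illustrative assumptions only.
HOURLY_RATE = 6.00        # $/hour for a mid-size compute cluster
HOURS_PER_MONTH = 730

always_on_cost = HOURLY_RATE * HOURS_PER_MONTH  # cluster idles between jobs

# On-demand: pay only for the hours pipelines and AI workloads actually run.
active_hours = 4 * 30     # e.g., roughly 4 hours of pipeline/inference work per day
on_demand_cost = HOURLY_RATE * active_hours

savings = always_on_cost - on_demand_cost
print(f"Always-on: ${always_on_cost:,.0f}/month")
print(f"On-demand: ${on_demand_cost:,.0f}/month")
print(f"Savings:   ${savings:,.0f}/month ({savings / always_on_cost:.0%})")
```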
VII. The Road Ahead: Designing for a Future That Will Change Again
The pace of AI innovation is accelerating, not stabilizing. New models, agentic patterns, evaluation frameworks, and governance requirements will continue to emerge—often faster than organizations can adapt.
Enterprises that thrive in this environment will share three characteristics:
- Open, flexible architectures: built on open table formats and reusable data substrates that avoid platform lock-in.
- Trusted, governed data foundations: enabling both human oversight and AI explainability.
- Dynamic, multi-model AI ecosystems: allowing rapid onboarding of new capabilities without re-engineering core data pipelines.
This is the new operating model for data in the AI era: not rigid, not centralized, but flexible, federated, and built for continuous change. Check out Qlik to learn more about their secure data integration solutions and AI-powered insights.
Related Content
Open by Design: Building Scalable AI-Ready Data Architectures Without Lock-In - Six Five On The Road
Sam Pierson and Brendan Grady of Qlik join Nick Patience to discuss how technology leaders can scale AI while building flexible, trusted data foundations that safeguard against vendor lock-in. Explore strategies for interoperability, open standards, and regulatory agility.
