Quantum Safe by 2029: Preparing for the Quantum Era
Please see below for the AI-assisted summary of this virtual salon. View the whole salon here.
Bill Deckelman, Chief Legal Officer, Andersen Tax; Co-Chair, AI Collaborative
Mark Hughes, Senior Leader, Global Managing Partner of Cybersecurity Services, IBM; Chair, AI Collaborative Advisory Board
Dinesh Nagarajan, Global Partner – Cybersecurity, IBM Consulting
Section I — Why “Quantum Safe” Is a Near-Term Business Risk
Key Findings
Quantum is moving from theoretical to practical
- IBM framed viable quantum capability as emerging around 2029–2030 (initially limited access, but meaningful capability).
- The urgency is not “when quantum arrives,” but whether organizations will be ready when it does.
The core cyber issue: asymmetric cryptography becomes breakable
- Mark highlighted that a quantum-capable system running Shor’s algorithm (devised by Peter Shor in 1994; misrendered as “Peter Shaw” in the transcript) could break the asymmetric cryptography underpinning:
- secure web sessions (TLS/HTTPS)
- VPNs and certificates
- public-key encryption and key exchange
- digital signatures and non-repudiation
Harvest-now, decrypt-later is already happening
- Threat actors are already stealing encrypted data today with the expectation they can decrypt it later once quantum capability matures—especially valuable data with long shelf-life (e.g., insurance, financial, identity-linked records).
Section II — What Changes and What Doesn’t: The Crypto Landscape
Key Themes
Two buckets of crypto
- Asymmetric cryptography (public/private key pairs: RSA, ECC) → most at risk from quantum
- Symmetric cryptography (e.g., AES-256 for data-at-rest) → generally viewed as more resilient, though not the central quantum-break concern raised here
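To make the two buckets concrete, here is a minimal Python sketch using the widely available `cryptography` package (the example is ours, not from the salon): the RSA key pair below is exactly the kind of asymmetric primitive Shor’s algorithm could break, while AES-256-GCM illustrates the symmetric bucket generally viewed as more resilient.

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Asymmetric bucket: RSA signatures -- breakable by Shor's algorithm
# on a sufficiently large quantum computer.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
signature = private_key.sign(
    b"contract digest",
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)

# Symmetric bucket: AES-256-GCM for data at rest -- only modestly weakened
# by known quantum attacks (Grover), hence "generally more resilient".
key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
ciphertext = AESGCM(key).encrypt(nonce, b"customer records", None)
```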
Pervasiveness is the real problem
- Dinesh emphasized organizations underestimate how deeply asymmetric crypto is embedded:
- browsers, e-commerce, banking
- certificates and identity systems
- OT/IoT and “smart” environments (e.g., meters, connected devices)
- enterprise workflows relying on digital signatures
It’s not a simple swap
- Post-quantum cryptography (PQC) algorithms behave differently:
- performance and CPU “tax” can change
- data formats and signature sizes can change (affecting storage and downstream systems)
- interoperability requires both ends (server + client + supply chain) to migrate
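To see why formats and storage are affected, compare approximate artifact sizes for today’s algorithms against NIST’s post-quantum replacements. The byte counts below are our own figures drawn from the published parameter sets (FIPS 203/204) and typical classical encodings, not numbers quoted in the salon; treat them as indicative.

```python
# Approximate sizes in bytes (indicative only -- verify against the standards).
SIZES = {
    # scheme         (public key, signature or ciphertext)
    "RSA-2048":      (270, 256),    # DER public key; PKCS#1 signature
    "ECDSA P-256":   (65, 72),      # uncompressed point; max DER signature
    "ML-DSA-65":     (1952, 3309),  # FIPS 204 post-quantum signature scheme
    "ML-KEM-768":    (1184, 1088),  # FIPS 203 post-quantum KEM (ciphertext)
}

for scheme, (pk, out) in SIZES.items():
    print(f"{scheme:12}  public key {pk:>5} B   signature/ciphertext {out:>5} B")
# A database field sized for 256-byte RSA signatures cannot hold a 3,309-byte
# ML-DSA signature -- schemas, protocols, and storage all feel the change.
```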
Section III — Migration Reality: Complexity, Cost, and Readiness
Key Findings
Step 1 is visibility: build a cryptographic inventory
- Organizations must map where crypto is used (apps, infra, networks, pipelines) before planning migration.
- The speakers described this as creating a “crypto bill of materials” (CBOM).
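As a sketch of what a first discovery pass over source code might look like (real CBOM tooling, such as the CycloneDX cryptographic BOM work, also inspects binaries, TLS endpoints, and certificate stores; all names and patterns below are illustrative):

```python
import json
import pathlib
import re

# Toy patterns for spotting crypto API usage in Python source; a real scanner
# would cover many languages, config files, and compiled artifacts.
CRYPTO_PATTERNS = {
    "RSA": re.compile(r"\brsa\.|\bRSA\b"),
    "ECC": re.compile(r"\bec\.|ECDSA|SECP256R1"),
    "AES": re.compile(r"\bAESGCM\b|\bAES\b"),
}

def build_cbom(root: str) -> list[dict]:
    """Walk a source tree and record which files touch which algorithms."""
    findings = []
    for path in pathlib.Path(root).rglob("*.py"):
        text = path.read_text(errors="ignore")
        for algorithm, pattern in CRYPTO_PATTERNS.items():
            if pattern.search(text):
                findings.append({"asset": str(path), "algorithm": algorithm})
    return findings

if __name__ == "__main__":
    print(json.dumps(build_cbom("src"), indent=2))
```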
Cost drivers
- Dinesh described cost in layers:
- discovery/inventory + prioritization
- engineering changes (apps + infrastructure + network + DevOps pipelines)
- performance testing/regression testing (critical because failures can halt operations)
- potential platform replacement (for legacy vendors without PQC roadmaps)
- ongoing compliance monitoring (preventing reintroduction of weak algorithms)
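The last layer lends itself to automation: a compliance gate that fails a build or deployment when a deprecated algorithm reappears. A minimal sketch (the deny list contents are hypothetical):

```python
# Hypothetical deny list -- in practice this would come from policy/standards.
DENY_LIST = {"RSA-1024", "SHA-1", "3DES", "MD5"}

def enforce_crypto_policy(declared_algorithms: set[str]) -> None:
    """Fail fast if any banned algorithm is reintroduced into a config."""
    violations = declared_algorithms & DENY_LIST
    if violations:
        raise SystemExit(f"blocked: weak algorithms reintroduced: {sorted(violations)}")

enforce_crypto_policy({"ML-KEM-768", "AES-256-GCM"})  # passes silently
```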
Concrete cost signal (large enterprises)
- Mark provided an illustrative benchmark: discovery alone for a large financial services organization can be “well into the millions,” citing ~$5M as a realistic discovery budget for some large institutions.
- Implementation costs were described as highly variable but also expected to be in the millions, driven largely by rollout + testing + supply-chain coordination.
Timing affects cost
- The later organizations begin, the more expensive and disruptive it becomes.
- Best window: embed PQC work into existing transformation programs (cloud migrations, platform refreshes) rather than treating it as a last-minute patch.
Section IV — How to Execute: Tooling, Prioritization, and “Crypto Agility”
IBM approach highlighted
Discovery + prioritization tools
- IBM described tooling to:
- discover cryptography usage and build inventories
- generate a “heat map” to prioritize migration based on exposure and data criticality
- model ecosystem dependencies so organizations migrate clusters (e.g., mail server + backups + connected components) rather than isolated systems
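IBM did not expose the tooling’s internals in the salon, but a toy version of exposure-and-criticality scoring with dependency clusters might look like the following (all fields and weights are our assumptions):

```python
from dataclasses import dataclass, field

@dataclass
class System:
    name: str
    shelf_life_years: int        # how long stolen ciphertext stays valuable
    internet_facing: bool        # exposure to harvest-now, decrypt-later
    depends_on: list[str] = field(default_factory=list)

def risk_score(s: System) -> float:
    # Assumed weighting: long-lived data plus external exposure ranks highest.
    return s.shelf_life_years * (2.0 if s.internet_facing else 1.0)

inventory = [
    System("mail-server", 7, True, depends_on=["backup-store"]),
    System("backup-store", 10, False),
    System("marketing-site", 1, True),
]

# "Heat map": highest score first; each entry drags its cluster along so the
# mail server and its backups migrate together, not in isolation.
for s in sorted(inventory, key=risk_score, reverse=True):
    print(f"{risk_score(s):5.1f}  {[s.name, *s.depends_on]}")
```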
Crypto agility (architectural strategy)
- Dinesh defined crypto agility as decoupling cryptography from application logic by introducing a modular layer (often via APIs/services):
- applications request encryption/signing via a stable interface
- algorithms/keys can be swapped underneath without rewriting the application
- Rationale: not only quantum—future breakthroughs (including AI-driven attacks) may force further crypto evolution.
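A minimal sketch of that decoupling (interface and class names are illustrative, not a standard API): the application signs through a stable interface, and the concrete algorithm is selected by configuration, so moving to a PQC scheme never touches application code.

```python
from abc import ABC, abstractmethod

class Signer(ABC):
    """Stable interface the application codes against."""
    @abstractmethod
    def sign(self, data: bytes) -> bytes: ...

class Rsa2048Signer(Signer):
    def sign(self, data: bytes) -> bytes:
        # Placeholder: a real implementation would call an RSA library.
        return b"rsa2048:" + data

class MlDsa65Signer(Signer):
    def sign(self, data: bytes) -> bytes:
        # Placeholder: a real implementation would call a PQC library.
        return b"mldsa65:" + data

# Swapping algorithms is a one-line configuration change, not a rewrite.
ACTIVE_SIGNER: Signer = Rsa2048Signer()

def sign_document(doc: bytes) -> bytes:
    return ACTIVE_SIGNER.sign(doc)   # application never names an algorithm

print(sign_document(b"quarterly-report"))
```

Because the interface stays fixed, the same pattern also absorbs the future algorithm churn noted in the rationale above.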
Section V — Governance, Regulation, and Threat Model
Key Findings
This is already a board-level issue in some sectors
- Dinesh stated it is already board-level in financial services and regulated environments, and expects broader urgency within 12–18 months.
Government recognition is accelerating mandates
- National governments and regulators are moving toward requiring post-quantum cryptography—especially for critical infrastructure and sensitive data domains.
Threat access isn’t just criminals
- While quantum compute is expensive and initially scarce, speakers warned:
- access barriers fall quickly over time
- “quantum-as-a-service” models could broaden availability
- nation-state actors are likely to be early quantum-capability adopters and could weaponize it against cryptography
Section VI — Implications
- Organizations must start now because PQC migration is a multi-year program, not a patch.
- The main risk is not only “quantum breaks encryption,” but also operational disruption if crypto handshakes fail during poorly managed migrations.
- A winning strategy combines:
- cryptography discovery + CBOM
- prioritized migration roadmap based on data longevity and dependency clusters
- crypto agility architecture to future-proof ongoing change
- continuous monitoring and compliance assurance
“It’s not really a question of when quantum computing is coming—it’s whether we’ll be ready when it does.”
Poking at the AI Bubble: Analyzing the Financing of AI Infrastructure
Please see below for the AI-assisted summary of this virtual salon. View the whole salon here.
Tara Murphy Dougherty, Vice President, Communications & Public Affairs, Special Competitive Studies Project (SCSP), Moderator
David Lin, Senior Advisor, Future Technology Platforms, SCSP
Vince Jesaitis, Head of Global Government Affairs, ARM
Section I — Edge Computing, AI Infrastructure, and Energy Security
Context
The salon centered on insights from the white paper “Smarter at the Edge: How Edge Computing Can Advance U.S. AI Leadership and Energy Security”, which argues that the next phase of AI competitiveness will be defined by deployment efficiency, resilience, and energy sustainability, not solely by frontier model scale.
Key Findings
AI infrastructure strain
- U.S. data centers currently consume ~4% of domestic electricity, with projections rising as high as 25% by 2030 under current trends.
- Centralized, cloud-heavy AI architectures risk becoming a strategic bottleneck in the U.S.–China AI competition.
Edge AI as a strategic offset
- Edge computing can reduce energy consumption by up to 60% for equivalent inference workloads by using specialized, localized hardware.
- Rather than replacing cloud infrastructure, edge computing complements centralized data centers, improving resilience and continuity of operations.
Shift from training to inference
- The AI race is transitioning from model training to AI adoption, inference, and deployment at scale.
- By 2030, an estimated 70% of AI inference is expected to occur on edge devices rather than in the cloud.
Performance advantages
- Lower latency: Critical for real-world systems such as manufacturing lines, autonomous vehicles, and robotics.
- Improved privacy and security: Data can be processed, stored, and erased locally, reducing exposure.
- Operational resilience: Localized compute reduces dependency on centralized infrastructure outages.
“The next wave of AI will not be defined by the largest models, but by the smartest and most efficient ones.”
Section II — Geopolitics, China, and the AI Deployment Race
Strategic Competition
U.S. vs. China approaches
- The U.S. has led in frontier model innovation and AI infrastructure buildout, reinforced by the White House’s AI Action Plan and public–private partnerships.
- China’s “AI Plus” initiative signals a strategic shift toward diffusion and deployment—embedding AI across every sector of the economy.
China’s structural advantage
- A dominant global electronics manufacturing base allows China to rapidly integrate edge AI into commodity devices.
- Chinese strategy emphasizes deployment over original innovation, leveraging existing models and scaling them across industry and society.
Distributed computing architectures
- China is actively developing tiered, distributed AI systems, spanning centralized data centers to localized edge compute.
- This mirrors a broader industrial policy focus on resilience and scale rather than singular technological breakthroughs.
Section III — Security, Privacy, and Distributed AI Architectures
Edge AI vs. Centralized & Orbital Data Centers
Security benefits
- Reduced data transmission lowers exposure to cyber threats.
- Local inference minimizes reliance on vulnerable centralized or orbital systems.
Emerging risks
- Orbital and other space-based data centers introduce new attack surfaces, including cyber and kinetic threats.
- Cross-border R&D collaborations risk unintended intellectual property diffusion, particularly in sensitive AI domains.
Operational flexibility
- Distributed AI enables workloads to be routed based on:
- Security requirements
- Energy availability (e.g., renewable-powered regions)
- Latency sensitivity
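A toy routing sketch under those three criteria (fields, thresholds, and the preference order are our assumptions, not from the white paper):

```python
from dataclasses import dataclass

@dataclass
class Site:
    name: str
    latency_ms: float          # round-trip latency from the workload's source
    renewable_fraction: float  # share of power from renewables, 0..1
    local: bool                # True for edge sites keeping data on-premises

def route(sensitive: bool, max_latency_ms: float, sites: list[Site]) -> Site:
    # Security first: sensitive workloads stay on local edge hardware.
    feasible = [s for s in sites
                if s.latency_ms <= max_latency_ms and (s.local or not sensitive)]
    # Then prefer the greenest feasible power supply.
    return max(feasible, key=lambda s: s.renewable_fraction)

sites = [Site("factory-edge", 5.0, 0.4, True),
         Site("regional-cloud", 40.0, 0.9, False)]
print(route(sensitive=True, max_latency_ms=50.0, sites=sites).name)  # factory-edge
```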
Section IV — Policy and Industry Recommendations
Government Action
Reinvest in foundational R&D
- Renew commitment to CHIPS and Science and long-term basic research.
- Support Department of Energy initiatives such as EES2 (Energy Efficiency Scaling for Two Decades).
Regional innovation ecosystems
- Expand regional testbeds and sandboxes through:
- Economic Development Administration (EDA) tech hubs
- NSF Innovation Engines
- Focus on state- and local-level deployment, where edge AI delivers immediate impact.
Industry Responsibilities
Hardware–software co-design
- Move beyond siloed development cycles.
- Tailor hardware specifically for targeted AI workloads to improve efficiency without sacrificing performance.
Public–private collaboration
- Co-develop edge AI solutions with government partners, aligning technology design with real-world public-sector use cases.
Section V — Implications and Outlook
Key Takeaways
- Edge computing represents a national security, energy, and competitiveness imperative, not merely a technical optimization.
- The future AI stack will be hybrid by design, combining centralized training with distributed inference.
- Strategic advantage will come from efficient deployment at scale, not only from model size or compute concentration.
“This is not an either–or proposition. Cloud and edge will coexist—but edge computing provides a critical offset to the strains facing today’s AI infrastructure.”
New York to New Delhi: Public Opinion Polling and the Rise of AI in India
Please see below for the AI-assisted summary of this virtual salon. View the whole salon here.
Dritan Nesho, CEO, HarrisX, Presenting
Umesh Sachdev, Founder & CEO, Uniphore, and Abhishek Singh, Additional Secretary, Ministry of Electronics & IT, Government of India, interviewed by Mukesh Aghi
Section I — Public Opinion, Trust, and AI in Democracy
Key Findings
- Erosion of trust: One-third of respondents misidentified fake AI content; in some instances, over 50% believed fabricated material was genuine.
- AI misinformation at scale: Exposure to synthetic content caused a measurable decline in trust across all forms of media.
- Subtle deception: Humorous or personality-aligned deepfakes proved far more convincing than overt falsehoods.
- Regulatory consensus:
- 71% believe AI will place elections in “uncharted territory.”
- Over 70% support regulation to ban political deepfakes, mandate disclosure of AI use in campaigns, and hold platforms accountable.
Implications
Nesho warned that AI-generated misinformation could swing major elections and destabilize democracies. He called for:
- Corporate preparedness: Develop clear crisis response strategies rooted in authenticity and public trust.
- Technological guardrails: Explore blockchain-based watermarking to authenticate real content.
- Balanced regulation: Avoid overreach that stifles innovation but establish proactive industry self-governance.
“AI is here to stay, but the speed at which it erodes trust may outpace our ability to adapt,” Nesho cautioned. “Self-regulation and education will define whether AI strengthens or undermines democracy.”
Section II — U.S.–India Collaboration in AI Innovation
Umesh Sachdev, founder of Uniphore—a $2.5B AI company serving over 2,000 global enterprises—described India’s growing influence as both a talent hub and innovation partner in the global AI landscape. He emphasized that AI’s future lies in the complementary strengths of the U.S. and India.
Key Themes
- India’s AI Advantage: Supplies top-tier AI talent globally, including researchers at OpenAI and Google. Increasing domestic entrepreneurship in applied AI.
- Strategic Collaboration: U.S. leads in compute and foundational models; India contributes human capital and deployment scale. Shared R&D in AI and quantum computing could deepen this alliance.
- China’s Open-Source Strategy: Chinese models are “distilled” from U.S. systems and released openly, expanding global reach. Sachdev urged the U.S. to keep allies “close and empowered” to counterbalance China’s influence.
- AI and Growth: Most CEOs view AI as a growth accelerator, not a cost-reduction tool. Productivity gains could push U.S. GDP growth above 5% and India’s beyond 15% within a decade.
- Trust as Strategy: Building responsible, transparent AI systems is not only ethical—it’s good business.
“AI won’t eliminate jobs—it will create an age of abundance,” said Sachdev.
“The next wave of trillion-dollar value creation will come from AI security and trust.”
Section III — India’s National AI Mission and the 2026 Global AI Impact Summit
Abhishek Singh outlined India’s comprehensive National AI Mission, a framework designed to build inclusive, ethical, and globally competitive AI capacity. The mission’s seven pillars are reshaping India’s position as a leader in applied AI.
Seven Pillars of India’s AI Mission
- Compute Infrastructure: Affordable access to GPU capacity (40,000 GPUs currently deployed).
- Data Ecosystem: AI Kosh — a national platform with 3,000 datasets and 240 AI tools.
- Indian Foundation Models: Development of multilingual, culturally aligned large language models.
- AI for Public Good: Deployments in healthcare, education, agriculture, and climate adaptation.
- Talent & Training: 570 new data labs and national AI fellowships across disciplines.
- Startup Acceleration: Funding and mentorship through global incubators and venture programs.
- Responsible AI: Creation of India’s AI Safety Institute for standards in transparency, bias mitigation, and deepfake detection.
AI Impact Summit 2026 (New Delhi)
The India AI Impact Summit—scheduled for February 19–20, 2026—will be the first major AI summit hosted in the Global South.
Its central theme, “AI for People, Planet, and Progress,” will focus on:
- People: Future of work, inclusion, accessibility, and digital literacy.
- Planet: Sustainable AI development and climate resilience.
- Progress: Economic growth, innovation, and AI for social good.
Complementary events will include global innovation challenges, youth and women’s AI competitions, research symposia, and a large-scale AI Expo.
India’s Collaborative Vision
Singh emphasized that India seeks to democratize AI, not dominate it—serving as a partner to the Global South and a bridge to the West.
Voice-enabled, multilingual AI services aim to bring 500 million non-digital citizens into the digital economy, transforming access to healthcare, finance, and education.
“AI must be a democratizing force,” Singh stated.
“When paired with inclusivity, trust, and innovation, it becomes a tool for shared prosperity.”
The Global State of Play in Artificial Intelligence: Risks and Opportunities
Please see below for the AI-assisted summary of our first virtual salon. View the whole salon here.
Ylli Bajraktari interviewed by Tara Rigler, VP of Communications and Public Affairs, Special Competitive Studies Project (SCSP)
Co-founders Scott Campbell and Bill Deckelman introduced the AI Collaborative, a new global association for business executives, policy experts, and technologists designed to monitor and assess AI technology. They emphasized AI’s unprecedented transformative power compared to previous technologies. The AI Collaborative aims to provide a fact-based understanding of AI’s risks and opportunities, fostering global relationships and delivering practical insights to leaders navigating today’s complex and uncertain world.
Ylli Bajraktari, President and CEO of the Special Competitive Studies Project (SCSP), shared his journey into AI, influenced by former Deputy Secretary of Defense Bob Work, who launched early AI initiatives in the Pentagon. Bajraktari discussed the National Security Commission on Artificial Intelligence (NSCAI), created by Congress in 2018. This commission was unique as it was forward-looking, anticipating coming issues rather than investigating past failures. A key driver for its creation was the realization of China’s true intentions to become the global AI leader by 2030, marking it as a significant competitor. The NSCAI, a public-private effort including leaders like Eric Schmidt and Andy Jassy, aimed to organize the U.S. government for the AI age.
The NSCAI’s 759-page report had a profound impact, leading to 111 pieces of legislation. Its key recommendations focused on:
- Government Organization: Creating dedicated AI offices and White House leadership.
- Hardware: Advocating for hardware investment, which underpinned the CHIPS Act.
- Ecosystem: Revitalizing the U.S. ecosystem through collaboration between local, private, academic, and government sectors.
- Talent Development: Addressing the gap in computer science graduates, retaining foreign talent, and attracting skilled individuals to government service.
Bajraktari noted that AI models have grown significantly more powerful roughly every six months, with successive releases mitigating issues like “hallucinations”. He stressed that the current decade will determine who dominates the rest of the century in AI, emphasizing the need for the U.S. to win this technological competition. He highlighted China’s advantages, including greater numbers of computer scientists, more government investment in its private sector, and unmatched scale in population and data. China has also demonstrated its ability to get ahead in critical technologies like 5G, now dominating the global market.
Globally, Bajraktari advocated for new alliances beyond traditional ones, drawing parallels to the Cold War era’s infrastructure building. He highlighted the importance of Middle Eastern partners for energy and digital infrastructure, and the critical roles of Japan, South Korea, and particularly Taiwan (due to TSMC and its democratic values). While acknowledging Europe’s capacities, he expressed concern that their focus on regulation might hinder their competitiveness in the new AI era. India was identified as an emerging significant AI player, demonstrating immense opportunities, scale, and innovative applications, poised to be a major technological force.
Regarding risks, Bajraktari’s primary concern is China’s potential dominance of the global digital ecosystem through its AI platforms, akin to the spread of TikTok and WeChat. Other risks include AI’s use in cyber operations (macro-targeting at micro-levels) and bio risks (assisting in developing bio-weapons). Opportunities for AI include transforming education through personalized tutors, as China is already implementing AI curricula from elementary school. He also stressed the need to rethink the future of the workforce for advanced manufacturing and robotics, where China is making massive investments.
For immediate action, Bajraktari urged the U.S. to create a positive narrative about AI to drive broader public adoption, noting that China currently holds a more positive public sentiment towards AI. Looking to 2035, he hoped that “winning” the AI competition for democracies doesn’t mean a zero-sum outcome, but rather a healthy competition where the U.S. and its allies lead in building the future digital infrastructure and reach Artificial General Intelligence (AGI) before China does. He also mentioned the importance of quantum computing as another critical technology where China’s lead would be detrimental due to its interlinked nature with other technologies and its impact on encryption. He concluded by emphasizing the vital role and underfunding of National Labs in basic R&D, contrary to China’s increased investments.
David Sanger interviewed by Thom Shanker, Director, Project for Media and National Security
The second interview featured Thom Shanker and David Sanger, a New York Times national security reporter and author of “New Cold Wars.” Sanger began by explaining his book’s central thesis: the United States fundamentally misunderstood where China and Russia were headed, wrongly assuming they would integrate with Western systems. He cited the surprising U.S. dependence on Taiwan Semiconductor (TSMC), which produces 90-95% of the most advanced chips, a vulnerability that arose from business decisions rather than national security policy.
Sanger discussed the Biden administration’s policy to counter this, focusing on limiting advanced semiconductor shipments to China to “buy time” and reshoring production through the CHIPS and Science Act. While the Act prompted significant private investment, Sanger noted concerns that by 2030, U.S. production might only cover 20% of domestic consumption, leaving the remaining 80% vulnerable to a potential Chinese move against Taiwan.
Regarding the Trump administration’s AI policy, Sanger stated that President Trump has said little on the topic. However, JD Vance, a key figure, advocated for unleashing American industry and downplaying “hand wringing about safety,” criticizing European regulation as a barrier to catching up. Vance’s team is largely uninterested in “guardrails” for AI, a contrast to the Biden administration’s AI Safety Institute that evaluated large language models (LLMs) for harmful capabilities. Interestingly, the industry itself has shifted from requesting regulation for liability protection to desiring the removal of “all shackles” under the Trump administration’s stance.
Sanger outlined the greatest realistic risks of AI in national security:
- The unknown potential of AI and its application to autonomous weapons.
- The risk of AI providing bad information for nuclear launch decisions, even if humans are in the loop.
- China’s potential use of AI to track silent submarines, undermining second-strike capabilities.
- AI’s role in disinformation and misinformation, including deepfakes, targeting vulnerable groups.
- The severe risk of intelligence failure in detecting adversary AI advancements, citing past failures like missing Chinese infiltration of U.S. telecom firms.
On the opportunities side, AI can significantly enhance cyber defense by detecting anomalous activity faster. He pointed out that U.S. Patriot air defense batteries in Ukraine already use AI for rapid trajectory calculations. Sanger discussed the Pentagon’s shift from “man in the loop” to “man on the loop” for military autonomy, acknowledging that the speed of future conflicts might make human intervention impractical.
Sanger noted that while his book “The Perfect Weapon” on cyber security remains relevant, a new book would be needed to address AI’s national security challenges. He expressed concern over the dismantling of misinformation/disinformation sections within CISA and other agencies by the Trump administration, especially given AI’s role in crafting targeted strikes. He also discussed the threat of quantum computing to encryption, potentially leading to a “post-quantum future,” but noted that China also needs a secure financial system, and the U.S. is working on “quantum-proof” encryption.
Regarding AI’s impact on journalism, Sanger explained that AI companies, initially secretive, now engage more with reporters due to their awareness of public policy implications. He noted that while AI is useful for research and comparisons (e.g., for sports reporters), it is not good at creative thinking or writing; The New York Times bans its use for writing purposes.
Finally, Sanger delved into the “New Cold Wars”: one with China and a “hot and cold” one with Russia. He highlighted the growing convergence of Russia and China, evidenced by over 70 meetings between Putin and Xi, their mutual goal to frustrate the U.S., and Russia’s increased dependence on China for technology. He doubted the “reverse Kissinger” strategy (using rapprochement with Russia to split it from China) would work, as Russia’s dependence on China for technology is unlikely to change. He also touched on the energy demands of data centers, suggesting that many would need to be outside the U.S. due to grid limitations, contradicting President Trump’s desire to move manufacturing back to the U.S. He concluded by stressing the growing threat of non-state actors using advanced AI capabilities for cyber operations like ransomware, emphasizing the critical need for guardrails despite political reluctance. Sanger asserted that superpowers cannot choose between safety and speed in AI development, as both are crucial, drawing a parallel to nuclear weapons development. He also pointed out that unlike nuclear deterrence, cyber deterrence has failed, a warning for the AI era.
Chatham House Report Update by Joyce Hakmeh and Katja Bego
Joyce Hakmeh and Katja Bego from Chatham House provided an update on their upcoming report, “Navigating AI Governance and Security: Global Perspectives”. As the research partner and global policy advisor for the AI Collaborative, Chatham House aims to provide a holistic understanding of the overwhelming AI landscape for senior decision-makers, particularly C-Suite executives. The report will be a dynamic “map” for navigating global markets, jurisdictions, and competing national security regimes in AI.
The report will have two main components:
- Retrospective Analysis: Examining major events from the past year that have shaped AI in terms of security, technology, and geopolitics.
- Forward-Looking Projections: Anticipating future trends, distinguishing hype from reality, identifying risks, and exploring opportunities for international collaboration. It will also map emerging AI leadership beyond the U.S. and China, focusing on “middle powers” such as the Gulf States, South Korea, and India, which have impressive initiatives.
Katja Bego elaborated on the report’s three-part structure:
- Holistic View: This section will analyze three main trends shaping the AI environment:
- Strategic Competition and National Security Focus: The competition over AI has taken on a clear national security dimension, with both the U.S. and China framing their AI efforts as crucial for military supremacy and national power. Other countries are also developing their own AI strategies through a lens of resilience and national security.
- Increased Militarization and Securitization of AI Development: Traditionally commercial technologies are increasingly being deployed for dual-use purposes in the national security domain. This dynamic, exemplified by AI use in conflicts like Ukraine and Gaza, is expected to flow from the national security sector into the broader commercial realm.
- Global Fragmentation: This involves a bifurcation in the AI space across three dimensions: technical fragmentation (different methodologies leading to incompatible systems), regulatory fragmentation (countries adopting increasingly divergent regulatory approaches), and economic fragmentation (export controls, protective industrial policies, and the bifurcation of AI supply chains leading to distinct technological blocks).
- Regional Deep Dives: This part will zoom into specific regions and countries to understand how they are navigating this complex landscape, including Europe, the Middle East (especially Gulf countries positioning as “swing states”), East Asia (Japan, Taiwan, and South Korea), and India.
- 2026 and Beyond: The final section will provide insights for business leaders and decision-makers, helping them understand how these intersecting dynamics will impact their ability to integrate AI into their businesses in the coming years.
See the in-progress Chatham House Report breakdown slide deck below.
The “Navigating AI Governance and Security: Global Perspectives” report will act as the catalyst for our first annual AI Collaborative Conference early next year. This deck describes the structure, goals, and methodology of the report.