Please see below for the AI-assisted summary of our first virtual salon.

Ylli Bajraktari interviewed by Tara Rigler, VP of Communications and Public Affairs, Special Competitive Studies Project (SCSP)

Co-founders Scott Campbell and Bill Deckelman introduced the AI Collaborative, a new global association for business executives, policy experts, and technologists designed to monitor and assess AI technology. They emphasized that AI’s transformative power exceeds that of any previous technology. The AI Collaborative aims to provide a fact-based understanding of AI’s risks and opportunities, fostering global relationships and delivering practical insights to leaders navigating today’s complex and uncertain world.

Ylli Bajraktari, President and CEO of the Special Competitive Studies Project (SCSP), shared his journey into AI, influenced by former Deputy Secretary of Defense Bob Work, who launched the Pentagon’s early AI initiatives. Bajraktari discussed the National Security Commission on Artificial Intelligence (NSCAI), created by Congress in 2018. The commission was unusual in that it was forward-looking, anticipating coming issues rather than investigating past failures. A key driver for its creation was the realization that China intended to become the global AI leader by 2030, marking it as a significant competitor. The NSCAI, a public-private effort that included leaders like Eric Schmidt and Andy Jassy, aimed to organize the U.S. government for the AI age.

The NSCAI’s 759-page report had a profound impact, leading to 111 pieces of legislation. Its key recommendations focused on:

  • Government Organization: Creating dedicated AI offices and White House leadership.
  • Hardware: Advocating for hardware investment, which underpinned the CHIPS Act.
  • Ecosystem: Revitalizing the U.S. innovation ecosystem through collaboration among the local, private, academic, and government sectors.
  • Talent Development: Addressing the gap in computer science graduates, retaining foreign talent, and attracting skilled individuals to government service.

Bajraktari noted that AI models have grown significantly more powerful roughly every six months, making progress on problems like “hallucinations”. He stressed that the current decade will determine who dominates AI for the rest of the century, emphasizing the need for the U.S. to win this technological competition. He highlighted China’s advantages: greater numbers of computer scientists, more government investment in its private sector, and unmatched scale in population and data. China has also demonstrated its ability to pull ahead in critical technologies like 5G, where it now dominates the global market.

Globally, Bajraktari advocated for new alliances beyond traditional ones, drawing parallels to the infrastructure-building of the Cold War era. He highlighted the importance of Middle Eastern partners for energy and digital infrastructure, and the critical roles of Japan, South Korea, and particularly Taiwan (given TSMC and its democratic values). While acknowledging Europe’s capacities, he expressed concern that its focus on regulation might hinder its competitiveness in the new AI era. India was identified as an emerging AI player of significance; with its immense scale, opportunities, and innovative applications, it is poised to become a major technological force.

Regarding risks, Bajraktari’s primary concern is China’s potential dominance of the global digital ecosystem through its AI platforms, akin to the spread of TikTok and WeChat. Other risks include AI’s use in cyber operations (macro-targeting at micro-levels) and bio risks (assistance in developing bio-weapons). On the opportunity side, he pointed to AI’s potential to transform education through personalized tutors, noting that China is already introducing AI curricula beginning in elementary school. He also stressed the need to rethink the future of the workforce for advanced manufacturing and robotics, areas where China is making massive investments.

For immediate action, Bajraktari urged the U.S. to create a positive narrative about AI to drive broader public adoption, noting that public sentiment toward AI is currently more positive in China. Looking to 2035, he hoped that “winning” the AI competition for democracies would mean not a zero-sum outcome but a healthy competition in which the U.S. and its allies lead in building the future digital infrastructure and reach Artificial General Intelligence (AGI) before China does. He also cited quantum computing as another critical technology where a Chinese lead would be detrimental, given its interconnections with other technologies and its impact on encryption. He concluded by emphasizing the vital role of the National Labs in basic R&D and their persistent underfunding, in contrast to China’s increased investments.

David Sanger interviewed by Thom Shanker, Director, Project for Media and National Security

The second interview featured Thom Shanker and David Sanger, a New York Times national security reporter and author of “New Cold Wars.” Sanger began by explaining his book’s central thesis: the United States fundamentally misunderstood where China and Russia were headed, wrongly assuming they would integrate into Western systems. He cited the surprising U.S. dependence on Taiwan Semiconductor Manufacturing Company (TSMC), which produces 90-95% of the most advanced chips, a vulnerability that arose from business decisions rather than national security policy.

Sanger discussed the Biden administration’s policy to counter this, focusing on limiting advanced semiconductor shipments to China to “buy time” and reshoring production through the CHIPS and Science Act. While the Act prompted significant private investment, Sanger noted concerns that by 2030 U.S. production might cover only 20% of domestic consumption, leaving 80% of demand exposed to a potential Chinese move against Taiwan.

Regarding the Trump administration’s AI policy, Sanger stated that President Trump has said little on the topic. However, JD Vance, a key figure, has advocated unleashing American industry and downplaying “hand wringing about safety,” criticizing European regulation as a barrier to catching up. Vance’s team is largely uninterested in “guardrails” for AI, in contrast to the Biden administration’s AI Safety Institute, which evaluated large language models (LLMs) for harmful capabilities. Interestingly, the industry itself has shifted from requesting regulation as liability protection to seeking the removal of “all shackles” under the Trump administration.

Sanger outlined the greatest realistic risks of AI in national security:

  • The unknown potential of AI and its application to autonomous weapons.
  • The risk of AI providing bad information for nuclear launch decisions, even if humans are in the loop.
  • China’s potential use of AI to track silent submarines, undermining second-strike capabilities.
  • AI’s role in disinformation and misinformation, including deepfakes, targeting vulnerable groups.
  • The severe risk of intelligence failure in detecting adversary AI advancements, citing past failures like missing Chinese infiltration of U.S. telecom firms.

On the opportunities side, AI can significantly enhance cyber defense by detecting anomalous activity faster. He pointed out that U.S. Patriot air defense batteries in Ukraine already use AI for rapid trajectory calculations. Sanger discussed the Pentagon’s shift from “man in the loop” to “man on the loop” for military autonomy, acknowledging that the speed of future conflicts might make human intervention impractical.

Sanger noted that while his cybersecurity book “The Perfect Weapon” remains relevant, a new book would be needed to address AI’s national security challenges. He expressed concern over the Trump administration’s dismantling of misinformation/disinformation sections within CISA and other agencies, especially given AI’s role in crafting targeted strikes. He also discussed the threat of quantum computing to encryption, potentially leading to a “post-quantum future,” but noted that China also needs a secure financial system, and the U.S. is working on “quantum-proof” encryption.

Regarding AI’s impact on journalism, Sanger explained that AI companies, initially secretive, now engage more with reporters due to their awareness of public policy implications. He noted that while AI is useful for research and comparisons (e.g., for sports reporters), it is not good at creative thinking or writing; The New York Times bans its use for writing purposes.

Finally, Sanger delved into the “New Cold Wars”: one with China and a “hot and cold” one with Russia. He highlighted the growing convergence of Russia and China, evidenced by over 70 meetings between Putin and Xi, their mutual goal of frustrating the U.S., and Russia’s increased dependence on China for technology. He doubted that a “reverse Kissinger” strategy (using rapprochement with Russia to split it from China) would work, as Russia’s technological dependence on China is unlikely to change.

He also touched on the energy demands of data centers, suggesting that many would need to be built outside the U.S. due to grid limitations, in tension with President Trump’s desire to move manufacturing back to the U.S. He concluded by stressing the growing threat of non-state actors using advanced AI capabilities for cyber operations like ransomware, emphasizing the critical need for guardrails despite political reluctance. Sanger asserted that superpowers cannot choose between safety and speed in AI development, as both are crucial, drawing a parallel to nuclear weapons development. He also pointed out that unlike nuclear deterrence, cyber deterrence has failed, a warning for the AI era.

Chatham House Report Update by Joyce Hakmeh and Katja Bego

Joyce Hakmeh and Katja Bego from Chatham House provided an update on their upcoming report, “Navigating AI Governance and Security: Global Perspectives”. As the research partner and global policy advisor for the AI Collaborative, Chatham House aims to provide a holistic understanding of the overwhelming AI landscape for senior decision-makers, particularly C-Suite executives. The report will be a dynamic “map” for navigating global markets, jurisdictions, and competing national security regimes in AI.

The report will have two main components:

  • Retrospective Analysis: Examining major events from the past year that have shaped AI in terms of security, technology, and geopolitics.
  • Forward-Looking Projections: Anticipating future trends, distinguishing hype from reality, identifying risks, and exploring opportunities for international collaboration. It will also map emerging AI leadership beyond the U.S. and China, focusing on “middle powers” such as the Gulf States, South Korea, and India, which have launched impressive initiatives of their own.

Katja Bego elaborated on the report’s three-part structure:

  1. Holistic View: This section will analyze three main trends shaping the AI environment:
    • Strategic Competition and National Security Focus: The competition over AI has taken on a clear national security dimension, with both the U.S. and China framing their AI efforts as crucial for military supremacy and national power. Other countries are also developing their own AI strategies through a lens of resilience and national security.
    • Increased Militarization and Securitization of AI Development: Traditionally commercial technologies are increasingly being deployed for dual-use purposes in the national security domain. This dynamic, exemplified by AI use in conflicts like Ukraine and Gaza, is expected to flow from the national security sector back into the broader commercial realm.
    • Global Fragmentation: This involves a bifurcation of the AI space across three dimensions: technical fragmentation (different methodologies producing incompatible systems), regulatory fragmentation (countries adopting increasingly divergent regulatory approaches), and economic fragmentation (export controls, protective industrial policies, and the bifurcation of AI supply chains leading to distinct technological blocs).
  2. Regional Deep Dives: This part will zoom into specific regions and countries to understand how they are navigating this complex landscape, including Europe, the Middle East (especially Gulf countries positioning as “swing states”), East Asia (Japan, Taiwan, and South Korea), and India.
  3. 2026 and Beyond: The final section will provide insights for business leaders and decision-makers, helping them understand how these intersecting dynamics will impact their ability to integrate AI into their businesses in the coming years.

See the in-progress Chatham House Report breakdown slide deck below.

The “Navigating AI Governance and Security: Global Perspectives” report will act as the catalyst for our first annual AI Collaborative Conference early next year. This deck describes the structure, goals, and methodology of the report.

Chatham House Report Deck