New York to New Delhi: Public Opinion Polling and the Rise of AI in India

Please see below for the AI-assisted summary of our second virtual salon. View the whole salon here.

Dritan Nesho, CEO, HarrisX, Presenting

Umesh Sachdev, Founder & CEO, Uniphore, and Abhishek Singh, Additional Secretary, Ministry of Electronics & IT, Government of India, interviewed by Mukesh Aghi

Section I — Public Opinion, Trust, and AI in Democracy

Key Findings

  • Erosion of trust: One-third of respondents misidentified fake AI content; in some instances, over 50% believed fabricated material was genuine.
  • AI misinformation at scale: Exposure to synthetic content caused a measurable decline in trust across all forms of media.
  • Subtle deception: Humorous or personality-aligned deepfakes proved far more convincing than overt falsehoods.

Regulatory consensus:

  • 71% believe AI will place elections in “uncharted territory.”
  • Over 70% support regulation to ban political deepfakes, mandate disclosure of AI use in campaigns, and hold platforms accountable.

Implications

Nesho warned that AI-generated misinformation could swing major elections and destabilize democracies. He called for:

  • Corporate preparedness: Develop clear crisis response strategies rooted in authenticity and public trust.
  • Technological guardrails: Explore blockchain-based watermarking to authenticate real content (see the illustrative sketch after this list).
  • Balanced regulation: Avoid overreach that stifles innovation but establish proactive industry self-governance.
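
For readers curious how such watermarking could work mechanically, here is a minimal Python sketch (our illustration; the salon did not go into this level of detail). The idea: a publisher registers a cryptographic fingerprint of the original media on an append-only ledger, and anyone can later recompute the fingerprint to check provenance. The `ledger` dict and the `register_content`/`verify_content` helpers are hypothetical stand-ins; a real system would add digital signatures and an actual distributed ledger.

```python
# Minimal sketch of ledger-based content authentication, the mechanism
# behind "blockchain watermarking." Hypothetical and simplified: a plain
# dict stands in for an append-only public ledger, and there is no signing.
import hashlib

ledger: dict[str, str] = {}  # digest -> publisher; stand-in for a blockchain

def register_content(publisher: str, content: bytes) -> str:
    """Publisher records a fingerprint of the original media on the ledger."""
    digest = hashlib.sha256(content).hexdigest()
    ledger[digest] = publisher  # on a real chain this write would be immutable
    return digest

def verify_content(content: bytes) -> str | None:
    """Anyone can recompute the fingerprint and check it against the ledger."""
    digest = hashlib.sha256(content).hexdigest()
    return ledger.get(digest)  # publisher name if authentic, None otherwise

original = b"official campaign video bytes"
register_content("Campaign HQ", original)

print(verify_content(original))                  # "Campaign HQ": authentic
print(verify_content(b"deepfaked video bytes"))  # None: no provenance record
```

The design point is that verification is cheap and public: altering even one byte of a file changes its SHA-256 digest, so manipulated content derived from the original will not match any ledger entry.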

“AI is here to stay, but the speed at which it erodes trust may outpace our ability to adapt,” Nesho cautioned. “Self-regulation and education will define whether AI strengthens or undermines democracy.”

Section II — U.S.–India Collaboration in AI Innovation

Umesh Sachdev, founder of Uniphore—a $2.5B AI company serving over 2,000 global enterprises—described India’s growing influence as both a talent hub and innovation partner in the global AI landscape. He emphasized that AI’s future lies in the complementary strengths of the U.S. and India.

Key Themes

  • India’s AI Advantage: Supplies top-tier AI talent globally, including researchers at OpenAI and Google.
    Domestic entrepreneurship in applied AI is also on the rise.
  • Strategic Collaboration: U.S. leads in compute and foundational models; India contributes human capital and deployment scale.
    Shared R&D in AI and quantum computing could deepen this alliance.
  • China’s Open-Source Strategy: Chinese models are “distilled” from U.S. systems and released openly, expanding global reach.
    Sachdev urged the U.S. to keep allies “close and empowered” to counterbalance China’s influence.
  • AI and Growth: Most CEOs view AI as a growth accelerator, not a cost-reduction tool.
    Productivity gains could push U.S. GDP growth above 5% and India’s beyond 15% within a decade.
  • Trust as Strategy: Building responsible, transparent AI systems is not only ethical—it’s good business.

“AI won’t eliminate jobs—it will create an age of abundance,” said Sachdev. “The next wave of trillion-dollar value creation will come from AI security and trust.”

Section III — India’s National AI Mission and the 2026 Global AI Impact Summit

Abhishek Singh outlined India’s comprehensive National AI Mission, a framework designed to build inclusive, ethical, and globally competitive AI capacity. The mission’s seven pillars are reshaping India’s position as a leader in applied AI.

Seven Pillars of India’s AI Mission

  • Compute Infrastructure: Affordable access to GPU capacity (40,000 GPUs currently deployed).
  • Data Ecosystem: AI Kosh — a national platform with 3,000 datasets and 240 AI tools.
  • Indian Foundation Models: Development of multilingual, culturally aligned large language models.
  • AI for Public Good: Deployments in healthcare, education, agriculture, and climate adaptation.
  • Talent & Training: 570 new data labs and national AI fellowships across disciplines.
  • Startup Acceleration: Funding and mentorship through global incubators and venture programs.
  • Responsible AI: Creation of India’s AI Safety Institute for standards in transparency, bias mitigation, and deepfake detection.

AI Impact Summit 2026 (New Delhi)

The India AI Impact Summit—scheduled for February 19–20, 2026—will be the first major AI summit hosted in the Global South.
Its central theme, “AI for People, Planet, and Progress,” will focus on:

  • People: Future of work, inclusion, accessibility, and digital literacy.
  • Planet: Sustainable AI development and climate resilience.
  • Progress: Economic growth, innovation, and AI for social good.

Complementary events will include global innovation challenges, youth and women’s AI competitions, research symposia, and a large-scale AI Expo.

India’s Collaborative Vision

Singh emphasized that India seeks to democratize AI, not dominate it—serving as a partner to the Global South and a bridge to the West.
Voice-enabled, multilingual AI services aim to bring 500 million non-digital citizens into the digital economy, transforming access to healthcare, finance, and education.

“AI must be a democratizing force,” Singh stated.
“When paired with inclusivity, trust, and innovation, it becomes a tool for shared prosperity.”

Introductory Salon

Please see below for the AI-assisted summary of our first virtual salon. View the whole salon here.

Ylli Bajraktari interviewed by Tara Rigler, VP of Communications and Public Affairs, Special Competitive Studies Project (SCSP) 

Co-founders Scott Campbell and Bill Deckelman introduced the AI Collaborative, a new global association for business executives, policy experts, and technologists designed to monitor and assess AI technology. They emphasized AI’s unprecedented transformative power compared to previous technologies. The AI Collaborative aims to provide a fact-based understanding of AI’s risks and opportunities, fostering global relationships and delivering practical insights to leaders navigating today’s complex and uncertain world.

Ylli Bajraktari, President and CEO of the Special Competitive Studies Project (SCSP), shared his journey into AI, influenced by former Deputy Secretary of Defense Bob Work, who launched early AI initiatives in the Pentagon. Bajraktari discussed the National Security Commission on Artificial Intelligence (NSCAI), created by Congress in 2018. This commission was unique as it was forward-looking, anticipating coming issues rather than investigating past failures. A key driver for its creation was the realization of China’s true intentions to become the global AI leader by 2030, marking it as a significant competitor. The NSCAI, a public-private effort including leaders like Eric Schmidt and Andy Jassy, aimed to organize the U.S. government for the AI age.

The NSCAI’s 759-page report had a profound impact, leading to 111 pieces of legislation. Its key recommendations focused on:

  • Government Organization: Creating dedicated AI offices and White House leadership.
  • Hardware: Advocating for hardware investment, which underpinned the CHIPS Act.
  • Ecosystem: Revitalizing the U.S. ecosystem through collaboration between local, private, academic, and government sectors.
  • Talent Development: Addressing the gap in computer science graduates, retaining foreign talent, and attracting skilled individuals to government service.

Bajraktari noted that AI models have grown significantly more powerful every six months, with newer generations increasingly addressing issues like “hallucinations.” He stressed that the current decade will determine who dominates the rest of the century in AI, emphasizing the need for the U.S. to win this technological competition. He highlighted China’s advantages, including greater numbers of computer scientists, more government investment in its private sector, and unmatched scale in population and data. China has also demonstrated its ability to get ahead in critical technologies like 5G, where it now dominates the global market.

Globally, Bajraktari advocated for new alliances beyond traditional ones, drawing parallels to the Cold War era’s infrastructure building. He highlighted the importance of Middle Eastern partners for energy and digital infrastructure, and the critical roles of Japan, South Korea, and particularly Taiwan (due to TSMC and its democratic values). While acknowledging Europe’s capacities, he expressed concern that its focus on regulation might hinder its competitiveness in the new AI era. India was identified as an emerging significant AI player, demonstrating immense opportunities, scale, and innovative applications, and poised to be a major technological force.

Regarding risks, Bajraktari’s primary concern is China’s potential dominance of the global digital ecosystem through its AI platforms, akin to the spread of TikTok and WeChat. Other risks include AI’s use in cyber operations (macro-targeting at micro-levels) and bio risks (assisting in the development of bio-weapons). Opportunities include transforming education through personalized AI tutors, an area where China is already moving by introducing AI curricula from elementary school onward. He also stressed the need to rethink the future of the workforce for advanced manufacturing and robotics, where China is making massive investments.

For immediate action, Bajraktari urged the U.S. to create a positive narrative about AI to drive broader public adoption, noting that public sentiment toward AI is currently more positive in China. Looking to 2035, he hoped that “winning” the AI competition for democracies would not mean a zero-sum outcome, but rather a healthy competition in which the U.S. and its allies lead in building the future digital infrastructure and reach Artificial General Intelligence (AGI) before China does. He also mentioned the importance of quantum computing as another critical technology where a Chinese lead would be detrimental, given its links to other technologies and its impact on encryption. He concluded by emphasizing the vital role of National Labs in basic R&D and their underfunding, in contrast to China’s increasing investments.

David Sanger interviewed by Thom Shanker, Director, Project for Media and National Security

The second interview featured Thom Shanker and David Sanger, a New York Times national security reporter and author of “New Cold Wars.” Sanger began by explaining his book’s central thesis: the United States fundamentally misunderstood where China and Russia were headed, wrongly assuming they would integrate with Western systems. He cited the surprising U.S. dependence on Taiwan Semiconductor Manufacturing Company (TSMC), which produces 90–95% of the most advanced chips, a vulnerability that arose from business decisions rather than national security policy.

Sanger discussed the Biden administration’s policy to counter this, focusing on limiting advanced semiconductor shipments to China to “buy time” and reshoring production through the CHIPS and Science Act. While the Act prompted significant private investment, Sanger noted concerns that by 2030, U.S. production might cover only 20% of domestic consumption, leaving 80% of supply vulnerable to a potential Chinese move against Taiwan.

Regarding the Trump administration’s AI policy, Sanger stated that President Trump has said little on the topic. However, JD Vance, a key figure, advocated for unleashing American industry and downplaying “hand wringing about safety,” criticizing European regulation as a barrier to catching up. Vance’s team is largely uninterested in “guardrails” for AI, in contrast to the Biden administration’s AI Safety Institute, which evaluated large language models (LLMs) for harmful capabilities. Interestingly, the industry itself has shifted from requesting regulation for liability protection to desiring the removal of “all shackles” under the Trump administration.

Sanger outlined the greatest realistic risks of AI in national security:

  • The unknown potential of AI and its application to autonomous weapons.
  • The risk of AI providing bad information for nuclear launch decisions, even if humans are in the loop.
  • China’s potential use of AI to track silent submarines, undermining second-strike capabilities.
  • AI’s role in disinformation and misinformation, including deepfakes, targeting vulnerable groups.
  • The severe risk of intelligence failure in detecting adversary AI advancements, citing past failures like missing Chinese infiltration of U.S. telecom firms.

On the opportunities side, AI can significantly enhance cyber defense by detecting anomalous activity faster. He pointed out that U.S. Patriot air defense batteries in Ukraine already use AI for rapid trajectory calculations. Sanger discussed the Pentagon’s shift from “man in the loop” to “man on the loop” for military autonomy, acknowledging that the speed of future conflicts might make human intervention impractical.
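
To make the anomaly-detection point concrete, below is a deliberately simplified Python sketch (our illustration, not something presented in the interview) that flags minutes of unusual network traffic by their distance, in standard deviations, from the series mean. The `flag_anomalies` helper and the 2.0 threshold are illustrative assumptions; operational defenses use far richer features and trained models.

```python
# Toy anomaly detector: flag time windows whose traffic deviates sharply
# from the mean. Purely illustrative; real cyber-defense systems rely on
# many signals and learned models, not a single z-score over raw counts.
from statistics import mean, stdev

def flag_anomalies(requests_per_minute: list[int], threshold: float = 2.0) -> list[int]:
    """Return indices of minutes whose request count lies more than
    `threshold` standard deviations from the mean of the series."""
    mu = mean(requests_per_minute)
    sigma = stdev(requests_per_minute)
    if sigma == 0:
        return []  # perfectly flat traffic has no outliers
    return [i for i, r in enumerate(requests_per_minute)
            if abs(r - mu) / sigma > threshold]

traffic = [120, 115, 130, 118, 122, 900, 119]  # minute 5 is a sudden burst
print(flag_anomalies(traffic))  # -> [5]
```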

Sanger noted that while his book “The Perfect Weapon,” on cyber security, remains relevant, a new book would be needed to address AI’s national security challenges. He expressed concern over the dismantling of misinformation/disinformation sections within CISA and other agencies by the Trump administration, especially given AI’s role in crafting targeted strikes. He also discussed the threat of quantum computing to encryption, potentially leading to a “post-quantum future,” but noted that China also needs a secure financial system, and the U.S. is working on “quantum-proof” encryption.

Regarding AI’s impact on journalism, Sanger explained that AI companies, initially secretive, now engage more with reporters due to their awareness of public policy implications. He noted that while AI is useful for research and comparisons (e.g., for sports reporters), it is not good at creative thinking or writing; The New York Times bans its use for writing purposes.

Finally, Sanger delved into the “New Cold Wars”: one with China and a “hot and cold” one with Russia. He highlighted the growing convergence of Russia and China, evidenced by over 70 meetings between Putin and Xi, their mutual goal of frustrating the U.S., and Russia’s increased dependence on China for technology. He doubted the “reverse Kissinger” strategy (using rapprochement with Russia to split it from China) would work, as Russia’s dependence on China for technology is unlikely to change. He also touched on the energy demands of data centers, suggesting that many would need to be built outside the U.S. due to grid limitations, contradicting President Trump’s desire to move manufacturing back to the U.S.

He concluded by stressing the growing threat of non-state actors using advanced AI capabilities for cyber operations like ransomware, emphasizing the critical need for guardrails despite political reluctance. Sanger asserted that superpowers cannot choose between safety and speed in AI development, as both are crucial, drawing a parallel to nuclear weapons development. He also pointed out that unlike nuclear deterrence, cyber deterrence has failed, a warning for the AI era.

Chatham House Report Update by Joyce Hakmeh and Katja Bego

Joyce Hakmeh and Katja Bego from Chatham House provided an update on their upcoming report, “Navigating AI Governance and Security: Global Perspectives.” As the research partner and global policy advisor for the AI Collaborative, Chatham House aims to provide a holistic understanding of the overwhelming AI landscape for senior decision-makers, particularly C-Suite executives. The report will be a dynamic “map” for navigating global markets, jurisdictions, and competing national security regimes in AI.

The report will have two main components:

  • Retrospective Analysis: Examining major events from the past year that have shaped AI in terms of security, technology, and geopolitics.
  • Forward-Looking Projections: Anticipating future trends, distinguishing hype from reality, identifying risks, and exploring opportunities for international collaboration. It will also map emerging AI leadership beyond the U.S. and China, focusing on “middle powers” such as the Gulf States, South Korea, and India, which have impressive initiatives.

Katja Bego elaborated on the report’s three-part structure:

  1. Holistic View: This section will analyze three main trends shaping the AI environment:
    • Strategic Competition and National Security Focus: The competition over AI has taken on a clear national security dimension, with both the U.S. and China framing their AI efforts as crucial for military supremacy and national power. Other countries are also developing their own AI strategies through a lens of resilience and national security.
    • Increased Militarization and Securitization of AI Development: Traditionally commercial technologies are increasingly being deployed for dual-use purposes in the national security domain. This dynamic, exemplified by AI use in conflicts like Ukraine and Gaza, is expected to flow from the national security sector into the broader commercial realm.
    • Global Fragmentation: This involves a bifurcation in the AI space across three dimensions: technical fragmentation (different methodologies leading to incompatible systems), regulatory fragmentation (countries adopting increasingly divergent regulatory approaches), and economic fragmentation (export controls, protective industrial policies, and the bifurcation of AI supply chains leading to distinct technological blocks).
  2. Regional Deep Dives: This part will zoom into specific regions and countries to understand how they are navigating this complex landscape, including Europe, the Middle East (especially Gulf countries positioning themselves as “swing states”), East Asia (Japan, Taiwan, and South Korea), and India.
  3. 2026 and Beyond: The final section will provide insights for business leaders and decision-makers, helping them understand how these intersecting dynamics will impact their ability to integrate AI into their businesses in the coming years.

See the in-progress Chatham House Report breakdown slide deck below.

The “Navigating AI Governance and Security: Global Perspectives” report will act as the catalyst for our first annual AI Collaborative Conference early next year. This deck describes the structure, goals, and methodology of the report.

Chatham House Report Deck