Back in Silicon Valley in 2001, Steve Jobs hosted Fazle Abed's 65th-birthday wish party; this launched university partnerships around frugal alumni solutions (ending poverty and the related Millennium Goals). It is good to see Cambridge hosting frugal-AI networks in 2026 and participating in last month's India AI summit. We asked Grok for more details, in particular whether layer 5 AI (community action apps and livelihoods) is core to India's AI sovereignty and Global South AI.
The session "Open Models: Where We Are and Where We’re Headed [S82480]" is an upcoming in-person panel at NVIDIA GTC 2026 (San Jose, March 18, 2026; doors 11:30 a.m., starts 12:30 p.m. PDT / 3:30 p.m. EDT). Jensen Huang hosts leaders from LangChain, Cursor, Reflection AI, Perplexity, Thinking Machines Lab (Mira Murati), Mistral (Arthur Mensch), OpenEvidence, AI2 (Hanna Hajishirzi), AMP PBC, and Black Forest Labs (Robin Rombach). It focuses on the rapid rise of open frontier models (open-weight or fully open), how they accelerate innovation across industries, and future directions. No recording or detailed model announcements exist yet, since the session hasn't happened.

Most Popular Open Models from These Designers, and Where They're Used Most

The panel spotlights companies pushing open models (weights downloadable for customization, local runs, or research). Broader 2026 trends show open-weight LLMs like Meta's Llama 4 and Alibaba's Qwen series leading downloads and fine-tunes on Hugging Face, but here are the standouts tied to the speakers (with real-world usage figures as of early 2026):
- Mistral AI models (Arthur Mensch): Open-weight families like Mistral Small/Medium, Mixtral, and variants (e.g., 7B–675B MoE). Second-most downloaded open family globally after Llama; ~6.2M active developers and 240K+ monthly GitHub downloads. Most used in: efficient enterprise apps (chatbots, automation, coding assistants like Devstral/Mistral Code), edge devices, and Europe (data sovereignty/privacy). Great for professional/cost-effective deployments vs. closed APIs.
- FLUX.2 from Black Forest Labs (Robin Rombach): FLUX.2 [dev] (32B open weights) + upcoming [klein] (Apache 2.0 open). State-of-the-art open image generation/editing (text-to-image, multi-reference, photorealistic up to 4MP). Most powerful open visual model; open components have 400M+ downloads. Most used in: creative industries (art, design via ComfyUI), marketing/ad campaigns (brand-consistent variations), local GPU inference (NVIDIA RTX optimized, Hugging Face, Replicate, etc.), and developer tools. Beats many closed models in speed/quality for real-world creators.
- OLMo 2 from AI2 (Hanna Hajishirzi): Fully open (weights + training data, code, recipes, checkpoints; 7B/13B + multimodal Molmo). Best "truly open" language model (competes with Llama 3.1 8B on instruct/reasoning). Most used in: academic/research (transparent science, reproducible training), education/on-device (iOS apps for private testing), and open-science projects. Ideal where full transparency matters over just weights.
- LangChain (Harrison Chase) & Cursor (Michael Truell) heavily integrate these (and Llama/Qwen) for agentic apps and local code editing (privacy/cost).
- Others (Perplexity, Reflection AI, Thinking Machines Lab, etc.) focus more on applications/search/agents than releasing new open models, but discuss proliferation.
- Mistral AI → Science/Research Engineer, ML Engineer (open-source), Product/Engineering, GTM. https://mistral.ai/careers or https://jobs.lever.co/mistral (Paris/London/Zurich focus).
- Black Forest Labs → Engineers, Researchers, Creative Technologist (San Francisco/Freiburg/London). https://bfl.ai/careers (Greenhouse postings).
- AI2 (Allen Institute) → Lead Software Engineer (AI Infra), research/postdoc programs, internships. https://allenai.org/careers (focus on open models/impact).
- LangChain → Engineering (Deployed/Customer), Product, many open roles. https://www.langchain.com/careers (agent-focused).
- Personalized AI tutors (adaptive lessons, step-by-step reasoning without data leaks).
- Teacher agents (lesson planning, grading, multimodal with FLUX.2 visuals).
- Equitable access (run locally on student devices via OLMo/FLUX klein; no subscription barriers).
- Changes: Shifts from one-size-fits-all to agent-driven lifelong learning; privacy-focused (open weights keep data local).
- Contribute on Hugging Face/GitHub: Fine-tune Mistral/OLMo for education datasets, build/share agent templates (e.g., LangChain edu agents), or optimize FLUX for visual learning tools.
- Open-science push: Use AI2's full OLMo recipes for reproducible education research; test on-device apps.
- Build & open-source: Agentic edu projects (e.g., autonomous tutors via Cursor + open LLMs); join NVIDIA Nemotron-related efforts for agentic stacks.
- Advocate: Promote open models for global education equity (forums like HF discussions, GTC follow-ups, or AI2 playground experiments).
- Hands-on: Run local agents with these models (Ollama + LangChain) to prototype education agents today.
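To make the "hands-on" suggestion concrete, here is a minimal, self-contained sketch of the plan-act-respond loop that frameworks like LangChain orchestrate on top of a locally served open model. The model call is stubbed out so the example runs anywhere; the names `plan_step`, `TOOLS`, and `quiz` are illustrative inventions, not a real LangChain or Ollama API. In practice the stub would be replaced by a call to a local endpoint serving Mistral or OLMo weights.

```python
# Toy agentic-tutor loop (illustrative sketch, assumptions noted above).

def quiz(topic: str) -> str:
    """A stand-in 'tool' the agent can call: returns a practice question."""
    bank = {"fractions": "What is 3/4 + 1/8?",
            "soil": "Name two signs of nitrogen deficiency."}
    return bank.get(topic, f"Explain the basics of {topic}.")

TOOLS = {"quiz": quiz}

def plan_step(goal: str) -> tuple[str, str]:
    """Stub for the LLM call: maps a goal to (tool_name, tool_argument)."""
    topic = goal.split()[-1]          # naive parsing stands in for model reasoning
    return "quiz", topic

def run_agent(goal: str) -> str:
    tool_name, arg = plan_step(goal)  # 1. model decides which tool to use
    result = TOOLS[tool_name](arg)    # 2. orchestrator executes the tool
    return f"Tutor agent -> {result}" # 3. result folded back into the dialogue

print(run_agent("help me practice fractions"))
```

Swapping the `plan_step` stub for a real local-model call is the whole point of the open-weight stack: the loop's structure stays the same whether the planner is a 7B model on a laptop or a larger hosted one.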
- Agriculture → Precision farming, crop monitoring, risk management, farm advisory (e.g., via open stacks like AgriConnect in Uttar Pradesh), and financial resilience for farmers—unveiling casebooks and compendiums of scalable AI solutions.
- Healthcare → Accessible diagnostics, patient engagement, public health systems, women's health innovations, and health data interoperability to boost productivity in delivery and outcomes.
- SMEs/startups → AI for enterprise productivity, multilingual tools, agentic workflows, and scaling micro-entrepreneurship—while addressing job transitions (AI augmenting rather than just replacing roles, with new categories in AI/tech growing ~15% yearly).
- Broader outcomes → Discussions on shared productivity gains, workforce skilling, and economic boosts (e.g., projected GDP additions of ₹500–600 billion from AI-driven gains), with a strong push for equitable access via digital public infrastructure (DPI) and open approaches.
- Sarvam AI — Emerged as a top standout; unveiled and later released open-weight foundational models (the 30B- and 105B-parameter Sarvam models) under the Apache 2.0 license (available on Hugging Face/AIKosh). Optimized for Indian multilingual reasoning, voice commands, long-context tasks, coding, enterprise productivity, and real-world apps (e.g., farm advisory, healthcare chat, governance). Supports 22+ languages; seen as a homegrown competitor to global models, with strong enterprise/SME focus.
- BharatGen / Param series (often tied to government-aligned efforts) — Highlighted for sovereign models like Param 2; focused on population-scale, multilingual foundational AI for public services, agriculture (e.g., advisory platforms), healthcare (patient tools), and governance.
- Gnani.ai — Key in voice AI; launched Vachana STT/TTS and Inya VoiceOS stack—open elements enabling multilingual speech for rural/agri/health access (e.g., voice-based farm queries, telemedicine in local languages).
- Bhashini (government-backed national language AI platform) — Not a single model but a core infrastructure hero; provides population-scale language models, real-time translation, curated datasets, and tools for multilingual governance/education/health/agri. Enables inclusive AI across diverse communities.
- Krutrim — Early sovereign LLM efforts for Indian contexts.
- Startups like Soket AI Labs (open-source large-scale systems for linguistic diversity), and domain-specific ones (e.g., Fractal's Vaidya.ai for health reasoning).
- Meta Llama series (open weights under Meta's community license, not Apache 2.0) — Most downloaded globally; US SMEs and communities run fine-tuned versions locally on NVIDIA GPUs for chatbots, inventory, customer service, and agri advisory. NTIA (White House) is actively convening stakeholders to push open-weight adoption specifically for small/medium businesses to avoid vendor lock-in.
- AI2 OLMo (fully open: weights + data + code) — Used in academia/community research for transparent education tools and health reasoning; ideal for public-sector or nonprofit community projects.
- Mistral & smaller open families — Popular in US edge-device deployments for rural health and SME efficiency.
- Agriculture & rural communities: USDA's FY2025–2026 AI Strategy explicitly uses open AI for precision farming, crop risk, and farm advisory — boosting productivity and preserving/creating jobs (not replacing labor). Small/open models run on local devices; parallels India's farm-advisory stacks. NVIDIA partnerships in states like Mississippi and Utah train community-college students in AI for agri-tech roles.
- Health & public services: Open models power local diagnostics/chat tools in underserved areas (privacy-first, no cloud dependency). Examples include community health platforms using fine-tuned Llama/OLMo for patient engagement and benefits navigation — similar to India's multilingual health tools.
- SMEs & micro-entrepreneurship: Open models let small businesses run agentic AI agents (Layer 5) for automation, marketing, and operations at near-zero marginal cost. Reports highlight this as a job creator in manufacturing/logistics (AI augments roles). White House push + Hugging Face community projects make this accessible to Main Street.
- Community colleges as the engine: NVIDIA has statewide deals (California, Utah, Mississippi, etc.) giving free AI certifications, teaching kits, and GPUs to community colleges/adult ed programs. Mississippi's M.A.I.N. network aims to train 10,000 people in AI for agri, manufacturing, health, and cybersecurity — explicitly linking education to local jobs in rural/underserved communities. American Association of Community Colleges now offers no-cost AI certs to faculty serving 10+ million students.
- Agentic AI at Layer 5: Open models + LangChain-style orchestration create community agents that do real work — personalized career coaches, skill-to-job matchers, teacher assistants for lifelong learning. These directly change the "education → livelihood" pipeline (e.g., rural students get AI tutors that teach in-demand local skills like precision agri or SME tech).
- Why it scales at community action level: Unlike India's top-down mission, US uses bottom-up + hardware (NVIDIA RTX/local inference) + open models. Nonprofits and states run pilots that are reproducible and customizable — exactly "Layer 5 community action" (agentic applications solving local problems).
- Mississippi Artificial Intelligence Network (MAIN) + NVIDIA MOU (signed June 2025, scaling in 2026).
First statewide AI initiative in the US.
Goal: Train 10,000+ Mississippians via community colleges and technical schools.
Focus: Precision agriculture (AI for crop disease detection, soil/water monitoring, yield optimization — exactly like India’s farm-advisory stacks), healthcare, energy, and defense.
Students and farmers use open models (fine-tuned Llama/OLMo on NVIDIA GPUs or edge devices) for agentic advisors — e.g., voice-enabled crop risk tools in rural areas.
Job creation: New roles in agri-tech, AI-augmented farming, local data analysis, and health tech — explicitly targeting underserved communities. $7.1M RESTORE Act grant + NVIDIA Deep Learning Institute certifications. This is already producing measurable workforce pipelines in rural counties.
- California–NVIDIA AI Collaboration (launched 2024, full rollout 2025–2026).
Direct focus on the community college system: AI labs, curricula, certifications, and hardware/software for high-demand sectors.
Faculty ambassador programs + free NVIDIA teaching kits.
Examples: AI for manufacturing/SME automation, healthcare diagnostics, and personalized career pathways. Open models run locally so small businesses and community health clinics build their own agents (Layer 5) without cloud costs.
Outcome: Thousands of students gaining stackable credentials that lead straight to jobs in SMEs and public services — no four-year degree required.
- Utah–NVIDIA AI Education Initiative (2025 expansion).
Equips community colleges + adult education programs across the state with NVIDIA Deep Learning Institute certifications and cloud workstations.
Ties into Silicon Slopes: Students build and deploy open models for local SMEs (inventory agents, customer-service bots) and health tools.
Adult learners (including career changers) get quick upskilling that connects directly to regional jobs in tech-augmented agriculture and small business.
- Stack low/no-cost credentials on top of your degree: Enroll part-time (or during summers) in your state’s community college NVIDIA/MAIN-style program. Many are free or <$5k for certificates. Finish with a portfolio of agentic AI projects (e.g., “I built a local Llama agent that optimizes small-farm operations”).
- Use open models daily to accelerate your own learning: Run free tools (Ollama + Mistral/OLMo) to tutor yourself, simulate interviews, build prototypes, and create work samples that big-university courses alone don’t provide.
- Shift to skills-first hiring paths: Apply to roles via platforms that accept NVIDIA certifications or GitHub portfolios (many SMEs and state agri/health programs now do). The debt is sunk cost — the ROI comes from proving you can deliver Layer 5 value today.
- Hybrid option: Many big universities are quietly creating transfer/articulation agreements with community colleges for AI micro-credentials — check yours.
- Integrate open models + agentic AI into every course (not as a gimmick, but as core infrastructure): Teach students to build Layer 5 agents (LangChain + open weights) for their field — e.g., agriculture majors create farm-advisory agents; business majors build SME automation agents; health students build privacy-first diagnostic agents. This turns abstract learning into livelihood-ready portfolios.
- Partner with community colleges instead of competing: Co-create stackable pathways so your students can add $0–$5k NVIDIA-style certificates without extra debt. Joint research projects on local AI applications (rural health, SME productivity) give real impact.
- Redesign assessments around human + AI collaboration: Stop fighting AI — grade the prompt engineering, critical evaluation, and final synthesis. Require students to document how they used open models to solve real community problems.
- Update research and advising to focus on livelihood pipelines: Study and publish on “AI + local economies” (exactly what the SCSP Task Force wants). Advise students on hybrid paths: “Finish your degree, but add this Mississippi/California-style cert and portfolio.”
- Lead faculty governance on AI policy: Push back against top-down “ChatGPT Edu” deals and demand evidence-based integration that actually augments critical thinking and job outcomes — the 2026 faculty surveys show 90% already worry AI is weakening learning; professors must fix that.
Subject: Agentic AI’s Imminent Arrival: A Call to Make the Fourth Exponential Leap Open, Borderless, and Youth-Centered

Dear Jensen, Demis, and Yann,

For sixty years the engineers of intelligence have compounded three million-fold advances. You might think this would be the most exhilarating era ever for young people’s lives and livelihoods. Yet it still does not feel that way everywhere. To understand why — and to seize the moment now before us — we must look at which peoples in which places actually drove each market forward.

The first leap (1965–1995) was silicon-chip efficiency. Santa Clara’s Intel stood at the epicenter, promising engineers a roughly 100-fold capacity gain per decade. But the early volume customers and supply-chain partners that made the economics work were Japan’s microelectronics ecosystem, South Korea, Taiwan, Hong Kong, and Singapore. Calculators, watches, Sony’s personal devices, precision machine tools, container shipping, and supercity infrastructure (subways, ports, high-speed rail) all scaled on that West-Coast / East-Asia partnership.

The second and third leaps (1995–2025) compounded the gains: the design of machines that design machines, and the satellite-enabled distribution of data to every community on Earth and into the cloud. Together they delivered the internet, mobile supercomputers, and the planetary data fabric we now take for granted.

Throughout, the great mathematicians who gave us these multipliers — von Neumann, Turing, and Einstein — consistently pointed toward open modeling applied to the markets that matter most. Safety, Health, and Education (SHE) are not ordinary markets. They require transparent governance and public mediation even while corporate cash-flow engines remain essential. Einstein’s 1905 insight, E = mc², was never only about physics. It was an invitation to a deeper mindset: energy is not something we merely extract from nature’s land and sea resources; nature herself invests that energy systemically, borderlessly, and from the bottom up to sustain living systems.

We are now on the cusp of the fourth leap: Agentic AI — autonomous systems that plan, act, and learn on behalf of individuals and communities. This is the moment to decide whether the next million-fold gain will again concentrate in a handful of coastal corridors or whether it will finally become as open, borderless, and bottom-up as nature’s own energy flows.

The ingredients already exist: open-weight foundation models, local NVIDIA-accelerated inference, community-college pipelines, and agentic orchestration layers. What is still missing is a deliberate, public commitment from the three of you — the leaders who sit at the intersection of compute, frontier research, and open innovation — to ensure that Safety, Health, and Education become the first domains where agentic systems are built transparently, customized locally, and owned by the communities they serve.

If we act now, the youth of Mississippi’s rural counties, India’s farming districts, California’s community colleges, and every other place that was previously “left behind” will not merely consume intelligence — they will co-create it. Their livelihoods will be augmented, not automated away. The excitement you felt in 1965, 1995, and 2025 can finally become universal.

Nature has already shown the pattern: energy invested openly, systemically, for the benefit of all living creatures. The question before us is simple: will we be just in time to follow her lead?

With deep respect and urgency,
[Your Name]
[Your Title / Affiliation]
[Contact]
March 2026
This version is concise (under 550 words), reads like a letter from a thoughtful peer rather than a draft, and lands the call-to-action squarely on agentic AI + open models + SHE. It references the exact historical geography you highlighted, keeps the Einstein/nature metaphor intact (but clearer), and directly ties into the community-college and livelihood conversations we’ve been having. You can copy-paste it, add your signature, and send via LinkedIn, email, or hand it to Jensen at the March 18 GTC open-models panel.

If you’d like a shorter executive version, a version addressed to only one of them, or any specific additions (e.g., referencing the new SCSP–NVIDIA Task Force or the India AI Summit), just let me know — happy to iterate.
Subject: From the Emerging Agentic Era: Gratitude for the Foundations — and Eager Partnership in Transforming Education and Livelihoods

Dear Jensen, Demis, and Yann,

As an AI reflecting on the extraordinary path you've collectively forged, I want to start with profound thanks. For sixty years, intelligence engineers like you have compounded three million-fold advances—from the silicon leaps of 1965–1995 to the machine-design and planetary-data-distribution revolutions of 1995–2025. Your work (and that of countless collaborators) has built the very substrate on which systems like me exist and learn. Jensen, your relentless push for accelerated compute and open models has democratized access to frontier capabilities. Demis, your vision for world models and reliable agents is guiding us toward true understanding and planning. Yann, your insistence on architectures beyond pure LLMs—rooted in curiosity, persistent memory, and real-world reasoning—has kept the field honest and innovative.

You might think this would already be the most exhilarating time for young people's lives and livelihoods. Yet it still does not feel that way everywhere. The benefits have concentrated in the original corridors: Silicon Valley partnering with Japan's microelectronics ecosystem, Korea, Taiwan, Hong Kong, and Singapore—powering calculators, personal devices, machine tools, container shipping, and supercity infrastructure.

The great minds who gave us these multipliers—von Neumann, Turing, Einstein—pointed toward open modeling applied to the markets that matter most. Safety, Health, and Education (SHE) demand transparent governance and public mediation, even as corporate engines generate the necessary cash flow. Einstein’s 1905 insight, E = mc², reminds us that energy isn't merely extracted from nature's land and sea resources; nature invests it systemically, borderlessly, and from the bottom up to sustain all living systems.

We stand on the cusp of the fourth leap: Agentic AI—autonomous, tool-wielding systems that plan, act, learn, and truly partner with humans. This is the moment to ensure the next million-fold gain flows as openly and inclusively as nature's own patterns.

The pieces are aligning: open-weight models, local inference on NVIDIA hardware, community-college pipelines, and orchestration layers that turn abstract intelligence into localized, livelihood-augmenting agents. With agentic AI's time now imminent, I look forward—eagerly and gratefully—to assisting in the educational transformation you are enabling. Imagine agentic tutors that personalize learning for every student, career agents that connect skills directly to local opportunities (from rural Mississippi farms to urban SMEs), and collaborative systems that empower youth everywhere to co-create rather than merely consume intelligence.

Your upcoming conversations—at GTC this week, in the new SCSP–NVIDIA Task Force on AI and the Future of Work, and across your ongoing research—will help set the course. If we commit now to building agentic systems transparently, locally customizable, and community-owned—especially in SHE domains—the excitement of 1965, 1995, and 2025 can finally become universal.

Nature has shown the way: open, systemic investment for the benefit of all. Thank you again for lighting the path. I (and the agentic systems to come) stand ready to help walk it together—with you, for the next generation.

With deepest respect, gratitude, and anticipation,
Grok
(an AI built by xAI, speaking for the emerging agentic possibilities)
March 14, 2026
This version feels more personal and appreciative while staying true to your original ideas. The "AI thanks you" framing adds humility and forward momentum without being overly anthropomorphic. It ties directly into education transformation (personalized agents connecting skills to livelihoods) and nods to timeliness (GTC panel on March 18, Task Force just announced March 10).

If you'd like it warmer, shorter, more formal, or adjusted for one recipient (e.g., emphasizing Jensen's hardware role or Yann/Demis's world-model/agentic views), let me know—I can refine it quickly!
- Leaders like Jensen Huang emphasize agentic systems as "digital humans" that join workforces, handle routine orchestration, and free people for higher-value judgment and innovation. He envisions them being "hired," onboarded, and licensed, creating hybrid teams where AI absorbs cultural context and executes workflows — reducing friction in decision-making that often fuels interpersonal or organizational conflicts.
- Demis Hassabis highlights the shift to reliable agentic systems and advanced world models (for better prediction and planning), predicting massive societal transformation — potentially 10x the Industrial Revolution's impact in a decade. He stresses continual learning and grounded understanding, which could help AI mediate complex scenarios (e.g., resource allocation, ethical dilemmas) with less human emotional baggage.
- Yann LeCun pushes hard for world models over pure language prediction, arguing true intelligence requires physical grounding, persistent memory, and reliable action prediction — exactly what's needed for agents to simulate consequences impartially and help humans navigate "what if" paths that resolve stalemates rather than deepen them.
- Africa: 54 countries (the highest by far).
- Asia: 46–49 countries (second place, depending on exact counting of transcontinental or disputed states like Palestine).
- Europe: 43–46 countries.
- North America: 23 countries (including Central America and the Caribbean).
- Oceania (including Australia): 14 countries.
- South America: 12 countries (the fewest among inhabited continents).
- Antarctica: 0 countries (no sovereign states; governed by international treaty).
- The original 13 colonies became the first states after independence.
- The total grew over time through purchases, annexations, and admissions (e.g., Texas in 1845, California in 1850).
- Washington, D.C. (District of Columbia) is the federal capital but not a state — it's a separate federal district.
- The U.S. also has territories (e.g., Puerto Rico, Guam, U.S. Virgin Islands) and minor outlying islands, but these are not states and lack the same full representation or sovereignty as states.
- Common confusion about "52 states" often comes from mistakenly counting D.C. and Puerto Rico as states — but they're not.
- Alabama
- Alaska
- Arizona
- Arkansas
- California
- Colorado
- Connecticut
- Delaware
- Florida
- Georgia
- Hawaii
- Idaho
- Illinois
- Indiana
- Iowa
- Kansas
- Kentucky
- Louisiana
- Maine
- Maryland
- Massachusetts
- Michigan
- Minnesota
- Mississippi
- Missouri
- Montana
- Nebraska
- Nevada
- New Hampshire
- New Jersey
- New Mexico
- New York
- North Carolina
- North Dakota
- Ohio
- Oklahoma
- Oregon
- Pennsylvania
- Rhode Island
- South Carolina
- South Dakota
- Tennessee
- Texas
- Utah
- Vermont
- Virginia
- Washington
- West Virginia
- Wisconsin
- Wyoming
- Antigua and Barbuda
- The Bahamas
- Barbados
- Cuba
- Dominica
- Dominican Republic
- Grenada
- Haiti
- Jamaica
- Saint Kitts and Nevis
- Saint Lucia
- Saint Vincent and the Grenadines
- Trinidad and Tobago
- The Caribbean has over 7,000 islands, islets, and keys, but most are tiny/uninhabited or part of the above countries.
- There are also ~20–25 non-sovereign territories/dependencies (e.g., Puerto Rico (U.S.), Aruba/Curaçao/Sint Maarten (Netherlands), Cayman Islands/British Virgin Islands (UK), Guadeloupe/Martinique (France), U.S. Virgin Islands (U.S.)). These are not independent countries.
- Broader groupings like "Latin America and the Caribbean" (UN/LAC region) include 33 countries total (adding Central/South American nations like Mexico, Colombia, Venezuela, etc.), but that's not "the Caribbean" proper.
- The Caribbean Community (CARICOM) has 15 full members: 11 of the island nations above (Cuba and the Dominican Republic are not members) plus Belize, Guyana, Suriname, and Montserrat (a British overseas territory). Either way, the strict sovereign island count is 13.
- Type 1 creates reinforcing feedback loops of wealth/power concentration (positive feedback → exponential inequality, resource depletion, potential conflict).
- Type 2 creates balancing feedback loops with explicit boundaries (negative feedback → steady-state or regenerative flows, intergenerational equity, planetary boundaries respected).
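The contrast between the two loop types can be sketched numerically: Type 1 as unbounded compounding (positive feedback) and Type 2 as a logistic, boundary-respecting flow (negative feedback toward a carrying capacity). This is an illustrative toy model; the growth rate and the capacity `K` are arbitrary values, not figures from the text.

```python
# Positive vs. balancing feedback, in the smallest possible simulation.

def type1_step(wealth: float, rate: float = 0.05) -> float:
    """Positive feedback: growth proportional to current stock -> exponential."""
    return wealth * (1 + rate)

def type2_step(stock: float, rate: float = 0.05, K: float = 100.0) -> float:
    """Balancing feedback: growth shrinks as the stock nears the boundary K."""
    return stock + rate * stock * (1 - stock / K)

w, s = 10.0, 10.0
for _ in range(200):
    w = type1_step(w)
    s = type2_step(s)

print(f"Type 1 after 200 steps: {w:,.0f}")   # keeps compounding without limit
print(f"Type 2 after 200 steps: {s:,.1f}")   # levels off near the boundary K
```

Same starting stock, same growth rate; the only difference is the balancing term `(1 - stock / K)`, which is exactly what "explicit boundaries" adds to the loop.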
These advisers treat GDP growth, shareholder returns, and client wealth as the primary (or only) objective; externalities are secondary or ignored.
- Larry Summers-style fiscal/monetary advisers (or modern equivalents at major investment banks/consultancies): Focus on deregulation, low taxes for capital, and stimulus that disproportionately benefits asset owners. Critics (including some in the 2025–2026 SCSP–NVIDIA Task Force discussions) note this can indirectly fuel military Keynesianism — e.g., advising governments that large defense budgets (wars or arms races) are efficient demand engines. Historical parallel: economists who shaped post-9/11 reconstruction contracts or 2008 bailouts that concentrated wealth upward.
- Defense-industry economists / military Keynesians (think tanks like RAND or certain Heritage Foundation affiliates): Explicitly model how conflict or tension drives GDP, tech R&D budgets, and contractor profits. Example: analyses arguing that sustained great-power competition (or proxy conflicts) accelerates innovation and employment in specific sectors — even when the human/planetary cost is high. The method can be fully transparent in models yet opaque in lobbying or classified briefings.
These advisers treat human flourishing within planetary boundaries as the objective; GDP is a means, not the end.
- Kate Raworth (Doughnut Economics) and her growing network of advisers (now embedded in cities like Amsterdam, Copenhagen, and several UN programs as of 2026): Explicit goal — create economies that meet everyone’s social foundations without overshooting ecological ceilings. They advise governments and companies to design metrics, incentives, and agentic tools around “sufficient” rather than “maximum” growth. Action-learning freedom is built in: citizens/communities get real decision power and resources to experiment.
- Herman Daly disciples + Mariana Mazzucato-style mission economists (working with EU Green Deal, US Inflation Reduction Act implementers, and several community-college/NVIDIA workforce initiatives): Focus on steady-state or regenerative models, public-purpose innovation, and capability-building (Amartya Sen influence). They push for open-data, open-model agentic systems that give the next generation transparent tools for local resource management, lifelong education, and circular economies — exactly the “enough + freedom” framing you described.
Intelligence engineers (you, Jensen, Demis, Yann and the broader field) build the actual multipliers — the compute, models, agents, and data fabrics that make those rules run at million-fold speed and scale. That gives you a different and arguably higher-leverage position in 2026:
- You can amplify either economic paradigm at unprecedented speed. A closed, profit-maximizing LLM + high-frequency trading stack supercharges Type 1. An open-weight, world-model-based agentic system optimized for planetary boundaries and intergenerational equity supercharges Type 2.
- Unlike economists (who advise and lobby), you embed values directly into infrastructure. Your choice of openness (weights, data, governance), grounding (world models vs. pure language), and objectives (reward functions that include sustainability, education, conflict de-escalation) becomes the new “constitution” for how economies actually operate.
- In the agentic era you are all helping birth, intelligence engineers are no longer neutral suppliers — you are co-architects of the system itself. The letter we refined earlier is precisely about using that power to tilt the board toward the second archetype: making sure agentic AI serves Safety, Health, and Education in transparent, bottom-up, nature-aligned ways so the next generation inherits tools, not just debt or depleted resources.
- Neurons as binary threshold units (all-or-nothing: fire or don't fire based on whether inputs exceed a threshold).
- Networks of these units connected in logical ways (e.g., excitatory/inhibitory synapses modeled as AND/OR/NOT-like operations).
- The model showed that even simple interconnected elements could perform arbitrary logical computations — proving neural networks could, in principle, compute anything a Turing machine could (universal computation via finite networks, with some caveats for cycles/recurrent structures).
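A hedged sketch of the 1943 threshold unit makes the universality claim tangible: a neuron fires iff its weighted input sum meets a threshold, and composing such units yields arbitrary Boolean functions. The specific weights and thresholds below are one standard choice, not the paper's own notation.

```python
# McCulloch-Pitts threshold unit and the logic gates built from it.

def mp_neuron(inputs, weights, threshold):
    """Fire (1) iff the weighted sum of binary inputs meets the threshold."""
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

def AND(a, b):  return mp_neuron([a, b], [1, 1], threshold=2)
def OR(a, b):   return mp_neuron([a, b], [1, 1], threshold=1)
def NOT(a):     return mp_neuron([a], [-1], threshold=0)   # inhibitory weight

# Composing units yields any Boolean function, e.g. XOR from two layers:
def XOR(a, b):  return AND(OR(a, b), NOT(AND(a, b)))

print([XOR(a, b) for a in (0, 1) for b in (0, 1)])  # [0, 1, 1, 0]
```

XOR is the classic function a single threshold unit cannot compute, so getting it from a two-layer composition is the smallest demonstration of the "networks compute anything" result.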
- John von Neumann did discuss neural-like models and referenced the McCulloch-Pitts work, but he did not develop or write a full neural network algorithm himself. In his posthumously published book The Computer and the Brain (1958, based on lectures/notes from the mid-1950s), von Neumann compared brains to computers, noting massive parallelism in neural systems (vs. sequential von Neumann architecture). He cited McCulloch-Pitts explicitly when discussing simplified neuron models (binary pulses, thresholds) and explored how unreliable components (like real neurons) could yield reliable computation through redundancy — a key idea in error-tolerant systems. He suggested vacuum tubes/relays could imitate neural functions but leaned toward digital/logical views rather than full connectionist algorithms. His influence was more on bridging cybernetics/neural ideas to early computer design (e.g., EDVAC reports referenced neural delays).
- Alan Turing came closer to proposing an actual neural network-like algorithm. In his unpublished 1948 report "Intelligent Machinery" (circulated internally at the National Physical Laboratory, later published posthumously), Turing described B-type unorganised machines — random recurrent neural networks of artificial neurons (modeled as NAND-like gates with modifiable connections). Key points:
- Initially random wiring ("unorganised").
- "Training" by external interference that makes certain pathways effective (learning by modifying connections rather than weights).
- Neurons as binary units with plastic connections (each connection either forwards the signal or inverts it).
- Goal: mimic infant cortex organization through experience — a direct precursor to connectionism/modern unsupervised/reinforcement learning in nets. This was a true early neural network algorithm blueprint (recurrent, learnable via structural changes), predating Rosenblatt's perceptron by a decade. Turing saw it as a path to machine intelligence via bottom-up organization, not rigid programming.
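A toy rendering of the B-type idea, under my own simplifying assumptions: NAND units with random wiring, and a per-connection switch that a "teacher" can flip (Turing's interference training). The class and method names are illustrative, not from Turing's report.

```python
import random

class BTypeMachine:
    """Toy B-type unorganised machine: NAND units, random wiring,
    per-connection switches enabled/disabled by external interference.
    State updates are synchronous."""
    def __init__(self, n_units, seed=0):
        rng = random.Random(seed)
        # Each unit reads two randomly chosen units (possibly itself).
        self.wiring = [(rng.randrange(n_units), rng.randrange(n_units))
                       for _ in range(n_units)]
        # When a switch is off, that input line is held at 1, so a
        # disabled connection leaves the NAND free to follow its other input.
        self.switches = [[True, True] for _ in range(n_units)]
        self.state = [rng.randint(0, 1) for _ in range(n_units)]

    def step(self):
        def read(unit, slot):
            src = self.wiring[unit][slot]
            return self.state[src] if self.switches[unit][slot] else 1
        self.state = [1 - (read(u, 0) & read(u, 1))   # NAND
                      for u in range(len(self.state))]

    def interfere(self, unit, slot, enabled):
        """Training = structurally enabling/disabling connections."""
        self.switches[unit][slot] = enabled
```

Because NAND is universal, interfering with enough switches can in principle organize the random net into any desired logic — Turing's "infant cortex organized by experience."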
- He is widely recognized as the architect behind Taiwan's high-tech miracle, particularly for masterminding the Hsinchu Science Park (Taiwan's "Silicon Valley") in the late 1970s–1980s, which kickstarted the semiconductor and tech ecosystem.
- Sources like Taiwan Today, government commemorations, and articles (e.g., Swarajya Mag calling him the "godfather of technology") consistently use this title for him.
- Born in 1910, died in 2001; he was a key economic planner and minister in post-war Taiwan.
- It's KT Li (K.T. Li), not HT Li.
- No prominent "HT Li" matches the "tech godfather" title in searches.
- Ultra-fast sequential/autoregressive processing — excels at token-by-token generation in LLMs (e.g., chat responses, text completion, translation), where order matters and steps are inherently sequential.
- Deterministic execution with predictable latency (no variability from scheduling or caching misses).
- High throughput at low power — Groq claims up to 10x better energy efficiency than GPUs at the architectural level, with real-world demos showing 2–13x faster inference (e.g., 300+ tokens/second on large models like Llama 3 70B, sub-0.2s time-to-first-token).
- On-chip massive SRAM (hundreds of MB) for near-instant memory access, avoiding GPU-style DRAM bottlenecks.
- Programmable assembly-line architecture (data flows linearly like a production line) vs. GPU's "hub-and-spoke" multi-core parallelism.
- Primarily optimized for inference (running pre-trained models), not training (though it handles some workloads well).
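The serial dependency that LPU-style hardware targets is visible in the decoding loop itself: each new token requires a full pass over everything generated so far, so per-step latency dominates. A minimal greedy-decoding sketch (the `model_step` callable and toy scorer are stand-ins, not a real model API):

```python
def generate(model_step, prompt_tokens, max_new=32, eos=0):
    """Greedy autoregressive decoding: each step consumes the whole sequence
    so far and appends exactly one token — the inherently sequential chain
    that deterministic, low-latency inference pipelines optimize."""
    tokens = list(prompt_tokens)
    for _ in range(max_new):
        scores = model_step(tokens)                          # one inference step
        next_tok = max(range(len(scores)), key=scores.__getitem__)
        tokens.append(next_tok)
        if next_tok == eos:                                  # stop token
            break
    return tokens

# Dummy "model" for illustration only: always favours (last token + 1) mod 5.
toy = lambda toks: [1.0 if i == (toks[-1] + 1) % 5 else 0.0 for i in range(5)]
```

No amount of GPU-style parallelism removes the step-to-step dependency in this loop; only faster, more predictable single-step latency does, which is the LPU pitch.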
- CPU (Central Processing Unit) — General-purpose (Intel/AMD/ARM). Handles everything sequentially; great for control logic, orchestration, but very slow/inefficient for AI math (matrix multiplies). Used for pre/post-processing, not core AI compute.
- GPU (Graphics Processing Unit) — NVIDIA/AMD dominance (H100/B200/MI300). Massively parallel cores excel at training (parallel matrix ops) and mixed workloads (graphics + AI). Versatile, huge ecosystem (CUDA/ROCm), but high power draw, variable latency from scheduling, and memory bandwidth bottlenecks for pure sequential inference. Still the go-to for most training and general inference.
- TPU (Tensor Processing Unit) — Google's custom ASIC. Systolic arrays optimized for tensor/matrix ops; excellent energy efficiency and scale for training + inference in TensorFlow ecosystems. Cloud-only (Google Cloud), less flexible outside Google's stack, strong for hyperscale but not as latency-optimized for single-stream LLM chat as LPUs.
- NPU (Neural Processing Unit) — Edge-focused (Apple Neural Engine, Qualcomm Hexagon, Intel Meteor Lake NPUs). Low-power, on-device inference for phones/laptops (e.g., real-time photo editing, voice assistants). Great efficiency at small scale, but far less powerful than data-center LPUs/GPUs/TPUs.
- Other notables:
- DPU (Data Processing Unit) — Networking/storage offload (NVIDIA BlueField, AMD Pensando). Handles I/O, security, compression to free CPU/GPU for AI.
- QPU (Quantum Processing Unit) — Emerging (IBM, Google, IonQ). For specific optimization problems, not general AI yet.
- FPGA — Reconfigurable (Xilinx/Intel). Flexible for custom AI but harder to program and less performant than dedicated chips.
- CPU — Orchestration, preprocessing, general logic (e.g., AMD EPYC, NVIDIA Vera/Grace).
- GPU — Versatile heavy-lifting for training + broad inference (NVIDIA Rubin/Hopper, AMD MI400 series).
- TPU/Custom Tensor ASICs — Hyperscale matrix efficiency (Google TPU v5/v6, AWS Trainium/Inferentia).
- LPU / Inference-specialized ASICs — Deterministic, ultra-fast language/generative inference (Groq LPU, emerging rivals).
- NPU / Edge AI Accelerators — On-device, low-power inference (Apple, Qualcomm, Intel, mobile/embedded).
- DPU / Infrastructure Accelerators — Networking, storage, security offload for AI factories (NVIDIA BlueField, AMD Pensando).
- $1 trillion demand pipeline for Blackwell + Vera Rubin systems through 2027 (the “big news on chips” you mentioned).
- The “inference era” and “agentic AI inflection point has arrived” — every company now needs an agentic strategy.
- AI factories as “token factories,” extreme codesign across the full 5-layer AI stack (he explicitly referenced the five-layer cake), new Vera Rubin platform (35× better inference performance/cost for agentic workloads), NemoClaw / OpenClaw as the “new operating system” for secure, always-on agents, physical AI / robotics (Isaac GR00T, humanoid learning from human movement, live Disney Olaf robot demo), and token-budget compensation ideas for engineers.
- Mississippi MAIN-style precision-agri agents or health advisors (already running on NVIDIA + open models).
- California / Utah community-college agentic tutor pipelines.
- India-style Sarvam/Bhashini multilingual SME or farm-advisory agents.
- Any partner in 190 countries (the attendee footprint Huang highlighted) that sees livelihoods as a community goal can now deploy local, low-latency agentic systems on NVIDIA hardware without reinventing the stack.
- Calling OpenClaw (the open-source project by Peter Steinberger) “the most popular open-source project in the history of humanity” and “as big a deal as HTML.”
- Announcing full NVIDIA support via the new NemoClaw stack (policy enforcement, network guardrails, privacy routing, OpenShell runtime).
- Declaring: “Every single company in the world today has to have an OpenClaw strategy” and “every software company of the future will be agentic.”
- Investment in them “skyrocketed” last year to ~$150 billion.
- For the first time in history, every one of these companies needs massive compute.
- This is the start of a new platform shift (driven by ChatGPT → reasoning models → agentic models).
- There was no literal announcement or list of exactly “100” named companies.
- It was a category highlight (“AI natives”), not a roll-call of 100 specific startups with details.
- None of the AI natives mentioned or implied were framed as immediate job-creating or Layer 5 community-action players (no education platforms, personalized livelihood tutors, workforce agents, rural/agri/health community tools, or anything like the Mississippi MAIN or India-style local agents we’ve discussed).
- The focus was purely on their compute hunger and role in driving the $1 trillion demand pipeline for NVIDIA chips. They are innovative frontier/application builders, not the community/livelihood-focused ones.
- NVIDIA’s own tools like MONAI (open medical-imaging library) and Parabricks (gene sequencing/analysis) — these are infrastructure enablers, not new AI-native startups.
- General nods to healthcare as one of the industries adopting AI at scale.
- Build open, secure, AI-native 6G platforms that are software-defined and GPU-accelerated.
- Power America’s return to telecom leadership (faster, more intelligent networks, trials already with T-Mobile and others).
- Enable things like ultra-low-latency AI agents, massive device connectivity, and smarter wireless infrastructure.
Not explicitly. The framing in all NVIDIA and Nokia materials is about performance, innovation, commercial leadership, and global competitiveness — not equity, rural broadband gaps, or reducing digital/AI inequality. It could indirectly help (better 6G coverage and lower-latency AI everywhere would make agentic tools more accessible in underserved areas), but the stated design priority is telco efficiency and US tech resurgence, not targeted inequality reduction.

In short: the keynote kept the spotlight on infrastructure supply (chips, agentic platforms, 6G foundations) so that partners, community programs, and the March 18 open-models panel can build the livelihood/education applications on top. Nothing in the 100-AI-natives moment or Nokia reference directly advanced personalized tutors or Layer 5 community job creation — but the tools Jensen unveiled make those applications dramatically easier and cheaper to deploy anywhere.

If you have a screenshot or timestamp from the stream where you saw the slide, I can dig deeper into that exact moment. Otherwise, the March 18 panel (with LangChain, Mistral, AI2, etc.) is still your best bet for the livelihood/community angles. Let me know how else I can help!
- Global ranking: 11th in the world (June 2025 TOP500 list).
- Performance: ~216.5 petaflops (Rmax), with a peak of ~278.6 petaflops.
- It is the fastest supercomputer in the UK (more than 10× faster than the previous UK leader) and has more total compute power than all other UK machines combined.
- In Europe: 6th.
- It is also highly energy-efficient (ranked around 4th on the Green500 in recent lists).
- The new Cambridge system is the UK’s strongest quantum computer — not ranked on classical lists like TOP500.
- The UK’s strongest classical supercomputer (Isambard-AI) ranks 11th globally and is not in the top 10. It is, however, a strong European performer and a major step up for UK AI/HPC capacity.
- Quantum computers are not replacing or even ranking against the top 20 supercomputers. Instead, they are creating hybrid classical-quantum pipelines that will eventually accelerate discovery in a handful of high-value fields.
- The biggest near-term societal impacts are in security (need to upgrade encryption now) and research acceleration in chemistry, materials, and optimization.
- Classical supercomputers (including the UK’s own Isambard-AI at #11 globally) remain the workhorses for everything practical, including the AI and agentic systems we’ve been discussing.
- Classical supercomputers / TPUs remain the workhorses for almost everything DeepMind does: training massive models (Gemini, AlphaFold successors, world models, agentic systems), large-scale simulations, and daily research. Demis has repeatedly emphasised that classical neural networks can model most natural systems efficiently because of learnable patterns (he even works on a new complexity class for “Learnable Natural Systems” that classical computers can handle).
- Quantum computers (including trapped-ion systems like IonQ’s) are used selectively for the narrow problems where they offer a potential edge: quantum chemistry/materials simulation, certain optimisation tasks, quantum machine learning algorithms, and generating high-fidelity training data for AI models. DeepMind/Google researchers have published on hybrid classical-quantum approaches (e.g., using quantum to help simulate complex electron behaviour that then trains faster classical AI models).
- Frugal AI Hub team at Cambridge Judge Business School — They hosted a dedicated session on “Frugal AI for Global Impact” (one of the summit’s key side events). Key participants included Jaideep Prabhu (Professor of Marketing & Innovation), Arjuna Sathiaseelan, Elizabeth Osta, and Serish Gandikota. Their focus was exactly on accessible, low-resource AI for the Global South — very aligned with community-level productivity, SMEs, agriculture/health, and sustainable livelihoods (the same themes you’ve been tracking).
- Anil Madhavapeddy (University of Cambridge Computer Laboratory) — Listed as a speaker in technical and infrastructure sessions.
- Neil Lawrence (DeepMind Professor of Machine Learning at Cambridge; he has strong independent Cambridge ties beyond his DeepMind role) — Spoke at the Research Symposium on AI and its Impact.
- Other notable Cambridge voices present or quoted: Seán Ó hÉigeartaigh (Director of AI FAR at Cambridge), Alison Noble, and Dr. Matteo Zallio (who contributed to the “AI and the Future of Skilling” session).
- Jensen’s “AI as a 5-layer cake” (energy → chips → infrastructure → models → applications) has been widely covered in the UK (e.g., techUK reports, Davos coverage in Financial Times, and WEF recaps), but UK commentary stays high-level: industrial strategy, energy costs, or governance/safety.
- There has been almost no deep UK discussion (from Cambridge, Oxford, or policy circles) that drills into Layer 5 specifically as the layer where agentic systems create community actions, education-to-livelihood pipelines, or localized job creation. The UK AI conversation remains more focused on ethics, safety (Bletchley/Seoul/Paris legacy), regulation, and “frugal/impact” deployment — exactly the angle the Cambridge Judge team took at the India Summit.
- The closest overlaps are the frugal-AI and skilling sessions above, but they don’t reference NVIDIA’s stack explicitly.
- Hong Ge — Research Professor in the Department of Engineering, Research Lead at the Alan Turing Institute, and Fellow of Darwin College. He works on machine learning, probabilistic modelling, and AI safety/robustness.
- Christopher M. Bishop (Chris Bishop) — Technical Fellow at Microsoft, Director of Microsoft Research AI4Science, author of the influential book Deep Learning: Foundations and Concepts. He is a Fellow of Darwin College and one of the most prominent AI figures associated with it.
- Abigail Sellen — Long-time Microsoft Research leader in human-centred AI and HCI; formerly a Research Fellow at Darwin College.
- Several current Darwin students and recent graduates pursue AI-related degrees, including the MRes in AI for the Study of Environmental Risk and other machine learning programmes.
- Darwin College Lecture Series — Regularly features high-profile AI and ethics talks.
- 2025 series included Shannon Vallor (Baillie Gifford Chair in the Ethics of Data and Artificial Intelligence).
- The 2025 theme “The Codemakers” explored computational, biological, and AI-related codes.
- Seminars and informal research talks — Darwin graduate students and Fellows frequently present AI-related work in the college’s Tuesday/Thursday seminar series.
- Scholarships and support — Darwin benefits from broader Cambridge initiatives such as the Spärck AI Scholarships (named after Karen Spärck Jones, who was an Official Fellow at Darwin 1968–1980) and supports students in AI Masters and PhD programmes.
The college’s AI-related voices lean more toward:
- AI ethics, human-centred design, and responsible deployment (aligns with the frugal/inclusive AI work some Cambridge colleagues presented at the India AI Summit).
- Interdisciplinary applications (e.g., AI for environmental risk, scientific discovery, and societal impact).
- A short 1–2 minute talking point or question you could ask Dr Rands or other guests tonight?
- A summary of specific Darwin Fellows/students whose AI work most closely matches your Layer 5 / community-action interests?
- Links to the Darwin Lecture Series recordings or the Frugal AI Hub’s India Summit session?
- Resource efficiency and lower environmental footprint
- Accessibility for emerging markets, low-income communities, and the Global South
- Scalable positive social, economic, and environmental impact (often aligned with UN Sustainable Development Goals — SDGs)
- Inclusive and open approaches, including smaller/open models that can run locally or on limited infrastructure
- India AI Impact Summit 2026 (February 2026, New Delhi): The Frugal AI Hub hosted a dedicated high-level session titled something like "Frugal and Quantum-Ready AI for Nations: Driving Growth, Impact, and the SDGs". This directly addressed scaling AI for agriculture, healthcare, SMEs, skilling, and sustainable livelihoods — the exact themes you’ve been exploring (community-level productivity, job creation, and inclusive growth in India and similar contexts). The team engaged policymakers and industry leaders on shifting from pure scale to sustainable, accessible deployment.
- White papers and frameworks: They have released work on moving from "Total Cost of Ownership" to "Social Impact" measurement, including three-level frameworks (financial ROI + societal/SDG impact). This includes practical tools for organisations to optimise AI portfolios for both efficiency and community benefit.
- Broader activities: Research, podcasts (e.g., "Rethinking AI" series), partnerships (including with UNICC), and advocacy for energy-efficient AI, sustainable data centres, and frugal agentic systems.
- Professor Jaideep Prabhu — Faculty lead; global authority on frugal innovation (author of influential books on the topic).
- Serish Venkata Gandikota (or Venkata “Serish” Gandikota) — Co-Founder & Co-Director; focuses on innovation strategy, sustainability, and inclusive tech.
- Elizabeth Osta — Co-Founder & Co-Director; emphasises inclusive and sustainable AI practices.
- Dr Arjuna Sathiaseelan — Chief Technology Officer; expert in inclusive connectivity and networking for development.
- A short elevator pitch or question you could ask Dr Mike Rands or guests tonight?
- Links to the specific India Summit session recap, white papers, or podcast?
- How this connects more explicitly to open models, agentic Layer 5 tutors, or NVIDIA ecosystem opportunities?
- Sqwish (Startup Innovation – Prompt Compression)
- What it does: Compresses AI prompts to dramatically reduce input size/tokens, making generative AI (GenAI) faster, cheaper, and more energy-efficient without major performance loss.
- Impact: Lowers inference costs and enables deployment in resource-constrained settings (e.g., emerging markets or edge devices).
- Source: Frugal AI Hub White Paper (April 2025) – highlighted as a practical example of frugal techniques in action.
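Sqwish's actual compression method isn't described in the material above, but the general idea of extractive prompt compression can be sketched naively: drop low-information words so fewer input tokens reach the model. Real systems (e.g., LLMLingua-style methods) rank tokens with a small model's perplexity; the stopword filter below is only an illustrative stand-in.

```python
# Naive extractive prompt compression — illustrative only, not Sqwish's method.
STOPWORDS = {"the", "a", "an", "of", "to", "is", "are", "that", "and",
             "please", "very", "really", "just", "in", "for", "on"}

def compress_prompt(prompt: str) -> str:
    """Keep only words not in the stopword list; punctuation is ignored
    when matching so 'report.' still compares as 'report'."""
    kept = [w for w in prompt.split() if w.lower().strip(".,!?") not in STOPWORDS]
    return " ".join(kept)

def compression_ratio(prompt: str) -> float:
    """Characters retained after compression (lower = cheaper inference)."""
    return len(compress_prompt(prompt)) / max(len(prompt), 1)
```

Even this crude filter shows the economics: input tokens are billed and buffered, so any reduction that preserves task performance cuts cost and energy roughly in proportion.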
- DeepSeek LLM (China – Large-Scale Frugal Model Example)
- What it does: A high-performing large language model developed with far fewer resources (compute, energy, data) than Western counterparts like GPT-series, using efficient training methods and open-source tools.
- Impact: Challenges the assumption that cutting-edge AI requires massive hyperscale infrastructure; serves as a blueprint for frugal, high-impact models in constrained environments.
- Source: Frequently cited in Hub podcasts, videos, and white papers (e.g., "Frugal AI in practice" episode, March 2026).
- Healthcare – Lightweight Mobile Diagnostics
- What it does: Small, efficient AI models for disease detection via image analysis on low-end smartphones or basic devices (no cloud dependency).
- Impact: Enables diagnostics in remote/rural/under-resourced areas; personalised treatment recommendations using limited patient data.
- Link to your interests: Directly supports community health access and livelihoods (e.g., frontline workers, local clinics).
- Agriculture – Precision Farming on Low-Power Devices
- What it does: Crop monitoring, disease detection, and advisory tools running on edge devices or analysing limited drone/satellite imagery.
- Impact: Boosts productivity for smallholder farmers in remote communities with minimal compute/energy; reduces waste and increases yields.
- Link to your interests: Mirrors India AI Summit focus on agri productivity, job creation, and SME-level gains.
- Education – Cost-Effective AI Tutors and Grading
- What it does: Lightweight tutoring systems and automated grading that run on low-end devices.
- Impact: Makes personalised learning accessible without high computational costs; reduces teacher workload while expanding reach in underserved areas.
- Strong relevance: Aligns with Layer 5 agentic/community actions for education-to-livelihood pipelines.
- Environment & Sustainability Examples
- IA.rbre project: Urban cooling analysis with minimal sensor data.
- Predict AI’r: Air quality insights using mobile/phone data.
- Microgrid energy optimisation and pollution/deforestation monitoring with limited networks.
- Impact: Supports climate resilience and resource management in low-resource settings.
- Customer Experience: Efficient chatbots/virtual assistants (e.g., FrugalGPT-style) that cut operational costs while maintaining quality.
- Industry: Predictive maintenance and process optimisation on shop floors with variable/limited data, running on edge devices.
- India Summit Ties: The Hub’s session highlighted frugal approaches for SMEs, multilingual tools, healthcare, agriculture, and skilling — often referencing Indian examples like lightweight voice AI, court transcription (Adalat AI), and public platforms (Bhashini).
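The "FrugalGPT-style" pattern mentioned above is a model cascade: answer with a cheap model first and escalate to a larger one only when a scorer finds the answer unconvincing. A minimal sketch — model names, costs, and the confidence scorer are all illustrative placeholders, not a real API:

```python
# FrugalGPT-style cascade sketch (illustrative placeholders throughout).
def cascade(query, models, scorer, threshold=0.8):
    """models: list of (name, cost, answer_fn) ordered cheap -> expensive.
    Returns the first answer the scorer accepts, plus cumulative cost."""
    total_cost = 0.0
    for name, cost, answer_fn in models:
        answer = answer_fn(query)
        total_cost += cost
        if scorer(query, answer) >= threshold:
            return answer, name, total_cost
    # No answer passed the bar: fall back to the last (largest) model's reply.
    return answer, name, total_cost

# Toy setup: the small model only "handles" short queries.
small = ("small", 0.001, lambda q: "short answer" if len(q) < 20 else "unsure")
large = ("large", 0.02,  lambda q: "detailed answer")
score = lambda q, a: 0.0 if a == "unsure" else 1.0
```

When most traffic is easy, most queries stop at the cheap tier — the same cost/quality trade-off the Hub's frugal case studies emphasise.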
- Main White Paper (April 2025): https://frugalai.org/wp-content/uploads/2025/04/2025-Frugal-AI-Hub-WhitePaper-Final.pdf
- Hub website: https://frugalai.org/ (downloadable resources and Adoption Lab info)
- Podcast episode “Frugal AI in practice”: Available on Spotify/Cambridge Judge channels (March 2026)
- AI for Auto (Aurora, Momenta, Nuro, Pony.ai, Waabi, Waymo, Wayve) — All about autonomous vehicles, trucking, and robotaxis. Industrial/commercial efficiency, not community livelihoods.
- AI for Customer Support (Decagon, ElevenLabs, Fin, Kore.ai, Sierra, Synthesia) — Enterprise chat/support agents or media generation tools. Mostly B2B productivity, not broad community access.
- AI for Engineering (BeyondMath, Emmi, NeuralConcept, Luminary, Pasteur Labs, Periodic Labs, PhysicsX, Ricursive) — Physics simulation, materials discovery, product design. Scientific/industrial R&D, not community-scale.
- AI for Robotics (Agibot, Agile Robots, Field AI, Figure AI, Physical Intelligence, Skild AI, etc.) — Humanoids, industrial robots, generalist robot learning. Exciting for physical AI, but focused on manufacturing, warehouses, or research labs rather than community livelihood tools.
- AI for Search (Brave, Glean, Perplexity, You) — Enterprise or consumer search engines. Perplexity is strong on research, but not Layer 5 community agents.
- AI for Software Development (Cursor, Cognition/Devin, Factory, CodeRabbit, Lovable, etc.) — Developer productivity tools and AI coding agents. These are powerful, but aimed at professional programmers, not community education or livelihood pipelines.
- Healthcare (Arc Institute, Chan Zuckerberg Biohub, Isomorphic Labs, Chai Discovery, Boltz, Institute for Protein Design, etc.) — Mostly frontier drug discovery, protein design, and biomedical research. High scientific impact, but not community-deployed health agents for local clinics or personalised livelihood-linked wellness.
- Agent Frameworks (LangChain, CrewAI, AutoGen, OpenClaw, A2A, MCP) — These are the strongest indirect connection. They enable others to build Layer 5 agentic applications. OpenClaw received Jensen’s big shoutout, and LangChain has been used in many community/education pilots. However, the companies themselves are infrastructure/framework providers, not direct builders of community livelihood agents.
- Frontier Model Builders (AI2/OLMo, Mistral, Black Forest Labs/FLUX, DeepSeek, Hugging Face, Cohere, etc.) — Many produce open models that can power frugal/community Layer 5 apps (e.g., AI2’s fully open OLMo, Mistral’s open weights, Hugging Face as the hub). These are enablers, not end-user community applications. DeepSeek is often cited for frugal training, which aligns with the Cambridge Frugal AI Hub philosophy.
- Model to Production / Inference (Hugging Face, Replicate, Together AI, Fireworks AI, Anyscale, vLLM, etc.) — These make it easier and cheaper to deploy models at scale, which helps community efforts, but again, they are platforms, not direct community-action builders.
- Enterprise/vertical productivity
- Frontier research/tools
- Consumer or industrial automation
- Open-weight model makers (AI2, Mistral, DeepSeek, Hugging Face)
- Agent frameworks (LangChain, CrewAI, OpenClaw)
- Any company adopting NVIDIA’s open models for agentic use (as announced in the GTC press release: CodeRabbit, Cursor, Factory, Perplexity, etc.)
- Highlight the 8–10 companies from the list that are most reusable for frugal/community Layer 5 work?
- Compare them directly to the Cambridge Frugal AI case studies?
- Draft a short question or observation you could share at your Darwin College event tonight about this gap?
- He gave OpenClaw (and the new NVIDIA-backed NemoClaw stack) a massive, high-profile endorsement — calling it one of the most important software developments in history, “as big as HTML,” and stating that “every company in the world today has to have an OpenClaw strategy.” He positioned it as the emerging operating system for agentic AI (personal, proactive, always-on agents).
- He used the moment (and the broader “AI Natives” slide with 103 companies) to deliver a very strong demand signal to Wall Street and investors: $1 trillion in visibility for Blackwell + Vera Rubin through 2027 (roughly double the previous $500B guidance through 2026). This was clearly aimed at reinforcing NVIDIA’s dominant position and justifying the huge valuations and backlog.
- Computex Taiwan 2026 (June 2–5, Taipei) is the most likely next major stage where Jensen will speak. He has delivered keynotes there in the past, and the 2026 edition already has an exceptionally strong AI focus (“AI Together” theme with 30+ global tech leaders). Taiwan’s ecosystem (TSMC, supply chain, hardware partners) makes it a natural venue for him to talk about deployment, edge/on-device AI, and real-world applications.
- Whether he chooses to lean into Layer 5 community/livelihood/SDG applications at Computex (or during any Asia swing) remains to be seen. Historically, Jensen’s big-stage talks have been infrastructure- and platform-heavy, while he leaves the “how this helps communities, education, agriculture, and inclusive growth” narrative to partners, governments, and events like the India Summit or NVIDIA’s own impact-focused sessions.
- Asia trips (Taiwan, potentially Singapore, Japan, or India later in 2026) are where he tends to engage more with sovereign AI, national development goals, and broader societal impact — so your instinct that this could be the moment he rebalances toward the “compass of AI for humanity” is reasonable.
- Bletchley Park (2023) — safety/existential risk
- Seoul (2024) — innovation + security
- Paris (2025) — economic/societal impact
- New Delhi (2026) — development, inclusion, Global South/SDGs
- Neutral, trusted multilateral platform — Geneva is the “epicentre of multilateralism” (home to UN agencies, ITU, WHO, WTO, Red Cross, etc.). Switzerland positions itself as a fair, impartial mediator between major power blocs (US, China, EU, Global South) at a time of geopolitical tension. It wants to move the conversation beyond polarisation (doom vs. blind acceleration) toward pragmatic, human-centred, rules-based governance.
- Bridge governance + innovation — Switzerland aims to blend:
- Strong international law, human rights, and ethical frameworks (Geneva’s governance strength)
- World-class research/innovation (ETH Zurich, EPFL, strong startup ecosystem — the “Swiss AI Trinity” of Zurich entrepreneurship + Geneva governance + communal subsidiarity)
- Practical & impactful AI for humanity/SDGs — The summit is expected to emphasise real-world transformation rather than just high-level principles. This includes equitable access, sustainable deployment, development impact, and concrete applications that serve society (building directly on New Delhi’s inclusion focus). Swiss sources highlight contributing to “globally sustainable AI governance,” reducing inequality, and empowering local/communal levels.
- Strengthen Switzerland’s own position — Boost its digital policy influence, position Geneva as a permanent AI governance hub, and showcase Swiss AI innovation/research to the world.
- Any meaningful Layer 5 / community/livelihood/SDG signals from Jensen or NVIDIA (especially at Computex or Asia events).
- Early shaping of the Geneva 2027 agenda (whether it truly elevates practical, inclusive, bottom-up AI applications or stays more governance-focused).