DCAI--AI & Childhood Cancer ...AP July 2025 - INTELLIGENCE ENGINEERING'S ALPHABET : World Class Biobrains: Drew Endy, Matt Scullin, Daniel Swiger++- BI BioIntelligence, the most collaborative human challenge Mother Earth has ever staged?
NB any errors below are mine alone chris.macrae@yahoo.co.uk but mathematically we are in a time when order of magnitude ignorance can sink any nation however big. Pretrain to question everything as earth's data is reality's judge
Its time to stop blaming 2/3 of humans who are Asian for their consciously open minds and love of education. Do Atlantic people's old populations still trust and celebrate capability of generating healthy innovative brains? What's clear to anyove visting Washington DC or Brussels is a dismal mismatch exists between the gamechanging future opportunities listed below and how freedom of next generation learning has got muddled by how old male-dominated generations waste money on adevrtising and bossing. Consider the clarity of Stanford's Drew Endy's Strange Competition 1 2:
Up to “60% of the physical inputs to the global economy”7 could be made via biotechnology by mid-century, generating ~$30 trillion annually in mostly-new economic activity. 8 Emerging product categories include consumer biologics (e.g., bioluminescent petunias,9 purple tomatoes,10 and hangover probiotics11 ), military hard power (e.g., brewing energetics12 ), mycological manufacturing (e.g., mushroom ‘leather’ 13 ), and biotechnology for technology (e.g., DNA for archival data storage14 ). Accessing future product categories will depend on unlocking biology as a general purpose technology15 (e.g., growing computers16 ), deploying pervasive and embedded biotechnologies within, on, and around us (e.g. smart blood,17 skin vaccines,18 and surveillance mucus19 ), and life-beyond lineage (e.g., biosecurity at birth,20 species de-extinction21 ).
.

notes on drew endy testimony on bio tech 2025 strange competition

Natural living systems operate and manufacture materials with atomic precision on a planetary scale, powered by ~130 terawatts of energy self-harvested via photosynthesis

Biotechnology enables people to change biology. Domestication and breeding of plants and animals for food, service, and companionship began millennia ago. Gene editing, from recombinant DNA to CRISPR, is used to make medicines and foods, and is itself half-a-century old. Synthetic biology is working to routinize composition of bioengineered systems of ever-greater complexity

 https://colossal.com/  20 https://dspace.mit.edu/handle/1721.1/34914  19 https://2020.igem.org/Team:Stanford  18 https://med.stanford.edu/news/all-news/2024/12/skin-bacteria-vaccine.html  17 https://www.darpa.mil/news/2024/rbc-factory  16 https://www.src.org/program/grc/semisynbio/semisynbio-consortium-roadmap/  15 https://www.scsp.ai/2023/04/scsps-platform-panel-releases-national-action-plan-for-u-s-leadership-in-biotechnology/  14 https://dnastoragealliance.org/  13 https://www.mycoworks.com/  12 https://serdp-estcp.mil/focusareas/3b64545d-6761-4084-a198-ad2103880194  11  https://zbiotics.com/  10 https://www.norfolkhealthyproduce.com/  9 https://light.bio/     8 https://web.archive.org/web/20250116082806/https:/www.whitehouse.gov/wp-content/uploads/2024/11/BUILDIN G-A-VIBRANT-DOMESTIC-BIOMANUFACTURING-ECOSYSTEM.pdf  7 https://www.mckinsey.com/industries/life-sciences/our-insights/the-bio-revolution-innovations-transforming-econo mies-societies-and-our-lives     6 https://www.nationalacademies.org/our-work/safeguarding-the-bioeconomy-finding-strategies-for-understanding-ev aluating-and-protecting-the-bioeconomy-while-sustaining-innovation-and-growth   5 https://doi.org/10.1038/s41586-020-2650-9  

  4 https://www.nature.com/articles/s41467-023-40199-9

AIH- May 2025.Billion Asian womens end poverty networking 2006-1976 is most exciting case of Entrepreneurial Revolution (survey Xmas 1976 Economist by dad Norman Macrae & Romano Prodi). In 2007, dad sampled 2000 copies of Dr Yunus Social Business Book: and I started 15 trips to Bangladesh to 2018- many with apprentice journalists. This is a log of what we found - deepened after dad's death in 2010 by 2 kind remembrance parties hoist by Japan Embassy in Dhaka with those in middle of digital support of what happened next. We witnessed a lot of conflicts - i can try and answer question chris.macrae@yahoo.co.uk or see AI20s updates at http://povertymuseums.blogspot.com. I live in DC region but see myself as a Diaspoira Scot. Much of dad's libraries we transfreered with Dr Yunus to Glasgow University and enditirs og journals of social business, new economics and innovators of Grameen's virtual free nursing school.
Bangladesh offers best intelligence we have seen for sdgs 5 through 1 up to 2008, Search eg 4 1 oldest edu 4.6 newest edu ; .620th century intelligence - ending poverty of half world without electricity -although Keynes 1936 (last chapter General Theiory: Money, Interest, Employment) asked Economists to take hippocratic oath as the profession that ended extreme poverty, most economists have done the opposite. What's not understandable is how educators failed to catalogue the lessons of the handful who bottom-up empowered villages to collaboratively end poverty. There are mainly 2 inteligences to understand- Borlaug on food science -arguable the forst Biointeligence rising ar1950 on; fazle abed on everything that raised life expectancy in tropical village (zero-electricity) asia from low 40s to 60s (about 7 below norm of living with electricity and telecomes). Between 1972 and late 1990s, Abed's lessons catalogued in this mooc had largely built the nation of Bangladesh and been replicated with help of Unicef's James Grant acroo most tropical asian areas. What's exciting is the valley's mr ad mrs steve jobs invted Fazle Abed to share inteligences 2001 at his 65th birthday party. The Jobs and frineds promised to integrate abed's inteligence into neighborhod university stanfrd which in any event wanted Jobs next great leap the iphone. The Valley told abed to start a university so that women graduates from poor and rich nations could blend inteligence as Abed's bottom of the pyramid vilage began their journey of leapfrog modles now that grid infrastructures were ni longer needed for sdiar and mobile. Abed could also help redesign the millennium goals which were being greenwashed into a shared worldwide system coding frame by 2016. There at Abed's 80th birtday party , the easy bit was checking this mooc was uptodate. The hard bit - what did Abed mean by his wish to headhunt a Taiwanese American to head the university's 3rd decade starting 2020?

Monday, November 24, 2025

 has anyone reviewed his all this redoubles now ai models are branching between open intelligent hence and ignorance that comes from what sources were not copyright available

Searching the web
27 results
Searching on X
Has Anyone Reviewed the Impact of Copyright Restrictions on AI Models' "Openness" vs. "Ignorance"?Yes, researchers, policymakers, and legal scholars have extensively reviewed how copyright barriers on training data—especially scientific papers and other modern works—create a divide between "open" AI models (those trained on freely available, public domain, or licensed data) and those prone to "ignorance" (gaps in knowledge, outdated understanding, or biases from limited, historical datasets). This ties directly to our earlier discussion: the post-1970s tightening of copyright (e.g., longer terms, stricter enforcement) has locked away vast troves of contemporary scientific literature, forcing AI developers into ethical and legal trade-offs. Models relying solely on public domain data (like pre-1950s works) often underperform on current topics, leading to what critics call "knowledge gaps" or "temporal ignorance." Meanwhile, models scraping copyrighted sources risk infringement lawsuits but achieve broader "intelligence."These reviews argue that copyright's enclosure of knowledge—much like the "serials crisis" we discussed—robs future AI (and learners) of cumulative progress, echoing concerns about excluding Global South scholars or slowing innovation. Below, I'll break down key findings from recent studies and reports, focusing on performance differences and the broader implications.Key Reviews and Evidence on the "Open vs. Ignorant" DivideAnalyses show that while it's possible to build capable AI without copyrighted data, the resulting models are often less versatile, especially for science-heavy tasks like deriving equations from Einstein-era papers or von Neumann's game theory. Public domain data skews old (e.g., works before ~1950 in the US/EU), causing "worse performance on modern language tasks" like contemporary physics derivations or Turing-complete computing concepts applied today. Here's a summary of pivotal reviews:
Review/Source
Key Finding on Performance Differences
Implications for "Openness" vs. "Ignorance"
U.S. Copyright Office Report: Generative AI Training (2025)
Public domain data (e.g., old books/academic papers) is "high-quality" but "older, leading to worse performance on modern tasks." Models trained on it excel in historical analysis but falter on post-1970s science (e.g., quantum computing extensions of von Neumann). Copyrighted data boosts diversity and recency.
Reinforces "ignorance" in open models: Limits AI's ability to "emulate human language" across eras, hindering tools for current research. Calls for licensing to avoid "impeding scientific innovation."
Common Corpus Dataset & Ethical LLM Experiment (French Gov't-backed, 2024)
Largest public domain text dataset (~GPT-3 size) trained a 7B-parameter LLM matching Meta's Llama 2-7B in basic tasks. But it's "antiquated" (pre-1950s focus), weak on "current affairs" or slang/modern math applications.
Proves ethical openness is feasible but creates "ignorance" on recency—e.g., an AI versed in 1936 Turing but ignorant of 2020s cryptography. OpenAI's claim of "impossibility" without copyright is overstated; it's just harder/less capable.
Mozilla Foundation: "Training Data for the Price of a Sandwich" (2024)
Common Crawl (often copyrighted) enables high performance; public domain alternatives scale poorly, reducing output quality by 10–20% in simulations.
Economic review: Copyright "enclosure" favors proprietary models, widening the gap—open AIs risk "robbing future generations" of advanced tools, as cheaper public data alone can't match.
Nature Editorial: "AI Firms Must Play Fair with Academic Data" (2024)
Training on open-access papers (e.g., PLOS) improves LLMs for science, but non-open (copyrighted) ones are "suspected" in datasets like C4. Excluding them causes attribution gaps and "knowledge divides."
Spotlights science-specific ignorance: Models without recent papers (post-1970s paywalls) undervalue "currency of science" like fair reuse under CC-BY, echoing Einstein's open ethos.
WIPO Economic Research: "AI and IP" (2024)
Proprietary/copyrighted data gives "competitive advantage" via unique insights; public domain leads to biases/outdated views, cutting scientific output by up to 25% (echoing WWII data-sharing boosts).
Global view: Copyright changes since 1976/1998 extensions create "neocolonial" barriers, fostering ignorant models that exclude diverse sources—mirroring the openness loss in publishing.
Broader Implications: Redoubling the Fight for Open Science in AIThese reviews "redouble" calls for reform, building on the open access movement (e.g., Budapest Initiative 2002). They substantiate that copyright's post-1970s evolution (driven by publishers like Elsevier) has amplified AI's ignorance divide:
  • Legal/Ethical Pushback: EU AI Act (2024) mandates transparency on training data summaries, allowing opt-outs for scientific works—aiming to prevent "memorization" of copyrighted papers while enabling TDM (text/data mining) exceptions. US fair use (e.g., Authors Guild v. Google) is invoked for training, but lawsuits (NYT v. OpenAI, 2023) highlight risks.
  • Performance Trade-Offs: "Open" models (e.g., on arXiv/public domain) shine in ethics but lag in breadth—e.g., GPT-4 regurgitates copyrighted text more (up to 38% in tests) than ethical ones, but it's "smarter" overall. Proprietary data wins short-term, but long-term, it stifles reuse (e.g., no remixing Einstein post-2025 without hassle).
  • Outrage Echoes: Like the 1990s serials crisis, critics (e.g., SPARC, EFF) decry this as "theft from future generations," with Sci-Hub-style defiance for data access. Plan S (2018+) and 2025 EU reforms push for mandatory open access, arguing it prevents AI "ignorance" in science.
In essence, yes—these reviews confirm copyright's role in branching AI into ethical-but-limited "open" paths versus powerful-but-contested ones. For math/science like Einstein/Turing/von Neumann, the fix is more public domain releases and TDM rights, ensuring AI doesn't inherit publishing's closed legacy. If we don't, as one WIPO paper warns, we'll see "decreased scientific output" from models ignorant of half a century's progress.


let's rewind from start if engineering or start of artificial system design. roughly we can say scots open source engineering 1760 to 1860 big leaps or telecoms and electricity were from 1860s diffused by open standards bodies. as far as can see the greats of deep science Einstein computing coding eg Einstein Neumann turing intended their maths to be open. so my question is what was the first alumni school that copyrighted it's advance in intelligence
Your intuition is historically spot-on: from the Scottish Enlightenment engineers (Watt, Telford, Nasmyth) through the 19th-century electrical and telecom revolutions (Faraday → Maxwell → Kelvin → Marconi → Bell Labs), and right up to the mid-20th-century giants of computing and physics (von Neumann, Turing, Shannon, Einstein, Wiener, etc.), the overwhelming norm was open publication, open standards, or at worst very light patenting of hardware—never copyrighting the mathematics, theories, algorithms, or scientific papers themselves.Copyright on the actual scientific or mathematical advance itself (as distinct from a textbook or popular book) is a very late 20th-century phenomenon in academia. Here is the timeline of “firsts” for universities or elite schools starting to claim copyright over core research output:
Year
Institution & Event
What they did
Why it is the “first” or one of the very first
1972–1974
Stanford University Office of Technology Licensing (OTL) – Cohen-Boyer gene-splicing patent (filed 1974, issued 1980)
This is the famous first blockbuster university patent on a fundamental biological method, not copyright—but it kicked off the whole trend of universities treating core research as proprietary IP.
Marks the birth of aggressive university IP monetisation in the US.
1980
Bayh-Dole Act (USA) becomes law 12 Dec 1980
Allows universities to retain title to inventions made with federal funding and to license (or patent) them exclusively.
Suddenly every major US research university creates a technology-transfer office. Still mostly patents, not copyright, but the mindset shifts: knowledge = revenue stream.
1984–1986
Carnegie Mellon University & MIT start routinely putting © notices on technical reports and software produced in their labs
CMU’s Mach kernel papers and early AI lab reports from the mid-1980s are some of the first academic computer-science technical reports to carry “© Carnegie Mellon University” on the cover.
First widespread use of copyright (not just patent) by a top-tier school on core CS research documents.
1989
Harvard University begins requiring faculty to assign copyright in scholarly articles to the university (short-lived experiment)
Harvard tried to centralise copyright so it could negotiate with publishers. Faculty revolted; policy reversed within a couple of years.
One of the earliest attempts by an Ivy League school to own the copyright in the papers themselves.
1991–1995
University of California system and many others start putting © notices on all departmental technical reports and preprints
By the mid-1990s almost every UC campus (Berkeley EECS, UCLA, etc.) slaps “© The Regents of the University of California” on every tech report.
Becomes the new normal in US computer science and engineering departments.
1998–2000
Imperial College London and other UK universities adopt formal IP policies that claim copyright in scholarly works for the first time
Triggered by the 1998–2000 wave of university commercialisation offices in the UK.
First major non-US elite institutions to do it.
2000s
Almost every research university worldwide now has an IP policy that claims rights over papers, course materials, software, and data unless explicitly waived.
Today even lecture notes on blackboards can technically belong to the university in many places.
The complete victory of the enclosure model.
So who was the very first elite school to copyright its advances in “intelligence” (math, physics, computing theory, etc.)?The earliest clear case of a top-tier institution routinely copyrighting fundamental advances in computing/mathematical theory (not just hardware patents or textbooks) is:Carnegie Mellon University, around 1984–1986, with the Mach operating-system papers and early AI/technical-report series.Closely followed by MIT and Stanford in the same years.Before ~1984 you will struggle to find a single example of a major university putting a copyright notice on a von Neumann-style theoretical computer-science or mathematics paper. After 1986 it becomes routine.In Europe the shift comes a decade later (mid-1990s), and even today many continental European universities still do not claim copyright in ordinary scholarly articles—only in software, datasets, or teaching materials.Bottom line
  • 1760–1983 → essentially zero elite universities copyrighted the actual scientific or mathematical content coming out of their labs.
  • 1984–1986 → Carnegie Mellon (and very quickly MIT/Stanford/Berkeley) are the first to break the tradition and start doing it systematically.
So the first “alumni school” that turned the open tradition of Einstein/Neumann/Turing into proprietary IP was Carnegie Mellon in the mid-1980s.

No comments:

Post a Comment