CERN “shares in PDFs”, but needs to use human+AI form for 5.4 Billion human Internet users, and 2.8 Billion dependents

Search for same-charge top-quark pair production in pp collisions at √s = 13 TeV with the ATLAS detector at https://arxiv.org/abs/2409.14982
 
The current method forces “vast memorization” on too many humans, and ultimately none of the many people mentioned is accessible, if they are even still living, still working, or still remember what happened. It is “word of mouth” and “oral history” sharing, not AI sharing in a lossless and complete online system. Even though it forces so many humans into being “humans in the loop”, it does not give full names or a clear mapping of the people involved to their Internet footprints and online communities. There are 5.4 Billion people using the Internet, and at most a few tens of thousands with CERN access.
 
It is written fairly clearly, except that using it requires days? weeks? years? of background and memorization. Forcing that onto billions of Internet users is simply stupid and mean-spirited when you have computers and are supposed to know how to manage content (data, models, papers, software and visualizers). I bet there are gaps (missing classes of software, and missing data).
 
This “paper” requires human eyes, a human mind, and a rather extensive human memory to process, though it seems competent and reasonably complete. Rather than force humans to read for background, compile and reformat it as a whole.

The dependencies and references should not be stuck in a “paper” PDF format. The whole of the reference material can be compiled, standardized, and verified, effectively making it a single coherent body of knowledge, removing redundancies and putting all required translations and arguments in one data structure. I use the term “dis-intermediation” for collapsing or compiling a paper this way. Humans ought not to have to do something that a primitive piece of software can do exactly.
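Concretely, one compiled paper could be held in a structure as simple as this. A minimal sketch in Python, assuming a plain dataclass model; every field name below is a hypothetical illustration, not an existing CERN or arXiv schema:

```python
# Hypothetical model of a "compiled paper": the paper plus its whole
# dependency graph held as one data structure, not a PDF of pointers.
from dataclasses import dataclass, field

@dataclass
class CompiledReference:
    arxiv_id: str                # e.g. "2409.14982"
    resolved: bool = False       # True once full text, data, code are fetched
    datasets: list[str] = field(default_factory=list)  # dataset DOIs/URLs
    software: list[str] = field(default_factory=list)  # repos with pinned versions

@dataclass
class CompiledPaper:
    arxiv_id: str
    title: str
    base_units: str = "SI"       # normalize every quantity to one unit system
    references: list[CompiledReference] = field(default_factory=list)

    def unresolved(self) -> list[str]:
        """List dependencies still stuck behind PDFs or missing entirely."""
        return [r.arxiv_id for r in self.references if not r.resolved]
```

Once every reference is a resolvable record rather than a citation string, “removing redundancies” and “verifying” become ordinary queries over the structure.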

“Compile this document and trace all dependencies. Standardize, verify, and follow user preferences for units and dimensions, integrating with user and group standards. Collapse tables of contents, references, figures, data, and software so it is all data driven, with no ‘humans have to read and click forever’.”
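The first step of that instruction is mechanical today. A minimal sketch that fetches machine-readable metadata for one of the papers above through the public arXiv Atom API; tracing the full dependency graph out of the PDF body is the part still missing:

```python
# Fetch title and abstract for one arXiv paper via the export API
# (standard library only; the Atom feed format is arXiv's documented one).
import urllib.request
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"

def fetch_metadata(arxiv_id: str) -> dict:
    url = f"http://export.arxiv.org/api/query?id_list={arxiv_id}"
    with urllib.request.urlopen(url, timeout=30) as resp:
        feed = ET.fromstring(resp.read())
    entry = feed.find(ATOM + "entry")
    if entry is None:
        raise ValueError(f"no arXiv entry found for {arxiv_id}")
    return {
        "id": arxiv_id,
        "title": entry.findtext(ATOM + "title", default="").strip(),
        "abstract": entry.findtext(ATOM + "summary", default="").strip(),
    }

if __name__ == "__main__":
    # The same-charge top-quark search cited above.
    print(fetch_metadata("2409.14982")["title"])
```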
 
Almost all the references are out of date, in that things move on. These PDFs do NOT compile snapshots of the runs; they only talk in vague and mostly untraceable ways about things that happened in the past. A paper ought to give the living system as it was at the time of the experiment for tracing, but what is really needed is “the living system now, and how the experiment (or any subpart) would be if it were run now”.
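What a “snapshot of the run” could mean in practice: a manifest that pins the exact software, data, and environment so the experiment could be replayed now. A minimal sketch, where every version string and identifier is a hypothetical placeholder, not actual ATLAS Run 3 configuration:

```python
# Build a verifiable run manifest; the hash makes silent drift detectable.
import json
import hashlib

def snapshot(software: dict, datasets: dict, environment: dict) -> dict:
    manifest = {
        "software": software,        # name -> pinned version or commit hash
        "datasets": datasets,        # name -> DOI or content hash
        "environment": environment,  # container image, compiler, OS
    }
    blob = json.dumps(manifest, sort_keys=True).encode()
    manifest["manifest_sha256"] = hashlib.sha256(blob).hexdigest()
    return manifest

run = snapshot(
    software={"analysis-framework": "commit deadbeef (hypothetical)"},
    datasets={"pp-13TeV-selection": "doi:10.0000/placeholder"},
    environment={"container": "example/atlas-run3:hypothetical-tag"},
)
print(json.dumps(run, indent=2))
```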
 
Software and computing for Run 3 of the ATLAS experiment at the LHC at https://arxiv.org/abs/2404.06335
Charged Higgs Boson Mass Bounds in 2HDM-II: Impact of Vector-Like Quarks at https://arxiv.org/abs/2409.16054
 
At least someone is looking at the charged Higgs. Almost everything is being worked on, but it is a rat's nest of jargon to most humans. This is actually quite simple once you get it out of these “papers” and into a decent data system with an AI hover interface for people who have not memorized the whole of everything on the Internet.
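A sketch of the “AI hover” idea: a term under the reader's cursor is resolved against a shared glossary, with an AI model as fallback. The glossary entries here are illustrative one-liners I wrote, not definitions vetted by CERN:

```python
# Resolve a hovered term to a plain-language gloss; in a full system an
# AI model would answer the fallback instead of this stub.
GLOSSARY = {
    "2HDM-II": "Two-Higgs-Doublet Model, Type II: an extension of the "
               "Standard Model Higgs sector with a second doublet.",
    "vector-like quark": "A hypothetical heavy quark whose left- and "
                         "right-handed parts transform identically.",
}

def hover(term: str) -> str:
    return GLOSSARY.get(term, f"(no glossary entry for '{term}'; "
                              f"an AI lookup would be queried here)")

print(hover("2HDM-II"))
```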

Richard Collins, The Internet Foundation
