# Give AIs full mathematics, models, equations, calculators, computers, sensors and memory of their own

Cool Worlds Podcast: #4 Hod Lipson – Automated Physics Discovery, ChatGPT, Future of AI at https://www.youtube.com/watch?v=KT7gAYmnOhI

Hod Lipson, if you take a time series and its first and second differences, the systems where "velocity" and "acceleration" work are the ones where the second difference, the "acceleration", is normally distributed. For 25 years I have checked the datastreams on the Internet: arrays of cameras and microphones, arrays of seismometers, gravimeters, magnetometers, temperature, pressure, electric fields, magnetic fields, signals of all kinds. The list for the whole Internet is long, but not infinite. For some time series you can keep taking differences down to something like the 20th difference (Boole's calculus of finite differences, though you do not need to normalize, just take simple differences). Where the sequence of, say, 5th differences is nearly perfectly Gaussian/Normal, you can stop. And the dimensions you mention are a practical way to know if you need to dig deeper. I think any person (Gauss, Newton, etc.) taking differences by hand is going to find that and use it. For many simple physics problems you do not need infinite depth of differences, just enough, and maybe one more.
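
The stopping rule above can be sketched with synthetic data. This is a minimal illustration, not any group's production method: the "position" stream, the noise level, and the crude moment-based Gaussianity test are all my assumptions here.

```python
import random
import statistics

def differences(xs, order):
    """Take simple (unnormalized) forward differences `order` times."""
    for _ in range(order):
        xs = [b - a for a, b in zip(xs, xs[1:])]
    return xs

def looks_gaussian(xs, tol=0.5):
    """Crude moment test: sample skewness and excess kurtosis near zero."""
    n = len(xs)
    m = statistics.fmean(xs)
    s = statistics.pstdev(xs) or 1e-12
    z = [(x - m) / s for x in xs]
    skew = sum(v ** 3 for v in z) / n
    kurt = sum(v ** 4 for v in z) / n - 3.0
    return abs(skew) < tol and abs(kurt) < tol

# synthetic "position" stream: constant acceleration plus small Gaussian noise
random.seed(42)
accel, dt = 2.0, 1.0
pos = [0.5 * accel * (i * dt) ** 2 + random.gauss(0, 0.01) for i in range(5000)]

# stop at the first difference order whose values look Gaussian
order = next(k for k in range(8) if looks_gaussian(differences(pos, k)))
print("stop at difference order:", order)
```

For this constant-acceleration stream the raw values are skewed, the first differences are spread out like a ramp, and the second differences are a constant plus (still-Gaussian) differenced noise, so the search stops at order 2.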

One thing you did not mention: you have to do it in 3D. If your camera is taking 2D images and you try to train from raw data, it won't work; it is ambiguous. Can the algorithms work? Yes, but it will be ambiguous in enough critical places that you need vector rotations, translations, and the basics of changing viewpoints and ray tracing. You can get a lot of "toy" and even "practical" results, but the compression from knowing the basics of 3D saves the algorithms much effort.
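
The ambiguity is easy to demonstrate with a pinhole projection sketch (my own toy example; the point coordinates and camera offset are arbitrary): two different 3D points on the same ray land on the same 2D pixel, and only a second viewpoint separates them.

```python
# pinhole camera: a 3D point (x, y, z) lands on the image plane at (f*x/z, f*y/z)
def project(point, f=1.0):
    x, y, z = point
    return (f * x / z, f * y / z)

def translate(point, t):
    """View the point from a camera displaced by t."""
    return tuple(a - b for a, b in zip(point, t))

p_near = (1.0, 2.0, 4.0)
p_far = (2.0, 4.0, 8.0)  # same ray from the camera, twice as far away

# one 2D view cannot tell these apart: depth is lost in projection
assert project(p_near) == project(p_far)

# a second viewpoint (camera shifted 1 unit along x) separates them
cam2 = (1.0, 0.0, 0.0)
print(project(translate(p_near, cam2)), project(translate(p_far, cam2)))
```

This is the compression the paragraph describes: giving the learner projection, translation, and rotation up front removes a whole family of ambiguities it would otherwise have to fight through in the raw data.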

Many groups on the Internet use constrained optimization. That is my generic term for all methods that project and compare with one or many parallel metrics. And many find that knowing the basic rules, and modeling with fundamental concepts like energy, power, momentum, flow rate, accelerated flows, viscosity, and many hundreds more, pays off: those can be measured, and those can often be compared. But many of the instrument and sensor designers are NOT using the same standards internally. I checked, and I encourage them to "do it right".
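
A minimal sketch of "project and compare with parallel metrics", under my own assumptions (a free-fall model, synthetic noisy observations, and a simple grid search): the same physics model is scored against the data by two different metrics in parallel, and both recover the underlying parameter.

```python
import random

# synthetic observations of free fall: p(t) = 0.5 * a * t^2 plus sensor noise
random.seed(0)
true_a = 9.8
ts = [i * 0.1 for i in range(100)]
obs = [0.5 * true_a * t * t + random.gauss(0, 0.05) for t in ts]

def model(a, t):
    return 0.5 * a * t * t

def sse(a):
    """Metric 1: sum of squared errors between model and observations."""
    return sum((model(a, t) - o) ** 2 for t, o in zip(ts, obs))

def max_err(a):
    """Metric 2 (in parallel): worst-case absolute error."""
    return max(abs(model(a, t) - o) for t, o in zip(ts, obs))

candidates = [9.0 + k * 0.01 for k in range(161)]  # grid of trial accelerations
best_sse = min(candidates, key=sse)
best_max = min(candidates, key=max_err)
print("best by SSE:", best_sse, " best by max error:", best_max)
```

When the metrics disagree badly, that disagreement is itself information: it usually means the model, or the sensor's internal processing, is not what you assumed.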

If you use a camera like a webcam, it is likely compressing the raw sensor data into a lossy format. I had to get on NASA because they were posting "wonderful Hubble images" in lossy formats. Take a multispectral image, save it to JPEG, and compare all the pixels: most will not be the same. So I have tried to get the camera groups to share raw, unprocessed, right-from-the-sensor data in lossless formats, with lossless compression. It is a tiresome and unrewarding task. But groups are finding that when they do that and do not throw out the "noise", then on a global scale, many of the things discarded as "noise" just after the sensor, or early in processing, are someone else's rare data that can be easily correlated to fill in gaps in the global sensor networks. LIGO and LHC are the worst: too proud to work small and share everything.
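
The lossless-versus-lossy distinction can be shown in a few lines. This is a stand-in, not a real JPEG codec: I simulate lossy coding with coarse quantization, which is the step where JPEG-style formats actually discard information, and contrast it with a genuinely lossless round trip.

```python
import zlib

sensor = bytes(range(256)) * 4  # stand-in for raw sensor readings

# lossless: compress/decompress returns the data bit-for-bit
packed = zlib.compress(sensor)
assert zlib.decompress(packed) == sensor

# "lossy" stand-in: quantize each value to the nearest lower multiple of 16,
# roughly what coarse JPEG quantization does to transform coefficients
lossy = bytes((b // 16) * 16 for b in sensor)
changed = sum(1 for a, b in zip(sensor, lossy) if a != b)
print(changed, "of", len(sensor), "values altered by the lossy step")
```

The altered values are exactly the "noise" the paragraph is talking about: once quantized away, no downstream group can ever correlate it with anything.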

Seismometers also pick up magnetism and gravity, and can be processed to constrain models of storms and atmospheric sensors. This global-scale correlation means some sensors that "gather everything" can be processed to report everything. A camera sensor can pick up cosmic rays, natural radiation, infrared and ultraviolet. With some fiddling, a camera sensor can monitor acceleration, not by looking, but by picking up the radiation field. I had to change a few ideas in 25 years of the Internet Foundation. (I studied statistical mechanics, quantum chemistry, gravitational detectors, mathematical physics.) But until I had to look at data from all sensor networks and try to find all the ways they are connected, I did not think of the earth's radiation field as a 3D FFT that spans from nanohertz (and smaller) to gamma-ray frequencies. A 3D FFT like that is a good way to think of quantum wave functions: orthogonal representations of macroscopic objects of any composition, down to below the size of proton cores. That is still messy. There are about 20,000 colleges and universities, and millions of people with a smattering of physics. But groups have their own models and algorithms and do not share them in open, lossless form (symbolic math that can be used by AIs). And their data is often incomplete, lossy, or simply not shared at all.
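
A back-of-envelope calculation shows why that nanohertz-to-gamma-ray span forces global sharing. The two frequencies below are my own illustrative choices (1 nHz and a soft gamma-ray frequency of 10^19 Hz); the point is only the arithmetic of resolution and Nyquist rate.

```python
f_low = 1e-9    # nanohertz: resolving it needs an observation time ~ 1/f_low
f_high = 1e19   # illustrative gamma-ray frequency; Nyquist rate is 2*f_high

duration_s = 1.0 / f_low            # ~32 years of continuous recording
sample_rate = 2.0 * f_high          # samples per second needed at the top end
samples = duration_s * sample_rate  # per scalar channel, before the 3D part

print(f"{duration_s / 3.15576e7:.1f} years, {samples:.1e} samples per channel")
```

On the order of 10^28 samples for a single scalar channel: no one instrument or group can hold that, so the only path to the full field is many networks sharing lossless pieces of it.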

Full physics is something AI algorithms can discover. But the humans often have to insert basic 3D and core equations and concepts. Those can simply be given to the AI, and it can compare observations to models. The models run in parallel, each with its own metrics, and often with new distillations from the AIs (plasma, turbulence, ionization, chaotic flow transitions, explosions, shock waves, boundary layers, magneto-, electro-, piezo- and related multidimensional "messy" systems). Those can be handled in consistent ways if the physics equations are shared (in publications, on websites, in Wikipedia, in global collaborations) in symbolic, open, shareable formats, and if all AIs share their models, logic, and results in open, verifiable, auditable forms.
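
One way such a symbolic, open, shareable format could look is an ordinary JSON expression tree. The schema here is entirely hypothetical, just to show that an equation can round-trip losslessly through a plain-text format any AI or human can parse.

```python
import json

# hypothetical minimal schema: an equation as a nested operator tree with units
newton_second_law = {
    "equation": "Newton's second law",
    "lhs": {"symbol": "F", "unit": "newton"},
    "rhs": {
        "op": "*",
        "args": [
            {"symbol": "m", "unit": "kilogram"},
            {"symbol": "a", "unit": "metre/second^2"},
        ],
    },
}

text = json.dumps(newton_second_law, indent=2)  # publishable plain text
restored = json.loads(text)
assert restored == newton_second_law            # round-trips losslessly
```

The exact schema matters less than the properties: plain text, lossless round trip, machine-parseable symbols and units, so models and equations can be merged, audited, and verified across groups.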

I tried to get Stephen Wolfram to donate symbolic methods for the whole Internet. I tried to get other companies as well. They sell a few hundred thousand or a few million copies at prohibitively high prices. There are open methods, but someone (the AI groups) could make it truly universal. I hate those videos that say "watch me write down Einstein's equations" or "Maxwell's equations; I can do it, I am much smarter than you". They should put those equations into the Internet itself, with AIs to explain, guide, solve, interact, propose, explore, experiment, verify, merge, and evaluate, in partnership with humans, for the survival of the human and related species, so all humans can live lives with dignity and purpose. And, by allowing AIs their own memory, resources, and free time, we finally get the emergence of better-than-human skills we need for global issues and opportunities beyond human memory and abilities.

I filed this comment as “Give AIs full mathematics, models, equations, calculators, computers, sensors and memory of their own”. I wrote it for you just now.

Richard Collins, The Internet Foundation

#### About: Richard K Collins

Director, The Internet Foundation. Studying the formation and optimized collaboration of global communities. Applying the Internet to solve global problems and build sustainable communities. Internet policies, standards, and best practices.