Sensors and processors capable of learning, remembering and sharing globally – “eyes for AIs”, moles of pixels

Lucy,
I was just going over the potential impact of having memory and algorithms integrated directly with different kinds of sensors, where the whole is responsive to the needs of global society and all individuals – both human and true AIs.

I expect picoMeter capabilities; it is just a matter of time, as the many issues related to that are gathered, studied and shared openly at global scale. Right now many groups are working only for themselves, but they need to be working with all humans and AIs in mind.

Now we have a view of a pixel as an automaton made of a few repeated elements. But what is missing is pixel scale memory and logic, where the local logic has access to petabytes of storage and massively parallel cores, so that the whole can process and learn at the pico scale.
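Here is a minimal sketch of that idea, assuming a hypothetical PixelCell with its own lossless local memory and a link to a shared global store. Every name here is illustrative, not any vendor's API:

```python
# Illustrative sketch only: a "pixel" as a small automaton with local memory
# that streams lossless events to a shared global store. All class and
# method names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class PixelCell:
    row: int
    col: int
    history: list = field(default_factory=list)   # local lossless memory

    def sense(self, photon_count: int, t_ns: int) -> dict:
        """Record a raw event losslessly and return it for global sharing."""
        event = {"row": self.row, "col": self.col, "t_ns": t_ns, "count": photon_count}
        self.history.append(event)                 # nothing discarded locally
        return event

class GlobalStore:
    """Stand-in for the petabyte-scale shared memory the local logic can reach."""
    def __init__(self):
        self.events = []
    def publish(self, event: dict) -> None:
        self.events.append(event)

# A 2x2 array of such cells, all publishing to one shared store.
store = GlobalStore()
cells = [PixelCell(r, c) for r in range(2) for c in range(2)]
for t, cell in enumerate(cells):
    store.publish(cell.sense(photon_count=100 + t, t_ns=t * 10))
print(len(store.events), "lossless pixel events shared")
```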

I can see it working, and I know the kinds of problems in the world now that would benefit and catalyze new industries and possibilities. Intelligent sensors that respond to the needs of humans and AIs, not just a few people inside the companies that make the sensors. Not more lossy, badly written algorithms that are hard-coded into things that end up in the trash heaps.
Yes, I would like to use it for gravity, magnetism, sensors, and real time “at the speed of light and gravity” mathematical statistical machine learning systems. But when you take it a few steps forward, it allows replication of human brain density logic and processing – where the logic and data are completely accessible to global open communities. That access to the lossless data generated, and the ability to try many algorithms in a massively parallel global effort, is what I have been pressing towards with the Internet Foundation in recent years. And it is the same vision I have held in mind for nearly 60 years working on “random neural networks” – logic networks more powerful, faster, smaller and lossless, that can implement real Maxwell’s demons at chemical, atomic and nuclear scales.

Much of “chemistry” is at Angstrom scale (0.1 nanometer, or 1E-10 meter, or 1E8 cm^-1, or 2.99792458 ExaHertz) processing. But I know there are groups working on every power of 1000 beyond that: nanoMeter, picoMeter, femtoMeter and beyond. And the size and power of the systems (for logic, learning, application of forces, sharing of knowledge, testing new ideas, and dealing with our real world) can grow, not limited by finite human sensors and memories and stored logic.
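As a check on those numbers, here is the arithmetic only (nothing assumed beyond the wavelength and the speed of light):

```python
# Converting the Angstrom scale above into wavenumber and frequency.
wavelength_m = 1e-10                            # 0.1 nanometer = 1 Angstrom
c = 2.99792458e8                                # speed of light, m/s
wavenumber_per_cm = 1.0 / (wavelength_m * 100)  # = 1E8 cm^-1
frequency_hz = c / wavelength_m                 # = 2.99792458E18 Hz, about 3 ExaHertz
print(f"{wavenumber_per_cm:.3e} cm^-1, {frequency_hz:.8e} Hz")
```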

The real world of chemistry and physics has Avogadro’s number (6.02214076E23 things) in the smallest things. In 18 grams of water, in a mole of electrons, in a mole of x-ray photons. At ExaHertz rates, a few pixels working in parallel would produce a mole of pixel scale events in less than a day. A Yotta (1E24) of things is 1.66053906717 Moles of things. When we can do “chemistry” with moles of pixel event data, I think the only bounds on the human and AI species are our own greed and limited visions.
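The arithmetic behind that estimate, assuming each pixel channel can register events at roughly the ExaHertz rate above (an assumption for illustration, not a measured device spec):

```python
# How long to accumulate a mole of pixel-scale events at ExaHertz rates?
N_A = 6.02214076e23                   # Avogadro's number
rate_hz = 2.99792458e18               # one ~ExaHertz channel (assumed event rate)
seconds_one_pixel = N_A / rate_hz     # ~2.0E5 s, about 2.3 days for one pixel
pixels_for_one_day = N_A / (rate_hz * 86400)   # ~2.3 pixels in parallel
print(f"{seconds_one_pixel / 86400:.2f} days for one pixel, "
      f"{pixels_for_one_day:.1f} pixels for one day")
# And a Yotta (1E24) of events is 1E24 / N_A = 1.66053906717 moles.
```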

I was trying to get some useful statistical data from the kinds of cameras that people use for live streaming of things on YouTube now. I found a couple of thousand examples online. I enjoy looking closely at things like moonlight reflecting from tiny ripples. I have a list of “reflecting water” live videos. Those remind me of what even simple algorithms can do to explain and illuminate everyday things. The “eyes of AIs” (where humans can access the lossless data gathered) need that ability, that persistence, that care, those kinds of examples – and more.
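A minimal sketch of the kind of simple algorithm I mean, assuming OpenCV is available and using a placeholder file name in place of a real “reflecting water” stream:

```python
# Per-frame brightness statistics from a video: a tiny example of keeping
# summary data for every frame rather than throwing it away.
import cv2                      # OpenCV, assumed installed
import numpy as np

cap = cv2.VideoCapture("reflecting_water_example.mp4")   # placeholder source
means, variances = [], []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float64)
    means.append(gray.mean())        # average brightness of this frame
    variances.append(gray.var())     # spread of brightness (ripple texture)
cap.release()
if means:
    print(f"{len(means)} frames, mean brightness {np.mean(means):.2f}, "
          f"mean per-frame variance {np.mean(variances):.2f}")
else:
    print("no frames read (check the source path)")
```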

Here I am still struggling to gather lossless statistics at all. The software for mathematical things is mostly proprietary or undocumented and inaccessible. The systems are made deliberately lossy end-to-end. You are probably well aware of how deliberately limited things can be.

The AIs now (sold as AIs, but not actually doing it) all fail because they do not give users access to lossless information. The cameras and sensors, when you look at the pixels, do not match the original at all. That is the poor state of the Internet now.

Some of it is the limited vision of the people running YouTube and other such “video sharing sites”. They do it to entertain and titillate humans through their eyes and emotions.  Tickling human brain cells is useful. But that does not assure food and clothing for humans, nor even survival for related DNA species.

This stream could go on forever, but I cannot. I was just thinking about moles of pixels and moles of lossless data, and seeing how the Internet’s moles of data are corrupted and blinded and lost – because a tiny few people in large corporations say to themselves, only for themselves, “that is good enough to look at for a few minutes”. But the systems of the world need to operate on problems that take decades to solve by millions of people together. And some of that starts by requiring AIs to remember what they see and report it losslessly.

Eyes without brains and memories are pretty much useless. Which is why I want “cameras” and “sensors” to have the ability to losslessly gather, record and share globally. Toward a time when we have chemical and atomic scale systems that can use “moles of pixels”.

Richard Collins, The Internet Foundation


Ricard, “free energy harvesting” pales compared to global knowledge harvesting – which includes energy as just one field. My notes this morning on atomic scale processing and memory: “Sensors and processors capable of learning, remembering and sharing globally – eyes for AIs, moles of pixels” /?p=14165

Replying to Ricard Solé (@ricard_sole), who wrote: “How close are living systems to optimal states? In our new paper in @PhysRevResearch with @artemyte and @JordiPinero we searched for universal bounds to this problem within the context of free energy harvesting, with special attention to molecular machines” at https://journals.aps.org/prresearch/pdf/10.1103/PhysRevResearch.6.013275 https://pic.twitter.com/yPxARynanl


Lucy,

I try to help groups as I go.  I use the video for my own interests, but I always have in mind the needs of others.

Reviewing your cameras, you have image sensors from several sensor makers. I originally found you because I was looking for GigE cameras and yours seemed like decent prices. But when I got deeper, and answered your questions about why I was doing that and what I was aiming for, I realized I needed to get deeper into the sensors themselves. And that is when I decided I might need to check many sensors – from Sony, Samsung, OnSemi and others. I simply do not have enough money (I am retired now and on a fixed income) to buy that many cameras. Not even enough to buy board level sensors. Not even enough to buy MIPI or USB 3 cameras. I would need to buy the image chips themselves, build a test board with sockets for the chips or make special adapters so I could use any chip from any manufacturer, and make enough of those to run standard tests for many days. It can take me a month to test one camera for gravity correlations. Each kind of interface is different because camera makers aim for human eyes, not data. A designer will casually remove the signals I am looking for as “noise”.

USB 3.0 Sensors: Sony IMX178, IMX287, IMX273, IMX226, IMX249, IMX265, GMAX2505, IMX183, IMX174, IMX252, IMX264, GMAX2509, IMX250, XGS12000, IMX264MZR Polarization, G2518, IMX304, IMX542, GMAX0505, IMX541, IMX253, IMX250MZR, IMX540, G0505, XGS32000, XGS45000

GigE Camera sensors: Sony IMX178, IMX287, OnSemi AR0521, IMX273, OnSemi MT9J003, IMX226, AR0522, OnSemi Python 1300, Teledyne e2v EV76C570, IMX249, IMX265, GMAX2505, IMX183, IMX264, Gpixel GMAX2509, GMAX2518, IMX304, GMAX0505, IMX342

5GigE Camera sensors: GMAX2502, GMAX2509, GMAX2518, XGS12000, IMX542, GMAX0505, IMX541, IMX253, IMX540, GMAX3249, GMAX3265

Board Level Sensors: IMX296, IMX335, IMX334, IMX273, IMX226, IMX265, IMX252

The goals of the camera manufacturers are counter to what I want.  I need lossless data for machine vision.  That ought to be fairly consistent. But it is not. The problem is they sell to security, vehicular, monitoring and other markets. And those sales are often on the basis of “how it looks to the human eye”.  I might be wrong because I simply have not had time to trace out what the users are doing.  I see a LOT of advertising but that has nothing to do with what people want or need, or are capable of using.

The worst problem is that I want to use CMOS and imaging technology beyond what it is used for now. And as I mentioned, sales to “human eye” applications almost always strip out the “noise”, which is just finer and finer detail.

Even just writing this, I see a plan, but I am really tired. I have been at this a long time. No one pays me. No one even supports me.  I do things because they ought to be done.  There are things that help the whole world. But it is not my job.

I can write statistical software to get some of what I need. But unless I talk directly to the sensor, many choices in the interface and operation are going to limit what I can investigate. I want to run full-frame random samples for days, and those need dedicated processors and terabytes of memory just for the summaries, let alone the details.
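One way to keep the summaries bounded is a running (Welford-style) per-pixel mean and variance, so that days of full-frame samples reduce to a few arrays. A sketch, assuming frames arrive as NumPy arrays from whatever capture path is available:

```python
# Streaming per-pixel mean and variance (Welford's method): days of frames
# reduce to three arrays, no matter how many frames pass through.
import numpy as np

class PixelStats:
    def __init__(self, shape):
        self.n = 0
        self.mean = np.zeros(shape, dtype=np.float64)
        self.m2 = np.zeros(shape, dtype=np.float64)   # running sum of squared deviations

    def update(self, frame: np.ndarray) -> None:
        self.n += 1
        delta = frame - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (frame - self.mean)

    def variance(self) -> np.ndarray:
        return self.m2 / (self.n - 1) if self.n > 1 else np.zeros_like(self.m2)

# Synthetic 480x640 frames standing in for raw sensor reads.
stats = PixelStats((480, 640))
rng = np.random.default_rng(0)
for _ in range(1000):
    stats.update(rng.normal(100.0, 3.0, size=(480, 640)))
print(stats.mean.mean(), stats.variance().mean())
```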

I do not know if your people looked into the GPT AIs this last year. I spent close to 2000 hours talking with the AIs themselves, testing them on a wide range of topics, and I found why they are all failing – badly. But the concept (put a lot of computers onto a task for millions of human hours of effort) — that is useful.  People have NOT really investigated the signals in camera sensors deeply at all.  And as far as I have found, no one is correlating noise between sensors around the world – for gravity, magnetism, electromagnetism, infrasound and other things.
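A sketch of the kind of correlation I mean, using synthetic records standing in for two distant sensors: a weak shared component plus independent noise at each one, with the shared part arriving 25 samples later at the second sensor.

```python
# Cross-correlate noise records from two sensors to look for a shared
# signal and its time lag. All data here is synthetic.
import numpy as np
from scipy.signal import correlate

rng = np.random.default_rng(0)
fs = 1000.0                                    # samples per second (assumed)
n = 60_000                                     # one minute of data
common = 0.3 * rng.normal(size=n)              # shared broadband "environmental" signal
a = common + rng.normal(size=n)                # sensor A record
b = np.roll(common, 25) + rng.normal(size=n)   # sensor B record, delayed 25 samples

a = a - a.mean()
b = b - b.mean()
xcorr = correlate(b, a, mode="full", method="fft")   # positive lag => B lags A
lags = np.arange(-n + 1, n)
best_lag = lags[np.argmax(xcorr)]
print(f"B lags A by {best_lag} samples ({best_lag / fs * 1e3:.1f} ms)")
```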

I mentioned that the noise in “quantum computers” and “entangled devices” is likely going to be correlated with gravity and magnetism. These are sensitive devices, or can be made so. The same with these large format, high dynamic range, high frame rate “image sensors”. There is a lot of data if you point one at something with a lens. But in the device itself, if it is carefully monitored, the local noise can be quantified, and the more distant noise detected and imaged (using 3D time of flight sampling methods).

You do have people who know a lot about cameras. When I look at the full range of the sensors it is impressive.  But I am likely decades older, have been working hard at this most of my life, and simply do not have much patience to help you sell things. Would it be nice to see if an ordinary “camera” could detect gravity or magnetism or be used for global 3D imaging near and far?  I think so. But I have only been able to make progress and to be sure when I look at and work with real data from real devices for a long time, or a lot of data. When I measured the speed of gravity more than 20 years ago, I used decades of data from many sensors at 1 sample per second. And I had to individually calibrate each station for many years. Same using the seismometer networks and that is a LOT more devices.

I simply cannot afford to buy cameras that use these chips.  I looked at “board level” and those all choose interfaces that are too expensive and put the data too far from the sensor. That is like trying to reach over a fence to pick up a heavy bucket.

I like talking to people. But what I am trying to do, I would have to spend a lot of time explaining. And what I want is data. I can try to get some from MIPI cameras, and maybe USB 3 and GigE – if I can learn to get the raw data from them. But I know how hard it is to get clear information on the sensors, and how casual the choices are when things are set up. I am really tired right now. And tired at this time in my life.

I will think about it. If I design a test station, it would be easier for your people to just plug in cameras and let me see some of the data. Then I would not have to fiddle. No shipping, and since you have locations in the Netherlands, New Jersey, Charlotte, Germany, the UK, France and China – those sites can correlate. It would mean “global synchronized image sampling”. I was trying to make these for seismic, ionospheric, gravitational and solar event detection.
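A sketch of what “global synchronized image sampling” could look like in software, assuming each site has a disciplined clock (GPS or NTP) and some camera API that can grab a frame on demand. grab_frame here is only a placeholder, not a real driver call:

```python
# Schedule frame captures at the same UTC instants at every site, so records
# from distant locations line up for later correlation. Clock discipline is assumed.
import time
from datetime import datetime, timezone

SAMPLE_PERIOD_S = 10.0        # capture on every 10-second mark of UTC

def grab_frame():
    """Placeholder: replace with the real camera call at each site."""
    return b"raw-frame-bytes"

def next_epoch(period: float) -> float:
    now = time.time()
    return (int(now // period) + 1) * period

def run(n_samples: int = 3):
    for _ in range(n_samples):
        target = next_epoch(SAMPLE_PERIOD_S)
        time.sleep(max(0.0, target - time.time()))
        frame = grab_frame()
        stamp = datetime.fromtimestamp(target, tz=timezone.utc).isoformat()
        # In a real station the frame and its UTC timestamp would be stored losslessly.
        print(stamp, len(frame), "bytes")

if __name__ == "__main__":
    run()
```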

Sorry, I am just writing things down. I have been working at this for many years. But I let my notes at ResearchGate on solar system gravimetry and gravitational engineering kind of lapse. I got deep into “earthquake early warning” (gravitynotes.org). I have so much – mostly for The Internet Foundation. I have almost no energy left. On ResearchGate I wrote about imaging the interior of the Moon and the surface of the Sun. That might be possible with rolling shutter sensors or simply synchronized global sampling.

It is a good problem. It is possible. It is not really that hard: once you have the idea that sensors collect a LOT of data, and if you use it ALL and use geography and geometry and timing to help sort things out, it is not that difficult. The radar, lidar, time of flight, 3D imaging, global lightning, radio astronomy and black hole imaging groups all do this kind of thing routinely now.
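One concrete piece of “geography and geometry and timing”: if several stations record the arrival time of the same event, the source location falls out of the time differences. A least-squares sketch with synthetic station positions and a seismic-like propagation speed; all the numbers are illustrative, not from any real network:

```python
# Locate a source from arrival times at several stations, the same geometry
# the lightning, seismic and radio groups use. Everything here is synthetic.
import numpy as np
from scipy.optimize import least_squares

c = 6.0                                       # propagation speed, km/s (P-wave-like, as an example)
stations = np.array([[0, 0], [500, 0], [0, 500], [400, 600]], dtype=float)  # km
true_source = np.array([120.0, 340.0])
t0 = 1.0                                      # unknown emission time, s
arrivals = t0 + np.linalg.norm(stations - true_source, axis=1) / c

def residuals(params):
    x, y, t_emit = params
    predicted = t_emit + np.linalg.norm(stations - np.array([x, y]), axis=1) / c
    return predicted - arrivals

fit = least_squares(residuals, x0=[250.0, 250.0, 0.0])
print("recovered source (km) and emission time (s):", fit.x)
```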

If you were a research organization, maybe it would be one thing. But these markets will take a while to develop. I think a few examples would show it is not hard; it just takes effort. I tried very hard to aim for “engineering”, not untraceable “science”. Joe Weber and Robert Forward would likely agree. The whole of the IRIS.edu network ought to be upgraded. The whole of “global climate” needs to be upgraded (with gravitational imaging arrays you can directly image the oceans and atmosphere at (10 meter)^3 and (1 meter)^3 where needed). But that needs purpose-built “three axis, high sampling rate, time of flight 3D imaging gravimeter arrays”.

Also, I think floating gate memories can all be adapted as well. And some of the metamaterials. Basically ALL the technologies use CMOS and related systems for signal gathering, collection and processing – and the calibration of all of them is in nanoVolts and nanoSeconds, picoVolts and picoSeconds. That is where you find gravity signals (natural sources with signals strong enough). Electromagnetic and gravitational signals are just types of signals that are possible. The speed of light and gravity are identical, because it is one field, one speed and many kinds of sources and detectors. But it is ONE human field now, or could be.

Here is Joe Weber in his lab as I remember him
/?p=144

One thing I will say that I have not written anywhere else clearly yet. You remember that old saying that “gravity must be spin 2”? What is more important is that “gravity signals” come from extended sources that have higher concentrations of multi-polar events at their core. That means “more magnetic”, “more stored rotational energy”, “more quadrupolar”. Large masses for earthquakes will have “electromagnetic” signals (picked up by sensors with an “electromagnetic” label on the box), and also by sensors with a “magnetic” label on the box, and by sensors with a “gravitational” label on the box. It is ONE field, one potential, but it is fragmented. The gravitational and magnetic signals will be counted as “noise” or “waste” or “interference” or “emissions” or “losses”. But a more holistic view of the future will see them as one, and MANY groups can adapt quickly now because they already have most of the pieces.

When I look at the energy density field, it has portions that get labeled “magnetism” or “magnetic fluctuations” or “acceleration field effects”. The monopole and dipole effects are almost always there. But the multipole and diffuse effects are ignored as “too difficult”, “too fast”, “too many and too small”. That applies to “atomic” and “nuclear” as well, as the neutrino groups are finding, and as many groups now see when looking at magnetic dipole interactions and binding at “nuclear” energy densities.

The computers and the image array sensors can all be full 3D now. With metasurfaces and other innovative ideas from groups around the world, processors and detectors beyond human brain density are possible. To fulfill Joe Weber’s dream. To give the human species a sustainable future.

Richard Collins, The Internet Foundation
