Sub-nanosecond flash 3D video sampling allows gravitational imaging, “imaging accelerometry”

I am interested in your “double exposure”. 200 ns would be 5 Mfps. Can you say which specific cameras from Imperix or elsewhere can support that? I am also interested in LWIR “thermal” cameras for passive radiometry. Do you have anyone interested in camera interface standards? There are many cameras, and the cost of storing and accessing the data is high. A global open resource for camera interfaces might speed development and growth.

“Particle accelerometry” or “particle image accelerometry” requires at least 3 frames, and there are situations where 20 or more frames are needed in a few nanoseconds with sub-nanosecond timing. Much of the literature is a decade or more old, but there seems to be renewed interest, likely because the cost of acquiring and processing data has come down. “Accelerometry” is often called “gravimetry” (pronounced grav im met tree), since gravitational potential fields can more easily be monitored by imaging acceleration and “jerk”, the time derivative of acceleration.

It looks like new interest is based on lower cost, high resolution, high frame rate sampling, and better image processing. There are 1066 entries today for “particle tracking”, with 125 in the last 12 months. If you help organize the research, it can help grow interest and speed development.

Timepix3: single-pixel multi-hit energy-measurement behavior at

Smartphone-based Particle Tracking Velocimetry for the in vitro assessment of coronary flows

I will see what else is going on.

I am checking the sub-nanosecond and picosecond groups just now. I try to track all sensors on the Internet, especially those that are reaching “global” levels. That is where new industries start. “Flash 3D Accelerometry” uses three or more frames as samples where sustained continuous operation is not possible. Wide-area and global coordination at picosecond resolution is possible for correlating gravitational imaging. My interest started with gravitational imaging arrays, but since I set policies for the Internet, I have been spending more and more time with camera groups. Getting better support for the newer cameras, integrated into the Internet, might spark very rapid advances in many fields. It seems likely. Richard

It is 10 pm and I just got out of an hours-long conference on future directions in AI and global collaborative networks. I can barely see.

Most of the earthquakes on earth happen in a few places. There are active areas. Volcanoes are just one more source of gravitational noise on earth. Even the surf is well known as “microseisms”, and it too has specific locations.

The speed of gravity is identical to the speed of light. I used data from the network of superconducting gravimeters (SGs) to measure it, just over 20 years ago. Now I tell groups like the new MEMS gravimeter researchers that after they track the sun and moon with their gravimeters, they need to measure the speed of gravity. The rapid advances in “time of flight” (electromagnetic) devices in the last couple of years have lowered the cost of the detectors, processing, and correlation dramatically. Those SGs run at 1 sample per second at 1 Angstrom/s^2 accuracy. The MEMS gravimeters are about 1 sample every minute and 1 nm/s^2. They are modified MEMS accelerometers.
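The measurement comes down to estimating the time delay between records of the same signal at different stations. A minimal sketch of that delay estimation, using cross-correlation on synthetic data (the signal, noise level, and lag here are made up for illustration, not real SG data):

```python
import numpy as np

fs = 1.0                 # SGs sample at about 1 sample per second
n = 4096
true_lag = 37            # hypothetical offset between stations, in samples
rng = np.random.default_rng(0)

# A shared "gravitational" signal seen by two stations with independent noise.
common = rng.standard_normal(n + true_lag)
a = common[:n] + 0.1 * rng.standard_normal(n)                       # station A
b = common[true_lag:true_lag + n] + 0.1 * rng.standard_normal(n)    # station B

# Cross-correlate and locate the peak; convert index to a signed lag.
corr = np.correlate(a, b, mode="full")
lag = int(np.argmax(corr)) - (n - 1)   # offset in samples between the records
print(lag / fs)                        # delay estimate in seconds
```

With real station pairs the recovered delay, divided into the known baseline distance, gives the propagation speed. The same correlation machinery scales to the camera-array case, just at far higher sample rates.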

So the core trick now is: how do you get good sensitivity, global coverage, and precise volumetric imaging?

If you use GigaSamples per second (routine now) you get 30 cm accuracy. If you are working at global scale the signals are incoherent, but small sources can be imaged by correlation, if you can gather enough data.
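The 30 cm figure is just the distance light (and gravity) travels in one sample interval. A quick check of that arithmetic:

```python
# Per-sample ranging resolution: signals at the speed of light travel
# c / sample_rate meters between consecutive samples.
C = 299_792_458.0  # speed of light, m/s

def range_resolution_m(sample_rate_hz: float) -> float:
    """Distance light travels in one sample interval, in meters."""
    return C / sample_rate_hz

print(range_resolution_m(1e9))   # 1 GSa/s -> about 0.30 m
print(range_resolution_m(5e6))   # 5 Mfps  -> about 60 m
```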

If you have an array of sensors spread around the earth, it is a trivial exercise to work out to the nearest microsecond when a gravitational event anywhere on or near the earth will arrive at each sensor, and to plan when all the sensors need to collect their local data, so that they are all focused on the same voxel. It can take into account the earth tides. The geophysical data is very precise these days. And if you try something new, the geophysics groups will usually pitch in to help.
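That scheduling exercise can be sketched in a few lines. This assumes straight-line (chord) propagation at c through a spherical earth, and the station coordinates are hypothetical placeholders, not a real network:

```python
import math

C = 299_792_458.0  # m/s: speed of gravity = speed of light

def ecef(lat_deg, lon_deg, r=6_371_000.0):
    """Cartesian coordinates (meters) on a spherical earth."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    return (r * math.cos(lat) * math.cos(lon),
            r * math.cos(lat) * math.sin(lon),
            r * math.sin(lat))

def arrival_delays_us(source, stations):
    """Microseconds from event time to arrival at each station (chord path)."""
    s = ecef(*source)
    return [math.dist(s, ecef(lat, lon)) / C * 1e6 for lat, lon in stations]

stations = [(29.7, -95.4), (35.7, 139.7), (48.9, 2.3)]  # hypothetical sites
print(arrival_delays_us((38.3, 142.4), stations))       # event near Tohoku
```

Each station then opens its capture window at the event time plus its own delay, so every burst samples the same voxel; the whole earth is only about 42.5 milliseconds across at c, well within what GPS-disciplined clocks can coordinate.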

So imagine a few dozen high pixel count, high dynamic range (bits) cameras or other array sensors in arrays. At 5 MegaPixels, 2 bytes per pixel, and 100 frames per sample, that is only 1 Gigabyte per sample. And because these sources are stable, you only need a few samples per second or less for good correlations from strong sources.
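A back-of-envelope check of that burst size:

```python
# Data volume for one burst sample of a high frame rate camera.
pixels = 5_000_000        # 5 MegaPixels
bytes_per_pixel = 2       # 16-bit dynamic range
frames = 100              # frames per burst sample

bytes_per_sample = pixels * bytes_per_pixel * frames
print(bytes_per_sample / 1e9)   # → 1.0 GB per burst
```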

It is like lightning detection in reverse. The global time of flight is the same, but you just take snapshots of data in bursts. That the electronics was intended for cameras is incidental. Why so much data? Because there are no calibrated sources yet. Now it is possible to use the same methods with sources that can be followed, like cars and trucks, ocean waves breaking on a beach, currents in the atmosphere, magma, and seismic waves. The seismic networks are very mature now, and globally tracking and quantifying spreading seismic waves is also mature. There are people doing “gravitational earthquake early warning”. The 2011 Tohoku, Japan earthquake registered on the SG network and in the broadband seismometer network (those can be operated as 1 sps, 1 nm/s^2 three-axis gravimeters).

I am just writing this for me. I have been working on these things for decades now. There are so many gravimeter groups now, and different ways to measure gravity. LIGO can probably be reduced to smaller nodes of atom interferometer detectors (about 20 competing methods now) in arrays that cover the earth.

I have had to put up with “whatever people have”. I studied gravitational and electromagnetic noise in all detectors. Some of the Nyquist noise is actually gravitational, but unless you correlate data from many devices it is difficult to impossible to sort out.

Now I am simply too tired. I thought you were not going to answer for a while.

I made videos, and I have written about it. Mostly I talk to people individually. A guy yesterday did not know how to pronounce gravimeter (grav im meT er). So I said, call it a very sensitive accelerometer.

The camera electronics pick up and record the data. It can be hard-wired with DMA and local memory. Moving a lot of data collected in a microsecond to video or GPU memory is not hard. There are ways to screen the raw data and remove some of the less useful things. With machine learning, that gets better and faster with just a few iterations.
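The screening step can be as simple as an energy threshold per window before anything leaves the capture node. A minimal sketch on synthetic data (the window size, threshold, and injected event are all illustrative choices, not a real pipeline):

```python
import numpy as np

# Synthetic burst: noise with one injected transient (for illustration only).
rng = np.random.default_rng(1)
burst = rng.standard_normal(100_000)
burst[40_000:40_256] += 8.0        # hypothetical event

# Screen fixed-size windows, keeping only those well above the noise floor.
win = 256
windows = burst[: len(burst) // win * win].reshape(-1, win)
energy = (windows ** 2).mean(axis=1)
floor = np.median(energy)          # robust noise-floor estimate
keep = energy > 5 * floor          # crude screen; ML could learn this cutoff
print(int(keep.sum()), "of", len(windows), "windows kept")
```

Only the kept windows (plus timestamps) need the DMA trip to GPU or host memory, which is what makes microsecond bursts at Gigabyte scale manageable.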

I am 75 now. I studied gravitational wave detection at the University of Maryland at College Park from 1975-1979. Charles Misner was my academic advisor on paper, but I met Joe Weber and he encouraged me to follow Robert Forward. Robert went on to start LIGO (laser interferometry), and Robert is the one who said that gravity and electromagnetism could be combined. Besides photon interferometry and atom interferometry, it is also possible to use electron correlation methods, and that allows using higher frequencies. As the frequencies go up to GHz, that means more advantage to velocity, acceleration, and higher derivatives in the detector sensitivity, which is related to the actual information content, and that can be estimated and measured.

Richard Collins, The Internet Foundation.

I forgot to copy Luc. I am tired but I am trying to write things down. Gravity cameras might be a ways away, but correlation imaging is used everywhere now, or can be.

Richard K Collins

About: Richard K Collins

Director, The Internet Foundation Studying formation and optimized collaboration of global communities. Applying the Internet to solve global problems and build sustainable communities. Internet policies, standards and best practices.
