Contact Image Sensors and Linear CCD Arrays for Gravitational and Magnetic Imaging

Jean,

I have been trying at least two pathways:

1. Build circuits to read the analog signals from these linear sensors, and use the fastest and best ADCs available to read and store the data. The focus is on high-precision measurement of the signals as a function of reference voltage.

2. Somehow learn to talk to these mostly USB devices, leaving them inside their present enclosures, and then share the software so that people who have them can share their data. This can include regular cameras. It is along the lines of the cosmic ray detector groups that use cell phones to monitor cosmic ray events over large regions, or the software defined radio groups who share their devices in global networks, or the students and independent researchers who share things like magnetometers, gravimeters and seismometers. As the cost of these traditional devices goes down, such global communities are popping up everywhere. But they are not working together, and most die or dwindle.

I just want to look at the data streams from these devices. I DO NOT want to have to build them.

A software defined radio is essentially a fast ADC (2 Msps to 10 Gsps) at 8 to 16 bits per reading, with single or multiple channels. I can correlate the signals to steer the focal point of an array of them, and then monitor specific locations.
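As a minimal sketch of that correlation step (Python with numpy; synthetic data stands in for real SDR streams), the lag at the cross-correlation peak, multiplied by the speed of light, gives the path-length difference used to steer the array:

    import numpy as np

    fs = 10_000_000            # 10 Msps, as in the example below
    c = 299_792_458.0          # speed of light, m/s

    # Synthetic common signal buried in independent local noise at two stations
    rng = np.random.default_rng(0)
    common = rng.standard_normal(1_000_000)
    a = common + 5.0 * rng.standard_normal(common.size)
    b = np.roll(common, 137) + 5.0 * rng.standard_normal(common.size)

    # FFT-based circular cross-correlation, fine for long noise records
    corr = np.fft.irfft(np.fft.rfft(a) * np.conj(np.fft.rfft(b)))
    lag = int(np.argmax(corr))
    if lag > a.size // 2:
        lag -= a.size          # unwrap negative lags
    print(f"delay: {-lag} samples = {abs(lag) / fs * 1e6:.1f} microseconds "
          f"= {abs(lag) * c / fs / 1000:.2f} km path difference")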

For instance, an array of detectors running at 10 Msps (megasamples per second) has a radial resolution of 29.9792458 meters, the distance light travels in one sample period. With detectors all over the earth, I can focus on the moon. That is sufficient resolution to gather variations inside the volume of the moon, in near real time, and begin to map its interior. The same holds for the sun.
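That rule of thumb, resolution = c / sample rate, as a few lines of Python that reproduce the figures quoted in this note:

    # Radial resolution is roughly the distance light travels in one sample period.
    c = 299_792_458.0  # speed of light, m/s

    def radial_resolution_m(samples_per_second: float) -> float:
        return c / samples_per_second

    for fs in (100, 16_384, 50_000, 300_000, 1_000_000, 10_000_000):
        print(f"{fs:>12,} sps -> {radial_resolution_m(fs):,.3f} m")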

I am treating these linear detectors as high-sampling-rate amplifier, ADC, data monitoring and storage devices, to correlate between sensors using ambient noise fields. I intend to characterize each sensor in a linear array over time (count the values and transitions, and the first and higher time derivatives) so that each pixel can be calibrated, and then use the calibrated pixels to improve overall resolution. I have done that sort of thing with superconducting gravimeters, seismometers and many other shared data streams on the Internet. But the existing streams are at best 100 sps, and so cannot image the interior of the moon and earth. LIGO's basic data stream is 16384 sps, that is 18297 meters (18.297 km), and there are times when three sensors are running at once, with several more planned. But I have checked those data streams, and they are messy and fiddled with. Over several years I was able to convince them to share their vibration isolation data and environmental data, which is used to remove the earth's "Newtonian noise" (gravitational noise), and some of that is gathered at 1 Msps (299.792458 meters). But they did not design their system from scratch for sharing and collaboration on a global scale, so it is hard to impossible to use.
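A sketch of that per-pixel characterization in Python, assuming each scan line arrives as a numpy array of 16-bit values (the class and method names are mine, for illustration):

    import numpy as np

    class PixelStats:
        # Running per-pixel mean, variance (Welford's method), and mean first
        # difference, so each pixel's offset, noise level and drift can be
        # calibrated before cross-sensor correlation.
        def __init__(self, n_pixels: int):
            self.n = 0
            self.mean = np.zeros(n_pixels)
            self.m2 = np.zeros(n_pixels)
            self.prev = None
            self.diff_mean = np.zeros(n_pixels)

        def update(self, line: np.ndarray):
            line = line.astype(np.float64)
            self.n += 1
            delta = line - self.mean
            self.mean += delta / self.n
            self.m2 += delta * (line - self.mean)
            if self.prev is not None:
                self.diff_mean += (line - self.prev - self.diff_mean) / (self.n - 1)
            self.prev = line

        @property
        def variance(self):
            return self.m2 / max(self.n - 1, 1)

    # usage: stats = PixelStats(16384), then stats.update(line) for every line read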

So I was kind of thinking of what an array of flatbed scanners, desktop scroll scanners (where the paper goes through), and wand scanners (where the person moves the scanner over the paper), even industrial devices, could gather as noise data during use. These signals are tiny and need lots of samples to get decent detection. But at the low frequencies of these electromagnetic and gravitational sources, the arrays can work even when there is lots of local noise, because that noise is completely independent from the source.

The sun and moon are good sources. Three-axis gravimeters can precisely locate themselves: the data streams can be solved for the latitude, longitude and height of the sensor, and its 3D orientation. I call that "gravitational GPS". It also works with low frequency magnetic or electric or other fields, and is only possible because data can be gathered in sufficient quantities, and globally, to sort out the sources over time.
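The basic observable behind that "gravitational GPS", sketched with a point-mass moon in Python. The vectors below are placeholders; a real solver would take positions from JPL ephemerides and fit station location and orientation to long time series of this signal:

    import numpy as np

    G = 6.674e-11        # m^3 kg^-1 s^-2
    M_MOON = 7.342e22    # kg

    def moon_tidal_acceleration(station_ecef, moon_ecef):
        # Tidal signal = lunar pull at the station minus pull at Earth's center.
        d = moon_ecef - station_ecef
        a_station = G * M_MOON * d / np.linalg.norm(d) ** 3
        a_center = G * M_MOON * moon_ecef / np.linalg.norm(moon_ecef) ** 3
        return a_station - a_center   # about 1e-6 m/s^2, station-dependent

    # Placeholder geometry: station on the equator, moon at mean distance
    station = np.array([6_378_137.0, 0.0, 0.0])
    moon = np.array([384_400_000.0, 0.0, 0.0])
    print(moon_tidal_acceleration(station, moon))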

So do you have a fast CCD or CIS sensor that gathers analog signals at high rates? If I have 16384 pixels per line, and it runs at 50,000 lines per second, I can use the statistics of the pixels to learn their individual variations, then use that for 5995.84916 meter resolution, using the 16384 data points per line to characterize and correlate between devices.

My idea was to use three linear sensors, along three axes, then check to see if I can track some strong signals. One possible signal is radio transmitters sending known patterns. I have also looked at using 3D ocean wave models to calculate the complex 3D gravitational signals at detectors, both for speed-of-gravity measurement and for imaging.

I have been at this kind of sensor array calibration and imaging for about 20 years now. But with no control over the data gathering, I have had to use whatever I could find in terms of raw data.

If I were just going to do simple correlations, then I should probably use several SDRs with antennas of various shapes. But my main question, for the last ten years or so, is whether the dark noise in every electronic device (cameras, memory devices, communication channels, scanners, radios, WiFi networks, lidar networks, GPS) can be used "en masse" not only to determine direction and shape, but to characterize the complex processes inside the target (moon, ocean, atmosphere, trucks, cars, people). It is a generic algorithm that seems to work for any of these devices. It has taken me this long to learn to calibrate and quantify the results and performance. So any time I see radar devices, 5G networks, lidar and GPS networks, what I see is the analog noise in each device, and particularly in each pixel of these increasingly dense sensor arrays. A 100-megapixel camera would be nice, if I could use the bandwidth and low-latency tools to sample the electromagnetic and gravitational fields. They need tools for scanning planets and asteroids. They need tools for earth exploration for minerals, magma flows, petroleum, salt and water. I have looked at the use of all sensors where any data is shared on the Internet, and look ahead to when they operate as efficient and integrated networks. Right now that process takes decades of human time, but it is the same for every global group. It is kind of like companies who sell on the Internet and don't realize the noise is more valuable than the devices when used at global scale.

I am looking ahead to the needs of "solar system colonization". The same detector arrays ("sensor fusion", "distributed sensing") are used in nuclear collision monitoring, nuclear radiation monitoring and plasma monitoring, and all of those have their own modeling and algorithm development groups, all re-inventing the wheel.

I am a bit tired; I work 12-18 hours a day, 7 days a week, and often 20-hour days. Besides these sensor arrays, which are my personal interest (I studied gravitational detectors in university and have kept track for more than 40 years), I am also tracking all the methods that groups use for global collaboration on the Internet. I review lots of papers and projects, and test and review many efforts. The original Internet Foundation should have been funded at about a billion dollars a year, and I have tried to do the same work and more, alone.

If I had to name my highest priority, it would be to scan and image the earth's atmosphere to help calibrate and improve the global, regional and local climate models, and particularly with the gravitational arrays, since gravitational signals are mostly not attenuated by passing through the earth.

But even with samples of these CIS in front of me, I am too tired and distracted to build a simple circuit to read the data at maximum resolution and sampling rate. Because acceleration measurements cannot easily distinguish gravitational from electromagnetic sources, the only solution I have found is to pair the noise sensors of all types with three-axis accelerometers, microphones, microbarographs, thermal imaging and other local measurements to reduce some of the confounding noise. So it is simple: measure the 20 to 40 basic noise sources for any location, and then use the arrays to scan the interior of the moon or sun or earth or Mars, and map each one to compare against the rat's nest of models of each, scattered over the one Internet that ought to be for everyone.
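A minimal form of that pairing in Python, assuming time-aligned numpy arrays; a production pipeline would use frequency-dependent (Wiener) filters rather than one least-squares fit:

    import numpy as np

    def subtract_local_noise(target: np.ndarray, aux: np.ndarray) -> np.ndarray:
        # target: (N,) noise-sensor stream; aux: (N, k) co-located auxiliary
        # channels (accelerometer axes, microphone, microbarograph, ...).
        A = np.column_stack([aux, np.ones(len(target))])  # include a DC term
        coef, *_ = np.linalg.lstsq(A, target, rcond=None)
        return target - A @ coef   # residual left for array correlation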

Richard Collins, Director, The Internet Foundation


Could you use a linear sensor that can do 16-bit monochrome at 50,000 lines per second, and store the data to SSD or disk? I was thinking that if you made two and sent me one, and then one more, that is three, and we could try to track the moon. It might just be a walk-through and completely useless, but it would be a good learning experience and would focus on the essentials. I don't like H__ because it leaves everyone to do everything themselves. Maybe I just find it clumsy to talk to people through these screens. I would like to be able to ask for things to be built while I work on the statistics. I was a "Senior Mathematical Statistician" during part of my career, and I have always felt that is what I do best: devise algorithms, merge and standardize datasets, help people learn to work together, and verify and test the models and methods.
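For scale, a rough storage budget for such a sensor, assuming the 16384-pixel line discussed above (my arithmetic, not a vendor specification):

    # 16384 pixels x 2 bytes x 50,000 lines/s
    pixels, bytes_per_px, lines_per_s = 16_384, 2, 50_000
    rate = pixels * bytes_per_px * lines_per_s   # bytes per second
    print(f"{rate / 1e9:.4f} GB/s, {rate * 86_400 / 1e12:.1f} TB/day")
    # -> 1.6384 GB/s, about 141.6 TB/day: past a single SATA SSD's sustained
    #    write speed, so striped NVMe drives or on-the-fly reduction are needed.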


The simplest thing that I want to see is 24 hours of data from a CIS of any kind. An 8.5 inch sensor at 300 dpi is 2550 readings per line. The Canon LiDE 300 can scan 11 inches in 10 seconds, that is 11*300/10 = 330 lines per second. Each line is 2550 readings times 3 colors at 2 bytes per reading, or 15,300 bytes per line. So that is 5,049,000 bytes per second, or 436.2336 GB per day. Correlating three sites requires calculating the position of the moon for the three locations in barycentric coordinates, which I can get from NASA JPL. I have been trying to help them improve sharing on the Internet. I have worked on and studied orbit determination and gravitational field measurement for close to 50 years.
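JPL Horizons is the authoritative source for those positions; as a convenient stand-in, astropy's built-in ephemerides can give both barycentric and per-site moon positions (the site coordinates below are placeholders):

    from astropy.coordinates import EarthLocation, get_body, get_body_barycentric
    from astropy.time import Time
    import astropy.units as u

    t = Time("2024-01-01T00:00:00", scale="utc")

    # Moon in barycentric (ICRS) coordinates
    print(get_body_barycentric("moon", t))

    # Apparent position of the moon for one site
    site = EarthLocation(lat=29.76 * u.deg, lon=-95.37 * u.deg, height=10 * u.m)
    moon = get_body("moon", t, location=site)
    print(moon.ra, moon.dec, moon.distance)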

The CCD/CIS is compact, and the pixels are more spread out and usually larger than in a camera. They send analog data, so better ADCs can be used to improve resolution. I have also been looking at fiber optic distributed sensing; those fibers can be kilometers long and measure every meter or so. So the CCD/CIS is a walk-through of the data handling, coordination and storage needed for larger and more expensive sensors. Fiber optic distributed sensing developed from time-of-flight cable testing methods, and now covers humidity, temperature, pressure, electric fields, magnetic fields and acceleration fields. But it is a bit expensive, and a lot of data.

I take really hard problems that need global-scale cooperation, then I look at all the steps needed to develop the new industries and devices required. If some industry is close, and maybe looking for new applications, I leave hints and sometimes detailed explanations and ideas. A global community needs a really hard problem that is not easily solved or gamed by a few people or groups. It is a hard balance, but I have spent about 50 years working with state, federal, international, corporate, nonprofit and social groups to work out best practices. The cooperation of many tens of millions of people, all focused on a single goal, is what I have specialized in: "covid", "solar system colonization", "global climate change". Every one of these has tens of millions or more people involved and affected. And every topic is boiling up from the bottom with no coordination or rules, so most of the effort (usually 99.99%) is wasted in duplication, incomplete efforts, greed, envy, hoarding, and the worst of human ways of "working together".


All I am trying to do now is collect raw data from a flatbed scanner for a month, continuously. I can buy disks and computers and scanners. What I cannot figure out is how to keep the scanner from moving, how to get the fullest resolution, and how to monitor and store the output. I have tried Raspberry Pi, Jetson Nano, Intel i7 and Ryzen 7 machines. I have used C and VB.net and JavaScript. I try to pick tools that are universally accessible: no proprietary dependencies, no organizational dependencies.
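One hypothetical capture loop, driving SANE's scanimage tool from Python on Linux. The options shown are standard scanimage options, but whether a given scanner supports 16-bit depth, and whether carriage movement can be suppressed, is driver-dependent; scanimage -L and scanimage --help list what the attached device accepts:

    import pathlib
    import subprocess
    import time

    outdir = pathlib.Path("scans")
    outdir.mkdir(exist_ok=True)
    while True:
        stamp = time.strftime("%Y%m%d_%H%M%S")
        with open(outdir / f"{stamp}.pnm", "wb") as f:
            subprocess.run(
                ["scanimage", "--format=pnm", "--mode", "Gray",
                 "--resolution", "300", "--depth", "16"],
                stdout=f, check=True)   # one pass per iteration, raw PNM out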

What would you recommend if you were going to store the data from a CCD/CIS array continuously? Which one would you buy new, and what would be needed to make it work? I can work from samples of full raw data for algorithm development, and I have written many statistical tools for this sort of thing.

No, I want to turn OFF the lights and have the sensor sealed. I am using the noise in the sensors as the signal. I even asked some large observatories that have gigapixel cameras if they would gather their noise data while observing, so it could be used for this type of imaging. Or run the cameras when they are not otherwise in use, but covered, recording only the noise in the sensors. The reason you can use a camera (covered and shielded) or any optical sensor is that noise at these low frequencies is not easy to shield, yet it carries high frequency noise that can be used for global correlations. I have found about 40 types of sensors. I even checked the noise in neutron statistics, because that can be tuned to follow these kinds of low-center-frequency, high-sampling-rate noise signals. I studied statistical mechanics and thermodynamics, electrodynamics, quantum mechanics, and quantum chemistry before I decided to focus on gravitational detection, and found that all the sensors use the same mathematics and assumptions at the core. It has been a long process, but I understand it well, and want to test a few simple things, like: can I use the noise in everyday sensors to track the sun and moon and image their interiors at cubic kilometer resolution?


Yes, the camera dark current is pretty well studied. But the newer cameras are fiddling with the data and applying arbitrary (freshman) algorithms to make it look pretty for people taking selfies. The CIS/CCD linear arrays seem to allow raw data collection; that is part of why I wanted to try them. For ten years or so I have run many low cost cameras and gathered data, but for geometry I thought these large sensors would have some advantages, and I have to see the data itself to work out the best algorithms and practices.


I see 16-bit CIS and CCD arrays that are up to a meter in length, at 1200 DPI, and that can run at 300,000 lines per second (250 inches per second, 6.35 meters per second). But I see that as 2.99792458E8/300000 = 999.308193 meter resolution (about 1 km) for imaging using noise from the sensors. The sun and moon move slowly, but I can use them to test and improve. My first target after that is the atmosphere and ocean, for global climate change. When I worked for Phillips Petroleum, I worked on global climate change, alternative fuels, the hydrogen economy, CNG, LNG, clean air, US and global fleet emissions, methane polymerization and other things. When I worked for the Federal Transit Administration, I worked on magnetic levitation trains, CNG, LNG and alternative fuels. I remember most everything I read, see or think about, so I can use all those methods in any combination.


I sometimes wish I had not skipped my undergraduate electronics lab to take graduate quantum mechanics and electrodynamics. I have really good theory and organizational methods, but cannot wire circuits. If I force myself, I can do all the math and modeling, but it is frustrating, because there are lots of people way better at that, whom I could help with things they don't know. That is what collaboration is about. It is just hard to do through these clumsy Internet sites, which ought to know better and pitch in to help everyone.

Richard K Collins

