Richard Behiel – on linear regression for 3D gravitational calibration, and 3D gravitational imaging

Richard Behiel: The Beauty of Linear Regression (How to Fit a Line to your Data)

Richard Behiel, The vector tidal gravity signal from the sun, moon, and earth at a superconducting gravimeter station is simple Newtonian GM/r^2: the sun's vector acceleration at the station, minus the sun's acceleration at the center of the earth; the moon's vector acceleration at the station, minus the moon's vector acceleration at the center of the earth. Add the centrifugal acceleration of the station in earth-centered coordinates for the WGS84 latitude and longitude. Rotate to station North, East, Vertical. Then a LINEAR regression for each axis. It is absolutely beautiful.

The potential signals travel at the speed of light, so the sun's position 499 seconds ago is used, not the sun now. The SGs are single-axis 1 sps instruments, but with 0.1 nm/s^2 sensitivity at 1 sps. At quiet stations, all three axes of the broadband seismometers match the vector signal with only a linear regression, so global networks can be locked precisely to the sun and moon, and thus to each other. The three-axis instruments at 100 sps have 8,640,000 readings per axis per day, and only 2 parameters (a, b) (offset, multiplier) are needed for each.

It is also possible to solve for the orientation. The "Transportable Array" has some instruments with no orientation data: put a rotation matrix in the model, then solve for the direction. The position (latitude and longitude) and orientation (three angles) at permanent stations can be monitored over decades, "as good as or better than VLBI".

Extending the sampling rate to Msps, Gsps, or higher allows "time of flight correlation imaging". An active source region inside the earth will generate seismic signals, but it will also generate Newtonian gravitational potential signals, and the gravimeters and gravity gradiometers measure the resulting change in the local gravitational potential gradient. The density changes, the potential changes locally and diffuses outward at the speed of light and gravity, and the three-axis gravimeter arrays pick it up and correlate.
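A minimal sketch of the per-axis calibration step, assuming the modeled tidal acceleration series for one axis (sun minus earth-center, moon minus earth-center, plus centrifugal, rotated to North/East/Vertical) has already been computed; the function name and synthetic numbers are illustrative, not from any real station:

```python
import numpy as np

def fit_axis(model, measured):
    """Least-squares fit measured ~ a + b * model for one axis.
    Returns (a, b): the offset and multiplier, the only two
    parameters needed to lock a channel to the tidal model."""
    A = np.column_stack([np.ones_like(model), model])
    (a, b), *_ = np.linalg.lstsq(A, measured, rcond=None)
    return a, b

# Synthetic check: one day at 1 sps, a channel that offsets and
# scales a semidiurnal-like model signal, plus instrument noise.
rng = np.random.default_rng(0)
t = np.arange(86400)
model = 1e-6 * np.sin(2 * np.pi * t / 44714)          # m/s^2, illustrative
measured = 3.0 + 1.02 * model + 1e-10 * rng.standard_normal(t.size)
a, b = fit_axis(model, measured)
```

With 86,400 samples and only two parameters, the fit recovers the offset and multiplier to far better than the per-sample noise level.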
The "image" is the same kind of heat map you used for the (a, b) plots, but using multispectral displays, FFT waterfalls, FFT time series, and other tools.
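That (a, b) heat map is just the sum-of-squared-errors surface evaluated on a grid; a self-contained sketch with synthetic data (plot the `sse` array with any heat-map tool to get the picture):

```python
import numpy as np

# Synthetic data from a known line y = 0.5 + 2.0 * x, plus noise.
rng = np.random.default_rng(1)
x = np.linspace(-1, 1, 200)
y = 0.5 + 2.0 * x + 0.05 * rng.standard_normal(x.size)

# Sum-of-squared-errors surface over a grid in (a, b) parameter space.
a_grid = np.linspace(-1, 2, 121)
b_grid = np.linspace(0, 4, 161)
A, B = np.meshgrid(a_grid, b_grid, indexing="ij")
sse = ((y - (A[..., None] + B[..., None] * x)) ** 2).sum(axis=-1)

# The minimum of the surface sits at the best-fit (a, b).
i, j = np.unravel_index(sse.argmin(), sse.shape)
a_best, b_best = a_grid[i], b_grid[j]
```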

The Japan earthquake registered as a "speed of gravity" signal on the SG network and the broadband seismometer network. You can use imaging arrays to build and track 3D volumetric images of the atmosphere, the oceans, and the inside of the earth. The fields that work on those kinds of problems teach "interferometry", but that is just a subset of the broader study of statistics and modeling using real data. Electron interferometry is just devices using electrons that can compute correlations. Machine learning is mostly simple statistical regression underneath. You can have people working on real data and real problems if you do not force them to reinvent mathematics and methods. Teach them how to use computers and shared data as tools, not as oppressive masters where humans have to memorize all the steps.
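At its core, time-of-flight correlation is finding the lag that maximizes the cross-correlation between station records; a minimal synthetic sketch (the two "stations" and the 10-sample delay are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
n, true_lag = 4096, 10
source = rng.standard_normal(n)

# Station B sees the same source signal delayed by `true_lag`
# samples; both records carry independent instrument noise.
station_a = source + 0.1 * rng.standard_normal(n)
station_b = np.roll(source, true_lag) + 0.1 * rng.standard_normal(n)

# Full cross-correlation; the peak index gives the relative delay,
# which converts to a propagation-path difference at the wave speed.
corr = np.correlate(station_b, station_a, mode="full")
lag = corr.argmax() - (n - 1)
```

With many station pairs, the set of recovered lags constrains the 3D location of the source; that is the correlation step behind the imaging described above.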

There are about 8 billion humans, and 2 billion of them are between 4 and 24: "first time learners". Ideally, every human living in the next century of "solar system colonization" will have tools to gather, select, display, summarize, and work with many visualizations, NOT be made to memorize steps and images of particular representations, most of which are derived from blackboards, quill pens on parchment, and "paper" technologies. Even a calculator is a crutch. A computer that requires you to memorize ten million scraps is a crutch and a burden. A language AI (one that knows its input, that can lay out its steps, that can absorb and use symbolic math, that can run all computer software) would help. Most of the 700 million humans over 65 have long lives and experiences; they might well all be more useful and engaged if they had tools that would let them watch the steps animated, rocking back and forth, step by step, debug and trace, working with real problems. Not just animations on the screen to memorize, but internal memory used to index the pieces, tying those pieces to other similar or connected instances, and mapping to insert the right data into the right slots, then "calculate", "visualize", "animate".

Your "generalization" at the end is not a generalization, but a summary of what you did in the video. The essential steps were taking the partial derivatives and solving the summation equations. You need to get the computer to do the symbolic math for you; then you have tools, not changing images on the screen to memorize. The partial derivatives, multiplications, and factoring are no more difficult than addition, subtraction, multiplication, and division. The "mathematics" programs with symbolic capabilities that you might recommend are all bloated, expensive, and poorly shared. Putting things in matrix form is simpler for some, if they happen to have sat through a class or watched the right videos. The gradient descent can be used much more efficiently: your blue dot in the (a, b) space was looping around, not really focused; it wandered and repeated many times. You could have, should have, gone immediately to the gradient descent. AND you should have given at least a 3D problem, not just a line. You want to generalize; that really means helping people learn to solve real problems efficiently (their time, their efficiency, their not having to wander and memorize).
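The "get the computer to do the symbolic math" step is possible today with free software such as SymPy (one option among several): take the partial derivatives of the squared-error sum with respect to a and b, and solve the resulting normal equations. Here the error is written in terms of the data sums n, Sx = Σx, Sy = Σy, Sxx = Σx², Sxy = Σxy, Syy = Σy², obtained by expanding Σ(y − a − bx)²:

```python
import sympy as sp

a, b, n = sp.symbols('a b n', positive=True)
Sx, Sy, Sxx, Sxy, Syy = sp.symbols('S_x S_y S_xx S_xy S_yy')

# Sum of squared residuals for y ~ a + b*x, expanded into data sums.
E = Syy + n*a**2 + b**2*Sxx - 2*a*Sy - 2*b*Sxy + 2*a*b*Sx

# The computer takes the partial derivatives and solves the
# normal equations symbolically; no hand algebra required.
sol = sp.solve([sp.diff(E, a), sp.diff(E, b)], [a, b])
```

`sol` contains the familiar closed forms, e.g. b = (n·Sxy − Sx·Sy)/(n·Sxx − Sx²), derived by the machine rather than memorized.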

Richard Collins, The Internet Foundation


Director, The Internet Foundation. Studying formation and optimized collaboration of global communities. Applying the Internet to solve global problems and build sustainable communities. Internet policies, standards and best practices.
