LIGO Detector locations and times for solar system correlation studies

Jonah,
Thanks, that first link seems to answer my immediate question about the reference location and calculating time between sensors. Thanks for the pointer; I should have been able to find such a key piece of the puzzle on my own.
My son, Ben, set up a Linux server with Python and ran some samples. I found data from Aug 2017 when it was quiet and H1, L1, and V1 were all gathering data. That should give us some practice setting things up.
LIGO has material scattered all over the Internet. Anyone who gets paid to work on LIGO probably gets an orientation, but anyone on the outside ("share with the world") sees a mess. All that reading and searching is hard, with lots of false leads, duplicates, and variations. It is slightly better than some groups on the Internet, but far from ideal.
The LIGO group acts like 10,000 individuals and somewhat like a few hundred groups, and its material is spread in millions of little snippets over the Internet. I pity the people who do not read English, or who don't get much preparation, or who have no computers or high-speed Internet. There are more people who can understand gravitational waves than have access. A graduate-student path for everyone won't work for many parts of the world. Since COVID, education, research, jobs, and collaboration have accelerated onto the Internet, but many organizations are not helping, or are going backwards.
Is there anywhere to share the results of investigations? The LIGO Scientific Collaboration is way out of date now. With only a couple of thousand people, there are too many new things and not enough people, by almost ten-thousand-fold. The next generations of people are not going to follow traditional routes.
I will check all the universities mentioned at https://en.wikipedia.org/wiki/LIGO_Scientific_Collaboration but I know that is not everyone. Some of the best work is in groups that have nothing to do with laser interferometer methods. The list is definitely not representative of all the countries, nor all the groups, nor demographically complete. There are a lot of companies and small industries growing. I think I know where most of the pieces are, but I always look. Yeah, lots of old material, lots of broken links, lots of partial and out-of-date copies of things.
I am looking for ways to upgrade the LIGO laser facilities. The next generation of atom and electron detectors will be smaller, more sensitive, and much less expensive. But I think the current setups could squeeze out another few orders of magnitude, especially with time-of-flight methods. That works more easily with Earth-based signals; they are much stronger and closer, and can be verified from many directions. I am pushing on correlations with all the sensor networks. The weakness of the LIGO facilities is their lack of integration with other groups for Earth and solar system signals. There are not nearly enough people, and the problems are severe. Even earthquake early warning alone would help, and the routine calibrations will transform some tough problems now. If this works, solar and stellar models will get a big boost.
Richard Collins, Director, The Internet Foundation

Hello,
I finally found a simple problem that can use the LIGO and Virgo strain data, but I was not sure where to find the positions and orientations of the detectors, and a clear mathematical map of their sensitivity by direction.
https://en.wikipedia.org/wiki/LIGO has the latitudes and longitudes, but not the heights. And I am not sure where within each facility to assign the location for the GPS start and end time of each sample.
LIGO Hanford Observatory (H1): 46°27′18.52″N 119°24′27.56″W = (46.455144, -119.407656) WGS84
LIGO Livingston Observatory (L1): 30°33′46.42″N 90°46′27.27″W = (30.562894, -90.774242) WGS84
Virgo (V1): 43.6313°N 10.5045°E = (43.6313, 10.5045) WGS84
Certainly the Virgo coordinates are known more precisely than that. Maybe someone can update Wikipedia.
https://www.gps.gov/systems/gps/performance/accuracy/ says "40 nanoseconds, 95% of the time". I will look up the more precise methods. If a college wanted to run this experiment, they would want low-cost, easy-to-implement timing.
https://www.ligo.org/scientists/GW100916/detectors.txt has the heights. Maybe you should give that data to Wikipedia in a shareable format.
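To sanity-check the geometry, here is a minimal Python sketch (mine, not LIGO's) that converts the WGS84 coordinates above to Earth-centered (ECEF) coordinates and computes the straight-line light travel time between each pair of sites. The heights are zero placeholders; the real values are in that detectors.txt file.

import math

C = 299792458.0  # speed of light, m/s

def geodetic_to_ecef(lat_deg, lon_deg, height_m):
    """WGS84 geodetic coordinates to ECEF (x, y, z) in meters."""
    a = 6378137.0              # WGS84 semi-major axis, m
    f = 1.0 / 298.257223563    # WGS84 flattening
    e2 = f * (2.0 - f)         # first eccentricity squared
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    n = a / math.sqrt(1.0 - e2 * math.sin(lat) ** 2)  # prime vertical radius
    x = (n + height_m) * math.cos(lat) * math.cos(lon)
    y = (n + height_m) * math.cos(lat) * math.sin(lon)
    z = (n * (1.0 - e2) + height_m) * math.sin(lat)
    return (x, y, z)

# Heights are placeholders (0 m); the true values are in detectors.txt.
sites = {
    'H1': geodetic_to_ecef(46.455144, -119.407656, 0.0),
    'L1': geodetic_to_ecef(30.562894, -90.774242, 0.0),
    'V1': geodetic_to_ecef(43.6313, 10.5045, 0.0),
}

for p, q in [('H1', 'L1'), ('H1', 'V1'), ('L1', 'V1')]:
    ms = 1000.0 * math.dist(sites[p], sites[q]) / C
    print(f'{p}-{q}: {ms:.2f} ms')   # roughly 10, 27, and 26 ms

The chord distance through the earth, not the surface distance, is what matters for time-of-flight comparisons, which is why the conversion to ECEF comes first.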
It was hard to find the directions and dimensions in precise form. If you assume that people will need certain critical information, and proper tools, for any correlation study, then it is not hard to put all of that in one place, easy to use. You have supercomputers, or people who can share them, to run correlations for anyone. Why force every person or group to do their own? You are the experts. You cannot expect a high school student to know all these arcane things, even if they can select dates, locations, transformations, and visualizations. Learn it by using, then learn how to improve and change things. There are about 2 billion first-time learners in the world (ages 5 to 21) now, and another billion with backgrounds to be interested in gravitational imaging.
Most people in the world do not use radians; that is easy to accommodate. On the whole Internet the most common representation of angles is decimal degrees, or decimal cycles.
Gravitational Wave – GWpy uses the GPS time for the example at https://gwpy.github.io/docs/v0.1/examples/timeseries/qscan.html
But they are using faded fonts on many pages, which are not accessible to people with limited vision.
I can use https://gwpy.github.io/docs/v0.1/examples/frequencyseries/coherence.html but it is also using faded fonts and is hard to read. I will see if I can get my son to help me set up GWpy to run some samples. All I have to do is run these kinds of correlations and coordinate the other detectors.
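A minimal sketch of that kind of run, patterned on the GWpy coherence example above. The GPS start time here is a placeholder in early August 2017; you would pick a window where both detectors were actually recording.

from gwpy.timeseries import TimeSeries

start = 1186200000           # placeholder GPS time, around 8 Aug 2017
end = start + 64             # 64 seconds is enough for a first look

h1 = TimeSeries.fetch_open_data('H1', start, end, sample_rate=16384)
l1 = TimeSeries.fetch_open_data('L1', start, end, sample_rate=16384)

# Spectral coherence between the two strain channels: 4 s FFTs, 50% overlap.
coh = h1.coherence(l1, fftlength=4, overlap=2)

plot = coh.plot(xscale='log', ylabel='Coherence')
plot.savefig('h1_l1_coherence.png')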
I think LIGO should be responsible for everything that references LIGO on Wikipedia and the web. It is not anyone else's job to do that for you, and the mess hurts anyone trying to use LIGO data in conjunction with other datasets. There are about a billion people now with backgrounds sufficient to understand and run LIGO correlations and samples. With not too much work, compared to what people are spending now, the data could be made universally accessible to all schools, colleges, universities, researchers, and curious people anywhere. I know many mathematicians and engineers in different fields who could easily understand and use the data if it were clearly and completely documented. Most people are not programmers, and LIGO has no clear symbolic math datasets for people who want symbolic math and related simulations. Computer languages cannot be converted to mathematics, but mathematical forms can have lossless and precise conversion to computer languages.
Richard Collins, Director, The Internet Foundation
Good stuff.
Here is a picture of L1 and the other detectors, H1, H2, G1, V1.
Here is Gravitational Wave Open Science Center (GWOSC) for data https://www.gw-openscience.org/about/
I want to use the 16 kHz data from the O2 Data Release (30 Nov 2016 to 25 Aug 2017), when H1, L1, and V1 (Hanford, Livingston, and Virgo in Italy) were all running at once. Three detectors can point to one spot in 3D.
https://www.gw-openscience.org/archive/O2_16KHZ_R1/ is that dataset. It comes in HDF5 format, which h5py (or GWpy) can read. Each file (I need three of them, one per detector) is 4096 seconds long, or 4096 * 16384 = 67,108,864 values. The moon is only about 1.28 light seconds away. The earth's radius is 6371000 meters, or 0.0212513685 seconds, which is 348.1824 samples; the diameter is twice that, about 696 samples. If the moon were aligned on Hanford and then through the earth to Italy, the signal would get to Hanford first and arrive at Virgo about 27 milliseconds, or roughly 446 samples, later (the straight-line chord between the two sites is about 8,160 km). So you would take a sample from Italy, then go back about 446 samples to pick the matching one from Hanford, Washington; Livingston, Louisiana falls in between, at most about 164 samples after Hanford (that chord is about 3,000 km, or 10 milliseconds).
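Here is a short h5py sketch for pulling the strain and timing metadata out of one of those files. The filename is a placeholder in the GWOSC naming pattern, and the group and attribute names follow the layout the GWOSC tutorials describe.

import h5py

# Placeholder filename in the GWOSC naming pattern for the O2 16 kHz release.
fname = 'H-H1_GWOSC_O2_16KHZ_R1-1187651584-4096.hdf5'

with h5py.File(fname, 'r') as f:
    strain = f['strain/Strain'][:]              # 4096 * 16384 = 67,108,864 samples
    dt = f['strain/Strain'].attrs['Xspacing']   # 1/16384 s between samples
    gps_start = f['meta/GPSstart'][()]          # GPS time of the first sample
    duration = f['meta/Duration'][()]           # 4096 s

print(len(strain), 'samples,', dt, 's spacing, starting at GPS', gps_start)
# The sample index for any GPS time t inside this block:
# i = round((t - gps_start) / dt)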
The data is stored by time at each location. But I will compare the data at a point inside the moon, or nearby: take the data from each station at the moment the signal, traveling at the speed of light, would arrive there. Using these dates and one starting sample, I get the distances from the moon to these locations and can give you the list of samples to compare:
L1    H1    V1
121   80    12   – take the 121st sample from L1, the 80th from H1, the 12th from V1
122   81    13
123   82    14
It won't be exactly the same offset in each row, since the distances between the moon and these detectors are changing, but slowly, not really fast. I will give the three index values based on the time from the moon to each detector. Put those values in three arrays, then correlate the time series – L1-H1, L1-V1, H1-V1 – and then scratch my head and try to see if it makes any sense. It will look more chaotic than those wiggly lines, but I think I will figure it out.
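A sketch of that bookkeeping with astropy: the topocentric moon distance from each site gives a per-detector delay in samples, and the series are shifted so a common index sees the same wavefront. The GPS time is a placeholder, and the random arrays stand in for strain loaded from the three HDF5 files (see the h5py sketch above).

import numpy as np
from astropy import units as u
from astropy.constants import c
from astropy.coordinates import EarthLocation, get_body
from astropy.time import Time

FS = 16384  # samples per second

# Detector locations (heights omitted; see detectors.txt for exact values).
detectors = {
    'H1': EarthLocation.from_geodetic(-119.407656 * u.deg, 46.455144 * u.deg),
    'L1': EarthLocation.from_geodetic(-90.774242 * u.deg, 30.562894 * u.deg),
    'V1': EarthLocation.from_geodetic(10.5045 * u.deg, 43.6313 * u.deg),
}

t = Time(1187651584, format='gps')  # placeholder: start of one 4096 s block

# Light travel time from the moon to each site, in samples.
delays = {name: float((get_body('moon', t, location=loc).distance / c)
                      .to(u.s).value) * FS
          for name, loc in detectors.items()}

earliest = min(delays.values())
offsets = {name: int(round(d - earliest)) for name, d in delays.items()}
print(offsets)  # extra samples of travel time relative to the first arrival

# Placeholder strain arrays so this runs standalone; in practice load the
# real strain from the three HDF5 files, all starting at the same GPS time.
rng = np.random.default_rng(0)
h1, l1, v1 = (rng.standard_normal(FS * 16) for _ in range(3))

strain = {'H1': h1, 'L1': l1, 'V1': v1}
n = min(len(s) for s in strain.values()) - max(offsets.values())

# Shift each series so index i corresponds to the same wavefront from the moon.
aligned = {k: s[offsets[k]:offsets[k] + n] for k, s in strain.items()}

for p, q in [('L1', 'H1'), ('L1', 'V1'), ('H1', 'V1')]:
    r = np.corrcoef(aligned[p], aligned[q])[0, 1]
    print(f'{p}-{q} correlation: {r:+.4f}')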
Here is an index of the data available.  I chose “JSON formatted table of files and data quality” at https://www.gw-openscience.org/archive/O2_16KHZ_R1/ and pressed continue.
There are URLs in the JSON index file – I will have to find blocks of 4096 seconds where all three were running. One day is 86400/4096 = 21.09375 blocks of 4096 seconds. That is about 21 GB per day. I think they were all running from the first of August to the 25th of August 2017. That neutron star collision (GW170817) occurred on Aug 17, so I will look before or after that.
Here are three files covering the same 4096 seconds: the first full block of 4096 on 25 Aug 2017. I looked through the three JSON files and found the last day.
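To find those co-running blocks programmatically, here is a sketch using the gwosc client package (pip install gwosc) instead of hand-parsing the JSON index. The GPS bounds are approximate placeholders, and I am assuming the GWOSC filename pattern ...-<gps_start>-<duration>.hdf5 when parsing start times.

from gwosc.locate import get_urls

# Approximate GPS bounds for 18-25 Aug 2017 (placeholders; refine as needed).
start = 1187049618   # ~ 18 Aug 2017 00:00 UTC
end = 1187654418     # ~ 25 Aug 2017 00:00 UTC

def block_starts(detector):
    """GPS start times of the available files for one detector, parsed from
    the GWOSC file names (assumed pattern: ...-<gps_start>-<duration>.hdf5)."""
    return {int(url.rsplit('-', 2)[1]) for url in get_urls(detector, start, end)}

# Blocks where all three detectors have published data. The default listing
# is the 4 kHz release, but the 16 kHz release should cover the same blocks.
common = block_starts('H1') & block_starts('L1') & block_starts('V1')
for gps in sorted(common):
    print(gps)   # each value names one 4096 s block with all three running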