Low frequency variations of high frequency signals

I have been looking at the general situation of tracking and studying slow variations of fast signals.  If you have a star, its main signal is likely electromagnetic or gravitational.  You track its “colors” with an FFT or spectrometer, and perhaps its slow variations over decades or longer.

Searching (“pulsar” OR “pulsars”) “MHz” gives 3.1 Million entry points, and people study “pulse profile”, “Faraday rotation”, “period”, “giant radio pulses”, “radio bursts”, “daily variability”.

Just looking for new ones or better data on old ones, (“pulsar” OR “pulsars”) (“survey” OR “surveys”) gives 1.7 Million entry points.

But they are using center frequencies in the MHz (Mega 6) and GHz (Giga 9) range, or “optical” sensors in THz (Tera 12), PHz (Peta 15), EHz (Exa 18), ZHz (Zetta 21), YHz (Yotta 24), and then cosmic rays, gamma rays (10 EHz+) and gravitational waves.  Since the gravitational waves are so fine grained, I usually consider them very high frequency, but people are only measuring KHz (Kilo 3).  A slow flow of water might be one drop per second, but its actual particle frequency can be Zetta particles per second.  A slow flow of current can be milliAmperes (milli -3), microAmperes (micro -6) or nanoAmperes (nano -9).  But those are PetaElectronsPerSecond Peps (Peta 15), Teps (Tera 12) and Geps (Giga 9).

So listen to an AM radio station at KHz or MHz (Kilo is bigger than one, so use K, not k, which I reserve for smaller-than-one prefixes), and its sounds are KHz and Hertz, down to nanoHertz if you are patient enough.  If you listen at times when there are no signals broadcast, or if you know the exact signal from taking it from their digital source or from a local measurement, you can subtract the signal and look at the channel information, and measure it as finely as you are able.
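A minimal sketch of that subtract-and-look idea.  The sample rate, carrier frequency and channel drift here are scaled-down stand-ins I made up so it runs quickly, not a real AM station:

```python
import numpy as np

# Hypothetical subtract-and-inspect sketch: a known carrier picks up a slow
# multiplicative channel variation; subtracting the known signal and mixing
# back to baseband exposes the slow term.
fs = 1_000_000                          # 1 Msps recording
t = np.arange(fs) / fs                  # 1 second of samples
carrier = np.cos(2 * np.pi * 100_000 * t)    # 100 KHz "station", known exactly
drift = 1e-3 * np.sin(2 * np.pi * 5 * t)     # slow 5 Hz channel variation
received = carrier * (1 + drift)

residual = received - carrier           # subtract the known signal
demod = residual * carrier              # mix the residual back to baseband
spectrum = np.abs(np.fft.rfft(demod))
freqs = np.fft.rfftfreq(len(demod), 1 / fs)
print(freqs[np.argmax(spectrum[1:]) + 1])    # recovers the 5 Hz channel term
```

The longer you record, the finer the frequency resolution of the residual, which is the whole game for the sub-Hertz bands.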

There are people digitizing signals at Gsps (Giga SamplesPerSecond), Msps, Ksps, sps, msps (milliSps), usps (microSps), nsps (nanoSps), psps (picoSps), fsps (femtoSps).

What I would like to do is extend the “radio telescope” range down to femtoHertz (fHz -15) and aHz (attoHz -18) and zHz (zeptoHz -21) and yHz (yoctoHz -24).
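One constraint worth stating plainly: to resolve a spectral feature of width Δf you have to observe for roughly T ≈ 1/Δf.  A quick calculation (the year length is the only input I add) shows what those low bands cost in patience:

```python
# Rule of thumb: frequency resolution df requires observation time T ~ 1/df.
SECONDS_PER_YEAR = 365.25 * 24 * 3600   # ~3.156e7 seconds

for name, df in [("microHertz", 1e-6), ("nanoHertz", 1e-9), ("femtoHertz", 1e-15)]:
    T = 1.0 / df
    print(f"{name}: {T:.3g} s = {T / SECONDS_PER_YEAR:.3g} years")
```

nanoHertz already means roughly 30 years of data, which is why pulsar timing work takes decades; directly resolving femtoHertz would take geological patience, so those bands would have to be inferred from models and partial cycles rather than fully resolved.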

I think these prefixes are dumb now, because they are not symmetric.  It might be better to name them all with numbers (Hz 9 for GHz, Hz 24 for YottaHz).  I can remember and calculate with 24s but not “yottas”.  Let the computer remember them.  Or make the browsers smarter so they assist, rather than being dumb terminals serving static pages and canned displays.
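A tiny sketch of that numeric naming, where the computer holds the prefix table and a human only has to write the exponent (the `hz` helper is my invention for illustration, not any standard):

```python
# Map base-10 exponents to SI prefix names; the computer remembers, not you.
PREFIXES = {24: "Yotta", 21: "Zetta", 18: "Exa", 15: "Peta", 12: "Tera",
            9: "Giga", 6: "Mega", 3: "Kilo", 0: "",
            -3: "milli", -6: "micro", -9: "nano", -12: "pico",
            -15: "femto", -18: "atto", -21: "zepto", -24: "yocto"}

def hz(exponent: int) -> str:
    """Render 'Hz N' notation as a conventional prefixed unit if one exists."""
    if exponent in PREFIXES:
        return f"{PREFIXES[exponent]}Hertz"
    return f"Hz {exponent}"        # no standard prefix: keep the number

print(hz(9))     # GigaHertz
print(hz(-15))   # femtoHertz
print(hz(-30))   # Hz -30
```

The last line shows the advantage: exponents outside the named range fall back to the number itself, so nothing is unrepresentable.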

A red star has a center frequency of about 400 TeraHertz (12) or 0.4 PetaHertz (15).  But if you measure its variations with a Gsps (GigaSamplePerSecond) analog-to-digital converter (ADC) enabled sensor, you can report on variations closer to the analog time variations that make up our sense of “color”.  I just checked and there is a patent for a TeraHertz analog-to-digital converter.  But I would say it should be Tsps (Tera samples per second) to distinguish the digital recording capabilities from the analog signal variations and time resolution of the signal being tracked.

If you look closer at a red signal, you will find that it is made up of finer grained changes at much higher frequencies.  The signal comes to you at the average speed of light and gravity, but its spatial structure (moving rigidly is a rare occurrence) varies over nanoMeter (nanoM or nM), picoMeter, femtoMeter and smaller distances.

The reason I am using CamelCase is that there are so many people using programming languages where reading picoMeter is easier than picometer, and never PICOMETER.  If you read as many computer languages as often as I do, you can appreciate the extra milliseconds or seconds it takes to resolve milliamperes, or magnetophotoacoustic (MagnetoPhotoAcoustic) for a process that uses Magnetic excitation of a structure to produce light, which then excites a phonon (sound) that can be measured.  But if you trace through those words in many different situations, you will likely come to what I have learned: that Magnetic (THz) Photo (PHz) Acoustic (GHz) would be easier to understand.  Or just say Tera Peta Giga as the chain of variations coming from a source of Tera through a Peta material property to produce a Giga signal of interest.

It looks like I have been writing about these sorts of things.  I searched for PetaHertz and ADCs and found my own post, a Note in Houston Astronomical Society about Gravitational Engineering, at https://theinternetfoundation.org/?p=1764

Any signal that can be digitized can be processed to get the slower frequencies.  The “cheap” software defined radios (SDRs) are using tens of Msps up to Gsps (Giga samples per second) ADCs to record variations in signals.
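A minimal sketch of getting slower frequencies out of a fast stream: detect, average and decimate.  The rates and signals here are invented for illustration, not from any particular SDR:

```python
import numpy as np

# Average-and-decimate sketch: a 0.2 Hz variation riding on a 10 KHz signal,
# sampled at 100 Ksps, reduced to a 10 sps record that carries the slow term.
rng = np.random.default_rng(1)
fs = 100_000                                   # 100 Ksps input
t = np.arange(10 * fs) / fs                    # 10 seconds of samples
fast = np.sin(2 * np.pi * 10_000 * t)          # 10 KHz carrier
slow = 0.01 * np.sin(2 * np.pi * 0.2 * t)      # 0.2 Hz variation of interest
x = (1 + slow) * fast + 0.1 * rng.standard_normal(len(t))

power = x * x                                  # square-law detector -> baseband
block = 10_000                                 # average 10,000 samples per point
decimated = power.reshape(-1, block).mean(axis=1)   # 100 points at 10 sps
print(len(decimated), decimated.mean())
```

Each stage of averaging narrows the bandwidth and pushes the usable frequency floor lower; stack enough stages and Msps input becomes a nanoHertz-capable record, limited only by storage and clock stability.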

Now what I have been looking at more closely is using ZERO FREQUENCY methods to monitor across all frequencies.  That is not strictly possible, but it makes a cute term.  For any gain-bandwidth product (where for a given power expended you can afford a range of gains for different bandwidths), if you narrow the bandwidth you can go to higher gains.  This works across all frequencies.  It is just an algorithm, memory and processing.
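Here is a toy version of trading bandwidth for gain, written as a lock-in style multiply-and-average.  The frequencies and amplitudes are invented, and a real lock-in adds phase handling I skip here:

```python
import numpy as np

# Lock-in sketch: a milliVolt-scale tone buried in noise 100x larger is
# recovered by multiplying by a reference and averaging, which narrows the
# detection bandwidth to roughly 1/measurement-time (~0.01 Hz here).
rng = np.random.default_rng(0)
fs, seconds, f0 = 10_000, 100, 137.0           # 137 Hz tone, invented numbers
t = np.arange(fs * seconds) / fs
x = 1e-3 * np.sin(2 * np.pi * f0 * t) + 0.1 * rng.standard_normal(len(t))

# Multiply by the in-phase reference and average; mean(sin^2) = 1/2, hence x2.
amplitude = 2 * np.mean(x * np.sin(2 * np.pi * f0 * t))
print(f"recovered amplitude ~ {amplitude:.2e}")   # close to the true 1e-3
```

Doubling the measurement time halves the effective bandwidth, so the achievable "gain" on a known-frequency signal grows with nothing but patience, memory and processing, which is the point of the paragraph above.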

Why would you do this? Because if you want to use gravitational methods (very small signals in the milliVolt, microVolt, nanoVolt, picoVolt, femtoVolt and smaller range) then you are going to be handling lots of data and tracing out “where did this signal come from?”.  That is about the only way you can separate a “gravitational” signal from a “magnetic” or “acoustic” or “electromagnetic” one.  A “red” signal will have lots of other colors or frequencies mixed in.  It can have TeraHertz modulation or variations.

An analog “photo” detector is built to respond to “red” at 635 nanoMeters (472 THz).  We don’t have cheap 1000 Tsps ADCs yet.  I am sure there is someone, somewhere, who records “red” signals sufficiently well to play them back.  Not cameras yet; they are usually slow (30 fps, FramesPerSecond, or 60 fps, or 120 fps, or 1000 fps).  Some of the “area of interest” or “region of interest” or “windowed image sensors” can get to 100,000 fps or faster.  Take the bandwidth: 1920*1080*3 Bytes per frame at 60 fps raw is 373,248,000 Bytes per second.  If you could sample ANY pixels from the image at that rate, in principle you could grab a 20×50 area at 373,248,000/(20*50*3) = 124,416 images per second of 20*50*3 Byte images, using the same channel capacity.

60*1920*1080/(20*50) is easier if the same bytes per pixel are used.
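The arithmetic in the two paragraphs above, spelled out so it can be checked:

```python
# Fixed channel capacity spent on big slow frames or tiny fast ones.
width, height, depth, fps = 1920, 1080, 3, 60      # 1080p raw at 3 Bytes/pixel
bytes_per_second = width * height * depth * fps
print(bytes_per_second)                    # 373,248,000 Bytes per second

roi_bytes = 20 * 50 * depth                # a 20x50 region, same Bytes per pixel
print(bytes_per_second // roi_bytes)       # 124,416 small images per second

# Shortcut when Bytes per pixel match: ratio of pixel counts times fps.
print(fps * width * height // (20 * 50))   # 124,416 again
```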

Right now, we have not separated the channel capacity of our computers from the content.  If a camera were able to send 373.248 MBps (MegaBytes per second), it should not matter which bytes those are.

Why would you do that (use more than a hundred thousand frames per second for a 1000 pixel region)?  Because it can be a camera used for astrophotography and you are stacking all those frames to build a 3D model of a star.  Or you have an emergency or security situation that requires ultrafast zoom on tiny areas.

What limits this?  Gain and bandwidth.  If you take a camera that can do 480 frames per second for a 128*128 pixel region, you have to push the gain up to get the signal in the range of the ADC.  If you just take frames faster, for a given Watts/Meter^2 (Wpm2) source, then you can usually just amplify the signal (turn up the gain) and then process the data to take care of any nonlinearities in the gain or noise variations.

Did you know that most signals act like water or fluid?  If you increase the flow of electrons in a resistor, that “kT” noise is more like turbulence and follows Reynolds number rules, rather than simple V = I*R with no thought to what is going on.  Watch the electrons as they flow through the tiny channels in the material.  They tunnel, they scatter, they are absorbed and reemitted, they cause phonons, they are caused by phonons.  It is a rich, complex and wonderful world they live in.  And, because there are now more and more low cost “single electron” devices to work with, that means a tiny microAmpere flow is a rich collection of 6.2415 TeraElectronsPerSecond (Teps 12), and many of those can be measured individually.
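The Teps figure comes straight from the elementary charge; a one-liner makes it checkable:

```python
# Convert a current to electrons per second using the elementary charge.
E_CHARGE = 1.602176634e-19     # Coulombs per electron (exact in SI since 2019)

def electrons_per_second(amperes: float) -> float:
    return amperes / E_CHARGE

print(f"{electrons_per_second(1e-6):.4e}")   # 1 microAmpere ~ 6.2415e+12, i.e. ~6.24 Teps
```

The same call gives the Peps (milliAmpere) and Geps (nanoAmpere) figures mentioned earlier by changing the argument.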

Slicing by voltage.  Now I have looked closely at what is going on with spectral measurements across all the phenomena and energies and frequencies.  And all the electronVolt processes can be re-cast as microElectronVolt or nanoEV or picoEV processes.  We only use whole electron volts because we usually have coarse grained instruments that only measure to the nearest Volt.  But if you maintain nano accuracy on everything, then nanoElectronVolt is as easy as ElectronVolt.  It is just a unit.

I use proper case for ALL the units named after someone.  It is to honor them, but also camelCase is easier and faster to read.  microVolt is easier than microvolt, especially in places like microVoltaMetry.  Break it up, process it, classify it – fast!

Now, I know that most collisions of photons or electrons do not cause ionization or excitation.  There are more “Rayleigh” (lossless or tiny energy exchange) “scattering” events than “Compton” events (with change of internal levels or states).  I first count the energy of something, then see what happens.  If I have an Electron, it can “have” an energy of 1 Volt.  But that is its environment.  If it is spinning, it can have that 1 Volt (1 Joule per Coulomb of Electrons, 1 electronVolt per electron) because the energy is stored in the electron itself.  It can move through the vacuum at a given velocity and have “1 Volt of translational energy”.

(1/2)*m*v^2 = e*V where m = 9.1093837015*10^-31 KiloGrams, e = 1.602176634*10^-19 Joules/ElectronVolt, and the velocity comes to 593,096.96 meters per second.  A 1 nanoVolt electron is moving at 18.76 m/s.  And a picoVolt (pV) electron is moving at about 0.593096 meters per second.  A nanoMeter per second electron would be 2.8428*10^-30 Volts.  And I don’t remember what names have been assigned or suggested for 10^-30.  I guess you could call it 2.8428 femtofemtoVolts (10^-15*10^-15 = 10^-30).  LOL!
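Those numbers can be reproduced directly from the equation, using the CODATA electron mass and the exact elementary charge (the helper names are mine):

```python
import math

# (1/2) m v^2 = e V, non-relativistic, for the electron.
M_E = 9.1093837015e-31        # electron mass, KiloGrams (CODATA 2018)
E_CHARGE = 1.602176634e-19    # elementary charge, Coulombs (exact)

def velocity(volts: float) -> float:
    """Electron speed in m/s for a kinetic energy of `volts` electronVolts."""
    return math.sqrt(2 * E_CHARGE * volts / M_E)

def volts_for(v_mps: float) -> float:
    """Inverse: kinetic energy in Volts for an electron at speed v_mps."""
    return 0.5 * M_E * v_mps**2 / E_CHARGE

print(velocity(1.0))      # ~593,096.96 m/s, the 1 Volt electron
print(velocity(1e-9))     # ~18.76 m/s, the 1 nanoVolt electron
print(volts_for(1e-9))    # ~2.8428e-30 V, the nanoMeter-per-second electron
```

Since v scales as the square root of V, every thousand-fold drop in voltage only slows the electron by a factor of about 31.6, which is why even picoVolt electrons are still moving at walking pace.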

I can see that I need to put some calculators and tools online.  These are cute, but the equations just sit there and have to be copied and pasted, or hand entered to be useful. And I want the computer to keep track of things, and suggest things for me, not me have to do all this with my worn out neurons.

Richard K Collins

About: Richard K Collins

Director, The Internet Foundation.  Studying formation and optimized collaboration of global communities.  Applying the Internet to solve global problems and build sustainable communities.  Internet policies, standards and best practices.

