What to do about Covid on the Internet – Part I, Draft

Dear Friends and Family,
 
Anyone good at editing, help organize and condense this?  Suggestions? Recommendations?
 
I am trying to decide what, if anything, to do about global issues on the Internet.  I have spent almost 23 hard years at it.  I have some basic ideas that guide me as I look at global communities and global topics.  I think it is possible to greatly shorten the time it is taking everyone on earth to understand and respond to “covid” or “climate change” and thousands of similar topics – topics where a search returns millions or billions of results. 
 
Changing a few search engine companies, or replacing them, is “easy” in relative terms, if there is political will and effort. But cleaning up, organizing, and finding the authors and responsible persons for all the “covid” or “climate change” scraps of information on the Internet requires many tens of millions of people to each do relatively small things – all in the same direction.
 
I am pretty used to large problems, but there is not much more I can do to analyze the problem and potential solutions, while people are still dying. I traced out most of Covid last year on the Internet and talked to people about all aspects of the problem where changing the Internet would help.
 
The problem on the Internet is simple – massive duplication; mostly untraceable materials; a lack of immediate information for the many small dependencies of any topic, each of which requires too much time to track down; and the whole of society still using print methods on the Internet.  This last requires humans, usually very expensive ones, in the loop for any global issue. When you use PDF or similar lossy “print” methods, it strips all the intelligence from Internet-shared materials. The equations are no longer equations, but colored patterns on a screen. The data streams are only words.  The conclusions and relationships are only described, not usable.
 
Waynette, I wrote some of this yesterday to reply to your last message, but I was so discouraged and disillusioned I canceled it instead. “Freezing, sick and without water in Texas, worried about food and water again” is just too depressing to talk about. All because Texas information systems are not integrated or complete. And massive chaos from “news”.
 
Last night, I wrote a long note to a machine intelligence group at Oxford University, but did not post it either.  I was trying to get them to take the lead in cleaning up their own domain (oii.ox.ac.uk), the Oxford domain (ox.ac.uk), the whole academic domain in the UK (ac.uk) and the UK domain itself (uk)  — for covid related materials.  I thought that was something a small group of 20 to 30 people could do, or at least get started.  But it needs more than a gentle or not-so-gentle nudge from a stranger out of the blue.
 
I wrote similar things to the Library of Congress, and some gov and org domain sites.  But it needs to be organized, with people at it full time, to stir movement.  I think the whole world could clean their web sites of covid materials related to Detections, Preventions, Monitorings, Treatments, Deaths, Persons, Groups, Resources, Volunteers, Needs, Jobs, Projects.
 
There are about 9 billion places on the Internet where “covid” OR “coronavirus” OR “corona virus” are mentioned.  And these fall into only a few hundred core categories.  Every entry has a person or group who authored, posted and is supposed to be responsible for it.  I want to hold everyone responsible for what they post.  Not to beat them up, but to be sure they are listed and can get the resources and ideas they need.
 
(“covid” OR “coronavirus” OR “corona virus”) has 9.38 Billion entry pages
(“climate change” OR “global warming” OR “global climate” OR “climate” OR “weather”) has 2.74 Billion
 
These queries, where someone is looking at a global scale problem or opportunity or issue or industry or process – where there are millions or billions of results – are what I have concentrated on for the last 22 years (23 in July). They are not hard to deal with, just tedious.  It takes a lot of people to make small changes, a lot of programmers to spend time, and a lot of computers to scan and organize.  But the cost is NOT the tens of Trillions of dollars of “covid” impact, or the hundreds of Trillions of “climate change” – it is a few tens of billions.  I will take paying 0.1% any time rather than 100%.  A million-dollar-a-year group at the Famine Early Warning System can prevent hundreds of thousands of deaths.  That is a good deal.
 
 
If we, the whole human species (about 4.5 billion of the 7.8 billion humans are using the Internet and all impacted by global things) can organize the materials on “covid”, then we will learn enough to tackle “climate change”, “education”, “power systems”, “power transmission”, “clean water” and about 20,000 other core topics that make up the global topics that are on the Internet.
 
Do you have any recommendations for me?  What should I do?  I monitored “covid” most of last year, looking particularly at what was happening on the Internet that was causing the global response to be so slow.  Best practices (I helped design some of them) say we (the whole world) should have responded in days, not months, and now years.  And the number of deaths should not be in the millions, but just a few.
 
In 1984 and 1985, the famine in Northern Africa – Sudan, Ethiopia, Somalia and other countries – killed a million or more people. USAID and the State Department and the UN and others were responsible for helping.  But their decision process was so slow (gathering information, finding people, locating resources, putting it all together, then making sense of it, coming up with strategies and projects) that many died before they even got started.  They did not ask me to help until Dec 1985.  By June 1986 the basic system was in place that could monitor the whole world for things that indicated a high probability of, or actual steps on, the road to famine outbreaks.
 
So I have been through this before in detail.  I want to help.  I know what to do and the implications of doing it the way I suggest. But I am nobody.  One person at Oxford could pooh-pooh what I want to do and no one would listen. Their pride won’t let them be any less than the experts. The same is true of every organization on the planet.
 
I can publish my notes, reviews and recommendations on the Internet.  Just let the bare facts and recommendations speak for themselves.  I trust God to take care of the details and keep things fair.  But I am already worn down.  I have only been working an hour today; I tried to rest last night. But there is no part of me that does not hurt badly.
 
I put off doing much since last year, saying to myself – “There are probably going to be 4 to 6 million deaths from this, when everyone is infected and has to go through it for there to be some kind of herd immunity.” But the variants are growing too fast for that to happen.
 
Cleaning up the Internet would help.  Cleaning up the Internet in Texas and all the related information systems would help.
 
It used to be that newspapers and television and radio news were helpful.  They kept you informed. But now there are too many “news” sites, all churning out dribs and drabs of hints of partially digested facts.  I could not find my local water and electric status.  I could not find the weather data for where I live, for Texas, and for the regions affected — not without hours of diligent and often lucky searching to get any kind of complete picture, just to decide how to plan if I needed to ask for help.
 
This is a lot to throw at you.  You keep talking about all your smart, skilled, connected kids and their families.  I wonder if you know anyone who might help.
 
Every university in the world could work on their own website and set of connected sites.  Work together and clean up the mess.
 
site:utexas.edu has 1.11 Million entry pages
site:utexas.edu (“covid” OR “coronavirus” OR “corona virus”) has 121,000 entry points (mentions of these terms)
 
That is too many entries on each site to expect all those independently posted materials to be organized. The authors and posters of those 121,000 items are probably not all working together. Anyone going to the UT Austin site for help on Covid is going to see a mess.  The groups at UT are not working together, and the ones who are do not have all the information. Any one small group (and there are many thousands of groups in a million pages) might be dedicated, but they are not working together in a way where everyone can see the whole of what they are about and where everyone fits in.
 
site:tamu.edu has 1.05 Million entry pages
site:tamu.edu (“covid” OR “coronavirus” OR “corona virus”) has 126,000 entry points
 
site:rice.edu has 428,000 entry pages
site:rice.edu (“covid” OR “coronavirus” OR “corona virus”) has 23,000 entry points
 
site:ttu.edu has 264,000 entry pages
site:ttu.edu (“covid” OR “coronavirus” OR “corona virus”) has 18,700 entry points
 
site:texas.gov has 2.06 Million entry pages
site:texas.gov (“covid” OR “coronavirus” OR “corona virus”) has 121,000 entry points
 
There are about 29.4 Million persons in Texas.  
“Texas” (“covid” OR “coronavirus” OR “corona virus”) has 1.21 Billion entry points on the Internet
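The counts above come from ordinary search-engine queries combining a `site:` operator with the OR'd terms. A minimal sketch of building those query strings (the result counts themselves are whatever the engine reports on a given day, and they change over time):

```python
# Build the site-scoped search queries quoted above.
# The domains and terms are the ones used in the text; the result
# counts come from the search engine and vary day to day.

terms = '("covid" OR "coronavirus" OR "corona virus")'
domains = ["utexas.edu", "tamu.edu", "rice.edu", "ttu.edu", "texas.gov"]

# One query string per domain, e.g. 'site:utexas.edu ("covid" OR ...)'
queries = [f"site:{d} {terms}" for d in domains]
for q in queries:
    print(q)
```

Any engine that supports the `site:` operator can run these; the same pattern scopes a query to a national domain like ac.uk or uk.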
 
What I want you to understand is that if every one of 20 Million people (leaving out the very young and old) has to survey and integrate knowledge from billions of pages of stuff on the Internet, it is a huge cost to society in wasted search and compilation time.  The lack of basic information from Google hurts the whole world.  And I can count and estimate the cost.
 
But the basic concept is general.  If you have a thousand people working on a topic and they are not all working together, then each one has to survey and try to understand the whole.  That is a thousand-fold duplication of human effort.  But the world has billions of adults and billions of children and they are forced to try to understand the complex and chaotic Internet “news and information and data” each on their own.  It is “billions times billions” of queries and clicks. 
 
The problem with Google is they get paid by the click.  More queries means more money in their pockets. The more they break a topic into tiny pieces and never let anyone complete their search – always stretching the search out to more and more – the more they benefit, while society pays the cost. They optimized their entire system to answer trivia questions and completely ignored anything that takes hours to answer, let alone months, years, decades. Or that requires hundreds of millions of people to all do small things together to correct a global problem.
 
The cost to society is GDP per capita per hour times the hours of searches. There are currently about 4 million searches a minute; let each take a minute on average. With 1440 minutes in a day, that is 5.76 billion search-minutes per day, or 96 million hours per day.  At a developed-country GDP of about $25 per hour, that is $2.4 Billion each day.  And they basically take a tax on that data flow.
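The arithmetic can be checked in a few lines. All the figures are the rough order-of-magnitude estimates from the text, not measured values:

```python
# Rough estimate of the daily cost to society of fragmented search.
# Every input is an assumption taken from the surrounding text.

searches_per_minute = 4_000_000   # assumed global search rate
minutes_per_day = 24 * 60         # 1440
minutes_per_search = 1            # assumed average time per search
gdp_per_hour = 25                 # dollars; rough developed-country figure

searches_per_day = searches_per_minute * minutes_per_day       # 5.76 billion
search_hours_per_day = searches_per_day * minutes_per_search / 60  # 96 million
daily_cost = search_hours_per_day * gdp_per_hour               # $2.4 billion

print(f"{searches_per_day:,} searches per day")
print(f"{search_hours_per_day:,.0f} hours per day")
print(f"${daily_cost:,.0f} per day")
```

Halving the average search time, or the search rate, halves the daily cost, which is why a thousand-fold cleanup of the source material matters.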
 
But any serious query that cannot be answered in one or two results — Google does not address at all.  So all they do is report that they found a lot of pieces.  (But will not share them with you.) And their supersmart, highly paid kids who work there have no clue how to deal with a million results, let alone a billion. It is not hard, but they are only paid by the click.
 
When I worked in international development I did population projections and models for all the countries in the world.  That includes morbidity (sickness), mortality (deaths) and fertility (births), and migration, labor force, education, industries, trade, agriculture, health, and all aspects of societies. It has a lot of pieces, but it is “not hard, just tedious”.  I often had to deal with estimating the value of a human life.  “Should we spend a billion dollars to save a million lives?” came down to (at least partly) “How much is each life worth?”  At $1 Million per person, you would spend a $Billion to save 1000 lives.  For a million people you would spend a $Trillion.
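The budget scales linearly with whatever value per life is assumed, which is the whole tradeoff in one line. A sketch using the illustrative $1 Million figure from the text:

```python
# "How much is each life worth?" — justified spending scales linearly
# with the assumed value per life. $1 million is the illustrative
# figure from the text, not a recommendation.

VALUE_PER_LIFE = 1_000_000  # dollars, assumed

def justified_budget(lives_saved: int) -> int:
    """Spending justified at the assumed value per life."""
    return lives_saved * VALUE_PER_LIFE

print(justified_budget(1_000))      # 1,000,000,000  -> $1 Billion
print(justified_budget(1_000_000))  # 1,000,000,000,000 -> $1 Trillion
```

The same function, with a different assumed value, reproduces any of the "spend X to save Y lives" comparisons in the paragraph above.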
 
I am good at estimating things and models.  I estimated the total number of deaths worldwide from Covid, assuming everyone was infected or exposed, to be between 4 and 6 million people. Some really nice people at UN Population and the Max Planck Institute for Demographic Research helped me. We are about half way through global infection.  And the mixing is much faster than when people were trying all the time to prevent the spread. So instead of taking 4 years to have that many deaths, it is compressed into about 2.5 years.  We are about 1 year into it, with 2.5 million deaths, so 6 million is not out of reach for covid.
 
What I would like to do is see if cleaning up the Internet sites (something easy for people to think about and do with some degree of precision and efficiency, and learn from doing) would reduce the cost of searching dramatically – about a thousand to one.  And then also fix the research communication process.  Roughly, all the global research goes through print and requires very expensive humans to read and interpret.  I am all for jobs programs for expensive experts, but not at the cost of human lives and society.  Most of that data communication can be handled with better software, decent policies and a focus on what is important.  From cleaning up the websites, each university, federal government, state government, county government, and city government can reorganize itself. But starting with the gov domain is subject to political whim, and one senator seeking glory and money for his state can cripple a global effort in a few minutes of casual political ploys.
 
So I was looking at the Universities, Colleges, High Schools and private online universities and training centers for some effort.  The AI, big data and machine intelligence groups say “we can do great things with computers” – let them prove it with real results for a real problem, not just token demonstrations and smoke and mirrors.
 
Richard Collins, Director, The Internet Foundation
