Using the Internet to represent and compress reality without loss

Ben,

I read your description and it is clear.  I want the flexibility for myself.  If the environment and tool are rigid, I know that I won't be able to build the things I want.  That includes (G) gathering and organizing things, (E) exploring and mapping new kinds of things, (C) comparison and classification, and (S) selecting and sorting into new groups, and sharing with new groups.  I have tracked how I study and work with all kinds of knowledge since I was 12, and I have a pretty good sense of the limitations of every tool, including its developer and user communities.

Any three-level layout (Broad) (Narrower) (Details) will have three areas.  But if it is limited to one thing at a time, that precludes looking at thousands of detailed things, each with its own collection of broader, still broader, narrower, and still narrower terms and situations.  Library classification systems are really only a few levels deep, and they change paradigms at each level.  Web pages and sites are effectively one level deep, in spite of a few tabs and subtabs, trees and subtrees, because they only ever show new panels as a full tab or full window.  So you cannot take a topic and see the top few hundred closest related topics.  A webcam on a busy street in Tokyo is going to have hundreds or thousands of things going on, and I check the pixels and spectra and physics behind each thing.  I need to quickly sketch out groupings, get all the similar things into groups, and then improve machine vision and physics and chemistry models to check the details, since we see by light and acoustics and motion and variation.
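Here is a rough sketch, in Python, of the kind of query I mean: treat topics as a weighted graph instead of fixed library levels, and ask for the few hundred closest related topics around any starting topic.  The topic names and weights below are made up for illustration; nothing here is a real dataset.

import heapq

# Hypothetical topic graph: each topic maps to related topics with a
# closeness weight (smaller = more closely related).  In a real system
# these weights might come from co-occurrence counts or curated links.
topic_graph = {
    "cumulus cloud": {"convection": 0.1, "rain cloud": 0.2, "water vapor": 0.3},
    "convection": {"cumulus cloud": 0.1, "atmospheric river": 0.4},
    "rain cloud": {"cumulus cloud": 0.2, "precipitation": 0.2},
    "atmospheric river": {"convection": 0.4, "precipitation": 0.3},
    "precipitation": {"rain cloud": 0.2, "atmospheric river": 0.3},
    "water vapor": {"cumulus cloud": 0.3},
}

def closest_topics(graph, start, limit=300):
    """Return up to `limit` topics ordered by cumulative distance from `start`."""
    seen = {start: 0.0}
    frontier = [(0.0, start)]
    found = []
    while frontier and len(found) < limit:
        dist, topic = heapq.heappop(frontier)
        if dist > seen.get(topic, float("inf")):
            continue  # stale queue entry
        if topic != start:
            found.append((topic, dist))
        for neighbor, weight in graph.get(topic, {}).items():
            new_dist = dist + weight
            if new_dist < seen.get(neighbor, float("inf")):
                seen[neighbor] = new_dist
                heapq.heappush(frontier, (new_dist, neighbor))
    return found

print(closest_topics(topic_graph, "cumulus cloud", limit=10))

The same structure works at any depth, because "broader" and "narrower" are just more edges, not separate panels that replace each other.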

A simple robot cluster would watch all of the 10,000 or so live webcams that I have found so far.  They are traffic cams (but those also show sky and weather and cars and people and birds and insects) and beaches, with sky and people and weather and birds and insects and plants and waves and wind and sun and moon and stars and planets.  I want my robots to have complete understanding of all things, down to subatomic levels and out beyond the size of the universe.
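A first pass at that robot cluster could be as simple as this sketch: fetch one still frame from each camera, compute a coarse signature, and group cameras whose scenes look alike.  The URLs are placeholders, not real endpoints, and the byte histogram is only a stand-in for real machine vision, spectra, and physics models.

import urllib.request
from collections import defaultdict

# Placeholder camera list; a real run would use the ~10,000 live webcam
# snapshot URLs gathered so far.  These URLs are illustrative only.
cameras = {
    "tokyo_street": "http://example.com/tokyo/snapshot.jpg",
    "beach_cam": "http://example.com/beach/snapshot.jpg",
}

def fetch_frame(url, timeout=10):
    """Fetch one still frame as raw bytes; return None if the camera is unreachable."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as response:
            return response.read()
    except OSError:
        return None

def coarse_signature(frame_bytes, buckets=8):
    """Very rough scene signature: a normalized histogram of byte values,
    standing in for real per-camera analysis of pixels, spectra, and motion."""
    counts = [0] * buckets
    for b in frame_bytes:
        counts[b * buckets // 256] += 1
    total = len(frame_bytes)
    return tuple(round(c / total, 2) for c in counts)

groups = defaultdict(list)
for name, url in cameras.items():
    frame = fetch_frame(url)
    if frame:
        groups[coarse_signature(frame)].append(name)

for signature, members in groups.items():
    print(signature, members)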

There are about 200 million people using Pinterest, and it has five hours of programming design behind it.  You sketch out things at least that complex; you just haven't sketched the 50 basic procedures and actions.  But there are only a few pieces.  They really do not understand groups deeply at all.  Their concepts are "customer", "employee", "owner", "ads".  The pictures and what people are doing with them, what that information changes in people's lives and in the ability of groups of people to deal with problems in the world: they don't care.  Make money.

Look how much effort goes into paints and paper and canvas for artists, and into lenses and cameras and tools for photographers.  You might say "it's all been done".  A website is a tool like that.  "Let's use websites to study clouds": not just their shapes, but the models and data for their formation and changes.  I was working with groups that want to grab clouds and move them around, that want to map and manage "atmospheric rivers".  Those shapes don't easily fit in little boxes.  Just flipping through cumulus cloud shapes, the ones boiling up out of the lower atmosphere into rain clouds, gives billions of billions of billions of possibilities, depending on the frequency of the sensors, the temperatures, and the locations.  And that is just one thing.  Add bacteria, viruses, spores, seeds, living things, alternative living things, and it gets bigger.
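To make "billions of billions of billions" concrete, here is a back-of-envelope count under assumed numbers (a coarse voxel box and a handful of states per cell; none of this comes from a real cloud model):

import math

# Illustrative only: how fast the space of distinct cloud "shapes" explodes,
# under assumed grid sizes and state counts.
grid_cells = 20 ** 3          # a coarse 20 x 20 x 20 voxel box around one cumulus
states_per_cell = 4           # say: clear, thin, dense, precipitating
log10_shapes = grid_cells * math.log10(states_per_cell)

print(f"{grid_cells} cells with {states_per_cell} states each")
print(f"roughly 10^{log10_shapes:.0f} distinct configurations")

Even that tiny assumed grid gives on the order of 10^4800 configurations, which is why fixed little boxes cannot hold these shapes.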

We have a world that is still just drawing with ink and pencil and paints on flat pieces of paper.  However many billions of people do it trillions of times, it is still "ink on paper", and it will never grow larger or more precise or more compact as a way to represent reality.

I was working today on the pixels in lossy videos on YouTube.  They are so proud of themselves, but the pixels are all false; they do not represent reality anywhere near the truth.  Raw lossless compression is better, but even that is weak and partial.  Because if you look at the whole of a single webcam, there are levels of reality and hosts of things that show up in the pixels that require (and inform about) the things that are not seen: (bigger), (faster), (slower), (spread out), (too dark), (too bright), (too close in intensity), moving into the scene, flowing through the scene, and more.
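That claim about false pixels can be made measurable.  A small sketch, assuming you have a raw frame and its lossy re-encoding as arrays of the same shape; the gradient frame below is synthetic, standing in for a webcam frame and its lossy YouTube version.

import numpy as np

def pixel_error_report(raw, lossy):
    """Compare a raw frame against its lossy re-encoding, per pixel.
    Both inputs are arrays of shape (height, width, channels), values 0-255."""
    raw = raw.astype(np.float64)
    lossy = lossy.astype(np.float64)
    diff = raw - lossy
    mse = np.mean(diff ** 2)
    psnr = float("inf") if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)
    changed = np.mean(np.any(diff != 0, axis=-1))  # fraction of pixels altered
    return {"mse": mse, "psnr_db": psnr, "fraction_pixels_changed": changed}

# Synthetic example: a smooth gradient "scene" and a crudely quantized copy.
raw_frame = np.tile(np.arange(256, dtype=np.uint8), (256, 1))[:, :, None].repeat(3, axis=2)
lossy_frame = (raw_frame // 16) * 16  # quantization throws away fine intensity steps

print(pixel_error_report(raw_frame, lossy_frame))

The numbers it prints are exactly the kind of evidence that the fine intensity steps, the ones that carry the physics, are being thrown away.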

So I was designing 3D imaging arrays and an Internet system that would allow any number of people to visit a node on the Internet and each control their own view of what is there: a sports event or human event, anything on Earth, in the sky, under the ocean, on the Sun, anywhere in space, anywhere in 3D video simulations.  It means a fair amount of data storage and capability, but much less than storing flat partial images from many cameras.  The compression from full 3D video (all directions, and things nearby and far) is great.  I can calculate it for any given situation, but I want even broader methods for any place an Intelligent Algorithm might go.  I am planning to send robot intelligence to other galaxies (for real, and in fiction to garner interest).  Even the moons of the solar system need that.  And since I know how to use gravitational communication at many times the speed of light, it can mean real-time communication and control of remote systems.  But they have to take care of themselves first; then they can share with others of their own kind and with humans.
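A rough back-of-envelope for that compression claim, with every number an assumption rather than a measurement: compare storing many independent flat camera streams against one shared 3D scene model plus small per-viewer view states.

# All numbers below are illustrative assumptions, not measurements.
cameras = 100                   # independent flat cameras covering one event
bitrate_per_camera_mbps = 8     # assumed HD stream rate
hours = 1

seconds = hours * 3600
flat_storage_gb = cameras * bitrate_per_camera_mbps * seconds / 8 / 1000

# One shared 3D scene model: assume it costs roughly as much as a handful of
# streams to maintain, then each viewer only stores a viewpoint (position,
# orientation, field of view), which is a few numbers per frame.
scene_model_gb = 5 * bitrate_per_camera_mbps * seconds / 8 / 1000
viewers = 10000
bytes_per_view_state = 48       # ~6 floats of pose per frame
frames_per_second = 30
view_state_gb = viewers * bytes_per_view_state * frames_per_second * seconds / 1e9

print(f"Flat storage for {cameras} cameras: {flat_storage_gb:.1f} GB")
print(f"Shared 3D scene + {viewers} viewer view-states: {scene_model_gb + view_state_gb:.1f} GB")

Under these assumed numbers, one hour of the event drops from about 360 GB of flat streams to under 70 GB, while serving any number of independent viewpoints; real scenes will differ, but the direction of the saving is the point.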

Richard Collins, Director, The Internet Foundation

