All images, videos and sensor data should have global open controls and tools for all humans

Viki @Viki Dec 25 ‘Tis the season to be jolly and what could be jollier than a collection of holiday greetings from some of your favorite stars!

Watch all your favorite celebrities in all your favorite dramas now, on #Viki!
Replying to @Viki

I wrote this comment on Viki a couple of years ago, but it is still relevant, so I am posting it again. It affects @YouTube, @Vimeo, @Facebook, @Zoom, and many other sites on the Internet. I recommend global open controls for the display, use, and analysis of videos on the Internet.

I have reviewed the video playback speeds offered by various services. YouTube uses 0.25, 0.5, 0.75, 1.0, 1.25, 1.5, 1.75, and 2.0, but that is not sufficient. I have experimented with speeds from 1/100x to 100x or faster. Rather than a fixed set of speeds, an intelligent assistant could help choose the right speed for the content.
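As a sketch of what a wider range could look like: speeds from 1/100x to 100x are more naturally spaced on a logarithmic scale than in fixed 0.25 steps. The function name and parameters below are hypothetical, just to illustrate the idea.

```python
import math

def speed_steps(min_speed=0.01, max_speed=100.0, steps_per_decade=4):
    """Generate playback speeds spaced evenly on a log scale.

    Hypothetical parameters: min_speed and max_speed bound the range
    (1/100x to 100x here); steps_per_decade controls how fine the
    choices are within each factor of ten.
    """
    n_decades = math.log10(max_speed / min_speed)
    n_steps = int(round(n_decades * steps_per_decade))
    return [round(min_speed * 10 ** (i / steps_per_decade), 4)
            for i in range(n_steps + 1)]

speeds = speed_steps()  # 17 speeds covering 0.01x through 100.0x, including 1.0x
```

An assistant could pick from such a scale automatically, instead of forcing the viewer through a short fixed menu.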

Action movies have a lot of footage shown at 30 frames per second or faster. Frame-by-frame is a good option at any time. On the Internet there are things like lightning, explosions, and collisions, where it is helpful to slow down to one frame per minute, giving the viewer time to absorb and study each frame closely. I have experimented with scanning movies showing only one frame from each minute, and with different visualizations that show a whole movie in a single chart of intensities, colors, voice, and music.

@NASA, @NOAA and similar sites will show years of solar images as a timed sequence of images. A “video” is just a collection of images shown at a given rate. [ But they have to be lossless and traceable to have scientific value. ]

If you really want to make images useful to viewers, you should also consider controls for zoom, measurement, and intensity. Many movies are too dark: the producer/editor chooses something for effect, and that makes a bad viewing experience for the users. [ Live videos at night can now be enhanced in many cases. ]

If you watch any of the thousands of live videos on the Internet now, you will see that some have very long archives. Most YouTube live videos keep the last 12 hours. I have experimented with different scan rates. If you watch an animal-feeding webcam, you might want to scan one frame out of every 1800 (at 30 frames per second, that is the first frame, then a frame at one minute, a frame at two minutes, and so on) to locate the parts where there is something to see.
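The one-frame-per-minute scan above amounts to simple index arithmetic. A minimal sketch, assuming a 30 fps stream (the function name is hypothetical):

```python
def scan_indices(total_frames, fps=30, seconds_between=60):
    """Indices for a one-frame-per-minute scan of a video archive.

    At 30 fps with 60 seconds between samples, the step is 1800 frames:
    frame 0, then the frame at one minute, at two minutes, and so on.
    """
    step = fps * seconds_between  # 1800 frames at 30 fps
    return list(range(0, total_frames, step))

# A 12-hour archive at 30 fps yields 720 sample frames to scan:
indices = scan_indices(total_frames=12 * 3600 * 30)
```

Scanning 720 frames instead of 1.3 million lets a viewer (or their assistant) find the interesting minutes quickly.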

Many of the frames in movies and webcams are of poor quality. There are algorithms that improve quality by registering and stacking the frames to clean them up and to increase resolution for viewing.
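The simplest form of stacking is averaging frames that have already been registered (aligned), which suppresses uncorrelated noise by roughly the square root of the number of frames. This sketch assumes the registration step is already done and treats each frame as a flat list of pixel intensities:

```python
def stack_frames(frames):
    """Average a list of already-registered frames (each a list of pixels).

    Averaging N frames reduces uncorrelated noise by roughly sqrt(N).
    Real stacking pipelines first register (align) the frames; that
    step is assumed here.
    """
    n = len(frames)
    width = len(frames[0])
    return [sum(f[i] for f in frames) / n for i in range(width)]

# Three noisy readings of the same 4-pixel row average to a cleaner row:
frames = [[10, 20, 30, 40],
          [12, 18, 31, 39],
          [11, 22, 29, 41]]
stacked = stack_frames(frames)
```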

This is not directly related to frame speed or presentation control, but you might want to consider other ways of presenting lists of videos. You have a lot of them in your database and available. But when the number goes over a few dozen, browsing large blocks of thumbnail images on the screen becomes very hard for humans to read and understand. You could add "filters" or criteria, giving people control over the kinds and content of the screens they see as they browse or search.

On the Internet as a whole, the use of videos and other image and data sequences is exploding in popularity – not just for entertainment, but for research, education, visualization, and learning. You really do not have scientific videos, videos of natural things, or live videos yet. There is a lot more that can be done with them than being locked into one person's arbitrary choice of viewing speed, color controls, and visualizations of what is contained in the images.

I have also been tracking the various artificial intelligence, machine vision, machine intelligence, video tagging, video classification, and related activities on the Internet. There are ways to search images (single frames or sequences) for the places where things happen. "Show me only outdoor scenes from this movie where there is lots of sky", "show me only the scenes where (this actor or actress) appears" – the possibilities are infinite.
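A query like "outdoor scenes with lots of sky" reduces to filtering frames by tags, once a classifier has tagged them. The classifier itself is the hard part and is assumed here; the tag names and function are hypothetical:

```python
def find_scenes(frame_tags, wanted):
    """Return indices of frames whose tag set contains all wanted tags.

    frame_tags: one set of tags per frame, assumed to come from a
    machine-vision classifier. wanted: the tags the viewer asked for,
    e.g. {"outdoor", "sky"}.
    """
    return [i for i, tags in enumerate(frame_tags) if wanted <= tags]

tags = [{"indoor"},
        {"outdoor", "sky"},
        {"outdoor"},
        {"outdoor", "sky", "actor"}]
hits = find_scenes(tags, {"outdoor", "sky"})  # frames 1 and 3 match
```

The same filter works for "scenes where this actor appears" or any other combination of tags.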

[ Today, I will add that all images, videos and sensor data on the Internet should be lossless, with formats accessible to all humans and their AIs. Permanent open archives can work together for future generations. Perhaps @X and @xai (@grok) and other AI groups can help in global open discussions and collaborations. Even the super, HPC and exascale computing facilities have much to offer – if it is open, verifiable, accessible and fair. I used "above reproach" recently. That might be a useful criterion for all sites now, approaching global internet scale for 8 billion humans and their AIs.

Controlling the intensity, adding measurements, counting pixels, counting textures, counting things, looking for a particular 3D object that might be partially obscured: the tools and ideas for working with images are rich and powerful. I hope all the "video sites" can work to make them open, accessible, verifiable, and useful to the whole human species, not just to enrich a few. ]
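Even the simplest of these measurements, counting pixels above an intensity threshold, is a few lines of code. A minimal sketch (the function and threshold are illustrative, not any site's actual API):

```python
def count_bright_pixels(frame, threshold=128):
    """Count pixels at or above an intensity threshold.

    This is the most basic kind of in-browser measurement tool the
    text describes; frame is a flat list of 8-bit pixel intensities.
    """
    return sum(1 for p in frame if p >= threshold)

frame = [0, 50, 130, 200, 255, 90]
bright = count_bright_pixels(frame)  # 3 of the 6 pixels are bright
```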

Richard Collins, The Internet Foundation


About: Richard K Collins

Director, The Internet Foundation. Studying formation and optimized collaboration of global communities. Applying the Internet to solve global problems and build sustainable communities. Internet policies, standards, and best practices.
