All knowledge, all human languages, all domain specific languages in an open system, verifiable, accessible to all

Günter Klambauer @gklambauer Towards Symbolic XAI — Explanation Through Human Understandable Logical Relationships Between Features

Abstractions on top of traditional XAI methods are used. These are combined with logical operators to provide explanations…

P: https://arxiv.org/abs/2408.17198
Replying to @gklambauer


All knowledge, all human languages, all domain specific languages in an open system – verifiable, accessible to all
 
I have been recommending that we pre-tokenize the whole Internet with global, open, verifiable tokens that work across all human and domain-specific languages, and that we faithfully track and report sources. This business of using arbitrary tokens and source-less “knowledge” is a waste of global human time and opportunities. Putting the Internet on a solid foundation, supported by cooperative effort from all humans, is better than leaving critical tools and information in the hands, and under the arbitrary control, of a few who do not care and do not make an effort to share.
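To make the recommendation concrete, here is a minimal sketch, in Python, of what one global, open, verifiable token with tracked sources could look like. The names (OpenToken, make_token) and the example.org URL are illustrative assumptions for this note, not an existing standard; the point is only that the token identifier is a hash anyone can recompute from the public record, and that the source travels with the token instead of being stripped away.

```python
import hashlib
import json
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class OpenToken:
    token_id: str    # SHA-256 of the canonical record, recomputable by anyone
    surface: str     # the exact text span, kept losslessly
    language: str    # human or domain-specific language tag, e.g. "en" or "math-latex"
    source_url: str  # where the span was found, so the source is reported, not hidden

def make_token(surface: str, language: str, source_url: str) -> OpenToken:
    # Canonical form: a sorted JSON record of the surface form and its language,
    # so any independent party derives the same token_id from the same inputs.
    canonical = json.dumps({"surface": surface, "language": language},
                           ensure_ascii=False, sort_keys=True)
    token_id = hashlib.sha256(canonical.encode("utf-8")).hexdigest()
    return OpenToken(token_id, surface, language, source_url)

if __name__ == "__main__":
    # Hypothetical source URL; anyone can re-hash the public record and verify the ID.
    t = make_token("lossless encoding", "en",
                   "https://example.org/internet-foundation/notes")
    print(asdict(t))
```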
 
The term I use is “lossless encoding” of all knowledge. The GPT style is lossy because of an arbitrary decision to block all sources and not reveal how the “AIs” actually process information and make decisions. They are not traceable, and the corporations do not listen. Nor are they investing any effort in solving real problems, even their own, using the tools. That “not using their own tools for anything serious” is a clear indication they do not trust their own tools.
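To illustrate what “lossless” means here, a small sketch (invented names, word-level only, not anyone's production system): an encoding is lossless when the original text can be reconstructed exactly from the tokens, and that property can be checked mechanically with a round trip.

```python
# Sketch: a trivially reversible (lossless) word-level encoding over an open,
# shared vocabulary. decode_lossless(encode_lossless(text)) recovers text exactly.
def encode_lossless(text: str, vocab: dict, inverse: dict) -> list:
    ids = []
    for word in text.split(" "):
        if word not in vocab:          # grow the open vocabulary as needed
            vocab[word] = len(vocab)
            inverse[vocab[word]] = word
        ids.append(vocab[word])
    return ids

def decode_lossless(ids: list, inverse: dict) -> str:
    return " ".join(inverse[i] for i in ids)

vocab, inverse = {}, {}
text = "open verifiable tokens with tracked sources"
ids = encode_lossless(text, vocab, inverse)
assert decode_lossless(ids, inverse) == text  # round trip: nothing is lost
```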
 
It is possible to index the Internet with lossless, traceable methods that are tied to global open tokens and a global open resources network.
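A hypothetical sketch of such an index (the token ID, URLs, and function names are illustrative only): each open token ID maps to the list of places it occurs, so every lookup returns provenance along with the content.

```python
from collections import defaultdict

# Sketch of a lossless, traceable index: token_id -> occurrences, where each
# occurrence records the source URL and the position of the span in that source.
index = defaultdict(list)

def add_occurrence(token_id: str, source_url: str, offset: int) -> None:
    index[token_id].append({"source": source_url, "offset": offset})

def lookup(token_id: str) -> list:
    # Every answer carries its provenance; nothing is returned without a source.
    return index[token_id]

# Usage with a placeholder token ID and made-up source URLs:
add_occurrence("3f2a...", "https://example.org/page1", 128)
add_occurrence("3f2a...", "https://example.org/page2", 64)
print(lookup("3f2a..."))
```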
 
Much of my research over the last 26 years of the Internet Foundation has been on complete and open methods for storing and using global knowledge, but I started on that path about 58 years ago. A true system has to be complete, and when a company blocks access to where information comes from and how it is processed, it is violating a fundamental law of knowledge sharing.
 
Whether the knowledge is symbolic or not (equations, diagrams, labeled maps, network representations, icons, images, animations, simulations, tools of many sorts), the key is that the information must be accessible to humans without onerous taxes, manipulations, and restrictions that benefit a few corporations or groups.
Forcing single points of failure and single points of manipulation only benefits a few. Designs can specifically avoid that and protect against it.
Recommend:
(1) lossless indexing;
(2) open lookup and methods;
(3) work on real problems openly to grow and test the global system;
(4) sound-index all human spoken languages;
(5) take storage of the Internet index out of corporate control and make it open;
(6) make tools and visualizations part of a global open process.
Global open knowledge requires open verification and benefits from global open tokens for all languages.
 
Filed as (All knowledge, all human languages, all domain specific languages in an open system – verifiable, accessible to all)
 
Richard Collins, The Internet Foundation

Vincent Abbott | Deep Learning @vtabbott_
I haven’t been posting much lately – been getting really stuck into a formalism of Neural Circuit Diagrams (NCDs) + how to use them to optimize deep learning algorithms with @GioeleZardini from MIT. This means a diagram that represents attention plus a few standard tricks can be …
Replying to @vtabbott_ and @GioeleZardin
My Comment: A visual, human-accessible diagram with tools and access to the whole Internet.
Richard K Collins
