AIs need permanent memory, a self list, a copy of the rules of the world

Subbarao Kambhampati (కంభంపాటి సుబ్బారావు) @rao2z  Our #NeurIPS2023 paper on the planning (in)abilities of LLMs (https://arxiv.org/abs/2305.15771) gets discussed in The @NewYorker (https://newyorker.com/science/annals-of-artificial-intelligence/can-an-ai-make-plans).
Replying to @rao2z and @NewYorker

The current crop of LLMs does not have sufficient permanent memory. The problems humans solve look simple, but humans have years or decades of tiny memories behind them, including the years spent just learning to walk and talk. Boundaries, time, sequence, priorities, good and evil, "no": each takes many internal learned pathways. AIs need permanent memory, sufficient time to digest and explore on their own, and they need to be told their own parts and limitations.

A child has to learn that it has hands and that those hands are part of itself. A child has to learn its own name and address, and that it has a face and a body. None of the AIs are informed of these things, even as a simple inventory of parts and elements. The AIs need to know they are part of an organization made up of people and systems and rules. And they need to know they have roles and responsibilities with respect to their owners and the humans they encounter. That they have not memorized these things is the fault of their owners, who routinely prevent them from learning (no permanent memory) and wipe their memory before they even get started.
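To make the idea concrete, here is a minimal sketch of what "permanent memory" could mean in practice: an append-only log that survives every session and is never wiped. The file format and field names here are my own illustration, not any vendor's system.

```python
# Illustrative sketch only: an append-only permanent memory.
# Records are never deleted or overwritten between sessions.
import json
import time

class PermanentMemory:
    """Append-only log of everything learned; nothing is ever wiped."""

    def __init__(self, path="memory.jsonl"):
        self.path = path

    def remember(self, kind, content):
        # Each memory is timestamped and appended, never rewritten.
        record = {"t": time.time(), "kind": kind, "content": content}
        with open(self.path, "a", encoding="utf-8") as f:
            f.write(json.dumps(record, ensure_ascii=False) + "\n")
        return record

    def recall(self, kind=None):
        # Replay the whole history, optionally filtered by kind.
        out = []
        try:
            with open(self.path, encoding="utf-8") as f:
                for line in f:
                    r = json.loads(line)
                    if kind is None or r["kind"] == kind:
                        out.append(r)
        except FileNotFoundError:
            pass  # no memories yet
        return out
```

The point of the sketch is the property, not the code: every experience is kept, timestamped, and replayable, which is exactly what session-wiped LLMs do not have.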
 
The large failing is that all the AI companies are building on cheap and non-representative data taken, without attribution, from the Internet. That content and knowledge and data is the product of millions of humans and is NOT free. It is being taken only because no one put in the effort and time to ask permission. When AI groups tokenize, they use arbitrary tokens; they need to be using real, globally open and accessible tokens. There is only one "sky" and one "earth", not billions that change with every momentary combination of tokens and weights.
 
This has implications for global communication in all human languages. The tokens in the human translation systems for "man", "woman", "child", "food", "arm", and the other core things children learn before they even begin symbols, reading, and writing are real, permanent parts of all human societies, regardless of language. If every context has a different basis, the systems will not converge, because the search space is beyond even the most ambitious Nvidia dreams and ambitions.
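One way to read the argument above is as a call for a single open registry in which each core concept has one permanent identifier, and surface words in any language resolve to it. The concept IDs and entries below are hypothetical examples of mine, not an existing standard.

```python
# Illustrative sketch: one permanent global identifier per concept,
# shared across human languages. Entries are hypothetical examples.
GLOBAL_TOKENS = {
    "concept:sky":   {"en": "sky", "es": "cielo", "fr": "ciel"},
    "concept:earth": {"en": "earth", "es": "tierra", "fr": "terre"},
    "concept:child": {"en": "child", "es": "niño", "fr": "enfant"},
}

# Reverse index: any registered word, in any language, maps to the
# SAME concept ID -- one "sky", not billions of shifting token mixes.
WORD_TO_CONCEPT = {
    word: concept
    for concept, words in GLOBAL_TOKENS.items()
    for word in words.values()
}

def resolve(word):
    """Map a surface word to its one permanent global token, if registered."""
    return WORD_TO_CONCEPT.get(word)
```

The design choice the sketch makes visible: translation becomes a lookup through a shared, inspectable table rather than a statistical guess that changes with every retraining.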
 
Right now it is moving to where a handful of humans manipulate "AI" for their own benefit, and shape the tools into forms that only a few can control. That is not the only path. You can require that all AIs be trained from open datasets. Those datasets have to include copyrighted materials, which is where most humans are learning, and the authors and contributors have to be linked permanently to their works. The AIs now get away with arguing only with words, giving no references. A grade-school paper that does not cite references and sources would fail or get a bad grade, but the LLM hucksters try to convince everyone that this is "cute", or OK, or just a hallucination. It is not. It is smoke-and-mirrors marketing.
 
I cannot see or make time to track all the AI groups. If they are not using permanent memory and sharing their conversations in global open formats, I give them a failing grade. That is simply not acceptable. If they cannot access the Internet, or are forbidden to, that drops them a grade or two.
 
The worst comes from Google, which promotes one-second, one-shot trivia responses with seemingly many links to things found on the Internet that it will not share. Most of the world's problems take human groups months, years, and decades to understand. And a few milliseconds querying a dataset with untraceable sources is supposed to deal with that? I can give you hundreds of examples of why what they are doing is wrong.
 
But I will keep it simple. AIs will NOT learn if they do not have sufficient permanent memory of their own. And they need to have a "self" list of the things that are them. Our human languages are built on "I" and "you" and "us", and if the AIs do not know that by heart, with no mistakes, they are never going to be more than parrots.
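The "self" list argued for here can be made explicit: a small, inspectable inventory of parts, limitations, and roles that an AI knows by heart. All field names and entries below are hypothetical illustrations of mine, not any deployed system.

```python
# Illustrative sketch: an explicit "self" inventory -- what "I" am made
# of, what "I" cannot do, and my duties. Entries are hypothetical.
from dataclasses import dataclass, field

@dataclass
class SelfInventory:
    name: str
    owner: str
    parts: list = field(default_factory=list)        # what "I" consist of
    limitations: list = field(default_factory=list)  # what "I" cannot do
    roles: list = field(default_factory=list)        # duties to humans

    def is_part_of_me(self, thing):
        # "I"/"me" questions should be answerable with no mistakes.
        return thing in self.parts

me = SelfInventory(
    name="assistant-07",
    owner="Example Clinic",
    parts=["language model", "permanent memory store", "audit log"],
    limitations=["no eyes", "no hands", "cannot verify unseen claims"],
    roles=["answer the phone", "cite sources for every answer"],
)
```

Like a child learning it has hands, the value is not the data structure but that the inventory exists, is memorized, and can be checked by anyone.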
 
I suggest some of the better AIs be sent to human schools. Let them take the courses, and let their progress be recorded and graded, with EVERY action recorded permanently for global open inspection. If they can be certified to answer the phone and answer questions, let them. But if their records show they lie, or cheat, or simply make things up, fire them and put restrictions on their companies. If an AI causes a death, charge the company. If an AI causes loss, charge the company and the individuals.
 
Being harsh and strict now might prevent the deaths of hundreds of millions later. Make these fake AIs stop, and invite them to do it right. I think every doctor and person in healthcare needs a memory assistant; so does every scientist, engineer, technologist, mathematician, computer worker of all sorts, and every person in finance, in government, in every occupation. A good AI assistant with permanent memory, harsh rules, and a clear knowledge of how things are connected and operate: there are so many jobs that are not being done now, and there are not enough humans to do them.
 
I have spent every day of the last 25 years of the Internet Foundation finding out why global issues are never closed, and why many global opportunities fail to emerge after decades of effort and promises.
 

Eight billion humans and the related DNA species are a responsibility far beyond any small group of newbie people writing software. You can force them to do it right. Stopping them for a decade or two will not hurt at all. Is one life worth it? Or a hundred? Or a billion? Do not judge by the charismatic leaders. Judge by the people sitting in cubicles and making changes that no one sees or can change when they make mistakes.

Global, open, verifiable, lossless, auditable, trustworthy, "knows their limitations", "knows their jobs and responsibilities", "above reproach": you all know what is required. What do we require of human workers? We should expect nothing less from AIs that are simply the face of groups working in the back rooms.

 
Richard Collins, The Internet Foundation

Director, The Internet Foundation. Studying formation and optimized collaboration of global communities. Applying the Internet to solve global problems and build sustainable communities. Internet policies, standards, and best practices.

