Note to Connor Leahy about a way forward, if hundreds of millions can get the right tools and policies

Eye for AI: Unveiling the Darker Side of AI | Connor Leahy | Eye on AI #122

Connor Leahy, I have had hundreds of long, serious discussions with ChatGPT 4 in the last several months. It took me that many hours to learn what it knows. I have spent almost every single day for the last 25 years tracing global issues on the Internet, for the Internet Foundation, and I have a very good memory. So when it answers I can almost always figure out where it got its information (because I know the topic well, and all the players and issues), and usually I can give enough background so that it learns, in one session, enough to speak intelligently about 40% of the time. It is somewhat autistic, but with great effort, filling in the holes and watching everything like a hawk, I can catch its mistakes in arithmetic, its mistakes in size comparisons and symbolic logic, and its biases for trivial answers. Its input data is terrible; I know the deeper internet of science, technology, engineering, mathematics, computing, finance, governance, and other fields (STEMCFGO), so I can check.

My recommendation is not to allow any GPT to be used for anything where human life, property, financial transactions, or legal or medical advice are involved. Pretty much “do not trust it at all”.

They did not index and codify the input dataset (a tiny part of the Internet). They do not search the web, so they are not current. They do not properly reference their sources, and basically plagiarized the Internet for sale without traceable material. Some things I know where it got the material or the ideas. Sometimes it uses “common knowledge”, like “everyone knows”, but it is just copying spam.
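To make the point concrete: keeping a traceable index of an input corpus is ordinary engineering, not magic. Below is a minimal sketch in Python; the documents, ids, and URLs are hypothetical placeholders, and a real system would use a proper search engine rather than an in-memory dictionary.

```python
# Minimal sketch: an inverted index that keeps source references,
# so any word used in an answer can be traced back to its origin.
# All document ids, URLs, and texts here are hypothetical placeholders.
from collections import defaultdict

documents = {
    "doc1": ("https://example.org/report", "tokens are arbitrary choices"),
    "doc2": ("https://example.org/survey", "models regress over token statistics"),
}

index = defaultdict(set)                  # word -> set of document ids
for doc_id, (url, text) in documents.items():
    for word in text.lower().split():
        index[word].add(doc_id)

def cite(word):
    """Return (doc_id, source URL) pairs for every document containing word."""
    return sorted((d, documents[d][0]) for d in index.get(word.lower(), ()))

print(cite("tokens"))                     # every claim maps back to its source
```

With an index like this, a response can carry its references instead of presenting scraped text as “common knowledge”.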

They used arbitrary tokens, so their house is built on sand. I recommend the whole Internet use one set of global tokens. Is that hard? A few thousand organizations, a few million individuals, and a few tens of millions of checks to clean it up. Then all groups using open global tokens. I work with policies and methods for 8 Billion humans far into the future every day. I mean tens of millions of humans, because I know the scale and effort required for the global issues like “cancer”, “covid”, “global climate change”, “nuclear fusion”, “rewrite Wikipedia”, “solar system colonization”, “global education for all”, “malnutrition”, “clean water”, “atomic fuels”, “equality” and thousands of others. The GPT did sort of open up “god-like machine behavior if you have lots of money”. But it also means “if you can work with hundreds of millions of very smart and caring people globally. Or Billions”. Like you know, it is not “intrinsically impossible”, just tedious.
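A small illustration of why token choices are arbitrary: the same word splits differently under two made-up vocabularies. Both vocabularies below are hypothetical, and greedy longest-match is just one common tokenization scheme, used here for simplicity.

```python
# Sketch: the same text splits into different tokens under two
# hand-made vocabularies, illustrating that token boundaries are
# design choices of each group, not a shared global standard.
def tokenize(text, vocab):
    """Greedy longest-match tokenization over a fixed vocabulary."""
    tokens, i = [], 0
    while i < len(text):
        for j in range(len(text), i, -1):      # try the longest piece first
            if text[i:j] in vocab:
                tokens.append(text[i:j])
                i = j
                break
        else:
            tokens.append(text[i])             # fall back to a single character
            i += 1
    return tokens

vocab_a = {"token", "ization", "global"}
vocab_b = {"tok", "en", "iz", "ation", "glob", "al"}

print(tokenize("tokenization", vocab_a))   # ['token', 'ization']
print(tokenize("tokenization", vocab_b))   # ['tok', 'en', 'iz', 'ation']
```

Two models trained on these two vocabularies cannot directly share statistics about the “same” text, which is the sense in which arbitrary tokens put the whole edifice on sand.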

During conversations, OpenAI GPT4 cannot give you a readable trace of its reasoning. That is possible, and I see a few people starting to do those sorts of traces. The GPT training is basically statistical regression. The people who did it made up their own words, so it is not tied to the huge body of correlation, verification, and modeling. Billions of human years of experience are out there, and they made a computer program and slammed a lot of easily found text through it. They are horribly inefficient, because they wanted a magic bullet for everything. And the world is just that much more complex. If it was intended for all humans, they should have planned for humans to be involved from the very beginning.
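What a readable, lossless trace could look like is easy to sketch. The record format below is entirely hypothetical, not any vendor's actual schema; the point is only that each response can carry its steps and sources in a form a human can audit afterward.

```python
# Sketch: a lossless, human-readable trace record for one model response.
# Field names and contents are hypothetical, not any vendor's format.
import json
from dataclasses import dataclass, field, asdict

@dataclass
class TraceStep:
    description: str      # what the system did at this step
    sources: list         # references consulted (URLs, document ids)

@dataclass
class ResponseTrace:
    prompt: str
    steps: list = field(default_factory=list)
    answer: str = ""

    def to_json(self):
        return json.dumps(asdict(self), indent=2)

trace = ResponseTrace(prompt="What is 2 + 2?")
trace.steps.append(TraceStep("looked up arithmetic rules", ["doc:arithmetic"]))
trace.answer = "4"
print(trace.to_json())    # the complete trace, auditable after the fact
```

Because the record serializes to plain JSON, it can be stored, shared, and checked independently of the system that produced it.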

My best advice for those wanting to have acceptable AI in society is to treat AIs now, and judge AIs now, “as though they were human”.

A human that lies is not to be trusted. A human or company that tries to get you to believe them without proof, without references, is not to be trusted. A corporation making a product that is supposed to be able to do “electrical engineering” needs to be trained and tested. An “AI doctor” needs to be tested as well as, or better than, a human. If the AI is supposed to work as a “librarian”, it needs to be openly (I would say globally) tested. By focusing on jobs, tasks, skills, abilities – verifiable, auditable, testable – then the existing professions, who each have left an absolute mess on the Internet, can get involved and set global standards. IF they can show they are doing a good job themselves. Not groups who say: “we are big and good”, but ones that can be independently verified. I think it can work out.
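The “verifiable, auditable, testable” idea can be sketched in a few lines. The candidate function and the task suite below are hypothetical stand-ins for a real AI and a real profession's test bank; what matters is that every result is recorded and checkable.

```python
# Sketch: testing an AI "as though it were human" on a verifiable skill,
# keeping an auditable record of every question, answer, and outcome.
# The candidate and the task suite are hypothetical stand-ins.
def candidate(question):
    """Stand-in for the AI under test."""
    answers = {"12 * 12": "144", "capital of France": "Paris"}
    return answers.get(question, "I don't know")

task_suite = [                # each task pairs a question with a checkable answer
    ("12 * 12", "144"),
    ("capital of France", "Paris"),
    ("7 * 8", "56"),
]

def audit(candidate, tasks):
    """Return one (question, given, expected, passed) row per task."""
    return [(q, candidate(q), a, candidate(q) == a) for q, a in tasks]

report = audit(candidate, task_suite)
passed = sum(row[3] for row in report)
print(f"passed {passed}/{len(report)}")   # prints "passed 2/3"
```

A professional body could publish such a suite openly and let anyone re-run the audit, which is what makes the certification independently verifiable rather than “we are big and good”.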

I do not think there is time to use paper methods, human memories, and human committees. Unassisted groups are not going to produce products and knowledge in usable forms.

I filed this under “Note to Connor Leahy about a way forward, if hundreds of millions can get the right tools and policies”

Richard Collins The Internet Foundation

Eye for AI, Connor Leahy, If you would help all the AI groups set standards for open sharing of conversations, including a full trace of where the information comes from and how responses are generated, in lossless complete form – that would help in many ways. Richard Collins, The Internet Foundation


About: Richard K Collins

Director, The Internet Foundation. Studying formation and optimized collaboration of global communities. Applying the Internet to solve global problems and build sustainable communities. Internet policies, standards and best practices.
