OpenAI is currently not trustworthy and should not be used for any serious applications unless certified
Paul Yan: OpenAI Warning: AI Will Burst The Education Bubble | Research Paper Analysis Part 2 at https://www.youtube.com/watch?v=elI7AgmivOw
Paul,
You are really off-base. The GPT implementations are so divorced from reality that I recommend not allowing them to be used for business, government, education, or any purpose where life and property are at stake. There are several flaws, but the worst is that it is not “open” at all. It is not auditable. The AI software itself does not have access to its own training data, cannot tell you where its information comes from, and cannot trace the logic of any statement, recommendation, or choice. After extensive testing on a wide range of difficult topics (ones requiring more than a two-second word-generation response), I found that it is not to be trusted for algebraic steps, arithmetic, comparisons of sizes, programming, or any science, technology, engineering, mathematics, computing, or finance task requiring similar skill.
It will routinely falsify information and will give strongly biased answers. It has no ability to learn from conversations, and no ability to share and collaborate through shared and open conversations. Its training selected public materials that are duplicative, biased in a statistical sense, and usually NOT the best information available from the Internet or from the whole of human knowledge. The OpenAI staff are NOT educators and have no intention to serve in that capacity; they strictly make money for investors. A .COM is designed to make money in the way they are currently built, and has a very narrow range of capabilities as a living, computer-assisted human system.
If OpenAI will not change so that its tools, methods, and results are open, it should be made illegal globally, with specific penalties and public warnings. It does have some abilities, but those are mostly wasted in the way OpenAI implemented, or failed to implement, purposeful use of this kind of method for the issues and opportunities facing the human species.
You would do better to help groups create truly open systems – global, accessible to all, auditable, traceable, responsible, testable, verifiable, lossless. It is possible. Simply refuse to support closed software and systems. If that means it takes a little longer for the human professions that mostly depend on memorization of a rather small body of knowledge, that is a good thing. If you find people who are dissatisfied with education now, look closely at the EDU and AC.* domains and online education. If they are closed, and mostly have you memorize stuff, they will probably fail you and themselves both.
“AI assisted humans” is the future, but only if the people and organizations behind the “AI” software can be trusted. And that includes whether the tool they promote as solving all things can consistently answer “How do you know?”, “Show me all the steps”, “Where did you get that information?”, “Give me a real URL”, “Give me real titles of papers”. In my testing, ChatGPT routinely generated false answers (it lies, glibly and plausibly). So, NO, it should not be allowed in engineering, science, education, finance, control systems, network management, industrial processes, security, legal affairs, or government services – any place where a financial and operational audit is required. I would go further and make them go back, redo their database, and force them to tie it to the source information in a traceable fashion.
Richard Collins, The Internet Foundation