AIs solving shallow problems is NOT the most critical issue.
lmsys.org @lmsysorg Introducing VTC: the first fair scheduler for LLM serving.
Are you troubled by OpenAI’s rate limit? Although it’s a necessary mechanism to prevent any single client from monopolizing the request queue, we demonstrate that it doesn’t guarantee either fairness among clients or… https://pic.twitter.com/Bs42RloLHS
Replying to @lmsysorg
I would much rather have a price range. If I request that it spend an hour on a problem and then post the results to a file or folder, it can schedule the work to get the job done. Right now, "forget everything as you go" is more harmful than any rate limit. Almost every real problem takes tens of steps, or tens of millions of steps.
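The "price range plus time budget" idea could look something like the sketch below. Everything here is hypothetical illustration, not any real OpenAI API: the `BatchJob` shape, the field names, and the toy `schedule` ordering are all made up to show the request style being asked for.

```python
from dataclasses import dataclass

@dataclass
class BatchJob:
    """A hypothetical long-running job request: the client states a time
    budget and a price ceiling instead of fighting a per-minute rate limit."""
    prompt: str
    max_seconds: int       # e.g. "spend an hour on this problem"
    max_price_usd: float   # client-chosen price ceiling
    output_path: str       # results land in a file or folder, not a chat window

def schedule(jobs):
    """Toy scheduler: order jobs so small, cheap requests are not starved
    behind huge ones (shortest time budget first, price as a tiebreaker)."""
    return sorted(jobs, key=lambda j: (j.max_seconds, j.max_price_usd))

jobs = [
    BatchJob("refactor the repo", max_seconds=3600, max_price_usd=5.00,
             output_path="out/refactor/"),
    BatchJob("summarize one paper", max_seconds=60, max_price_usd=0.10,
             output_path="out/summary.txt"),
]
ordered = schedule(jobs)
print([j.prompt for j in ordered])  # shortest budget runs first
```

The point of the shape, not the scheduler: once the client has named a budget and an output location, the server is free to run the job whenever capacity allows, and "forgetting as you go" stops mattering because the result persists on disk.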
The database and responses are not the problem; bad algorithms using the data are. Most of the errors come from a lack of checking results before speaking. I have to constantly repeat myself because the interface memory is so tiny. OpenAI policy throws away all permanent memory, so we poor humans have to keep re-instructing an app that has none. That is strictly a design limitation, not any fundamental limit on using this kind of parameter database. I know that no one will listen to a person who has only spent 58 years working at these kinds of things, but I will write it anyway.
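That "design limitation, not fundamental limit" claim is easy to see: nothing prevents a client from keeping its own permanent memory and replaying it into each session. A toy sketch, with made-up file name and helper names, assuming only a local disk:

```python
import json
from pathlib import Path

MEMORY_FILE = Path("assistant_memory.json")  # hypothetical client-side store

def remember(key, value):
    """Persist an instruction so the next session does not start from zero."""
    store = json.loads(MEMORY_FILE.read_text()) if MEMORY_FILE.exists() else {}
    store[key] = value
    MEMORY_FILE.write_text(json.dumps(store))

def recall(key, default=None):
    """Read a previously saved instruction back at session start."""
    store = json.loads(MEMORY_FILE.read_text()) if MEMORY_FILE.exists() else {}
    return store.get(key, default)

# Session 1: the human states a standing instruction once.
remember("style", "always cite sources")

# Session 2 (even after a restart): the app reloads it instead of asking again.
print(recall("style"))  # -> always cite sources
```

A few dozen lines on the client side already stop the constant re-instructing; whether the service chooses to offer the same thing server-side is policy, not capability.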