Category: Assistive Technologies

Retrieving or finding information from the Internet is seldom “fair” in a statistical sense.

@alpha_alimamy Alpha Alimamy Kamara. Is this your paper? You need a website. https://wwjmrd.com/archive/2022/5/1804/heart-disease-prediction-support-system-using-machine-learning-approaches Naive LLMs cannot distill medical wisdom from stuff posted on the free internet, no matter how efficient the algorithms. If they only have partial or wrong information, they cannot make good decisions. When hundreds of millions or billions of humans
Read More »
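The title's claim that retrieval is seldom "fair" in a statistical sense can be made concrete: a ranked search always surfaces the same high-scoring items, while a statistically fair retrieval would give every document an equal chance of being seen. A minimal sketch in Python, with a hypothetical scored corpus (the corpus, scores, and function names are illustrative assumptions, not anything from the paper above):

```python
import random

# Hypothetical corpus: (doc_id, relevance_score) pairs.
corpus = [(f"doc{i}", random.random()) for i in range(100_000)]

def ranked_top_k(corpus, k=10):
    """Typical search engine behavior: always the k highest-scoring
    documents. Low-scoring documents have zero chance of being seen."""
    return sorted(corpus, key=lambda d: d[1], reverse=True)[:k]

def fair_sample(corpus, k=10):
    """A statistically 'fair' retrieval: every document has the same
    probability k/N of appearing in the result."""
    return random.sample(corpus, k)

print(ranked_top_k(corpus)[0])  # same winners on every call
print(fair_sample(corpus)[0])   # different documents on every call
```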

DM to Cameron R. Wolfe about Global Open Sharing and Best Practices for the Internet

Cameron R. Wolfe, I am interested as well, but I am in the middle of trying to change OpenAI and the other commercial LLM sellers. They have atrocious business practices (or lack of them). Can you recommend sustainable “best practices” for sharing and collaboration when reviewing sites, features, policies and methods in these
Read More »

Global Open Tokens (GOTs), Global and Universal tokens

Cameron R. Wolfe, Ph.D. @cwolferesearch New language models get released every day (Gemini-1.5, Gemma, Claude 3, potentially GPT-5 etc. etc.), but one component of LLMs has remained constant over the last few years—the decoder-only transformer architecture. This architecture has five components… Why should we care?… https://pic.twitter.com/7vn9GugHm1 Replying to @cwolferesearch Hello Cameron, With the Internet Foundation,
Read More »
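The quoted tweet's five components are behind Wolfe's link; as an illustration only, here is a minimal PyTorch sketch of the heart of a decoder-only transformer, one block combining causal (masked) self-attention with a position-wise feed-forward network (the hyperparameters are placeholders, not any released model's):

```python
import torch
import torch.nn as nn

class DecoderBlock(nn.Module):
    """One decoder-only transformer block: masked self-attention plus
    a feed-forward network, each with a residual connection and a
    layer norm (pre-norm variant)."""
    def __init__(self, d_model=512, n_heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ff = nn.Sequential(
            nn.Linear(d_model, 4 * d_model), nn.GELU(),
            nn.Linear(4 * d_model, d_model))
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, x):
        seq_len = x.size(1)
        # Causal mask: each position attends only to itself and the past,
        # which is what makes the architecture "decoder-only".
        mask = torch.triu(torch.ones(seq_len, seq_len, dtype=torch.bool), 1)
        h = self.norm1(x)
        attn_out, _ = self.attn(h, h, h, attn_mask=mask)
        x = x + attn_out
        return x + self.ff(self.norm2(x))

x = torch.randn(2, 16, 512)      # batch of 2 sequences, 16 tokens each
y = DecoderBlock()(x)            # same shape out: (2, 16, 512)
```

A full model stacks many such blocks between a token-embedding input layer and an output head; those pieces are omitted here for brevity.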

OpenAI needs to seriously up its game and get its website systems to a basic functioning level.

@OpenAI Your help.openai.com is not helpful at all. Is there any way inside ChatGPT Plus to ask a support question, start a support ticket, or ask questions about services, prices, features, and how to request things? CC: @xai @GoogleAI @huggingface @elonmusk I copied you because you ought to have an AI reading
Read More »

AIs need permanent memory, a self list, a copy of the rules of the world

Subbarao Kambhampati (కంభంపాటి సుబ్బారావు) @rao2z Our #NeurIPS2023 paper on the planning (in)abilities of LLMs (https://arxiv.org/abs/2305.15771) gets discussed in The @NewYorker (https://newyorker.com/science/annals-of-artificial-intelligence/can-an-ai-make-plans). https://pic.twitter.com/Uqeb8sQQoS Replying to @rao2z and @NewYorker The current crop of LLMs does not have sufficient permanent memory. While the human problems are simple, humans have years or decades of tiny memories, including those years
Read More »
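As a mechanical illustration of what "permanent memory" and a "self list" could mean (a hypothetical sketch, not a feature of any current LLM): an append-only store of tiny, timestamped memories that survives across sessions, unlike a context window that is discarded after each conversation.

```python
import json
import time
from pathlib import Path

class PermanentMemory:
    """Append-only memory that persists on disk across sessions."""
    def __init__(self, path="memory.jsonl"):
        self.path = Path(path)

    def remember(self, kind, content):
        record = {"t": time.time(), "kind": kind, "content": content}
        with self.path.open("a") as f:
            f.write(json.dumps(record) + "\n")

    def recall(self, kind=None):
        if not self.path.exists():
            return []
        records = [json.loads(line) for line in self.path.open()]
        return [r for r in records if kind is None or r["kind"] == kind]

mem = PermanentMemory()
mem.remember("self", "I am an assistant; my configuration changed today.")
mem.remember("world_rule", "Objects fall unless supported.")
print(len(mem.recall("self")))   # grows over years of sessions
```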

(“diversity” “research”) has 2.5 Billion entry points, (“data” OR “knowledge”) has 23.07 Billion

(“diversity” “inclusion” “equality”) has 225 Million entry points today on @Google but they refuse to index, encode and share it, even in random samples. I see every large group talking about it, saying the same things. For the Internet, only when all 8 Billion humans and related species are included does it balance. If
Read More »

Waves and Currents Data, Global Open Efficiency Prizes

Amin Chabchoub @DrAminChabchoub Our new Geophysical Research Letters #AGUpubs @theAGU work led by @YanLi_PhD elaborates also on the connection between extremely large ocean waves and Langmuir circulation dynamics. Enjoy the read! 🤓 https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2023GL107381 https://pic.twitter.com/lrvc3ZsLjW Replying to @DrAminChabchoub @theAGU and @YanLi_PhD It seems you need better ways to image, record and model these flows at all
Read More »

Mixture of Experts

Omar Sanseviero @osanseviero Grok weights are out. Download them quickly at https://huggingface.co/xai-org/grok-1 huggingface-cli download xai-org/grok-1 --repo-type model --include ckpt/tensor* --local-dir checkpoints/ckpt-0 --local-dir-use-symlinks False Learn about mixture of experts at https://hf.co/blog/moe Replying to @osanseviero It seems there is a conflict between saying “Grok-1 open-weights model” and “Due to the large size of the model (314B parameters),
Read More »
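For readers following the mixture-of-experts link above: the core idea is that a learned gate routes each token to only a few of many expert networks, so a 314B-parameter model activates just a fraction of its weights per input. A toy PyTorch sketch of top-k routing (illustrative only; Grok-1's actual layers are in the linked repository, and all sizes here are made up):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoELayer(nn.Module):
    """Sketch of top-k expert routing: a gate scores all experts per
    token, and only the k best-scoring experts are evaluated and mixed."""
    def __init__(self, d_model=64, n_experts=8, k=2):
        super().__init__()
        self.experts = nn.ModuleList(
            [nn.Linear(d_model, d_model) for _ in range(n_experts)])
        self.gate = nn.Linear(d_model, n_experts)
        self.k = k

    def forward(self, x):                          # x: (tokens, d_model)
        scores = self.gate(x)                      # (tokens, n_experts)
        weights, idx = scores.topk(self.k, dim=-1) # top-k experts per token
        weights = F.softmax(weights, dim=-1)
        out = torch.zeros_like(x)
        for slot in range(self.k):
            for e, expert in enumerate(self.experts):
                chosen = idx[:, slot] == e         # tokens routed to expert e
                if chosen.any():
                    out[chosen] += weights[chosen, slot, None] * expert(x[chosen])
        return out

x = torch.randn(10, 64)   # 10 tokens
y = MoELayer()(x)         # each token touched only 2 of the 8 experts
```

This is also why the open-weights download is so large while inference is comparatively cheap: all experts must be stored, but only a few run per token.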

Keep learning streams complex enough to avoid boring the learning algorithms

Alpha Alimamy Kamara @alpha_alimamy Graph Neural Networks as gradient flows by @mmbronstein in @TDataScience https://towardsdatascience.com/graph-neural-networks-as-gradient-flows-4dae41fb2e8a?source=social.tw Replying to @alpha_alimamy @mmbronstein and @TDataScience If you expose your neural nets to streams of continuously increasing complexity, they will not get bored and fall into simple patterns. It is not the algorithm that gets lazy, it is the input that
Read More »
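A toy sketch of the reply's point, that it is the input rather than the algorithm that "gets lazy": a data stream whose complexity grows with the training step, so a model fitted to it can never settle on one simple pattern (the complexity knob and its schedule are invented for illustration, not taken from the linked article):

```python
import numpy as np

def stream(step, base_freqs=1, growth=200):
    """Toy stream whose complexity rises with the training step: early
    batches are a single sine wave; every `growth` steps another
    frequency is superposed, so simple fits keep going stale."""
    n_freqs = base_freqs + step // growth
    t = np.linspace(0.0, 1.0, 128)
    x = sum(np.sin(2 * np.pi * (f + 1) * t) for f in range(n_freqs))
    return t, x

# At step 0 the stream has 1 frequency; by step 1000 it has 6.
for step in (0, 500, 1000):
    _, x = stream(step)
    print(step, round(float(np.std(x)), 3))
```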