Author: Richard K Collins
The Internet Foundation
Internet policies, global issues, global open lossless data, global open collaboration
Our current human population: about 8 billion people. What sizes of shadow would they cast, floating in any orientation, if lit by the sun directly overhead? By mass? A 3D anthropometric database? A full-body MRI database? To begin to estimate acoustic levitation pressures.🔥
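A minimal order-of-magnitude sketch of that estimate, assuming a 70 kg adult with roughly 0.7 m² of projected (shadow) area lying flat; the mass and area are assumed illustrative values, not taken from any anthropometric or MRI database. The force balance p = m·g/A gives the time-averaged radiation pressure needed, and the conversion to sound pressure level assumes a plane wave reflecting off the body.

```python
# Order-of-magnitude estimate of the acoustic radiation pressure needed to
# levitate a human body. Mass and projected area are assumed values.
import math

m = 70.0          # body mass, kg (assumed)
A = 0.7           # projected "shadow" area lying flat, m^2 (assumed)
g = 9.81          # gravitational acceleration, m/s^2
rho = 1.2         # air density, kg/m^3
c = 343.0         # speed of sound in air, m/s

p_rad = m * g / A                              # required mean radiation pressure, Pa
# Langevin radiation pressure on a perfect reflector: p_rad = 2 * <E>,
# with acoustic energy density <E> = p_rms^2 / (rho * c^2).
p_rms = math.sqrt(p_rad * rho * c**2 / 2.0)    # required RMS sound pressure, Pa
spl = 20.0 * math.log10(p_rms / 20e-6)         # sound pressure level, dB re 20 uPa

print(f"radiation pressure ~ {p_rad:.0f} Pa")
print(f"RMS sound pressure ~ {p_rms:.0f} Pa (~{spl:.0f} dB SPL)")
```

Roughly 1000 Pa of radiation pressure implies a level above 170 dB SPL under these assumptions, which is why levitating large masses is usually discussed in terms of focused, standing-wave fields rather than ordinary loudspeakers.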
Pradeep, there are many things I do not like about this video. (1) It is “talking heads”, so every viewer has to convert voice to image in their head. If V2I (voice to image) is hard for AIs, it is also hard for humans. Dinkar Srivastava has some in his head, but he gives no open model for
Read More »
Matt Krisiloff @mattkrisiloff Aug 9, 2022 Scientists at startups should care more about equity. Most scientists don’t fully see the value of owning shares. That sucks for them, and it sucks for the startups they’re at too. I wrote a post on this, and I hope we can change this. https://mattkrisiloff.com/scientists-should-care-more-about-equity Replying to @mattkrisiloff and
Read More »
Peter Boshard Olson @peterbolson Clear explanation from @LangChainAI’s @hwchase17: There are 4 main ways to give an AI app background info: 1. Instruction prompting 2. Few-shot examples 3. RAG 4. Fine-tuning It hadn’t occurred to me that these are all ways of accomplishing the same broad goal. 🧵(1/2) https://pic.twitter.com/9QBuH3xQOR Replying to @peterbolson @LangChainAI and @hwchase17
Read More »
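As a rough sketch of how the first three of those four methods show up at prompt-assembly time (fine-tuning is the odd one out because it changes model weights rather than the prompt), assuming a hypothetical retrieve() helper standing in for whatever retriever or vector store an app actually uses:

```python
# Minimal sketch: three of the four ways to supply background information
# end up as text in the prompt; fine-tuning instead changes model weights.

def retrieve(query: str) -> list[str]:
    """Hypothetical stand-in for a real retriever / vector store (RAG)."""
    return ["Doc snippet: the app's own background material would be retrieved here."]

def build_prompt(question: str) -> str:
    instructions = "You are a concise assistant. Answer from the context given."  # 1. instruction prompting
    few_shot = [                                                                   # 2. few-shot examples
        ("Q: What is RAG?", "A: Retrieval-augmented generation."),
    ]
    context = retrieve(question)                                                   # 3. RAG
    parts = [instructions]
    parts += [f"{q}\n{a}" for q, a in few_shot]
    parts += context
    parts.append(f"Q: {question}\nA:")
    return "\n\n".join(parts)

# 4. Fine-tuning: the same background would instead be baked into the model by
#    further training, so build_prompt() would carry less of the load.

print(build_prompt("How does an app give a model background info?"))
```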
Yi Ma @YiMaTweets Our white-box transformer paper is over 100 pages. Just believe some truly important ideas are better conveyed wholesale than retail. 8-page conference papers tend to encourage incremental progress or fragmented ideas. One needs to see the big picture once in a while. Replying to @YiMaTweets Just process your 100 pages and
Read More »
kamalikac @kamalikac Do representation learning models memorize their training data? To understand this, we propose a new method called Deja vu to measure memorization in these models. #NeurIPS2023 Replying to @kamalikac Full access to lossless complete source data is critical to accepted and trustworthy AIs. I criticize all AI groups now that refuse to cite
Read More »
Gary Marcus @GaryMarcus Hate blocking people but when I do, it’s almost always for dwelling at the bottom level of this pyramid. (Twice this morning already). If you keep your arguments against me to the top half of the pyramid, I promise I won’t block, no matter how strenuously I disagree. twitter.com/garymarcus/sta… I hope you
Read More »
Q-CTRL @qctrlHQ Large-scale #quantum computers will require some form of error correction – a distant prospect. In the meantime, the best way to tame unruly, near-term quantum processors is “error suppression”. We’re proud to bring this technology to @IBM Quantum. https://buff.ly/3TgemhQ Replying to @qctrlHQ and @IBM Groups finally realize that quantum components are just components
Read More »
Lorena Barba @labarba@fosstodon.org @LorenaABarba Quoted in @Nature article “Is AI leading to a reproducibility crisis in science?” by @philipcball, I may sound a bit harsh, but it’s the truth… #SciML #reproducibility https://pic.twitter.com/T0HjaCuMK3 Replying to @LorenaABarba @Nature and @philipcball Not “leading to”. The sciences and much online already use closed methods and “nearly impossible to trace
Read More »
Sasi @freest_man Naive Bayes Classification assumes each feature is independent of others. Naive Bayes classifiers are probabilistic classifiers that predict based on the probability of an object. They’re based on Bayes’ Theorem and assume that every pair of features being classified is… https://pic.twitter.com/KYIUhRKbmQ Replying to @freest_man Which is why GPTs grab as many nearby tokens
Read More »
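A minimal from-scratch sketch of the independence assumption that tweet describes: a Gaussian Naive Bayes classifier scores each class by summing one per-feature log-likelihood per feature (a product of independent likelihoods in probability space), so no feature interactions are modeled. The data and shapes here are made up purely for illustration.

```python
# Tiny Gaussian Naive Bayes: each feature is treated as independent given the
# class, so a class score is the sum of per-feature log-likelihoods + log prior.
import numpy as np

def fit(X, y):
    params = {}
    for c in np.unique(y):
        Xc = X[y == c]
        params[c] = (Xc.mean(axis=0), Xc.var(axis=0) + 1e-9, len(Xc) / len(X))
    return params

def predict(params, X):
    scores = []
    for c, (mu, var, prior) in params.items():
        # log N(x | mu, var) summed over features, plus log prior
        ll = -0.5 * np.sum(np.log(2 * np.pi * var) + (X - mu) ** 2 / var, axis=1)
        scores.append(ll + np.log(prior))
    classes = np.array(list(params.keys()))
    return classes[np.argmax(np.stack(scores, axis=1), axis=1)]

# Made-up 2D data: two Gaussian blobs, one per class.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
print(predict(fit(X, y), np.array([[0.1, -0.2], [2.9, 3.2]])))  # expected: [0 1]
```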