Jason Li

1.2K posts

@jasonjli

Software developer, AI researcher, musician, member of ABC Advisory Council 2019-2022. All tweets are my own views; retweets do not equal endorsement.

Canberra · Joined January 2011
159 Following · 59 Followers
Jason Li@jasonjli·
@isobelroe Congratulations, and that’s not a bad photo either!
0 replies · 0 retweets · 0 likes · 36 views
Jason Li retweeted
Arif Ashraf@aribidopsis·
I was reading 2023 Nobel laureate Katalin Kariko's book, Breaking Through. She was brutally honest about the academic system in the US. Here is an example section from her book.
[image attached]
214 replies · 1.5K retweets · 7.5K likes · 2.6M views
Jason Li retweeted
Hamel Husain@HamelHusain·
Dang 🤗 Text Generation Inference server (a popular tool for LLM inference) is 🚨no longer commercially permissible.🚨 Apache 2.0 -> HFOIL 1.0. Never thought this would happen to a HuggingFace product, but it's a business 💰💰 so it's understandable github.com/huggingface/te…
18 replies · 20 retweets · 181 likes · 217.4K views
Jason Li retweeted
John Gibbons 🇵🇸@think_or_swim·
Not to be alarmist but…this is what’s called a six-sigma event, now unfolding in Antarctica. Otherwise known as a once-in-7.5-million-year event. Hang onto your hats. HT @EliotJacobson
[image attached]
878 replies · 7.1K retweets · 21K likes · 6M views
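The quoted recurrence interval can be sanity-checked from the Gaussian tail. A minimal sketch, assuming one independent sample per day and a one-tailed test; the tweet's 7.5-million-year figure implies somewhat different assumptions about sampling interval or tail convention:

```python
import math

sigma = 6.0
# One-tailed Gaussian exceedance probability of a 6-sigma observation
p = 0.5 * math.erfc(sigma / math.sqrt(2))   # roughly 1e-9

# Assuming one independent sample per day, the expected recurrence interval
recurrence_years = 1 / p / 365.25           # on the order of millions of years
```

Either way, the order of magnitude supports the "hang onto your hats" framing.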
Jason Li retweeted
Philipp Singer@ph_singer·
Wow, I am getting crazy good local scores with Llama-2-7b-chat already. I believe this model will make most chat fine-tuning redundant. Also, the output seems very sensitive to the system prompt, allowing the same kind of prompt engineering as with GPT.
1 reply · 9 retweets · 55 likes · 11.7K views
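For context on why the system prompt matters so much here: Llama-2-chat expects a specific template, with the system text wrapped in `<<SYS>>` tags inside the first `[INST]` block. A minimal single-turn sketch (the helper name is ours, not from any library):

```python
def llama2_chat_prompt(system: str, user: str) -> str:
    """Build a single-turn Llama-2-chat prompt in the format Meta's
    reference implementation uses: system text inside <<SYS>> tags,
    user text inside the [INST] block."""
    return f"[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{user} [/INST]"

prompt = llama2_chat_prompt(
    "You are a terse assistant. Answer in one sentence.",
    "Why is the sky blue?",
)
```

Varying only the `system` string shifts the model's behavior substantially, which is the kind of prompt engineering the tweet describes.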
Jason Li retweeted
Los Angeles Times@latimes·
It’s official: Actors are on strike. SAG-AFTRA members will join Hollywood writers on the picket lines for the first time in 63 years, as part of a historic labor battle. latimes.com/entertainment-…
43 replies · 863 retweets · 2.6K likes · 508.1K views
Jason Li@jasonjli·
Sounds like a raw deal.
0 replies · 0 retweets · 0 likes · 19 views
Jason Li retweeted
Yann LeCun@ylecun·
One cannot just "solve the AI alignment problem." Let alone do it in 4 years. One doesn't just "solve" the safety problem for turbojets, cars, rockets, or human societies, either. Engineering-for-reliability is always a process of continuous & iterative refinement.
140 replies · 279 retweets · 2.4K likes · 350.6K views
Jason Li retweeted
Sayash Kapoor@sayashk·
OpenAI's ChatGPT lost its browsing feature a couple of days ago, courtesy of @random_walker's demonstration that it could output entire paywalled articles. But Bing itself continues to serve up paywalled articles, word for word, no questions asked.
[image attached]
10 replies · 60 retweets · 265 likes · 54.2K views
Jason Li@jasonjli·
“Life finds a way.”
Veera Rajagopal @doctorveera

A mind-blowing paper came out today in @Nature. In 2016, JC Venter Institute scientists trimmed a bacterial genome to the bare minimum required for life, synthesizing what they called a "minimal genome" (science.org/doi/10.1126/sc…). Today, a group of scientists from Indiana University reports how that minimal genome evolved over 2,000 generations compared to the non-minimal genome.

The authors found that even when you reduce a bacterial genome to its absolute minimum, where every nucleotide matters, the genome undergoes mutational events generation after generation just as much as the non-minimal genome. One simply cannot stop evolution. In just over 300 days of evolution (equivalent to 40,000 years in humans), the minimal cell regained all the fitness it lacked on day one relative to the non-minimal cell.

When comparing the evolved traits of the minimal and non-minimal cells, the scientists found something striking. The evolutionary process increased the cell size of the non-minimal cells but not that of the minimal cell. But that is not the striking part: the scientists were able to identify the key mutation responsible for cell-size evolution, and it turned out that the mutation that helped the non-minimal cells grow bigger is the same one that helped the minimal cells stay smaller. Growing bigger had a survival advantage for non-minimal cells, and not growing bigger had a survival advantage for minimal cells. So the mutation had a context-dependent effect.

This demonstrates that evolutionary effects on traits have no absolute direction. All that matters is what benefits the organism's survival. The conclusion of the paper is, metaphorically, a quote from the Jurassic Park movie: "Listen, if there's one thing the history of evolution has taught us is that life will not be contained. Life breaks free. It expands to new territories, and it crashes through barriers painfully, maybe even dangerously, but . . . life finds a way."
(scienmag.com/artificial-cel…) nature.com/articles/s4158…
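The "300 days ≈ 40,000 human years" equivalence in the quoted thread follows from counting generations. A quick arithmetic check, assuming a ~20-year human generation time:

```python
days = 300
generations = 2000

# Implied bacterial generation time over the experiment
hours_per_generation = days * 24 / generations    # 3.6 hours per generation

# Same number of generations expressed in human time
human_generation_years = 20                        # assumed human generation time
human_equivalent_years = generations * human_generation_years
```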

0 replies · 0 retweets · 0 likes · 20 views
Marc Bee@marcbeaupre·
@leavittron Again, ultimately, I know nothing, but there could also be advances that require much less magnetic field.
1 reply · 0 retweets · 1 like · 223 views
Matthew Leavitt@leavittron·
As a neuroscientist, imma call bullshit on this. All these "mind reading" techniques rely on an fMRI scanner: a multimillion-dollar, 10,000 lb+ machine that requires a purpose-built facility, and you have to lie perfectly still in it for it to work. Nobody's stealing your thoughts.
Enzo Avigo@0zne

We’re basically done.

120 replies · 119 retweets · 1.2K likes · 400.9K views
Jason Li retweeted
Tom Goldstein@tomgoldsteincs·
Training an LLM takes about 1 trillion words. That’s about 30,000 years of typing. But where does this data come from? And what does this have to do with the Reddit protests? Here’s how OpenAI trains models on “the entire internet.” 🧵📜
15 replies · 293 retweets · 1.1K likes · 357.5K views
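The "30,000 years of typing" figure holds up as rough arithmetic, assuming a sustained ~60 words per minute with no breaks:

```python
words = 1_000_000_000_000        # ~1 trillion training words
wpm = 60                          # assumed sustained typing speed

minutes = words / wpm
years = minutes / (60 * 24 * 365.25)   # ~32,000 years of nonstop typing
```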
Jason Li retweeted
Raunak@raunakdoesdev·
A recent work from @iddo claimed GPT-4 can score 100% on MIT's EECS curriculum with the right prompting. My friends and I were excited to read the analysis behind such a feat, but after digging deeper, what we found left us surprised and disappointed. dub.sh/gptsucksatmit 🧵
Aran Komatsuzaki@arankomatsuzaki

Exploring the MIT Mathematics and EECS Curriculum Using Large Language Models Presents a comprehensive dataset of 4,550 questions and solutions from all MIT EECS courses required for obtaining a degree arxiv.org/abs/2306.08997

53 replies · 783 retweets · 3.2K likes · 3.5M views
Jason Li retweeted
Nora Belrose@norabelrose·
Ever wanted to mindwipe an LLM? Our method, LEAst-squares Concept Erasure (LEACE), provably erases all linearly-encoded information about a concept from neural net activations. It does so surgically, inflicting minimal damage to other concepts. 🧵 arxiv.org/abs/2306.03819
46 replies · 243 retweets · 1.3K likes · 298.8K views
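LEACE itself uses a whitened, oblique projection (see the linked paper); as intuition only, here is the simpler rank-one baseline it generalizes: projecting activations onto the orthogonal complement of the concept's cross-covariance direction, which already zeroes the linear covariance between activations and a scalar concept label. All variable names and the toy data are ours:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 1000, 16
z = rng.integers(0, 2, size=n).astype(float)   # binary concept label
X = rng.normal(size=(n, d))                    # toy "activations"
X[:, 0] += 2.0 * z                             # plant the concept along one axis

# Cross-covariance direction between activations and the label
Xc, zc = X - X.mean(axis=0), z - z.mean()
v = Xc.T @ zc
v /= np.linalg.norm(v)

# Orthogonal projection removing that direction from every activation
X_erased = X - np.outer(X @ v, v)

corr_before = abs(np.corrcoef(X[:, 0], z)[0, 1])         # strong correlation
corr_after = abs(np.corrcoef(X_erased[:, 0], z)[0, 1])   # ~0: linearly erased
```

Unlike this sketch, LEACE chooses the projection that provably minimizes damage to everything else in the representation.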