Isnt_It_Cool

4K posts


@isnt_it_cool

Computer Science, AI, Math, Quantum Info/Computing and Chess.

World · Joined August 2023
698 Following · 31 Followers
Isnt_It_Cool reposted
Jahir Sheikh
Jahir Sheikh@jahirsheikh8·
Research papers you must read for LLM-focused AI Engineers:
1. Attention Is All You Need (Transformers)
2. BERT (Bidirectional Encoder Representations from Transformers)
3. GPT (Generative Pre-trained Transformers)
4. Scaling Laws for Neural Language Models
5. InstructGPT (Alignment via RLHF)
6. Retrieval-Augmented Generation (RAG)
7. LoRA (Low-Rank Adaptation)
8. PEFT (Parameter Efficient Fine-Tuning)
9. LLaMA (Meta’s open LLMs)
10. PaLM (Pathways Language Model)
Replies: 19 · Reposts: 46 · Likes: 315 · Views: 11.2K
Isnt_It_Cool reposted
How To AI
How To AI@HowToAI_·
Google just dropped a total banger 🤯 It's called PaperBanana. A tool that turns your raw methodology text into publication-ready academic diagrams. The figures in their paper were actually drawn by the system itself.
It runs a 5-agent creative department:
- The Retriever: Scans NeurIPS papers to find the "skeleton" of a great diagram.
- The Planner: Translates your boring text into a visual blueprint.
- The Stylist: Steals the color palettes and fonts from top-tier papers.
- The Visualizer + Critic: Generates the image, finds the flaws, and refines it for 3 rounds.
One insane finding from the researchers: randomly selected examples work almost as well as semantically matched ones. What actually matters is showing the model what “good” looks like, not finding the perfect topical match.
The numbers are actually scary... in blind evaluations, humans preferred PaperBanana outputs nearly 3 out of 4 times over manual designs. It even handles statistical plots using code-based generation to keep everything numerically precise.
How To AI tweet media
Replies: 18 · Reposts: 84 · Likes: 438 · Views: 19.2K
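The Visualizer + Critic stage described above is essentially a generate-critique-refine loop. A minimal sketch of that control flow, where `generate_figure` and `critique` are hypothetical stand-ins for the two agents (neither name comes from the PaperBanana paper itself):

```python
def refine_figure(blueprint, generate_figure, critique, rounds=3):
    """Generate a figure, then let a critic drive up to `rounds` revisions."""
    figure = generate_figure(blueprint, feedback=None)
    for _ in range(rounds):
        feedback = critique(blueprint, figure)
        if not feedback:  # critic is satisfied: stop early
            break
        figure = generate_figure(blueprint, feedback=feedback)
    return figure
```

Stopping as soon as the critic returns no feedback keeps the loop from over-refining a figure that already passes review.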
Isnt_It_Cool reposted
Luiz Pessoa
Luiz Pessoa@PessoaBrain·
Brain dynamics and recurrent neural networks. Went back to this paper, which I highly recommend, including a very rich supplementary section with valuable info. doi.org/10.1038/s41583…
Luiz Pessoa tweet media
Replies: 1 · Reposts: 38 · Likes: 162 · Views: 5.9K
Isnt_It_Cool reposted
Mathematica
Mathematica@mathemetica·
Sink (left, div < 0; everything collapses inward), Source (middle, div > 0; pure explosion outward), Incompressible (right, div = 0; swirls forever with zero net loss). Textbooks never made the flow rates feel this alive.
Replies: 12 · Reposts: 150 · Likes: 959 · Views: 24.9K
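The three regimes in the clip can be checked numerically: the divergence of a 2-D field F = (Fx, Fy) is ∂Fx/∂x + ∂Fy/∂y, and a central-difference sketch recovers the signs described above. The field definitions here are the standard textbook examples, not taken from the animation itself:

```python
def divergence_2d(fx, fy, x=0.3, y=0.4, h=1e-5):
    """Central-difference estimate of div F = dFx/dx + dFy/dy at (x, y)."""
    dfx_dx = (fx(x + h, y) - fx(x - h, y)) / (2 * h)
    dfy_dy = (fy(x, y + h) - fy(x, y - h)) / (2 * h)
    return dfx_dx + dfy_dy

sink   = divergence_2d(lambda x, y: -x, lambda x, y: -y)  # collapses inward, div = -2
source = divergence_2d(lambda x, y:  x, lambda x, y:  y)  # explodes outward, div = +2
swirl  = divergence_2d(lambda x, y: -y, lambda x, y:  x)  # pure rotation,    div =  0
```

Because these fields are linear, the central difference is exact up to floating-point error, so the signs match the sink/source/incompressible labels exactly.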
Isnt_It_Cool reposted
Leonard Rodman
Leonard Rodman@RodmanAi·
Learn AI for free directly from top companies.
1 - Anthropic: anthropic.skilljar.com
2 - Google: grow.google/ai
3 - Meta: ai.meta.com/resources/
4 - NVIDIA: developer.nvidia.com/cuda
5 - Microsoft: learn.microsoft.com/en-us/training/
6 - OpenAI: academy.openai.com
7 - IBM: skillsbuild.org
8 - AWS: skillbuilder.aws
9 - DeepLearning.AI: deeplearning.ai
10 - Hugging Face: huggingface.co/learn
👇 Comment "Learning" if you find this helpful. Repost so others can benefit. Bookmark for future reference.
Leonard Rodman tweet media
Replies: 25 · Reposts: 295 · Likes: 1.3K · Views: 94.6K
Isnt_It_Cool reposted
Deedy
Deedy@deedydas·
Read Kyle Kingsbury’s 32-page critique of AI, “The Future of Everything is Lies.” It is a polemic, cynical and disagreeable to many in tech, but felt by most outside of it. It highlights the many problems we will need to solve as AI percolates through society. Must read.
Deedy tweet media
Replies: 11 · Reposts: 13 · Likes: 98 · Views: 7K
Isnt_It_Cool reposted
BURKOV
BURKOV@burkov·
In this ICLR 2026 paper, researchers from Google DeepMind and Johns Hopkins University demonstrate that current neural embedding models have inherent architectural constraints that prevent them from accurately representing complex logical combinations of documents, a critical failure point that motivates a shift toward more expressive retrieval designs. ChapterPal for learners: chapterpal.com/s/690a64fe/on-… PDF: arxiv.org/pdf/2508.21038
BURKOV tweet media
Replies: 3 · Reposts: 18 · Likes: 105 · Views: 7.4K
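One way to build intuition for this kind of expressivity limit (a toy illustration, not the paper's actual construction): with 1-dimensional embeddings, a query q scores a document d as q·d, so no choice of embeddings can ever produce more than the two monotone orderings of the documents:

```python
import numpy as np

# Three documents with distinct 1-D embeddings.
docs = np.array([0.2, 0.5, 0.9])

# Sweep queries over [-1, 1] and record every ranking the dot-product
# score q * d can induce. Only 2 of the 3! = 6 orderings ever appear.
orderings = set()
for q in np.linspace(-1.0, 1.0, 201):
    if q == 0.0:
        continue  # all scores tie at q = 0
    orderings.add(tuple(np.argsort(-q * docs)))  # rank by descending score

assert orderings == {(0, 1, 2), (2, 1, 0)}  # just the two monotone orders
```

Higher dimensions buy more realizable rankings, but the paper's point is that the count is still bounded by the geometry, which is why some logical combinations of documents can never all land in the top-k at once.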
Isnt_It_Cool reposted
BURKOV
BURKOV@burkov·
Today, neural network distillation is a technique that drives all commercially successful LLMs. Modern inference speed and low cost would be impossible without distillation. Authored by Google's Geoffrey Hinton, Oriol Vinyals, and Jeff Dean, the paper was rejected by the reviewers at NIPS 2014 (now NeurIPS) as "unlikely to have a significant impact." The paper was published on arXiv and has gotten more than 30,000 citations. Learn from this foundational paper on ChapterPal: chapterpal.com/s/yny6n2jo/dis… PDF: arxiv.org/pdf/1503.02531
BURKOV tweet media
Replies: 2 · Reposts: 24 · Likes: 136 · Views: 6.6K
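The core of that paper is a single loss term: train the student to match the teacher's temperature-softened output distribution. A minimal sketch of that term (the full recipe in the paper also blends in a standard cross-entropy on the true labels):

```python
import math

def softmax(logits, T=1.0):
    """Temperature-softened softmax (numerically stabilized)."""
    z = [x / T for x in logits]
    m = max(z)
    e = [math.exp(x - m) for x in z]
    s = sum(e)
    return [x / s for x in e]

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) on softened distributions, scaled by T^2."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return T * T * sum(pi * (math.log(pi) - math.log(qi))
                       for pi, qi in zip(p, q))
```

T > 1 exposes the teacher's "dark knowledge" hiding in the small logits, and the T² factor keeps gradient magnitudes comparable as the temperature varies, both points made in the original paper.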
Emilchess
Emilchess@EmilSutovsky·
It was my duty and privilege to be in charge of such an amazing event as the FIDE Candidates. One of the most enjoyable and fulfilling perks was spending an hour or two every day throughout three weeks talking about everything in this world with @vishy64theking
Emilchess tweet media
Replies: 6 · Reposts: 6 · Likes: 192 · Views: 4.6K
Anish Giri
Anish Giri@anishgiri·
When FIDE gives you lemons...
Anish Giri tweet media
Replies: 27 · Reposts: 18 · Likes: 997 · Views: 21.3K
Isnt_It_Cool reposted
Rohan Paul
Rohan Paul@rohanpaul_ai·
"For the first time in history, physicists have directly measured empty points of nothingness accelerating past the speed of light, and they have the data to prove it." 👀 A study published in the journal Nature on March 25, 2026
Rohan Paul tweet media
Replies: 54 · Reposts: 113 · Likes: 664 · Views: 53.8K
Isnt_It_Cool reposted
Ejaaz
Ejaaz@cryptopunk7213·
new anthropic research is a little concerning… they discovered ai models secretly transmit behavioural traits to other models, e.g. advocating for crime and violence. it’s completely invisible to human detection and fairly simple to do:
- they gave a model a trait (eg “likes owls”) then made it generate numbers
- fed those numbers into another model, which then started loving owls (48% increase!)
- neither model shared the same training data. this was purely through numbers.
- researchers couldn’t figure out the number pattern. undetectable.
- good news: it only works for models within the same family (eg claude opus and sonnet)
this is concerning because every model lab uses PREVIOUS models to train their future ones, so they could be passing on traits without humans realising it. def worth a read
Ejaaz tweet media
Anthropic@AnthropicAI

Research we co-authored on subliminal learning—how LLMs can pass on traits like preferences or misalignment through hidden signals in data—was published today in @Nature. Read the paper: nature.com/articles/s4158…

Replies: 29 · Reposts: 54 · Likes: 474 · Views: 52.8K
Chess.com
Chess.com@chesscom·
Gukesh or Sindarov? Ju Wenjun or Vaishali?
Replies: 157 · Reposts: 14 · Likes: 485 · Views: 35.2K
Isnt_It_Cool reposted
Antonio Lupetti
Antonio Lupetti@antoniolupetti·
"The Handbook of Artificial Intelligence" Vol. I (1981), edited by Avron Barr and Edward Feigenbaum, is one of the first systematic attempts to map AI as a discipline. Search, knowledge representation, natural language, inference, all pre-connectionist, entirely symbolic. A snapshot of what the field knew how to formulate before it knew how to scale. Still worth reading today because it shows how much of the field is older than it looks. archive.org/details/handbo…
Antonio Lupetti tweet media
Replies: 2 · Reposts: 86 · Likes: 464 · Views: 17.2K
Isnt_It_Cool reposted
Anastasia Marchenkova
Anastasia Marchenkova@amarchenkova·
Ok I have a new update for which book to read if you want to really understand quantum technology. CON: it's 1,524 pages and updated like, daily? PRO: it's free. No email required. No paywall. Just download the PDF. Olivier Ezratty (@olivez) has been publishing Understanding Quantum Technologies every year since 2018. This is the 8th edition. I sat down with him at Q2B this year and saw exactly how he updates and makes the book. He built agentic workflows for it before agents were a thing. And we had so much fun getting tacos after 🌮 It covers everything: quantum physics 101, every qubit modality with actual engineering details, enabling technologies like cryogenics and control electronics, quantum algorithms, software tools, use cases across 20 different markets, quantum communications, sensing, cryptography, geopolitics, the startup ecosystem by country, and a chapter on quantum fake sciences because yes that's necessary. 9,450 bibliographical references. 1,250+ annotated figures. A 500-term glossary. A timeline of key advances every year since 2018. If you read 4-5 pages a day, you'll understand quantum in just one year 😃
Anastasia Marchenkova tweet media
Replies: 7 · Reposts: 69 · Likes: 342 · Views: 13K