Juan Cobos

122 posts

@JCobosAlvarez

PhD student in Neuroscience at CNRS Montpellier

Joined September 2021
555 Following · 47 Followers
Juan Cobos
Juan Cobos@JCobosAlvarez·
@De_dicto The link isn't working, could you share it?
English
1
0
0
389
Juan Cobos retweeted
Takeshi Imai
Takeshi Imai@TakeshiImaiLab·
Our live tissue clearing paper is out in @naturemethods! We achieved optical clearing of mammalian brain tissues without compromising normal neuronal function. Big congrats to @Shigenori774 and our wonderful collaborators! 🎉 nature.com/articles/s4159… (1/10)
English
19
184
704
146.8K
Juan Cobos
Juan Cobos@JCobosAlvarez·
@elonmusk That means anyone can crush your car with a gesture. Doesn't seem like the best feature
English
0
1
1
18
aditya
aditya@adxtyahq·
NASA writes mission-critical flight software in C. And the rules are absolutely INSANE.
> No recursion. Ever.
> Every loop must have a provable upper bound.
> No dynamic memory allocation after initialization.
> Max ~60 lines per function.
> Minimum 2 assertions per function.
> Every return value must be checked.
> Zero compiler warnings allowed.
> Daily static analysis. Zero warnings there too.
> No function pointers.
> Restricted pointer dereferencing.
This is how they write code at NASA / JPL for mission-critical systems.
aditya tweet media
English
804
1.5K
19.6K
1.8M
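The rules in the tweet above (widely known as the JPL "Power of 10" rules) can be sketched in a few lines of C. This is an illustrative example, not actual NASA/JPL code; `bounded_sum` and `BUF_MAX` are hypothetical names chosen for the sketch.

```c
#include <assert.h>
#include <stddef.h>

/* A minimal sketch of the rules: no recursion, a provable loop bound,
 * no dynamic allocation, >= 2 assertions per function, and a return
 * value the caller must check. */

#define BUF_MAX 64  /* fixed capacity: no allocation after init */

/* Sum the first n elements of buf; returns 0 on success, -1 on bad input. */
int bounded_sum(const int *buf, size_t n, long *out)
{
    assert(buf != NULL);   /* assertion 1 */
    assert(out != NULL);   /* assertion 2 */
    if (buf == NULL || out == NULL || n > BUF_MAX) {
        return -1;         /* defensive check: callers must test this */
    }
    long sum = 0;
    for (size_t i = 0; i < n; i++) {  /* bound provable: n <= BUF_MAX */
        sum += buf[i];
    }
    *out = sum;
    return 0;
}
```

A caller would then check the return value, e.g. `if (bounded_sum(data, 4, &total) != 0) { /* handle error */ }`, satisfying the "every return value must be checked" rule.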
Juan Cobos
Juan Cobos@JCobosAlvarez·
Just one last prompt
English
0
0
0
14
Juan Cobos retweeted
Pavel Durov
Pavel Durov@durov·
Today, Telegram notified all its users in Spain with this alert:

Pedro Sánchez’s government is pushing dangerous new regulations that threaten your internet freedoms. Announced just yesterday, these measures could turn Spain into a surveillance state under the guise of “protection.” Here’s why they’re a red flag for free speech and privacy:

1. Ban on social media for under-16s with mandatory age verification: This isn’t just about kids—it requires platforms to use strict checks, like needing IDs or biometrics. ⚠️ Danger: It sets a precedent for tracking EVERY user’s identity, eroding anonymity and opening doors to mass data collection. What starts with minors could expand to all, stifling open discourse.

2. Personal and criminal liability for platform executives: If “illegal, hateful, or harmful” content isn’t removed fast enough, bosses face jail. ⚠️ Danger: This will force over-censorship—platforms will delete anything remotely controversial to avoid risks, silencing political dissent, journalism, and everyday opinions. Your voice could be next if it challenges the status quo.

3. Criminalizing algorithm amplification: Amplifying “harmful” content via algorithms becomes a crime. ⚠️ Danger: Governments will dictate what you see, burying opposing views and creating echo chambers controlled by the state. Free exploration of ideas? Gone—replaced by curated propaganda.

4. “Hate and polarization footprint” tracking: Platforms must monitor and report how they “fuel division.” ⚠️ Danger: Vague definitions of “hate” could label criticism of the government as divisive, leading to shutdowns or fines. This can be a tool for suppressing opposition.

These aren’t safeguards; they’re steps toward total control. We’ve seen this playbook before—governments weaponizing “safety” to censor critics. On Telegram, we prioritize your privacy and freedom: strong encryption, no backdoors, and resistance to overreach. ✊ Stay vigilant, Spain. Demand transparency and fight for your rights. 
Share this widely—before it’s too late.
English
2.6K
11.2K
38.9K
3M
Juan Cobos
Juan Cobos@JCobosAlvarez·
One of the greatest skills nowadays is being comfortable with uncertainty
English
0
0
0
16
Iason Gabriel
Iason Gabriel@IasonGabriel·
🚨Exciting new opportunity🚨 Come and work with me and a fantastic team @GoogleDeepMind exploring the political, economic, social and cultural impact of advanced AI technology, including AGI and beyond! The details and application link can be found below!
English
24
47
503
173.4K
Juan Cobos
Juan Cobos@JCobosAlvarez·
I've used Claude after a while of using ChatGPT and I'm wholly amazed. Now I'm scared of trying Gemini 3
English
0
0
0
47
Juan Cobos retweeted
nature
nature@Nature·
Living costs and politics have led to PhD students studying abroad Read the full story: go.nature.com/4oRxW1c
nature tweet media
English
3
71
235
30.7K
Juan Cobos retweeted
elDiario.es
elDiario.es@eldiarioes·
More than 850 researchers are still without a predoctoral contract due to the Ministry's delay in resolving the grants eldiario.es/andalucia/850-…
Spanish
3
51
60
21.3K
Juan Cobos
Juan Cobos@JCobosAlvarez·
@ebarenholtz Though I agree with your critique of static files, I'm not sure how retrieval is different from (re)generating. In fact, to generate meaningful sequences, not only context but also stored/internal representations are essential, both in LLMs and humans
English
2
0
3
377
Elan Barenholtz
Elan Barenholtz@ebarenholtz·
No, there are no static “files”. There is only generative capacity. That’s why you can “remember” your mother laughing or crying or in a blue dress or riding a dinosaur. You aren’t accessing a stored representation. You are generating a just in time sensory-perceptual pattern based on the cognitive demands/context. LLMs have shown us the way to a complete rethinking of the nature of memory. Memory is generation, not retrieval open.substack.com/pub/elanbarenh…
Steven Pinker@sapinker

"But what is a mental image? Not for the last time, I got caught up, when I was in graduate school, in a raging controversy called the imagery debate. And this was a debate over the format of mental imagery. Now, everyone reports the experience of a mental picture, and that’s the way we describe it in English. But, of course, there isn’t a real picture in the brain. It isn’t as if there’s a little man sitting in a theater who’s looking at a screen. The behaviorists were right about this. How do we make sense of what’s going on in people’s heads when they say they have a mental picture? The easiest way to make sense of it is to use the concept from computer science of an image file—basically, an array of pixels, as we would now call them. And my graduate advisor, Stephen Kosslyn, came up with what at the time was a revolutionary theory: that images were like image files in a computer." I'll be saying more about the workings of the human mind in February 2026 in Australia and New Zealand. @thinkableevents

English
49
27
278
35.4K
Juan Cobos
Juan Cobos@JCobosAlvarez·
@karpathy When will the "nanochat from scratch" 10h video be uploaded?
English
0
0
0
15
Andrej Karpathy
Andrej Karpathy@karpathy·
Excited to release new repo: nanochat! (it's among the most unhinged I've written). Unlike my earlier similar repo nanoGPT which only covered pretraining, nanochat is a minimal, from scratch, full-stack training/inference pipeline of a simple ChatGPT clone in a single, dependency-minimal codebase. You boot up a cloud GPU box, run a single script and in as little as 4 hours later you can talk to your own LLM in a ChatGPT-like web UI.

It weighs ~8,000 lines of imo quite clean code to:
- Train the tokenizer using a new Rust implementation
- Pretrain a Transformer LLM on FineWeb, evaluate CORE score across a number of metrics
- Midtrain on user-assistant conversations from SmolTalk, multiple choice questions, tool use.
- SFT, evaluate the chat model on world knowledge multiple choice (ARC-E/C, MMLU), math (GSM8K), code (HumanEval)
- RL the model optionally on GSM8K with "GRPO"
- Efficient inference of the model in an Engine with KV cache, simple prefill/decode, tool use (Python interpreter in a lightweight sandbox), talk to it over CLI or ChatGPT-like WebUI.
- Write a single markdown report card, summarizing and gamifying the whole thing.

Even for as low as ~$100 in cost (~4 hours on an 8XH100 node), you can train a little ChatGPT clone that you can kind of talk to, and which can write stories/poems, answer simple questions. About ~12 hours surpasses GPT-2 CORE metric. As you further scale up towards ~$1000 (~41.6 hours of training), it quickly becomes a lot more coherent and can solve simple math/code problems and take multiple choice tests. E.g. a depth 30 model trained for 24 hours (this is about equal to FLOPs of GPT-3 Small 125M and 1/1000th of GPT-3) gets into 40s on MMLU and 70s on ARC-Easy, 20s on GSM8K, etc. My goal is to get the full "strong baseline" stack into one cohesive, minimal, readable, hackable, maximally forkable repo. nanochat will be the capstone project of LLM101n (which is still being developed).

I think it also has potential to grow into a research harness, or a benchmark, similar to nanoGPT before it. It is by no means finished, tuned or optimized (actually I think there's likely quite a bit of low-hanging fruit), but I think it's at a place where the overall skeleton is ok enough that it can go up on GitHub where all the parts of it can be improved. Link to repo and a detailed walkthrough of the nanochat speedrun is in the reply.
Andrej Karpathy tweet media
English
691
3.4K
24.3K
5.8M
NVIDIA GeForce
NVIDIA GeForce@NVIDIAGeForce·
🟢 GEFORCE DAY IS BACK 🟢 To celebrate, we're giving away TWO GeForce RTX 5080 Founders Edition GPUs, signed by NVIDIA CEO Jensen Huang. Want one? Comment "GeForce Day" for a chance to WIN & stay tuned for more!
NVIDIA GeForce tweet media
English
58.5K
3.6K
47.7K
5.9M
Juan Cobos retweeted
François Chollet
François Chollet@fchollet·
GenAI isn't just a technology; it's an informational pollutant—a pervasive cognitive smog that touches and corrupts every aspect of the Internet. It's not just a productivity tool; it's a kind of digital acid rain, silently eroding the value of all information.

Every image is no longer a glimpse of reality, but a potential vector for synthetic deception. Every article is no longer a unique voice, but a soulless permutation of data, a hollow echo in the digital chamber.

This isn't just content creation; it's the flattening of the entire vibrant ecosystem of human expression, transforming a rich tapestry of ideas into a uniform, gray slurry of derivative, algorithmically optimized outputs. This isn't just innovation; it's the systematic contamination of our data streams, a semantic sludge that clogs the channels of genuine communication and cheapens the value of human thought—leaving us to sift through a digital landfill for a single original idea.
English
454
987
6.5K
678.2K