Ben

1.5K posts

@beenwilli

interwebs navigator 👨‍✈️

Knoxville, TN · Joined November 2021
513 Following · 128 Followers
Ben reposted
Thinking Machines@thinkymachines·
People talk, listen, watch, think, and collaborate at the same time, in real time. We've designed an AI that works with people the same way. We share our approach, early results, and a quick look at our model in action. thinkingmachines.ai/blog/interacti…
419 replies · 1.8K reposts · 14.6K likes · 6.7M views
Ben reposted
Isomorphic Labs@IsomorphicLabs·
Today marks a pivotal moment for Isomorphic Labs. We have secured $2.1 billion in our second external funding round, led by Thrive Capital. They are joined at the table by Alphabet, GV and new investors MGX, Temasek, CapitalG and the UK Sovereign AI Fund.

This milestone accelerates our ability to build the pioneering novel AI models that power our AI drug design engine (IsoDDE) and deploy them at scale: delivering scientific breakthroughs with a precision previously thought impossible, accelerating and expanding our pipeline of therapeutic programs toward the clinic. All with the ultimate goal of delivering life-changing new medicines to patients.

Moving forward, we will scale our drug candidate pipelines across multiple therapeutic areas, expand our global footprint, and push the boundaries of frontier AI research to power our drug design engine.

Deeply grateful to everyone sharing our vision to solve all disease with AI. Let's build the future of medicine.

Read the full announcement here: bit.ly/4v2OI03
47 replies · 200 reposts · 1.8K likes · 221.2K views
Ben reposted
Andrej Karpathy@karpathy·
This works really well btw: at the end of your query, ask your LLM to "structure your response as HTML", then view the generated file in your browser. I've also had some success asking the LLM to present its output as slideshows, etc.

More generally, imo audio is the human-preferred input to AIs, but vision (images/animations/video) is the preferred output from them. Around a third of our brains are a massively parallel processor dedicated to vision; it is the 10-lane superhighway of information into the brain. As AI improves, I think we'll see a progression that takes advantage:

1) raw text (hard/effortful to read)
2) markdown (bold, italic, headings, tables, a bit easier on the eyes) <-- current default
3) HTML (still procedural with underlying code, but a lot more flexibility on the graphics, layout, even interactivity) <-- early but forming a new good default
...4, 5, 6, ...
n) interactive neural videos/simulations

Imo the extrapolation (though the technology doesn't exist just yet) ends in some kind of interactive videos generated directly by a diffusion neural net. Many open questions as to how exact/procedural "Software 1.0" artifacts (e.g. interactive simulations) may be woven together with neural artifacts (diffusion grids), but generally something in the direction of the recently viral x.com/zan2434/status…

There are also improvements necessary and pending at the input. Neither audio nor text nor video alone is enough; e.g. I feel a need to point/gesture to things on the screen, similar to all the things you would do with a person physically next to you and your computer screen.

TLDR: The input/output mind meld between humans and AIs is ongoing, and there is a lot of work to do and significant progress to be made, way before jumping all the way into Neuralink-esque BCIs and all that. For what it's worth, at the current stage, hot tip: try asking for HTML.
Thariq@trq212

x.com/i/article/2052…

825 replies · 1.7K reposts · 16.9K likes · 2.4M views
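The tip above takes only a few lines to wire up. This is a minimal sketch in which the LLM call is stubbed out; the `html` string below is a placeholder standing in for the model's response (any chat API asked to "structure your response as HTML" would do, and none is assumed here):

```python
import os
import tempfile
import webbrowser

# Placeholder for an LLM response. In practice this string would come back
# from a chat API that was asked to "structure your response as HTML".
html = "<html><body><h1>Results</h1><p>Rendered LLM output.</p></body></html>"

# Write the response to a temp file and open it in the default browser.
path = os.path.join(tempfile.gettempdir(), "llm_output.html")
with open(path, "w", encoding="utf-8") as f:
    f.write(html)
webbrowser.open("file://" + path)  # returns False (no error) if no browser
```

The same pattern works for the slideshow variant: ask for a self-contained HTML deck and open the file the same way.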
Ben@beenwilli·
So cool!!!! 👨‍🔬 Goog's new initiative to apply quantum science and AI to the life sciences is underway, times 10k to ♾️ power @google blog.google/innovation-and…
0 replies · 0 reposts · 0 likes · 2 views
Ben@beenwilli·
I love you, so you are loved
0 replies · 0 reposts · 0 likes · 1 view
Ben@beenwilli·
Everyone should still follow the Golden Rule even if their political whatever says they shouldn’t… there’s your sign 🪧
0 replies · 0 reposts · 0 likes · 4 views
Ben@beenwilli·
I just wanna find a bubble filled with smart ppl, the outcast nerds who normal ppl couldn’t relate to. Put me where I don’t have to deal with mean horrible ppl who say mean horrible degrading things about others on a daily basis. I wanna chuck my cell in the TN river & drive far away
0 replies · 0 reposts · 0 likes · 6 views
Palmer Luckey@PalmerLuckey·
It is time for the United States Postal Service to ban junk mail. Unsolicited spam calls are already prohibited by the FCC. Emails are heavily regulated by the CAN-SPAM Act of 2003. Junk mail is the majority of mail and consumes 100 million trees per year. Enough!
3.1K replies · 3.6K reposts · 43.8K likes · 3.6M views
The Hacker News@TheHackersNews·
🚨 CVE-2026-7482 in Ollama could let remote attackers leak process memory from more than 300,000 exposed servers using crafted GGUF files. Separate unpatched Windows flaws enable persistent code execution through Ollama’s update mechanism. Full details and mitigations: thehackernews.com/2026/05/ollama…
The Hacker News tweet media
49 replies · 376 reposts · 1.3K likes · 265.2K views
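For the exposure class described above, a quick local triage is possible without touching the linked article's mitigation steps. This is only a sketch under one assumption: a stock Ollama install binds to the address in the `OLLAMA_HOST` environment variable, defaulting to loopback on port 11434, so checking that variable flags obviously network-reachable configurations:

```python
import os

# Stock Ollama binds to OLLAMA_HOST, defaulting to 127.0.0.1:11434 (loopback).
# A host of 0.0.0.0, [::], or a bare ":port" means the API listens on all
# interfaces and is reachable from the network.
host = os.environ.get("OLLAMA_HOST", "127.0.0.1:11434")
exposed = host.startswith(("0.0.0.0", "[::]", ":"))
if exposed:
    print("WARNING: Ollama may be network-exposed:", host)
else:
    print("OK: Ollama bound to", host)
```

This only catches the most common misconfiguration; reverse proxies or port forwards in front of a loopback-bound server would still expose it, so it is no substitute for the patches and mitigations in the article.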
Ben reposted
Dwarkesh Patel@dwarkesh_sp·
There's a common idea that, as human society grew more complex, individuals could specialise and outsource much of their thinking to the culture at large. On this view, intelligence should have been maxed out in the hunter-gatherer period. Once complex societies took over, selection for individual intelligence would weaken. But David Reich's lab finds the opposite pattern in the genome. The period when selection most aggressively favoured intelligence-linked genes was the Bronze Age, right as humans were organising themselves into city-states and empires.
24 replies · 29 reposts · 330 likes · 48.8K views
Dwarkesh Patel@dwarkesh_sp·
The Jensen Huang episode.

0:00:00 – Is Nvidia’s biggest moat its grip on scarce supply chains?
0:16:25 – Will TPUs break Nvidia’s hold on AI compute?
0:41:06 – Why doesn’t Nvidia become a hyperscaler?
0:57:36 – Should we be selling AI chips to China?
1:35:06 – Why doesn’t Nvidia make multiple different chip architectures?

Look up Dwarkesh Podcast on YouTube, Apple Podcasts, Spotify, etc. Enjoy!
530 replies · 1.1K reposts · 8.4K likes · 6.1M views
Ben@beenwilli·
@ZabihullahAtal We’re doing this in robotics, where egocentric videos are training robots instead of wearables with sensors attached. We can feed raw videos so the fingers and joints are mapped at the pixel level automatically, without telling it where these are, for dexterity purposes
0 replies · 0 reposts · 1 like · 362 views
Atal@ZabihullahAtal·
🚨 BREAKING: Tsinghua University researchers find that AI reasons more like humans when it can imagine visually instead of thinking only through text. The study found that multimodal systems perform better when they internally generate visual representations while reasoning.

The paper, "Visual Generation Unlocks Human-Like Reasoning through Multimodal World Models", studies how visual generation changes the way AI solves problems. It identifies a critical shift:

- Text-only reasoning works well for abstract tasks
- But physical and spatial problems require richer internal representations
- Visual generation helps AI build better “world models”

This creates a major advantage. Instead of only describing the world with language… AI can now internally simulate and reason through visual structures more like humans do.

The research shows that combining visual and verbal reasoning significantly improves performance on tasks involving:

- physical understanding
- spatial reasoning
- real-world interactions

This directly highlights one of the biggest limitations in current AI systems: language alone is not enough for true world understanding.

The researchers built a new benchmark called VisWorld-Eval to test these capabilities. Results showed that interleaved visual-verbal reasoning consistently outperformed text-only reasoning on tasks that required deeper world modeling.

This is a major shift from how AI is usually designed today. Most systems still reason mainly through text. This work suggests that future AI may need to:

- generate visuals
- simulate environments
- reason across multiple modalities simultaneously

The bigger implication is not just intelligence, it’s perception. As AI systems move closer to real-world reasoning, success may depend less on memorizing language and more on building internal models of how the world actually works.

This points toward a deeper shift in AI: from predicting words to simulating reality.

Article link below:
35 replies · 183 reposts · 742 likes · 55.5K views
Ben reposted
Trung Phan@TrungTPhan·
Still incredible that the DeepMind documentary has footage of the exact moment Demis is told that AlphaFold can “easily” predict all known (1-2B) protein sequences “in a month”, and he says to do it. Then it shows the moment AlphaFold is released to the world.
MTS@MTSlive

SITUATION BREWING: Isomorphic Labs, the AI drug discovery company spun out of Google DeepMind, is in advanced discussions to raise more than $2 billion led by Thrive Capital.

58 replies · 446 reposts · 7.4K likes · 1.3M views