scikityearn
@scikityearn
128 posts

Joined February 2023
534 Following · 106 Followers
scikityearn
scikityearn@scikityearn·
@EricTopol Because fixing the root cause of limited lifespan, aging, will obviously also extend healthspan. As in the super-centenarians: why ignore them?
Eric Topol
Eric Topol@EricTopol·
Why is there such obsession with extending lifespan when the bigger issue is that average healthspan is 65 years and there are no data (except in super-centenarians) that longer lifespan = longer healthspan (known as compression of morbidity)?
scikityearn
scikityearn@scikityearn·
@gabriberton Makes sense for different modalities to have different optimal encoding paths. We will use the same model for both tasks, though. There probably needs to be task conditioning on the encoder, like how we ourselves decide how closely to look at something depending on what we're doing.
Gabriele Berton
Gabriele Berton@gabriberton·
Cool paper from Meta suggesting that future MLLMs will be Native Multimodal Models (NMM), hence no vision encoders anymore. But I disagree; I actually think we'll go in the other direction (what? more encoders? yes! read on...). All you need to know about the future of MLLMs 🧵
Weiming Ren@wmren993

1/ 🚀 We’re excited to share Tuna-2: Pixel Embeddings Beat Vision Encoders for Multimodal Understanding and Generation! Tuna-2 is a native unified multimodal model that supports visual understanding, text-to-image generation, and image editing directly from pixel embeddings. 🐟✨

📄 Paper: arxiv.org/abs/2604.24763
🌐 Project: tuna-ai.org/tuna-2
💻 Code: github.com/facebookresear…

Most unified multimodal models still rely on pretrained vision encoders, which add architectural complexity and can create representation mismatches between understanding and generation. Tuna-2 asks a simple question: Do we still need vision encoders? 👀

Our answer is No! Tuna-2 has a completely encoder-free architecture, where images are processed directly by a unified transformer together with text tokens. Take a glimpse at what our model can generate ↓ 🎨🖼️
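The "pixel embeddings, no vision encoder" idea can be sketched in a few lines: cut the raw image into patches and linearly project each patch straight into the transformer's token space. This is a generic ViT-style patchify sketch, not Tuna-2's actual code; all shapes and sizes here are invented for illustration.

```python
import numpy as np

def pixel_embed(image, patch=16, d_model=64, rng=np.random.default_rng(0)):
    # Cut the image into non-overlapping patch x patch tiles and flatten each.
    h, w, c = image.shape
    tiles = (image.reshape(h // patch, patch, w // patch, patch, c)
                  .transpose(0, 2, 1, 3, 4)
                  .reshape(-1, patch * patch * c))
    # Single linear projection into model width (learned in practice;
    # random here just to show the shapes).
    proj = rng.standard_normal((patch * patch * c, d_model))
    return tiles @ proj  # one "image token" per patch

img = np.zeros((32, 32, 3))
tokens = pixel_embed(img)  # 4 patches of 16x16x3 -> 4 tokens of width 64
```

The resulting image tokens can then be concatenated with text token embeddings and fed to one unified transformer, which is the architectural point the post is making.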

scikityearn
scikityearn@scikityearn·
@krishnanrohit @pmarca New bottlenecks, other than coding labor, will constrain deploying a fully augmented workforce effectively: ideas, deployment, hardware, coordination with other firms. Perhaps the amount of high-level management for new business lines; execs are constrained, but not by headcount.
rohit
rohit@krishnanrohit·
@pmarca No no they're all saying it. I'm asking why at a time when finally those overstaffed folks can be deployed productively with AI somehow there's a massive failure of imagination of what they could even do, which is maybe one reason why layoffs are easier. Seems odd.
scikityearn
scikityearn@scikityearn·
@tjparker A medical board gatekeeping and hand-wringing? Everyone.
TJ Parker⚡️
TJ Parker⚡️@tjparker·
Wow, who could’ve seen this coming..
scikityearn
scikityearn@scikityearn·
@amanwon You are in the same business. That’s a good thing! I’m a customer of both. Letting people take control of their health is the main benefit. Scaling makes it affordable and helps more people.
Aman 🧙‍♂️
Aman 🧙‍♂️@amanwon·
Respectfully, the Netflix analogy is the problem, not the pitch. Netflix won by being the best distribution layer for a commodity. The entire premise of modern telehealth is that medicine should work the same way: more drugs, faster, cheaper, with fewer gates. That framing is exactly how the category ended up looking less like healthcare and more like a pill mill with good design.

Our patients at Healthspan don’t need a bigger GLP-1 catalog. They need someone looking at their labs, their training, their protein, their sleep and titrating an evidence-based protocol accordingly. That’s not a distribution problem. It’s a care problem.

And this post is a pretty blunt admission of where the industry’s priorities actually sit. Adding more SKUs is easy. Looking after a patient comprehensively is hard. Most companies in the space have quietly picked the former and dressed it up as the latter.
andrewdudum@AndrewDudum

As of this morning, providers on our platform can now send prescriptions for Zepbound® vials and KwikPen®, as well as Foundayo™, to the LillyDirect® pharmacy and access self-pay pricing for our customers because of an expansion in our platform’s functionality.

In many ways, today reminds me of Netflix’s early days, when everyone talked about whether they would have the latest blockbuster in their catalog. As if Netflix’s success depended on its ability to become the distribution channel for a single film. They were missing the forest for the trees: Netflix wasn’t just renting DVDs. It was changing consumer behavior by ruthlessly prioritizing choice and inventing new pathways to the things people wanted the most.

By offering a full range of FDA-approved GLP-1s on our platform, we’re similarly giving our customers more choices through all the tools we have available – and we’ll continue to push here on behalf of everyone who depends on us for their care. Read more on how we’re making this possible, including important info, here: news.hims.com/newsroom/full-…

scikityearn
scikityearn@scikityearn·
@mattparlmer Makes sense; pretty certain it happens eventually, but it seems like more of a chip-competition question than a lab-model-monopoly one.
mattparlmer 🪐 🌷
mattparlmer 🪐 🌷@mattparlmer·
@scikityearn The technology will be productive and useful but I think ppl underrate the risk of a serious 2008 style financial collapse if Nvidia and Nvidia-correlated assets take a dip, there’s immense leverage out there rn and it’s only growing month over month
mattparlmer 🪐 🌷
mattparlmer 🪐 🌷@mattparlmer·
This seems like the obvious direction for AI use, and I think this is a good direction for business and retail users, but I worry about the macroeconomic implications of this scenario bc this would be a world in which many of the current big lab business plans fall apart
@trydotworks

2 years from now: We will be routing between several cloud models and several local models based not only on application, but based on tasks within that application, and requirements for latency, performance, cost, and capabilities. So there will be a pressure on costs and also for providers to provide specialized models.
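The routing scenario in the quoted post can be sketched as a small registry lookup: pick, per task, the cheapest model that meets a capability floor and a latency budget. The model names, latencies, and prices below are invented for illustration, not real offerings.

```python
# Hypothetical model registry: name, expected latency, cost per call,
# and a rough capability score. All numbers are illustrative.
MODELS = [
    {"name": "local-small",    "latency_ms": 40,   "cost": 0.0,   "capability": 2},
    {"name": "cloud-fast",     "latency_ms": 300,  "cost": 0.002, "capability": 5},
    {"name": "cloud-frontier", "latency_ms": 1200, "cost": 0.03,  "capability": 9},
]

def route(min_capability, max_latency_ms):
    # Cheapest model satisfying both the capability floor and the
    # latency budget; None if no registered model qualifies.
    ok = [m for m in MODELS
          if m["capability"] >= min_capability
          and m["latency_ms"] <= max_latency_ms]
    return min(ok, key=lambda m: m["cost"]) if ok else None
```

Under this sketch, a latency-sensitive autocomplete task routes to the local model while a hard reasoning task routes to a cloud model, which is exactly the cost pressure on providers the post predicts.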

scikityearn reposted
scikityearn
scikityearn@scikityearn·
But if, thereafter, reason should fail, and science should find no answers, but should multiply knowledge and power without improving conscience or purpose; if all utopias should brutally collapse in the changeless abuse of the weak by the strong
scikityearn
scikityearn@scikityearn·
@astupple Unless it’s some hooligans occasionally hopping the fence
Aaron Stupple
Aaron Stupple@astupple·
If we approached a planet with an inferior tech stack, we’d be good and goddamned careful about how we notified them of our presence. We sure as shit wouldn’t drip out vague sightings that could stoke conspiracy theories and otherwise cause all sorts of problems.
The Free Press@TheFP

Michael Shermer, the founder of ‘Skeptic’ magazine, makes the case for why President Trump’s promised disclosure of UFO files probably won’t amount to much. thefp.com/p/what-does-am…

scikityearn
scikityearn@scikityearn·
@waitbutwhy That we put through hell on earth. Ban factory farming.
Tim Urban
Tim Urban@waitbutwhy·
2,400 chickens every second
mattparlmer 🪐 🌷
mattparlmer 🪐 🌷@mattparlmer·
So what if it’s 2026Q4 and the Chinese open weight models are only six weeks behind American frontier model releases rather than six months, and with similar cost advantage to what they have today, does anybody actually have a plan for that?
Guive Assadi
Guive Assadi@GuiveAssadi·
Is there any advice that is (1) not obvious to the average American, (2) broadly applicable (not, e.g., a treatment for a specific illness), (3) increases life expectancy by > 4 months? The only thing I can think of is taking statins. So learning about health is pretty useless.
scikityearn
scikityearn@scikityearn·
@JuliusCasio @SamoBurja We’re a ways away from the environment we evolved to reproduce in and haven’t evolved to fit this one yet
Fynn
Fynn@JuliusCasio·
@SamoBurja how is this picture motivated? Obviously fertility itself doesn’t want anything. So what is the real attractor here?
Samo Burja
Samo Burja@SamoBurja·
Now of course a China of 400 million people with the GDP of the United States and the age structure of Germany is still a world power. However, I expect we will see the Party eventually embrace mass immigration from Southeast Asia. It more and more seems "first world" is a transient phase.
Benjamin Wolf 🇺🇦@benbawan

If current demographic trends hold, Europe (the entire continent) may soon record more live births than all of China. That would probably be the first time since the Qing dynasty 300 years ago, possibly even the first time in history.

scikityearn
scikityearn@scikityearn·
@gabriberton Sort of makes sense to me; in a sense it is “chunking” its thought process on the problems. Wonder if it could be applied for test-time compute on a single problem.
Gabriele Berton
Gabriele Berton@gabriberton·
Chat, is this real? Doesn't make much sense to me, especially the screenshot below: bad training data, good results?!? My guess is that it only works on a small subset of models/datasets with very narrow hyperparams, unless I'm missing something.
Bo Wang@BoWang87

Apple Research just published something really interesting about post-training of coding models. You don't need a better teacher. You don't need a verifier. You don't need RL. A model can just… train on its own outputs. And get dramatically better.

Simple Self-Distillation (SSD): sample solutions from your model, don't filter them for correctness at all, fine-tune on the raw outputs. That's it.

Qwen3-30B-Instruct: 42.4% → 55.3% pass@1 on LiveCodeBench. +30% relative. On hard problems specifically, pass@5 goes from 31.1% → 54.1%. Works across Qwen and Llama, at 4B, 8B, and 30B. One sample per prompt is enough. No execution environment. No reward model. No labels.

SSD sidesteps this by reshaping distributions in a context-dependent way — suppressing distractors at locks while keeping diversity alive at forks. The capability was already in the model. Fixed decoding just couldn't access it.

The implication: a lot of coding models are underperforming their own weights. Post-training on self-generated data isn't just a cheap trick — it's recovering latent capacity that greedy decoding leaves on the table.

paper: arxiv.org/abs/2604.01193
code: github.com/apple/ml-ssd
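The SSD recipe as described in the post (one unfiltered sample per prompt, then supervised fine-tuning on the raw outputs) is simple enough to sketch. This is a structural sketch only: `sample_solution` and `fine_tune` are hypothetical stand-ins for real decoding and training code, not the paper's implementation.

```python
def sample_solution(model, prompt):
    # Stand-in for decoding one completion from the current model.
    # A real implementation would sample with temperature > 0.
    return model["responses"].get(prompt, "")

def fine_tune(model, dataset):
    # Stand-in for supervised fine-tuning on (prompt, completion) pairs.
    model["train_set"] = list(dataset)
    return model

def self_distill(model, prompts):
    # The key claim of SSD: one sample per prompt, no correctness
    # filtering, no reward model, no labels -- just train on raw outputs.
    dataset = [(p, sample_solution(model, p)) for p in prompts]
    return fine_tune(model, dataset)
```

The interesting part is what is absent: no verifier, no execution environment, and no filtering step between sampling and fine-tuning.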

scikityearn
scikityearn@scikityearn·
@anesmithbeck Beautiful, honest writing. You’re past the hardest part and have even more joyful parts ahead!
RYAN SΞAN ADAMS - rsa.eth 🦄
Here's what Vitalik has said about his previous funding of FLI, he seems to be somewhat distancing himself from their recent political approach: x.com/VitalikButerin… But I do think going full decel forces you into an anti-freedom position. FLI pushing liability for open weight models, no open-source carve-outs...absolutely stifles self-sovereign AI. It's not great.
vitalik.eth@VitalikButerin

There are often posts mentioning that I donated a very large amount of funds to @FLI_org years ago and connecting me to various policy actions that they take. I thought I would make clear the record both on the nature of my connection to them, and on similarities and differences between my approach to the AI risk topic and theirs.

First, what happened:

* In 2021, I received a large amount of SHIB and other dog coins, seemingly because the creators wanted to use "Vitalik owns half our supply" as a marketing tactic and be "the next Dogecoin"
* The tokens quickly rose in value, and at the peak the "book value" of those tokens was over a billion dollars
* I felt that surely this was a bubble, it would pop quickly and the price would drop massively, and so I scrambled to retrieve the funds from my cold wallet (this included things like calling my stepmother in Canada and asking her to go into my closet and read out a 78-digit number, and then adding it to a different 78-digit number transcribed from a paper in my backpack). I sold what I could for ETH and donated to relatively more "normal" things (eg. $50m to GiveWell). But then I was still left with lots of SHIB
* I sent half to @CryptoRelief_ (half of _those_ funds ended up supporting Balvi, and the other half is being spent by @sandeep and team on improving medical infrastructure in India). I sent the other half to FLI
* At the time, they presented me with a comprehensive roadmap that focused on improving all major existential risks (bio, nuclear, AI...) as well as general pro-peace and pro-epistemics (ie. helping us know the truth in adversarial contexts) initiatives
* I thought that surely they would cash out at most $10-25M, because there's no way the SHIB market is deep enough to cash out more
* Instead, they managed to cash out ... something like $500M (same with cryptorelief)
* Since then, FLI had an internal pivot by which they started focusing on cultural and political action as a primary method, quite different from the original approach.
* Their justification is that the situation has changed greatly since 2021, AGI is coming very soon, and their pivot is needed to affect the world fast enough, and to counteract the lobbying warchests of large AI companies.
* My worry is that large-scale coordinated political action with big money pools is a thing that can easily lead to unintended outcomes, cause backlashes, and solve problems in a way that is both authoritarian and fragile, even if it was not originally intended that way.
* For example, their primary approach to biosafety has been "how do we put guards into bio-synthesis devices and AI models so that they refuse to create bad stuff?". I view this as a very fragile solution: there are many ways to jailbreak, fine-tune or otherwise get around such restrictions. Ultimately, putting all your eggs into this strategy can lead to very dark places like "let's ban open-source AI" and then "let's support one good-guy AI company to establish global dominance and don't let anyone else get to the same level". Approaches like this VERY EASILY backfire: they make the rest of the world your enemy.
* More generally, historical experience tells us that when regulations are made on dangerous tech, "national security" orgs (today, realistically incl Palantir) inevitably get exempted, and in fact those very same orgs are a major source of risk (see: pandemic lab leaks typically coming from government programs). This is something I worry about.
* My approach on these topics has been centered around d/acc: build the tech (eg. air filtering, early detection, continuous passive PCR-quality air testing, prophylactics etc for pandemics, greatly improving software and hardware verifiability for cybersecurity...) to help us survive a much higher-capability world safely, and open-source the tech so that the entire world can freely incorporate it.
* This is the sort of thing that the ~$40m I recently allocated is for. A big part of that pot is for secure hardware, which is good both for Ethereum users who do not want to lose their coins, and for humanity if we want ubiquitous computer chips to not be hackable (incl by AI) and spy on us. If I had the FLI warchest and tweet-chest, I would use it to do more of those things.
* I have shared my difference in perspective with them on several occasions.
* At the same time, I've also been heartened by many of @FLI_org 's recent moves. I think the "pro-human AI declaration" ( humanstatement.org ) is a very good philosophical path forward. It unites conservatives, progressives and libertarians, America, Europe and China, people worried about unemployment, surveillance, psychosis and paperclip doom, atheists and the Pope. They have also been researching ways to avoid concentration of power resulting from AI. These things are all good.

I wish them best of luck on these positive initiatives, and hope that they operate with the caution and wisdom that their task deserves.

scikityearn
scikityearn@scikityearn·
@pmarca It was pretty crazy reading his recent post proposing that everyone align on good-faith agreements not to develop AI. He created the second-largest blockchain, whose whole point is that such agreements are insufficient consensus mechanisms when a lot is at stake.
Yun-Ta Tsai
Yun-Ta Tsai@yunta_tsai·
@scikityearn Dictionary size is much smaller than humans'. We have enough vocabulary to expand knowledge and pass it down to generations.
Yun-Ta Tsai
Yun-Ta Tsai@yunta_tsai·
One big difference between how animals sense versus robots is that animals can physically perform compression and frequency analysis simultaneously, sending only minimal neural spikes to the brain, whereas in robotics these steps are done sequentially. Take our retina and cochlea, for example: the photoreceptor cells and cochlear hair cells sample photons and sound waves in both space and time, then perform frequency analysis such that the neural spikes essentially send a truncated Fourier transform signal to the brain. This process is extremely low-power and efficient, such that we can constantly use our ears and eyes without draining too much battery.

Humans take the compression steps even further with language, which separates us from the rest of the animals. Language enables long-form context, which allows us to make sense of what we sense, instead of just reacting. Language allows us to compress sensing into semantic form, agnostic to sensor type—eyes, touch, or hearing—leaving only the important parts and storing them to form a reasoning chain. Thus, as we grow and sense more, language helps us develop wisdom despite its extremely lossy nature.

Yet, is our language the optimal form of semantic compression? It developed in a way that was limited by our communication bandwidth. Imagine you could increase the bandwidth beyond dial-up speed; then our communication protocol might look very different. Maybe we could communicate in tensors instead of letters. Who knows?
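The "truncated Fourier transform" idea is easy to demonstrate with a toy example: transform a signal, keep only the largest-magnitude frequency coefficients, and reconstruct from those. The signal and the number of kept coefficients below are illustrative.

```python
import numpy as np

def truncated_fourier(signal, keep):
    # FFT, zero out all but the `keep` largest-magnitude coefficients,
    # then invert back to the time domain.
    coeffs = np.fft.fft(signal)
    small = np.argsort(np.abs(coeffs))[:-keep]  # indices of the discarded coeffs
    coeffs[small] = 0.0
    return np.fft.ifft(coeffs).real

t = np.linspace(0, 1, 256, endpoint=False)
# A dominant 5 Hz tone plus a weaker 20 Hz tone: only 4 nonzero
# frequency bins in total (two conjugate pairs).
x = np.sin(2 * np.pi * 5 * t) + 0.2 * np.sin(2 * np.pi * 20 * t)
x4 = truncated_fourier(x, keep=4)  # reconstructs the signal essentially exactly
x2 = truncated_fourier(x, keep=2)  # keeps only the dominant 5 Hz tone
```

Sending 4 complex coefficients instead of 256 samples is the kind of drastic, mostly-lossless compression the post attributes to early sensory processing.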