Wolfram Stigmatica
@HunterKinder

4.1K posts

man of constant gumption, folk experientialist, bayou arbitre ikahawa.eth ikofi.eth he/li/sô

Greater St. Louis · Joined April 2011
2.9K Following · 425 Followers
Wolfram Stigmatica retweeted
David Sirota@davidsirota·
Destroying the @InternetArchive's @WayBackMachine would be the equivalent of the burning of the Library of Alexandria - one of the worst losses of knowledge in history. Media giants are now threatening to do this. We can't let this happen. Pass it on.
435 replies · 14.1K reposts · 31.8K likes · 961.7K views
Wolfram Stigmatica retweeted
n x d@nxd1979·
what every day feels like now
171 replies · 18.2K reposts · 105.9K likes · 3.3M views
Wolfram Stigmatica retweeted
CRONK 🎩 Crypto Reporter@CryptoCronkite·
Monero to $10K would retire every criminal thus ending crime worldwide
83 replies · 112 reposts · 2K likes · 92.9K views
ₕₐₘₚₜₒₙ@hamptonism·
I’m seeing a lot of people feeling that there was a universal period, a few years ago, that has set back the social skills of society in general. Thoughts? Or is this cope?
63 replies · 35 reposts · 4.1K likes · 322.4K views
Wolfram Stigmatica retweeted
Scott Wessman@scottew·
This Mamdani socialism thing is quite scary, for example I heard he has already nationalized 10% of Intel and extorted a 25% cut of all chip sales to China by Nvidia. What's next, redistribution payouts from regular taxpayers to party-favored farmers?
217 replies · 1.9K reposts · 15.9K likes · 448.9K views
Wolfram Stigmatica retweeted
Zcash 🛡️@Zcash·
"Privacy allows people to think freely. There's the space inside your head where you can basically think whatever you want and be fairly confident that no one is reading your mind. You can reason about the world and reason about what should happen in the world." @feministPLT #zcash $ZEC #privacyisnormal
61 replies · 188 reposts · 1.4K likes · 4.7M views
Wolfram Stigmatica retweeted
WikiLeaks@wikileaks·
Hollywood has been planting the mental seeds for the war in Iran and the massacre of Palestinians for years
137 replies · 2.6K reposts · 6.1K likes · 242K views
Wolfram Stigmatica@HunterKinder·
McDonald's coffee tastes like they used black-eyed peas to brew with.
0 replies · 0 reposts · 0 likes · 59 views
Wolfram Stigmatica@HunterKinder·
The state, better yet, western civ, still adheres us in our Apology for the corruption of youth.
0 replies · 0 reposts · 2 likes · 21 views
nom@14JUN1995·
recommend me ur fav books on/about poetry :)
57 replies · 18 reposts · 285 likes · 30.9K views
Wolfram Stigmatica@HunterKinder·
@sama ever thought about just releasing an older model "for free?"
0 replies · 0 reposts · 0 likes · 12 views
Sam Altman@sama·
deep research out for chatgpt plus users! one of my favorite things we have ever shipped.
1.6K replies · 987 reposts · 18.6K likes · 1.9M views
Wolfram Stigmatica retweeted
BuccoCapital Bloke@buccocapital·
Software developer job postings over the last five years. Hard to find a crazier chart.
BuccoCapital Bloke tweet media
434 replies · 1K reposts · 12.8K likes · 2.8M views
Wolfram Stigmatica retweeted
Curt Jaimungal@TOEwithCurt·
The Free Energy Principle: a 'theory of everything' that includes not only brains and societies, but perhaps even the universe itself… It's known for being considerably convoluted, but the principles underlying it are actually straightforward. If the FEP is right, is your entire reality a 'controlled hallucination'? What does that even mean? And how does this relate to entropy? 1/13
Curt Jaimungal tweet media
27 replies · 47 reposts · 267 likes · 32.3K views
Wolfram Stigmatica@HunterKinder·
🎶 Come along in the wilderness!? 🎶
Wolfram Stigmatica tweet media
0 replies · 0 reposts · 0 likes · 28 views
Wolfram Stigmatica retweeted
Andrew Ng@AndrewYNg·
The buzz over DeepSeek this week crystallized, for many people, a few important trends that have been happening in plain sight: (i) China is catching up to the U.S. in generative AI, with implications for the AI supply chain. (ii) Open weight models are commoditizing the foundation-model layer, which creates opportunities for application builders. (iii) Scaling up isn’t the only path to AI progress. Despite the massive focus on and hype around processing power, algorithmic innovations are rapidly pushing down training costs.

About a week ago, DeepSeek, a company based in China, released DeepSeek-R1, a remarkable model whose performance on benchmarks is comparable to OpenAI’s o1. Further, it was released as an open weight model with a permissive MIT license. At Davos last week, I got a lot of questions about it from non-technical business leaders. And on Monday, the stock market saw a “DeepSeek selloff”: The share prices of Nvidia and a number of other U.S. tech companies plunged. (As of the time of writing, some have recovered somewhat.) Here’s what I think DeepSeek has caused many people to realize:

China is catching up to the U.S. in generative AI. When ChatGPT was launched in November 2022, the U.S. was significantly ahead of China in generative AI. Impressions change slowly, and so even recently I heard friends in both the U.S. and China say they thought China was behind. But in reality, this gap has rapidly eroded over the past two years. With models from China such as Qwen (which my teams have used for months), Kimi, InternVL, and DeepSeek, China had clearly been closing the gap, and in areas such as video generation there were already moments where China seemed to be in the lead. I’m thrilled that DeepSeek-R1 was released as an open weight model, with a technical report that shares many details. In contrast, a number of U.S. companies have pushed for regulation to stifle open source by hyping up hypothetical AI dangers such as human extinction. It is now clear that open source/open weight models are a key part of the AI supply chain: Many companies will use them. If the U.S. continues to stymie open source, China will come to dominate this part of the supply chain and many businesses will end up using models that reflect China’s values much more than America’s.

Open weight models are commoditizing the foundation-model layer. As I wrote previously, LLM token prices have been falling rapidly, and open weights have contributed to this trend and given developers more choice. OpenAI’s o1 costs $60 per million output tokens; DeepSeek R1 costs $2.19. This nearly 30x difference brought the trend of falling prices to the attention of many people. The business of training foundation models and selling API access is tough. Many companies in this area are still looking for a path to recouping the massive cost of model training. Sequoia’s article “AI’s $600B Question” lays out the challenge well (but, to be clear, I think the foundation model companies are doing great work, and I hope they succeed). In contrast, building applications on top of foundation models presents many great business opportunities. Now that others have spent billions training such models, you can access these models for mere dollars to build customer service chatbots, email summarizers, AI doctors, legal document assistants, and much more.

Scaling up isn’t the only path to AI progress. There’s been a lot of hype around scaling up models as a way to drive progress. To be fair, I was an early proponent of scaling up models. A number of companies raised billions of dollars by generating buzz around the narrative that, with more capital, they could (i) scale up and (ii) predictably drive improvements. Consequently, there has been a huge focus on scaling up, as opposed to a more nuanced view that gives due attention to the many different ways we can make progress. Driven in part by the U.S. AI chip embargo, the DeepSeek team had to innovate on many optimizations to run on less-capable H800 GPUs rather than H100s, leading ultimately to a model trained (omitting research costs) for under $6M of compute. It remains to be seen if this will actually reduce demand for compute. Sometimes making each unit of a good cheaper can result in more dollars in total going to buy that good. I think the demand for intelligence and compute has practically no ceiling over the long term, so I remain bullish that humanity will use more intelligence even as it gets cheaper.

I saw many different interpretations of DeepSeek’s progress here on X, as if it were a Rorschach test that allowed many people to project their own meaning onto it. I think DeepSeek-R1 has geopolitical implications that are yet to be worked out. And it’s also great for AI application builders. My team has already been brainstorming ideas that are newly possible only because we have easy access to an open advanced reasoning model. This continues to be a great time to build! [Original text: deeplearning.ai/the-batch/issu… ]
282 replies · 1K reposts · 4.3K likes · 616.8K views
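As an aside, the "nearly 30x" figure in the thread above is easy to verify with quick arithmetic. The prices are the ones quoted in the post (USD per million output tokens), not independently checked here:

```python
# Price comparison quoted in the thread above (USD per 1M output tokens).
# These numbers come from the post itself and are not independently verified.
O1_PRICE_PER_M = 60.00    # OpenAI o1, as quoted
R1_PRICE_PER_M = 2.19     # DeepSeek-R1, as quoted

ratio = O1_PRICE_PER_M / R1_PRICE_PER_M
print(f"o1 costs about {ratio:.1f}x as much as R1 per output token")
# → o1 costs about 27.4x as much as R1 per output token
```

So the exact ratio is about 27.4x, which the post rounds up to "nearly 30x".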
Wolfram Stigmatica retweeted
Anna K. Winters@tenshi_anna·
the current Nietzschean revival among young men is so manifestly demonic that I don’t know how people tolerate it. at least read Bataille or something. all these BAPists are basically total brutish animalistic narcissists with hardly any capacity for thought who want to be elite, but this is a total joke. if you think the decisive factor for humanity is superior genes, you do not yourself have superior genes, because that is a vulgar animalistic understanding of human spirit. BAP’s reading of Plato is like Zeno of Citium’s stupid materialist reduction of Platonic doctrine which St. Augustine polemicizes against in Against the Academicians, pointing out that the whole of Academic Skepticism was just a defensive posture against the idiot Stoics. the Stoic revival among similarly stupid and narcissistic young men, “broicism,” is related intimately to this BAPist idiocy. I hope these people grow up and get right with God.
407 replies · 97 reposts · 1.7K likes · 936.4K views