Juggalos For Context🌴🥥

62.7K posts


@ebarcuzzi

JD, MD, PhD. Noble Prize winner. World's top expert on Freddy-Kruger Effect.

711 Parking Lot · Joined February 2010
2.7K Following · 1.3K Followers
Juggalos For Context🌴🥥 retweeted
PBS News
PBS News@NewsHour·
A year ago this week, Ruben Ray Martinez, a 23-year-old U.S. citizen, was shot and killed by an ICE agent in Texas. But it was not until this past February, 11 months after the shooting, that ICE confirmed its involvement. It's now become the first publicly known instance of ICE fatally shooting a U.S. citizen as part of President Donald Trump’s immigration crackdown. While the Department of Homeland Security says Martinez intentionally rammed his vehicle into an agent, recently released body cam footage — which shows shots being fired into Martinez's vehicle — calls that narrative into question. And one year later, Martinez's family is still searching for answers. @GeoffRBennett spoke with Ruben Martinez's mother, Rachel Reyes, and her attorney, Charles Stam.
18 replies · 898 reposts · 2K likes · 59.2K views
Sen. Bernie Sanders
Sen. Bernie Sanders@SenSanders·
I spoke to Anthropic’s AI agent Claude about AI collecting massive amounts of personal data and how that information is being used to violate our privacy rights. What an AI agent says about the dangers of AI is shocking and should wake us up.
1.2K replies · 2.9K reposts · 18.4K likes · 4M views
NY Post Opinion
NY Post Opinion@NYPostOpinion·
Mamdani’s 15 mph speed limit plan threatens to turn NYC into city that never moves — just like London trib.al/hIZYHmT
63 replies · 45 reposts · 205 likes · 73.1K views
cee
cee@1ovesickkk·
theres always that friend who will do coke molly shrooms acid weed alcohol but wont touch a vape bc its unhealthy
158 replies · 744 reposts · 15.9K likes · 408.2K views
Juggalos For Context🌴🥥 retweeted
Zephyr
Zephyr@zephyr_z9·
Looks like it's the new Xiaomi model. They scaled really, really quickly. Congrats.
Lei Li@_TobiasLee

@zephyr_z9 You might be wrong.

3 replies · 21 reposts · 286 likes · 45.3K views
Juggalos For Context🌴🥥 retweeted
sid 🌹🔆🇨🇦
sid 🌹🔆🇨🇦@lilbabygandhi·
our objectives are clear. we want to get rid of the regime but also we don't care if the current regime stays in power and also we don't want boots on the ground but we also do want american soldiers in iran and
Aaron Rupar@atrupar

Q: Why are we helping Israel prosecute this war if they're going to pursue their own objectives? HEGSETH: We hold the cards. We have objectives. Those objectives are clear. We have allies pursuing objectives as well.

0 replies · 2 reposts · 22 likes · 1K views
Juggalos For Context🌴🥥 retweeted
ChrisO_wiki
ChrisO_wiki@ChrisO_wiki·
1/ Denmark was reportedly preparing for full-scale war with the US over Greenland in January, with military support from France, Germany, and Nordic nations. Elite troops and F-35 jets with live ammunition were sent, and runways were to be blown up to prevent an invasion. ⬇️
952 replies · 2.5K reposts · 11.6K likes · 1.6M views
Juggalos For Context🌴🥥 retweeted
microplastics slime quest 🧫
microplastics slime quest 🧫@facetedcarapace·
every Millennial who took 10000 bong rips while Afroman was playing in the background in 2007
2 replies · 6 reposts · 216 likes · 2.8K views
Juggalos For Context🌴🥥 retweeted
Joe
Joe@JoePostingg·
If you get deep enough into TikTok your feed becomes mostly Chinese commodity wholesalers
209 replies · 848 reposts · 16.9K likes · 776.4K views
Juggalos For Context🌴🥥 retweeted
🎭
🎭@deepfates·
You might think the "agents" thing is just coming for software engineers. Yeah, agents write code, and code sells a bunch of tokens. But most people's work isn't code, it's memos or decks or whatever.

Why this is false: agents can do anything you can do on a computer, and they do it by spending output tokens to write code. The number of keypresses a consultant uses to do a task is not a good measure of the number of tokens an agent would use.

For example: one "deep research" report might be 20 pages of output tokens. But it also might have required more than 20 pages of output tokens to do all the searches, fetches, PDF parsing, and interim summaries that you never even see as the user. It also had to input all the tokens of every document it read while searching, likely more than 20 pages, since the point of the report is to collect and summarize this information. So now we're at 3x tokens for the final output.

That one report is so cheap, and so fast, that now you can do more research than ever. This is valuable! If your business relies on having good information about the world, you can probably find a way to make more money by doing 3 deep research reports and then synthesizing them. More tokens!

Now that you've kicked off three deep research reports, you deserve a little treat, right? So you fire up your browser agent and tell it to go find me some nice linen shirts for summer in my size, and open them in tabs so I can look through. Well, your browser agent has to interact with the browser using some kind of tool, and you know what that tool is? Code, baby. Tokens.

And the tokens are so cheap. You've got to understand: we're spending a lot in the aggregate, but in the moment it is "spend a nickel for 10 minutes of being literally Superman". Like, yes, I'll just keep spending nickels, actually. I will never stop being Superman at that price. All knowledge workers will feel this.

A lot of you already do, you're just hiding it from your boss so you can have more free time while "working from home". And maybe it's better to protect yourselves from Jevons as long as possible, because once you get the bug it's hard to stop. You realize that you could be creating all the businesses and projects and art you ever wanted, and all you've got to do is put your instructions in the right order and put the nickels in the bag.

I would happily bet against Anthropic's revenue spike being a brief "sugar high". So would most capital allocators! That is because they have already seen that software can eat the world. White-collar knowledge work fundamentally changes in the face of agent economics and entirely new forms of knowledge production.

It's happened already in finance: high frequency trading. Now it's happening in tech: high frequency software. Then we will have high frequency science, high frequency governance, high frequency engineering, high frequency medicine, and high frequency law.

Human society is about to be absolutely DDOSed by information at all levels of the stack. Our civilization was never meant to handle this many tokens. If anything can be done on a computer, it will be turned into tokens instead of human actions, and it will happen faster and in parallel.

This stuff works, it is real, it is getting better. It is going to hit economically and socially this year, and nobody is ready. I think it is important to start taking it seriously, instead of finding ever more arbitrary reasons to remain in denial.
Derek Thompson@DKThomp

New newsletter: The transcript of my AI bubble conversation, with @pkedrosky. Feat.:
- Why did the Mag7 equity miracle suddenly stop?
- The growing private credit crisis, explained
- Why the enormous revenue boom from new agents like Claude Code might be a sugar high, in which explosive revenue growth today precedes much slower revenue growth after AI adoption among software engineers peaks
- Where equity value is flowing if it's leaving software
- Why US productivity seems to be rising but actually isn't
derekthompson.org/p/yes-ai-is-a-…

24 replies · 49 reposts · 499 likes · 95.8K views
Juggalos For Context🌴🥥 retweeted
bad_stats 🕜💵🖨️🕣
Beep beep, new Bret Weinstein medical advice has come in.
-> Getting measles is good for you
-> Measles protects you from other diseases
-> Your child won't be in danger of getting harmed by measles as long as you protect them from WiFi radiation and offgassing
59 replies · 47 reposts · 587 likes · 33.6K views
Juggalos For Context🌴🥥 retweeted
Sean
Sean@sean_from_earth·
This is another interesting way to attempt to explain LLMs, but I think we're all leaning a little too hard on insufficiently complex analogies. The problem is that LLMs are a hyperobject, and attempts at reductivism are mostly creating false or highly incomplete narratives.

The last well-known hyperobject, ecosystem disruption, was completely derailed for this reason. We attempted to reduce it to a single, comprehensible analogy — the greenhouse effect (which became global warming) — so we could shape a narrative around it. Unfortunately, it eventually became obvious to many people just how incomplete and simplistic this was, and, instead of seeking to understand the very complex truth, they just decided it was all fake.

With LLMs, they are both non-deterministic and woven so deeply into capitalism and the internet (both also hyperobjects) that they are already forming into something entirely new. And the more we attempt to explain them in terms of old systems, the worse we'll be at anticipating and adapting. Like with other hyperobjects, we'll mostly need to be more precise and not try to analogize them broadly, but instead try to explain them in specific contexts and forms, like "LLM-based models in agentic harnesses, when applied to building software, are a lot like X". It's just easier to say "LLMs" or "AI", though, so we'll mostly keep doing that.

Side note: Interestingly, LLMs give us a glimpse of something that may be able to understand other hyperobjects sufficiently well to generate real insights and wisdom about them.
rohit@krishnanrohit

I find it better to think in terms of LLM Inc, a company rather than a species. A species makes you think of a large group of individuals with autonomy and shared genetics. That's not really true here. Instead, we're trying to design a new, weird intelligence form, a new distributed organization, to act nice to us. It's an LLM Inc.

LLM Inc has the equivalent of corporate personhood. A company all of us can use and give inputs to and get outputs from. Accessible with a simple API. Like, say, Stripe. But unlike Stripe, it has no clear purpose beyond being able to do everything. Like McKinsey. McKinsey via an API.

It's hard. Sometimes the entity, our amazing LLMcK, is so nice that it's completely unreliable. Sometimes not nice at all and completely unreliable. Sometimes somewhat reliable and somewhat nice. In each of these instances we are trying to design the entity such that we can get the responses we want from LLM Inc. There's no way to guarantee exact outcomes. Just like a regular company. People have disputes with Stripe because it messes up their payments or deletes their account, which nobody actually wants. But it happens. And lord knows it happens all the time with McKinsey!

Partially this is because nobody knows what "nice" means sufficiently well to design it, but it's also because organizations are weird and not like a mathematical equation. Companies all have rules and guidelines and norms and external laws and HR and compliance. And still we get the occasional Purdue. Here they're all in the weights and their interplay with the context. So for LLM Inc we try to add all sorts of pressures to make sure these things happen less. There are rules and procedures that we try to instill inside the organization, but just like any organization, nothing gets followed 100%.

But we do see that as we design this new ecology, with this kind of data and those kinds of stories and these kinds of selection pressures that teach it what it should learn, it ends up being more useful and acts in the way that we would want it to. Sometimes the way we would want it to act is also a moving target, and LLM Inc tries its best. It's hard, because people use it for everything. It's a company that acts as a friend and does coding and does research and does role-play and helps the military carry out strikes abroad and everything in between. Like McKinsey.

It's hard to make the human McKinsey behave better, and just so, it's hard to make LLM McK behave better. Unlike human McK, it doesn't have individual AGI components you can yell at or prosecute. It can't be intimidated into changing its policies. It can't change its mind in the abstract. It can't even remember everything it does. The makers of LLM Inc can somewhat do this, though. So they keep an eye on everything it does, try to damp down the bad stuff and learn from the good stuff, and try to push that into the internals of the next upgrade of LLM Inc directly. Obviously this is a lossy process, because it is being done by humans with all of our human foibles. Even when we manage to automate part of it, it will still be lossy, because the automation will be taught by us.

The way we keep companies on the straight and narrow is by extensive internal and external monitoring. We developed institutions over centuries as we learned each way they go wrong and added fixes. Sometimes the fixes caused more problems down the road, and we fixed those too. All the while we kept using the machine, so that when the next fix is needed we know more and we have more. We used the very form of progress as the way to create and enforce safety. For LLM Inc we will need to do the same thing.

It won't look the same, because the mechanism by which laws, regulatory institutions, compliance, and norms affect it is different. But it's going to be no less important and no less effective. That's real AI safety.

1 reply · 2 reposts · 12 likes · 1.1K views
Juggalos For Context🌴🥥 retweeted
they/them might be giants ☭
they/them might be giants ☭@babadookspinoza·
*decades where nothing happens* wow this sucks. *weeks where decades happen* wow this is worse.
9 replies · 872 reposts · 7.3K likes · 64.5K views