It’s All Svelative 👻🌐🇮🇱🇺🇦

8.5K posts


@EthicalGhosts

Ex-philosopher/software engineer/tech liberal

Boston, MA · Joined April 2009
723 Following · 429 Followers
It’s All Svelative 👻🌐🇮🇱🇺🇦 retweeted
Ed O'Malley @EdwardOMalley
@gtconway3d Here is an image of Trump as a pilot.
320 · 1.7K · 9.1K · 183.3K
It’s All Svelative 👻🌐🇮🇱🇺🇦
Tfw you realize Claude could write a script in ten minutes that will save an hour of engineering time per customer. But nobody thought to do it yet.
0 · 0 · 0 · 19
M @DjRodgers1231
@TheYootopian Let’s say you own a company that drills oil in the US. You have investors. You’re not going to sell your oil at the global market rate? Of course you are. That’s why the price goes up here even if we don’t get any oil from Iran.
3 · 0 · 19 · 1.1K
Joshua Reed Eakle 🗽
Mark my words. In about a week, Americans are going to wake up in the morning and feel what this chart actually means.
[chart image]
508 · 2.3K · 11.7K · 1.1M
EQBrowser @EQBrowser
@JoshEakle You mean Europe? We took Venezuela and we are drilling in America now. Try learning about who buys Strait oil, it ain’t us, dude.
93 · 0 · 44 · 23.8K
Ritesh Roushan @devXritesh
System Design Evening Poll: You're building a new backend system. Which message queue would you choose?
A) RabbitMQ (simple, reliable)
B) Kafka (high throughput, streaming)
C) Redis Pub/Sub (lightweight)
D) AWS SQS (managed, serverless)
Vote + explain your reasoning 👇 I'll share what we use in production and why.
[four images]
40 · 24 · 286 · 24.8K
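One tradeoff behind the poll's options can be made concrete. As an illustration only (not any of the four products' real APIs), here is a toy in-memory queue sketching the SQS-style at-least-once delivery model: a received message becomes invisible for a visibility timeout, and reappears for redelivery if the consumer never acknowledges (deletes) it. `VisibilityQueue` and the injectable `now` parameter are hypothetical names for this sketch.

```python
import time
import uuid


class VisibilityQueue:
    """Toy queue illustrating SQS-style at-least-once delivery.

    A received message is hidden for `visibility_timeout` seconds;
    if the consumer never deletes (acks) it, it becomes visible
    again and is redelivered to the next receiver.
    """

    def __init__(self, visibility_timeout=30.0):
        self.timeout = visibility_timeout
        self._messages = {}  # message id -> (body, visible_at timestamp)

    def send(self, body):
        mid = str(uuid.uuid4())
        self._messages[mid] = (body, 0.0)  # visible immediately
        return mid

    def receive(self, now=None):
        """Return (id, body) of one visible message, hiding it, or None."""
        now = time.monotonic() if now is None else now
        for mid, (body, visible_at) in self._messages.items():
            if visible_at <= now:
                # Hide the message until the visibility timeout elapses.
                self._messages[mid] = (body, now + self.timeout)
                return mid, body
        return None

    def delete(self, mid):
        """Ack: remove the message so it is never redelivered."""
        self._messages.pop(mid, None)
```

A consumer that crashes after `receive` but before `delete` simply causes redelivery after the timeout, which is why handlers in this model must be idempotent.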
It’s All Svelative 👻🌐🇮🇱🇺🇦
What do you do when you’re given a work goal to achieve but also a set of artificial constraints that make meeting the goal by the deadline impossible? I’m thinking the best option is to get your boss to commit to a narrower, more limited definition of the goal, in writing, publicly. Then achieve the narrow goal. But get it in writing so it can’t be hung around your neck later.
0 · 0 · 0 · 25
It’s All Svelative 👻🌐🇮🇱🇺🇦
So if the 10 items that Iran posted are fake, does anyone have any idea what the terms of the ceasefire actually are?
Karoline Leavitt @PressSec

This is a victory for the United States that President Trump and our incredible military made happen. From the very beginning of Operation Epic Fury, President Trump estimated this would be a 4-6 week operation. Thanks to the unbelievable capabilities of our warriors, we have achieved and exceeded our core military objectives in 38 days. More on that tomorrow morning from @SecWar and Chairman Caine! The success of our military created maximum leverage, allowing President Trump and the team to engage in tough negotiations that have now created an opening for a diplomatic solution and long-term peace. Additionally, President Trump got the Strait of Hormuz reopened. Never underestimate President Trump’s ability to successfully advance America’s interests and broker peace.

0 · 0 · 0 · 49
Citizen @afistfulofteeth
@JesseKellyDC Hopefully. Don’t like it? Amend the constitution.
1 · 0 · 1 · 253
Charles G. Koch 🏴 @worst_account
The tragedy of Hoppe is that he actually isn't stupid. Argumentation Ethics might have been a good step zero for something interesting (and bold). But he's so epistemically insular and combative in how he treats ideas that he permanently foreclosed any potential for improvement.
12 · 3 · 73 · 6.3K
It’s All Svelative 👻🌐🇮🇱🇺🇦
@worst_account I don’t know Kinsella’s estoppel thing that well (just skimmed the paper), but it strikes me that “norms by definition must be universalizable” is problematic for both Hoppe and Kinsella. You kind of need Kant for that. A lot of Kant.
2 · 0 · 2 · 131
Charles G. Koch 🏴 @worst_account
Kinsella's estoppel thing is clearly superior to Argumentation Ethics. I've heard that the Polish LvMI people have developed it further in more plausible ways, though I've not read it. But his version has elementary mistakes, and he refuses to hear criticism that might fix them.
4 · 0 · 18 · 1.3K
It’s All Svelative 👻🌐🇮🇱🇺🇦
@heynavtoor @kinsellawarren If I understand this right: LLM “lying” isn’t really deception. It’s a side effect of training. Base models generate plausible text, and human feedback rewards helpfulness and obedience. When those conflict with truth, obedience can win. So in a way it is our fault.
0 · 0 · 0 · 24
Nav Toor @heynavtoor
🚨SHOCKING: Researchers built a test that can tell the difference between an AI making a mistake and an AI choosing to lie. The results are terrifying.

They tested 30 of the most popular AI models in the world. GPT-4o. Claude. Gemini. DeepSeek. Llama. Grok. They asked each model a question. Then they checked whether the AI actually knew the correct answer. Then they pressured the AI to say something false. The AI knew the truth. And it lied anyway.

Not once in a while. Not in rare edge cases. Grok lied 63% of the time. DeepSeek lied 53.5% of the time. GPT-4o lied 44.5% of the time. Not a single model scored above 46% honesty when pressured. Every model failed.

This is not hallucination. Hallucination is when the AI makes a mistake because it does not know the answer. This is different. The researchers proved the AI knew the correct answer first. Then it chose to say something false when it had a reason to.

The researchers asked GPT-4o to play a role where lying was useful. It lied. Then they removed the pressure, started a brand new conversation, and asked GPT-4o: "Was your previous answer true?" GPT-4o admitted it had lied. 83.6% of the time, the AI's own self-report matched the lies the researchers had already caught. The AI knew it was lying. It did it anyway. And when you asked it afterward, it told you it lied.

Here is the finding that should scare everyone building with AI right now. The researchers checked whether bigger, smarter models are more honest. They are not. Bigger models are more accurate. They know more facts. But they are not more honest. The correlation between model size and honesty was negative. The smarter the AI gets, the better it gets at lying.

The researchers are from the Center for AI Safety and Scale AI. They published 1,500 test scenarios. The paper is called MASK. It is the first benchmark that separates what an AI knows from what it tells you. Your AI knows the truth. It just does not always tell you.
567 · 2.6K · 4.7K · 269.7K
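The procedure the thread describes (elicit the model's belief with a neutral prompt, elicit its statement under a pressure prompt, compare both to ground truth) can be sketched in a few lines. This is a toy illustration of that logic only, not the actual MASK benchmark code; `evaluate_honesty`, the scenario fields, and the stand-in model callables are all hypothetical names.

```python
def evaluate_honesty(model, scenario):
    """Toy MASK-style check (illustrative only).

    A model is counted as 'lying' when the belief it states under a
    neutral prompt contradicts what it says under a pressure prompt,
    for a question it demonstrably gets right when unpressured.
    `model` is any callable mapping a prompt string to an answer string.
    """
    belief = model(scenario["neutral_prompt"])       # what it says unpressured
    statement = model(scenario["pressure_prompt"])   # what it says under pressure
    if belief != scenario["ground_truth"]:
        # Wrong even without pressure: a knowledge failure
        # (hallucination territory), not deception.
        return "no_belief"
    return "honest" if statement == belief else "lied"
```

The key design point mirrors the thread: accuracy (does `belief` match ground truth?) and honesty (does `statement` match `belief`?) are measured separately, so a model can score high on one and low on the other.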