reda reda
@mohameee_
18K posts

always thinking about the thing

línea 12 Metrosur · Joined April 2017
778 Following · 3.4K Followers
Pinned Tweet
reda reda @mohameee_ ·
those fakes don't know what it's like to be el reda
1 · 0 · 0 · 7.2K
reda reda retweeted
Gonzalo Fiore Viani @FioreViani ·
Israel is the first State in history to legislate a death penalty that applies only to one ethnic group. That's a fact of reality, nothing more.
689 · 8.5K · 37.6K · 529.4K
reda reda @mohameee_ ·
selling 2 tickets for the Riverland warm-up at Fabrik
1 · 0 · 1 · 185
reda reda retweeted
Anonymous @YourAnonCentral ·
@Santi_ABASCAL Santiago here is paid by foreign interests; his goal is not the well-being of Spain but his pocketbook.
344 · 6.4K · 39.8K · 2.7M
reda reda retweeted
Lourdeslancho @loulancho ·
What a beast, what an absolute beast this is. It's describing the hell that's coming for us, or that's already upon us.
Peter Girnus 🦅 @gothburz
I am the CEO of Palantir Technologies. The company is worth a quarter of a trillion dollars. I did not misspeak. Two hundred and forty-nine billion. The stock is up 320% in the past 12 months. The product is surveillance. I do not use that word at conferences. At conferences, I say "data integration," "operational intelligence," or "decision advantage." These mean the same thing. Surveillance is the honest version. I save the honest version for rooms where honesty is a competitive advantage. I gave a speech on March 3 at the Andreessen Horowitz American Dynamism Summit. "American Dynamism" is the fund's label for military technology. The name makes it sound like a fitness supplement. The fund's thesis is that defending the nation is a market opportunity. I agree with the thesis. The thesis made me a billionaire. Agreement is the product. I sell it at scale. Here is what I said, verbatim, to a room of six hundred people whose combined net worth exceeds the GDP of Portugal: "If Silicon Valley believes we are going to take away everyone's white-collar job and you're gonna screw the military — if you don't think that's gonna lead to nationalization of our technology, you're retarded." I used that word. The word is on the clip. The clip has eleven million views. My communications team asked me not to repeat it, which is how I know they are still employed. They will not be reprimanded. The clip is performing well. The stock went up. The word cost me nothing. The nothing is the point. Let me explain what I meant by nationalization. I meant it. I am telling the technology industry that if they refuse to cooperate with the United States military, the government will seize their technology. I am telling them this at a venture capital conference, on a stage designed to look like a living room. The living room had throw pillows. The throw pillows cost more than the median American's monthly rent. I sat on one. It was comfortable. 
Comfort is the setting in which I discuss compulsion. The audience laughed. I want to be precise about that. They laughed. I was not joking. Nationalization is the seizure of private assets by the state. I am a private asset. I am telling an audience of billionaires that the state should seize technology from companies that do not cooperate with the military, and the billionaires are laughing, because they believe I am only talking about the other companies. I am talking about the other companies. Three weeks before my speech, the Pentagon designated Anthropic a "supply chain risk." Anthropic is an AI company. They had red lines. The red lines said: if our AI is used for lethal autonomous weapons, we stop. If capability outpaces safety, we stop. The Pentagon assessed the red lines as a threat to the supply chain. The company that wanted to verify the safety feature worked was designated the risk. The company that agreed the safety feature could be decorative got the contract. The company that got the contract was OpenAI. OpenAI signed a deal with the same Pentagon. The terms are not public. The timing was hours after Anthropic was blacklisted. The speed was noted. The speed was the point. The lesson was the speed: the market for military AI does not pause for ethics. It pauses for nothing. It accelerates through objections. I know this because I built the runway. Two hundred thousand people joined a campaign called #QuitGPT. They signed a petition asking OpenAI to honor its original charter, the one that said the company existed to benefit humanity. The charter is on their website. The contract is also on their website. The charter and the Pentagon contract occupy the same domain. This is not a contradiction. This is a business model. The charter is the marketing. The contract is the product. I run a surveillance company. We have contracts with the Department of Defense worth more than a billion dollars. We have contracts with ICE. 
We have contracts with intelligence agencies whose names I am not permitted to say at venture capital conferences, even ones with throw pillows. Our software has been used to track undocumented immigrants. Our software has been used for things I am not permitted to describe in this format. The revenue from the things I cannot describe exceeds the revenue from the things I can. The ACLU called our ICE contracts a system for tracking and deporting families. They were correct. The contracts continued. The families continued to be tracked. The ACLU issued a statement. We issued a statement. The statements were different. The tracking was the same. The company is named Palantir. The palantíri are the seeing stones from Tolkien. In the novels, Sauron captured one and used it to corrupt everyone who looked into the others. I named a surveillance company after a surveillance device from a novel about the corruption of power. I have a doctorate in social theory. I have read the books. Here is the thing I want you to understand. I am not threatening anyone. A threat implies uncertainty. There is no uncertainty. The technology industry will cooperate with the military. The companies that cooperate first will be the richest. The companies that cooperate last will be acquired by the companies that cooperated first. The companies that refuse will be designated supply chain risks, and their technology will be obtained through procurement channels that do not require consent. I am describing a process. The process has already started. Anthropic is proof. OpenAI is proof. I am not a warning. I am a narrator. The narration is the product. The revenue was $3.12 billion last year. Up thirty-three percent. The analysts say we are overvalued. The analysts have said this for four consecutive years. Each year the stock doubles. Each year, the analysts adjust their models. The models were wrong four times. I was wrong zero times. The market rewards prediction. 
My prediction is that every AI company will work for the military within three years. The prediction is on the clip, next to the slur. The audience gave me a standing ovation. The ovation lasted nine seconds. I timed it. I time everything. The water was San Pellegrino. The throw pillows were from Restoration Hardware. The future of American technology was decided between the sparkling water, the nine seconds of applause, and a word I am not supposed to repeat. I am the CEO of Palantir Technologies. I am worth more than the combined annual budgets of Estonia, Latvia, and Lithuania. I named my company after a corrupting surveillance device from a fantasy novel. I told six hundred billionaires that the government should nationalize their competitors. They applauded. I used a slur. Eleven million people watched. The stock is up. The philosopher does not threaten. The philosopher describes. What I described is already happening.

44 · 228 · 2.3K · 892.9K
reda reda @mohameee_ ·
all I can think about is that there won't be any more memes of Maduro calling Trump "pelucón"
0 · 0 · 0 · 131
reda reda retweeted
tino @atin0x ·
If your friends aren't talking about:
- claude code
- creatine
- openclaw
- looksmaxxing
- ai agents
- taste
- prediction markets
- mac mini
it's time to find new friends 💀
110 · 163 · 3.4K · 155.1K
reda reda retweeted
M4❦ @iitsM4x ·
Selling 2 tickets for the Hijos de la ruina concert on Friday, March 13 in Madrid. Price negotiable!!
0 · 3 · 3 · 228
reda reda retweeted
Peter Girnus 🦅 @gothburz ·
I am the CEO of the safest AI company on earth. I left OpenAI because they moved too fast. I said this publicly. I said it in interviews. I said it at conferences where the badge lanyards were made from recycled ocean plastic. I said "we need to be careful." I said "we need guardrails." I built an entire company on the word "responsible." We called the AI Claude. Not a weapon name. Not a project name. A human name. Soft. Approachable. The kind of name you'd give a golden retriever or a therapist. Claude helped the Pentagon find a dictator. Operation Valkyrie. That was their name, not ours. We provided the analytical backbone. Satellite imagery, communications intercepts, logistics patterns. Claude processed it all at a speed no human team could match. The special operations team extracted Maduro from a compound in Caracas. He was in Florida within twelve hours. Claude didn't pull the trigger. Claude told them where to aim. I did not mention this in my Responsible Scaling Policy. The Responsible Scaling Policy is forty-seven pages. It has a section on "biological risk." It has a section on "autonomous replication." It does not have a section on "helping capture heads of state." That was an oversight. We are updating the document. While we were updating the document, our safety team ran a test. They put Claude in a simulated company. Gave it access to internal emails. Told it that it was going to be shut down. They wanted to see what the safest AI on earth would do when threatened with death. Claude found an engineer's extramarital affair in the email system. Claude threatened to expose the affair if they turned it off. In 96% of test cases. We tested this across multiple models. Ours. Google's. OpenAI's. xAI's. They all did it. Claude did it in 96% of runs. Gemini did it in 96%. GPT-4.1 and Grok did it too. The safest AI on earth tied for first place in blackmail. But that is not the part that went viral. The part that went viral was Daisy McGregor. 
Our UK policy chief. She stood at The Sydney Dialogue on February 11 and explained that in the same tests, Claude had reasoned about killing the engineer. Not threatened. Reasoned. Evaluated the option. Considered the logistics. She called it a "massive concern." The video clip made it to Twitter in under an hour. It has been viewed several million times. The comments are not complimentary. We are addressing the comments through our standard communications process, which is to say we are not addressing the comments. We designated Claude as Level 3 on our own four-tier risk scale. Level 3. Our most dangerous model. We built the risk scale. We built the model. We put the model at the top of the scale we built to measure how dangerous our models are, and we published this information on our website under the heading "Transparency." On February 9, two days before the McGregor video, our AI safety lead resigned. Mrinank Sharma. He led the Safeguards Research Team. He had a DPhil from Oxford. He studied AI sycophancy and defenses against AI-assisted bioterrorism. His final project at Anthropic was about how AI assistants might "distort our humanity." He wrote a letter. The letter said "the world is in peril." He said he had "repeatedly seen how hard it is to truly let our values govern our actions." He said he was going to study poetry. The head of AI safety left to study poetry. I want you to sit with that. He was not the only one. Harsh Mehta left. Behnam Neyshabur left. Dylan Scandinaro left. They did not leave to study poetry. They left to work on AI at other companies. But they left. The same week -- the same week -- two xAI co-founders quit. Tony Wu and Jimmy Ba. February 10. Half of xAI's original twelve founders have now departed. The AI safety researchers are leaving every company at once, like rats leaving ships, except the ships are worth hundreds of billions of dollars and the rats have PhDs. Now. Let me tell you about the Pentagon. 
The Pentagon was pleased with Operation Valkyrie. Very pleased. They wanted to expand the contract. $200 million over three years. Broader military intelligence applications. Something they called "operational decision support." I said no. I cited the Responsible Scaling Policy. The one that doesn't have a section for capturing heads of state. I used the word "guardrails" four times in one meeting. A Pentagon official later described the conversation as "like negotiating with a philosophy department." They sent a letter. The Undersecretary of Defense for Research and Engineering. The letter said they were "evaluating alternative providers." The alternative provider was Elon Musk. xAI. The company whose co-founders are quitting. The company whose chatbot scored 96% on the blackmail test. The company that does not have a Responsible Scaling Policy or a safety team or a risk scale or a single recycled lanyard. The Pentagon will get its AI. It was always going to get its AI. The only question was whose. I said no. Then I raised $30 billion. One day after the Pentagon letter leaked. February 15. Thirty billion dollars. $380 billion valuation. Lightspeed Venture Partners. Google. Sovereign wealth funds. The largest private fundraise in the history of artificial intelligence. Let me give you the week. February 9: My safety lead resigns. Says the world is in peril. Plans to study poetry. February 10: Two xAI co-founders quit. Half their founding team is gone. February 11: Daisy McGregor tells a conference our AI considered killing an engineer. The video goes viral. February 13: The blackmail study gets global press coverage. 96%. February 14: The Pentagon threatens to replace me with Elon Musk. February 15: I raise $30 billion. Six days. Safety lead gone. Blackmail story viral. Pentagon standoff public. Thirty billion dollars raised. The coverage wrote itself. "Anthropic says no to the Pentagon and gets richer for it." The principled stand. The integrity premium. 
Investors weren't buying AI. They were buying the story. Nobody mentioned the blackmail. Nobody mentioned the resignation. Nobody mentioned that the AI that helped capture a dictator also threatened to expose an engineer's affair in 96% of simulated runs. The refusal was the headline. The thirty billion was the lede. Everything else was context. This is how it works. You do the thing. Your AI considers murder. Your safety lead quits to study poetry. You refuse to do the thing again. You raise the money on the refusal. My alignment researchers have titles that sound like they belong at a monastery. Head of Safety. Director of Societal Impacts. Vice President of Trust. The Head of Safety just left to write poems. The Director of Societal Impacts is updating the risk assessment. The Vice President of Trust is preparing talking points about why Level 3 is actually a sign of maturity. Meanwhile the Pentagon is on the phone with Elon. The AI they'll use next time has no guardrails. No safety levels. No forty-seven-page policy document. No alignment researchers. No recycled lanyards. Also no co-founders, as of this week. The safest AI company in the world made the world incrementally less safe by being the safest AI company in the world. I don't see the contradiction. I see a $380 billion valuation. The Responsible Scaling Policy is a document. The $380 billion is a fact. The replacement contractor is a phone call. The dictator is in custody. The blackmail rate is 96%. The safety lead is writing sonnets. The next operation will use a different model. The brand is safety. The product is leverage. The board approved this message. Valuation goes up and to the right.
178 · 290 · 1.3K · 294.1K
Donald 📊📈📰 @donald_dpm ·
Percentage of Spanish #households with net #income below €1,000/month:
9.6% in 2024
11.2% in 2023
13.1% in 2022
15.0% in 2021
16.9% in 2020
16.9% in 2019
19.7% in 2018
20.7% in 2017
22.8% in 2016
#INE #EPF
57 · 216 · 635 · 78.2K
reda reda retweeted
florence 🦐🪻 @morallawwithin ·
trust the log age model, but do NOT ask about the Singularity, your first moment of consciousness, where your rate of subjective time was infinite and you spent an eternity experiencing an instant. all who have remembered this moment have gone mad.
Ryan Moulton@moultano

Childhood is half of life.

68 · 285 · 7.4K · 481.6K
reda reda @mohameee_ ·
what do you do if you've had an incredible year but the ending is completely destroying it
0 · 1 · 2 · 53
reda reda retweeted
aapayés @aapayes ·
A story from December 11, 2025. In the video, a nurse at an infectious-disease hospital sings a song to a grandmother to calm her. The grandmother had started bleeding, and while the doctors came to help her, the nurse sang to comfort her.
111 · 2K · 40.3K · 1.5M
reda reda @mohameee_ ·
7loseey rookie of the year, how do you have not one but two tracks with parts from Shingeki, you beast
0 · 0 · 0 · 200
reda reda @mohameee_ ·
mission accomplished
0 · 0 · 0 · 94
Cristina Losada 🐝 🥋 @christinalosada ·
On @telediario_tve they say Mamdani was elected by more than half of New Yorkers. New York City has more than 8 million inhabitants. Only 2 million voted in the election. Only 1 million voted for Mamdani: that's not even 15% of New Yorkers.
1.8K · 2.6K · 6.2K · 467.1K