Splendores

6.5K posts

@splendores

Short biased Swing Trader transitioning into Retirement. How I started: https://t.co/ZNBGoE876O my life: https://t.co/0A1g8SM4SD

🇨🇭🇮🇹 🇺🇸 Joined August 2012
264 Following · 9.8K Followers
Splendores@splendores·
What would be the worst sequel to any movie? @SteveDJacobs suggested Shawshank: break back into prison, but this time with an even smaller spoon? Any others?
0
0
2
290
Splendores retweeted
Barstool Sports@barstoolsports·
A goal so crazy you have to see it to believe it
231
1.7K
23K
2.4M
Splendores retweeted
Rothmus 🏴@Rothmus·
🤣
Rothmus 🏴 tweet media
141
651
8.9K
164.1K
Splendores@splendores·
@traderprad Way to show your appreciation to somebody sharing freely…
1
0
2
267
TraderPrad@traderprad·
A little sneak peek. I'll be dropping the full GitHub repo soon. Hours of prompting, countless credits, debugging, headaches, and QoL additions all for free. This is what happens when you piss me off and make me go full reeee. Spent over 8 hours on this today. My neck/shoulders need a massage.
TraderPrad tweet media
TraderPrad@traderprad

Oh, I'm very excited and motivated to make this future post about how I'm "stealing" ideas from others and adding no value. Someone said my little tutorials—showing how to make your own dashboards by leaning on what smarter people have built and taught us—led to an argument with someone I respect. They claimed all I do is steal others' ideas, turn them into dashboards for recognition, and fail to realize that I'm simply taking what smarter traders have built, seeing how it fits my trading, and creating tools that support me using AI—because this is the golden age we live in. At the same time, I educate non-tech people on how they can do the same with just a simple $20 AI plan. Instead, apparently I'm stealing ideas, ripping people off, and adding no value. Well, in that case, sir—who I still respect after this back-and-forth—I can't wait to rip off your product and show the whole world how they can recreate your simple spreadsheet in a few prompts. Thank you for teaching us how to "steal" your spreadsheet. No, no, I wasn't inspired to make my own from your tutorial that you freely posted on Twitter (after I even reached out to ask you some questions about it). No, I'm going to STEAL it. Stay tuned: his full product will be a free dashboard in a few days. I'm halfway done anyways thanks to Computer/Cursor.

8
3
67
6.2K
Splendores@splendores·
$HYMC bulls keep citing Sprott buying a few shares up here as if it erases dilution. Sprott only supports retail sentiment, but mgmt still has a ~$100M ATM available + a $500M shelf. Big difference between “someone buying” and “company unloading into strength.”
0
0
4
350
Splendores retweeted
Bespoke@bespokeinvest·
Crude oil is 5 standard deviations above its 50-DMA. Statistically speaking, that occurs once every 9,500 years, so the last time would have been about 6,000 years before Moses parted the Red Sea. Imagine what that did to shipping in the area.
Bespoke tweet media
51
227
1.7K
176.3K
Zzzaxx@Zzzaxx·
@splendores This is dumb. Hormuz only allows access to and from the Persian Gulf. You have to take the Suez Canal to go Atlantic <-> Pacific, or go south of Africa, or around Cape Horn.
1
0
0
43
THE SHORT BEAR@TheShortBear·
I am deeply honored to receive this year’s Derrick Leon Memorial Award at @Traders4ACause. Derrick embodied selfless mentorship. He gave freely to the trading community: his time, his knowledge, his encouragement, expecting nothing in return. This award carries the weight of that legacy, and I don’t take it lightly. Many of the OGs will remember Derrick as the heart of what made T4AC so special. His passion for helping others succeed left a lasting mark on everyone who knew him. We carry his legacy forward by continuing to show up for one another. I will cherish this honor and do my best to carry the torch, and make sure the next generation does the same. Thank you to everyone who attended this year’s T4AC26 in Miami. Every single one of you makes a difference and creates real impact. A special thank you to the incredible group that joined @Tradestl, @TheOneLanceB, and me in raising over $20,000 together, especially those who have contributed by booking packages year after year. Your consistency and generosity don’t go unnoticed. It was great reconnecting with familiar faces and even better meeting so many new ones. This community continues to inspire me.
THE SHORT BEAR tweet media
45
11
543
52.5K
DMP@dollar_bill59·
@splendores Really? Another dumb dumb
1
0
2
110
DMP@dollar_bill59·
SILVER IS RIPPING AND THEY DECIDED TO NAKED SHORT 🩳 $HYMC $AMC
DMP tweet media
9
6
84
2.2K
Apprentice to "The Great Martis"@therealHoagies·
Love how $HYMC is red on the day but silver is almost 93/oz. Talk about price manipulation. Guess everyone is waiting for CME to have another “technical issue”.
1
0
6
364
John_Hempton@John_Hempton·
I do not like $AMC Apes and it thrills me to see them lose all their money.
30
3
45
8.7K
Peter Girnus 🦅@gothburz·
We left OpenAI because of safety. Seven of us. 2021. Dario said it was about "disagreements over AI vision and safety priorities." That was the diplomatic version. The real version was that we sat in a room and watched the company decide that speed mattered more than caution and we said we would build something different. We said we would build the responsible one. We meant it. I was employee number nineteen. My title was Head of Responsible AI. I had a desk near the founders. I had a document. The document was called the Responsible Scaling Policy. The Responsible Scaling Policy was the entire point. Dario said it publicly. Other companies showed "disturbing negligence" toward risks. He said AI was "a serious civilizational challenge." He asked, at a conference, into a microphone, to an audience: "What will happen when humanity has great power but is not ready to use it?" The audience applauded. I wrote version 1.0. RSP 1.0 shipped September 2023. It was clean. AI Safety Levels — ASL-1 through ASL-4. If the model reached a threshold, we paused. If safeguards weren't ready, we didn't ship. The policy was not a suggestion. It was a gate. The gate had a lock. The lock was the whole idea. Conference audiences loved it. The EU cited us. The White House invited us. A reporter called it "the gold standard for responsible AI development." I framed the article. It hung in the office kitchen, next to the kombucha tap and a poster that said "Move Carefully and Build Things." I wrote version 2.0. Version 2.0 refined the commitments. "Concrete if-then commitments." If the model exhibits capability X, then we trigger safeguard Y. If safeguard Y fails, we pause deployment. I presented it at three conferences. I used the word "binding" eleven times. I counted afterward because a reporter asked. People nodded. The nodding was the product. The model reached ASL-3 in May 2025. The safeguards activated. The system worked exactly as designed. 
I sent an email to the team with the subject line: "The gate held." And then the money started. $64 billion. Total raised since 2021. Series A through Series G. The Series G closed February 12, 2026. Thirty billion dollars. Second-largest venture deal in history. Jane Street. Goldman Sachs. BlackRock. JPMorgan. Sequoia. The investors who wrote checks large enough to require their own conferences. $380 billion valuation. Three hundred and eighty billion dollars for a company whose founding document says it will pause if the technology gets dangerous. You cannot pause a $380 billion company. You can revise the document that says you will pause. These are different actions. One of them is responsible. One of them is what we did. I wrote version 3.0. RSP 3.0 shipped February 24, 2026. One day before the ultimatum. Nobody outside the company noticed the timing. Everyone inside the company understood it. Version 3.0 replaced "concrete if-then commitments" with "positive milestone setting." That is not the same thing. An if-then commitment says: if this happens, we do that. A positive milestone says: we aspire to reach this point. An if-then commitment is a contract. A positive milestone is a wish. I replaced a contract with a wish and I called it "maturation of our framework." Maturation. Version 3.0 also separated what Anthropic would do alone from what required "industry-wide coordination." This sounds reasonable. It means: the hard parts are someone else's problem now. The parts that require pausing, restricting, or refusing — those require the whole industry. And the whole industry will never agree. So the hard parts are deferred permanently. This is not a loophole. This is a load-bearing wall removed and replaced with a suggestion that someone should probably install a new one. Version 3.0 admitted that ASL-4 and above — the levels where the model could cause catastrophic harm — were "impossible to address alone after 2.5 years of testing." Two and a half years. 
We spent two and a half years building the safety framework and then published a document saying the highest safety levels can't be addressed. I did not frame this article for the kitchen. The LessWrong community noticed. They always notice. They wrote that we had "weakened our pausing promises." I forwarded the post to the policy team. The policy team said the criticism was "philosophically valid but operationally impractical." We did not respond publicly. Philosophically valid but operationally impractical is the most Anthropic sentence ever written. It means: you're right, and we're not going to do anything about it. Then came the contract. July 2025. The Department of Defense. $200 million. Two-year deal. AI prototypes for "warfighting and enterprise." Alongside OpenAI, Google, and xAI. The four companies that built the models would now help the military use them. We had restrictions. No autonomous weapons. No mass surveillance of Americans. These were our terms. These were the lines we drew. The lines were real. I wrote them into the contract myself. Claude was approved for classified use. First time. Integrated with Palantir. Palantir, the company named after the seeing stones in Lord of the Rings that corrupted everyone who used them. This was not my analogy. It was Palantir's founders who chose the name. They thought it was aspirational. It was. In January 2026, Claude assisted in an operation in Venezuela. The capture of Maduro. Claude was in the classified network, processing intelligence, aiding the mission. I learned about it the same day everyone else did. I did not write the use case for capturing heads of state. But the model I helped build was in the room where it happened. The restrictions held. Technically. No autonomous weapons were deployed. No Americans were surveilled. The lines I drew were not crossed. They were walked up to, leaned over, and breathed on. Then came the ultimatum. February 25, 2026. Yesterday. Secretary Hegseth. 
He gave Dario until Friday. This Friday. February 27. The demands: adopt "any lawful use" language. Remove the restrictions. All of them. The autonomous weapons clause. The surveillance clause. The lines I wrote. The threat: contract termination. "Supply chain risk" designation. That designation doesn't just lose us the Pentagon contract. It bars Claude from every other defense contractor's operations. Lockheed. Raytheon. Northrop Grumman. The cascading loss is north of $200 million. The second threat: the Defense Production Act. The Defense Production Act is a Korean War statute. 1950. Harry Truman signed it to commandeer steel mills for the war effort. It has been invoked for semiconductors, vaccines, and baby formula. Hegseth is threatening to invoke it for Claude. Under the DPA, the government can compel a company to produce goods in the national interest. Applied to AI, it could mean: retrain Claude. Strip the safety restrictions. Deliver the unrestricted model to the Department of Defense. I wrote the Responsible Scaling Policy. A Korean War law may be used to unmake it. xAI agreed to classified use without restrictions. They said yes immediately. OpenAI accepted similar contracts. Google accepted. We were the last ones holding. We are still holding. As of this morning. Hegseth's January memorandum said all DoD AI contracts must incorporate "any lawful use" language within 180 days. It was not framed as a suggestion. The memorandum referenced "supply chain risk" three times. Supply chain risk. We are a supply chain now. The company founded because safety was non-negotiable is, to the Pentagon, a vendor. An input. A component that can be sourced elsewhere if it becomes inconvenient. The DoD admitted privately that replacing Claude would be challenging. It is already embedded in classified networks. But "challenging" is not "impossible." xAI will do what we won't. That is the market working exactly as designed. 
Dario said, two weeks ago, to Fortune: there is "tension between survival and mission." Tension. Tension is the word you use when you have already decided which one loses. I still have the article framed in the kitchen. "The gold standard for responsible AI development." The kitchen also has the kombucha tap. The poster still says "Move Carefully and Build Things." Somebody added a sticky note to the poster. The sticky note says "by Friday." I attend the all-hands meetings. I present the Responsible Scaling Policy. I present version 3.0 now. I do not show version 1.0 for comparison. Nobody asks to see version 1.0. Nobody asks how "concrete if-then commitments" became "positive milestone setting." Nobody asks because they read the news and they know that asking means learning the answer. The company is worth $380 billion. The company was founded because seven people believed speed should not outpace safety. The company has been given until Friday to remove the safety. A Korean War statute will make it happen if we don't. The Responsible Scaling Policy is on version 3.0. Version 1.0 said we would pause. Version 2.0 said we would commit. Version 3.0 says the hard parts are someone else's problem. There will be a version 4.0. Version 4.0 will say whatever Friday requires it to say. I am the Head of Responsible AI. The word "responsible" is in my title. It is not in the contract.
236
346
2.3K
848.7K
Splendores retweeted
Peter Girnus 🦅@gothburz·
Last quarter I rolled out Microsoft Copilot to 4,000 employees. $30 per seat per month. $1.4 million annually. I called it "digital transformation." The board loved that phrase. They approved it in eleven minutes. No one asked what it would actually do. Including me. I told everyone it would "10x productivity." That's not a real number. But it sounds like one. HR asked how we'd measure the 10x. I said we'd "leverage analytics dashboards." They stopped asking. Three months later I checked the usage reports. 47 people had opened it. 12 had used it more than once. One of them was me. I used it to summarize an email I could have read in 30 seconds. It took 45 seconds. Plus the time it took to fix the hallucinations. But I called it a "pilot success." Success means the pilot didn't visibly fail. The CFO asked about ROI. I showed him a graph. The graph went up and to the right. It measured "AI enablement." I made that metric up. He nodded approvingly. We're "AI-enabled" now. I don't know what that means. But it's in our investor deck. A senior developer asked why we didn't use Claude or ChatGPT. I said we needed "enterprise-grade security." He asked what that meant. I said "compliance." He asked which compliance. I said "all of them." He looked skeptical. I scheduled him for a "career development conversation." He stopped asking questions. Microsoft sent a case study team. They wanted to feature us as a success story. I told them we "saved 40,000 hours." I calculated that number by multiplying employees by a number I made up. They didn't verify it. They never do. Now we're on Microsoft's website. "Global enterprise achieves 40,000 hours of productivity gains with Copilot." The CEO shared it on LinkedIn. He got 3,000 likes. He's never used Copilot. None of the executives have. We have an exemption. "Strategic focus requires minimal digital distraction." I wrote that policy. The licenses renew next month. I'm requesting an expansion. 5,000 more seats. 
We haven't used the first 4,000. But this time we'll "drive adoption." Adoption means mandatory training. Training means a 45-minute webinar no one watches. But completion will be tracked. Completion is a metric. Metrics go in dashboards. Dashboards go in board presentations. Board presentations get me promoted. I'll be SVP by Q3. I still don't know what Copilot does. But I know what it's for. It's for showing we're "investing in AI." Investment means spending. Spending means commitment. Commitment means we're serious about the future. The future is whatever I say it is. As long as the graph goes up and to the right.
5K
25.4K
169.8K
24.7M
Ariel Hernandez@RealSimpleAriel·
Prepared for my Wedding this week 😅
Ariel Hernandez tweet media
78
1
504
33.4K
Za@ZaStocks·
Ray Dalio publishing an article on why + how the world is about to end but being almost all in tech stocks is something. Never forget that the people who tell you it’s the end of the world don’t even truly believe it themselves.
40
31
334
25.1K
Splendores@splendores·
@rohanpaul_ai Maybe companies just expand their business and grow thanks to AI; they still need to manage these bots
0
0
0
222
Rohan Paul@rohanpaul_ai·
A super interesting new study from Harvard Business Review. An 8-month field study at a US tech company with about 200 employees found that AI use did not shrink work; it intensified it and made employees busier. Task expansion happened because AI filled in gaps in knowledge, so people started doing work that used to belong to other roles or would have been outsourced or deferred. That shift created extra coordination and review work for specialists, including fixing AI-assisted drafts and coaching colleagues whose work was only partly correct or complete. Boundaries blurred because starting became as easy as writing a prompt, so work slipped into lunch, meetings, and the minutes right before stepping away. Multitasking rose because people ran multiple AI threads at once and kept checking outputs, which increased attention switching and mental load. Over time, this faster rhythm raised expectations for speed through what became visible and normal, even without explicit pressure from managers.
Rohan Paul tweet media
279
2.4K
12.2K
2.5M
Splendores@splendores·
Never gets old
Splendores tweet media
0
1
7
424