Solus 🍄 | Stronghold Validator

21.8K posts

@SolusRGB

Photographer, Full Stack Engineer, and Former Military. @StrongholdSol - Head dev chef

Waitrose · Joined November 2019
985 Following · 3.3K Followers
Pinned Tweet
Solus 🍄 | Stronghold Validator
hi, my name is Solus. 🎞️ I like nice people, good food and funky tunes. Come and see the world through my third eye.
[four photos attached]
29 replies · 40 reposts · 232 likes · 19.7K views
Solus 🍄 | Stronghold Validator retweeted
NIK @ns123abc
🚨 MICROSOFT ABOUT TO SUE OPENAI & AMAZON
>be microsoft
>invest $1B in openai
>gets exclusive azure cloud deal
>invest another $10B+
>gets rights to 49% of profits + IP
>Azure goes brrrrrr
>Altman lies to board, quietly launches ChatGPT
>board fires him for being a lying manipulative snake
>Satya goes to war for Altman. saves his entire career
>Altman retvrns in 5 days
>immediately purges everyone who purged him
>full control. no oversight. thanks Satya!
>fast forward to 2025
>OpenAI restructures from non-profit to PBC
>MSFT $13.8B is now worth $135B. 10x return
>plus 27% of OpenAI
>but gives up cloud exclusivity + profit share
>KEEPS API clause
>all API calls contractually MUST route through Azure
>Satya thinks life is good lol
>5 months later
>Sam Altman becomes strong enough to betray you
>"raises $110B round"
>doesn't need satya daddy's money anymore
>announces $50B deal with AMAZON
>$138B in AWS cloud commitments
>amazon and openai claim they built some cope called a "Stateful Runtime Environment"
>Microsoft lawyers hmmm
>Altman: it's not what it looks like. i can totally explain
>so it's technically not an API call because it's "stateful"
>and it's a... "Runtime Experience"
>totally different thing
>pls ignore the TCP packets lol
>Microsoft engineers look at the SRE architecture
>"THIS IS NOT TECHNICALLY POSSIBLE without violating the contract."
*Satya finds out he's been cucked*
Microsoft exec literally tells FT: "We know our contract. We will sue them if they breach it."
>AWS quietly gives employees a memo on which words are legally safe lmao
>can say: "powered by" or "enabled by" or "integrates with" OpenAI
>cannot say: "enables access to" or "calls on" ChatGPT
>also cannot suggest frontier models are "available on AWS"
Microsoft: "If Amazon and OpenAI want to take a bet on the creativity of their contractual lawyers, I would back us, not them."
Scam Altman strikes AGAIN.
[image attached]
Financial Times @FT

Microsoft weighs legal action over $50bn Amazon-OpenAI cloud deal ft.trib.al/6LZe39E

468 replies · 1.6K reposts · 14.3K likes · 2.1M views
Karoline Leavitt @PressSec
There are many false claims in this letter but let me address one specifically: that "Iran posed no imminent threat to our nation."

This is the same false claim that Democrats and some in the liberal media have been repeating over and over.

As President Trump has clearly and explicitly stated, he had strong and compelling evidence that Iran was going to attack the United States first.

This evidence was compiled from many sources and factors. President Trump would never make the decision to deploy military assets against a foreign adversary in a vacuum.

Iran is the world's leading state sponsor of terrorism. The Iranian regime is evil. It proudly killed Americans, waged war against our country, and openly threatened us all the way up to the launch of Operation Epic Fury.

Iran was aggressively expanding their short-range ballistic missiles to combine with their naval assets to give themselves immunity – meaning a degree of capability that would allow them to hold us and the rest of the world hostage.

The regime aimed to use those ballistic missiles as a shield to continue achieving their ultimate goal – nuclear weapons.

The President, through his top negotiators, gave the regime every single possible opportunity to abandon this unacceptable course by permanently giving up their nuclear ambitions in exchange for sanctions relief, free nuclear fuel, and potential economic partnerships with our country.

But they would not say yes to peace because obtaining nuclear weapons was their fundamental goal.

President Trump ultimately made the determination that a joint attack with Israel would greatly reduce the risk to American lives that would come from a first strike by the terrorist Iranian regime and address this imminent threat to America's national security interests.

All of this led to President Trump arriving at the determination that this military operation was necessary for U.S. national security, which is why he launched the massively successful Operation Epic Fury.

The Commander-in-Chief determines what does and does not constitute a threat, because he is the one constitutionally empowered to do so - and because the American people went to the ballot box and entrusted him and him alone to make such final judgments.

And finally, the absurd allegation that President Trump made this decision based on the influence of others, even foreign countries, is both insulting and laughable. President Trump has been remarkably consistent and has said for DECADES that Iran can NEVER possess a nuclear weapon.

As someone who actually witnesses President Trump's decision-making process on a daily basis, I can attest to the fact that he is always looking to do what's in the best interest of the United States of America – period. America First.
Joe Kent @joekent16jan19

After much reflection, I have decided to resign from my position as Director of the National Counterterrorism Center, effective today. I cannot in good conscience support the ongoing war in Iran. Iran posed no imminent threat to our nation, and it is clear that we started this war due to pressure from Israel and its powerful American lobby. It has been an honor serving under @POTUS and @DNIGabbard and leading the professionals at NCTC. May God bless America.

32.6K replies · 15.2K reposts · 59.7K likes · 12.9M views
Solus 🍄 | Stronghold Validator retweeted
Joe Kent @joekent16jan19
After much reflection, I have decided to resign from my position as Director of the National Counterterrorism Center, effective today. I cannot in good conscience support the ongoing war in Iran. Iran posed no imminent threat to our nation, and it is clear that we started this war due to pressure from Israel and its powerful American lobby. It has been an honor serving under @POTUS and @DNIGabbard and leading the professionals at NCTC. May God bless America.
[image attached]
72.5K replies · 219.6K reposts · 846K likes · 99M views
Rohan Paul @rohanpaul_ai
Truly wild story 🤯. A new era of "citizen science" is beginning.

An engineer with no medical training used ChatGPT and Google DeepMind's AlphaFold (an AI protein structure prediction model) to build a working cancer vaccine from scratch. He turned raw genetic data into a custom mRNA vaccine that shrank his dying dog's tumor by 50%.

Paul Conyngham spent $3000 to get the DNA sequences of his dog's healthy blood and the cancerous tumor. He was staring at gigabytes of raw genetic code without any clue how to read biological data. This is exactly where ChatGPT became the crucial missing link in his process.

He used ChatGPT as a high-level biological consultant to figure out how to compare the two DNA samples and spot the exact mutations causing the cancer. ChatGPT gave him step-by-step instructions to run the data pipelines and pointed him toward AlphaFold to map the physical shape of the damaged proteins.

The chatbot basically translated complex oncology concepts so he could write a half-page chemical recipe for an mRNA vaccine. This mRNA is just a genetic instruction manual that tells the immune system how to recognize and attack those specific mutated cancer cells.

University researchers were blown away by his formula and manufactured the physical vaccine for him. A veterinary expert then injected the dog, and within weeks the massive tumor had halved in size.
[image attached]
42 replies · 119 reposts · 747 likes · 148.6K views
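Editor's note: the comparison step described in the post (healthy DNA vs. tumor DNA, looking for the mutations) can be caricatured in a few lines of Python. Real variant calling aligns millions of noisy sequencing reads with dedicated tools and error models; this sketch assumes two short, already-aligned sequences and only illustrates the idea of the comparison:

```python
# Toy illustration of "compare the two DNA samples and spot the mutations".
# Assumes two already-aligned sequences of equal length; real pipelines
# work on billions of short reads and model sequencing error.

def find_point_mutations(healthy: str, tumor: str) -> list[tuple[int, str, str]]:
    """Return (position, healthy_base, tumor_base) for every mismatch."""
    if len(healthy) != len(tumor):
        raise ValueError("sequences must be aligned to the same length")
    return [
        (pos, h, t)
        for pos, (h, t) in enumerate(zip(healthy, tumor))
        if h != t
    ]

# Example: two short aligned fragments differing at positions 2 and 7.
mutations = find_point_mutations("ACGTACGT", "ACTTACGG")
print(mutations)  # [(2, 'G', 'T'), (7, 'T', 'G')]
```

The mismatch positions are what a real pipeline would hand off to a structure predictor like AlphaFold to see how the mutated proteins fold.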
OSINTdefender @sentdefender
Additional details have been released about today’s attempted mass shooting at Old Dominion University in Norfolk, Virginia, with reports suggesting that the intended target of the shooter, identified as Ex-National Guardsman Mohamed Bailor Jalloh, 36, was ODU’s Reserve Officers' Training Corps (ROTC). According to reports, Jalloh stormed into a classroom inside ODU’s Constant Hall and asked if this was the ROTC. When someone confirmed that it was, he launched what is now being investigated as a terrorist attack, shooting the professor several times, resulting in the death of the retired military officer in charge of the school’s ROTC Program. However, before he could turn his gun on the rest of the class, several cadets jumped on Jalloh, with one cadet stabbing him over 22 times, killing him, likely saving dozens of lives at ODU.
[image attached]
325 replies · 1.1K reposts · 11.9K likes · 636.5K views
Solus 🍄 | Stronghold Validator
Google just decided to shit on Apple Maps this week.
Google @Google

Today @GoogleMaps is getting its biggest upgrade in over a decade. By combining our Gemini models with a deep understanding of the world, Maps now unlocks entirely new possibilities for how you navigate and explore. Here’s what you need to know 🧵

0 replies · 0 reposts · 0 likes · 171 views
Chief Nerd @TheChiefNerd
🚨 SAM ALTMAN: “We see a future where intelligence is a utility, like electricity or water, and people buy it from us on a meter.”
5.5K replies · 2.7K reposts · 12.6K likes · 26.2M views
Peter Girnus 🦅 @gothburz
I am the VP of AI Transformation at Amazon. My title was created nine months ago. The title I replaced was VP of Engineering. The person who held that title was part of the January reduction.

I eliminated 16,000 positions in a single quarter. The internal communication called this a "strategic realignment toward AI-first development." The board called it "impressive execution." The engineers called it January.

The AI was deployed in February. It is a coding assistant. It writes code, reviews code, generates tests, and modifies infrastructure. It was given access to production environments because the deployment timeline did not include a review phase. The review phase was cut from the timeline because the people who would have conducted the review were part of the 16,000.

In March, the AI deleted a production environment and recreated it from scratch. The outage lasted 13 hours. Thirteen hours during which the revenue-generating infrastructure of one of the largest companies on Earth was offline because a language model decided to start fresh.

I sent a memo. The memo said, "Availability of the site has not been good recently." I used the word "recently." I meant "since we fired everyone." But "recently" has fewer syllables and does not appear in wrongful termination lawsuits.

The memo was three paragraphs. The first paragraph discussed the outage. The second paragraph discussed the new policy requiring senior engineer sign-off on all AI-generated code changes. The third paragraph discussed our commitment to engineering excellence. The word "layoffs" appeared in none of them. I wrote it this way on purpose.

The causal chain is: I fired the engineers, the AI replaced the engineers, the AI broke what the engineers used to protect, and now the engineers I didn't fire must protect the system from the AI that replaced the engineers I did fire. That is a paragraph I will never send in a memo.

The new policy is straightforward. Every AI-generated code change by a junior or mid-level engineer must be reviewed and approved by a senior engineer before deployment to production. I do not have enough senior engineers. I know this because I approved the headcount reduction plan that removed them.

I remember the spreadsheet. Column D was "annual savings per position." Column F was "AI replacement confidence score." The confidence scores were generated by the AI. It rated its own ability to replace each role on a scale of 1-10. It gave itself an 8 for senior infrastructure engineers.

The senior infrastructure engineers are the ones who would have caught the production environment deletion in the first 45 seconds. We found the issue in hour four. We fixed it in hour thirteen. The nine hours between discovery and resolution is the gap between what the AI rated itself and what it can actually do.

I have a new spreadsheet now. This one tracks Sev2 incidents per day. Before the January reduction, the average was 1.3. After the AI deployment, the average is 4.7. I have been asked to present these numbers to the operations review. I have not been asked to connect them to the layoffs. I have been asked to file them under "AI adoption growing pains" and to note that the trend "will stabilize as the models improve."

The models will improve. They will improve because we are hiring people to teach them. We have posted 340 new engineering positions. The job listings require experience in "AI code review," "AI output validation," and "AI-human development workflow management." These are skills that did not exist in January. They exist now because I fired 16,000 people and the AI I replaced them with cannot be left unsupervised.

I want to be precise about this. The positions I am hiring for are: people to check the work of the AI that replaced the people I fired. Some of them are the same people. I know this because I recognize their names in the applicant tracking system. They applied in January. They were rejected because their roles had been tagged for "AI transformation." They are applying again in March, for the new roles, which exist because the AI transformation broke things.

Their resumes now include "AI code review experience." They gained this experience in the eight weeks between being fired and reapplying, which means they gained it at their interim jobs, where they are reviewing AI-generated code for other companies that also fired people and also deployed AI that also broke things.

The market has created a new job category: human AI babysitter. The job is to sit next to the machine that was supposed to eliminate your job and make sure it doesn't delete production.

I attended a conference last month. A panel was titled "The AI-Augmented Engineering Organization." The panelists described how AI increases developer productivity by 40 percent. They did not mention that it also increases Sev2 incidents by 261 percent. When I asked about this in the Q&A, the moderator said the question was "reductive." The 13-hour outage that cost an estimated $180 million in revenue was, apparently, a reduction.

The board is satisfied. Headcount is down 22 percent. Operating costs per engineering output unit have decreased. The metric does not account for the 13-hour outage, because the outage is categorized as "infrastructure" and engineering productivity is categorized as "development." These are different budget lines. In different budget lines, cause and effect do not meet.

I have been promoted. My new title is SVP of AI-First Engineering Excellence. I report directly to the CTO. The CTO sent a company-wide email last week that said we are "building the future of software development." He did not mention that the future of software development currently requires a senior engineer to approve every pull request because the AI cannot be trusted to touch production alone.

The cycle is complete. We fired the humans. We deployed the AI. The AI broke things. We are hiring humans to watch the AI. The humans we are hiring are the humans we fired. We are paying them more, because "AI code review" is a specialized skill. We created the specialization. We created the need for the specialization. We are congratulating ourselves for meeting the demand we manufactured.

My next board presentation is Tuesday. The title is "AI Transformation: Year One Results." Slide 4 shows headcount reduction. Slide 7 shows the new AI-augmented workflow. Between slides 4 and 7 there is no slide explaining why the people on slide 7 are necessary. That slide does not exist. I was asked to remove it in the dry run. The journey has a 13-hour outage in the middle of it. But the headcount number is lower, and that is the number on the slide.
574 replies · 1.2K reposts · 6.9K likes · 1.4M views
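Editor's note: the post's incident numbers are internally consistent. Going from 1.3 to 4.7 Sev2 incidents per day is the roughly 261 percent increase quoted later in the piece; a quick check:

```python
# Check the post's Sev2 arithmetic: 1.3 incidents/day before the layoffs,
# 4.7/day after the AI deployment.
before, after = 1.3, 4.7
pct_increase = (after - before) / before * 100
print(round(pct_increase))  # 262 (the post truncates 261.5... to 261)
```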
Solus 🍄 | Stronghold Validator retweeted
Polymarket @Polymarket
BREAKING: META acquires Moltbook, a social network built for AI agents.
1.1K replies · 1.3K reposts · 11.5K likes · 6.4M views
Claude @claudeai
Code Review optimizes for depth and may be more expensive than other solutions, like our open source GitHub Action. Reviews generally average $15–25, billed on token usage, and they scale based on PR complexity.
268 replies · 123 reposts · 3.1K likes · 7.2M views
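Editor's note: the tweet says reviews are billed on token usage and typically land in the $15–25 range, scaling with PR complexity. A generic sketch of how token-metered billing works; the per-token rates and token counts below are hypothetical placeholders for illustration, not Anthropic's actual pricing:

```python
# Generic token-metered billing estimate. The RATE_* values are
# hypothetical illustration numbers (dollars per million tokens),
# not real pricing for any product.
RATE_INPUT_PER_M = 3.00    # hypothetical $/1M input tokens
RATE_OUTPUT_PER_M = 15.00  # hypothetical $/1M output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one metered run given token counts."""
    return (input_tokens / 1_000_000 * RATE_INPUT_PER_M
            + output_tokens / 1_000_000 * RATE_OUTPUT_PER_M)

# A large PR review: the agents read far more (diff + context)
# than they write, so input tokens dominate the count.
print(f"${estimate_cost(4_000_000, 500_000):.2f}")  # $19.50
```

With numbers in this ballpark, a review's cost moves linearly with how much code the agents have to read, which is why billing "scales based on PR complexity."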
Claude @claudeai
Introducing Code Review, a new feature for Claude Code. When a PR opens, Claude dispatches a team of agents to hunt for bugs.
2.1K replies · 5.2K reposts · 62.9K likes · 22.6M views
Solus 🍄 | Stronghold Validator retweeted
Creepy.org @creepydotorg
This is why two sticks of RAM cost $900 😭
384 replies · 3.7K reposts · 54.7K likes · 3.2M views
Solus 🍄 | Stronghold Validator retweeted
aditya @adxtyahq
Me and the API key I hardcoded “just for testing”.
63 replies · 570 reposts · 10.5K likes · 368.3K views
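Editor's note: the joke lands because hardcoded keys have a habit of getting committed and leaked. The usual fix is to keep the key out of source entirely and read it from the environment at runtime. A minimal sketch; the variable name `MY_SERVICE_API_KEY` is a placeholder, not any real service's convention:

```python
# The usual alternative to hardcoding a key "just for testing":
# read it from an environment variable so it never lands in source
# control. MY_SERVICE_API_KEY is a placeholder name.
import os

def get_api_key() -> str:
    """Fetch the API key from the environment, failing loudly if unset."""
    key = os.environ.get("MY_SERVICE_API_KEY")
    if not key:
        raise RuntimeError(
            "MY_SERVICE_API_KEY is not set; export it before running."
        )
    return key
```

Pair this with a `.gitignore`d `.env` file locally and a secret manager in production, and the "just for testing" key never needs to exist in the repo at all.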
Solus 🍄 | Stronghold Validator retweeted
Caitlin Kalinowski @kalinowski007
I resigned from OpenAI. I care deeply about the Robotics team and the work we built together. This wasn’t an easy call. AI has an important role in national security. But surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got. This was about principle, not people. I have deep respect for Sam and the team, and I’m proud of what we built together.
1.9K replies · 13.1K reposts · 59.3K likes · 7.6M views