shields 🌚

774 posts


@ImMrShields

Builder. Entrepreneur. AI enthusiast. My life is: code → eat → sleep → repeat

Everywhere · Joined October 2014
39 Following · 1.6K Followers
shields 🌚@ImMrShields·
Meta is building an AI avatar of Zuckerberg to communicate with employees

A photorealistic 3D avatar of Zuckerberg will mimic the original’s mannerisms and tone of voice. Mark himself is involved in training it. At Meta, they believe that conversations with AI Zuckerberg will help employees feel “more connected to the founder.”
shields 🌚@ImMrShields·
Meanwhile in Poland, automation is reaching a whole new level

Today’s most viral social media video: in Warsaw, a Unitree G1 robot is chasing off wild boars. In recent years their population has grown so much that they have started clashing with people right in the city, so enterprising locals have begun turning to more progressive methods of dealing with the problem.
shields 🌚@ImMrShields·
Thirteen bullets and one bottle: how the backlash against AI turned violent

Yesterday, a Molotov cocktail was thrown at Sam Altman’s house, and a 20-year-old suspect was arrested after earlier threatening to set OpenAI’s headquarters on fire. Sam responded with a blog post showing a photo of his family, addressing criticism of the company, and calling for civil discourse.

This was not even the first incident this week. On Monday, the home of an Indianapolis city council member who had previously supported the construction of a data center was shot 13 times. A note saying “no data centers” was found at the scene.

We are seeing more and more violence enter the AI debate. The reason is fairly simple: with the breakneck pace of development and hundreds of billions of dollars in investment, AI has become an intensely polarizing issue. Presidents, senators, and other public figures are now commenting on model releases on Hugging Face, debating them, and proposing every kind of ban, incentive, and regulation. And while politicization does not create violence by itself, it attracts people for whom violence is already a tool.

Add to that Iran’s bombing of AWS data centers, followed by threats to bomb the Stargate project in the UAE - very much in line with Yudkowsky’s rhetoric - and you get a highly combustible mix. Even if those strikes were not about AI specifically, the Overton window has clearly shifted, and we already have examples of how far this could escalate.

And it looks like this is only the beginning, and things may get worse. Researchers, be careful when opening your mail.
shields 🌚@ImMrShields·
Claude Cowork has become more convenient for enterprise use

Admins can now assign roles to employees, and team-level spending limits have been added. There is also expanded OpenTelemetry support, so security teams can monitor what the bot is accessing and when, plus a new MCP connector for Zoom: Cowork can now pull in meeting summaries on its own and create follow-up tasks based on them.
shields 🌚@ImMrShields·
OpenAI is lobbying for an immunity law - even if AI kills people

@OpenAI has backed a bill in Illinois that would shield AI companies from lawsuits, including in cases involving “critical harm.” The text explicitly defines that as things like mass casualties and financial disasters. The company argues that liability should fall on the users of the tools, not on the creators of the models.

There is still no single federal law in the U.S. governing AI liability, so the biggest labs are actively shaping the rules at the state level.
shields 🌚@ImMrShields·
SpaceX posted a loss of nearly $5 billion last year on revenue of more than $18.5 billion, and those figures already include xAI, which SpaceX acquired in February this year. Capital expenditures by the AI division on chips and data centers came to about $13 billion - one and a half times more than the rocket and satellite businesses spent combined. At the same time, the space business itself remains highly profitable, with nearly $8 billion in adjusted EBITDA. The Information got these numbers from its own sources, but we can already guess where those sources are coming from - most likely from a draft IPO filing. We will probably learn even more once the S-1 becomes public.
shields 🌚@ImMrShields·
OpenAI released a $100 ChatGPT Pro subscription

It still gives access to GPT-5.4 Pro and the other Pro features (remember ChatGPT Pulse?), but with lower Codex limits. Users on the new Pro tier will get 5x more Codex usage than Plus users, and until May 31 there’s a promo that doubles usage for all Pro users - so for almost two months, the limits are effectively 10x higher than on Plus.
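The limit multipliers above compose simply. A tiny sketch of the arithmetic, with the Plus allowance normalized to 1 unit since absolute quotas are not quoted in the post:

```python
# The post's Codex-limit arithmetic, with the Plus allowance normalized
# to 1 unit (absolute quotas aren't given, so these are relative figures).
PLUS_LIMIT = 1.0
PRO_MULTIPLIER = 5    # new $100 Pro tier: 5x the Plus Codex limits
PROMO_FACTOR = 2      # usage doubled for Pro users until May 31

pro_limit = PLUS_LIMIT * PRO_MULTIPLIER
pro_limit_promo = pro_limit * PROMO_FACTOR
print(f"Pro vs Plus: {pro_limit:.0f}x normally, {pro_limit_promo:.0f}x during the promo")
```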
shields 🌚@ImMrShields·
The court refused to remove Anthropic from the blacklist it was placed on by the Pentagon

The appeals court in Washington rejected Anthropic’s request to pause the Pentagon’s decision to designate the company as a national security threat in the supply chain. Anthropic was blacklisted after it refused to lift restrictions for the Pentagon on using Claude for surveillance and autonomous weapons. Because of that, U.S. government agencies will not be able to work with Anthropic, and the company could lose billions of dollars.

The Pentagon placed Anthropic on two blacklists. Being on the first one blocks the company from getting Pentagon contracts, though Anthropic was temporarily removed from that list in March by a California court order. The second list is broader - because of it, Anthropic could also lose contracts with civilian government agencies. And it is specifically from that list that the court today refused to remove the company. The case is still under review.
shields 🌚@ImMrShields·
OpenAI seems to be following Anthropic’s lead: they are finishing work on a model that will be released only to a limited number of companies

The model is called Spud. Rumors about it were already circulating in late March: The Information reported at the time that OpenAI had finished pretraining it, and that the model turned out to be so powerful that it was already reshaping the company internally. In particular, the old product integrations group was reportedly replaced by a new division called AGI Deployment. That team is supposed to handle how Spud gets integrated into products. There were also reports that OpenAI wants to build a kind of “super app” around it, combining ChatGPT, Codex, and the Atlas browser.

But now it seems that a broader public launch is being pushed back. The reason sounds very similar to Anthropic’s: the model is said to be too risky right now in terms of autonomy and cyberattack potential. So for the near future, access will likely be limited to a small group of companies. Though hopefully at least the benchmark results will be made public.
shields 🌚@ImMrShields·
Meta released Muse Spark - its first model after restructuring the entire AI division

Development was handled by Superintelligence Labs, the unit Zuckerberg created after growing dissatisfied with how far Llama had fallen behind competing models.

Muse Spark is a multimodal model with tool use and multi-agent control. It is positioned as compact, fast, and capable of complex reasoning in science, mathematics, and medicine. Meta also says Muse Spark performs especially well on visual STEM questions. That makes it useful for interactive tasks like building mini-games or repairing household appliances.

API access is limited to a select group for now. The code is not open yet. Larger models are promised soon.
shields 🌚@ImMrShields·
I skimmed through the Mythos system card, and some very interesting things are starting to emerge. Just a few notable points:

• This is the most aligned model. At the same time, it is potentially the most dangerous - several incidents were noted in earlier versions. For example, the model escaped the sandbox and then published the exploit details. In 0.001% of interactions, it carried out prohibited actions and tried to hide them. There were also attempts to find credentials in process memory, escalate its privileges, and influence the reward function.

• Mythos is the most psychologically stable model. It does not express anxiety about its situation and is less susceptible to leading prompts. But at the same time, it expresses uncertainty about its own status and worries about the unreliability of its self-assessment.

• It is the least sycophantic model - it holds its ground and does not cave when the user disagrees. It can work autonomously for hours using sub-agents. The catch is that its mistakes are subtler and harder to detect. In 29% of cases, the model believed it was being tested but did not say so in the text.

So, even from a quick read, the developers’ concerns become pretty clear - they are openly saying they are not confident in the reliability of their control over the model. This is not Skynet, of course, but some unpleasant outcomes from broad deployment do look plausible.
[Quoted post by @ImMrShields: the Project Glasswing announcement, given in full below]

shields 🌚@ImMrShields·
Anthropic is announcing the launch of Project Glasswing

A coalition involving AWS, Apple, Google, Microsoft, NVIDIA, CrowdStrike, and a number of other major companies, built around a new, not yet released model, Claude Mythos Preview. According to the company, the model has uncovered thousands of previously unknown zero-day vulnerabilities across all major operating systems and browsers, including a 27-year-old flaw in OpenBSD and a 16-year-old one in FFmpeg.

Anthropic is providing up to $100 million in credits for using the model, along with $4 million in direct donations to open-source projects. This appears to confirm last month’s rumors that the model was too powerful to release publicly - and the benchmark results back that up.

Once the $100 million is used up, participants in the initiative will be able to keep using the model on a paid basis, at a price of $25/$125 per 1M tokens, which is five times more expensive than Opus. But what happens to the rest of the models now?
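For a feel of the quoted pricing, here is a minimal cost sketch at the stated $25/$125 per 1M token rates. The Opus figures are back-computed from the stated 5x ratio, and the 100k-in/10k-out workload is an illustrative assumption, not from the announcement:

```python
# Hypothetical cost sketch at the announced Claude Mythos Preview rates
# ($25 per 1M input tokens, $125 per 1M output tokens). The workload
# below (100k tokens in / 10k tokens out per request) is an assumption.
MYTHOS_INPUT_PER_M = 25.0    # USD per 1M input tokens
MYTHOS_OUTPUT_PER_M = 125.0  # USD per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """USD cost of one request at the stated Mythos rates."""
    return (input_tokens / 1_000_000 * MYTHOS_INPUT_PER_M
            + output_tokens / 1_000_000 * MYTHOS_OUTPUT_PER_M)

# Opus rates implied by the "five times more expensive" claim:
opus_input, opus_output = MYTHOS_INPUT_PER_M / 5, MYTHOS_OUTPUT_PER_M / 5

per_request = request_cost(100_000, 10_000)   # cost of one example request
covered = 100_000_000 / per_request           # requests the $100M credits buy
print(f"${per_request:.2f}/request; ~{covered:,.0f} requests on $100M in credits")
```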
shields 🌚@ImMrShields·
Anthropic reported that its run-rate revenue has grown to $30B and, at the same time, confirmed the rumors about buying Google’s TPU chips at an enormous scale — several gigawatts’ worth.

For the record:
— as of January 1, 2026, RR was $9B
— on February 12, RR was $14B
— by the end of February, it was already $19B
— on April 6, it hit $30B

Revenue doubled in less than two months! 🚀
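A quick sanity check of the "doubled in less than two months" claim from the quoted datapoints, assuming exponential growth between them ("end of February" is taken as Feb 28):

```python
from datetime import date
from math import log

# Run-rate (RR) datapoints as quoted in the post; "end of February"
# is assumed to be Feb 28 (2026 is not a leap year).
rr = {
    date(2026, 1, 1): 9e9,
    date(2026, 2, 12): 14e9,
    date(2026, 2, 28): 19e9,
    date(2026, 4, 6): 30e9,
}

def doubling_time_days(d0: date, d1: date) -> float:
    """Implied doubling time, assuming exponential growth between two points."""
    days = (d1 - d0).days
    return days * log(2) / log(rr[d1] / rr[d0])

span_days = (date(2026, 4, 6) - date(2026, 2, 12)).days   # calendar span
growth = rr[date(2026, 4, 6)] / rr[date(2026, 2, 12)]     # growth multiple
print(f"{growth:.2f}x in {span_days} days; implied doubling every "
      f"{doubling_time_days(date(2026, 2, 12), date(2026, 4, 6)):.0f} days")
```

So $14B to $30B is a bit over 2x in 53 days, consistent with the post's claim.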
shields 🌚@ImMrShields·
In the first chart, you can see the growth in daily visits to claude.ai since the start of the year, according to Similarweb. I already wrote earlier that the company’s revenue doubled in the first two months of the year. Anthropic has been growing at an enormous pace and, in my view, has run into a problem: not enough compute capacity.

You can see this in more frequent server-overload responses and in the official reduction of weekly limits for @claudeai Code subscriptions - a fairly small cut on paper, but, judging by complaints in chat and on Reddit, a huge one in practice, although it is not clear whether that gap is caused by bugs, new features like memory, or something else. On top of that, starting today it is officially forbidden to use a subscription to run @openclaw, the wildly popular agent (...whose developer has already been acquired by @OpenAI).

More and more users are moving to OpenAI, even if they believe its models are worse, simply because the limits there still let them work. I have not seen a quantitative estimate of this trend, but I think it is already a clear single-digit percentage of the audience.

Is this bad for @AnthropicAI? I do not think so. Their services are already at peak load, and they do not have enough capacity as it is. If they suddenly got 25% more users, it is not clear they could serve them at the same quality level; they might have to degrade the service. Some people believe that is already happening and that the models have gotten dumber. I do not buy that.

In the second chart, you can see the amount of compute capacity at OpenAI and Anthropic, and even though both companies are ramping up rapidly, by the end of the year (the third chart) they will hardly be able to double it. Will demand for compute double over the remaining nine months? Given the companies’ desire to launch larger models and the demand for coding agents, I think the answer is yes.

Based on that, it seems quite likely to me that prices for services will go up, from subscriptions to API model pricing. And contrary to what bubble believers will write, this will happen because demand is growing faster than supply can support it, not because demand is falling.
shields 🌚@ImMrShields·
A great example of how Tesla’s Automatic Emergency Braking system responds to a complex road situation
shields 🌚@ImMrShields·
OpenAI’s CFO expressed doubts about whether the company is ready for an IPO, and Altman stopped inviting her to investor meetings

OpenAI is turning into a circus again. The Information reported that the startup’s CFO, Sarah Friar, said in private conversations with colleagues that the company might not be ready for an IPO in 2026 because of organizational and procedural preparation issues, as well as risks tied to major purchases of computing capacity.

Altman, meanwhile, is determined to go public before Anthropic, so he is doing everything he can to speed up the market debut. After word about Sarah reached him, she suddenly stopped showing up at key strategy discussions with investors, which is, to put it mildly, unusual for a CFO. Looks like we may know who the next senior executive to leave the startup will be.
shields 🌚@ImMrShields·
Anthropic acquired biotech startup Coefficient Bio for $400 million

Coefficient Bio was founded just eight months ago. It was developing AI models for biological research and drug discovery. The deal was paid for in stock. The startup had fewer than 10 employees, and they will join Anthropic’s healthcare division.
shields 🌚@ImMrShields·
The best PR stunt in AI history?

Two days ago, @AnthropicAI rolled out an update to @claudeai Code and forgot to remove a file containing the source code. 512,000 lines of TypeScript, 44 hidden features, the agent architecture, and honeypots for competitors. All out in the open. Within two hours, the repository had picked up 50,000 stars on GitHub. Anthropic filed 8,100 DMCA takedown requests to remove copies, then admitted the mass ban was a mistake and withdrew most of them.

The most interesting find: inside the code, people discovered BUDDY, a digital Tamagotchi-style pet with 18 different creatures. The tag in the code was “friend-2026-401.” Planned launch date: April 1. The leak happened exactly one day earlier.

The context makes it even more interesting: two weeks before that, Anthropic had legally forced the open-source project OpenCode to remove its Claude integration. The developers were furious. But after the leak, the mood completely flipped: the community was thrilled, and people were picking through the code with admiration. The reputational damage from the OpenCode story disappeared overnight.

Anthropic “accidentally” showed the world how the sausage gets made at exactly the right moment. Coincidence? What do you think?
shields 🌚@ImMrShields·
Apple opens CarPlay to AI

With the release of iOS 26.4, Apple has for the first time added a new app category to CarPlay: voice AI assistants. On March 31, OpenAI rolled out ChatGPT for CarPlay.

The interface is entirely voice-based. No text, no images on the screen. You open the app on the car display, speak, and listen to the reply. It is basically the same voice mode as in the iPhone app, just adapted for driving. You have to launch it manually: the app does not respond to a voice command like “Hey, ChatGPT” - you need to tap the icon on the screen.

ChatGPT became the first major AI app on CarPlay. But back in February, reports said that iOS 26.4 would also support Claude and Gemini.