
Hedayat
@VisionCraft007
Motion Designer | Testing AI tools daily | AI updates daily | Helping creators use AI without sounding like AI | Content tips, design & marketing insights.

Cal AI has been acquired by MyFitnessPal 🚨

Henry and I started Cal AI as 17-year-old high school students with one mission: make calorie tracking easier with AI. In just 18 months, we've helped millions of people lose millions of pounds. And we broke $50m in ARR along the way.

We are at an incredible inflection point in history where ANYBODY can build a product that can improve lives and make millions.

As founders, we get a lot of praise. The truth is that this would not have been possible without our incredible 30+ person team. We are so proud of what this team has accomplished, and are thankful to everyone who has been instrumental in Cal AI's development and success.

Cal AI will continue as a separate app from MyFitnessPal. The combined team will share resources to continue helping people achieve their fitness goals!

In the three years from December 2019 to December 2022, Block $XYZ more than tripled its headcount, from 3,900 to 12,500. Unwinding less than half of an insane COVID overhiring binge has much more to do with Jack Dorsey's managerial incompetence than with whether AI is going to take your job.


Tonight, we reached an agreement with the Department of War to deploy our models in their classified network. In all of our interactions, the DoW displayed a deep respect for safety and a desire to partner to achieve the best possible outcome.

AI safety and wide distribution of benefits are the core of our mission. Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems. The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement. We will also build technical safeguards to ensure our models behave as they should, which the DoW also wanted. We will deploy FDEs to help with our models, and to ensure their safety we will deploy on cloud networks only.

We are asking the DoW to offer these same terms to all AI companies, terms we think everyone should be willing to accept. We have expressed our strong desire to see things de-escalate away from legal and governmental actions and towards reasonable agreements.

We remain committed to serving all of humanity as best we can. The world is a complicated, messy, and sometimes dangerous place.


The "supply chain risk" designation can theoretically trigger a cascade of other existential crises for Anthropic.

1. Forced Seizure Threats
The government could theoretically invoke the Defense Production Act. This law might allow the federal government to legally compel Anthropic to hand over their technology or remove safety guardrails against their will, effectively seizing operational control of their product.

2. The Enterprise Contagion
The decree states no contractor doing business with the military may conduct commercial activity with Anthropic. This extends far beyond cloud hosting. Massive data integration firms, defense hardware titans, and enterprise software companies holding federal contracts must sever ties.

3. Eviction from Classified Networks
Anthropic previously held a massive competitive advantage with approval to operate on military classified networks. By refusing the Pentagon's demands, they lose this status. Competitors will immediately fill the vacuum, permanently entrenching themselves in a defense ecosystem Anthropic may never re-enter.

4. The Allied Domino Effect
If the United States designates a company as a severe national security risk, allied nations notice. Intelligence partners across the "Five Eyes" (US, UK, Canada, Australia, New Zealand) and NATO will likely face immense pressure to follow the American lead, freezing Anthropic out of public sector contracts globally.

5. The Capital Squeeze
Training frontier AI requires billions in continuous funding. Investors despise regulatory uncertainty. The prospect of backing a company legally barred from doing business with the federal government and its contractors is terrifying. Hence, this federal siege could severely bottleneck Anthropic's future funding rounds.

How to never sound like an AI, but like yourself:

1. Don't write any prompt. Instead, go to Wikipedia.
2. Search "Signs of AI writing."
3. Open the full article. Read nothing.
4. Do Ctrl+A, then Ctrl+C. Copy the entire page.
5. Open a Google Doc. Paste everything in it.
6. Don't edit. Don't summarize. The full thing.
7. Rename the Google Doc "anti-ai-writing style."
8. Click File → Download → Markdown (.md)
9. Go to Claude.ai.
10. Don't type your prompt yet.
11. Click the '+' button. Upload your .md file first.
12. Start every chat with this prompt (copy & paste):

Prompt: "Read the uploaded file. It contains every known pattern of AI writing I want to avoid. Apply these as rules to everything you write for me. Do NOT start writing yet - ask me clarifying questions first."
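The copy/paste steps above can also be scripted. A minimal sketch, assuming the article can be fetched through MediaWiki's `action=raw` endpoint; the function name is mine, and the page title passed in is just the search phrase from step 2 (the real article may live under a slightly different title):

```python
from urllib.parse import quote

def raw_wiki_url(title: str) -> str:
    # MediaWiki's action=raw endpoint serves the page source. Note this is
    # wikitext rather than true Markdown, but it works fine as an uploaded
    # rules file for the prompt in step 12.
    return f"https://en.wikipedia.org/w/index.php?title={quote(title)}&action=raw"

# Illustrative title, taken from the search phrase in the post.
url = raw_wiki_url("Signs of AI writing")
print(url)
```

From there, something like `curl -o anti-ai-writing-style.md "$URL"` replaces steps 4–8, and you upload the resulting file to Claude.ai as in steps 9–12.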

USA has ChatGPT
USA has Grok
USA has Claude
USA has Gemini
USA has Llama
USA has Copilot

China has DeepSeek
China has Qwen
China has Ernie
China has GLM
China has Kimi
China has MiniMax

Europe has?

BREAKING: President Trump orders ALL Federal agencies in the US Government to immediately stop using Anthropic's technology.

Xcode 26.3 with Claude Agent & Codex hits the Mac App Store today! With advanced reasoning capabilities in Xcode, you can streamline workflows and build faster. And MCP support lets you easily connect other compatible agents.

Cool chart showing the ratio of Tab-complete requests to Agent requests in Cursor. As capability improves, every point in time has an optimal setup; that setup keeps changing and evolving, and the community average tracks it.

None -> Tab -> Agent -> Parallel agents -> Agent Teams (?) -> ???

If you're too conservative, you're leaving leverage on the table. If you're too aggressive, you're creating more chaos than useful work on net.

The art of the process is spending 80% of the time getting work done in the setup you're comfortable with and that actually works, and 20% exploring what might be the next step up, even if it doesn't work yet.
