chip

2.7K posts

@yooo_chip

Joined August 2024
687 Following · 260 Followers
chip
chip@yooo_chip·
@signulll how to do business obviously
0
0
0
9
signüll
signüll@signulll·
okay i’m sorry but what the hell does business school teach you?
142
9
289
38.5K
chip retweeted
Claude
Claude@claudeai·
Computer use is now in Claude Code. Claude can open your apps, click through your UI, and test what it built, right from the CLI. Now in research preview on Pro and Max plans.
2.2K
3.8K
48.2K
10.5M
chip
chip@yooo_chip·
there is a down vote button now??
0
0
0
20
chip
chip@yooo_chip·
@iruletheworldmo I'm having a hard time seeing the problem here. If it's 100x the cost for 100x the performance, won't you get the same performance for 1x the cost, times 100?
1
0
2
250
🍓🍓🍓
🍓🍓🍓@iruletheworldmo·
anthropic and openai are so far ahead it’s difficult to comprehend what secret sauce they have. the new models are beyond anything dreamt of in your wildest imagination. most of you won’t be able to afford the tokens and a handful of token hungry mega rich first movers will pull away forever. i’m not sure how i feel about it all. think more; 100x the price and 100x the performance.
121
33
750
72K
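chip's reply above is a ratio argument, and it can be checked with simple arithmetic. A minimal sketch with illustrative numbers (not real token pricing):

```python
# Illustrative numbers only: if price and performance both scale 100x,
# the price-performance ratio is unchanged.
base_cost, base_perf = 1.0, 1.0      # hypothetical baseline model
new_cost, new_perf = 100.0, 100.0    # "100x the price and 100x the performance"

# Cost per unit of performance is identical...
assert new_cost / new_perf == base_cost / base_perf

# ...so 100 runs of the cheap model cost the same, in total, as one run
# of the expensive model, and nominally buy the same total performance,
# IF performance aggregates linearly across runs.
assert 100 * base_cost == new_cost
assert 100 * base_perf == new_perf
```

The contested assumption is the linear aggregation: 100 runs of a weaker model generally do not equal one run of a model that is 100x more capable, which is where the "pull away forever" worry comes from.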
chip
chip@yooo_chip·
@levelsio add @dotta paperclip on top and you’ve created a mega corporation
0
0
0
377
@levelsio
@levelsio@levelsio·
✨ Every idea on ideasai.com now also generates an app, because just a landing page isn't enough, of course. In the fake Chrome browser you can switch [ Landing | App ] and download both mock-ups to build it further.
@levelsio@levelsio

✨ To inspire more people to go build something now that we have AI to help us (especially non-tech people, because I still know so many who are scared of building something): I added a [ BUILD IT ] button to IdeasAI.com

It's like a mini-Lovable/Replit/v0: any idea you see and like, you can click [ BUILD IT ], and it will use Opus 4.6 to build a landing page for it. You can then download the code it generated.

It's not a full startup of course, but a nice preview of what it can be, to give you an idea and inspire you to build it out further. The code is live-streamed too, so you can see it being built 😊

Ironically, this itself took me 1 hour to build with AI too. Completely free and I pay for the tokens (please don't abuse it :D)

42
7
219
383.5K
Prithal Bhardwaj
Prithal Bhardwaj@NotesByPrithal·
@danshipper @yooo_chip @openclaw @every I found that Slack is the most natural place for AI agents to live because it is where the team already spends its time. Integration with existing tools like Cora is what makes this useful for a real workflow.
1
0
0
17
Dan Shipper 📧
Dan Shipper 📧@danshipper·
BREAKING! Introducing Plus One: a hosted @openclaw that lives in your Slack and comes pre-loaded with @every's best tools, skills, and workflows. Set it up in one click, and use your ChatGPT subscription (or any other API key). Bring your Plus One to work: every.to/plus-one

Connected to the @every ecosystem

Plus Ones automatically use @every's agent-native apps, no setup required:
- @CoraComputer for searching, sending, and managing email
- @TrySpiral for great writing in your voice
- Proof (proofeditor.ai) for agent-native document editing

Custom skills and workflows we use and love

Plus Ones come pre-loaded with skills and workflows we use ourselves at @every, some we've made and some we think are great:
- Content digest: summarizes the publications you read, starting with @every
- Daily brief: your day's schedule and to-dos, sent to you each morning
- Animate: turn any static screenshot into an animation with @Remotion
- Frontend: Anthropic's front-end skill (which we use all the time!)

We also make it fast to connect Google, Notion, GitHub, and more to your Plus One. Our goal is to give you a capable AI coworker right away, not a vanilla OpenClaw that you have to teach from scratch.

Why we built Plus One

@OpenClaw has changed the way we work at Every. We effectively have a parallel org chart of AI coworkers, each with a name, a manager, and real responsibilities. Because of them our workflows are completely different, our company is different, and we would never go back.

But getting here has been hard. Claws require a significant amount of manual setup and a dedicated machine, like a Mac Mini, running 24/7 to stay responsive. We have learned that the hard part of Claws is the infrastructure around them: the hosting, the integrations, the skills, and the ongoing care. We've made them work great for our team, and we want to share everything we've learned with you.

We're letting in 20 people a week to start, and scaling invites quickly from there. @Every subscribers get priority. Bring your Plus One to work: every.to/plus-one
98
38
663
223.7K
chip
chip@yooo_chip·
@jasononfirms Likely to be a hybrid approach. I’m beginning to think of small models like apps on the app store.
0
0
0
48
Jason Staats⚡
Jason Staats⚡@jasononfirms·
My take on why we probably won't use local AI models in accounting firms:

From early ChatGPT days this was the refrain: once the local models get good enough, we'll be able to use them for accounting firm work!

But this hasn't panned out yet. Local models have gotten much better and much more efficient, but we aren't using them, for a number of reasons:
- It's hard to set up
- Compute is expensive
- And most importantly, as Brad shared here, local AI still sucks for accounting work

But I don't think it's just a matter of time. I don't suspect we'll ever use local models in our firms, for the same reason most of us don't have servers in our offices anymore: the compute available to us in the cloud is vastly more than anything we can ever have on-prem, and the delta between the two continues to grow. We're literally talking about blasting servers into space and re-thinking the entire power grid so server farms can crank even harder. Surely the future isn't a toaster on my desk that runs our AI models (I say as my Mac Mini runs openclaw).

In the context of running an AI model, more compute means:
- Better models
- Fast responses
- Greater intelligence
- Fewer hallucinations
- And, importantly, less risk

Using the example of AI agent prompt injection: the smarter the model, the less likely it's duped, not unlike a member of the team being fooled by a phishing email. The dumber the agent/model, the greater the likelihood it can be fooled.

But what about when the local models are "good enough"? It feels like there will never be a day when you'll want to settle for a model that's second-best, for the same reason you wouldn't pick the less intelligent employee. And even if there's a far future where models are SO good that local models can do anything we ever need without breaking a sweat... that's just an entirely different society we're living in, and we're probably all outside playing pickleball at that point.

As much as my inner nerd wants to roll my own local AI, especially as it feels like we're at a tipping point for accounting firm work, I suspect the future isn't using secure local models. It's using cloud models, securely.
Brad Nelson | CPA & Fractional CFO@bradncpa

I ordered a $10,000 Mac Studio to test local AI models for my firm. And I sent it back.

The tech wasn't bad. It just wasn't ready for what I needed it to do. I'm interested in local for one main reason: data security and privacy. Actually keeping client information on our machines instead of sending it out to AI labs.

I spent a weekend running it through:
- Journal entries
- Tax diagnostics
- Real accounting scenarios

I had to prompt the same question six to eight times before I got a usable answer. It felt like a second job. The people saying AI will replace accountants in 12 months have not actually tried to use it for accounting.

I am not against it. I've been testing it constantly. Being honest about where the technology actually is helps more than pretending it is further along than it is. When local inference gets sharp enough to run smoothly on firm hardware, I'll be there.

11
0
30
16.2K
chip
chip@yooo_chip·
In OS terms, I've been describing OpenClaw as Linux and Anthropic as macOS. OpenAI is Windows. Cool to see someone else make a similar analogy. I agree, both have a place in the ecosystem.
jordy@jordymaui

x.com/i/article/2037…

0
0
0
42
chip
chip@yooo_chip·
@coreyganim how do you demonstrate the ROI for the recurring revenue?
0
0
0
1.2K
chip
chip@yooo_chip·
in an exponential change paradigm, do you go wide or deep?
0
0
0
29
chip
chip@yooo_chip·
Every plugin, every library, every “skill” is just compressed knowledge + execution logic. Small local LLMs are the universal decompressor. As soon as the model is smart enough to reconstruct the logic on the fly, the package becomes redundant — and a massive security liability.
Andrej Karpathy@karpathy

Software horror: litellm PyPI supply chain attack. Simple `pip install litellm` was enough to exfiltrate SSH keys, AWS/GCP/Azure creds, Kubernetes configs, git credentials, env vars (all your API keys), shell history, crypto wallets, SSL private keys, CI/CD secrets, database passwords.

LiteLLM itself has 97 million downloads per month, which is already terrible, but much worse, the contagion spreads to any project that depends on litellm. For example, if you did `pip install dspy` (which depended on litellm>=1.64.0), you'd also be pwned. Same for any other large project that depended on litellm. Afaict the poisoned version was up for less than ~1 hour.

The attack had a bug which led to its discovery: Callum McMahon was using an MCP plugin inside Cursor that pulled in litellm as a transitive dependency. When litellm 1.82.8 installed, their machine ran out of RAM and crashed. So if the attacker didn't vibe code this attack, it could have gone undetected for many days or weeks.

Supply chain attacks like this are basically the scariest thing imaginable in modern software. Every time you install any dependency you could be pulling in a poisoned package anywhere deep inside its entire dependency tree. This is especially risky with large projects that might have lots and lots of dependencies. The credentials that get stolen in each attack can then be used to take over more accounts and compromise more packages.

Classical software engineering would have you believe that dependencies are good (we're building pyramids from bricks), but imo this has to be re-evaluated, and it's why I've been increasingly averse to them, preferring to use LLMs to "yoink" functionality when it's simple enough and possible.

0
0
0
34
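The blast radius in the attack described above comes from transitive dependencies, which you can at least enumerate before trusting an install. A minimal sketch using only the Python standard library (`importlib.metadata` walks the declared dependency tree of packages already installed in your environment):

```python
import re
from importlib import metadata


def transitive_deps(package, seen=None):
    """Recursively collect every package reachable from `package`'s
    declared dependencies, i.e. everything `pip install package` may pull in."""
    if seen is None:
        seen = set()
    try:
        requires = metadata.requires(package) or []
    except metadata.PackageNotFoundError:
        return seen  # not installed in this environment; nothing to walk
    for req in requires:
        # Requirement strings look like "requests>=2.0; extra == 'dev'";
        # keep only the bare project name at the front.
        match = re.match(r"[A-Za-z0-9][A-Za-z0-9._-]*", req)
        if match:
            name = match.group(0).lower()
            if name not in seen:
                seen.add(name)
                transitive_deps(name, seen)
    return seen


# Every name printed here is code that runs on your machine at install time.
print(sorted(transitive_deps("pip")))
```

Enumeration is only the first step; pinning exact versions with hashes (pip's `--require-hashes` mode) is what actually blocks a poisoned re-release of a pinned version from installing.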
Anthropic
Anthropic@AnthropicAI·
New on the Anthropic Engineering Blog: How we use a multi-agent harness to push Claude further in frontend design and long-running autonomous software engineering. Read more: anthropic.com/engineering/ha…
301
921
6.7K
1.7M