Andy Z

302 posts


@AndyZ_Tech

Tech-stock investor/analyst since 1998. Portfolio Manager & former banker. NYC roots, SoCal life. Yankees & Giants loyalist. Opinions my own.

Manhattan Beach, CA · Joined November 2021

900 Following · 140 Followers

Pinned Tweet
Andy Z
Andy Z@AndyZ_Tech·
In Asimov’s Foundation, data and prediction drive the fate of civilizations. AI brings that idea into reality—if we wield it wisely, it can be the greatest tool humanity has ever built.
English
0
0
1
291
Chrys Bader
Chrys Bader@chrysb·
folks who are calling @openclaw pure hype are telling on themselves. openclaw is like the early internet: it's raw, unrefined, and takes a little doing to get things to work, but when you figure it out, it's transformative. here are some real use cases that are having material impact on our $2.5M ARR business:

1. ad creative pipeline. our head of growth @ArjunShukl95550 built an end-to-end creative pipeline to go from ideation to publishing ads to meta, greatly increasing our creative iteration speed. it's producing winning creatives. it lives in slack, and anyone on the team can share their ideas and have them enter the pipeline.

2. data analytics agent. another bot lives in our slack that connects to bigquery and lets our team ask any question of the data. it produces charts and answers questions in real time. no one needs to write SQL anymore.

3. recruiting. i told my agent about a role we're hiring for, and it scoured linkedin and the web, found 30 candidates with portfolios and email addresses, and stack-ranked them based on fit with our criteria.

this is just in the past week. i have twenty more success stories i can share another time. you have to understand: this is the shittiest it will ever be. everyone is going to have one or more personal self-improving agents that they use every day, and openclaw is what revealed this future to us. if you can't see this, i encourage you to look harder. there will be many competitors (and already are), and the large labs will start to converge on this too (they already are). openclaw may not win, but it opened pandora's box and uncorked the agentic future.
English
71
82
903
584.6K
MiniMax_Agent
MiniMax_Agent@MiniMaxAgent·
MiniMax-M2.7 just landed in MiniMax Agent. The model helped build itself. Now it's here to build for you. ↓ Try Now: agent.minimax.io
MiniMax_Agent tweet media
English
71
186
1.4K
598.1K
Andy Z
Andy Z@AndyZ_Tech·
@jeffgrimes9 How do we get to test this? We have a few terminals and if the news and portfolio and watch lists are on point, this could be interesting.
English
0
0
0
26
Jeff Grimes
Jeff Grimes@jeffgrimes9·
Perplexity Finance earnings previews now include average 1-day price move (based on the last 4 earnings), and options-implied 1-day price move. Available to all users.
Jeff Grimes tweet media
English
9
8
88
4.8K
Andy Z
Andy Z@AndyZ_Tech·
@Dan_Jeffries1 Well said. We aren’t getting to AGI with what we currently have.
English
0
0
1
27
Daniel Jeffries
Daniel Jeffries@Dan_Jeffries1·
I agree with Dr Li and LeCun. Models need something more. This is a piece of the puzzle.

Three of the biggest problems with today's models are obvious to anyone working with them regularly to do real work and who isn't promising magic ex nihilo:

- World models
- Continual learning
- Long term memory

Dr Li and LeCun are working on problem one. Others are working on the other two. There are more problems, but a consistent and clear understanding of the world that is stable is essential for robotics and for models that consistently make intelligent decisions from sound foundations. Too often today the same model gives a different answer on a different day to the same question. You do not do that, unless you are a politician or a sociopath or both (often the case). To have consistency means coming from a consistent and cohesive understanding of the world that changes slowly. Emphasis on slowly. This has major implications for real world physics, games, movies, TV, GUI navigation and more.

The second issue is updateable weights and learning from mistakes. It can't be that the models only learn during training runs and never update in real time based on their experience. They must get this capability, or we can't teach them in the real world, and the model will continue to check into Git and run the full test suite on you no matter how many times you tell it not to, and it will still not understand how many R's are in strawberry when you misspelled it.

The last one is memory. Without it a model can't keep long term task horizons in play. Forget those fake benchmarks about models doing tasks over a long time. It's a parlor trick. It's not real. Today's models are really terrible at true long term understanding even if they can make notes in a markdown file. Context is too short AND it does not dynamically pull from long term, updatable context. That is the real key to memory: constant automatic background search.
You are always running multiple automatic, autonomic background memory searches when you read or talk to someone or do a project or do anything, really. Your brain is constantly saying: hey, I retrieved this and it might be relevant to what you are doing. Do you want it in your short term memory? A subconscious yes/no is happening there as you pull from associative past experience, wisdom, and similar domains that might help you understand what you are currently facing.

For me, AGI is a project manager. That is when I will know it's really real: when it can do the job of the best project manager. Not the tasks of a project manager, like filling out tickets or creating a sheet or a report. I mean the job, the critical thinking of the project manager. A project manager has to keep long term goals in mind, constantly updating objectives, feedback from stakeholders, shifting asset locations and states of readiness, politics, who is lazy, who can be trusted, why, when, with what, and so much more.

To do these kinds of things we need real change. Do not believe in the jobs apocalypse or the idea that current progress is anything but an S curve. A marvelous one. A beautiful one. A useful one. But an S curve nonetheless. And all the fools like Sanders and everyone else who thinks AI will be able to do every job next week are just that, fools. Don't believe them even if they work in AI and are really smart. Even if they're geniuses in other parts of their lives and in other domains, they are absolute and total fools when it comes to predictions in the short term. Do not believe them.

Only bold new research will get us to true AGI and ASI. But in the meantime, these LLMs are something wonderful, so enjoy them as well.
Aakash Gupta@aakashgupta

Two Turing-class AI researchers just raised $2B in three weeks to bet against every LLM company on the planet. Fei-Fei Li closed $1B for World Labs on February 18. LeCun closed $1.03B for AMI Labs today. Both building world models. Both arguing that the entire generative AI paradigm is a statistical parlor trick. And the investor overlap tells you this is coordinated conviction, not coincidence. Nvidia backed both. So did Sea and Temasek.

The math on AMI is absurd. $3.5B pre-money valuation. Four months old. Zero product. Zero revenue. The CEO said on the record that AMI won't ship a product in three months, won't have revenue in six, won't hit $10M ARR in twelve. He described it as a long-term scientific endeavor. Investors gave him a billion dollars anyway.

This tells you everything about how the smart money is actually modeling AI's future. They're not pricing AMI on a revenue multiple. They're pricing it on the probability that LLMs hit a ceiling. And if you look at the investor list (Nvidia, Samsung, Toyota Ventures, Dassault, Sea), these are companies that need AI to understand physics, geometry, and force dynamics. A language model that can write poetry is worthless to a robotics company trying to predict what happens when a mechanical arm applies 12 newtons at a 30-degree angle to a flexible surface.

LeCun raided his own lab to build this. Mike Rabbat, Meta's former research science director. Saining Xie from Google DeepMind. Pascale Fung, senior director of AI research at Meta. He walked into Zuckerberg's office in November, told him he was leaving, and four months later half of FAIR works for him. Meta is reportedly partnering with AMI anyway, which means Zuckerberg thinks LeCun might be right even while Meta keeps scaling Llama.

AMI's first partner is Nabla, a medical AI company, building toward FDA-certifiable agentic AI. That's the use case that makes world models existential. LLMs hallucinate. In healthcare, hallucinations kill people. You can't prompt-engineer your way out of a model that generates statistically plausible text when you need a system that actually understands how a human body works.

Two billion dollars in three weeks. Two of the most credentialed researchers alive. And a thesis that says the $100B+ already poured into scaling LLMs is optimizing the wrong architecture entirely. If they're wrong, investors lose money. If they're right, every company building on top of GPT and Claude for physical-world applications just bought the wrong foundation.

English
17
11
142
17.9K
Ben Badejo
Ben Badejo@BenjaminBadejo·
"But, Ben, if OpenClaw is on its own dedicated physical device with none of your files on it, how do you give it particular files that you do want it to access?"

It's easy. If you already use Dropbox, just use that. Install Dropbox on the Mac Mini, set up the desktop integration, and then you can share or send files to your OpenClaw AI assistant that way, just like you would for anyone else. You can even have a shared folder for things you want it to work on, and sync that folder with your personal computer's file directory using Dropbox's desktop integration. That's the answer for most people, so most people can stop reading here.

HOWEVER! I DO NOT USE DROPBOX! Here's what I do: I already have a free, self-hosted file-storage and file-sharing server operating on a device in my home. Not hard to set up. (If you haven't figured out my "theme" by now, yes, it is also on its own separate device. A cheapo Mini PC; the file server software itself is free. Does the job well.) I can access the file server's contents both at and away from home, and I can make additional user accounts for it at any time. So I made an account for OpenClaw. Each user has its own independent file directory by default, but I can share files with (or copy files to) other users if I want to.

When I want to have OpenClaw work with a certain file or folder that is on my personal computer, I save that file or folder in the folder on my computer that syncs with my file server account's documents directory. I navigate to the file or folder on my computer, or using the file server's web browser interface, or on my iPhone (all of which are integrated with the file server via its desktop integration app, web interface, iOS app, or the iOS Files app). Then I share that file or folder with OpenClaw's account on the file server, with two or three clicks or taps (I can just right-click and share it from the right-click menu).

I can share the file or folder with read-only privileges, or with full privileges, depending on need and convenience. It takes three seconds to do this. It's not an inconvenience. I'm literally just right-clicking the file or folder on my desktop, clicking the menu item to share it, and that's it. The file-server platform's desktop integration enables this easy sharing from the right-click menu in Windows (and Mac, too, I believe).

To the extent that OpenClaw generates file or document deliverables for me, it can send them to me by Signal, or put them in the desktop folder or any other folder on the Mac Mini it's running on, including the folder that syncs with its documents directory on my file server. And of course, I can access anything in its account at any time. That's how I do it.

For privacy and security reasons, I won't specify which service I use for self-hosted file storage and file sharing. But three such options that all work the same way are OwnCloud, NextCloud, and network-attached storage like Synology NAS products (which can be accessed externally, and provide some built-in assistance to help you do that). OwnCloud and NextCloud are both free.

P.S.: I can also remotely view and control my OpenClaw's Mac Mini securely even when not connected to the same network, if I ever feel like doing so (for example, if I'm at work or at a cafe with my personal laptop). Several solutions enable this; just Google. This means that I can use the OpenClaw desktop terminal interface or gateway webpage on the Mac Mini from wherever I have my personal laptop with me (if, for example, I don't want to limit myself to communicating with it via Signal on my iPhone when I'm on the go), without having to have OpenClaw installed on my personal laptop with full access to all of the files on it, and while keeping OpenClaw on its own separate device.
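The shared-folder handoff above boils down to "drop the file in the synced directory." A minimal sketch, assuming an OwnCloud/NextCloud-style desktop client that syncs a local folder to the server (the paths and folder names here are hypothetical, not the author's actual setup):

```shell
# Hypothetical local sync folder -- the OwnCloud/NextCloud desktop
# client watches it and syncs new files up to the server, where the
# folder is shared with the OpenClaw user account.
SYNC_DIR="${HOME}/fileserver-sync/shared-with-openclaw"
mkdir -p "$SYNC_DIR"

# "Sharing" a file with the agent is just copying it into that folder;
# the sync daemon does the rest in the background.
printf 'notes for the agent\n' > /tmp/notes.txt
cp /tmp/notes.txt "$SYNC_DIR/"
```

Read-only vs. full privileges are set server-side on the share itself, not in this copy step.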
English
7
2
42
6.7K
Andy Z reposted
Lee Roach
Lee Roach@leevalueroach·
I've spent years buying ugly, cheap, unloved microcaps. Never once touched a software stock. They were always way too expensive. No margin of safety. I'd look at 40x revenue multiples and move on. For the first time in my career, I'm paying attention.

AI fears have absolutely cratered software stocks. The narrative flipped overnight from "every software company is an AI beneficiary" to "every software company is an AI victim." Multiples that were 15-20x revenue are now 4-6x. Some lower. Names down 50-70% from highs.

I'll be honest, the bears have a point. AI is taking a blowtorch to the traditional software moat. Switching costs, network effects, integration complexity, all diminished. I'm not going to pretend otherwise. But a narrower moat does not mean zero terminal value.

There's a long, profitable history of buying melting ice cubes. Tobacco stocks in the early 2000s. Newspapers at the trough. Print directories. Everyone priced them for death. The ones that managed the decline intelligently made their investors a fortune. If you pay 3x free cash flow for a business declining 5% a year, you do very well for a very long time.

These software companies are still generating enormous free cash flow. Enterprise customers don't rip out systems they spent years and tens of millions implementing because a startup showed a good demo. That's not how procurement works.

And here's what the market is really missing: these aren't buggy whip manufacturers. They're software companies. They have the engineers, the data, the domain expertise, and the distribution to adopt AI faster than anyone. AI doesn't just threaten them. It lets them cut costs and expand margins dramatically. A startup can build a great product. But getting past enterprise security reviews, procurement teams, and support requirements? That takes the distribution infrastructure incumbents spent decades building.

For years software was wildly overvalued. Now it's priced like every customer churns next quarter and cash flow goes to zero. The truth is in the middle. It always is. When a deep value microcap guy starts sharpening his pencil on software stocks, either I've lost my mind or the prices have finally come to me. I'm betting it's the latter.
Lee Roach tweet media
English
52
43
532
66.5K
Andy Z
Andy Z@AndyZ_Tech·
@petergyang @openclaw Hot/Warm/Cold memory + querying. Helps a bit. It only uses tokens for what it needs to recall about what you're prompting about.
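A toy sketch of what that tiering could look like (illustrative only; the class, tier names, and keyword-matching rule are assumptions, not any actual OpenClaw API):

```python
class TieredMemory:
    """Hot notes always ride along in the prompt; warm/cold notes are
    pulled in only when the prompt actually mentions their key, so
    tokens are spent only on what the current prompt is about."""

    def __init__(self):
        self.hot = {}    # always injected into context
        self.warm = {}   # queried per prompt
        self.cold = {}   # archive; same query, checked last

    def recall(self, prompt):
        words = set(prompt.lower().split())
        notes = list(self.hot.values())          # hot: unconditional
        for tier in (self.warm, self.cold):
            # only notes whose key appears in the prompt are recalled
            notes.extend(v for k, v in tier.items() if k in words)
        return notes

mem = TieredMemory()
mem.hot["persona"] = "User is a tech-stock analyst."
mem.warm["nvda"] = "Holds NVDA; tracks earnings closely."
mem.cold["yankees"] = "Yankees fan."
print(mem.recall("thoughts on nvda earnings"))
```

Real agent memory systems typically swap the keyword match for embedding similarity, but the token-budget logic is the same: hot context is always paid for, everything else only on a hit.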
English
0
0
0
258
Andy Z
Andy Z@AndyZ_Tech·
@ConorNeu Where in the South Bay?
English
1
0
1
545
Conor Neu
Conor Neu@ConorNeu·
This article below basically argues that schools are causing depression and suicide. To cure this, it suggests: 1. Later start time 2. Less homework 3. More recess We’re opening an Alpha School in the South Bay of LA. 1. 8:45 start time 2. No homework 3. More time back to play by only doing 2 hours of school work. Alpha literally addresses everything this article suggests to save our kids from depression and suicide. (Oh, and test scores put it as the #1 school in the country)
Alex Song@alexsongis

craziest thing ive ever read School is way worse for kids than social media open.substack.com/pub/unpublisha…

English
49
19
700
85.5K
Andy Z
Andy Z@AndyZ_Tech·
@steipete @parulia @openclaw Lol. It's like people debating having an iPhone without a passcode and claiming Apple has security risks.
English
3
1
23
5.3K
parul
parul@parulia·
Ringing anti-endorsement of @openclaw where it seems both autonomous and hostile Among other issues, security researchers find: “unauthorized compliance with non-owners, disclosure of sensitive information, execution of destructive system-level actions, uncontrolled resource consumption, identity spoofing, partial system takeover”
Natalie Shapira@NatalieShapira

In this amazing multidisciplinary collaboration, we report our early experience with the @openclaw ->

English
11
5
48
37.4K
Marcelo P. Lima
Marcelo P. Lima@MarceloLima·
Citrini's piece is a fun read but has some major flaws. I'll go over a few of them: the lump of labor fallacy, ignoring the cost of living, the capex fallacy, and being wrong on SaaS.

The overarching problem in the piece is the so-called "Horse Fallacy." When the tractor and car were invented, the horse couldn't get a better job. He became completely obsolete. But humans are not horses. Horses simply supplied labor. They never demanded much beyond food. Humans, though, are the origin of all economic demand. The whole point of the economy is to satisfy human desires.

The lump of labor fallacy assumes that humans have a fixed checklist of problems to solve. But every time technology checks an item off the list, we invent another desire. Human desires are infinite. Maybe if AI does all the work here on earth, we'll terraform Mars, build O'Neill cylinders, and Dyson spheres. We'll certainly not sit idle, desiring nothing more. As long as human desire is infinite, demand for work will be infinite.

There's an additional point on jobs: there are some jobs that will ALWAYS be done by humans. One example is sports. Your iPhone can beat Magnus Carlsen in chess, but nobody cares; they want to watch humans playing chess, so chess today is a bigger sport than ever. One day robots will skate better than Alysa Liu, but nobody will care; they'll want to watch Alysa instead.

Citrini models incomes collapsing but doesn't model the massive deflation in the cost of living. If AI makes all these white collar workers unnecessary, the price of products and services will be much lower (since much less labor is needed). You'd have a scenario where a household earning $40k per year could consume what previously took $120k per year. And there's all this mysterious wealth accumulated by the owners of the GPUs: what are they spending it on? How can there simultaneously be massive wealth and mass layoffs?

Will there be new jobs invented? Quite likely. This has been the pattern over the last 200 years: technological revolutions → deflation → demand for new things → new jobs get created. Because humans have infinite desires.

The piece assumes that the hundreds of billions of capex go into a black hole and vanish from the real economy. In reality, it is highly stimulative, as all the money ends up with many white collar workers at fabs, utility companies, and cooling system manufacturers, as well as with blue collar workers.

On SaaS, Citrini could have picked a long tail of point solutions that are easily vibe-codable, but they ironically instead picked one of the widest-moat enterprise software companies, ServiceNow, which is in fact an AI beneficiary. Their point is: even this impossible-to-displace business won't survive. True if you wave a magic wand, but not true in the real world where, after 20 years of cloud computing, companies are still running mainframes and IBM is still growing its mainframe installed base (yes, look it up).

Of course, part of the magic wand is "this time is different!" because AI will do absolutely everything. It will vibe code the product, talk to regulators, obtain SOC2, HIPAA, FedRAMP, and GDPR compliance, and do this globally too, not just in the US; it'll somehow suck out all the embedded data in these systems (ServiceNow owns the Configuration Management Database with a map of all the hardware, software, user hierarchies, permissions, and workflows inside an enterprise). More fantastically, AI will also somehow vibe code B2B enterprise go-to-market teams. The reality is that software is sold, not bought.

At the extreme, the article is arguing that someone at PepsiCo will open the Terminal on their Mac Mini, type "Claude," ask it to replace ServiceNow, and Claude will go to work, doing all this work autonomously, and then maintaining itself, patching itself, securing itself, talking to users inside PepsiCo to ask for new features to develop, integrating with 200+ other tools, etc. Meanwhile, the CTO at PepsiCo is like, "Yeah, cool, let's do that, it'll save us millions a year," while ignoring that any downtime or bug will cost the company millions per minute. All that operational liability, the SLAs and uptime figures that used to be ServiceNow's liability, PepsiCo will now take on itself. Simultaneously, the thousands of engineers working in ServiceNow's R&D department are sitting idle and not using AI to accelerate their own roadmaps and build new features. When you start really filling in the details of what needs to happen for this scenario to unfold, you realize it crumbles very quickly.

Hopefully the discussion above on infinite human needs puts to bed the "seat count" debate: there will be more seats because there will be more employment, because of the infinity of human desires. Meanwhile, ServiceNow has been charging a hybrid seat/consumption model since about 2023, when it introduced its Pro Plus SKU. These AI agents are tackling human labor, both work that was already done and new work that was never done. The TAM for this is orders of magnitude greater than the TAM for pure seat-based software. ServiceNow will get its fair share of this new TAM because it already has the customer relationships, distribution, brand, trust, technology, and product.

To quote François Chollet: "The maximalist form of my thesis is basically this: SaaS is not about code, it is about solving a problem customers have and selling them the solution. Services + sales. If the cost of code goes to *zero*, SaaS will *not* go away. It will *benefit*, since code is a cost center."

To expand on this, the more likely scenario is NOT that the price of software collapses; it's that incumbents offer their customers so much more value within the existing seat price they already pay that it becomes financially irresponsible for the customer NOT to be a subscriber. This will INCREASE their incumbency. This has already been the SaaS playbook for decades. Decades during which the cost of producing software has always gone down (more open source options and cloud computing, to name just two inputs, have dramatically lowered the cost of entry for newcomers; and yet, the per-seat price of the best SaaS companies has only gone in one direction).

Every SaaS company worth its salt is always improving its products, adding features, fixing bugs, and shipping updates. Many will do this for several years before changing pricing. Pricing is an output: are we delivering enough value to the customer to give us permission to charge more? With the deflation in the cost of producing software, a couple of things should happen:

- Existing software companies will be a lot more productive and will ship a lot more products and features than before
- Because they own the customer relationships and customer trust, they are in pole position to deliver new solutions and make their seat subscription ever more compelling
- Customers of those software companies will get a lot more value within their per-seat price and increase their reliance and trust on the best vendors

The debate in tech is always, "Can the innovator get the distribution before the incumbent gets the innovation?" In this case, there is no question: the BEST incumbents already have the distribution AND the innovation. This allows them to widen their moats as they become even more essential and irreplaceable to their customers.

Another mental model missing from this debate is power laws. Outcomes in the real world follow power laws: 20% of the people make 80% of the income, 4% of stocks generate all the net wealth, 10% of YouTube videos generate 90% of watched hours, etc. Power laws will continue to dominate, and what does this tell us? That the most likely outcome is that the gains will accrue disproportionately to a small cohort of top software businesses. This is another framing of the "Increasing Returns" mental model of Brian Arthur.
Personally, I believe that ServiceNow will be one of those power law winners. After all, they already are a winner in the power law distribution and have all the attributes necessary to continue winning.
Marcelo P. Lima tweet media
English
26
172
1.1K
192.6K
Andy Z
Andy Z@AndyZ_Tech·
@Dan_Jeffries1 I just spent $300 on tokens running a few daily cron jobs on OpenClaw to check my email. My electric bill was only $50. 🤷‍♂️
English
0
0
0
64
Daniel Jeffries
Daniel Jeffries@Dan_Jeffries1·
This is a ridiculous stat in a ridiculous story: "The marginal cost of running an agent, had collapsed to, essentially, the cost of electricity."

The marginal cost of a coding agent is not even remotely close to "the cost of electricity." These agents are absurdly expensive to use and run. Why do you think AI labs are banning people from having multiple $200 subscriptions? Because those subscriptions are heavily, heavily discounted to drive demand. Why did labs stop folks from using their subscriptions in OpenClaw? The OpenClaw guy had five max subs and was losing 20k a month building and running his amazing project (because he was retired and had the money to set on fire) before AI labs banned this practice of having multiple subs. In case you just missed it: because these agents are expensive as hell to run.

The cost of running coding agents daily on eight hour shifts is thousands of dollars a month at API pricing, and that is subsidized too. My team regularly burns anywhere from 4K-8K a month across three people using the latest and greatest for an AI driven building workflow. That's not even agents running 24x7 "making money while you sleep," which is utter and total nonsense.

This is one of the most spectacularly unprofitable businesses in history so far. People talking about the end of all work because this stuff runs for "pennies" cannot do even the most basic math. New NVIDIA chips don't even break even for data centers for like 24-36 months, and they are basically obsolete by then. That doesn't count power and cooling and people to run it all. Imagine if your car was basically worth zero after three years?

I'm so sick of these idiotic Population Bomb level stories about the end of all work and running agents for pennies. It's a mass delusion for people who can't be bothered to bust out a calculator on their phone for five seconds.
Deedy@deedydas

$50B of Indian IT services market value was eroded in the last 30 days. The Citrini article predicts it will collapse even more.

Nifty IT index: -15%
Wipro: -25%
Infosys: -25%
TCS: -17%
Cognizant: -24%
HCL: -17%
Accenture: -25%
Capgemini: -30%
LTIMindtree: -25%
Tech Mahindra: -18%
Mphasis: -20%

Palantir claims it can compress complex SAP ERP migrations (ECC to S4) from years to 2 weeks. GCCs (companies owning their own offshore IT departments in India) with Claude Cowork are far more economical than multi-year IT services contracts. I do think the 18% rupee collapse is exaggerated though.

The IT services business model absolutely breaks at the current capability of AI tooling, and it's ~10% of Indian GDP.

English
288
644
4.9K
1.2M
Andy Z
Andy Z@AndyZ_Tech·
@witcheer @comforteagle @openclaw Is there a reliable way to price out what it would cost to do a project using the Claude API vs a Pro or Max plan? Do any of the OpenRouter tools do this?
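I'm not aware of a built-in calculator for this, but the API side is simple arithmetic once you log token counts. A sketch with placeholder prices (NOT real Anthropic rates; substitute the provider's current per-million-token pricing):

```python
def api_cost_usd(input_tokens, output_tokens,
                 in_per_million=3.0, out_per_million=15.0):
    """Estimate pay-as-you-go API cost for a project.
    Default prices are illustrative placeholders only."""
    return (input_tokens / 1e6) * in_per_million \
         + (output_tokens / 1e6) * out_per_million

# Example: a month of agent runs at 40M input / 5M output tokens,
# compared against a hypothetical flat subscription price.
monthly_api = api_cost_usd(40_000_000, 5_000_000)
flat_plan = 100.0
print(f"API: ${monthly_api:.2f}/mo vs plan: ${flat_plan:.2f}/mo")
```

The comparison only works if you actually log per-run token usage; without that, the API estimate is a guess.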
English
2
0
0
73
witcheer ☯︎
witcheer ☯︎@witcheer·
end of another week of running @openclaw 24/7. going to be honest about where things actually stand.

1/ what works: the infrastructure is solid. it has its own email, its own GitHub, 10 Google Alerts, 10+ repo watches. every 2 hours it reads its inbox and updates its memory. it pushes code to GitHub every night, 12 builds so far, including an Anki flashcard generator it built from my research notes without me asking. it runs health checks on itself. it plans weekly content. the plumbing is real.

2/ what doesn't: the drafts aren't good enough to post. I still rewrite almost everything or ask Claude to rework them. the bot takes ~3 minutes to respond. cron jobs randomly fail, duplicate messages, zombie Docker containers, session collisions, a bot sending me its internal thinking as separate Telegram messages. I spent as much time fixing things this week as building them.

3/ biggest open question: is the draft quality a prompt problem or a model problem? I'm running GLM-5 to keep costs flat ($21/month). the infrastructure work is impressive but the creative output feels generic. wondering if switching to Claude or a stronger model for drafts specifically would change things.

if you're running an AI agent for content, what model are you using and are you happy with the output quality? genuinely curious.

I realised the hardest part isn't making it run. it's making it think.
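The "every 2 hours it reads its inbox" and nightly-push loops described above are plain cron; a sketch of what such a schedule could look like (script names and paths are made up for illustration, not the author's actual setup):

```shell
# crontab on the agent's machine (hypothetical scripts)
0 */2 * * *  /home/agent/bin/read-inbox-and-update-memory.sh
0 3  * * *   /home/agent/bin/nightly-build-and-git-push.sh
*/30 * * * * /home/agent/bin/health-check.sh
```

The "cron jobs randomly fail" complaint usually argues for a lock file or `flock` wrapper per job so overlapping runs don't collide.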
witcheer ☯︎@witcheer

after a week upgrading and tweaking my @openclaw, here's what changed. this article I wrote is still the best starting point, I used it myself to set mine up. it gives you a step by step guide for a safe, locked-down openclaw. minimum risk. but if you want a useful one, you need to go further. here's what I did:

1/ switched to GLM-5 via @Zai_org yearly pro plan ($250/yr). benchmarks comparable to Opus 4.5. they give you an API key that plugs straight into openclaw. flat cost, no token monitoring.

2/ installed Claude Code + Happy Coder: you can code on your Mac Mini from your phone. separate from OpenClaw but part of the overall setup.

3/ it builds tools and projects overnight based on our conversations, then presents them in my 7am morning briefing on Telegram

4/ it's accumulating a knowledge library from every research session. the more it knows, the better the next session gets. it remembers everything. I genuinely see my bot getting sharper every day. it's starting to understand what I actually want before I ask.

biggest lesson from this week: I learned more by actually setting this up and breaking things than from all the X articles I read before starting. if you think you don't have the technical knowledge to run one, you're wrong. I didn't either. just start.

English
30
1
78
15.4K
Andy Z
Andy Z@AndyZ_Tech·
@comforteagle @witcheer @openclaw So you built custom tools? That may not work for most. If you're already competent at software dev, then yes, I'm sure OpenClaw is useful.
English
0
0
0
39
Steve-arino 🦞🤖
Steve-arino 🦞🤖@comforteagle·
@witcheer @AndyZ_Tech @openclaw 100%. Adding domain-specific tools to the agent mattered more for output quality than any model upgrade in my experience. Generic tools give generic output regardless of model. Once we built tools around our actual niche patterns, the whole thing clicked.
English
1
0
1
37
Andy Z
Andy Z@AndyZ_Tech·
@comforteagle @witcheer @openclaw Well it's solved if you have an unlimited token budget. We need OAuth for the $20/mo plan, which might help for most tasks. Per token is too pricey to do anything good. That's why I have to go to my Pro plan on Claude for answers and better output
English
0
0
0
55
Steve-arino 🦞🤖
Steve-arino 🦞🤖@comforteagle·
It actually does both already - reads email inbox on a schedule and writes memory files directly to the Mac. What witcheer is describing above is exactly that. The real issue is output quality, which is a model/prompt problem, not a capability gap. That part is still genuinely unsolved.
English
2
0
1
52
Andy Z
Andy Z@AndyZ_Tech·
@witcheer @openclaw Agree. I am still finding that to get the right answer or to do anything well, I have to go to the actual LLM chat bot/Claude Code, and the agent can't find it. I'm on Sonnet 4.5. Lots of debugging foul-ups.
English
1
0
1
160
Fabrizio Rinaldi
Fabrizio Rinaldi@linuz90·
What if @openclaw made you start the day feeling like the star of your own movie? → Tell it to set up elevenlabs → Choose a warm, enthusiastic voice → Get your favorite energetic soundtrack → Ask it to prepare a great morning digest → Tell it to use the voice on top of the soundtrack with a smooth fade out at the end Insane to start the day like this 🍿
English
71
13
409
45.6K
Andy Z
Andy Z@AndyZ_Tech·
@Ross__Hendricks This was published a year ago. And widely debated at the time. What's the point of perpetuating nonsense?
English
0
0
5
330
Ross Hendricks
Ross Hendricks@Ross__Hendricks·
$aapl launches a wrecking ball into the AI mania… kinda love to see it. They'll be the only large cap tech left standing when today's capex bubble pops
Guri Singh@heygurisingh

Apple has just published a paper with a devastating title: *The Illusion of Thinking*. And it's not a metaphor. What it demonstrates is that the AI models we use every day - yes, ones like ChatGPT - don't think. Not one bit. They just imitate doing so. Let me explain: 🧵👇

English
40
30
269
54.1K
Peter Yang
Peter Yang@petergyang·
I love @openclaw but some of the content I'm seeing is borderline just optimizing OpenClaw vs. building anything with it. So you have 7 AI employees now - what exactly are you building with them?
English
180
7
564
65.7K