James Ide

2.8K posts

@JI

Co-founder @Expo

California, USA · Joined July 2010
100 Following · 5.7K Followers
Pinned Tweet
James Ide @JI
At @Expo, we want to build the best framework and infrastructure for app creators. My co-founder, @ccheever, has long had a guiding vision for creators to go from an idea to an app in people's hands as fast as possible. I think about this as democratizing software:
[image]
7 replies · 12 reposts · 117 likes · 27.6K views
James Ide reposted
Tony Kim @toeknee_kim
@JI @kadikraman @expo I wonder if you could just hook up a redirect when the user agent is from an LLM, so the .md knowledge doesn't have to be baked into a skill
1 reply · 0 reposts · 0 likes · 28 views
Kadi Kraman 💚 @kadikraman
Did you know! You can append .md to any @expo blog or changelog post to get the content as markdown (this will also work with accept headers)
6 replies · 4 reposts · 59 likes · 15.5K views
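The two triggers Kadi describes (an explicit `.md` suffix, or content negotiation via the Accept header) could be decided with a check along these lines. This is a sketch, not Expo's actual implementation, and it naively ignores Accept-header q-values; `text/markdown` as the negotiated media type is an assumption the post does not spell out.

```python
def wants_markdown(path: str, accept: str = "") -> bool:
    """True if the markdown rendering of a post should be served.

    Mirrors the two triggers described above: an explicit .md suffix
    on the URL path, or an Accept header that mentions text/markdown
    (naive substring check; q-values are ignored).
    """
    return path.endswith(".md") or "text/markdown" in accept
```

From a client, the header route would look like `curl -H "Accept: text/markdown" https://expo.dev/changelog/<post>`, assuming `text/markdown` is indeed the type the server negotiates on.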
James Ide @JI
Whenever a hosted service says data is visible to “only you,” this almost always means “you and the company running the service.” That is acceptable for your open-source repo or recipe ideas for what to cook tonight, but there are legal consequences for other conversation topics, like law itself and, I suspect, health as well.

One of the most AI-forward things a democratic nation could do is establish the goal of Personal AI in Every Home, where Personal means private and owned by the individual. Inference must run locally by default, and hosted models should be treated like any other hosted service. Personal AI does not need to be, and will not be, SOTA, but it should be excellent for day-to-day intelligence. This contributes to a higher-trust society in which people trust each other more (including companies) and trust AI more.

Ironically, one could argue China has done the most in service of Personal AI, with the best open models and weights. The U.S. absolutely can compete, but I suspect the mandate needs to come from the top, with a goal of Personal AI for all of its people and a mission of a high-trust society.
Moish Peltz @mpeltz

Your AI conversations aren't privileged. Yesterday, Judge Jed Rakoff ruled that 31 documents a defendant generated using an AI tool and later shared with his defense attorneys are not protected by attorney-client privilege or the work product doctrine.

The logic is simple: an AI tool is not an attorney. It has no law license, owes no duty of loyalty, and its terms of service explicitly disclaim any attorney-client relationship. Sharing case details with an AI platform is legally no different from talking through your legal situation with a friend (which is not privileged).

You can't fix it after the fact, either. Sending unprivileged documents to your lawyer doesn't retroactively make them privileged. That's been settled law for years. It just hadn't been tested with AI until now.

And here's what really hurt the defendant: the AI provider's (Claude's) privacy policy, in effect when he used the tool, expressly permits disclosure of user prompts and outputs to governmental authorities. There was no reasonable expectation of confidentiality.

The core problem is the gap between how people experience AI and what's actually happening. The conversational interface feels private. It feels like talking to an advisor. But unless you negotiate an enterprise agreement that says otherwise, you're inputting information into a third-party commercial platform that retains your data and reserves broad rights to disclose it.

Judge Rakoff also flagged an interesting wrinkle: the defendant reportedly fed information from his attorneys into the AI tool. If prosecutors try to use these documents at trial, defense counsel could become a fact witness, potentially forcing a mistrial. Winning on privilege doesn't make the evidentiary picture simple.

For anyone advising clients or managing legal risk, this is a wake-up call. AI tools are not a safe space for clients to process their counsel's advice and regurgitate their legal strategy. Every prompt is a potential disclosure. Every output is a potentially discoverable document.

So what do we do about it? First, attorneys need to be proactive. Advise clients explicitly that anything they put into an AI tool may be discoverable and is almost certainly not privileged. Put it in your engagement letters. Make it part of onboarding. Don't assume clients understand this, because most don't.

Second, if clients want to use AI to help process legal issues (and they clearly will, increasingly), let's give them a way to do it inside the privilege. Collaborative AI workspaces shared between attorney and client, where the AI interaction happens under counsel's direction and within the attorney-client relationship, can change the analysis entirely. I'm excited to be planning this kind of approach, and I think it's where the industry needs to head. storage.courtlistener.com/recap/gov.usco…

0 replies · 0 reposts · 2 likes · 626 views
James Ide @JI
@toeknee_kim @kadikraman @expo This is our own in-house implementation which gives us more control over the markdown. The timing is certainly a coincidence!
3 replies · 0 reposts · 2 likes · 36 views
Tony Kim @toeknee_kim
@kadikraman @expo That's dope, I usually just send the URL to Claude Code/Codex. Is this due to the new Cloudflare markdown change?
1 reply · 0 reposts · 1 like · 219 views
James Ide @JI
@zeeg Is it also true that revenue before costs = amount of money you made?
0 replies · 0 reposts · 0 likes · 428 views
James Ide @JI
Been thinking about this a bit. In IBM's case, to their credit they have managed to keep pace with shifts in cloud. A former PM at my company used to say, "Flat is not free," with regards to growth, and IBM's stock has roughly followed QQQ (underperformed since QQQ's inception, but outperformed in the last 5 years). This is the path I foresee for most companies with shifts in AI, where they keep pace within a standard deviation of the index. Unexciting for the markets but still good for the economy.

My thinking for why IBM is not magnificent, as in Mag 7, is largely ambition and hiring. I'm sure there are other paths, and some Mag 7 companies are good at getting B+ employees to produce A- work (not my quote). IBM has been doing AI and other futuristic technology, like quantum, for a very long time, but they're not #1 or #2. As a thought experiment on hiring, IBM would be in the zeitgeist more if they had been part of the AI talent wars between OpenAI, Meta, et al. a while back, attracting the talent aspiring to vie for #1.

Kodak, on the other hand, has not kept pace whatsoever. @davidmarcus's recent post on PayPal's decline comes to mind, especially this phrase: "Choosing predictability over platform risk, again and again." x.com/davidmarcus/st… Kodak was married more to the familiar product of a film camera than to the outcome for the customer, capturing moments as photos. It was a mindset problem more than a skillset problem.

Going back to software companies, they're all technically capable of building agentic products; the skillset is there for the non-complacent. I expect most will have the mindset to see their product as a means to an end for the customer. They will evolve or replace their products to be more agent-powered and agent-ready, and provide better outcomes that customers can flexibly use. In contrast, I suspect the companies that don't make this transition will hold back out of fear. They may be afraid of losing control or revenue, or of upsetting anti-AI users. But especially when those concerns are real, it's necessary to have the mindset of looking for how to thread the needle. At my company I tell people, "Be the ones who replace ourselves."
1 reply · 0 reposts · 2 likes · 140 views
Steven Sinofsky @stevesi
@JI Maybe but why wasn’t IBM equipped to capitalize on the PC they invented or Kodak on the digital camera they invented? These were companies that invented more stuff than just about any other company.
2 replies · 0 reposts · 21 likes · 5.7K views
James Ide @JI
Agree: AI will increase work, as computers, the internet, and mobile did. These were technologies that people once said would give them back their time, but an expectation to be ever-present and ever-responding grew. It started with pagers, then email, then messaging and adjacent products like Slack. Now we are seeing people be ever-managing of agents.

It's already common to hear people say they feel they're not being as productive as they could be if they don't have an agent running in parallel with other tasks, including non-work time. Managing an agent for some (but not all) tasks doesn't require focused work. You can check an agent's state from your phone and give it a brief description of next steps. But it's still an interruption in several ways, and unlike a Slack message from another person, the agent tires only when it exhausts its token budget. The human brain needs restorative time, even from menial tasks.

Agents managing agents goes only so far; it is easy but not valuable to entertain the idea of more AI being the sole answer to AI. People will still be expected to manage agents, a new type of work. More generally, nearly every company has ideas for how it could do more, and few will limit their ambitions and skill ceilings to what AI alone can achieve. They will ask their employees to be a bit more like managers, but managers of a team that has the ability to run 24/7.
Konstantin @getKonstantin

I fear the future of work with AI. If you look at where the technology got us with instant messaging, it's clear that the future of work with AI won't be that we work less. It will be that we work more, and are expected to contribute prompts at any time to move the agents forward.

1 reply · 2 reposts · 12 likes · 1.5K views
Paul Graham @paulg
@KTmBoyle You get credit for ideas in some fields, like math and the sciences.
55 replies · 9 reposts · 647 likes · 46.6K views
Katherine Boyle @KTmBoyle
You don’t get credit for good ideas or being early to them. You get credit for execution. Most systems measure by actions, not words. Amazing how many people forget this.
92 replies · 98 reposts · 1.1K likes · 93.6K views
James Ide @JI
We haven't had a DeepSeek release in a while, but I think we will get there in a few more versions. I am also curious to see specialized local models. GPT is "good at everything," but what if there were smaller models as good as GPT at just a few types of tasks (good at reading emails, can't code)?
1 reply · 0 reposts · 1 like · 104 views
FerTech 🇨🇭 @FerTech
@JI @expo @harjtaggar The problem is the amount of power these local models need to be capable at a similar level to the big ones. Until this changes and we have much more capable models that run locally, we will have to continue relying on the giants' infrastructure or on LLM API providers.
1 reply · 0 reposts · 0 likes · 132 views
Harj Taggar @harjtaggar
You probably don't want to tweet about Clawdbot reading your email for you, chief
20 replies · 3 reposts · 289 likes · 37.6K views
Sam Lambert @samlambert
@rseroter No, you aren't. I've spoken to your execs and partner teams for years. I am at the point where even if I believed GCP wanted to help, they would be blocked by incompetence.
3 replies · 0 reposts · 21 likes · 3K views
Sam Lambert @samlambert
GCP is the most user-hostile cloud you can imagine. I recommend never working with them.
36 replies · 7 reposts · 385 likes · 54.9K views
James Ide @JI
@elithrar I'd say more than a little misleading on GCP's part. For those who don't have the email, the price doubling notice literally says:

> Action to take:
> 1. Review your billing
> 2. Budget Planning
> 3. [Recommended] Migrate to Verified Peering Provider (VPP)
0 replies · 0 reposts · 3 likes · 142 views
Matt Silverlock 🐀 @elithrar
@JI Yep! We're already on that path (good for customers to have stronger SLAs here), but it doesn't change the egress pricing. The mention of VPP in the context of that change is a little misleading.
1 reply · 0 reposts · 5 likes · 637 views
Matt Silverlock 🐀 @elithrar
(Unpleasantly) surprised to see Google significantly increase their egress costs here. 2x for egress out of North America to other peered (!) networks. Goes into effect May 1st.
[image]
15 replies · 16 reposts · 270 likes · 70.6K views
Expo @expo
Your EAS builds just got 30% faster ⚡️ Compiler caching with ccache is now live for everyone. No extra cost, zero config beyond one env variable. Works on Android (SDK 53+) and iOS (SDK 54+). expo.dev/changelog/comp…
10 replies · 12 reposts · 232 likes · 27.7K views
James Ide @JI
Worthwhile things for Claude Code to improve:

1. Skill invocation. Consistently finding & hearing that skills don't get picked up. Tuning the frontmatter feels like a job. A first pass at skill selection could be done with Haiku, or better yet a local language model, and Opus would choose the winner.
2. Permission reuse. Bias towards using permissions it is already granted. Consistently finding & hearing that it will make up new commands when there's an equivalent way to do the task with existing permissions.
2b. Permission timeouts. Consider an alternative if CC is waiting on permissions and the terminal is unfocused for 10 seconds.
3. Keep me logged in. Maybe it's because my Google Workspace session TTL is 7 days, but CC keeps logging me out. I cannot explain why I get logged out of every MCP server.
4. Add a CaseInsensitiveSearch tool. Calls like Search(pattern: "button|isButton") look goofy. More case-insensitivity in general unless the model is certain.

Fewer tool calls = better. More precise tool calls = better.
0 replies · 1 repost · 4 likes · 647 views
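The two-stage idea in item 1 (a cheap first pass shortlists skills, a stronger model picks the winner) can be sketched roughly as follows. Plain word overlap stands in for the small-model scoring pass here, and the function and skill names are made up for illustration.

```python
def prefilter_skills(query: str, skills: dict[str, str], top_k: int = 3) -> list[str]:
    """Cheap first pass over skill frontmatter descriptions.

    Ranks skills by word overlap between the user's query and each
    skill's description, returning a shortlist of top_k names. In the
    scheme above, a small/local model would do this scoring instead,
    and a stronger model (e.g. Opus) would pick the winner from the
    shortlist rather than scanning every skill itself.
    """
    query_words = set(query.lower().split())

    def overlap(name: str) -> int:
        return len(query_words & set(skills[name].lower().split()))

    return sorted(skills, key=overlap, reverse=True)[:top_k]
```

The point of the split is cost: the expensive model only sees a handful of candidates instead of every skill's frontmatter on every turn.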