Junda

2.6K posts

@samwize

iOS Engineer @JupiterExchange, ex-@Poloniex, ex-ShopBack. Vested in 2 kids 🇸🇬 Funded by making apps. Guardian for $kobiki 🍭

Singapore · Joined October 2007
788 Following · 1.9K Followers
Junda@samwize·
@lifeof_jer Not surprised. I’ve seen Claude confess like this more than once.
English
0
0
1
49
Evan Bacon 🥓@Baconbrix·
@agispas Streaming the xpc signal from /Simulator.app to a locally hosted webview
English
9
3
113
11K
Evan Bacon 🥓@Baconbrix·
Building an iPhone app directly in Codex desktop with iOS simulator
English
151
235
3.9K
1.1M
Junda@samwize·
@Dimillian A heartbeat at last! Took a while to get it working. Turns out it only works with a NEW thread/chat; otherwise Codex complains that it doesn't have the automation tool.
English
0
0
0
60
Junda@samwize·
@trq212 Great reminder to make good use of /rewind
English
0
0
0
119
Junda@samwize·
@claudeai Claude just keeps shipping, while I'm still waiting for Codex's /loop.
English
0
0
0
63
Claude@claudeai·
Now in research preview: routines in Claude Code. Configure a routine once (a prompt, a repo, and your connectors), and it can run on a schedule, from an API call, or in response to an event. Routines run on our web infrastructure, so you don't have to keep your laptop open.
Claude tweet media
English
756
1.5K
18.5K
4.6M
Junda@samwize·
@bffmike @0x__tom How is this different than xcodebuildmcp, mobile-mcp, agent-device, FlowDeck...
English
0
0
1
41
Tom | generative AI guy in Dubai
Everyone building iOS apps should see this. It could genuinely change the game. A tool just appeared that lets you point Claude Code at the simulator, say "test everything," and it autonomously tests every screen, feature, and flow in your app. The demo video is wild, and iOS developers overseas are buzzing about it. Here's what's going on 👇

① It gained "eyes" through the accessibility tree
Claude Code reads the accessibility tree of the iOS app running in the simulator and works out the UI structure on its own: button labels, input field types, tap-target positions, all retrieved as structured data. The key point is that it operates semantically, not by coordinates, so nothing breaks when the screen size or layout changes. The same test runs on an iPhone SE and an iPad Pro. That's quietly a big deal.

② Combining it with screenshots sharpens accuracy dramatically
The accessibility tree alone misses things like custom UI, so the tool supplements it with visual information from screenshots: a hybrid of structured data plus image recognition. The AI is reproducing the same cognitive process a human uses when poking at an app. According to the ios-simulator-skill docs, the accessibility tree can be processed at roughly 10-50 tokens per screen, so it's practical cost-wise too. And with the Claude Agent SDK natively integrated in Xcode 26.3, autonomous runs over a whole project have become stable.

③ A future where XCUITest is unnecessary
This is the biggest one. Any iOS developer knows that writing and maintaining XCUITest scripts is genuinely painful: tests break every time the UI changes, and fixing them takes about as long as the feature work itself. We're approaching a world where a prompt is enough. Several OSS projects such as claude-mobile-ios-testing and ios-simulator-skill are already on GitHub, and an ecosystem is forming. (That said, in practice you need to predefine the test flow in CLAUDE.md, so maintenance cost doesn't drop all the way to zero.)

④ It reportedly found every bug the developer had missed in 8 minutes
In the demo it crawled the entire app in 8 minutes and surfaced bugs the developer himself hadn't noticed, then checked the debug logs and produced a structured bug report. Note, though, that this was a demo environment (a fairly simple map app), so usefulness on large apps still needs verification. Don't take it at face value.

I've actually been experimenting with hooking Claude Code's skills feature up to the iOS simulator, and the accessibility tree is more accurate than I expected. It's an interesting twist that Apple's years of serious investment in accessibility are paying off in this form.

QA automation is shifting from "can we do it?" to "how do we put it into operation?". For iOS developers who hate writing tests this is a godsend, but for QA engineers it's a fairly harsh story. If your edge was "I can write test scripts," now is the time to build your next skill.

If you build iOS apps, have you tried it yet? I'd love to hear how it feels.
Japanese
22
328
3.4K
308.7K
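The "semantic, not coordinate-based" point above can be sketched in a few lines: given an accessibility-tree snapshot for one screen, an agent locates elements by label rather than by pixel position, so the same query works on any device size or layout. The JSON schema and field names below are illustrative assumptions, not the output of any real tool.

```python
import json

# Hypothetical, simplified accessibility-tree snapshot for one screen,
# similar in spirit to what a simulator automation tool might expose.
# (Field names here are illustrative assumptions, not a real schema.)
SNAPSHOT = """
{
  "role": "Window",
  "children": [
    {"role": "TextField", "label": "Email",   "frame": [24, 200, 340, 44]},
    {"role": "Button",    "label": "Sign In", "frame": [120, 640, 200, 44]}
  ]
}
"""

def find_by_label(node, target):
    """Depth-first search by semantic label rather than coordinates,
    so the lookup survives layout and device-size changes."""
    if node.get("label") == target:
        return node
    for child in node.get("children", []):
        hit = find_by_label(child, target)
        if hit is not None:
            return hit
    return None

tree = json.loads(SNAPSHOT)
button = find_by_label(tree, "Sign In")
print(button["role"])  # Button
```

A coordinate-based script would hard-code the tap point `(220, 662)` and break on an iPad; the label lookup does not, which is the property the thread is excited about.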
Anthropic@AnthropicAI·
New on the Engineering Blog: Building Managed Agents—our hosted service for long-running agents—meant solving an old problem in computing: how to design a system for “programs as yet unthought of.” Read more: anthropic.com/engineering/ma…
English
392
459
3.6K
565.5K
rohildev@rohildev·
I created a Private Second Brain 🧠 for you. It’s called Dump. I used Slack, Twitter bookmarks, and Apple Notes to store things, but finding old info was painful. Slack’s 90-day limit made it worse. Many founders faced the same issue, so I built Dump. Dump is your private second brain. It stores everything on your device or iCloud and helps you retrieve information with context, exactly when you need it. 100% privacy. Would love to hear your thoughts.
English
177
51
936
142.1K
Junda@samwize·
@9to5mac @mvcmendes 84% more apps, yet same human review team, since 2008. Maybe it's time AI reviews what AI builds.
English
1
0
0
302
Junda@samwize·
@jonoringer @X Step 0 you left out: set up billing. There's no more free API tier
English
1
0
1
160
Jon Oringer@jonoringer·
This is huge: @X released an MCP server today. How to connect X to your 🦞:

**Step 1: Run the XMCP Server**

git clone github.com/xdevplatform/x…
cd xmcp
cp env.example .env

Edit the .env file with your X OAuth consumer key and secret. Set the callback URL to http://127.0.0.1:8976/oauth/callback in your X Developer app. For safety, add an allowlist such as:

X_API_TOOL_ALLOWLIST=searchPostsRecent,createPosts,getUsersMe,getPostsById,likePost,repostPost

Then run:

python -m venv .venv && source .venv/bin/activate
pip install -r requirements.txt
python server.py

The server will be available at http://127.0.0.1:8000/mcp. Complete the OAuth flow on first run and keep this process active.

**Step 2: Add XMCP in @OpenClaw**

Use the following command:

openclaw mcp set x '{ "url": "http://127.0.0.1:8000/mcp" }'

Verify with:

openclaw mcp list
openclaw mcp show x

**Step 3: Test the Integration**

Restart the OpenClaw agent or reload the MCP configuration if required. Test by sending these prompts to OpenClaw in your chat app:

- Search recent posts about MCP on X and summarize the top trends
- Draft and post this thread on X
- Get my X profile information
- Like the latest post from @xdevplatform

OpenClaw will use the XMCP tools automatically when relevant.

**Key Benefits**

- OpenClaw provides persistent memory and works across multiple messaging platforms.
- XMCP delivers standardized access to X API functionality.
- Combined, they enable an agent that can research trends, post content, engage with posts, and report results within your existing chat workflows.

**Safety and Configuration Notes**

Start with a minimal tool allowlist in the XMCP .env file and expand gradually after testing. The allowlist can be updated but requires restarting the XMCP server. Monitor logs in both the XMCP server and OpenClaw for troubleshooting. X actions performed by the agent are public.

XMCP repository: github.com/xdevplatform/x…
OpenClaw MCP documentation: docs.openclaw.ai/cli/mcp
English
78
214
2K
309.6K
Linus ✦ Ekenstam@LinusEkenstam·
NVIDIA just killed the awkward pause in voice AI 😱 PersonaPlex 7B is a real-time conversational model that listens AND speaks simultaneously. Like actually interrupts you mid-sentence like a human. Beat Gemini Live on dialog naturalness. 18x faster interruptions. 100% open source. Run it locally. No API bill. No latency.
English
200
446
4K
439K
Junda@samwize·
@jack At some point you're going to need your own phone and app store
English
0
0
0
321
jack@jack·
bitchat pulled from the china app store
jack tweet media
English
472
468
4.5K
648.6K
Jimmy Prime@jimmy_prime·
A few years ago I bought @JordanMorgan10's book "A Best-in-Class iOS App" and never managed to finish it, and he keeps writing more; it's now close to 1,500 pages Orz. Yesterday it suddenly hit me that I could use Codex's now practically unlimited quota to help me organize it into skills. I've done the Design and UX parts, tried them out, and they feel pretty good.
Jimmy Prime tweet media
Chinese
5
5
97
10.5K
Junda@samwize·
@lexrus Nice to know about mobile-mcp. I was just checking out agent-device CLI.
English
0
0
0
75
Lex Tang@lexrus·
Codex used xcodebuildmcp and mobile-mcp to validate its UX fix, while I was busy deciding what to tackle next from the todo list.
English
16
13
419
40.1K
Junda@samwize·
@AlexFinn At the rate local LLMs are going, the Anthropic/OpenAI IPO might not be such a big deal after all?
English
1
0
0
27
Alex Finn@AlexFinn·
I told you so. For months I’ve been telling you to buy Mac Minis, Mac Studios, and DGX Sparks. I told you AI companies were going to ban you. Reduce limits. Increase prices. Now it’s happening. All while local models get 100x better. My DMs are now filled with messages like this. I don’t care that Anthropic banned OpenClaw. Right now I have 3 Mac Studios, a Mac Mini, and a DGX Spark running incredible local models. You can never take those away from me. This isn’t even close to over either. Tokens will only get more expensive. Local models will only get better and smaller. The clock’s ticking. Own your intelligence before it’s too late
Alex Finn tweet media
Alex Finn@AlexFinn

It’s over. Anthropic just banned OpenClaw. Uncensored thoughts: 1. Massive mistake that will come back to bite them 2. Open source needs to win. If you have a local model running on your Mac mini, no corporation will ever be able to ban you 3. ChatGPT 5.4 is the best model. But it sucks compared to Opus in OpenClaw. I will continue to pay for the Anthropic API 4. I have no doubt the next OpenAI model will be optimized for OpenClaw and be excellent 5. In 6 months the local models will be as good as Opus 4.6 and all of this will be forgotten 6. It feels like, from a consumer sentiment perspective, things have flipped for OpenAI and Anthropic. They were the darlings when Opus 4.5 came out 7. Going to the Kanye concert right now, please don’t spoil the stage or set list in the replies 8. The best OpenClaw setup is now Opus as the orchestrator, then much cheaper models as the execution layer. If you do this properly you won’t be paying much more than $200 a month. I’m using Gemma 4 and Qwen 3.5 for execution on my DGX Spark and Mac Studio

English
143
38
675
109.4K
Junda@samwize·
@anything You got kicked out of the house and set up camp in Apple's backyard 😂
English
0
0
3
626
Anything@anything·
BREAKING: Apple is scared of vibe coding. They removed Anything from the App Store, so we moved app building to iMessage. Good luck removing this one, Apple
English
598
1K
16.6K
4.3M
Junda@samwize·
@twannl Gonna try out that skill 🚀
English
0
1
1
1.8K
SYOTOSHI@SyotoshiX·
I've compared many different crypto payment cards but the new @JupiterExchange card genuinely has some of the best cashback rates & rewards out there So I vibecoded an interactive tool to check how much you can earn with it including the fees for your region 🌏 Give it a try & find out! ↴ syotoshi.com/jupiter-card
English
10
29
85
4.9K
Junda@samwize·
@twannl Workaround until Xcode 27 comes
English
0
0
1
235