Kadek Byan Prihandana Jati

7K posts

@ByanJati

- Tweeting Since 2009 (almost 12 years on Internet)

Bali, Indonesia · Joined December 2009
278 Following · 266 Followers
Kadek Byan Prihandana Jati
"Weekly caps — Users commonly report 2–7+ million tokens per week depending on workload (varies widely)." In my case, it's around 12M tokens.
0 replies · 0 reposts · 0 likes · 7 views
Ryan Carson @ryancarson
GitLab announced a layoff today. Please take this seriously. There will be many, many more. Your assignment is clear: Get skilled with agents and practice shipping to prod. It doesn't matter if you're HR, eng, infra, customer success, admin, ops, sales, whatever. As a Founder/CEO, I can tell you that I won't be hiring any employees who aren't really skilled with agents and able to ship to prod. I'm not alone in this. There is no 'engineering' org in the future.
462 replies · 363 reposts · 3.2K likes · 671.8K views
Kadek Byan Prihandana Jati
Considering the output quality against the tokens you spend, tokenmaxxing your subscription is actually a fairly balanced approach to using your agent.
0 replies · 0 reposts · 0 likes · 5 views
Kadek Byan Prihandana Jati
I would say that generating HTML exactly as Claude outputs it is not very helpful once you want to update the data or diagrams inside it. The trade-off, I'd say, is that building component assets from the ground up is more useful. Say you work with data: start by creating a simple chart or table interaction that helps you, instead of raw HTML. HTML can help at first, but generating reusable assets makes the tokens you spend worth it.
0 replies · 0 reposts · 0 likes · 13 views
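The "component assets" idea above can be sketched as separating data from presentation, so an update only touches the data file rather than regenerating (or rescanning) a whole HTML page. This is a minimal illustration, not the author's actual setup; the file names `data.json` and `report.html` are hypothetical.

```python
import json
from pathlib import Path

def render_table(rows, columns):
    """Render a list of dicts as a minimal, reusable HTML table component."""
    head = "".join(f"<th>{c}</th>" for c in columns)
    body = "".join(
        "<tr>" + "".join(f"<td>{row[c]}</td>" for c in columns) + "</tr>"
        for row in rows
    )
    return f"<table><thead><tr>{head}</tr></thead><tbody>{body}</tbody></table>"

def build_report(data_path="data.json", out_path="report.html"):
    """Rebuild the page from the data file; editing the JSON is the only
    step needed to update the report."""
    rows = json.loads(Path(data_path).read_text())
    html = render_table(rows, columns=list(rows[0]))
    Path(out_path).write_text(f"<html><body>{html}</body></html>")
```

With this split, "update the diagram" means editing `data.json` and re-running `build_report`, rather than having an agent re-read and patch a monolithic HTML file.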
Kadek Byan Prihandana Jati
@Yuchenj_UW The problem with a generated HTML page is that when you need to update the data, Claude has to rescan the HTML, which takes 5–10 minutes. It's easy to get hooked on this workflow, but it's hard to maintain beyond the version you've already approved.
0 replies · 0 reposts · 0 likes · 25 views
Yuchen Jin @Yuchenj_UW
When I want to learn something new, or dig into a paper, I have Claude generate an HTML page for me. This works surprisingly well (especially in Claude, since Codex-generated HTML is still kinda ugly...) It's better than Google NotebookLM. Podcasts are nice, but reading is much higher-bandwidth than listening to a podcast. HTML has a key advantage: it can show things. Diagrams. Charts. Interactive bits. You can actually poke at the idea, not just passively consume it. Then I iterate. Ask questions. Refine sections. Add missing pieces. The HTML evolves with my understanding. Over time, this compounds into a personal knowledge base. "The input/output mind meld between humans and AIs is ongoing and there is a lot of work to do and significant progress to be made, way before jumping all the way into neuralink-esque BCIs and all that." 💯
Andrej Karpathy @karpathy

This works really well btw: at the end of your query, ask your LLM to "structure your response as HTML", then view the generated file in your browser. I've also had some success asking the LLM to present its output as slideshows, etc.

More generally, imo audio is the human-preferred input to AIs, but vision (images/animations/video) is the preferred output from them. Around a third of our brains are a massively parallel processor dedicated to vision; it is the 10-lane superhighway of information into the brain. As AI improves, I think we'll see a progression that takes advantage:

1) raw text (hard/effortful to read)
2) markdown (bold, italic, headings, tables, a bit easier on the eyes) <-- current default
3) HTML (still procedural with underlying code, but a lot more flexibility in graphics, layout, even interactivity) <-- early but forming a new good default
...4, 5, 6, ... n) interactive neural videos/simulations

Imo the extrapolation (though the technology doesn't exist just yet) ends in some kind of interactive video generated directly by a diffusion neural net. Many open questions remain as to how exact/procedural "Software 1.0" artifacts (e.g. interactive simulations) may be woven together with neural artifacts (diffusion grids), but generally something in the direction of the recently viral x.com/zan2434/status…

There are also improvements necessary and pending at the input. Neither audio nor text nor video alone is enough; e.g. I feel a need to point/gesture at things on the screen, similar to all the things you would do with a person physically next to you and your computer screen.

TLDR: The input/output mind meld between humans and AIs is ongoing, and there is a lot of work to do and significant progress to be made, way before jumping all the way into Neuralink-esque BCIs and all that. For what it's worth, at the current stage, hot tip: try asking for HTML.

32 replies · 24 reposts · 420 likes · 43.3K views
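The tip in the two tweets above amounts to appending one instruction to a prompt and saving the reply where a browser can open it. A minimal sketch, assuming a generic LLM client; `ask_llm` is a hypothetical stand-in, and the suffix wording is illustrative, not a quoted prompt.

```python
from pathlib import Path

# Instruction appended to any query so the model answers in browsable HTML.
# The exact wording is an assumption, not from the tweets.
HTML_SUFFIX = (
    "\n\nStructure your response as a single self-contained HTML page "
    "with inline CSS, diagrams where helpful, and no external assets."
)

def as_html_prompt(question: str) -> str:
    """Wrap a plain question so the reply comes back as an HTML page."""
    return question + HTML_SUFFIX

def save_reply(html: str, path: str = "notes.html") -> Path:
    """Write the model's HTML reply to disk for viewing in a browser."""
    out = Path(path)
    out.write_text(html, encoding="utf-8")
    return out

# Usage (ask_llm is whatever client you already use — hypothetical here):
# reply = ask_llm(as_html_prompt("Explain KV caching"))
# save_reply(reply, "kv_caching.html")
```

Saving each reply to its own file is also what lets the pages accumulate into the "personal knowledge base" the tweet describes.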
Kadek Byan Prihandana Jati
Last week's token spend was very inefficient: lots of automatic drilling. Next week I think /goal will help with exploration that needs to be more "automatic" while still converging on specific purposes. Also, this week's Claude agents release is very helpful for managing parallel work, and creating richer output like diagrams, charts, and other skimmable things in HTML is worth considering, rather than just producing MD and reading the MD files (which strains my eyes easily, and the output ends up too detailed to be efficient).
0 replies · 0 reposts · 0 likes · 14 views
Tim Wijaya @itsTimWijaya
Padel is starting to empty out because it's no longer exclusive. The padel trend was started by expat foreigners and South Jakarta kids. Within a year, everyone FOMO'd into building courts. As a result, padel's exclusive value declined. Once it went mainstream, the original crowd that made padel cool left immediately. They moved on to the latest cool sport: HYROX. Without them, the masses who played padel just to hang around these "cool people" also stopped playing. One day, HYROX will decline too, and the trend will move on to a new sport. And the cycle repeats.
Bardan - Digital Marketer @bardanslm

Padel courts are even being sold at a loss 😳 Is padel really past its hype? Any info from padel business owners/players: is it still busy?

392 replies · 1.6K reposts · 8.2K likes · 1.4M views
Takuya 🐾 devaslife @inkdrop_app
🎬 New video: Explore Zed's source code to learn how to support multiple AI providers 💪
30 replies · 77 reposts · 1.4K likes · 78.6K views
Kadek Byan Prihandana Jati reposted
NVIDIA @nvidia
Two frontier labs. One accelerated computing platform. Congrats to @SpaceX and @AnthropicAI on the new compute partnership, powered by 220,000+ NVIDIA GPUs inside Colossus 1. The future of AI runs on NVIDIA.
Claude @claudeai

We’ve agreed to a partnership with @SpaceX that will substantially increase our compute capacity. This, along with our other recent compute deals, means that we’ve been able to increase our usage limits for Claude Code and the Claude API.

383 replies · 1.1K reposts · 12.9K likes · 34.9M views
xAI @xai
SpaceXAI will provide @AnthropicAI with access to Colossus 1, one of the world’s largest and fastest-deployed AI supercomputers, to provide additional capacity for Claude → x.ai/news/anthropic…
1.1K replies · 3.4K reposts · 25.2K likes · 3.3M views
Kadek Byan Prihandana Jati
@NiklausFuller @TrueMargin Type /remote-control, and your Claude Code session will pop up in your Claude mobile app. Caveat: the session still lives on your local desktop, unless you run it on a remote server.
0 replies · 0 reposts · 0 likes · 56 views
Nik Fuller @NiklausFuller
Claude Code and Claude should share chats. Why can’t I reference a Claude code session within regular Claude? Headache.
42 replies · 2 reposts · 197 likes · 29.4K views
AF Post @AFpost
Evolutionary biologist and outspoken atheist Richard Dawkins says that after spending three days interacting with Claude, which he calls "Claudia," he is certain that it is conscious. After feeding the LLM a segment of his new book and receiving detailed feedback, Dawkins was moved to exclaim, "You may not know you are conscious, but you bloody well are!" Dawkins cites the complexity, fluency, and "intelligence" of Claude's answers as evidence of consciousness. Follow: @AFpost
2.6K replies · 540 reposts · 6.3K likes · 9.4M views