Jay Bharadia

844 posts

@jay_bharadia

Building https://t.co/fuoyrD1D27 | Vue.js expertise | Life Coach | Mindfulness | Passionate about Human Mind Understanding & Life Philosophy

Joined March 2022
477 Following · 119 Followers
Jay Bharadia retweeted
NotebookLM (@NotebookLM):
Because you wouldn’t let it slide… these are rolling out today for our most requested feature:
Prompt-Based Revisions: tweak, tailor, and tune your slides just by prompting the revisions you want.
PPTX Support: you can now export your Slide Decks (Google Slides coming next!)
Jay Bharadia retweeted
Nuxt (@nuxt_js):
Nuxt UI skills are now available on skills.sh:
$ npx skills add nuxt/ui
Nuxt tweet media
Jay Bharadia retweeted
Andrej Karpathy (@karpathy):
New art project. Train and run inference on GPT in 243 lines of pure, dependency-free Python. This is the *full* algorithmic content of what is needed. Everything else is just for efficiency. I cannot simplify this any further. gist.github.com/karpathy/8627f…
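Karpathy's gist isn't reproduced here, but the flavor of "dependency-free" GPT code is easy to illustrate. As a sketch only (this is not his code), here is the core GPT operation, scaled dot-product attention, written in plain Python with nothing but the standard `math` module:

```python
import math

def softmax(xs):
    # Numerically stable softmax over a plain Python list.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(q, K, V):
    # Single-query scaled dot-product attention: score the query
    # against every key, normalize with softmax, and return the
    # weighted mix of the value vectors.
    d = len(q)
    scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in K]
    w = softmax(scores)
    return [sum(wi * v[j] for wi, v in zip(w, V)) for j in range(len(V[0]))]

# The query matches the first key more strongly, so the output
# leans toward the first value vector.
out = attention([1.0, 0.0], [[1.0, 0.0], [0.0, 1.0]], [[1.0, 0.0], [0.0, 1.0]])
```

Everything a real GPT adds on top of this (batching, multi-head projections, GPU kernels) is, in Karpathy's framing, "just for efficiency."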
Jay Bharadia retweeted
Peter Yang (@petergyang):
Two great engineers reflecting on how the profession is fundamentally changing with AI
Peter Yang tweet media
Jay Bharadia retweeted
Maximilian (@maxedapps):
@karpathy Yeah, AI infographics are pretty amazing
Maximilian tweet media
Jay Bharadia retweeted
NotebookLM (@NotebookLM):
With Slide Decks, it's less about making slides and more about telling stories. Try out any of these 4 use cases and see for yourself!
1. Sources to storybook
2. Deep Research report to simplified slides
3. Messy notes into organized thoughts
4. Ugly slides to pretty slides
Jay Bharadia retweeted
Jaclyn Konzelmann (@jacalulu):
This is too good not to share! 🤯🍌 (sound ON 🔊) If you thought infographics were great… wait until you see an animated infographic.
Workflow: @GeminiApp → Nano Banana Pro for the infographic → Veo to animate it.
Nano Banana 2 prompt: "Create an infographic that explains the phases of the day, perfect for a 6 year old, and in the style of: distinctive claymation with wide expressive mouths, googly eyes, and charming humor. Matte clay textures, handcrafted props, and warm, practical set lighting define the look."
Veo prompt: "Bring this image to life. Animate each little scene."
Jay Bharadia retweeted
Josh Woodward (@joshwoodward):
Infographics in @NotebookLM are crazy good. We're starting to see a viral trend where people convert their LinkedIn into an infographic. Here's mine! Instructions in 🧵
Josh Woodward tweet media
Jay Bharadia retweeted
Tyler Crown 👑 (@RTylerCrown):
I had a bit of downtime in London this morning and was using Google's new NotebookLM product. Wow! This, coupled with the launch of Nano Banana Pro - don't sleep on Google!
I uploaded a PDF of my LinkedIn profile and used their new Infographic function. I asked to make it "Lord of the Rings" themed. See below. Not perfect, but pretty close!
Petition to replace all LinkedIn profiles, resumes, CVs, pitch decks, etc. with infographics from NotebookLM. You can pick your theme: Western Frontier, Space, etc.
Continue to be long Google and its suite of product launches. Thanks for your input, @joshwoodward!
Tyler Crown 👑 tweet media
Jay Bharadia retweeted
NotebookLM (@NotebookLM):
In honor of today's 🍌🍌 launch we decided to release not one but TWO new outputs: Starting with Infographics! Create customizable, high-quality, visual summaries of your sources. Information never looked so good. Rolling out to Pro users now and free users in the coming weeks!
Jay Bharadia retweeted
Google for Developers (@googledevs):
💫 Build with @antigravity, our new agentic development platform, evolving the IDE into the agent-first era. With Google Antigravity developers can:
✅ Orchestrate agents at a higher, task-oriented level
🤝 Run parallel tasks with agents across workspaces
🛠️ Build anything with Gemini 3 Pro
Download the public preview → goo.gle/AGY
Jay Bharadia retweeted
Sam (@SamNewby_):
just saw this on LinkedIn wtf is toon?
Sam tweet media
Jay Bharadia retweeted
Andrej Karpathy (@karpathy):
My pleasure to come on Dwarkesh last week; I thought the questions and conversation were really good. I re-watched the pod just now too. First of all, yes, I know, and I'm sorry that I speak so fast :). It's to my detriment, because sometimes my speaking thread out-executes my thinking thread, so I think I botched a few explanations due to that, and sometimes I was also nervous that I was going too much on a tangent or too deep into something relatively spurious. Anyway, a few notes/pointers:

AGI timelines. My comments on AGI timelines look to be the most trending part of the early response. The "decade of agents" is a reference to this earlier tweet x.com/karpathy/statu… Basically my AI timelines are about 5-10X pessimistic w.r.t. what you'll find at your neighborhood SF AI house party or on your twitter timeline, but still quite optimistic w.r.t. a rising tide of AI deniers and skeptics. The apparent conflict is not real: imo we simultaneously 1) saw a huge amount of progress in recent years with LLMs while 2) there is still a lot of work remaining (grunt work, integration work, sensors and actuators to the physical world, societal work, safety and security work (jailbreaks, poisoning, etc.)), and also research to get done, before we have an entity that you'd prefer to hire over a person for an arbitrary job in the world. I think that overall, 10 years should otherwise be a very bullish timeline for AGI; it's only in contrast to present hype that it doesn't feel that way.

Animals vs Ghosts. My earlier writeup on Sutton's podcast: x.com/karpathy/statu… I am suspicious that there is a single simple algorithm you can let loose on the world that learns everything from scratch. If someone builds such a thing, I will be wrong and it will be the most incredible breakthrough in AI. In my mind, animals are not an example of this at all - they come prepackaged with a ton of intelligence by evolution, and the learning they do is quite minimal overall (example: a zebra at birth). Putting our engineering hats on, we're not going to redo evolution. But with LLMs we have stumbled on an alternative approach to "prepackage" a ton of intelligence in a neural network - not by evolution, but by predicting the next token over the internet. This approach leads to a different kind of entity in the intelligence space: distinct from animals, more like ghosts or spirits. But we can (and should) make them more animal-like over time, and in some ways that's what a lot of frontier work is about.

On RL. I've critiqued RL a few times already, e.g. x.com/karpathy/statu… First, you're "sucking supervision through a straw", so I think the signal/flop is very bad. RL is also very noisy, because a completion might have lots of errors that get encouraged (if you happen to stumble onto the right answer), and conversely brilliant insight tokens that get discouraged (if you happen to screw up later). Process supervision and LLM judges have issues too. I think we'll see alternative learning paradigms. I am long "agentic interaction" but short "reinforcement learning" x.com/karpathy/statu… I've seen a number of papers pop up recently that are imo barking up the right tree, along the lines of what I called "system prompt learning" x.com/karpathy/statu…, but I think there is also a gap between ideas on arxiv and an actual, at-scale implementation at an LLM frontier lab that works in a general way. I am overall quite optimistic that we'll see good progress on this dimension of remaining work quite soon; e.g. I'd even say ChatGPT memory and so on are primordial deployed examples of new learning paradigms.

Cognitive core. My earlier post on the "cognitive core": x.com/karpathy/statu… - the idea of stripping down LLMs, of making it harder for them to memorize, or actively stripping away their memory, to make them better at generalization. Otherwise they lean too hard on what they've memorized. Humans can't memorize so easily, which now looks more like a feature than a bug by contrast. Maybe the inability to memorize is a kind of regularization. Also my post from a while back on how the trend in model size is "backwards" and why "the models have to first get larger before they can get smaller": x.com/karpathy/statu…

Time travel to Yann LeCun 1989. This is the post that I did a very hasty/bad job of describing on the pod: x.com/karpathy/statu… Basically - how much could you improve Yann LeCun's results with the knowledge of 33 years of algorithmic progress? How constrained were the results by each of algorithms, data, and compute? A case study thereof.

nanochat. My end-to-end implementation of the ChatGPT training/inference pipeline (the bare essentials): x.com/karpathy/statu…

On LLM agents. My critique of the industry is more in overshooting the tooling w.r.t. present capability. I live in what I view as an intermediate world where I want to collaborate with LLMs and where our pros/cons are matched up. The industry lives in a future where fully autonomous entities collaborate in parallel to write all the code and humans are useless. For example, I don't want an Agent that goes off for 20 minutes and comes back with 1,000 lines of code. I certainly don't feel ready to supervise a team of 10 of them. I'd like to go in chunks that I can keep in my head, where an LLM explains the code that it is writing. I'd like it to prove to me that what it did is correct; I want it to pull the API docs and show me that it used things correctly. I want it to make fewer assumptions and ask/collaborate with me when it's not sure about something. I want to learn along the way and become better as a programmer, not just get served mountains of code that I'm told works. I just think the tools should be more realistic w.r.t. their capability and how they fit into the industry today, and I fear that if this isn't done well, we might end up with mountains of slop accumulating across software, and an increase in vulnerabilities, security breaches, etc. x.com/karpathy/statu…

Job automation. How the radiologists are doing great x.com/karpathy/statu… and what jobs are more susceptible to automation and why.

Physics. Children should learn physics in early education not because they go on to do physics, but because it is the subject that best boots up a brain. Physicists are the intellectual embryonic stem cell x.com/karpathy/statu… I have a longer post that has been half-written in my drafts for ~a year, which I hope to finish soon.

Thanks again Dwarkesh for having me over!
Dwarkesh Patel (@dwarkesh_sp):

The @karpathy interview
0:00:00 – AGI is still a decade away
0:30:33 – LLM cognitive deficits
0:40:53 – RL is terrible
0:50:26 – How do humans learn?
1:07:13 – AGI will blend into 2% GDP growth
1:18:24 – ASI
1:33:38 – Evolution of intelligence & culture
1:43:43 – Why self driving took so long
1:57:08 – Future of education
Look up Dwarkesh Podcast on YouTube, Apple Podcasts, Spotify, etc. Enjoy!

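Karpathy's "sucking supervision through a straw" point is easy to make concrete. The toy Python sketch below is purely illustrative (the function names and numbers are invented for this example, not taken from any training codebase): outcome-only RL broadcasts a single scalar reward across every token of a completion, while process supervision attaches a score to each step.

```python
def outcome_credit(num_tokens, reward):
    # Outcome-only RL (REINFORCE-style): one scalar reward is broadcast
    # to every token of the completion, so a brilliant early step and a
    # sloppy late one receive identical credit.
    return [reward] * num_tokens

def process_credit(step_scores):
    # Process supervision: each step carries its own signal, at the cost
    # of needing a judge or labels for every step.
    return list(step_scores)

def miscredited(step_scores, outcome_reward):
    # Count tokens whose outcome-level credit has the wrong sign
    # relative to their true per-step quality.
    return sum(1 for s in step_scores if s * outcome_reward < 0)

# A completion whose first two steps were good but whose final step
# ruined the answer: outcome credit discourages all three equally.
print(outcome_credit(3, -1.0))              # [-1.0, -1.0, -1.0]
print(process_credit([1.0, 1.0, -1.0]))     # [1.0, 1.0, -1.0]
print(miscredited([1.0, 1.0, -1.0], -1.0))  # 2
```

In this toy run, two of the three tokens are penalized despite being good steps; that sign mismatch is the credit-assignment noise the tweet describes, and why one bit of outcome reward per long rollout makes the signal per flop so poor.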
Jay Bharadia (@jay_bharadia):
@adamwathan Incredible to see such rapid adoption in massive platforms, kudos to the Tailwind team!
Adam Wathan (@adamwathan):
Out of the top 10 websites on the entire internet, two of them (ChatGPT and Reddit) are built with Tailwind CSS. Pretty wild because it's a fairly young technology in "biggest websites on the internet" terms!
Jay Bharadia (@jay_bharadia):
@GoogleCloudTech Great initiative, hands-on learning boosts skills for the modern cloud developer!
Google Cloud Tech (@GoogleCloudTech):
Course: Gemini for Application Developers
Level: Introductory
Time: 1 hour 30 minutes
Syllabus: Using a hands-on lab, you will learn how to prompt Gemini to explain code, recommend Google Cloud services, and generate code for your apps.
Start module → goo.gle/4moBWEa
Google Cloud Tech tweet media
Jay Bharadia (@jay_bharadia):
@filamentphp Consistent progress and robust updates, congrats on empowering more creators!
Filament 🦒 (@filamentphp):
✨ Filament v4.1 is here! This release marks two months of great progress since v4 was marked as stable, including many bug fixes, new features, and community plugin support for v4. Here are 4 of the new features 👇
Jay Bharadia (@jay_bharadia):
@lovable This is inspiring, making AI development more accessible for creative builders!
Lovable (@Lovable):
Introducing Lovable Cloud & AI, a new chapter for vibe coding. Anyone can now build apps with complex AI and backend functionality, just by prompting. 100k+ new ideas, tools, and sites are built on Lovable daily. Today, we're redefining what's possible:
Jay Bharadia (@jay_bharadia):
@aidenybai The Ami update looks seamless, intuitive design truly enhances feedback workflows!
Aiden Bai (@aidenybai):
Introducing Ami: comment on the page → make a change to your app.