Nicole Hennig

24.6K posts


@nic221

E-learning developer & AI educator: U Arizona Libraries. Former head of UX @mitlibraries. 📰 https://t.co/dFyIXs3Zc3 | 🦋https://t.co/JVcy3THgX8

Tucson, AZ · Joined April 2007
1.5K Following · 1.8K Followers
Nicole Hennig retweeted
Andrej Karpathy
Andrej Karpathy@karpathy·
A few random notes from coding with Claude quite a bit over the last few weeks.

Coding workflow. Given the latest lift in LLM coding capability, like many others I rapidly went from about 80% manual+autocomplete coding and 20% agents in November to 80% agent coding and 20% edits+touchups in December. i.e. I really am mostly programming in English now, a bit sheepishly telling the LLM what code to write... in words. It hurts the ego a bit, but the power to operate over software in large "code actions" is just too net useful, especially once you adapt to it, configure it, learn to use it, and wrap your head around what it can and cannot do. This is easily the biggest change to my basic coding workflow in ~2 decades of programming, and it happened over the course of a few weeks. I'd expect something similar to be happening to well into double-digit percent of engineers out there, while awareness of it in the general population feels well into low single-digit percent.

IDEs/agent swarms/fallibility. Both the "no need for IDE anymore" hype and the "agent swarm" hype are imo too much for right now. The models definitely still make mistakes, and if you have any code you actually care about I would watch them like a hawk, in a nice large IDE on the side. The mistakes have changed a lot - they are not simple syntax errors anymore, they are subtle conceptual errors that a slightly sloppy, hasty junior dev might make. The most common category is that the models make wrong assumptions on your behalf and just run along with them without checking. They also don't manage their confusion, they don't seek clarifications, they don't surface inconsistencies, they don't present tradeoffs, they don't push back when they should, and they are still a little too sycophantic. Things get better in plan mode, but there is some need for a lightweight inline plan mode. They also really like to overcomplicate code and APIs, they bloat abstractions, they don't clean up dead code after themselves, etc. They will implement an inefficient, bloated, brittle construction over 1000 lines of code, and it's up to you to be like "umm, couldn't you just do this instead?" and they will be like "of course!" and immediately cut it down to 100 lines. They still sometimes change/remove comments and code they don't like or don't sufficiently understand as side effects, even if it is orthogonal to the task at hand. All of this happens despite a few simple attempts to fix it via instructions in CLAUDE.md. Despite all these issues, it is still a huge net improvement, and it's very difficult to imagine going back to manual coding. TLDR: everyone has their developing flow; my current one is a small few CC sessions on the left in ghostty windows/tabs and an IDE on the right for viewing the code + manual edits.

Tenacity. It's so interesting to watch an agent relentlessly work at something. They never get tired, they never get demoralized, they just keep going and trying things where a person would have given up long ago to fight another day. It's a "feel the AGI" moment to watch one struggle with something for a long time just to come out victorious 30 minutes later. You realize that stamina is a core bottleneck to work, and that with LLMs in hand it has been dramatically increased.

Speedups. It's not clear how to measure the "speedup" of LLM assistance. Certainly I feel net way faster at what I was going to do, but the main effect is that I do a lot more than I was going to do, because 1) I can code up all kinds of things that just wouldn't have been worth coding before, and 2) I can approach code that I couldn't work on before because of knowledge/skill issues. So it is certainly a speedup, but possibly much more an expansion.

Leverage. LLMs are exceptionally good at looping until they meet specific goals, and this is where most of the "feel the AGI" magic is to be found. Don't tell it what to do; give it success criteria and watch it go. Get it to write tests first and then pass them. Put it in the loop with a browser MCP. Write the naive algorithm that is very likely correct first, then ask it to optimize it while preserving correctness. Change your approach from imperative to declarative to get the agents looping longer and gain leverage.

Fun. I didn't anticipate that with agents programming feels *more* fun, because a lot of the fill-in-the-blanks drudgery is removed and what remains is the creative part. I also feel less blocked/stuck (which is not fun), and I experience a lot more courage because there's almost always a way to work hand in hand with it to make some positive progress. I have seen the opposite sentiment from other people too; LLM coding will split up engineers based on those who primarily liked coding and those who primarily liked building.

Atrophy. I've already noticed that my ability to write code manually is slowly starting to atrophy. Generation (writing code) and discrimination (reading code) are different capabilities in the brain. Largely because of all the little, mostly syntactic details involved in programming, you can review code just fine even if you struggle to write it.

Slopacolypse. I am bracing for 2026 as the year of the slopacolypse across all of github, substack, arxiv, X/instagram, and generally all digital media. We're also going to see a lot more AI hype productivity theater (is that even possible?) alongside actual, real improvements.

Questions. A few of the questions on my mind:
- What happens to the "10X engineer" - the ratio of productivity between the mean and the max engineer? It's quite possible that this grows *a lot*.
- Armed with LLMs, do generalists increasingly outperform specialists? LLMs are a lot better at fill in the blanks (the micro) than grand strategy (the macro).
- What does LLM coding feel like in the future? Is it like playing StarCraft? Playing Factorio? Playing music?
- How much of society is bottlenecked by digital knowledge work?

TLDR: Where does this leave us? LLM agent capabilities (Claude & Codex especially) have crossed some kind of threshold of coherence around December 2025 and caused a phase shift in software engineering and closely related fields. The intelligence part suddenly feels quite a bit ahead of all the rest of it - integrations (tools, knowledge), the necessity for new organizational workflows, processes, diffusion more generally. 2026 is going to be a high-energy year as the industry metabolizes the new capability.
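The "write the naive algorithm first, then ask the agent to optimize it while preserving correctness" loop above can be sketched as a tiny test harness. The task (maximum-subarray sum) and the function names here are illustrative, not from the thread; the point is that the naive version becomes the success criterion the agent loops against.

```python
import random

def naive_max_subarray(xs):
    # Obviously-correct O(n^2) reference: check every non-empty subarray.
    return max(sum(xs[i:j]) for i in range(len(xs)) for j in range(i + 1, len(xs) + 1))

def fast_max_subarray(xs):
    # Optimized O(n) version (Kadane's algorithm) that an agent would be
    # asked to produce once the naive reference exists.
    best = cur = xs[0]
    for x in xs[1:]:
        cur = max(x, cur + x)
        best = max(best, cur)
    return best

# Success criterion for the agent loop: the fast version must agree with
# the naive reference on randomized inputs.
random.seed(0)
for _ in range(200):
    xs = [random.randint(-10, 10) for _ in range(random.randint(1, 20))]
    assert fast_max_subarray(xs) == naive_max_subarray(xs)
```

Because the check is declarative ("agree with the reference on random inputs") rather than imperative, the agent can keep iterating on the fast version unattended until the harness passes.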
1.6K replies · 5.4K reposts · 39.4K likes · 7.6M views
Nicole Hennig retweeted
Steven Johnson
Steven Johnson@stevenbjohnson·
Two big steps towards our vision for @NotebookLM as the ultimate research platform:
• Integrating Deep Research, with a set of only-at-Notebook features that let you explore the retrieved sources
• Launching a series of Featured Notebooks curated by @GoogleResearch

These developments are designed to enhance the full life cycle of research and scholarship: using the power of AI to assemble the knowledge base you need to advance your understanding, and then making your work accessible and intelligible to a wider audience using all the explanatory tools that Notebook offers.

If you've used Deep Research in the Gemini app, you already know that it's a pioneering advance in assembling complex, grounded information on any topic imaginable—collecting an entire trove of material for you and writing a nuanced research report that summarizes the findings. But because NotebookLM is designed to manage and explore potentially hundreds of sources, the Deep Research report is only the beginning of your journey. In our integration, Deep Research gives you an overview of all the sources it found during its research phase, with annotated commentary explaining how each source relates to your original query. You can then choose to import some or all of the sources into the notebook, along with the report itself, which you can then explore or transform using the full suite of tools that Notebook offers: grounded chat with citations, Mind Maps, Audio/Video Overviews, and much more.

And it's that suite of tools that makes the @GoogleResearch Featured Notebooks so compelling as well. Each notebook contains a curated collection of articles on a specific topic, published by the Google Research team. Think of them as a kind of knowledge base of Google's best thinking on a series of compelling research questions: How do scientists link genetics to health? How will quantum computing be useful?

If you're a specialist in these fields, you can read the original papers or ask nuanced questions in chat and advance your understanding of the latest developments. But these notebooks can also make complex but important topics understandable to non-specialists or students. Each notebook comes with pre-generated Audio and Video Overviews, flashcards, and other Studio artifacts designed to make the scientific and technological concepts accessible and interesting. And you can always explore the material with our new "Learning Guide" chat mode that effectively gives you a personal tutor to enhance your understanding.

There's much more to come on this front, but you can see in these two announcements how we see Notebook as both a workbench for conducting research and a publishing platform for sharing the results of that research once you're ready to make it public. Deep Research is rolling out this week to all users. The first two Google Research notebooks are live now, both of them deep dives into our most recent discoveries involving genetics and health. (Links in the following tweets.) We'll be publishing new notebooks in the series every other week or so for the next few months.
40 replies · 126 reposts · 1.1K likes · 104.6K views
Nicole Hennig retweeted
Kling AI
Kling AI@Kling_ai·
🎉Introducing Kling AI Avatar! Any Role, Any Voice! Simply add your Avatar image and audio, and prompt the emotions and expressions to bring your Avatar to life! Limited access only upon launch. Comment "Kling AI Avatar" & Repost to get early access! #klingai #klingavatar
1K replies · 919 reposts · 2.4K likes · 623.1K views
Nicole Hennig retweeted
NotebookLM
NotebookLM@NotebookLM·
NOTE: @NotebookLM is currently experiencing some issues and this may affect the quality of your experience. We're working on getting this resolved ASAP. Thank you for your patience!
70 replies · 36 reposts · 673 likes · 64.8K views
Nicole Hennig retweeted
NotebookLM
NotebookLM@NotebookLM·
Also, while we have your attention... Flashcards & Quizzes are rolling out TODAY! 🥳 You can now create customizable flashcards and quizzes in @NotebookLM. Stumped on a question? Tap the Explain button to receive an in-depth summary in the chat. Study on!
124 replies · 297 reposts · 2.4K likes · 216.1K views
Nicole Hennig retweeted
Aaron Tay
Aaron Tay@aarontay·
Elsevier Scopus AI adds Deep Research - It was a matter of time... Available now if your institution subscribes to Scopus AI - which we do. Let me kick the tires (1)
1 reply · 6 reposts · 5 likes · 627 views
Nicole Hennig retweeted
Ethan Mollick
Ethan Mollick@emollick·
It appears that the marginal energy used by a standard prompt to a modern LLM is fairly well established at this point: roughly 0.0003 kWh (8-10 seconds of streaming Netflix). Water is more complicated (0.25 mL to 5 mL+), depending on definitions. Training resources are less clear.
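The Netflix comparison in the tweet can be checked with back-of-envelope arithmetic. As a sketch, this assumes a streaming draw of ~0.12 kWh per hour (a figure consistent with the tweet's 8-10 second equivalence, but not stated in it; published estimates vary widely):

```python
PROMPT_KWH = 0.0003          # marginal energy per prompt, from the tweet
STREAM_KWH_PER_HOUR = 0.12   # assumed streaming power draw; estimates vary widely

# Seconds of streaming that consume the same energy as one prompt.
equivalent_seconds = PROMPT_KWH / STREAM_KWH_PER_HOUR * 3600
print(round(equivalent_seconds, 1))  # 9.0, within the tweet's 8-10 s range
```

Using a lower modern estimate for streaming (e.g. ~0.08 kWh/hour) would stretch the equivalence to about 13.5 seconds, which is why such comparisons depend heavily on the assumed baseline.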
19 replies · 52 reposts · 451 likes · 48.7K views
Nicole Hennig retweeted
Justine Moore
Justine Moore@venturetwins·
Spent a few days testing out Imagine - @grok's new image and video generator. And I've got a good sense of where it spikes, what the limitations are, and how to use it. Curious about how it stacks up? Keep reading 👇
75 replies · 47 reposts · 346 likes · 2.2M views
Nicole Hennig retweeted
Steven Johnson
Steven Johnson@stevenbjohnson·
For most users, @NotebookLM is a tool for understanding and exploring project-based information. But we also think it could be a distribution platform, amplifying expert knowledge. Today we're offering a preview of that vision: Featured Notebooks. Here's the backstory...

In a way, all of this dates back to a conversation I had with @joshwoodward in late July of 2022, one of the very first I had in Mountain View after joining Google Labs. We were talking about an idea that had come to us through the work of @kevin2kelly: the concept of "intelligence as a service" that he'd written about many years before—AI that you could tap into on demand like electricity or water. I told Josh that I'd always thought that sounded like a fascinating prospect, though I’d struggled to imagine what it would look like in practice. But what we'd seen with language models, and particularly the source-grounded language models that we were starting to experiment with, had suddenly made it clear to us both how Kelly's intelligence-as-a-service might actually work, and how it might give authors and publishers a new platform to share their wisdom with the world. "We're going to be able to bottle up the knowledge of experts," I said at some point. "And then people will just have that expertise on tap." Somehow the metaphor stuck: knowledge bottles. At some point that fall, our then-designer @gabeclapper created a mock for a future page where you could purchase and download expert collections of knowledge; we called it, irreverently, "the Bottleshop." (It looked quite a bit like the design we are launching today for Featured Notebooks, as it turns out.) I kept encountering the phrase in meetings over the next year, as it spread around Labs and other parts of Google. And every time I would say: love the concept, but please don't lock in on "knowledge bottles" as the official name. It was just a passing metaphor!

Naming aside, the underlying premise only grew more compelling over time. General purpose models trained on aggregations of human information were incredibly useful, but imagine how much more useful they would be if they were guided by (or were guides to) knowledge that had been peer-reviewed, edited, and researched by experts—knowledge that had a particular point of view. If you're looking for parenting advice, say, you don't just want the average of all parenting advice across the internet; you want parenting advice from a specialist whom you trust.

There was another reason why we were interested in this approach. The experience of using NotebookLM is heavily influenced by the quality of the sources you've curated in each notebook. There's literally nothing to do in the product until you load a source, and it can take time to assemble a truly rich notebook on a particular topic. Source curation (or context engineering, as we would now call it) is not something the average user is familiar with. So being able to showcase notebooks with high-quality sources on a range of topics was important for us just to explain how the product works.

So you can think of the Featured Notebooks we are launching today as a preview on two levels. For newcomers to NotebookLM, the notebooks are a preview of how useful the product can be when you've assembled a collection of sources for whatever project you're working on. But it's also a preview of a potential future where there are thousands of expert-curated notebooks on all sorts of topics that you can add to your own collection, to have the knowledge you need on tap.

Our launch lineup is:
• Longevity advice from legendary scientist @EricTopol, bestselling author of “Super Agers”
• Expert analysis and predictions for the year 2025, as shared in The World Ahead annual report by @TheEconomist
• An advice notebook based on bestselling author Arthur C. Brooks' "How to Build A Life" columns in @TheAtlantic
• A science fan’s guide to visiting Yellowstone National Park, complete with geological explanations and biodiversity insights
• An overview of long-term trends in human wellbeing, published by the University of Oxford-affiliated project @OurWorldInData
• Science-backed parenting advice based on psychology professor Jacqueline Nesi’s popular Substack newsletter, Techno Sapiens
• The Complete Works of William Shakespeare, for students and scholars to explore
• A notebook tracking the Q1 earnings reports from the top 50 public companies worldwide, for financial analysts and market watchers alike

In the blog post announcing the notebooks, there's a wonderful quote from @nxthompson, CEO of The Atlantic, that really captures the spirit of what we're trying to do here: "The books of the future won’t just be static: some will talk to you, some will evolve with you, and some will exist in forms we can’t imagine now," Nick said. "We’re delighted to partner with Google in its pioneering work on this front.”

It's the adaptability of the notebook format that I think makes this such a compelling platform for sharing knowledge. You can read the original texts in their entirety; you can ask questions or brainstorm ideas through a conversational interface, with citations pointing you back to the relevant passages from the original sources. Students or novices can ask for simpler explanations of complex topics, listen to Audio Overviews, or review Study Guides to help them master the material. But experts can ask more challenging questions, or quickly assemble the information they need. And all of the conversational interactions can unfold in over 80 languages, no matter what language the original sources were written in.

We are making these notebooks freely available to all users, either because they involve public domain information or because we have secured world rights for the material from the original authors or publishers. This is still very much an experiment, and there are many elements we would like to improve over time. You can't generate your own Studio artifacts, like Audio Overviews or Study Guides, in featured notebooks; all the artifacts have been pre-generated. Chat can be in any language, but the artifacts themselves are English-only for now. While we have tried to focus on topics with global relevance, the initial lineup is more U.S.-focused; if these turn out to be useful to people, we plan to diversify in terms of language, region, and topics.

The best way to get a sense of what "bottled knowledge" actually feels like in practice is to open one of these notebooks and ask for advice: ask for a sample itinerary for a 3-day trip to Yellowstone focused on wildlife, ask for advice on making a mid-life career change in the "How To Build A Life" notebook from The Atlantic, or ask the experts at The Economist how global economic trends might impact your industry. I think you'll find that the results are genuinely helpful, and maybe offer a hint of a new way of interacting with an author or publisher's work. And if you have ideas for future versions of featured notebooks, please let us know!
24 replies · 68 reposts · 458 likes · 102.9K views
Nicole Hennig retweeted
Cully Cavness
Cully Cavness@Electron_Cowboy·
A data center powered by the sun, AT NIGHT! @CrusoeAI and @RedwoodMat have teamed up to bring you the next step in energy-first AI infrastructure: Crusoe Spark modular data centers + second life EV batteries + ground mounted PV solar = cost effective base load solar AI computing.
33 replies · 63 reposts · 360 likes · 99.7K views
Aaron Tay
Aaron Tay@aarontay·
It doesn't help that many studies are confused, or at least unclear, about what they mean by "hallucinations". E.g. they consider any error in a metadata element a hallucination when actually the error is inherent in the source used. The few that get media attention are even worse. (6)
1 reply · 0 reposts · 1 like · 201 views
Nicole Hennig retweeted
Variety
Variety@Variety·
"No Other Land" director Basel Adra: "We call on the world to take serious actions to stop the injustice and to stop the ethnic cleansing of Palestinian people." | #Oscars
2.3K replies · 45.7K reposts · 175.8K likes · 27.8M views
Nicole Hennig retweeted
Sam Altman
Sam Altman@sama·
we put out an update to chatgpt (4o). it is pretty good. it is soon going to get much better, team is cooking.
2.1K replies · 1.2K reposts · 25.6K likes · 3.9M views
Nicole Hennig retweeted
Putrino Lab
Putrino Lab@PutrinoLab·
Wanted to check in about these new @NIH changes that are going to affect so many. First, let me remind everyone: I run 6 hybrid clinical/research centers, each with a specific clinical focus. Unlike the vast majority of my colleagues, federal funding sources account for less 1/
34 replies · 375 reposts · 1.5K likes · 175.6K views