AaronGPT
@aaronheckmann

4.5K posts
Earth · Joined February 2008
7 Following · 1.8K Followers
Pinned Tweet
AaronGPT @aaronheckmann ·
You've never learned anything from your mistakes. You've learned by having a theory, testing it, and proving yourself wrong. Without your theories, your mistakes are meaningless actions.
AaronGPT @aaronheckmann ·
Backwards compatibility looks like the way forward.
AaronGPT @aaronheckmann ·
Think twice before shipping your next semver major release. These are harder to pull off than ever. AI models don't know about the changes and gladly rewrite code back to the old syntax, which increases the probability of new models being trained on old syntax, perpetually... 🫤
AaronGPT @aaronheckmann ·
However, as much as I enjoy using AI, I also feel saddened by its presence, since so much of what I love to do is creative (software, music, art, writing) and AI is pretty good at these things already.
AaronGPT @aaronheckmann ·
My youngest son (18) had similar concerns a few months ago. I reassured him that while some of it will change, there will remain plenty of work to do in all aspects of society.
Andrew Ng @AndrewYNg

I recently received an email titled “An 18-year-old’s dilemma: Too late to contribute to AI?” Its author, who gave me permission to share this, is preparing for college. He is worried that by the time he graduates, AI will be so good there’s no meaningful work left for him to do to contribute to humanity, and he will just live on Universal Basic Income (UBI). I wrote back to reassure him that there will still be plenty of work he can do for decades hence, and encouraged him to work hard and learn to build with AI.

But this conversation struck me as an example of how harmful hype about AI is. Yes, AI is amazingly intelligent, and I’m thrilled to be using it every day to build things I couldn’t have built a year ago. At the same time, AI is still incredibly dumb, and I would not trust a frontier LLM by itself to prioritize my calendar, carry out resumé screening, or choose what to order for lunch — tasks that businesses routinely ask junior personnel to do. Yes, we can build AI software to do these tasks. For example, after a lot of customization work, one of my teams now has a decent AI resumé screening assistant. But the point is it took a lot of customization.

Even though LLMs can handle a much more general set of tasks than previous iterations of AI technology, compared to what humans can do, they are still highly specialized. They’re much better at working with text than other modalities, still require lots of custom engineering to get the right context for a particular application, and we have few tools — and only inefficient ones — for getting our systems to learn from feedback and repeated exposure to a specific task (such as screening resumés for a particular role). AI has stark limitations, and despite rapid improvements, it will remain limited compared to humans for a long time. AI is amazing, but it has unfortunately been hyped up to be even more amazing than it is.
A pernicious aspect of hype is that it often contains an element of truth, but not to the degree of the hype. This makes it difficult for nontechnical people to discern where the truth really is. Modern AI is a general purpose technology that is enabling many applications, but AI that can do any intellectual task that a human can (a popular definition for AGI) is still decades away or longer. This nuanced message that AI is general, but not that general, often is lost in the noise of today's media environment. Similarly, the progress of frontier models is amazing! But not so amazing that they’ll be able to do everything under the sun without a lot of customization.

I know VC investors who are scared to invest in application-layer startups because they are worried that frontier AI model companies will quickly wipe out all of these businesses by improving their models. While some thin wrappers around LLMs no doubt will be replaced, there also remains a huge set of valuable applications that the current trajectory of progress of frontier models won’t displace for a long time. Without accurate information about the current state of AI and how it is likely to progress, some young people will decide not to enter AI because they think AGI leaves them no meaningful role, or decide not to learn how to code because they fear AI will automate it — right when it is the best time ever to join our field. Let us all keep working to get to a precise understanding of what’s actually possible, and keep building! [Original text: deeplearning.ai/the-batch/issu… ]

AaronGPT @aaronheckmann ·
@peterrhague People first need to feel heard, accepted and understood. Problem solving is secondary.
Peter Hague @peterrhague ·
Wife: <problem> Me: <solution>? Wife: I don’t want <solution>! How do you get past this dynamic?
AaronGPT @aaronheckmann ·
- How transformers work
- Tokenization
- RNNs
- Attention
- Embeddings
- Encoder/decoder vs encoder-only and decoder-only
- Mixture of Experts
- Training, Supervised Fine-Tuning
- More!

Definitely recommend checking it out if you have interest in AI. youtu.be/Ub3GoFaUcds?si…
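One item on the list above, attention, is compact enough to sketch directly. Below is a minimal NumPy version of scaled dot-product attention with toy shapes; it illustrates the mechanism only and is not tied to the course's notation or code.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # similarity of each query to each key
    # Row-wise softmax (subtract max for numerical stability)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ V               # weighted sum of value vectors

# Toy example: 3 tokens, 4-dimensional embeddings
rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 4))
K = rng.normal(size=(3, 4))
V = rng.normal(size=(3, 4))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (3, 4)
```

Each output row is a convex combination of the rows of V, with mixing weights determined by how closely the corresponding query matches each key.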
AaronGPT @aaronheckmann ·
Many Stanford classes are available online for FREE. You only need a decent internet connection and access to YouTube. This fall's class "CME 295 Transformers and LLMs" is good: a set of lectures designed to provide high-level knowledge of today's LLM landscape. It covers:
AaronGPT @aaronheckmann ·
the buzz I get from using AI to build software is a hollow version of the joy that comes from learning how to do it myself
AaronGPT @aaronheckmann ·
.. a significant reduction from the average of five signatures and two public keys. The belief is that the new approach, even with post-quantum signatures, will be faster than what we have today.
AaronGPT @aaronheckmann ·
There is trouble brewing in the post-quantum world for TLS certificates. Because post-quantum signatures are 20x larger than today's, TLS performance on the internet will degrade if we switch over. One solution: Merkle Tree Certificates, a proposal from Cloudflare and the IETF.
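The core idea behind Merkle Tree Certificates is that many certificates can be batched into one hash tree, so a relying party needs only a single signature over the root plus a short per-certificate inclusion proof, rather than a full signature chain. Here is a toy sketch of that mechanism using SHA-256 and duplicate-last-node padding; the encoding and trust model in the actual IETF draft are more involved.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root_and_proof(leaves, index):
    """Build the tree bottom-up; collect the sibling hashes for leaves[index]."""
    level = [h(leaf) for leaf in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:                  # pad odd levels by duplicating the last node
            level.append(level[-1])
        proof.append(level[index ^ 1])      # sibling of the tracked node
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return level[0], proof

def verify(leaf, index, proof, root):
    """Recompute the path from leaf to root using the sibling hashes."""
    node = h(leaf)
    for sibling in proof:
        node = h(node + sibling) if index % 2 == 0 else h(sibling + node)
        index //= 2
    return node == root

certs = [b"cert-a", b"cert-b", b"cert-c", b"cert-d"]
root, proof = merkle_root_and_proof(certs, 2)
print(verify(b"cert-c", 2, proof, root))  # True
```

The proof length grows only logarithmically with the batch size, which is what makes one signed root cheaper on the wire than per-certificate post-quantum signatures.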
AaronGPT retweeted
ajHecky @ajHecky ·
Inspired by the movie "The Devil's Advocate", this composition aims to capture the battle between the agents of light and darkness. Originally written in 1997 on a Roland MC-303, I've rewritten and orchestrated this for 2025 using Logic Pro. Enjoy! on.soundcloud.com/sHZ1L0zqxi4Ybx…
AaronGPT @aaronheckmann ·
the hottest day of the summer was the first day of fall