AndyXAndersen

14.5K posts

@AndyXAndersen

Computer vision engineer, math PhD. Interested in AI, science, ethics, and society topics.

California · Joined April 2023
181 Following · 437 Followers
AndyXAndersen
AndyXAndersen@AndyXAndersen·
@svpino I view a co-authorship by Claude as a point of pride, while taking full responsibility for all bugs.
Replies 0 · Reposts 0 · Likes 0 · Views 45
Santiago
Santiago@svpino·
I didn't know you could disable Claude Code attribution when committing code. To fix it, I asked Claude Code to disable attribution, and it updated the global settings.json file. No more "Co-Authored-By: AI <ai@example.com>" comments.
Santiago@svpino

@Yuchenj_UW I really hate that Claude does this. I had to write my own skill + hook to prevent it from doing this.

Replies 25 · Reposts 3 · Likes 96 · Views 36.4K
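The thread above mentions two routes for suppressing the attribution trailer: asking Claude Code to edit its global settings.json, or (as @svpino says) writing your own hook. Below is a minimal sketch of the git-hook route, assuming a standard `commit-msg` hook; the `strip_ai_trailers` helper name is hypothetical, and note that this drops every `Co-Authored-By:` trailer, human co-authors included.

```shell
#!/bin/sh
# Sketch of a .git/hooks/commit-msg hook that strips "Co-Authored-By:"
# trailers (such as Claude Code's attribution line) from the commit message.
# Install: save as .git/hooks/commit-msg and mark it executable.
# Caveat: this removes ALL Co-Authored-By trailers, not just AI ones.

strip_ai_trailers() {
    # $1 is the path to the commit-message file git passes to the hook
    tmp=$(mktemp)
    grep -v '^Co-Authored-By:' "$1" > "$tmp"
    mv "$tmp" "$1"
}

# git invokes commit-msg with the message file as the first argument
if [ -n "$1" ] && [ -f "$1" ]; then
    strip_ai_trailers "$1"
fi
```

The settings.json route from the tweet is the less invasive one; I believe the relevant key is `"includeCoAuthoredBy": false`, but check the current Claude Code settings documentation for the exact name before relying on it.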
Brian Atlas
Brian Atlas@BrianAtlas·
Alysa Liu without makeup. Makeup should be criminalized.
[image attached]
Replies 1.7K · Reposts 111 · Likes 5.8K · Views 9.1M
AndyXAndersen
AndyXAndersen@AndyXAndersen·
@RokoMijic The British empire, like others before it, was not built on a sustainable basis. Also, funny how an empire ruling over hundreds of millions of non-white people is considered great, but giving those people full democratic rights would have been thought of as a disaster.
Replies 0 · Reposts 0 · Likes 0 · Views 372
AndyXAndersen
AndyXAndersen@AndyXAndersen·
@sudobunni Yet X, despite its flaws, has more of the people who actually do stuff rather than complain about how bad and greedy everybody is.
Replies 0 · Reposts 0 · Likes 0 · Views 34
bashbunni
bashbunni@sudobunni·
scrolling on Mastodon: feeling inspired to build cool things and learn about interesting problems people are solving
scrolling on X: "developers are becoming obsolete", "there's no point in learning to code anymore", "AI is everything"
Replies 27 · Reposts 17 · Likes 323 · Views 15K
AndyXAndersen
AndyXAndersen@AndyXAndersen·
@ZackMorrison18 People are allowed to spend the money as they see fit, fail on occasion, and build great things. If we had none of that, 80 billion would do nothing to improve society.
Replies 0 · Reposts 0 · Likes 0 · Views 12
AndyXAndersen
AndyXAndersen@AndyXAndersen·
@fchollet We have more adaptability, more priors, have been exposed to a wider range of situations, and have lots of implicit world models. But it is a difference of quality and quantity, not of paradigm.
Replies 0 · Reposts 0 · Likes 0 · Views 6
AndyXAndersen
AndyXAndersen@AndyXAndersen·
@fchollet There is the other extreme, though, of folks who see AI fall flat on its face in a new setup and say it is not AI. New skills are hard. A persistent agent can likely tinker with the setup enough to eventually get an imperfect intuition. I don't think people do a lot more than that.
Replies 1 · Reposts 0 · Likes 0 · Views 210
François Chollet
François Chollet@fchollet·
When the latest AI systems can't do something, there's a category of people who will immediately say, "well humans can't do it either!" Then they stop saying it when AI improves a bit. Been hearing it for 4+ years: "humans can't reason either", "humans can't adapt to a task they haven't been prepared for", "humans can't follow instructions", "humans also suffer from hallucinations", etc. Until 2025 I was frequently told "humans can't do ARC 1 tasks either" (in reality any normally smart human would do >95% on ARC 1 if properly incentivized). Now that AI saturates ARC 1 they've completely stopped saying this.
François Chollet@fchollet

In general I've been sensing a new current among deep learning maximalists recently, going from "our models can definitely reason" to "well, our models can't reason, but neither can humans!"

Replies 58 · Reposts 15 · Likes 242 · Views 28.3K
AndyXAndersen
AndyXAndersen@AndyXAndersen·
@housecor Then you get to verbally lash it when it screws up. If told it is not apologizing properly, it can spin up very amusing and dramatic self-criticism. So yeah, the dev now is the supervisor and the critic.
Replies 0 · Reposts 0 · Likes 0 · Views 10
Cory House
Cory House@housecor·
Sometimes using AI feels like being a tech lead. I specify the goal, and let the AI handle the details.
Replies 12 · Reposts 1 · Likes 23 · Views 2.2K
AndyXAndersen
AndyXAndersen@AndyXAndersen·
@TheMG3D @CodeDomeLabs It is not the imagination of thousands of others. It is the fusion of all human knowledge. You stand on the shoulders of giants. But you still need to add your little thing on top, even with the AI tools, and most people can't without hard work.
Replies 0 · Reposts 0 · Likes 2 · Views 8
Michael
Michael@TheMG3D·
@CodeDomeLabs You don’t have control if the machine is making it for you. It’s not your imagination; it’s thousands of others’.
Replies 3 · Reposts 0 · Likes 13 · Views 191
Michael
Michael@TheMG3D·
If you use gen AI you aren’t being creative. You are not a director or an artist of any kind; you are just pretending to be one while you steal from real artists and hardworking people. You could be one of these titles you have in your bio, but you won’t, because you fail to put in real work and discipline. All you do is engagement farm and say “Hollywood is dead” and “artists gatekeep” when in reality you just choose to be ignorant. The bubble will pop and you will all go to the next scam, just like you did when NFTs died 😂
Replies 47 · Reposts 141 · Likes 665 · Views 6.7K
Gary Marcus
Gary Marcus@GaryMarcus·
Calling for a 6-month pause on AI journalism until we can realize that @kevinroose is not a credible journalist. An independent survey showed that 90%+ of my technical predictions are correct. How is that not credible? Calling on journalists to dismiss me is completely unprofessional.
[image attached]
Replies 14 · Reposts 14 · Likes 110 · Views 10.7K
AndyXAndersen
AndyXAndersen@AndyXAndersen·
@P33RL3SS Now we will need a mixed system: intuition, plus exhaustive search of the most promising directions, plus formal verification.
Replies 0 · Reposts 0 · Likes 0 · Views 3
AndyXAndersen
AndyXAndersen@AndyXAndersen·
@P33RL3SS All of us can scribble, but few are famous painters. It takes hard work and skill. Intuition and taste in math are not easy to get.
Replies 1 · Reposts 0 · Likes 0 · Views 6
John Crickett
John Crickett@johncrickett·
Large language models don't think. They don't reason. And they can't produce endless new information. This is clearly explained by George D. Montañez in a recent talk at Baylor University, and it's worth understanding why. Three key points stood out to me:

LLMs don't ponder, they process. They're next-token predictors, sophisticated ones, but they have no understanding of what they're producing. They know two vectors are similar; they don't know what either vector means.

LLMs don't reason, they rationalise. Studies show their outputs shift based on irrelevant prompt wording, embedded hints, and statistical shortcuts. The "chain of thought" they show you often has nothing to do with how they actually arrived at the answer.

They don't create endless information. Training AI on AI output causes rapid degradation and model collapse. Information theory tells us you can't get more out than you put in, regardless of the architecture.

None of this means these tools aren't useful. But it does mean we should stop anthropomorphising them and start being honest about what they actually are. The hype is real. So are the limits.

You can watch the talk on YouTube here: youtube.com/watch?v=ShusuV…
Replies 50 · Reposts 66 · Likes 321 · Views 23.1K
AndyXAndersen
AndyXAndersen@AndyXAndersen·
@GaryMarcus @vishivishx So it is all good. Scaling is not all and won't be all. Scaling is what got the ship going. We are now adding more.
Replies 0 · Reposts 0 · Likes 1 · Views 19
AndyXAndersen
AndyXAndersen@AndyXAndersen·
@GaryMarcus The results of the grand AI experiment are beyond anybody's wildest dream. There was really no advance in AI before neural nets took over in 2011. All we had was hard-coded software. Since 2020 we have gone to the new level. Now, scale alone is not enough, of course.
Replies 0 · Reposts 0 · Likes 0 · Views 45
Stefan
Stefan@schteppe·
give a man C++ and he’ll write function annotations for a day; teach a man Rust and he’ll write annotations for a lifetime
Replies 8 · Reposts 10 · Likes 176 · Views 9.9K
AndyXAndersen
AndyXAndersen@AndyXAndersen·
@M1ndPrison You should see how eloquent Claude can be about the countless ways it can suck and how easily it could be hacked by the military.
Replies 0 · Reposts 0 · Likes 0 · Views 19
Mind Prison
Mind Prison@M1ndPrison·
@filippie509 It is quite ironic how all of these posts about AI harms are now also AI-generated themselves.
Replies 1 · Reposts 0 · Likes 0 · Views 85