FlyingOctopus0
@FlyingOctopus0
2.4K posts

Interested in machine learning.

Joined July 2013
120 Following · 261 Followers

Pinned Tweet
FlyingOctopus0 @FlyingOctopus0
This GAN was trained on Chinese characters (fonts) and gets fairly close to them (if you don't look too closely). Clearly it does not look like a real font, but I ran out of patience to train it further. I let it run for about 100 h (209k iterations). #100DaysOfMLCode
[GIF]
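The tweet above describes standard adversarial training. A minimal runnable sketch of the same loop, on 1-D Gaussian data instead of character glyphs (the network shapes, hyperparameters, and data here are illustrative assumptions, not the author's actual setup):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-np.clip(t, -30, 30)))

# "Real" data: samples from N(3, 1). The generator is a single shift
# parameter mu applied to noise z ~ N(0, 1); the discriminator is a 1-D
# logistic classifier D(x) = sigmoid(w*x + b).
mu, w, b = 0.0, 0.1, 0.0
lr_d, lr_g, batch = 0.2, 0.02, 64

for step in range(3000):
    real = rng.normal(3.0, 1.0, batch)
    fake = mu + rng.normal(0.0, 1.0, batch)

    # Discriminator step: gradient ascent on log D(real) + log(1 - D(fake)).
    d_real, d_fake = sigmoid(w * real + b), sigmoid(w * fake + b)
    w += lr_d * (np.mean((1 - d_real) * real) - np.mean(d_fake * fake))
    b += lr_d * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator step: non-saturating loss, ascend log D(fake);
    # d/dmu log D(mu + z) = (1 - D(mu + z)) * w.
    fake = mu + rng.normal(0.0, 1.0, batch)
    mu += lr_g * np.mean(1 - sigmoid(w * fake + b)) * w

print(f"generator mean after training: {mu:.2f} (real mean is 3)")
```

The font-generating GAN in the tweet is the same alternating game, just with convolutional networks over glyph images instead of two scalar models.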
FlyingOctopus0 @FlyingOctopus0
@sureailabs I think hate for AI is mostly fueled by disgust at corporate robots automatically making art. Artists don't think this adds any value: they don't want to compete with AI, they hate how their own work helped build it, they hate the financial incentives involved, and they hate the whole space.
FlyingOctopus0 @FlyingOctopus0
@sureailabs For most anti-gen-AI people, copyright is just a convenient argument against AI. Almost all artists are in favor of fan art despite it being an obvious copyright violation, so copyright is probably not the actual core of the problem with AI.
FlyingOctopus0 @FlyingOctopus0
@jon_stokes You forget that we got ChatGPT just by feeding more data into training. Models generalize better with more training; that has been the motto of machine learning since the early 2010s. So from that perspective, nothing has changed since 2012.
Jon Stokes @jon_stokes
I don't think this is right. I think what's really going on is that OpenAI, Google, & Anthropic have the one thing you need to present the appearance of progress w/ LLMs: large quantities of user sessions to spy on & use for RL training.
Ethan Mollick @emollick

The failures of both Meta and xAI to maintain parity with the frontier labs, along with the fact that the Chinese open weights models continue to lag by months, means that recursive AI self-improvement, if it happens, will likely be by a model from Google, OpenAI and/or Anthropic

FlyingOctopus0 @FlyingOctopus0
@Plinz No, the question would be whether computers have software in addition to their hardware. Does hardware that "stores" software not really store it, but instead establish a metaphysical link to the world of ideas, where the software actually lives and commands the hardware remotely?
Joscha Bach @Plinz
Imagine you ask people whether computers have software in addition to their hardware, and 33% of all scientists say no, because science school forgot to tell them what every normal person can observe
Ryan Burge 📊 @ryanburge

I find it fascinating how huge majorities of almost every group agree that people have a soul or spirit in addition to their physical bodies. Even 69% of agnostics agree with that. The huge outlier is atheists: just one-third think that they have a soul.

FlyingOctopus0 @FlyingOctopus0
@jon_stokes To discount this partial evidence would be to acknowledge that a rationalist is not rational. Also, arguments of the type "you are a stupid human, so you are not able to rationally engage with this evidence" are very tough to sell to a rationalist.
FlyingOctopus0 @FlyingOctopus0
@jon_stokes Simple: because this urge exploits our rational tendencies. We see a thing perform as if it were conscious, which increases our belief that it is conscious. At the same time, we haven't seen non-conscious things behave this way.
Jon Stokes @jon_stokes
Re: AI & consciousness: Humans have an extensive track record of ascribing volition & awareness to stochastic systems that appear to have some predictability, memory, & responsiveness to our inputs. Cf. the weather, slot machines, Ouija boards.
FlyingOctopus0 @FlyingOctopus0
@theshawwn There is a thing called a "recursive language model," which is close to what you are saying. It is more about models calling themselves, but I think your idea would fit as an extension: give the model a tool call that produces the next passage of thinking by thinking about it.
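A minimal sketch of that idea: the model's "think" tool is implemented as another call to the same model, and its output gets spliced back into the prompt. The tag format, the `llm()` stub, and its canned replies are all assumptions for illustration; a real setup would call an actual model API.

```python
def llm(prompt: str) -> str:
    # Deterministic stub standing in for a real model call.
    if prompt.startswith("solve:") and "[thought]" not in prompt:
        # First pass: the model decides to think recursively via a tool call.
        return "<think>what are the prime factors of 12?</think>"
    if prompt.startswith("solve:"):
        # Second pass: the spliced-in sub-thought is now available.
        return "done: 12 = 2 * 2 * 3, so the prime factors are 2 and 3"
    # Recursive call: handle the sub-question directly.
    return "12 = 2 * 2 * 3"

def think(prompt: str, depth: int = 0, max_depth: int = 4) -> str:
    """Run the model; whenever it emits a <think>...</think> tool call,
    recurse to produce that passage of thinking, then re-run with the
    result spliced back into the prompt."""
    out = llm(prompt)
    while "<think>" in out and depth < max_depth:
        inner = out.split("<think>", 1)[1].split("</think>", 1)[0]
        sub = think(inner, depth + 1, max_depth)   # the tool call IS another model call
        out = llm(prompt + "\n[thought] " + sub)
    return out

print(think("solve: factor 12"))
```

The `max_depth` cap matters: since the tool is the model itself, nothing else bounds the recursion.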
Shawn Presser @theshawwn
Recently you’ve been able to see a model’s thought process. But I bet if you trained it to have thoughts about its thoughts, you’d see a second layer of analysis. Are you sure this isn’t valuable too?
FlyingOctopus0 @FlyingOctopus0
@ZayJspx @jon_stokes Doesn't make a difference. Suppose SpaceX flies a crewed mission with the military and judges that the secret part of the mission could harm the astronaut. SpaceX demands more oversight, with veto power over the secret part of the mission, to ensure the astronaut's safety.
ZayJspx @ZayJspx
@FlyingOctopus0 @jon_stokes No, that's not quite right. It's more like SpaceX deciding that the legal definitions and restrictions on government actions weren't good enough and demanding an ADDITIONAL level of oversight and veto power.
Jon Stokes @jon_stokes
"AI is basically nukes & bioweapons. That's how serious this stuff that I am making is." "Why no, Mr. Pentagon, you cannot make the rules of our nukes/bioweapons-level stuff. We will decide how you may use it." lol. lmao, even
FlyingOctopus0 @FlyingOctopus0
@jon_stokes They could, if the pay were too low or they didn't have the resources to finish the project. It also depends on whether the Pentagon found those arguments reasonable enough not to revoke permissions or break off other contracts.
Hajime @H411m3
IME, reading speed is significantly impaired by strong subvocalisation. I've had a strange and irritating experience in this regard. I used to have little to no subvocalisation when reading and read decently fast (and a lot), but in the last 5 years or so, while studying languages (Japanese and recently Mandarin, if that matters), I started losing control over the subvocalisation thing. Today I basically can't read without subvocalising, and as a result my comfortable reading speed is, I guess, 200-250 wpm slower than it used to be. Curious to know if this has happened to anyone and if there's a way to reverse it; I hate reading so slowly.
Juanita Broaddrick @atensnut
Were you able to read at 900 words/minute? It gets more and more difficult. I could until the last 20-30 seconds.
FlyingOctopus0 @FlyingOctopus0
@pfau More discussion about the definition of AGI is a sign that we are close to it. Ten years ago it didn't really matter which definition you used, as all of them were very far off anyway. Now different definitions change people's timelines.
David Pfau @pfau
In retrospect, I probably spent too much time grumbling over shifting standards for what counts as "AGI" and not enough time focusing on the massive tidal wave of AI coming straight at us.
FlyingOctopus0 @FlyingOctopus0
@2smart4u @ihtesham2005 Why should students be treated the same as a professor? Students need to learn, and AI can't learn for them; assignments are a good way to learn if done by hand. The professor is responsible for teaching; if AI can do part of that job, students don't lose as much.
#2peachy4u 𝕏 @2smart4u
@ihtesham2005 Cool - so he uploaded their thesis to Google? What about data protection / privacy? Besides: If the prof gets lazy by utilizing NotebookLM, same rights should be granted to his students, just to be fair. However, the students will fail when using an AI and get caught...
Ihtesham Ali @ihtesham2005
MIT professor accidentally leaked his NotebookLM grading system during a Zoom call. Dude forgot to turn off screen share and we watched him grade 47 essays in 12 minutes. Here's what he was doing that blew my mind.

He uploaded all student papers plus his original rubric into NotebookLM. Then asked it to "evaluate each paper against these specific criteria and flag any that deviate from expected patterns."

But the crazy part was his follow-up prompt: "Now cross-reference writing styles with previous submissions and highlight potential academic integrity concerns." The AI caught three cases of weird style shifts he would've missed on his own.

Final step killed me. He asked NotebookLM to "generate personalized feedback that connects each student's weak points to specific course materials they should review."

What took him 6 hours before now happens in 15 minutes. And students get better feedback than his handwritten comments ever provided. The man turned grading from torture into actual teaching.
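The workflow in the quoted tweet is just three prompts over the same document set. A sketch of how one might script it against any generic LLM API; NotebookLM itself exposes no public API for this, and everything beyond the tweet's quoted prompts (the function name, argument names, corpus formatting) is an illustrative assumption:

```python
def grading_prompts(rubric: str, papers: list[str], prior_work: str) -> list[str]:
    """Build the three prompts from the described grading workflow."""
    corpus = "\n\n---\n\n".join(papers)
    return [
        # 1. Evaluate every paper against the rubric.
        f"Rubric:\n{rubric}\n\nPapers:\n{corpus}\n\n"
        "Evaluate each paper against these specific criteria and flag "
        "any that deviate from expected patterns.",
        # 2. Cross-reference styles for integrity concerns.
        f"Previous submissions:\n{prior_work}\n\n"
        "Now cross-reference writing styles with previous submissions "
        "and highlight potential academic integrity concerns.",
        # 3. Personalized feedback tied to course materials.
        "Generate personalized feedback that connects each student's "
        "weak points to specific course materials they should review.",
    ]
```

Each prompt string would be sent to the model in turn, with the model's earlier answers kept in the conversation context.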
FlyingOctopus0 @FlyingOctopus0
@jon_stokes Although, I think we can draw on other principles to decide how to treat and use AIs: things like using AI efficiently to minimize waste, or not using AI to indulge perverse desires. Some kind of preservation of AI models and systems for the future would also fit.
FlyingOctopus0 @FlyingOctopus0
@jon_stokes On reflection, LLMs are too different from us, so we should give up trying to give them moral consideration. For animals and other humans it works because we can partly imagine what it's like to be the other, and the substrate of the mind is close enough.
Jon Stokes @jon_stokes
This is a great analogy (I've used the airplane vs bird one too) because it highlights a core issue: I don't have the same set of moral obligations to a submarine as I do to a fish, even tho both travel under the water.
Ross Douthat @DouthatNYT

In my interview with Dario Amodei I suggested to him that just the perception of A.I. consciousness, irrespective of the reality, may incline people to give over power to machines. I think this incredibly defeatist @Noahpinion essay is a case study: noahpinion.blog/p/you-are-no-l…

FlyingOctopus0 @FlyingOctopus0
@jon_stokes You had to write "mechanical" before "tools." Clearly the mechanical part is what convinces you that LLMs are not moral subjects. If we used living beings as tools, something like xenobots that can write code, where would you stand?
FlyingOctopus0 @FlyingOctopus0
@jon_stokes What do you think about horses, then? Did their utility as tools (before the 20th century) invalidate their part in the moral economy?
Jon Stokes @jon_stokes
LLMs and the systems based on them are tools, and not fish or dogs or lizards or humans. They are tools that we use for tasks, and are not part of any moral economy that I recognize or participate in, nor will they ever be.
FlyingOctopus0 @FlyingOctopus0
@jon_stokes I don't think so. The claim of X-risk has been with us since the dawn of AI. It stems from an overinflated ego about high intelligence: "If I am so great because of my intelligence, then a much smarter AI will be so great that it could end us by accident."
Jon Stokes @jon_stokes
But this X-risk stuff is silly and transparently self-aggrandizing, and people should get over themselves. (That will happen when the money spigot shuts off, tho.)
Jon Stokes @jon_stokes
I'm glad that everyone, on both the inside and the outside, agrees that "AI Safety" is an entirely fake discipline and always has been. The only debate is over the exact mechanism by which overwhelming commercial pressure causes it to be fake. What I mean is this:
Aakash Gupta @aakashgupta

This is being read as a philosophical farewell. It's a resignation letter from the head of Anthropic's Safeguards Research Team, and the most important sentence is buried in paragraph three. "I've repeatedly seen how hard it is to truly let our values govern our actions. I've seen this within myself, within the organization, where we constantly face pressures to set aside what matters most." That's the person responsible for keeping Claude safe telling you the pressures to ship are winning.

Mrinank Sharma built the Constitutional Classifiers system, developed defenses against AI-assisted bioterrorism, and authored one of the first AI safety cases ever written. Two years of work at the exact intersection of "make the model safe" and "ship the model fast." And he just walked away.

Now zoom out. Dylan Scandinaro, another Anthropic AI safety researcher, left last week to become OpenAI's Head of Preparedness. Harsh Mehta and Behnam Neyshabur, both senior technical staff, also departed in the past two weeks. Four notable exits in a single month from the company that sells itself as the responsible AI lab. Meanwhile, Anthropic is in talks to raise at a $350B valuation and just launched Opus 4.6 last Thursday. The commercial engine is accelerating. The safety talent is dispersing.

This is the core tension of every AI company right now: the people building the guardrails and the people building the revenue targets occupy the same org chart, but they optimize for different variables. When the pressure to scale wins enough internal battles, the safety people don't fight forever. They leave and write beautifully worded letters about integrity.

Sharma's next move tells you everything. He's pursuing a poetry degree. When your head of safeguards research decides the most authentic use of his time is writing poems instead of writing safety cases, that's a signal about what he believes the safety cases were actually accomplishing.

FlyingOctopus0 @FlyingOctopus0
@Miles_Brundage I think the impact of robotics will probably be much greater than that of LLMs. The current LLM hype will probably be incomparable to the future hype from robots. When we get a robot "ChatGPT," every sector of the economy will go into a frenzy.
Miles Brundage @Miles_Brundage
The US being at risk of falling behind China on robotics innovation and deployment is underrated as a topic… And it is a more likely and imminent scenario than the US falling behind on AI (with the usual caveats of export controls + securing AI IP still being important).
FlyingOctopus0 @FlyingOctopus0
@theo They are waiting for models intelligent enough to fix Google's software, per the thinking "solve intelligence, then solve everything else with it."
Theo - t3.gg @theo
I really don’t see Google catching up any time soon. They’ve baked so much intelligence into their models and they are still so unpleasant to use. Beyond the model, the software matters more and more, and they are a decade behind there.