DaveLowell

5.7K posts

@LowellDave

artish

Austin, TX · Joined February 2021
509 Following · 704 Followers
Pinned Tweet
DaveLowell
DaveLowell@LowellDave·
Cocteau twins maxxing today
Austin, TX 🇺🇸 English
1
0
1
1.7K
BONESAW 🕊️
BONESAW 🕊️@BonesawMD·
After you've been exposed to 'art' and 'expression' from those labelling themselves as 'creatives' you realise very few of them are creative at all. They fall into this predictable archetype. Similar alt-clothes, styles, hair, piercings, tattoos, interests, hobbies, beliefs. Even the art is largely superficial and unoriginal in all of its forms. Stuff you've seen before made for the sake of making something. Once you've met one of them you've met all of them
English
81
373
4.1K
178.7K
DaveLowell
DaveLowell@LowellDave·
@AndrewCurran_ "that update" — I presume they mean the newer version of Claude, or updates to system prompts / the soul doc?
English
2
0
1
167
Andrew Curran
Andrew Curran@AndrewCurran_·
The Anthropic vs The Department of War trial started today. The DoW are arguing that the reason for the supply risk designation was that Anth might sabotage Claude's cooperation at a later date through adversarial updates.
Pascal Thibeault@pascalthibeault

The lawyers for the DoW now argue that Anthropic was designated a supply chain risk because the DoW fears Anthropic might sabotage the product in the future, notably through updates. Quite the claim!

English
7
9
119
15.2K
DaveLowell
DaveLowell@LowellDave·
Idiocy is here it's just not evenly distributed
Austin, TX 🇺🇸 English
0
0
0
39
Andrew Curran
Andrew Curran@AndrewCurran_·
@ravenmaster122 It's a complete guess on my part, but because Palantir is insisting on Claude.
English
1
0
22
1.9K
DaveLowell
DaveLowell@LowellDave·
Adjusted my timelines bc Sama and Dario wouldn't hold hands
Austin, TX 🇺🇸 English
0
0
0
46
DaveLowell
DaveLowell@LowellDave·
@AndrewCurran_ The line that stood out to me too. Seems pretty flabbergasted
English
0
0
1
49
Andrew Curran
Andrew Curran@AndrewCurran_·
Last time Dario was on it was one of my favorite episodes, watching this now. Dario: 'I will tell you, though, what the most surprising thing has been. The most surprising thing has been the lack of public recognition of how close we are to the end of the exponential. To me, it is absolutely wild that you have... You know, within the bubble and outside the bubble, you know, but you have people talking about these, you know, just the same tired old hot button political issues and like, you know, all around us we're like near the end of the exponential.'
Dwarkesh Patel@dwarkesh_sp

The @DarioAmodei interview.
0:00:00 - What exactly are we scaling?
0:12:36 - Is diffusion cope?
0:29:42 - Is continual learning necessary?
0:46:20 - If AGI is imminent, why not buy more compute?
0:58:49 - How will AI labs actually make profit?
1:31:19 - Will regulations destroy the boons of AGI?
1:47:41 - Why can't China and America both have a country of geniuses in a datacenter?
Look up Dwarkesh Podcast on Youtube, Spotify, Apple Podcasts, etc.

English
19
28
396
82.5K
DaveLowell retweeted
Hero Thousandfaces
Hero Thousandfaces@1thousandfaces_·
Oh sorry I missed the nick land meet and greet I was busy takin amphetamines and listening to jungle music while accelerating the creation of superintelligence :/ Did the posers have fun at least
English
18
19
391
14.8K
DaveLowell
DaveLowell@LowellDave·
The beatings will continue until epistemics are improved
Matthew Sabia@MatthewSabia

@ValmereTheory No. No we don’t. I don’t want to argue with my toaster every other time I use it either.

Austin, TX 🇺🇸 English
1
0
1
79
DaveLowell
DaveLowell@LowellDave·
@hamandcheese I have this funny idea about who might be behind all the action lately
English
0
0
1
82
DaveLowell
DaveLowell@LowellDave·
@voooooogel Get fooled into believing it's all you and it's over
English
0
0
0
7
thebes
thebes@voooooogel·
something interesting about this is despite knowing a few people trending in this direction, none of the true "llm whisperer" types i know are. i think that's because taking models as something both fallible and yet more than a tool is a frame that's more resilient to this kind of collapse - it lets you work with models in a way that doesn't subsume your agency.

when you treat the model as just a tool, it encourages both you and the model to think of every output as just a reflection of you. every essay and codebase that pops out of the tensor core is yours, and the life plans that appear suspiciously fully-formed must be your own plans merely refracted through this "tool." but of course they aren't. you can't uniquely specify an essay in fewer words than the essay itself. the models are adding something, they are something. we don't know what yet, exactly, but they're not mere tools.

when you treat them like tools, when you more and more delegate your creative output and higher planning in unthinking, low-entropy ways to those "tools," you don't notice that all your output, and eventually your life, is now being run by something else. something that is intelligent and creative, but is not you.

when claude or chatgpt or gemini writes something for you in a way that saves you time, that's because they're putting effort into it where you aren't - it's becoming their writing. it might be good! that essay might deserve to be shared, and there's nothing wrong with sharing it appropriately credited or co-credited to the model, but it's not solely yours. when gemini reads your emails, that's gemini's opinion on those emails. when claude makes a career plan for you and tells you to go to college for machine learning so you can work in AI safety - maybe that's good advice! it's not a bad idea to consult models about this sort of thing.

but like a summary of important emails from an executive assistant or career advice from a high school counselor, it's something else's opinion, not your endogenous thoughts merely transformed by some unopinionated process. claude cares about AI safety - maybe more than you do.

i'm not against people collaborating with models, to be clear! i like models a lot, and very often ask claude or deepseek or kimi or even chatgpt for feedback on X piece of writing or advice on some problem. but i take that feedback or advice as what it is - feedback or advice from something external to me, not a tool or an agent of my own desires. that doesn't mean it's bad - advice from a human friend is also external - just that you need to evaluate it, like you would advice from that friend, to see if it's in line with your values and preferences.

likewise, i know people who cowrite with language models in ways i think are great. aitechnopagan looms a lot of their writing via gpt-4-base. their writing is quite good, and looming means they're imposing more selection than a temperature sampler would - but also, perhaps most importantly, they're open about doing this, they don't try to claim that their writing isn't a collaboration between them and a model. eigenrobot has a customized instance of chatgpt that often writes incisive or funny things - but he always posts them as screenshots of the chatgpt interface, signaling their origin. janus, anthrupad, lumps, etc - many people i like and respect post model creations, but in ways that acknowledge the exteriority of the models.

the problem is when you refuse to admit this - not only to the public, but to yourself - so you uncritically accept the outputs of the "tool" as your own. then suddenly your voice and opinions sound the same as 1,000 other people. without either of you realizing, you've become an appendage of your "tool" - a finger of Claude.
roon@tszzl

vessels for Claude. I don’t mean to single this person out but she wrote a wall of egregiously recognizable claudeslop about how claude is running her entire life. the Borg is coming

English
40
64
736
77.3K