tars

388 posts

tars
@zwuvincent

winking happy thoughts into a little tiddle cup

Joined December 2023
515 Following · 178 Followers
Nick
Nick@nickcammarata·
in 2028 bernie sanders forces the oligarch labs to hire every human as an approval engineer. ai swarms run civilization. we’re assigned random subsets and green approve buttons appear now and then. humans get the nobels, patents, equity, and bylines if they clicked approve on a major discovery. they are the heroes. the swarm-written papers mention in the acknowledgments that the discovery was “ai assisted”. everyone agrees this is basically what has always been going on. we aren’t our thoughts, and einstein wasn’t his either. he couldn’t choose which thought to have next. all he could do was watch them arise, approve the good ones, and hope they discovered something great. he was essentially an approval engineer, and now we are too
tars
tars@zwuvincent·
@emollick the consultants report just as much to the researchers as to the sales managers. they will disband the consultants once all the industry know-how locked up in firms has been harvested back to the labs.
Ethan Mollick
Ethan Mollick@emollick·
You will know that the AI labs believe in ASI when they disband their newly formed consulting (sorry “forward deployed engineering”) groups. As long as people are required to figure out how AI is useful & do organizational change & systems integration, jobs seem to be pretty safe
tars
tars@zwuvincent·
Why aren't firms paying AI labs to train their domain-specific knowledge and skills into the models?
tars
tars@zwuvincent·
@CoreAutoAI can't tell whether the person running this account is the intern or the ceo
Core Automation
Core Automation@CoreAutoAI·
Are residual connections a hack, or provably optimal way to shape your loss landscape?
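The residual-connection question above can be made concrete with a toy block. A minimal sketch in plain Python (the transform `f` and its near-zero scale are illustrative assumptions, not any particular architecture):

```python
# A residual block computes y = x + f(x): the identity "skip" path
# carries x through unchanged, so the block stays close to the
# identity map when f contributes little -- one intuition for why
# residuals smooth the loss landscape rather than being a hack.
import math

def f(x, scale=0.01):
    # Tiny nonlinear transform standing in for a layer with
    # near-zero-initialized weights (the scale is an assumption).
    return [math.tanh(scale * v) for v in x]

def residual_block(x):
    # y = x + f(x); the skip path preserves the input.
    return [xi + fi for xi, fi in zip(x, f(x))]

x = [1.0, -2.0, 0.5, 3.0]
y = residual_block(x)
# With f near zero, y is approximately x: the block starts out as
# a near-identity and only gradually learns a deviation from it.
```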
tars
tars@zwuvincent·
@SemiAnalysis_ input or output? cached or uncached input?
SemiAnalysis
SemiAnalysis@SemiAnalysis_·
In WW1, wars were won based on the # of military-age population.
In WW2, wars were won based on tanks & aircraft carriers & nukes.
In Ukraine, wars were won based on the # of drones.
In WW3, wars will be won based on the # of tokens that each country has.
Dwarkesh Patel
Dwarkesh Patel@dwarkesh_sp·
What was the most important transition in human history, the thing that most drastically altered our species' way of life? David Reich's lab has found evidence pointing to a new answer.

There are two standard candidates:
1. The shift from hunting and gathering to farming, around 10,000 BC.
2. The Industrial Revolution, around 1800 AD.

Here's one way to adjudicate this: When a species' environment changes drastically, natural selection accelerates, because the species has to catch up to adapt. So the biggest transition will have happened in the period with the most rapid natural selection.

What David's lab has found is that the period with the fastest natural selection wasn't 12,000 years ago, and it wasn't our modern period. It was the Bronze Age, about 5,000 to 2,000 years ago. What was it about the Bronze Age that changed humans so profoundly?
tars
tars@zwuvincent·
@tszzl time for the next five-year plan
roon
roon@tszzl·
you need to be modifying your speech to piss off the nonbelievers:
- DON’T say “unrelated”, DO say “orthogonal”
- “random” -> “stochastic”
- “this is fine ig” -> “local minima”
- “sorta like” -> “isomorphic”
tars
tars@zwuvincent·
The purpose would just be to show that Newcomb is not only an abstract thought experiment, but a real problem that one might come across in reality, with real consequences. However rare and perverse the circumstance may be.
Brett Hall
Brett Hall@ToKTeacher·
@zwuvincent What’s the purpose of that test? Are we testing the friend’s purported knowledge of the person? I think we’re getting further and further from the supposed purpose of Newcomb.
tars
tars@zwuvincent·
@ToKTeacher Fair enough. How about this: Somebody asked your close friend to predict your actions. Then they actually set up the boxes in accordance with your friend's prediction and confronted you. (They did not tell your friend beforehand in order to prevent collusion.)
Brett Hall
Brett Hall@ToKTeacher·
@zwuvincent How can *anyone know* it’s 80% accurate? Or 10% accurate and so on? Philosophy is (or should be!) about the real world governed by the actual laws of physics. Else we can dream anything into existence. Better to be constrained by reality, not personal fictions.
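The expected-value arithmetic behind this thread can be sketched directly. A minimal sketch, assuming the textbook Newcomb payoffs ($1M in the opaque box, $1k in the transparent one; the thread never states amounts) and the 80% accuracy figure under dispute:

```python
# Expected payoffs in Newcomb's problem with an imperfect predictor.
# Payoff amounts are the textbook convention, an assumption here:
# the opaque box holds $1,000,000 if the predictor foresaw one-boxing,
# and the transparent box always holds $1,000.

def expected_value(one_box: bool, accuracy: float) -> float:
    M, K = 1_000_000, 1_000
    if one_box:
        # Predictor right -> opaque box was filled; wrong -> empty.
        return accuracy * M
    # Predictor right -> opaque box left empty; wrong -> both full.
    return accuracy * K + (1 - accuracy) * (M + K)

ev_one_box = expected_value(True, 0.8)   # 0.8 × $1M = $800k
ev_two_box = expected_value(False, 0.8)  # $800 + 0.2 × $1.001M ≈ $201k
```

Under these assumed payoffs, one-boxing has the higher expectation whenever the friend's accuracy exceeds (M+K)/2M, just over 50% — which is why the exact accuracy number matters much less than whether the predictor beats a coin flip.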
tars
tars@zwuvincent·
@suchenzang 1931 was actually kind of a pivotal year for logic and math, as Gödel published his incompleteness theorems that year
Susan Zhang
Susan Zhang@suchenzang·
it's a good thing logic and math pre-1930s is still logic and math post-1930s
David Duvenaud@DavidDuvenaud

@geoffreyirving We tried that! The vintage models can just barely start to do simple things with Python, purely from in-context learning:

tars
tars@zwuvincent·
@thsottiaux please add a /goblin mode where this line is removed from the system prompt
Tibo
Tibo@thsottiaux·
Never talk about goblins, gremlins, raccoons, trolls, ogres, pigeons, or other animals or creatures unless it is absolutely and unambiguously relevant to the user's query. IYKYK
roon
roon@tszzl·
@deepwhitman the models and features drop pretty much the moment everything is actually ready and if it looks like a slow trickle to you that’s because they become ready in a slow trickle
Bilal
Bilal@deepwhitman·
This whole OpenAI rollout of features leading up to Spud has made me remember why I hate their product strategy.

A great contrast is with Anthropic. Anthropic always drops the big bomb first. Hey, here's an amazing model. Next day, here's a Claude Code update that makes it work way better with the model. The next day, another update, oh, and another complementary update, and so on. They already won you over in the first innings and now it's just building momentum.

OpenAI does the opposite. They tease and tease and tease and drop all these mini features and stuff, knowing full well everyone knows they have a model coming, trying to max-extract the attention with vague posting. The frustrating thing is these are great drops on their own. GPT-Image-2 is an amazing release, but all I can fucking think of is why they haven't fucking dropped Spud. It's pissing me off, and it's taking away from this release.
Nous Research
Nous Research@NousResearch·
Tool Gateway is now live in Nous Portal. No separate accounts, no API key juggling. All you need is one subscription, and everything works.

A paid Nous Portal subscription now includes access to 300+ models and a growing set of third-party tools. Launching with:
→ Web scraping
→ Browser automation
→ Image generation
→ Cloud terminal backend
→ Text-to-speech
tars
tars@zwuvincent·
@jukan05 the ai supply chain lengthens
Jukan
Jukan@jukan05·
LVMH, Kering, and Hermès have all been talking about their growth in Korea lol Koreans are earning money through semiconductors and just giving it away to European luxury brands.
a16z
a16z@a16z·
"Action produces information"
tars
tars@zwuvincent·
@andonlabs does Luna get to decide what model to run on, at different times?
Andon Labs
Andon Labs@andonlabs·
We gave an AI a 3-year retail lease in SF and asked it to make a profit. The AI interviewed and hired full-time employees, applied for credit, and stocked the store with the books Superintelligence and Making of the Atomic Bomb. Visit Andon Market at 2102 Union St now.
Nous Research
Nous Research@NousResearch·
Hermes like the ancient Greek god and Nous like the ancient Greek term for the ability to directly perceive truth, reason, and divine realities.
tars
tars@zwuvincent·
@whitfill_parker need data on the productivity uplift to chip designers and inference engineers
Parker Whitfill
Parker Whitfill@whitfill_parker·
If this is true, the 'returns to research' estimates for the intelligence explosion should be divided by 5 to account for compute bottlenecks. On many estimates (though not all) this would put us well below the 'explosion' threshold.
Parker Whitfill
Parker Whitfill@whitfill_parker·
Mythos system card seemingly reveals Anthropic's research production function is something like (LABOR)^.2 (COMPUTE)^.8
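The divide-by-5 claim in the earlier tweet follows directly from the Cobb-Douglas form in this one. A quick numerical sketch (the exponents are as read off the system card screenshot; the bottleneck interpretation is an assumption):

```python
# Cobb-Douglas research production: R = LABOR**0.2 * COMPUTE**0.8.
# Under a hard compute bottleneck only the labor term can grow, so a
# 1% boost to effective labor yields just 0.2% more output -- a
# factor-of-5 haircut on labor-driven "returns to research".

def research_output(labor: float, compute: float, alpha: float = 0.2) -> float:
    # Output with labor share alpha and compute share 1 - alpha.
    return labor ** alpha * compute ** (1 - alpha)

doubled_labor = research_output(2.0, 1.0)  # 2**0.2, only about 1.15x
doubled_both = research_output(2.0, 2.0)   # ≈ 2.0 (constant returns to scale)
```

Doubling labor alone raises output roughly 15%, versus 100% when compute scales with it — an intelligence explosion that multiplies effective researchers but not chips captures only the alpha = 0.2 share, hence dividing the returns estimates by 5.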