Andrew Certain

11.2K posts

@tacertain

Former VP/Distinguished Engineer in AWS. See the blog for what you'll get if you follow (plus transit/housing quips). He/his. https://t.co/FGTsM2Xy1K

Seattle, WA · Joined March 2011
704 Following · 8.9K Followers
Simba82 @Keeper_Sis
@smeez38 Cruise mapper says its route is around the west coast of the island, not the inside passage, where it currently sits. I wonder if it has had to turn around due to being too big to pass through the narrows at Campbell River? Just speculation, though.
Simba82 @Keeper_Sis
Looks like the Celebrity Solstice is turning around! Abandoning its course to Alaska. Never seen this before.
[3 images attached]
Piper Shaw @PiperShawTV
when I did my initial interview for this story with Geoff, I told him I really, really didn't want it to sound like I was a victim throwing a pity party. He agreed 100%. but I had a lot of #feelings about opening up about it and how sharing it with the world would make me feel. so I called my friend @tysseattledream to help me work through them in the best way I know how. We made a few songs in a few hours. I wrote Pity Party as a sarcastic expression of a victim mentality I refused to subscribe to. We laughed, riffed, and ran with it. 🤘🏻🖤
Seattle Kraken @SeattleKraken

Making Herself Heard. Throughout a troubled childhood and growing up in a post-divorce home, Kraken Hockey Network reporter @PiperShawTV fought to maintain her independent voice and her sanity → bit.ly/PiperShaw25

Andrew Certain @tacertain
@PiperShawTV Thanks for putting yourself out there - and thanks for all your work with the Kraken!
Andrew Certain @tacertain
@peterkrogh Looks like you're back up and I'm getting it assembled! I'll let you know if I get confused! Thanks for the offer (and all you do for photographers).
Peter Krogh @peterkrogh
@tacertain First of all, thanks for the heads up - GoDaddy had a billing glitch and the site came down - it will be back up within 72 hours. Please contact me directly if you want some help setting the rail up. peter@peterkrogh.com
Andrew Certain @tacertain
@peterkrogh I returned from the holidays very excited to start using my new DAM Useful rail system, but when I went to get more information at thedambook.com, I discovered that the domain no longer goes to your website. Any word as to when it might be back?
Andrew Certain @tacertain
So X needs to figure out whether I'm a human before it'll let people see my posts? It's really encouraging me to come back.
Andrew Certain @tacertain
@BradPorter_ If your description of what's going on leads you to think that the LLMs are close to AGI, I'm not sure where to go from there. But that's ok - I guess we'll know in 6 months or 5 years.
Brad Porter @BradPorter_
@tacertain Sure, but if it can then translate PL1 to PL2 directly, without going back to English? Humans have {real world observation}->L1 and {real world observation}->L2 and then translate L1->L2?
Andrew Certain @tacertain
This would make me believe that there's something profound going on: if you could train a model on two different languages, but not give any translation examples, and have the model do translation. If it's really inferring meaning, this should be no problem. Humans can do this.
Andrew Certain @tacertain
@BradPorter_ Because they have a lot of English to PL1 and English to PL2 training data
Brad Porter @BradPorter_
@tacertain How is this different than LLMs producing output in different programming languages for the same prompt (differing only by the programming language)? The LLMs aren’t trained on side-by-side language pairs.
Andrew Certain @tacertain
@nathankpeck I do not see anything in LLMs that leads me to believe we're on a path to that. My controversial take is that if what you say happens, it's more likely to come from SETI than our building a model that recreates human intelligence.
Andrew Certain @tacertain
To be clear: I think LLMs will have a profound impact on society. I just don't think they are on a path towards "understanding" or AGI.
Patrick - now a Dad @fortygigserver
@tacertain I like to think of how logic gates can be simulated with just toppling over dominos. LLMs are nothing more than loads of those put together. No reasonable person would look at dominos falling over in some clever way and think that they can "think" or "understand"
Dan P @copumpkin
@tacertain They don't do what you're saying, but some of the examples from pages 54-60 in arxiv.org/pdf/2303.12712… were pretty convincing to me that some serious "understanding" was happening, for lack of a better word.
Andrew Certain retweeted
Peter Mumford @pistolpedro31
Dinner on cap hill, just want to get home on my bike. This right here is a death wish with the status quo of Rainier. But it’s the most direct and by far least hilly. It should have bike lanes on it 10x over (and dedicated bus lanes).
[image attached]
Andrew Certain retweeted
Joe Magerramov @_joemag_
I agree with Ben. Having been in this industry for a while, the speed at which engineers spit out code is rarely the bottleneck. I’m much more excited about what AI can do to help improve quality and correctness of the code, as well as help explore new ideas and approaches.
Ben Kehoe @ben11kehoe

@monkchips As it usually follows an 80/20 rule, that 80% the AI is doing for you is 20% of the work, leaving the 80% of debugging, maintaining etc. So you're looking at a 25% speedup, not 5x. Unless you skip the 80% work by just shipping what the AI spits out, which I think will happen

Andrew Certain @tacertain
This is the antidote to thinking "why don't they just..." Either give it a try and appreciate the nuance or keep your opinion to yourself!
Andrew Certain @tacertain
@ranman If you find something with no seeding, let me know, but I don't think you will. The latest LLM paper that Michael referenced didn't have it, so seems unlikely that you'll find an earlier one. It would also be the most profound discovery in cognitive science.
Randall Hunt @ranman
@tacertain ai.facebook.com/blog/nllb-200-… In this example the model was indeed seeded with translation pairs for common languages, but it taught itself translation on monolingual examples for lower-resource languages. I am trying to find the paper on language's inferred vector space from '08.
Los Angeles, CA 🇺🇸
Andrew Certain @tacertain
@_msw_ @ben11kehoe Too bad there's no provision for hierarchical keys. Seems strange that there isn't, actually.
Andrew Certain @tacertain
@michaelbrundage This is exactly my point. If LLMs were developing "understanding" then they wouldn't need the seed prompts. And I'm sure that if they didn't need the prompts, they wouldn't have used them.