lowki jammin

1.2K posts

@0xlowki

beats, jams, and contracts. Internet money bitch

Joined September 2021
296 Following · 1.4K Followers
Pinned Tweet
lowki jammin @0xlowki
Has anyone else had literally zero emotional reaction to the market tanking?

Arvind Narayanan @random_walker
ChatGPT with Code Interpreter is like Jupyter Notebook for non-programmers. That's cool! But how many non-programmers have enough data science training to avoid shooting themselves in the foot? Far more people will probably end up misusing it.

lowki jammin @0xlowki
@abacaj if nothing else we'll just continue to amass quality datasets, which will hold value as learning algos continue to improve

anton @abacaj
Intense amount of money being poured in to scale up LLMs and other foundation models. Makes me wonder if this is really the path to superintelligence, or if it will hit a dead end with quickly diminishing returns

lowki jammin @0xlowki
@rezkhere the only people who see Bard as competition are still using GPT-3.5, or don't actually use AI at all

Rez Karim @rezkhere
Google just released ChatGPT's strongest competitor yet: Bard.
It's free to use and comes with 10X more features.
Here are some of the craziest things only Bard can do (ChatGPT cannot) 🧵

lowki jammin @0xlowki
@tszzl unfortunately we'd probably just think the models are hallucinating

roon @tszzl
human level ai is boring. please give vast datacenters drawing most of civilization's energy needs calculating inscrutable mysteries where not even the problem statement, much less the solutions, would make sense to mortal minds

lowki jammin @0xlowki
@simonw Anyone who puts Bard close to GPT-4 in performance is out of their mind

Simon Willison @simonw
Leaked Google document: "We Have No Moat, And Neither Does OpenAI"
The most interesting thing I've read recently about LLMs - a purportedly leaked document from a researcher at Google talking about the huge strategic impact open source models are having. simonwillison.net/2023/May/4/no-…

lowki jammin @0xlowki
@_Dave__White_ @OpenAI e.g. the best compression we have for 3D models is overtraining a model on one data sample, and the weights serve as your compression vector

lowki jammin @0xlowki
@_Dave__White_ @OpenAI embedding vectors are a compression of sorts, dependent on model architecture. There are plenty of examples though where the state of the art on compression is not through an embedding

Dave White @_Dave__White_
can one take an embedding vector of the type you'd get from the @OpenAI API and reproduce the text that created it?
or, can i average out several embedding vectors and use them as shared context for a new inference step?
is there any interesting research in this vicinity?
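
The "overtrain on one sample, weights as compression" reply above is the idea behind implicit neural representations: fit a tiny network to a single signal, ship the weights, and "decompress" by evaluating the network at query coordinates. A toy numpy sketch, where the 1-D signal, network sizes, and training schedule are all made-up illustrative choices, not any particular published method:

```python
import numpy as np

rng = np.random.default_rng(0)

# The single "data sample" to compress: a 1-D signal at 64 coordinates.
x = np.linspace(-1.0, 1.0, 64)[:, None]   # (64, 1) coordinates
y = np.sin(np.pi * x)                     # (64, 1) signal values

# Tiny MLP 1 -> 16 -> 1; its 49 parameters ARE the "compression vector".
W1 = rng.normal(0.0, 1.0, (1, 16))
b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.1, (16, 1))
b2 = np.zeros(1)

lr = 0.2
for _ in range(10_000):                   # deliberately overfit one sample
    h = np.tanh(x @ W1 + b1)              # (64, 16) hidden activations
    pred = h @ W2 + b2                    # (64, 1) reconstruction
    err = pred - y
    # Full-batch gradient descent on mean squared error.
    dW2 = h.T @ err / len(x)
    db2 = err.mean(0)
    dh = (err @ W2.T) * (1.0 - h**2)      # backprop through tanh
    dW1 = x.T @ dh / len(x)
    db1 = dh.mean(0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

# "Decompression" = evaluating the network at the coordinates.
recon = np.tanh(x @ W1 + b1) @ W2 + b2
mse = float(((recon - y) ** 2).mean())
```

At this toy scale (49 weights vs. 64 samples) the savings are negligible; the point is only the mechanic: the decoder is the architecture, and the payload is the overfit weights.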

lowki jammin @0xlowki
@iamgingertrash also if you match f(x) cheaper than the original creation, you could potentially RLHF to f(x) + 1 with a lower total cost (seems like shaky ground but I wouldn't outright dismiss it)

lowki jammin @0xlowki
@krishnanrohit the point is that solving those error manifolds says roughly nothing about interpretability / explainability. Regardless of your views on Yudkowsky, this point stands. And there is no clear path as of yet towards alignment (if anything, deeper models are harder to constrain)

rohit @krishnanrohit
This is bad reasoning. I know the rocket equation, and prob can build one in my backyard, but I can't get a Spacex rocket to orbit. Scale *does* change scope. Problems shift, error manifolds change.
Eliezer Yudkowsky ⏹️ @ESYudkowsky

Man, I'm probably not going to win this; the gatekeeping tactic is simple and effective exactly because the mundanes in the audience don't know and can't trust that there *isn't* some deep and macro-relevant arcane science that *you* don't know about, if you're not wearing the appropriate medals to be a credentialed expert. They can say "Oh well I bet Eliezer has never implemented a network or written a single line of Python" and I can reply "Actually I went out and implemented and trained a simple transformer net from scratch just in case there was something surprising to be learned that way, which there wasn't" and then they instantly move the goalposts to "but you haven't trained one of the billion-dollar systems" and again for all the audience knows there could actually be some deep technical thing they know that I don't. It just bugs me that they're getting away with pulling the gatekeeper card on what's just... really, really not very complicated computer science. It really is not hard to understand how a kqv attention layer works, if you're otherwise good at understanding simple math, and the one piece that is clever and requires knowing one very basic trigonometric fact to understand (the reason behind the sin and cos waves in the positional encoding) doesn't reflect up to debates about AI alignment in any way I can see. The big takeaway for macro issues is the O(N^2) cost of the context window, if OpenAI hasn't already broken it; and that there's still only 200 serial steps of computation being carried out in a 200-layer network, regardless of the length of context window, again assuming that still even holds for GPT-4; and believe it or not, all this is just NOT VERY COMPLICATED COMPUTER SCIENCE objectively speaking, in the part that's relevant up at the macro layer. 
There's a story, which I sure hope is apocryphal, that when Euler met the atheistic philosopher Diderot at the court of Catherine the Great, Catherine asked him to debate Diderot on atheism, and Euler opened with "(a+b^n)/n = x, therefore God exists", and Diderot, not knowing algebra and unable to refute this, gave up and returned to Paris. The story is about a hugely asshole move, which I hope Euler never tried in real life; and anyone who actually pulls something like that is burning all the trust that actual technical thinkers sometimes legitimately have to request from a lay audience who actually can't follow a technical argument. What I'm trying to say here is: I know the incredibly simple math that underlies deep learning, I implemented a simple transformer network just in case there was something to learn that way, and while I do not know the arcane alchemical details with learning rate schedules and layer normalization and the eLUs that you use instead of ReLUs, as are required to make a big damn training run actually work, I am incredibly skeptical that the knowledge there has huge macro implications for AI alignment which just can't be explained to somebody as ignorant as I am and which are impossible to even gesture at. It's like they're trying to pull "(a+b^n)/n = x, therefore God exists" only they won't actually write down the fake equation. But, you know, maybe the reason why they keep it up is that pulling "therefore God exists" and refusing to name the equation is just fundamentally effective as a gaslighting tactic if the audience is willing to believe they know deep mathematical and engineering secrets that can't possibly be explained to me or the audience. But, really - I think any scientist who actually knows a technical thing, and who is not burning the trust of people who actually need it - should be able to gesture to the audience as to what secret the supposed layman doesn't know. 
Provide a reference, provide a citation, maybe even the grandma-explanation of it. It will at least filter out the half-Eulers who want to say "therefore God exists" and not even write down the equation where somebody could check it.
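
The pieces the quoted tweet calls "not very complicated computer science" (the KQV attention layer, the sin/cos positional encoding, and the O(N^2) score matrix) can be sketched in a few lines of numpy. Dimensions are toy values, and this is a single head with no masking or multi-head machinery:

```python
import numpy as np

def positional_encoding(n_pos, d_model):
    """Sinusoidal encoding: each dimension pair is a sin/cos wave at a
    different frequency, so every position gets a distinct phase pattern."""
    pos = np.arange(n_pos)[:, None]            # (n_pos, 1)
    i = np.arange(0, d_model, 2)[None, :]      # (1, d_model/2)
    angles = pos / (10000.0 ** (i / d_model))
    pe = np.zeros((n_pos, d_model))
    pe[:, 0::2] = np.sin(angles)
    pe[:, 1::2] = np.cos(angles)
    return pe

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)    # for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def kqv_attention(x, Wq, Wk, Wv):
    """Single-head scaled dot-product attention. The (n, n) score matrix
    is where the O(N^2) cost in context length lives."""
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(k.shape[-1])    # (n, n) pairwise scores
    return softmax(scores) @ v                 # weighted mix of values

rng = np.random.default_rng(0)
n, d = 8, 16                                   # toy sequence length / width
x = rng.normal(size=(n, d)) + positional_encoding(n, d)
Wq, Wk, Wv = (rng.normal(scale=d**-0.5, size=(d, d)) for _ in range(3))
out = kqv_attention(x, Wq, Wk, Wv)
```

The quadratic cost is visible directly: `scores` has one entry per pair of positions, so doubling the context quadruples that matrix.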


lowki jammin @0xlowki
@lsukernik so like many things, depends on the reasoning. Quality of team and execution universally applies tho

lowki jammin @0xlowki
@lsukernik will add the caveat though that outsiders often have naive perspectives on companies, industries etc. You can even be in the *same industry* and have a very poor understanding of what customer needs and company solutions *actually* look like

Larry Sukernik @lsukernik
You can tell if a company/fund is on the incline or decline when the private and public reputations begin to diverge:
Great in private and unknown in public: up and coming
Great in public and bad in private: on the decline
This is a really good way of looking for jobs too

lowki jammin @0xlowki
these GPT rate limits are gonna be the boomer things we reminisce about

lowki jammin Retweeted
go follow my other account @my_old_acount
Just made a handy tool to use #GPT4 on macOS! Select text, right click (or keyboard shortcut), & a pop-up window appears for quick commands.🪄✨ Summarize, translate, reformat, debug -- without switching apps. Meet Lookie⬇️

lowki jammin @0xlowki
@lsukernik need to define fully onboarded, i often feel like a lot of "crypto native" devs still have a good amount to go

Larry Sukernik @lsukernik
Is anyone tracking the average time it takes a new developer to onboard to crypto?

lowki jammin @0xlowki
@SubsetTopology we have no idea which signatures are real and which aren't lmao, it's a meaningless list alone

lowki jammin @0xlowki
with LLMs, sales teams everywhere have finally cracked engineer-to-customer translation

lowki jammin @0xlowki
I expect to see the US military increasingly leveraged as dollar reserve status is questioned more and more