lowki jammin
1.2K posts

lowki jammin
@0xlowki
beats, jams, and contracts. Internet money bitch
Joined September 2021
296 Following · 1.4K Followers
Pinned Tweet

@abacaj if nothing else we'll just continue to amass quality datasets, which will hold value as learning algos continue to improve

@rezkhere the only people who see bard as competition are still using gpt3.5, or don't actually use AI at all

@tszzl unfortunately we'd probably just think the models are hallucinating

@simonw Anyone who puts Bard close to GPT4 in performance is out of their mind

Leaked Google document: “We Have No Moat, And Neither Does OpenAI”
The most interesting thing I've read recently about LLMs - a purportedly leaked document from a researcher at Google talking about the huge strategic impact open source models are having
simonwillison.net/2023/May/4/no-…

@_Dave__White_ @OpenAI ex: the best compression we have for 3D models is overtraining a model on one data sample, and the weights serve as your compression vector
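A minimal sketch of the idea above, assuming a toy 1D signal in place of a real 3D asset (the network, hyperparameters, and signal are illustrative, not from the thread): overfit a tiny coordinate network to one sample and treat its weights as the compressed representation, in the spirit of implicit-neural-representation codecs.

```python
import torch
import torch.nn as nn

# Toy "asset": a 1D signal sampled at 4096 coordinates
# (a stand-in for a 3D model or image).
xs = torch.linspace(-1.0, 1.0, 4096).unsqueeze(1)
signal = torch.sin(torch.pi * xs) + 0.3 * torch.sin(9.0 * xs)

# Tiny coordinate MLP: coordinate -> value. Its weights are the "file".
net = nn.Sequential(
    nn.Linear(1, 32), nn.Tanh(),
    nn.Linear(32, 32), nn.Tanh(),
    nn.Linear(32, 1),
)

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(2000):  # deliberately overtrain on this single sample
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(xs), signal)
    loss.backward()
    opt.step()

# "Compression": persist only the weights; decode via a forward pass.
n_params = sum(p.numel() for p in net.parameters())
print(f"{n_params} weights stand in for {signal.numel()} samples")
reconstruction = net(xs)  # decode
```

Decoding is just a forward pass over the coordinates, and the compression ratio is weights stored versus samples represented.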

@_Dave__White_ @OpenAI embedding vectors are a compression of sorts, dependent on model architecture. There are plenty of examples though where the state of the art on compression is not through an embedding

can one take an embedding vector of the type you'd get from the @OpenAI API and reproduce the text that created it?
or, can i average out several embedding vectors and use them as shared context for a new inference step?
is there any interesting research in this vicinity?
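On the second question, a hedged sketch of averaging embeddings with the openai Python client (the model name and texts are my assumptions, not from the tweet). On the first: the API itself offers no call that maps a vector back to its source text, though embedding-inversion research in this vicinity does exist (e.g. vec2text).

```python
# Sketch: average several embedding vectors into one centroid and use it
# for similarity lookups. Model name and texts are illustrative assumptions.
import numpy as np
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

texts = ["beats", "jams", "smart contracts"]
resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
vecs = np.array([d.embedding for d in resp.data])

# Mean of the vectors, re-normalized: a crude centroid of the inputs.
centroid = vecs.mean(axis=0)
centroid /= np.linalg.norm(centroid)

# The centroid can rank related texts, but it is lossy: no API call maps
# a vector back to the text that produced it.
q = client.embeddings.create(model="text-embedding-3-small",
                             input=["music"]).data[0].embedding
q = np.array(q)
print("cosine similarity:", float(centroid @ q / np.linalg.norm(q)))
```

Note that with the hosted API an averaged vector can only drive retrieval-style uses; the chat and completion endpoints take tokens, not vectors, as context.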

@iamgingertrash also if you match f(x) cheaper than the original creation, you could potentially RLHF to f(x) + 1 with a lower total cost (seems like shaky ground but I wouldn't outright dismiss it)

@iamgingertrash the goal isn't f(x) + 1 but simply to match f(x) though, no?

Recursive training is a dead end (training on generated output)
If a particular model's performance is f(x), you cannot get f(x)+1 by using data whose quality is inherently bounded to f(x)
Teknium (e/λ) @Teknium
Well, here's a dataset of 2.58m gpt3.5-turbo prompt:response pairs - almost 52x larger than Alpaca's dataset. If only we had gpt4 instead of 3 here xD But nonetheless, intensely useful: huggingface.co/datasets/MBZUA…
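A minimal sketch of the bound being argued, with toy stand-in models (hypothetical, not the linked dataset's setup): when the only training signal is a teacher's outputs, the loss is minimized exactly when the student reproduces the teacher, so data capped at f(x) caps the student at f(x).

```python
# Sketch: a student trained purely to imitate a teacher's outputs
# optimizes "match f(x)", not "exceed f(x)". Toy models throughout.
import torch
import torch.nn as nn
import torch.nn.functional as F

teacher = nn.Linear(16, 4)   # frozen stand-in for gpt-3.5-turbo
student = nn.Linear(16, 4)   # model fine-tuned on teacher generations
teacher.requires_grad_(False)

opt = torch.optim.Adam(student.parameters(), lr=1e-2)
for _ in range(500):
    x = torch.randn(64, 16)                    # "prompts"
    with torch.no_grad():
        target = teacher(x).softmax(dim=-1)    # teacher "responses"
    loss = F.kl_div(student(x).log_softmax(dim=-1), target,
                    reduction="batchmean")     # objective: agree with teacher
    loss.backward()
    opt.step()
    opt.zero_grad()

# The global optimum of this loss is student == teacher's distribution:
# the training signal itself never rewards surpassing the teacher.
```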

@krishnanrohit the point is that solving those error manifolds says roughly nothing about interpretability / explainability. Regardless of your views of Yudkowsky, this point stands. And there is no clear path as of yet toward alignment (if anything, deeper models are harder to constrain)

@lsukernik so like many things, depends on the reasoning. Quality of team and execution universally applies tho

@lsukernik will add the caveat though that outsiders often have naive perspectives on companies, industries etc. You can even be in the *same industry* and have a very poor understanding of what customer needs and company solutions *actually* look like
lowki jammin retweeted

Just made a handy tool to use #GPT4 on macOS!
Select text, right click (or keyboard shortcut), & a pop-up window appears for quick commands.🪄✨
Summarize, translate, reformat, debug -- without switching apps.
Meet Lookie⬇️

@lsukernik need to define fully onboarded, i often feel like a lot of "crypto native" devs still have a good amt to go

@SubsetTopology we have no idea which signatures are real and which aren't lmao, it's a meaningless list alone