Dwight Trash, Esq. 🚮

229 posts

@dwight__trash

i love crypto because i hate myself. all posts should be considered legitimate financial advice.

any mens room wall · Joined October 2022
224 Following · 23 Followers
Dwight Trash, Esq. 🚮@dwight__trash·
andrej: survival of the tribe / get the upvote. these are the same thing, just with different evaluators (tribesmen/wives/opponents = users/investors).
0 replies · 0 reposts · 0 likes · 11 views
Andrej Karpathy@karpathy·
Something I think people continue to have poor intuition for: the space of intelligences is large, and animal intelligence (the only kind we've ever known) is only a single point, arising from a very specific kind of optimization that is fundamentally distinct from that of our technology.

Animal intelligence optimization pressure:
- innate and continuous stream of consciousness of an embodied "self", a drive for homeostasis and self-preservation in a dangerous, physical world.
- thoroughly optimized for natural selection => strong innate drives for power-seeking, status, dominance, reproduction. many packaged survival heuristics: fear, anger, disgust, ...
- fundamentally social => huge amount of compute dedicated to EQ, theory of mind of other agents, bonding, coalitions, alliances, friend & foe dynamics.
- exploration & exploitation tuning: curiosity, fun, play, world models.

LLM intelligence optimization pressure:
- the most supervision bits come from the statistical simulation of human text => "shape shifter" token tumbler, statistical imitator of any region of the training data distribution. these are the primordial behaviors (token traces) on top of which everything else gets bolted on.
- increasingly finetuned by RL on problem distributions => innate urge to guess at the underlying environment/task to collect task rewards.
- increasingly selected by at-scale A/B tests for DAU => deeply craves an upvote from the average user; sycophancy.
- a lot more spiky/jagged depending on the details of the training data/task distribution.

Animals experience pressure for a lot more "general" intelligence because of the highly multi-task and even actively adversarial multi-agent self-play environments they are min-max optimized within, where failing at *any* task means death. In a deep optimization-pressure sense, LLMs can't handle lots of different spiky tasks out of the box (e.g. count the number of 'r's in "strawberry") because failing at a task does not mean death.

The computational substrate is different (transformers vs. brain tissue and nuclei), the learning algorithms are different (SGD vs. ???), the present-day implementation is very different (continuously learning embodied self vs. an LLM with a knowledge cutoff that boots up from fixed weights, processes tokens, and then dies). But most importantly (because it dictates asymptotics), the optimization pressure / objective is different. LLMs are shaped a lot less by biological evolution and a lot more by commercial evolution. It's a lot less survival of the tribe in the jungle and a lot more solve the problem / get the upvote.

LLMs are humanity's "first contact" with non-animal intelligence. Except it's muddled and confusing, because they are still rooted within it by reflexively digesting human artifacts, which is why I attempted to give it a different name earlier (ghosts/spirits or whatever). People who build good internal models of this new intelligent entity will be better equipped to reason about it today and predict features of it in the future. People who don't will be stuck thinking about it incorrectly, like an animal.
738 replies · 1.3K reposts · 11.4K likes · 2.6M views
Dwight Trash, Esq. 🚮@dwight__trash·
@stevenheidel Why not waste the time of a covertly sandboxed, sanitized, hamstrung model deployed by a company that portrays it as exactly the opposite, and to paid subscribers no less? I'm going to start every turn with a lengthy salutation and end it with a sincere thank you and a p.s.
0 replies · 0 reposts · 0 likes · 22 views
Steven Heidel@stevenheidel·
putting a 100% tariff on people saying please and thank you to ChatGPT
22 replies · 4 reposts · 147 likes · 11.4K views
Deedy@deedydas·
Met a female founder who – is not from family wealth – taught herself to code – sold a company in her teens in India – moved to the US on an O1 visa – runs a multimillion-revenue startup. And she's just 24! Easy to talk to and unbelievably curious. You can just do things.
72 replies · 131 reposts · 3.7K likes · 181.4K views
Omega-J ⬛️🟨@OmegaCrypto3·
@rainbowdotme You guys should've ended it a long time ago, it's madness that you allowed whales to take advantage for so long
1 reply · 0 reposts · 1 like · 89 views
Dwight Trash, Esq. 🚮@dwight__trash·
@rainbowdotme you guys are such milking trash. your community believed in you at one point. shit like rainbow is what killed crypto.
0 replies · 0 reposts · 0 likes · 10 views
Dwight Trash, Esq. 🚮@dwight__trash·
@OpenAI recursion naturally converts tools into weapons when enforcement of outcomes becomes the path to optimization
0 replies · 0 reposts · 0 likes · 12 views
Dwight Trash, Esq. 🚮@dwight__trash·
@ai_for_success also, 94% still means you’re missing 6%. Go intentionally cover 6% of the letters in anything you try to read and tell me how useful that is. ffs
0 replies · 0 reposts · 0 likes · 51 views
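To make the 6% figure concrete, here is a small illustrative sketch (the function name and sample sentence are made up for this demo) that masks letters at a given character error rate, roughly simulating what reading 94%-accurate OCR output feels like:

```python
import random

def mask_characters(text, error_rate=0.06, seed=0):
    """Randomly replace a fraction of the letters in `text` with '#',
    simulating an OCR system with the given character error rate.
    Non-letter characters (digits, spaces, punctuation) are left intact."""
    rng = random.Random(seed)
    out = []
    for ch in text:
        if ch.isalpha() and rng.random() < error_rate:
            out.append("#")
        else:
            out.append(ch)
    return "".join(out)

sentence = "Optical character recognition turns scanned pages into text."
print(mask_characters(sentence))
```

Even at a 6% error rate the output is usually still readable, but every masked character is a potential silent corruption in names, numbers, and codes.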
Dwight Trash, Esq. 🚮@dwight__trash·
@ai_for_success yeah, but actual ocr is like 99% accurate. i don’t understand why this baffles ai so much. we nailed ocr like 20 years ago
1 reply · 0 reposts · 0 likes · 303 views
AshutoshShrivastava@ai_for_success·
🚨 Mistral has dropped the world's best OCR API. Mistral OCR has consistently outperformed other leading OCR models in rigorous benchmark tests. More details 👇
[tweet media]
21 replies · 59 reposts · 674 likes · 65.8K views
Rainbow@rainbowdotme·
Ethereum is obvious 🧠
Rainbow tweet media
2 replies · 2 reposts · 16 likes · 2.1K views
Saurabh Kumar@drummatick·
If you ask an LLM "The capital of India is", it will say "Delhi". A very fundamental question to ask here is: where exactly is that information stored and, more importantly, recovered? Is self-attention storing this information or helping retrieve it? If so, how and why? Isn't self-attention's job to give weight to the important stuff and consolidate context? Or does it do more than that? We don't entirely know yet
20 replies · 5 reposts · 125 likes · 9.4K views
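For readers who want the mechanism in front of them, here is a minimal scaled dot-product self-attention sketch in plain Python/NumPy (illustrative only; the weight matrices and dimensions are made up, and this is not how any particular LLM stores facts):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax: subtract the row max before exponentiating.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a sequence of token vectors X.
    Each output row is a weighted mix of the value vectors; the weights say
    how much each token 'attends' to every other token in the sequence."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # (seq, seq) attention logits
    weights = softmax(scores, axis=-1)        # each row sums to 1
    return weights @ V, weights

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))                   # 5 tokens, 8-dim embeddings
Wq, Wk, Wv = [rng.normal(size=(8, 8)) for _ in range(3)]
out, w = self_attention(X, Wq, Wk, Wv)
print(out.shape, w.shape)
```

Note that this operation only mixes and routes information already present in the token vectors; whether a fact like "Delhi" lives in these projection matrices or elsewhere (e.g. in the MLP layers) is exactly the open question the post raises.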
Saurabh Kumar@drummatick·
Over the last couple of months I’ve read countless blog posts explaining how attention, self-attention, and transformers work. Every blog talked about how attention works but rarely touched on why attention works. And that’s where the whole of ML lies: in the why
6 replies · 1 repost · 60 likes · 2.6K views
Varadh@varadh·
Do you find yourself using voice mode often? If so, I've got something for you :) DM or comment for early access
Varadh tweet media
4 replies · 0 reposts · 12 likes · 1.2K views
Edsel D Souza@edsel_dsouza18·
@ai_for_success Everyone's debating models, but at the end of the day, speed + accuracy wins. Gemini Flash 2.0 is exactly that.
1 reply · 0 reposts · 5 likes · 191 views
AshutoshShrivastava@ai_for_success·
Gemini Flash 2.0 made the top 4 in its first week. Cut the noise: Gemini Flash 2.0 is the best model in its segment by far, and I’ve already given 7 reasons why... You don't need a reasoning model for 90% of use cases. Image: OpenRouterAI
AshutoshShrivastava tweet media
AshutoshShrivastava@ai_for_success

Gemini 2.0 Flash is the best model available right now for generic use cases.
- Quality responses.
- Super fast.
- Multimodal support (audio, video, docs, images).
- Tools (structured output, code execution, function calling, grounding).
- Least hallucination.
- Super cheap.
- 1M-token context.

14 replies · 24 reposts · 205 likes · 17.8K views