Alex

678 posts

Alex
@AlexanderMoini

founder @ Theus AI bringing radical liquidity to CRE

DTX · Joined December 2022
1.1K Following · 519 Followers

Pinned Tweet
Alex
Alex@AlexanderMoini·
increasing tokens/sec is increasing intelligence
2
1
11
3.3K
Alex
Alex@AlexanderMoini·
who knows. i'm optimistic that there is value to be created & captured on the application layer (without also being in the model layer), but it's less obvious where that is for a coding-specific app layer. model routing and capturing value from that is one, but seems weak. model routing within a single session is obviously bad because no cache = more expensive. model routing for sub-agent patterns seems more reasonable, but it's unclear whether that's sufficient for differentiation btwn app and model layer. there are probably other ways tho
0
0
0
16
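The sub-agent routing pattern above can be sketched minimally. This is a hypothetical illustration — the `Agent`/`Router` classes and model names are made up, and no real provider API is called — but it shows why routing per sub-agent keeps each session's cached prefix intact, while switching models mid-session would throw it away:

```python
class Agent:
    """A session pinned to one model so its prompt-cache prefix stays valid."""
    def __init__(self, model: str):
        self.model = model
        self.history: list[str] = []  # cached prefix only ever grows

    def run(self, task: str) -> str:
        self.history.append(task)
        # a real implementation would call the model provider here
        return f"[{self.model}] handled: {task}"


class Router:
    """Routes whole sub-tasks to specialized agents, never mid-session."""
    def __init__(self):
        # model names are assumptions for illustration
        self.agents = {
            "plan": Agent("big-reasoning-model"),
            "edit": Agent("fast-cheap-model"),
        }

    def dispatch(self, kind: str, task: str) -> str:
        # swapping models *within* one agent's session would invalidate its
        # cached prefix; routing at the sub-agent boundary avoids that cost
        return self.agents[kind].run(task)


router = Router()
print(router.dispatch("plan", "outline the refactor"))
print(router.dispatch("edit", "apply step 1"))
```

The design choice being sketched: the cache-friendly boundary for model routing is the sub-agent, not the turn.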
Rithvik
Rithvik@rithvanga·
@AlexanderMoini If model providers can do the same if not better for cheaper with a crazy emphasis on coding what does that mean for the future?
1
0
1
29
Rithvik
Rithvik@rithvanga·
Cursor is losing because they can't RL foundational SOTA reasoning models for their harness. Only model providers can now win the AI coding wars. Chinese model provider distillation isn't sustainable and will always be behind the frontier.
21
2
115
18.5K
Alex
Alex@AlexanderMoini·
@SecWar @DOWResponse total miss man. frustrating to see this fumble from you guys.
0
0
0
175
Secretary of War Pete Hegseth
This week, Anthropic delivered a master class in arrogance and betrayal as well as a textbook case of how not to do business with the United States Government or the Pentagon.

Our position has never wavered and will never waver: the Department of War must have full, unrestricted access to Anthropic’s models for every LAWFUL purpose in defense of the Republic. Instead, @AnthropicAI and its CEO @DarioAmodei, have chosen duplicity. Cloaked in the sanctimonious rhetoric of “effective altruism,” they have attempted to strong-arm the United States military into submission - a cowardly act of corporate virtue-signaling that places Silicon Valley ideology above American lives.

The Terms of Service of Anthropic’s defective altruism will never outweigh the safety, the readiness, or the lives of American troops on the battlefield. Their true objective is unmistakable: to seize veto power over the operational decisions of the United States military. That is unacceptable.

As President Trump stated on Truth Social, the Commander-in-Chief and the American people alone will determine the destiny of our armed forces, not unelected tech executives.

Anthropic’s stance is fundamentally incompatible with American principles. Their relationship with the United States Armed Forces and the Federal Government has therefore been permanently altered. In conjunction with the President's directive for the Federal Government to cease all use of Anthropic's technology, I am directing the Department of War to designate Anthropic a Supply-Chain Risk to National Security.

Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic. Anthropic will continue to provide the Department of War its services for a period of no more than six months to allow for a seamless transition to a better and more patriotic service.

America’s warfighters will never be held hostage by the ideological whims of Big Tech.

This decision is final.
10.5K
11.1K
71.1K
13.2M
Alex
Alex@AlexanderMoini·
@MrBeast 5m straight into spacex ipo
0
0
0
49
MrBeast
MrBeast@MrBeast·
If you won Beast Games would you rather take $5,000,000 upfront or $50,000 a month for life?
23.4K
1.9K
54.8K
11.3M
Alex
Alex@AlexanderMoini·
@Austen nothing they suck. benchmarkmaxxed models.
0
0
0
224
Austen Allred
Austen Allred@Austen·
Every local model I try kinda sucks and isn’t near close enough to frontier models to justify buying a ton of expensive hardware to run them on. What am I missing?
230
6
473
78K
Alex
Alex@AlexanderMoini·
could be better via:
- 2D output space vs. 32k-128k (whatever vocab sizes are at these days)
- allows for lower-level optimizations rather than having to work on “dumb” human abstractions
- massive data scale is attainable. prob works well with RL because of that
- etc.

could also be worse because:
- much longer output seq length to maintain coherence over
- like you said, no room for any errors. could be mitigated with good RL & inference self-healing loops, but still not trivial
- etc.

more likely scenario is that they’re used for very specific systems at first that are small components of broader systems that use traditional coding models/languages (i.e. rust/python/etc.).
0
0
0
18
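A back-of-envelope calculation makes the sequence-length tradeoff above concrete. The numbers are assumptions for illustration (a ~4-chars-per-token BPE average on code, a 40 KB source artifact), not measurements:

```python
# Illustrative only: how output-vocabulary size trades against the number
# of autoregressive steps needed to emit the same artifact.
source_bytes = 40_000                 # assumed size of a small program
chars_per_bpe_token = 4               # assumed BPE compression on code

bpe_tokens = source_bytes // chars_per_bpe_token   # vocab ~32k-128k
byte_tokens = source_bytes                         # vocab 256 (raw bytes)
bit_tokens = source_bytes * 8                      # vocab 2 (raw bits)

# same artifact, but ~4x / ~32x more steps to stay coherent over
print(bpe_tokens, byte_tokens, bit_tokens)
```

Under these assumptions the low-level model must hold coherence over an output tens of times longer, which is the "much longer output seq length" cost named above.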
Alex
Alex@AlexanderMoini·
@FrankRundatz @KocoEqualsLoco @lucas_montano all i’m refuting is your claim that the data doesn’t exist man. i would also disagree that it’s “so obviously worse” than higher level language gen. but that’s a separate topic that’s more speculative.
1
0
1
24
montano
montano@lucas_montano·
unpopular opinion: ai creating binary directly is a dumb idea no matter how many billions you have in your account
195
52
1.9K
144.6K
Alex
Alex@AlexanderMoini·
@FrankRundatz @KocoEqualsLoco @lucas_montano to be clear, i’m not saying i could afford the training, but conceptually the “how to do this” is fairly simple. the notion that there’s no way to train a model on this because the data doesn’t exist is false.
1
0
0
36
Frank Rundatz
Frank Rundatz@FrankRundatz·
@AlexanderMoini @KocoEqualsLoco @lucas_montano Do you have any idea how expensive training is? Do you know how often compilers are updated? You ding dongs are proposing a solution that is impossible AND has no benefit. I cannot believe we are even having this conversation.
Frank Rundatz tweet media
1
0
0
160
Frank Rundatz
Frank Rundatz@FrankRundatz·
AI trained on Stack Overflow to learn how to turn English into Source Code. Where is the training material for AI to learn how to turn English into Binary Code? We’ll start there, and if you can answer that with an IQ above 6 I’ll ask you the other questions that make this impossible.
Frank Rundatz@FrankRundatz

With the AI model struggling to create a 16 hour clock, it’s important to assess the implications of your response. The LLM initially failed to create something outside of its training material. This distinction is very important because it is the difference between reasoning and copying. This shows the future that we have in store for us with LLM-based AI, which I’ll touch on later.

You then cajoled it through prompting to sort of create the thing outside of its training material. But is it accurate even after cajoling it multiple times? No. For example, the hour hand is about 80% of the way to 14 but the minute hand is pointing at 2. To be logically consistent, the minute hand should be around 11.

What did the LLM do to finally give you what you wanted? The 99.99% of training material that it was trained on acts as a huge "gravity well" pulling it toward drawing a 12 hour clock. Your cajoling through negative prompting ("No, stop it. I want a 16 hour clock, not a 12 hour clock") pulls it a "16" token and a "linear sequence" token away from its training material of a "12 hour clock".

This has many ramifications for real-world results. One, this shows that LLMs really struggle to go beyond their training material. And the hyperparameters that allow them to do this (ex: top-p, top-k, temperature) have horrible side effects - hallucinations. This flaw has been in LLMs since they were first created and will remain forever. It is an immutable side-effect of the technology.

Two, this effect is easy to spot in a flawed 16 hour clock. It is very difficult to spot in flawed code or a flawed legal brief.

Three, Google had a symbiotic relationship with Content Creators. A User searched, Google provided results driving the User to a Content Creator's website. User won, Google won, Content Creator won. AI Companies have a parasitic relationship with Content Creators. AI Companies steal all of the content and host it themselves. In the short term, Users and AI Companies win, Content Creators lose.

Some people hand-wave this problem away by saying that LLMs will get so good they won't need Content Creators anymore. But your response shows that AI Companies will always need Content Creators. LLMs can program well because sites like Stack Overflow taught them how to program in a Problem->Solution format. Today, Stack Overflow is effectively dead. LLMs cannot learn by reading the documentation and coming up with novel solutions. We only think they can because they've been trained on actual thinking. How will LLMs stay up to date when there's no more human-generated content to train on?

4
0
4
631
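The sampling hyperparameters named above (temperature; top-p and top-k behave analogously) can be sketched in a few lines. The logits below are a toy stand-in for the dominant "12 hour clock" continuation versus the rare "16 hour clock" one: raising temperature flattens the distribution, which is what lets a model leave its training-material mode — and also what spreads probability onto wrong tokens.

```python
import math

def softmax_with_temperature(logits: list[float], t: float) -> list[float]:
    """Softmax over logits scaled by temperature t (higher t = flatter)."""
    scaled = [x / t for x in logits]
    m = max(scaled)                       # subtract max for stability
    exps = [math.exp(x - m) for x in scaled]
    z = sum(exps)
    return [e / z for e in exps]

# toy logits: index 0 = the "12 hour clock" mode, index 1 = "16 hour clock"
logits = [5.0, 1.0]
cold = softmax_with_temperature(logits, 0.5)  # sharpens the mode
hot = softmax_with_temperature(logits, 2.0)   # flattens toward the rare option
print(cold, hot)
```

At t=0.5 nearly all mass sits on the mode; at t=2.0 the rare continuation gets roughly 12% — the same knob that enables novelty also enables error.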
Alex
Alex@AlexanderMoini·
@allgarbled personal software shift + vibers burning tokens for vibes sake
0
0
0
6
gabe
gabe@allgarbled·
Everybody’s talking about their lines of code. Lot fewer people posting their projects. That’s what I’m noticing.
120
28
1.3K
113.4K
Alex
Alex@AlexanderMoini·
@nickfloats wtf is a "builder" that wasn't already an idea guy? bro you're just an engineer then. your identity was wrong anyways lol
0
0
0
77
Nick St. Pierre
Nick St. Pierre@nickfloats·
Big identity crisis in many engineering circles rn. People who've historically considered themselves "builders" now realizing they aren't the ones building shit anymore, AI is. The moral superiority of the "I build things, you just talk" mentality is irrelevant now that the coding language is english and anyone can build things by talking. The skills that made them so economically valuable are almost fully commoditized, and they're being forced to adopt a new identity. An identity most of them despise and have mocked their entire careers. To remain relevant, they must become the "idea guy"
386
197
2.7K
599.7K
Alex
Alex@AlexanderMoini·
@marclou incredible timing for sonnet 5
1
0
0
1K
Marc Lou
Marc Lou@marclou·
The terminal was never meant to be where humans build apps
Marc Lou tweet media
358
11
1.3K
320.8K
Alex
Alex@AlexanderMoini·
@thedulab certainly a nice thought
0
0
0
35
du
du@thedulab·
If you're aware that the term "permanent underclass" even exists then it's clear you'll never be a part of it. No shot. On this app all day exposing yourself to every opinion about what the future might look like, you've gotten to a point where you've essentially become an expert in predicting exactly how you might be cooked. With that level of critical thinking there's just no reality where you'll ever allow it to happen to you. Even seeing this on your feed is already proof of that.

Seriously though, you have to give yourself some more credit. You're not some useless sack of flesh and you never were. The fact that this stuff is even on your mind means you care enough about your life to never give up. Keep moving forward and trust in your ability to adapt to whatever life throws at you. No need to overly fear because you'll always be able to figure it out. Impossible not to figure it out being surrounded by so many bright minds. Believe it or not, you're one of them.
273
753
11.2K
639K
Henri Fjord
Henri Fjord@henri_fjord·
Why didn’t anyone think of recruiting @georainbolt for the Epstein investigation earlier?
73
1.4K
11.2K
538.1K
sudolabel
sudolabel@sudolabel·
i just wanna say get mogged
sudolabel tweet media
82
59
4.8K
356.9K
Avi
Avi@AviSchiffmann·
What are some good subjects for someone just getting into learning
112
6
253
39.4K
New
New@newsystems_·
Find out what Brampton unlocks for you. Early access starts right now. Reply with "brampton" to enroll.
466
4
207
83.4K
New
New@newsystems_·
It's finally here: Brampton Brampton is the world's most intelligent, creative, and fastest model. Brampton dramatically outperforms Grok 3, Claude 3.7 Sonnet, and GPT 4.5. Reply with "brampton" for early access.
4K
215
3.8K
2M
Alex
Alex@AlexanderMoini·
@EMostaque optimal capital allocation to be achieved.
0
0
9
1.9K
Emad
Emad@EMostaque·
Cost less to train GPT-4o, Claude 3.5, R1, Gemini 2 & Grok 3 than it did to make Snow White. Still early
56
151
2.5K
181.9K
Alex
Alex@AlexanderMoini·
@tobi @teknium @RifeWithKaiju @TiggerSharkML I just don’t understand how OpenAPI and other standards don’t already provide the same experience. i.e. FastAPI with a /docs default. Feels like differing standards for humans & LLMs is a huge anti-pattern when it can be avoided.
0
0
2
138
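The overlap being pointed at can be shown directly: a machine-readable OpenAPI operation already carries what an LLM function-calling tool schema needs. The spec fragment and converter below are a simplified sketch (real OpenAPI operations also describe request bodies, responses, auth, etc.), and the `get_weather` operation is made up for illustration:

```python
# One OpenAPI operation (hand-written fragment, illustrative only)
openapi_op = {
    "operationId": "get_weather",
    "summary": "Get current weather for a city",
    "parameters": [
        {"name": "city", "in": "query", "required": True,
         "schema": {"type": "string"}},
    ],
}

def to_tool_schema(op: dict) -> dict:
    """Derive a function-calling tool schema from an OpenAPI operation."""
    params = op.get("parameters", [])
    return {
        "name": op["operationId"],
        "description": op.get("summary", ""),
        "parameters": {
            "type": "object",
            "properties": {p["name"]: p["schema"] for p in params},
            "required": [p["name"] for p in params if p.get("required")],
        },
    }

print(to_tool_schema(openapi_op))
```

If one description can be mechanically derived from the other, maintaining separate human-facing and LLM-facing standards is duplicated effort — which is the anti-pattern claim above.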
tobi lutke
tobi lutke@tobi·
@teknium @RifeWithKaiju @TiggerSharkML yea. In sidekick we see that people ask to identify top customers and make a customer segment from them, it then goes and runs multiple analytical queries and moves on to create it. Try it with claude desktop yourself to see in action, its 90% UX but UX matters a lot
3
0
6
721
Teknium (e/λ)
Teknium (e/λ)@Teknium·
I still dont get what mcp is for we already had to setup endpoints to run function calls why is it different please make it make sense
122
19
798
193.9K