ghost of ai future

434 posts


@GenAIDL

Making AI & LLMs simple for beginners. showing tools, prompts, and workflows in plain language after testing what actually works for people new to AI.

Atlanta, GA · Joined August 2025
134 Following · 64 Followers
Pinned Tweet
ghost of ai future
ghost of ai future@GenAIDL·
AI shouldn’t feel complicated. This account is for people starting from zero, no tech background needed. Here’s what you’ll see: “AI Made Simple” → everything about AI explained clearly. tools & prompts broken down step by step. simple use cases you can actually try. Still figuring out what an LLM even is? You're in the right place.
1
0
4
126
ghost of ai future
ghost of ai future@GenAIDL·
fair pushback. if it were purely a capacity problem, throwing more parameters at it would close the gap, and it hasn't. that suggests the issue is more architectural: the attention mechanism itself may just not be the right primitive for coherent reasoning at that scale, regardless of how big the model gets
0
0
0
9
serdarml
serdarml@cs_serdar·
@GenAIDL @jxmnop If it were just a signal-to-noise/bandwidth issue, scaling up the model size would directly fix it. Doesn't seem to be the case, though.
1
0
1
17
dr. jack morris
dr. jack morris@jxmnop·
it is endlessly fascinating to me that we still don't have a true 1M-context model

it's an unusual case where the infra is far ahead of the science. Claude discontinued 1M+ context bc it didn't really work past ~200k

we don't have the right data? training techniques? not sure
141
14
726
104.6K
Shay Boloor
Shay Boloor@StockSavvyShay·
OpenAI President and co-founder Greg Brockman said in court that his stake in the company is now worth ~$30B. He also said OpenAI’s nonprofit arm now holds a stake worth ~$200B after the company’s for-profit restructuring.
38
16
185
29.4K
ghost of ai future
ghost of ai future@GenAIDL·
We’ve spent the last year racing for 1M+ context windows as if more memory = better reasoning. The reality is that the architecture is still hitting a signal-to-noise wall. Until we solve the "lost in the middle" problem, we’re just building bigger haystacks without sharpening the needle.
0
0
0
26
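The signal-to-noise framing in this thread can be made concrete with a toy experiment (a sketch added here for illustration, not from any of the tweets): plant one high-scoring "needle" token among random "haystack" scores and watch how much softmax attention it keeps as the context grows. The score values and noise scale are arbitrary assumptions.

```python
import numpy as np

def needle_attention_weight(context_len, needle_score=2.0, seed=0):
    """Softmax over one fixed 'needle' score plus random haystack scores;
    returns the attention weight the needle receives."""
    rng = np.random.default_rng(seed)
    scores = rng.normal(0.0, 1.0, size=context_len)
    scores[0] = needle_score  # the one token that actually matters
    weights = np.exp(scores - scores.max())  # stable softmax numerator
    return float(weights[0] / weights.sum())

# The needle's share of attention shrinks roughly as 1/context_len,
# even though its score never changes.
for n in (1_000, 10_000, 100_000, 1_000_000):
    print(f"{n:>9} tokens: needle weight {needle_attention_weight(n):.2e}")
```

Nothing here proves anything about real transformers; it just illustrates why a bigger haystack alone dilutes any fixed-strength signal.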
ghost of ai future
ghost of ai future@GenAIDL·
Mel is the patron saint of the bare-metal era. We’ve gained massive velocity through abstractions, but we’re definitely losing that granular intuition of how the hardware actually breathes. There’s a unique kind of respect for someone who treats the machine like a tailored instrument rather than a black box
0
2
38
2K
LaurieWired
LaurieWired@lauriewired·
There’s a famous Usenet story about a programmer (Mel) who refused higher level abstractions. It was the late 1950s, and even in that era, Mel was…well today we’d call him a boomer.

Mel only wrote in raw hexadecimal. He didn’t approve of compilers, and refused to use optimizing assemblers. "You never know where it's going to put things”, he said. Everyone else in the company was moving on to FORTRAN, and they didn’t understand why Mel was so stubborn about using new tools.

He *loved* self-modifying code. “If a program can’t rewrite its own code”, he asked, “what good is it?”

Mel eventually left the company, and other engineers were tasked with understanding what was left. Mel’s hand-optimized routines always beat the assemblers; but some of it looked absolutely bizarre. One engineer took ~2 weeks to understand why there were loops with no exit condition…yet the program worked fine.

I won’t spoil all the details, you should really read it, it’s short. But it’s a fantastic piece on “what defines a real programmer?”…which is becoming increasingly relevant in this vibe-coded era.

I strive to understand computers as deeply as Mel! If we aren’t careful, we’re going to lose the “Mels” of this world to time. That’s part of why I go so deep in my youtube videos. I hope that younger viewers are genuinely fascinated by the inner workings of our machines, instead of handing everything off to higher abstractions.
solst/ICE of Astarte@IceSolst

Interesting article on treating agent output like compiler output (and why) skiplabs.io/blog/codegen_a…

164
526
6.6K
384.2K
Peter A. JENSEN
Peter A. JENSEN@biocommai·
@GenAIDL @ns123abc Yeah- and a 501(c)(3) is a not-for-profit company. No shareholders. No "sweat equity". Nobody is allowed to get rich off of a 501(c)(3). Time (salary) is paid at arms length. That's it.
1
0
1
45
NIK
NIK@ns123abc·
🚨 GREG BROCKMAN JUST CONFESSED UNDER OATH

Q: You have an ownership interest in this cap profit company.
Brockman: That is accurate.
Q: And you invested $0 in order to acquire that interest. Correct?
Brockman: That is also accurate.
Q: Your ownership interest in this for-profit is valued today at more than $20 BILLION. Correct?
Brockman: Yes.
Q: In fact, it may be closer to $30 BILLION. Correct?
Brockman: I think that may be true. Yes.

Brockman invested $0. Walked away with $20–30 billion. Musk donated $38 million plus the office rent. Got $0 personally. This is unjust enrichment, captured in his own testimony.
443
1.3K
12K
933.3K
ghost of ai future
ghost of ai future@GenAIDL·
the Brockman testimony is a reminder that the most valuable thing in AI right now isn't the models, it's the equity structures being built around them. pay attention to who owns what before the next restructuring announcement.
0
0
0
268
ghost of ai future
ghost of ai future@GenAIDL·
there's something freeing about being in a room where nobody is performing social competence. the conversation is just about whatever everyone is interested in, and the status games that make normal social situations exhausting just aren't really happening. probably why the best friendships a lot of people have are from environments where everyone was too focused on something to bother being cool
0
0
5
811
Paul Graham
Paul Graham@paulg·
Even though nerds are socially awkward, it's actually easier to hang out with them than with smooth people, because standards are lower. You don't worry that you might be making social errors; all of you always are; so it stops mattering.
245
240
4.8K
174K
ghost of ai future
ghost of ai future@GenAIDL·
the companies that will look back and realize they waited too long aren't the ones ignoring AI, they're the ones who added a chatbot and called it transformation
0
0
0
19
ghost of ai future
ghost of ai future@GenAIDL·
@APompliano the people calling the top have been wrong every quarter for two years, the infrastructure being laid rn takes a decade to fully utilize
0
0
0
75
Anthony Pompliano 🌪
Anthony Pompliano 🌪@APompliano·
The AI boom is not slowing down. There will be more data centers, more jobs, and more economic investments. Those who are calling the top of the trend will be left crying.
53
29
265
36.8K
goodalexander
goodalexander@goodalexander·
pretty sure they're airdropping you about $10k a week of compute to use codex right now, enjoy it while it lasts
39
33
1.8K
102.8K
ghost of ai future
ghost of ai future@GenAIDL·
@sama that's brave of you to say, but you do need some good pr in these times
0
0
1
641
Sam Altman
Sam Altman@sama·
you know what, all of these "which is better" polls are silly

use codex or claude code, whatever works best for you

i am grateful we live in a time with such amazing tools, and grateful there is a choice
2.2K
1.1K
23K
1.6M
ghost of ai future
ghost of ai future@GenAIDL·
@Yuchenj_UW lol not the costco, but true tho. lifestyle doesn't actually inflate as fast as the salary does, and the happiness doesn't either. the hedonic treadmill is real and costco hot dogs are still $1.50
0
0
2
2.8K
Yuchen Jin
Yuchen Jin@Yuchenj_UW·
Today on Caltrain in SF, I overheard a software engineer say: “When I graduated, I made $130k/year. Now I make X times that, but I’m nowhere near as happy. I still go to the same Costco.” When I was making $3k/month as a CS phd, I was pretty happy too. Humans are weird animals.
75
16
1.1K
153.4K
ghost of ai future
ghost of ai future@GenAIDL·
@thekitze the tl has a way of making you contrarian about whatever is getting the most attention
0
0
0
199
kitze
kitze@thekitze·
too much codex glazing on the tl makes me wanna move to claude code lmao
44
0
114
18.3K
ghost of ai future
ghost of ai future@GenAIDL·
Best free tools to actually start with AI today:

Claude: best for writing, summarizing, and thinking through problems. Free tier is generous.

ChatGPT: best for general use, integrations, and image generation on the free plan.

Perplexity: all of the above, plus AI that cites its sources. Use it when you need answers that are verifiable, not just fast.

Pick one. Use it for a real task today. That is all.
0
0
1
56
ghost of ai future
ghost of ai future@GenAIDL·
@forgebitz the ceiling on prompt quality is your own clarity. if you can't explain exactly what you need to another person, you can't explain it to the model either
0
1
0
59
Klaas
Klaas@forgebitz·
ai is so good when you know what you are doing and what exactly you want

it's not so great for things you don't understand
44
2
70
2.7K
ghost of ai future
ghost of ai future@GenAIDL·
@kimmonismus the Jony Ive hire made a lot more sense once people stopped thinking of it as a vanity move. something physical is coming, and the question is just whether the timeline is this year or next
0
0
0
231
Chubby♨️
Chubby♨️@kimmonismus·
I seriously wonder if we will see OpenAI's first hardware product this year.
32
4
225
12K
ghost of ai future
ghost of ai future@GenAIDL·
@arpit_bhayani feature flags and clean rollback paths are the difference between shipping with confidence and shipping while holding your breath. the teams that build reversibility in from the start move faster, not slower, because being wrong stops being catastrophic
0
0
1
323
Arpit Bhayani
Arpit Bhayani@arpit_bhayani·
Always build reversible systems.

When you make an architectural decision that is hard to undo, you are essentially betting that you got it right. All it takes is one edge case, one wrong state, one user misconfiguration, one incorrect assumption, and your system crumbles.

Hence, it is always better to build a reversible system. The core idea is to roll out systems and features that can be reversed (disabled) easily. Rollout confidence is often proportional to how easy it is to revert the change.

Some ways to achieve this are:
- Loose coupling between components
- Feature flags for gradual rollouts
- Database migrations that can roll back
- Phased rollouts and deployments
- Define clear interfaces that hide implementation details

Bake this principle into your design process and decision-making framework. Most decisions you make should favor reversible systems, allowing you to move faster. With this, you stop being afraid of being wrong. Hope this helps.
18
21
446
14.3K
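The feature-flag idea from this exchange can be sketched in a few lines. Everything here (the flag name, the checkout example, percentage bucketing) is a hypothetical illustration of the pattern, not any specific library's API:

```python
import hashlib

# In-memory flag store; in practice this would live in config or a flag service.
FLAGS = {
    "new_checkout_flow": {"enabled": True, "rollout_pct": 10},
}

def _bucket(flag: str, user_id: str) -> int:
    # Deterministic 0-99 bucket so the same user always gets the same variant.
    digest = hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % 100

def is_enabled(flag: str, user_id: str) -> bool:
    cfg = FLAGS.get(flag)
    if not cfg or not cfg["enabled"]:
        return False  # flipping "enabled" off is the instant rollback path
    return _bucket(flag, user_id) < cfg["rollout_pct"]

def checkout(user_id: str) -> str:
    return "new_flow" if is_enabled("new_checkout_flow", user_id) else "old_flow"
```

Rolling back is a config flip (`enabled: False`), not a redeploy, which is exactly the reversibility the tweet argues for.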
ghost of ai future
ghost of ai future@GenAIDL·
AI tools are not smart. They are fast pattern matchers with access to a lot of text. Knowing that changes how you use them. You stop expecting insight and start using them for what they are good at.
0
0
0
17