Dat boiii

29 posts

@BoiiiDat9998

Joined March 2024
96 Following · 3 Followers
Dat boiii reposted
Srishti @srishticodes
Stanford just made a $200,000 AI degree free. No application. No tuition. No “elite access”.

Stanford released its actual AI/ML curriculum on YouTube. Not a PR-friendly intro. Not “AI for the public”. This is the real thing. The same lectures shaping people working on frontier models.

What just became public:
Deep Learning (CS230) → youtube.com/playlist?list=…
Transformers & LLMs (CME295) → youtube.com/playlist?list=…
Language Models from Scratch (CS336) → youtube.com/playlist?list=…
ML from Human Feedback (CS329H) → youtube.com/playlist?list=…
Computer Vision (CS231N) → youtube.com/playlist?list=…
LLM Evaluation & Scaling → youtube.com/playlist?list=…

The uncomfortable truth: the degree isn’t the scarce asset anymore. Execution speed is. Top schools know this. That’s why they’re publishing the playbook.

👉 Bookmark this. Comment the first lecture you’ll actually watch.
[image attached]
415 replies · 5.1K reposts · 29K likes · 4.1M views
Dat boiii reposted
Steve the Beaver @beaversteever
dropshippers upgraded from Amazon 🥀
[image attached]
1 reply · 4 reposts · 31 likes · 2.1K views
Dat boiii reposted
Aakash Gupta @aakashgupta
Microsoft just dropped a study showing the 40 jobs most at risk from AI and the 40 most secure.
[image attached]
98 replies · 110 reposts · 683 likes · 368.3K views
Dat boiii reposted
GREG ISENBERG @gregisenberg
MCKINSEY JUST DROPPED THEIR 2025 AI REPORT. HERE’S THE TLDR:

1/ 90% of companies “use AI,” but 67% are still stuck in pilot mode. Corporate AI theater is alive and well lol.
2/ 62% of orgs are experimenting with AI agents, 23% are scaling AI agents. Most are in tech and healthcare.
3/ The impact gap is massive. 64% say AI helps innovation, but only 39% see real EBIT gains.
4/ The high performers (top 6%) think bigger. They rebuild workflows, set growth goals, and invest real budgets, not just POCs.
5/ Leaders who own AI personally are 3x more likely to scale it. Makes sense.
6/ The winners use AI to transform how work gets done, not just speed it up.
7/ The average company measures efficiency. The best ones measure how fast their agents can act.
8/ Risk management is catching up: 51% have already seen AI backfire, mostly from inaccuracy.
9/ The workforce impact is foggy. 32% expect cuts, 13% expect growth, everyone else is guessing.
10/ AI adoption is mainstream, but true transformation hasn’t started. Early days.
[4 images attached]
203 replies · 999 reposts · 5.7K likes · 834.8K views
simp 4 satoshi @iamgingertrash
Here is an OpenAI document submitted one week ago in which they advocate for including datacenter spend within the “American manufacturing” umbrella. There they specifically advocate for federal loan guarantees. Sam lied to everyone, again.
[image attached]
232 replies · 1.5K reposts · 8.5K likes · 2M views
Manus @ManusAI
Build AI web apps with 1 trillion free tokens.

We’re giving away 1 trillion LLM tokens so anyone can build AI-native web apps with Manus, completely free.

Want even more? Share your project on social media with the hashtag #BuiltwithManus AND submit it via this form: form.typeform.com/to/VL2JhE1X to earn 100,000 additional Manus credits!
[GIF attached]
56 replies · 102 reposts · 708 likes · 60.6K views
Dat boiii reposted
OpenAI @OpenAI
Meet our new browser—ChatGPT Atlas. Available today on macOS: chatgpt.com/atlas
2.3K replies · 4.2K reposts · 29.8K likes · 14M views
Dat boiii reposted
Andrej Karpathy @karpathy
I quite like the new DeepSeek-OCR paper. It's a good OCR model (maybe a bit worse than dots), and yes, data collection etc., but anyway it doesn't matter.

The more interesting part for me (esp. as a computer vision person at heart who is temporarily masquerading as a natural language person) is whether pixels are better inputs to LLMs than text. Whether text tokens are wasteful and just terrible at the input. Maybe it makes more sense that all inputs to LLMs should only ever be images. Even if you happen to have pure text input, maybe you'd prefer to render it and then feed that in:

- More information compression (see paper) => shorter context windows, more efficiency.
- Significantly more general information stream => not just text, but e.g. bold text, colored text, arbitrary images.
- Input can now be processed with bidirectional attention easily and as default, not autoregressive attention. A lot more powerful.
- Delete the tokenizer (at the input)!! I already ranted about how much I dislike the tokenizer. Tokenizers are ugly, separate, non-end-to-end stages. They "import" all the ugliness of Unicode and byte encodings, inherit a lot of historical baggage, and add security/jailbreak risk (e.g. continuation bytes). They make two characters that look identical to the eye appear as two completely different tokens internally in the network. A smiling emoji looks like a weird token, not an... actual smiling face, pixels and all, with all the transfer learning that brings along. The tokenizer must go.

OCR is just one of many useful vision -> text tasks. And text -> text tasks can be made to be vision -> text tasks. Not vice versa. So maybe the User message is images, but the decoder (the Assistant response) remains text. It's a lot less obvious how to output pixels realistically... or if you'd want to.

Now I also have to fight the urge to side-quest an image-input-only version of nanochat...
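Not part of the post: a toy Python sketch of the "render text, feed pixels" accounting, purely illustrative. The 16x16 patch size and the ~4-chars-per-token heuristic are assumptions, not numbers from the paper.

```python
# Toy sketch (illustrative assumptions, not the paper's numbers): render a
# string to an image, then compare the raw 16x16 patch count to a naive
# text-token estimate (~4 chars/token).
import textwrap
from PIL import Image, ImageDraw

def render_text(text: str, width: int = 1024, line_height: int = 12) -> Image.Image:
    """Render plain text onto a white canvas with PIL's default bitmap font."""
    lines = textwrap.wrap(text, width=170)  # crude chars-per-line guess
    img = Image.new("RGB", (width, max(16, line_height * len(lines))), "white")
    draw = ImageDraw.Draw(img)
    for i, line in enumerate(lines):
        draw.text((0, i * line_height), line, fill="black")
    return img

doc = "the quick brown fox jumps over the lazy dog " * 100
img = render_text(doc)
raw_patches = (img.width // 16) * (img.height // 16)
naive_text_tokens = len(doc) // 4
print(f"~{naive_text_tokens} text tokens vs ~{raw_patches} raw 16x16 patches")
# Raw patch count alone isn't the win; per the quoted vLLM post below, the
# gains come from an encoder that compresses the visual context (up to ~20x)
# into far fewer vision tokens than raw patches.
```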
Quoting vLLM @vllm_project:

🚀 DeepSeek-OCR — the new frontier of OCR from @deepseek_ai, exploring optical context compression for LLMs, is running blazingly fast on vLLM ⚡ (~2500 tokens/s on A100-40G), powered by vllm==0.8.5 for day-0 model support.
🧠 Compresses visual contexts up to 20× while keeping 97% OCR accuracy at <10×.
📄 Outperforms GOT-OCR2.0 & MinerU2.0 on OmniDocBench using fewer vision tokens.
🤝 The vLLM team is working with DeepSeek to bring official DeepSeek-OCR support into the next vLLM release — making multimodal inference even faster and easier to scale.
🔗 github.com/deepseek-ai/De…
#vLLM #DeepSeek #OCR #LLM #VisionAI #DeepLearning
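For concreteness, a hedged sketch of what offline multimodal inference through vLLM can look like. The model ID and the "<image>" prompt template here are assumptions, since vision-language prompt formats are model-specific; check the model card for the real ones.

```python
# Hedged sketch of vLLM offline multimodal inference. The model ID and prompt
# template are assumptions for illustration; consult the model card for the
# format the checkpoint actually expects.
from vllm import LLM, SamplingParams
from PIL import Image

llm = LLM(model="deepseek-ai/DeepSeek-OCR", trust_remote_code=True)
image = Image.open("scanned_page.png").convert("RGB")

outputs = llm.generate(
    {
        "prompt": "<image>\nTranscribe the document.",  # assumed template
        "multi_modal_data": {"image": image},           # vLLM multimodal input dict
    },
    SamplingParams(temperature=0.0, max_tokens=2048),
)
print(outputs[0].outputs[0].text)
```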

559 replies · 1.6K reposts · 13.3K likes · 3.3M views
Dat boiii reposted
Manus @ManusAI
Introducing Manus 1.5

Faster, better-quality results. Unlimited context. Build full-stack web apps with real AI features, backends, user logins, custom domains and analytics. Run your biz, start your side hustle, or just have fun. If you can dream it, Manus can build it.
265 replies · 526 reposts · 4.5K likes · 4.6M views
Dat boiii reposted
Praise The Camera @praisethecamera
Save this 📍 Best iPhone camera settings for high-resolution photos and stunning 4K quality.
[image attached]
301 replies · 6.5K reposts · 61.4K likes · 9.3M views
Dat boiii @BoiiiDat9998
@MTA why doesn’t your app take Venmo as a payment type?? 😒
0 replies · 0 reposts · 0 likes · 1 view
Dat boiii reposted
Min Choi @minchoi
This is wild. Midjourney Video V1 can generate YouTube and TikTok influencer videos. These are not real 🤯

10 wild examples + prompts:

1. Product review vlog
68 replies · 92 reposts · 736 likes · 186.8K views
Dat boiii reposted
Min Choi @minchoi
RIP Cursor/Windsurf. Claude Code for VSCode 🤯
[image attached]
390 replies · 720 reposts · 10.6K likes · 1.2M views
Ronald Mannak @ronaldmannak
After a weekend trying Claude 4 Sonnet on Cursor for server-side Swift development, I think I can confirm it's probably the best LLM for Swift. However, the bar is low. LLMs are notoriously bad at Swift. It can still screw up a working project with a single innocent-looking prompt (so commit often!). Besides the so-so Swift support, what I really wish for is faster inference. Sonnet on Cursor is slow. The wait is just too long. I wish they'd run Sonnet on Groq or something.
3 replies · 1 repost · 12 likes · 1.3K views
Dat boiii reposted
Teknium 🪽 @Teknium
We retrained Hermes with 5k DeepSeek R1 distilled CoTs. I can confirm a few things:

1. You can have a generalist + reasoning mode. We labeled all long-CoT samples from R1 with a static system prompt; without it, the model gives normal fast intuitive LLM responses, and with it, it uses long CoT. You do not need "o1 && 4o" separation, for instance. I would venture to bet OpenAI separated them so they can charge more, but maybe they just wanted the distinction for safety or product insights.

2. Distilling does seem to pick up the "opcodes" of reasoning from the SFT alone. It learns how and when to use "Wait" and other tokens to perform the functions of reasoning, such as backtracking.

3. Context length expansion is going to be hard for OS to work with. Even though this stuff works well on smaller models, context length starts to eat a lot of VRAM as you scale it up.

We're working on a bit more of this and are not releasing this model, but figured I'd share some early insights.
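Not from the thread: a minimal sketch of the dual-mode labeling described in point 1. The system-prompt string, the <think> wrapping, and the chat schema are illustrative assumptions, not the actual Hermes training format.

```python
# Illustrative sketch of point 1: tag long-CoT samples with one static system
# prompt so a single generalist checkpoint learns both modes. The prompt text
# and schema are assumptions, not the real Hermes data format.
REASONING_SYSTEM_PROMPT = (
    "You are a deep-thinking assistant. Reason at length before answering."
)

def to_sft_sample(question: str, answer: str, long_cot: str | None = None) -> dict:
    """Build one chat-format SFT record; long-CoT samples get the system prompt."""
    messages = []
    if long_cot is not None:
        messages.append({"role": "system", "content": REASONING_SYSTEM_PROMPT})
        answer = f"<think>{long_cot}</think>\n{answer}"  # assumed CoT wrapping
    messages.append({"role": "user", "content": question})
    messages.append({"role": "assistant", "content": answer})
    return {"messages": messages}

# At inference time: include the system prompt for long-CoT mode, omit it for
# the normal fast intuitive response -- one model, two behaviors.
```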
78 replies · 131 reposts · 1.3K likes · 105.2K views
Dat boiii reposted
Mohammad Azam @azamsharp
Swift Concurrency 😡 What happened to the Swift language? Where did we go wrong?
[image attached]
31 replies · 50 reposts · 524 likes · 68.3K views