LazerLars (@LazerLars1)

1.8K posts

burkanCITY bby

Denmark · Joined December 2011
523 Following · 114 Followers
LazerLars reposted
Victor M (@victormustar)
NVIDIA's Kimodo is the release of the week 🔥 Prompt the timeline with whatever you want, like: "a person walks forward" → "a person starts jumping", hit Generate, and watch a 3D character do it in seconds (700hrs of pro mocap training. Works on human + robot skeletons. Super fast + free to use on HF)
57 replies · 393 reposts · 3.1K likes · 382K views
LazerLars reposted
𝕾𝖎𝖗 𝕮𝖍𝖗𝖎𝖘
I once worked with someone extremely wealthy, and what stood out was their mindset: they never complained about small inconveniences. Their coffee order was wrong? They just drank it. Flight delayed? They pulled out a book. They had this quiet acceptance that some things simply aren't worth the emotional energy. While the rest of us stressed over what we couldn't control, they had already shifted focus to what they could control. It wasn't really about money solving problems… it was about having enough security that they didn't feel the need to fight every battle... they could afford, mentally and emotionally, to let things go.
✦ 𝓢𝓱𝓲𝓷𝓮 ✦🪐 (@upshine3)

For those who work for rich people, what's the most out-of-touch thing you witnessed?

227 replies · 3.5K reposts · 23.1K likes · 2.6M views
LazerLars reposted
IT Unprofessional (@it_unprofession)
I used to think “work-life balance” meant finishing work at 5. Then I realized my real problem wasn’t hours, it was boundaries. I’d close my laptop and keep working in my head until midnight. Running fake arguments. Replaying meetings. Drafting emails I’d never send. Now my rule is: if it can’t be written down as a task, it doesn’t get space in my brain after hours. Worry is not a deliverable. Anxious overthinking has never once closed a deal, shipped a feature, or fixed a bug. Action tomorrow? Yes. Mental torture tonight? Hard pass. Turns out, “log off” is mostly an inside job.
18 replies · 82 reposts · 833 likes · 54.4K views
A I _ S A U C E (@aisauce_x)
The things that got cheaper are the things nobody could stop from getting cheaper. The things that stayed expensive are the things somebody had an interest in keeping expensive. Technology deflated because there was no lobby for keeping projectors at $3,000. Housing inflated because there is absolutely a lobby for keeping housing scarce. That's not a market. That's a decision.
5 replies · 14 reposts · 265 likes · 45.9K views
Aakash Gupta (@aakashgupta)
This photo is the entire American cost curve story in a single frame.

You can buy a 1080p mini projector on Amazon for $49. A refurbished laptop for $150. A Bluetooth speaker for $25. Total home theater setup: under $250, and the prices dropped 95% in the last 15 years.

A studio apartment in Hollywood rents for $1,986 per month. That's $23,832 per year for 518 square feet. The same neighborhood where this tent is pitched.

Technology followed a deflation curve. Computing power per dollar doubles roughly every two years. Storage costs collapse. Screens get cheaper. A device that cost $3,000 in 2010 costs $49 in 2026 and fits in your palm. The projector in this tent is almost certainly better than anything a mid-tier hotel offered 10 years ago.

Housing followed an inflation curve. LA rents are up 65% over the last decade. The county has 72,308 people experiencing homelessness. The median rent requires an income of roughly $110,000 to afford without being cost-burdened. California added about 100,000 housing units per year while needing 180,000.

So this is what happens when one cost curve goes to zero and the other goes vertical. You get a man projecting a movie onto the wall of a tent in Hollywood because the entertainment is the cheap part. The four walls are the expensive part. A projector costs $49. A studio in Hollywood costs $24,000 a year. 72,000 people in LA County can afford the home theater. The door that locks is the part that broke.

That ratio tells you everything about which problems we solved and which ones we chose not to.
Hoops (@Hoopss)

Homeless man seen with projector, laptop, and sound system in his tent in Hollywood

95 replies · 729 reposts · 7.2K likes · 1.4M views
LazerLars reposted
Terabyte Trifler (@singhgurnoor080)
LLMs get all the attention, but most real business value still comes from classical ML applied to structured data. LLMs are great at language, code, and interfaces. But forecasting demand, detecting fraud, optimizing pricing, predicting churn: those are still ML + data problems, not chat problems. The real leverage isn't "AI vs ML." It's people who understand ML using AI to move faster.
0 replies · 1 repost · 1 like · 274 views
LazerLars (@LazerLars1)
@__mharrison__ @grok can you explain this more clearly, with concrete examples and specific task use cases? And explain the difference between ML and LLM usage.
1 reply · 0 reposts · 0 likes · 32 views
Matt Harrison (@__mharrison__)
As cool as AI is (I use it every day), I feel like ML is getting overlooked, and lots of folks won't be able to glean value from AI that they would get from ML. (When I say AI, I mean LLM models, like 99% of the world.) I taught an ML course this week. And there is still so much that AI isn't able to do that humans with AI assistance can. So much low-hanging fruit sitting in folks' SQL and Excel sheets that AI won't be able to extract on its own.
Matt Harrison tweet media
8 replies · 10 reposts · 130 likes · 13K views
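The ML-vs-LLM distinction in the two posts above is easy to make concrete. A minimal, stdlib-only sketch of the kind of classical task they mean, least-squares demand forecasting on structured data (the monthly numbers are invented for illustration):

```python
# Ordinary least-squares fit of y = a + b*x on a toy demand series:
# classical ML on structured data, no LLM involved.

def fit_line(xs, ys):
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b

months = [1, 2, 3, 4, 5, 6]
units = [100, 110, 125, 130, 145, 150]   # hypothetical monthly sales

a, b = fit_line(months, units)
forecast = a + b * 7                      # extrapolate to month 7
print(round(forecast, 1))                 # prints 162.7
```

This is the whole category of "SQL and Excel sheet" problems: a model fit, a number out, no chat involved. In practice one would reach for scikit-learn or statsmodels, but the math is the same.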
LazerLars reposted
Rohan Paul (@rohanpaul_ai)
Stack Overflow published a blog post, "Why demand for code is infinite."

AI is acting like a super-powered teammate, allowing development teams to dream bigger and tackle incredibly complex projects they wouldn't have attempted before. Because teams can build so much faster now, companies are experiencing a "Cambrian explosion" of new AI apps and are frantically hunting for engineers to manage them.

They say the current shift with AI moves developers from manually typing every line of code to acting as AI orchestrators who manage intelligent agents. Human imagination constantly finds new problems, so the demand for custom software remains practically infinite.

Companies are now hiring for specialized positions like human-AI collaboration architects and domain-specific prompt engineers. Junior developers can now skip basic syntax errors and contribute working features much faster. The core of programming is shifting from pure memorization to high-level system design.

---
stackoverflow.blog/2026/02/09/why-demand-for-code-is-infinite-how-ai-creates-more-developer-jobs/
Rohan Paul tweet media
Rohan Paul (@rohanpaul_ai)

Citadel Securities published this graph showing a strange phenomenon. Job postings for software engineers are actually seeing a massive spike. Classic example of the Jevons paradox: when AI makes coding cheaper, companies may actually need a lot more software engineers, not fewer. When software is cheaper to build, companies naturally want to build a lot more of it. Businesses are now putting software into industries and tools where it was simply too expensive before.

---
Chart from citadelsecurities.com/news-and-insights/2026-global-intelligence-crisis/

83 replies · 191 reposts · 1.3K likes · 188.3K views
LazerLars reposted
GitHub Projects Community (@GithubProjects)
Unlock hidden features in your Android apps with this open-source patcher.
GitHub Projects Community tweet media
11 replies · 41 reposts · 696 likes · 60K views
LazerLars reposted
Dave (@thought_harbor)
In Japanese culture, there's a principle called kaizen, the pursuit of continuous small improvement. Not massive leaps. Not overnight transformations. Just getting 1% sharper every single day. It sounds insignificant until you do the math. 1% better every day for a year makes you 37 times stronger than where you started. Greatness isn't one big move. It's a thousand small ones that nobody bothered to notice.
38 replies · 141 reposts · 658 likes · 19.6K views
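The "37 times" figure above is just daily compounding, and it checks out:

```python
# 1% daily improvement, compounded multiplicatively over a year
gain = 1.01 ** 365
print(round(gain, 2))  # prints 37.78
```

(The flip side of the same arithmetic: 0.99 ** 365 ≈ 0.03, which is why the kaizen framing is usually paired with a warning about small daily declines.)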
LazerLars reposted
Alvaro Cintas (@dr_cintas)
🚨 The #1 problem with local AI is now solved.

There's a new tool called llmfit that checks your hardware and tells you which models will run well before you download anything. So instead of guessing and hitting out-of-memory errors… it gives you a ranked list based on your machine.

What it does (in one command):
→ scans your setup (RAM / CPU / GPU / VRAM)
→ evaluates models for quality, speed, fit, and context
→ selects the best quantization automatically
→ labels what's ideal vs okay vs borderline

The part I like most: it handles MoE models correctly. Example: Mixtral 8x7B has ~46.7B total params, but only ~12.9B are active per token, and llmfit accounts for that (a lot of tools still don't).

100% open source.
Alvaro Cintas tweet media
116 replies · 447 reposts · 3.9K likes · 225.1K views
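The MoE point above is worth spelling out: a mixture-of-experts model must hold all experts in memory, while only the active parameters run per token. A rough, stdlib-only estimate using the Mixtral 8x7B numbers from the post (the 4-bit bytes-per-weight figure is my assumption, and this ignores KV cache and runtime overhead):

```python
def weight_memory_gb(params_billions, bytes_per_weight):
    """Approximate memory for the model weights alone, in GB.
    (1e9 params * bytes per weight / 1e9 bytes per GB cancels out.)"""
    return params_billions * bytes_per_weight

TOTAL_B = 46.7    # Mixtral 8x7B: total parameters (billions)
ACTIVE_B = 12.9   # parameters active per token (billions)

# ~0.5 bytes/weight at 4-bit quantization (assumed)
mem_4bit = weight_memory_gb(TOTAL_B, 0.5)
print(f"weights need ~{mem_4bit:.0f} GB; "
      f"per-token speed tracks the {ACTIVE_B}B active params")
```

A tool that sized by active parameters alone would suggest a ~6.5 GB footprint and then hit out-of-memory; sizing memory by total parameters while judging speed by active parameters is the behavior the post credits llmfit with.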
LazerLars reposted
Nav Toor (@heynavtoor)
🚨 BREAKING: Fudan University just solved the animation problem nobody thought AI could touch.

It's called OmniLottie: the first AI that generates real vector animations from text, images, or video. Not rasterized video. Not GIFs. Actual Lottie files, the same format used by Airbnb, Google, Uber, and every major app on the planet.

Here's why this is a big deal: every animation you see in modern apps (loading spinners, onboarding flows, micro-interactions, icons that move) is a Lottie file. Designers spend hours crafting them in After Effects. Companies pay $5K–$20K per animation project. OmniLottie generates them from a text prompt.

Here's how it works:
→ You describe what you want: "a rocket launching with flame trail and stars twinkling"
→ OmniLottie converts your instruction into structured animation commands
→ A custom Lottie tokenizer compresses the JSON into compact shape + motion tokens
→ A fine-tuned VLM autoregressively generates the full animation sequence
→ Output: a production-ready .json Lottie file you can drop into any app

Three modes:
- Text-to-Lottie: describe it, get it.
- Image+Text-to-Lottie: give it a reference image + motion description.
- Video-to-Lottie: feed it a video, get a vector animation version.

Here's the wildest part: they tested it against GPT-5, DeepSeek, Gemini, Qwen2.5-VL, and commercial tools.
GPT-5 success rate: 12.7–68%
DeepSeek: 29.3%
Qwen2.5-VL: 0.0%
Gemini: 0.0% on Video-to-Lottie
OmniLottie: 97.3% on Text-to-Lottie, 92% on Image-to-Lottie, 90.7% on Video-to-Lottie.

It's 530× faster than optimization-based methods per successful generation.

The secret weapon: a custom Lottie Tokenizer that strips all the redundant JSON metadata and converts animations into compact command sequences. Raw Lottie JSONs waste most tokens on formatting; the tokenizer focuses the model on what actually matters: shapes, motion, and timing.

They also built MMLottie-2M, a dataset of 2 million professionally designed vector animations with text, image, and video annotations. The largest vector animation dataset ever created. Publicly released.

From Fudan University, StepFun, HKU MMLab, and University of Queensland.
Nav Toor tweet media
36 replies · 181 reposts · 1.4K likes · 96.7K views
LazerLars reposted
Abdulkadir | Cybersecurity (@cyber_razz)
Qwen's new model can run on your phone and outperforms what cost $200 per month just 18 months ago.

Here's a breakdown of the devices each model can run on:
- 0.8B: 500MB, phone
- 2B: 1.5GB, Raspberry Pi
- 4B: 2.5GB, MacBook Air
- 9B: 6GB, MacBook Pro

The information density of these AI models is often underestimated. Every 10x compression in size translates to a 100x expansion in deployment potential. As a result, devices like phones, cars, drones, glasses, sensors, and robots are increasingly equipped with high-quality, free, and open-source edge AI.
Qwen (@Alibaba_Qwen)

🚀 Introducing the Qwen 3.5 Small Model Series: Qwen3.5-0.8B · Qwen3.5-2B · Qwen3.5-4B · Qwen3.5-9B

✨ More intelligence, less compute. These small models are built on the same Qwen3.5 foundation: native multimodal, improved architecture, scaled RL.
• 0.8B / 2B → tiny, fast, great for edge devices
• 4B → a surprisingly strong multimodal base for lightweight agents
• 9B → compact, but already closing the gap with much larger models

And yes, we're also releasing the Base models as well. We hope this better supports research, experimentation, and real-world industrial innovation.

Hugging Face: huggingface.co/collections/Qw…
ModelScope: modelscope.cn/collections/Qw…

2 replies · 2 reposts · 17 likes · 1.7K views
LazerLars reposted
Rhos 🦋 (@RhosGFX)
Recently had someone ask how I choose my colours. It's pretty simple: start with a base colour, then to go darker:
- move the hue towards blue/purple
- increase saturation
- decrease brightness

To go lighter, do the reverse :) #gameassets #robloxdev #uefn
Rhos 🦋 tweet media
15 replies · 267 reposts · 4.4K likes · 73K views
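The shading recipe above maps directly onto HSV: shift hue toward blue/purple, raise saturation, lower value. A small sketch using Python's stdlib colorsys (the target hue and shift amounts are my own arbitrary choices, not from the post):

```python
import colorsys

def hue_toward(h, target=0.7, frac=0.25):
    """Move hue a fraction of the way toward target (blue/purple ~0.7),
    taking the shorter direction around the colour wheel."""
    d = (target - h) % 1.0
    if d > 0.5:
        d -= 1.0
    return (h + frac * d) % 1.0

def darker(r, g, b, sat_boost=0.15, val_drop=0.2):
    """Shade a base colour per the recipe; RGB channels are floats in [0, 1]."""
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    return colorsys.hsv_to_rgb(
        hue_toward(h),               # hue towards blue/purple
        min(1.0, s + sat_boost),     # increase saturation
        max(0.0, v - val_drop),      # decrease brightness
    )

# warm orange base -> darker shadow tone, shifted towards purple via red
shadow = darker(1.0, 0.6, 0.2)
print(tuple(round(c, 2) for c in shadow))
```

To go lighter, reverse each step: move the hue away from the blue/purple target, subtract saturation, and add value.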
LazerLars reposted
Ihtesham Ali (@ihtesham2005)
🚨 Someone built a full Perplexity clone that runs 100% locally for $0.

It's called Perplexica.
→ Searches the web in real-time
→ Cites every source it uses
→ Works with Ollama local models
→ Multiple search modes (general, academic, YouTube, Reddit, writing)
→ Zero API costs. Zero data collection.

Perplexity charges $20/month for this. This runs on your machine for free.

29K stars. MIT license. (Link in the comments)
Ihtesham Ali tweet media
103 replies · 544 reposts · 4.6K likes · 300.2K views
LazerLars reposted
Hasan Toor (@hasantoxr)
🚨 Microsoft just quietly dropped a tool that turns ANY document into LLM-ready data in seconds.

It's called MarkItDown, a lightweight Python library that converts PDFs, Word, Excel, PowerPoint, images, audio, and YouTube URLs into clean Markdown your LLM can actually use. No custom parsers. No brittle pipelines. No preprocessing hell.

Built by the AutoGen team and battle-tested across 87K GitHub stars. The numbers don't lie:
→ pip install markitdown and you're converting files in under 60 seconds
→ 10+ file formats supported out of the box
→ Native MCP server for direct Claude Desktop integration

And it works everywhere:
→ Command line: markitdown file.pdf > doc.md
→ Python API: 3 lines of code
→ Docker
→ Azure Document Intelligence for enterprise OCR

100% open source. MIT license. This is the document preprocessing tool your RAG pipeline has been waiting for: LLM-ready output without the LLM-ready headache.

Link in the first comment 👇
Hasan Toor tweet media
106 replies · 415 reposts · 3.2K likes · 416.3K views
LazerLars reposted
Sudo su (@sudoingX)
nvidia just posted $68B in one quarter. $62B from datacenter. that's the demand side.

now look at the supply side. a used RTX 3090 costs $600-800. it runs 35 billion parameters at 112 tok/s. a 4090 hits 157. a 5090 crosses 230 on vLLM. all on a single consumer card. no API keys, no monthly bills, no rate limits.

two days ago Qwen3.5-35B-A3B dropped. the community optimized it from 50 tok/s to 157 in 48 hours. open weights, open configs, open results.

nvidia is printing money because inference demand is exploding. but the hardware to run it yourself is already sitting on eBay. how long before people realize they don't need to rent what they can own?
39 replies · 27 reposts · 375 likes · 32.7K views
LazerLars reposted
HealthRanger (@HealthRanger)
Most people have very little clue of the profound implications of what Qwen just released today. I can already tell you: This is an AI EARTHQUAKE that just sent shockwaves around the world and probably has America's AI tech giants reeling. Essentially Qwen3.5 just obliterated the need for ChatGPT, Anthropic's coding agents, and Google Gemini coding, by releasing this model for free. I've been running it all day on multiple systems, and it's really quite astonishing. Feels like we're living in the future...
Qwen (@Alibaba_Qwen)

🚀 Introducing the Qwen 3.5 Medium Model Series: Qwen3.5-Flash · Qwen3.5-35B-A3B · Qwen3.5-122B-A10B · Qwen3.5-27B

✨ More intelligence, less compute.
• Qwen3.5-35B-A3B now surpasses Qwen3-235B-A22B-2507 and Qwen3-VL-235B-A22B, a reminder that better architecture, data quality, and RL can move intelligence forward, not just bigger parameter counts.
• Qwen3.5-122B-A10B and 27B continue narrowing the gap between medium-sized and frontier models, especially in more complex agent scenarios.
• Qwen3.5-Flash is the hosted production version aligned with 35B-A3B, featuring:
– 1M context length by default
– Official built-in tools

🔗 Hugging Face: huggingface.co/collections/Qw…
🔗 ModelScope: modelscope.cn/collections/Qw…
🔗 Qwen3.5-Flash API: modelstudio.console.alibabacloud.com/ap-southeast-1…

Try in Qwen Chat 👇
Flash: chat.qwen.ai/?models=qwen3.…
27B: chat.qwen.ai/?models=qwen3.…
35B-A3B: chat.qwen.ai/?models=qwen3.…
122B-A10B: chat.qwen.ai/?models=qwen3.…

Would love to hear what you build with it.

74 replies · 169 reposts · 1.2K likes · 150.8K views
LazerLars reposted
Sudo su (@sudoingX)
3 days ago: same GPU, 80B Qwen Coder-Next, 1.3 tok/s. needed two cards to even breathe.

today: same GPU, 35B Qwen3.5, 112 tok/s. full 262K context. one card. room to spare. running Claude Code autonomously, building full projects from single prompts.

3 days apart. 86x faster. open source moves different. x.com/sudoingX/statu…
2 replies · 2 reposts · 23 likes · 2.5K views