Laurian Gridinoc
@gridinoc

Full Stack Computational Linguist ※ Mozilla OpenNews Fellow ※ Virtual Production ※ Filmmaker ※ AI accelerationist

67.9K posts · 1 AU · Joined April 2007
5.1K Following · 3K Followers
Pinned Tweet
Laurian Gridinoc @gridinoc ·
I successfully optimised a context compression prompt with @DSPyOSS GEPA and TextGrad, see github.com/Laurian/contex…
[image]
11 replies · 24 reposts · 172 likes · 25.2K views
Bryan Johnson @bryan_johnson ·
round 2 is about to begin
[image]
198 replies · 55 reposts · 2.2K likes · 217.4K views
Laurian Gridinoc retweeted
Bojan Tunguz @tunguz ·
The old bad SWE productivity metric: lines of code written. The new bad SWE productivity metric: tokens consumed.
45 replies · 67 reposts · 1K likes · 29.5K views
Laurian Gridinoc retweeted
X Freeze @XFreeze ·
[image]
478 replies · 2.2K reposts · 25.9K likes · 37M views
Laurian Gridinoc retweeted
0xSero @0xSero ·
For those interested in getting into local AI, this is my most important video. youtu.be/Adliwsf2oPE
[YouTube video]
13 replies · 34 reposts · 421 likes · 29.1K views
Laurian Gridinoc retweeted
Taelin @VictorTaelin ·
Sorry for posting this again, I'm still processing it: it'd cost >>> $743k per year <<< to run Opus-4.6 fast-mode nonstop. Literally my company cannot afford a single person using it for daily coding. And that's a shame, because the experience is truly magical. I've spent the last 2 days using it on Pi (nearly $500 gone 💀), and it was the first time I kinda got into the flow state while using an agent, because the feedback is just so fast. This is not something I ever experienced before, definitely not with GPT 5.4's own fast mode. I can't wait for this kind of super fast, super high intelligence to be available for a reasonable cost...
[image]
198 replies · 94 reposts · 3.1K likes · 296.6K views
Laurian Gridinoc retweeted
Maria Davidson @mariardavidson ·
@pmarca Werner Herzog, the OG hater of psychoanalysts, said it best: when you illuminate every last corner of a house, the house becomes uninhabitable. youtu.be/G_7Ta_4coy4
[YouTube video]
33 replies · 118 reposts · 1.3K likes · 231.3K views
Laurian Gridinoc retweeted
Unsloth AI @UnslothAI ·
Qwen3.5-4B searched 20+ websites, cited its sources, and found the best answer! 🔥 Try this locally with just 4GB RAM via Unsloth Studio. The 4B model did this by executing tool calls + web search directly during its thinking trace.
65 replies · 244 reposts · 2.3K likes · 124K views
Laurian Gridinoc retweeted
Jerry Liu @jerryjliu0 ·
Introducing LiteParse - the best model-free document parsing tool for AI agents 💫
✅ It's completely open-source and free.
✅ No GPU required; it will process ~500 pages in 2 seconds on commodity hardware.
✅ More accurate than PyPDF, PyMuPDF, Markdown. Also way more readable - see below for how we parse tables!
✅ Supports 50+ file formats, from PDFs to Office docs to images.
✅ Designed to plug and play with Claude Code, OpenClaw, and any other AI agent with a one-line skills install. Supports native screenshotting capabilities.
We spent years building up LlamaParse by orchestrating state-of-the-art VLMs over the most complex documents. Along the way we realized that you could get quite far on most docs through fast and cheap text parsing. Take a look at the video below. For really complex tables within PDFs, we output them in a spatial grid that's both AI- and human-interpretable. Any other free/light parser like PyPDF will destroy the representation of this table and output a sequential list. This is not a replacement for a VLM-based OCR tool (it requires 0 GPUs and doesn't use models), but it is shocking how good it is at parsing most documents. Huge shoutout to @LoganMarkewich and @itsclelia for all the work here.
Come check it out: llamaindex.ai/blog/liteparse…
Repo: github.com/run-llama/lite…

Quoting LlamaIndex 🦙 @llama_index:
We've spent years building LlamaParse into the most accurate document parser for production AI. Along the way, we learned a lot about what fast, lightweight parsing actually looks like under the hood. Today, we're open-sourcing a lightweight core of that tech as LiteParse 🦙 It's a CLI + TS-native library for layout-aware text parsing from PDFs, Office docs, and images. Local, zero Python dependencies, and built specifically for agents and LLM pipelines. Think of it as our way of giving the community a solid starting point for document parsing:
npm i -g @llamaindex/liteparse
lit parse anything.pdf
- preserves spatial layout (columns, tables, alignment)
- built-in local OCR, or bring your own server
- screenshots for multimodal LLMs
- handles PDFs, office docs, images
Blog: llamaindex.ai/blog/liteparse…
Repo: github.com/run-llama/lite…

40 replies · 233 reposts · 1.8K likes · 229.5K views
Laurian Gridinoc retweeted
Parmita Mishra @parmita ·
There's an old joke in systems biology called "How Biologists Fix a Radio." A biologist, tasked with figuring out why a radio doesn't work, removes components one by one and catalogs the result. Remove this transistor: the radio makes a horrible screeching sound. Conclusion: this is the "horrible screeching transistor." Remove another component: the radio goes silent. Conclusion: this is the "silence transistor."
This is essentially what we do with genomics. We see which genes are mutated in cancer and assume they must be "cancer genes." We see which genes are differentially expressed and assume they must be "important." But correlation is not causation, and a parts list is not a circuit diagram. You can have a complete inventory of every resistor, capacitor, and transistor in a radio and still have no idea how it plays music.
[image]
94 replies · 361 reposts · 3K likes · 170.1K views
Laurian Gridinoc retweeted
Hamel Husain @HamelHusain ·
The highest-leverage thing you can do to de-slopify AI writing is to delete at least half of it. Seriously, for any email, post, etc., try to delete 50%.
7 replies · 4 reposts · 56 likes · 4.4K views
Laurian Gridinoc retweeted
Nous Research @NousResearch ·
hermes claw migrate
28 replies · 24 reposts · 513 likes · 27.5K views
Laurian Gridinoc retweeted
Baidu Inc. @Baidu_Inc ·
🧠 Key innovation: Layout-as-Thought. End-to-end OCR models lose explicit layout analysis, something pipeline systems handle natively. Qianfan-OCR solves this with an optional thinking phase via tokens: the model generates bounding boxes, element types, and reading order before producing its final output. The result is pipeline-level layout analysis from an end-to-end model.
[image]
1 reply · 3 reposts · 22 likes · 1.8K views
Laurian Gridinoc retweeted
Lydia Hallie ✨ @lydiahallie ·
If your skill depends on dynamic content, you can embed !`command` in your SKILL.md to inject shell output directly into the prompt. Claude Code runs it when the skill is invoked and swaps the placeholder inline; the model only sees the result!
[image]
127 replies · 247 reposts · 3K likes · 829.6K views
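The pattern described in the post above can be sketched as a SKILL.md fragment. This is a hypothetical illustration, not from the post: the skill name, description, and the specific git commands are invented for the example; only the !`command` injection syntax comes from the tweet.

```markdown
---
name: branch-status
description: Summarize the current git branch state (hypothetical example skill)
---

# Branch status

The current branch is: !`git branch --show-current`

Recent commits:

!`git log --oneline -5`
```

When the skill is invoked, each !`command` placeholder is replaced with the command's stdout before the prompt reaches the model, so the model sees the actual branch name and commit list rather than the commands themselves.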
Laurian Gridinoc retweeted
The Open Source Press @theospress ·
The recurring themes: better small model support, Docker-ready setup, persistent memory that survives restarts, and a team (@Teknium) that merges community bug fixes the same day. @sudoingX @Zeneca @LottoLabs @rodmarkun @WeXBT have all been building and benchmarking publicly. If you run local models or want to self-host your own agent stack, read the full breakdown to see if the switch is for you 👇 theopensourcepress.com/hermes-agent-v…
1 reply · 3 reposts · 13 likes · 2.2K views
Laurian Gridinoc retweeted
Morph @morphllm ·
Introducing FlashCompact - the first specialized model for context compaction. 33k tokens/sec; 200k → 50k in ~1.5s. Fast, high-quality compaction.
88 replies · 136 reposts · 2.2K likes · 210.9K views
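The three figures in the announcement are mutually consistent if the 33k tokens/sec throughput refers to generated (output) tokens, an assumption the post does not state. A quick sanity check of that reading:

```python
# Sanity-check the FlashCompact figures: 200k tokens compacted to 50k
# in ~1.5 s at 33k tokens/sec. Assumption (not stated in the post):
# throughput is measured on the 50k output tokens.
input_tokens = 200_000
output_tokens = 50_000
throughput = 33_000  # tokens per second, assumed to mean output tokens

seconds = output_tokens / throughput
ratio = input_tokens / output_tokens

print(f"compaction ratio: {ratio:.0f}x")      # prints "compaction ratio: 4x"
print(f"time at 33k tok/s: {seconds:.2f} s")  # prints "time at 33k tok/s: 1.52 s"
```

Under that assumption, 50,000 / 33,000 ≈ 1.52 s, matching the quoted "~1.5s", and the context shrinks 4x.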