Fabio Biola

549 posts

@NeoKree

Mobile & Full stack developer. Investor & crypto enthusiast

Joined July 2011
355 Following · 67 Followers
Fabio Biola reposted
Interesting things @awkwardgoogle
Ron Weasley yelling “Expensive Petroleum” while filling up his car might be the peak of comedy 🤣
466 replies · 16.6K reposts · 119.2K likes · 5.1M views
Aryan @justbyte_
Share your desktop wallpaper. I'll start:
[image attachment]
55 replies · 12 reposts · 620 likes · 15.8K views
Fabio Biola @NeoKree
@LensScientific Because that's how we are wired. Time doesn't exist; we make time. You can find the long explanation in "The Order of Time" by Carlo Rovelli.
0 replies · 0 reposts · 0 likes · 7 views
The Scientific Lens @LensScientific
If the laws of physics don’t prefer a direction of time, why does everything we experience act like time has a preferred direction?
269 replies · 126 reposts · 635 likes · 26K views
Fabio Biola @NeoKree
@poetengineer__ Love the concept! Do you think you would share it on GitHub at some point? I would love to test it on my projects.
0 replies · 0 reposts · 0 likes · 84 views
Kat ⊷ the Poet Engineer @poetengineer__
Visualizing all my Claude Code sessions: each plant is a project, its leaves the sessions. Sliding the timeline scrubber replays each project's growth over time.
57 replies · 131 reposts · 1.6K likes · 83.2K views
Fabio Biola @NeoKree
@mathemetica So, can you tell me about the camera you used to get this view?
2 replies · 0 reposts · 1 like · 7.3K views
Mathematica @mathemetica
This isn’t CGI. This is real mathematics controlling atoms inside every processor on Earth. Watch until your brain restarts.
82 replies · 394 reposts · 3.6K likes · 449.8K views
Fabio Biola @NeoKree
@johanvinet I appreciate that you take the time to do everything by hand, but man, isn't there any kind of tool that can help you? (I'm not talking about AI.) Even if I'm sure you enjoy doing it, I can't imagine all this hard work always being rewarding by itself. But maybe that's just me.
0 replies · 0 reposts · 0 likes · 101 views
Saqib Banbhan @SaqibBanbh90290
Number of squares you see…? Only 1% will succeed.
[image attachment]
2.2K replies · 123 reposts · 345 likes · 100.4K views
Fabio Biola @NeoKree
@SaqibBanbh90290 @ItsEric208 I can guess: 3x3 squares. 1 square containing all the squares, 1 square for the biscuit, 1 square for the image post, 1 square for the website containing the image post, 1 square for the screen containing the website. The question is: why does your phone/laptop have a square screen?
3 replies · 0 reposts · 2 likes · 105 views
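The puzzle being riffed on here has a classic closed-form cousin: counting all axis-aligned squares in an n×n grid, where a k×k square fits in (n−k+1)² positions. A minimal sketch (the function name is mine, not from the thread):

```python
def count_squares(n: int) -> int:
    # Axis-aligned squares in an n x n grid of unit cells:
    # each k x k square can be placed in (n - k + 1)^2 positions.
    return sum((n - k + 1) ** 2 for k in range(1, n + 1))

print(count_squares(3))  # 9 + 4 + 1 = 14
```

For the 3x3 grid in the original image, the "serious" answer is 14 squares before you start counting biscuits and screens.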
Fabio Biola @NeoKree
@Rainmaker1973 Until you are with your friends and people really do it with 6-7 meters below it
0 replies · 0 reposts · 1 like · 217 views
Massimo @Rainmaker1973
What you see on social media can sometimes be a bit deceiving.
12 replies · 83 reposts · 1K likes · 162.2K views
Fabio Biola @NeoKree
@lassvestergaard Simple fix: don't use Opus on Claude Pro. If you stick to Sonnet you can code for 3-4 hours before hitting limits, which I think is very intentional (the time window is 5h). But you still get things done.
0 replies · 0 reposts · 0 likes · 103 views
Lasse @lassvestergaard
Never buy Claude Pro. This is after using Opus 4.6 with plan mode twice and redoing a landing page twice. All of it took at most 4-6 minutes 🥲
[image attachment]
98 replies · 1 repost · 320 likes · 50.4K views
INALEGWU @tchaloyi
Back in 2011, I went into computer hardware repairs. I started with laptop chargers. I remember peeling back the rubber, soldering wires, and always staring at that weird plastic bulge on the cable. It didn't look like it did anything, but it was on every single high-end charger I fixed. I used to wonder if it was a hidden battery or just a weight to keep the cord from tangling.

It turns out, that little lump is the unsung hero of your workspace. It's called a ferrite bead, and its only job is to act as a silencer for your electricity.

See, every electronic device is naturally noisy. They send out invisible electromagnetic signals. Without that cylinder, your charger cable would turn into a giant antenna, broadcasting interference that would make your Wi-Fi slow, your TV flicker, or your speakers buzz.

Inside that plastic shell is just a chunk of magnetic iron. It catches all that electrical noise and kills it before it can escape the wire. It's basically a muzzle for your cable so your gadgets can live in peace. INALEGWU.
Quoting Peter Agboola @baba_Omoloro: Doesn't seem like anyone knows what this is for, right?
623 replies · 14K reposts · 110.2K likes · 7.5M views
Lior Alexander @LiorOnAI
A 24-billion-parameter model just ran on a laptop and picked the right tool in under half a second. The real story is that tool-calling agents finally became fast enough to feel like software.

Liquid built LFM2-24B-A2B using a hybrid architecture that mixes convolution blocks with grouped query attention in a 1:3 ratio. Only 2.3 billion parameters activate per token, even though the full model holds 24 billion. That sparse activation pattern is why it fits in 14.5 GB of memory and dispatches tools in 385 milliseconds on an M4 Max.

The architecture was designed through hardware-in-the-loop search, meaning they optimized the model structure by testing it directly on the chips it would run on. No cloud translation layer. No API roundtrip. The model, the tools, and your data stay on the machine.

This unlocks three things that were impractical before:
1. Regulated industries can run agents on employee laptops without data leaving the device.
2. Developers can prototype multi-tool workflows without managing API keys or rate limits.
3. Security teams get full audit trails without vendor subprocessors in the loop.

The model hit 80% accuracy on single-step tool selection across 67 tools spanning 13 MCP servers. If this performance holds at scale, two assumptions need updating. First, on-device agents are no longer a battery-life trade-off; they're a compliance feature. Second, the bottleneck in agentic workflows is shifting from model capability to tool ecosystem maturity.
Quoting Liquid AI @liquidai:
> 385ms average tool selection.
> 67 tools across 13 MCP servers.
> 14.5GB memory footprint.
> Zero network calls.
LocalCowork is an AI agent that runs on a MacBook. Open source. 🧵
13 replies · 29 reposts · 340 likes · 45.5K views
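The "no API roundtrip" point above boils down to a local dispatch loop: the model emits a structured tool call and the host routes it to a local function, so the only latency is on-device compute. A minimal sketch, with a stub standing in for the model; the tool names, the `stub_model` function, and the JSON call format are all my illustrative assumptions, not Liquid AI's actual interface:

```python
import json
import time
from typing import Callable

# Hypothetical local tool registry (names are made up for the sketch).
TOOLS: dict[str, Callable[..., str]] = {
    "read_file": lambda path: f"<contents of {path}>",
    "web_search": lambda query: f"<results for {query}>",
    "run_shell": lambda cmd: f"<output of {cmd}>",
}

def stub_model(prompt: str) -> str:
    """Stand-in for an on-device model: returns a tool call as JSON.
    A real local model (served via e.g. llama.cpp or MLX) would
    generate this string; here we fake it to keep the sketch runnable."""
    return json.dumps({"tool": "read_file", "args": {"path": "notes.txt"}})

def dispatch(prompt: str) -> tuple[str, float]:
    """Single-step tool selection: the model picks a tool, we route the
    call locally. No network hop; latency is pure local compute."""
    start = time.perf_counter()
    call = json.loads(stub_model(prompt))
    result = TOOLS[call["tool"]](**call["args"])
    elapsed_ms = (time.perf_counter() - start) * 1000
    return result, elapsed_ms

result, ms = dispatch("Summarize my notes")
print(result)
```

The benchmark numbers in the tweet (385 ms across 67 tools) would correspond to the model-generation step that the stub fakes here; everything else in the loop is microseconds.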
Ramin @ramin_m_h
Truly impressive release of hybrid tiny models from the Qwen team as always! People are asking how they compare in speed, latency, and memory to @liquidai's LFMs for on-device deployment. Here is a quick profiling on Apple M3 Ultra:
> LFM2.5-1.2B is 52% faster in decode than Qwen3.5-0.8B
> LFM2-700M is 71% faster than Qwen3.5-0.8B on decode
> LFM2-2.6B has the same speed as Qwen3.5-2B on decode
> LFM2-700M uses 46% less peak memory than Qwen3.5-0.8B
> LFM2-2.6B uses 21% less peak memory than Qwen3.5-2B
> LFM prefill at the same parameter size is generally 12% faster than Qwen3.5

We designed the LFM2 series with our hardware-in-the-loop meta AI design approach, which allows us to find the most efficient architecture for a given processor without sacrificing quality.

This test was done on Apple M3 Ultra with 512 GB unified memory. Config:
> 512 prompt tokens, 128 generation tokens
> 5 trials per configuration
> Framework: MLX (mlx-lm / mlx-vlm)
[image attachment]
Quoting Qwen @Alibaba_Qwen:
🚀 Introducing the Qwen 3.5 Small Model Series: Qwen3.5-0.8B · Qwen3.5-2B · Qwen3.5-4B · Qwen3.5-9B
✨ More intelligence, less compute. These small models are built on the same Qwen3.5 foundation: native multimodal, improved architecture, scaled RL.
• 0.8B / 2B → tiny, fast, great for edge devices
• 4B → a surprisingly strong multimodal base for lightweight agents
• 9B → compact, but already closing the gap with much larger models
And yes, we're also releasing the Base models as well. We hope this better supports research, experimentation, and real-world industrial innovation.
Hugging Face: huggingface.co/collections/Qw…
ModelScope: modelscope.cn/collections/Qw…
10 replies · 7 reposts · 123 likes · 22.2K views
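The "X% faster in decode" figures above are throughput ratios between two models' tokens-per-second. A minimal sketch of the arithmetic; the throughput numbers below are invented for illustration, not the actual benchmark values:

```python
def percent_faster(tok_per_s_a: float, tok_per_s_b: float) -> float:
    # "A is X% faster than B" as a decode-throughput ratio.
    return (tok_per_s_a / tok_per_s_b - 1.0) * 100.0

# Purely illustrative numbers (not measured values).
lfm_decode = 152.0   # tokens/sec, hypothetical
qwen_decode = 100.0  # tokens/sec, hypothetical
print(f"{percent_faster(lfm_decode, qwen_decode):.0f}% faster")  # -> 52% faster
```

The same ratio convention applies to the peak-memory comparisons, just inverted ("46% less" means the peak footprint is 54% of the baseline's).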
Rohan Paul @rohanpaul_ai
Big revelation in this data. Everyone keeps talking about an AI bubble, but we forget that only 0.3% of the global population actually pays for a premium subscription. For the vast majority of the real world it has not started yet: 6.8B people have yet to interact with even one free chatbot.

The massive grey chunk is the 6.8B people who have zero experience with AI. The green dots represent 1.3B people using free versions of tools. Only a small group of 15-25 million people pays for subscriptions. We sit in that microscopic slice.
[image attachment]
Quoting John LeFevre @JohnLeFevre: 84% of people have never used AI, and just 0.3% of users pay for premium services. Anyone who thinks AI is a bubble isn't paying attention.
48 replies · 13 reposts · 97 likes · 13.6K views
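The quoted percentages are simple population ratios; a quick sanity check, assuming a rough world population of about 8.1 billion (my assumption, not stated in the chart):

```python
world_pop = 8.1e9   # assumed global population, rough figure
no_ai = 6.8e9       # people with zero chatbot experience, per the chart
paying = 25e6       # upper end of the quoted 15-25 million paying users

never_used_pct = no_ai / world_pop * 100
paying_pct = paying / world_pop * 100
print(f"{never_used_pct:.0f}% never used AI, {paying_pct:.1f}% pay for premium")
```

Both of the chart's headline numbers (84% never used AI, ~0.3% paying) fall out of those two divisions.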
Wildminder @wildmindai
17,000 tokens per second!! Read that again! The LLM is hard-wired directly into silicon: no HBM, no liquid cooling, just raw specialized hardware. 10x faster and 20x cheaper than a B200. The "waiting for the LLM to think" era is dead. Code generates at the speed of human thought. A transition from brute-force GPU clusters to actual AI appliances. taalas.com/the-path-to-ub…
[3 image attachments]
387 replies · 949 reposts · 7.5K likes · 1.8M views
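To put the claimed throughput in concrete terms, per-token latency at 17,000 tokens/sec is simple arithmetic (the 500-token response size is just an example length I picked):

```python
tokens_per_sec = 17_000           # claimed throughput
ms_per_token = 1000 / tokens_per_sec
response_ms = 500 * ms_per_token  # a hypothetical 500-token answer

print(f"{ms_per_token:.3f} ms/token, {response_ms:.0f} ms for 500 tokens")
```

At that rate a full multi-hundred-token response lands in tens of milliseconds, which is what makes the "no waiting for the LLM" framing plausible.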
ℏεsam @Hesamation
YoU’rE gOinG tO bE LEFt beHind! 🤬 - the average OpenClaw user so accurate 😂
73 replies · 91 reposts · 1.2K likes · 147.2K views