Robert Sasu | dev/acc

14.4K posts


@SasuRobert

I belong to Jesus. Core developer #MultiversX

Joined August 2014

720 Following · 23.1K Followers

Pinned Tweet
Robert Sasu | dev/acc@SasuRobert·
Sovereign Shards is a major technological update for all the appChains in the #blockchain industry. This is a revolution that can only be done on #MultiversX. It all started in 2018, when we first theorised sharding; we implemented the first cross-shard processing in 2019, and continued in 2020 when the $ESDT token system was born. At xDay 2023 we announced the first version, an alpha release, and demonstrated the power of the sovereign shard architecture and the seamless integration between the main chain and any custom appchain.

Today we can say that an alpha+ release is planned for February, which will contain the following:
🔹 A fully featured sovereign shard binary with configurable genesis, validator set, block time, gas model, ESDTs, guardians, systemVM and WASMVM
🔹 Set up any ESDT as the base token/gas token; all token transfers will be ESDTs
🔹 NO BRIDGE: seamless mainchain-to-sovereign-shard transactions, making any sovereign shard an extension of the current ecosystem

This is the first solution for creating custom appChains connected to the mainchain without the need for any bridge, as the processing model is the one from the cross-shard module, battle-tested and validated over the last years. This means no wrapping contracts, no lock-and-mint or burn-and-mint, simply transfer. Built-in complete composability, enshrined in the protocol (a forgotten primitive in Ethereum L2s).

A few companies are already working on launching their own sovereign shards; I tagged them in the picture. If you know others, feel free to tag them in a reply.
[attached image]
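The no-bridge point is easier to see with a toy model. The sketch below is plain Python and entirely hypothetical (names like `Chain`, `bridge_transfer` and `cross_shard_transfer` are illustrative, not MultiversX APIs): it contrasts a classic lock-and-mint bridge, which leaves the user holding a wrapped token, with a native cross-shard transfer, which simply moves the same asset.

```python
# Toy illustration (NOT the real MultiversX implementation) of
# bridge-based vs native cross-shard token movement.

class Chain:
    def __init__(self, name):
        self.name = name
        self.balances = {}  # (address, token) -> amount

    def credit(self, addr, token, amount):
        key = (addr, token)
        self.balances[key] = self.balances.get(key, 0) + amount

    def debit(self, addr, token, amount):
        key = (addr, token)
        if self.balances.get(key, 0) < amount:
            raise ValueError("insufficient balance")
        self.balances[key] -= amount

def bridge_transfer(src, dst, addr, token, amount):
    """Classic bridge: lock on source, mint a *wrapped* token on destination."""
    src.debit(addr, token, amount)          # locked in a bridge contract
    dst.credit(addr, "w" + token, amount)   # user ends up with a wrapped asset

def cross_shard_transfer(src, dst, addr, token, amount):
    """Native cross-shard style: the same token simply moves, no wrapper."""
    src.debit(addr, token, amount)
    dst.credit(addr, token, amount)

main = Chain("mainchain")
sov = Chain("sovereign-shard")
main.credit("alice", "ESDT-X", 100)

cross_shard_transfer(main, sov, "alice", "ESDT-X", 40)
print(sov.balances[("alice", "ESDT-X")])  # 40, same token identifier on both chains
```

The wrapped-token path is exactly what the "no wrapping contracts" claim removes: in the native model the destination holds `ESDT-X`, never `wESDT-X`.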
93 replies · 552 reposts · 1.1K likes · 302.7K views
Robert Sasu | dev/acc@SasuRobert·
MLX is so cool. And the open source community is ripping. And thanks to all the big corporations for releasing your models. This is amazing. Personally, I am running a flux2 4B model continuously to generate images, and in parallel the new Gemma 4 MoE model, for continuous auto-research on various parts of code / architecture / skills / setups. Everything. All on an almost 5-year-old MacBook Pro. What a time to be alive, to run all this locally. Research and code, continuously. This is the moment for it.
Prince Canuma@Prince_Canuma

mlx-vlm v0.4.3 is here 🚀

Day-0 support:
🔥 Gemma 4 (vision, audio, MoE) by @GoogleDeepMind
🦅 Falcon-OCR + Falcon Perception by @TIIuae
🪨 Granite Vision 4.0 by @IBMResearch

New models:
🎯 SAM 3.1 with Object Multiplex by @facebook
🔍 RF-DETR detection & segmentation by @roboflow

Infra:
⚡ TurboQuant (KV cache compression)
🖥️ CUDA support for vision models (SAM and RF-DETR)

Get started today:
> uv pip install -U mlx-vlm

Leave us a star ⭐️ github.com/Blaizzy/mlx-vlm

1 reply · 2 reposts · 23 likes · 895 views
Robert Sasu | dev/acc@SasuRobert·
LLM knowledge bases seem like the perfect solution, but they often miss what is actually executing, especially when the codebase becomes big. I am not saying they are bad, they are definitely useful, but they are not a one-click solution. I am using graph knowledge, RAG and vector-based systems as well, just to test what works best. But AI is still taking a lot of shortcuts just to solve a problem, in many cases.

What makes things better for big development is clear architecture: making every single component independently solid and self-contained, then enforcing that everything builds on the given interfaces. There are still some mistakes along the way, as is usual for any kind of development. But this is software engineering life. Running multiple experiments and tests written from specifications, and testing as a black box, can ensure much better quality. These are all software engineering / architecting 101.

One big question I was asking this week: WHY isn't AI discovering totally new ways of coding, of architecture, of language? Even when you try to force it. One year ago I thought AI would invent much better ways to code itself, or to code in general, as it might understand the mathematics behind itself better. Or is this true? Maybe not.
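As a rough illustration of the vector-retrieval idea mentioned above, here is a minimal bag-of-words retriever. It is a deliberately naive stand-in: real systems use learned embeddings and a proper vector store, but the ranking mechanics are the same.

```python
# Naive vector retrieval sketch: embed documents and the query as
# bag-of-words vectors, rank documents by cosine similarity.
import math
from collections import Counter

def embed(text):
    """Toy 'embedding': word-count vector (real systems use learned embeddings)."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=2):
    """Return the k documents most similar to the query."""
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

docs = [
    "sharding splits state across shards",
    "vector databases store embeddings for retrieval",
    "guardians protect accounts with a second signature",
]
print(retrieve("how does embedding retrieval work", docs, k=1))
```

The "shortcuts" complaint applies here too: a retriever like this happily returns the nearest document even when nothing in the corpus actually answers the question.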
Andrej Karpathy@karpathy

LLM Knowledge Bases

Something I'm finding very useful recently: using LLMs to build personal knowledge bases for various topics of research interest. In this way, a large fraction of my recent token throughput is going less into manipulating code, and more into manipulating knowledge (stored as markdown and images). The latest LLMs are quite good at it. So:

Data ingest: I index source documents (articles, papers, repos, datasets, images, etc.) into a raw/ directory, then I use an LLM to incrementally "compile" a wiki, which is just a collection of .md files in a directory structure. The wiki includes summaries of all the data in raw/, backlinks, and then it categorizes data into concepts, writes articles for them, and links them all. To convert web articles into .md files I like to use the Obsidian Web Clipper extension, and then I also use a hotkey to download all the related images locally so that my LLM can easily reference them.

IDE: I use Obsidian as the IDE "frontend" where I can view the raw data, the compiled wiki, and the derived visualizations. Important to note that the LLM writes and maintains all of the data of the wiki; I rarely touch it directly. I've played with a few Obsidian plugins to render and view data in other ways (e.g. Marp for slides).

Q&A: Where things get interesting is that once your wiki is big enough (e.g. mine on some recent research is ~100 articles and ~400K words), you can ask your LLM agent all kinds of complex questions against the wiki, and it will go off, research the answers, etc. I thought I had to reach for fancy RAG, but the LLM has been pretty good about auto-maintaining index files and brief summaries of all the documents, and it reads all the important related data fairly easily at this ~small scale.

Output: Instead of getting answers in text/terminal, I like to have it render markdown files for me, or slide shows (Marp format), or matplotlib images, all of which I then view again in Obsidian. You can imagine many other visual output formats depending on the query. Often, I end up "filing" the outputs back into the wiki to enhance it for further queries. So my own explorations and queries always "add up" in the knowledge base.

Linting: I've run some LLM "health checks" over the wiki to e.g. find inconsistent data, impute missing data (with web searches), find interesting connections for new article candidates, etc., to incrementally clean up the wiki and enhance its overall data integrity. The LLMs are quite good at suggesting further questions to ask and look into.

Extra tools: I find myself developing additional tools to process the data, e.g. I vibe coded a small and naive search engine over the wiki, which I use directly (in a web UI), but more often hand off to an LLM via CLI as a tool for larger queries.

Further explorations: As the repo grows, the natural desire is to also think about synthetic data generation + finetuning, to have your LLM "know" the data in its weights instead of just context windows.

TLDR: raw data from a number of sources is collected, then compiled by an LLM into a .md wiki, then operated on by various CLIs by the LLM to do Q&A and to incrementally enhance the wiki, and all of it is viewable in Obsidian. You rarely ever write or edit the wiki manually; it's the domain of the LLM. I think there is room here for an incredible new product instead of a hacky collection of scripts.

0 replies · 0 reposts · 18 likes · 704 views
Robert Sasu | dev/acc@SasuRobert·
Agents everywhere. It is a must. And just start trying out the local LLMs and let’s make those better. We could technically beat the big guys as well. Open source is amazing.
3 replies · 8 reposts · 74 likes · 1.2K views
Robert Sasu | dev/acc@SasuRobert·
In terms of lines of code, engineering hours, bug fixing, improving and testing, this was a tremendous Q1. I do not think any other ecosystem has done something like this. Really. The output was, and is, insane. In terms of technology, 2026 is rewriting the landscape.
$amuel@theleerick

Q1 2026 was a milestone quarter for the @MultiversX ecosystem, from the Supernova Governance Vote and the rise of AI on-chain, to Supernova going live and the community stress-testing it in real time. So I put together a quick recap of everything that mattered, broken down in under 2 minutes. Watch the full video below and share your thoughts on where MultiversX stands after this quarter. New to MultiversX? The thread has all the key links to help you get started. #EGLD #SUPERNOVA #MULTIVERSX

5 replies · 27 reposts · 130 likes · 2.9K views
Robert Sasu | dev/acc@SasuRobert·
The open source LLM community, all the tools, all the work deserves even more appreciation. I ran a few benchmarks in the last week, and with a 4-year-old M1 MacBook I can get the same tokens per second and roughly similar results as the Opus or Gemini models. This is so wonderful. Right now Google's Gemini is definitely killing it on speed with the Flash and Lite models; those run at an outstanding speed. But for coding, thinking and research, the local models are getting the same results. And this is only the start. It will keep getting better and better.
[attached image]
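For context on how a tokens-per-second figure like the ones above is computed: total generated tokens divided by wall-clock time. The generator below is a stub standing in for a real local-model call (e.g. via mlx-lm); it is an assumption for illustration, not the actual benchmark script.

```python
# Toy throughput harness: count tokens emitted by a generator and divide
# by elapsed wall time. Swap `generate_stub` for a real model's token stream.
import time

def generate_stub(n_tokens):
    """Stand-in token stream: pretend each token takes ~1 ms to produce."""
    for _ in range(n_tokens):
        time.sleep(0.001)
        yield "tok"

def tokens_per_second(gen):
    start = time.perf_counter()
    count = sum(1 for _ in gen)
    elapsed = time.perf_counter() - start
    return count / elapsed

tps = tokens_per_second(generate_stub(200))
print(f"{tps:.0f} tokens/s")
```

Measuring this way captures end-to-end throughput, including any per-token overhead, which is what matters for interactive local use.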
0 replies · 2 reposts · 33 likes · 986 views
Digital Gold Talk@DigitalGoldTalk·
Senate Bill 1649 is getting closer to becoming law.
14 replies · 42 reposts · 220 likes · 5.7K views
Robert Sasu | dev/acc@SasuRobert·
And it is more maddening than losing 50 times at Mario Kart, especially when the AI is not working, or you are getting "Our servers are experiencing high traffic ...". My laptop has averaged 90% continuous workload since late January; even in sleep it was churning and creating things with LLMs. Now, with local MLX, or some free models through OpenRouter / Hugging Face, I am constantly running experiments everywhere. It takes some time to write a script and let it run overnight. But you get so mad when the AI makes the same mistakes over and over.
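A generic retry-with-exponential-backoff wrapper is the usual way to keep such overnight scripts alive through "high traffic" errors. The sketch below is hypothetical: `with_retries` and the flaky endpoint are illustrative stand-ins, not a specific OpenRouter or Hugging Face API.

```python
# Generic retry wrapper with exponential backoff for flaky API calls.
import time

def with_retries(call, max_attempts=5, base_delay=0.01):
    for attempt in range(max_attempts):
        try:
            return call()
        except RuntimeError:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the error
            time.sleep(base_delay * 2 ** attempt)  # 1x, 2x, 4x, ... the base delay

# Stub endpoint that fails twice, then succeeds, to exercise the wrapper
attempts = {"n": 0}
def flaky_call():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("high traffic")
    return "ok"

print(with_retries(flaky_call))  # prints "ok" after two failed attempts
```

In a real overnight run you would also cap the total delay and log each failure, so the morning-after transcript shows where the servers fell over.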
Naval@naval

Vibe coding is more addictive than any video game ever made (if you know what you want to build).

1 reply · 0 reposts · 21 likes · 895 views
Robert Sasu | dev/acc@SasuRobert·
There was a Google upgrade somewhere in the afternoon which made Gemini 3.1 Pro High really stable. I hope tomorrow morning it will not change and their servers will work. Everyone looks at the agent benchmarks, sees 8X% in software development, and says amazing. However, eighty-something percent still means about 1 out of 6 prompts or decisions is wrong. That is pretty bad for production code. 🧑‍💻 And I do not get why AI puts garbage fallback code everywhere, silently, instead of panicking or throwing errors. AI has some really weird, unprofessional ways of writing code in some cases.
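The "1 out of 6" figure follows directly from the arithmetic, and it compounds across multi-step agent runs:

```python
# An ~84% per-prompt success rate means roughly 1 in 6 prompts fails,
# and failures compound when an agent chains many prompts/decisions.
per_step = 0.84
print(f"failure rate per prompt: {1 - per_step:.2f}  (~1 in {1 / (1 - per_step):.0f})")

for steps in (1, 5, 10, 20):
    p_all_ok = per_step ** steps
    print(f"{steps:2d} steps all correct: {p_all_ok:.1%}")
```

So even at 84% per step, a 10-step agent run completes fully correctly less than a fifth of the time, which is why the benchmark number alone says little about production readiness.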
Robert Sasu | dev/acc@SasuRobert

AI is making April jokes today. The worst performance in days or weeks. Literally constant hallucinations. Local small LLM working better than Claude / Gemini models. What a time.

2 replies · 1 repost · 22 likes · 1K views
Robert Sasu | dev/acc@SasuRobert·
AI is making April jokes today. The worst performance in days or weeks. Literally constant hallucinations. Local small LLM working better than Claude / Gemini models. What a time.
2 replies · 0 reposts · 18 likes · 1.8K views
Adrian Tiberiu@AdrianLoghinT·
@SasuRobert Everyone is building things right now, but not much of it truly improves life. It feels like we care more about making more stuff than making better stuff (quantity > quality).
1 reply · 0 reposts · 2 likes · 157 views
Robert Sasu | dev/acc@SasuRobert·
Some people said AI will not lead to mass layoffs, because the workforce will reorder itself. There is a slight miscalculation there, and the market is responding. In a good scenario those people will find other jobs, or the market will invent new jobs which create value, but it takes time to do that; 5-10 years at least. This is the more-than-perfect economic disaster, a tornado. I do not know how the current economic system is still surviving. It is sort of a miracle. Covid, a 4-year Russia-Ukraine war, and now the Iran war and the AI Age. What to do in these years? Just build things.
Polymarket@Polymarket

BREAKING: Oracle laid off 20,000-30,000 employees this morning with a single 6 am email.

3 replies · 1 repost · 36 likes · 1.7K views
Robert Sasu | dev/acc@SasuRobert·
Today might be a good day to forget about AI and try to write some code manually. Don't you think so?
3 replies · 0 reposts · 24 likes · 1.3K views
Robert Sasu | dev/acc@SasuRobert·
I do not think there has ever been such a time of crazy development and democratisation of technology. This is so good. We are running more and more capable models on tiny hardware. Models are getting better, models are getting smaller, and the execution of matrix operations and RAM management is getting better, because we are using the models to optimise everything. What a world. It is amazing. Total freedom for creation.
the tiny corp@__tinygrad__

If you have a Thunderbolt or USB4 eGPU and a Mac, today is the day you've been waiting for! Apple finally approved our driver for both AMD and NVIDIA. It's so easy to install now a Qwen could do it, then it can run that Qwen...

0 replies · 2 reposts · 23 likes · 1.3K views
Robert Sasu | dev/acc@SasuRobert·
Everyone is moving to WASM. WASM has proven to be the language / engine that is the best fit for the web. So it is the best fit for blockchain as well. Not Solidity, not some language used only by a few super-geeky people, not some new engine that needs totally new tooling around it. But WASM. And MultiversX chose WASM when it was not even popular, because we are looking to the future. Time to BUIDL more!
Three.js@threejs

The future of Three.js is WebAssembly

1 reply · 13 reposts · 99 likes · 2.4K views