Gábor SEBESTYÉN (@[email protected]) 🇭🇺🇪🇺

10.2K posts


Gábor SEBESTYÉN (@[email protected]) 🇭🇺🇪🇺

@segabor

Legacy account, no longer actively used. Mastodon: @[email protected] · Bluesky: @segabor.czinege.social

Budapest, Hungary, Europe · Joined July 2007
519 Following · 186 Followers
Gábor SEBESTYÉN (@[email protected]) 🇭🇺🇪🇺 retweeted
Szabolcs Panyi
Szabolcs Panyi@panyiszabolcs·
‼️Could be connected: a security source warns of precise Russian plans for a false-flag provocation by “Ukrainian refugees” staging “Maidan-style” violence in Budapest at Ferenciek Square and Kossuth Square (Parliament), where the pro-Kremlin far-left has an election watch party.
Vatnik Soup@P_Kallioniemi

GRU agents posing as Ukrainians are reportedly planning to seize buildings in the city centre. Ukrainian intelligence says the operation involves former Berkut fighters who crushed Euromaidan protests in 2014 before fleeing to Russia. Former Berkut commander Serhiy Kusyuk is reportedly already in Budapest.

English
35
738
1.6K
158.8K
Gábor SEBESTYÉN (@[email protected]) 🇭🇺🇪🇺 retweeted
6th Sense
6th Sense@irishman787878·
ONCE AGAIN I DON'T HAVE ENOUGH BOXES TO TICK! Szabolcs Panyi facebook.com/share/p/14ZcEJ… "Sometimes well-intentioned, direct blackmail is the best solution," Foreign Minister Sergey Lavrov told his Hungarian counterpart Péter Szijjártó, who was reporting to him during a break at the December 2023 EU summit. This was the famous meeting where Viktor Orbán was ultimately sent out of the room so that the others could vote on Ukraine's EU accession. Our international investigative team has obtained new recordings and conversation transcripts that detail the Orbán government's Russian-coordinated operations in Brussels – not only against Ukraine, but against its own EU allies. The article was produced in collaboration between Central Europe's VSquare, Poland's FRONTSTORY, the Russian opposition outlet The Insider Russia, Estonia's Delfi (which also publishes in Russian), and Slovakia's Investigatívne centrum Jána Kuciaka. In a comment I'm linking the first part of our investigation, in which we showed how Péter Szijjártó helps Russian oligarchs, banks, and companies avoid EU sanctions – or get themselves removed from the sanctions list.
6th Sense tweet media
Hungarian
8
27
104
3.5K
Gábor SEBESTYÉN (@[email protected]) 🇭🇺🇪🇺 retweeted
Szabolcs Panyi
Szabolcs Panyi@panyiszabolcs·
I want to apologize to my followers that my feed is full of @buymeacoffee posts. There is an ongoing Orbán government–style, Russia-inspired smear campaign against me, aimed at discrediting my still-unpublished story on the details of FM Péter Szijjártó's leaks and communication with Sergey Lavrov. The result is thousands of people showing their support and appreciation for my work, boosting my visibility like never before. The best advertisement one could get for my upcoming book on how Putin's spies infiltrated Hungary and its political elite. Yes, that book is due in 2026. And there are some great memes too, thanks to the anonymous author!
Szabolcs Panyi tweet media
English
147
942
4K
104.2K
Gábor SEBESTYÉN (@[email protected]) 🇭🇺🇪🇺 retweeted
Nav Toor
Nav Toor@heynavtoor·
🚨 BREAKING: Someone just rebuilt the entire AI assistant stack in Zig.

It's called NullClaw. The binary is 678 KB. It uses ~1 MB of RAM. It boots in under 2 milliseconds. No runtime. No VM. No framework. No garbage collector. Just raw Zig.

Here's why this is absurd:
→ OpenClaw needs a $599 Mac Mini and 1 GB+ RAM
→ NanoBot needs 100 MB+ RAM and Python
→ PicoClaw needs 10 MB RAM and Go

NullClaw runs on a $5 board with 1 MB of RAM. Same functionality. 0.1% of the resources.

Here's what's packed into that 678 KB:
→ 22+ AI providers (OpenAI, Anthropic, Ollama, DeepSeek, Groq, etc.)
→ 13 chat channels (Telegram, Discord, Slack, WhatsApp, iMessage, IRC)
→ 18+ built-in tools
→ Hybrid vector + keyword memory search
→ Multi-layer sandboxing (Landlock, Firejail, Docker)
→ Hardware peripheral support (Arduino, Raspberry Pi, STM32)
→ MCP, subagents, streaming, voice, the full stack

Here's the wildest part: every subsystem is a vtable interface. Swap any provider, channel, tool, memory backend, or runtime with a config change. Zero code changes.

It even encrypts your API keys with ChaCha20-Poly1305 by default.

2,738 tests. ~45,000 lines of Zig. Zero dependencies beyond libc. 100% open source. MIT license.
Nav Toor tweet media
English
227
500
4.9K
488K
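The "vtable interface" design the post describes can be sketched in a few lines. This is an illustrative analogue only, not NullClaw's actual code (which is Zig); the class and registry names here are hypothetical, and the point is the pattern: every backend implements one shared interface, and a config value, not a code change, selects the implementation.

```python
from abc import ABC, abstractmethod

class Provider(ABC):
    """Shared interface every backend implements (the 'vtable' analogue)."""
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class EchoProvider(Provider):
    def complete(self, prompt: str) -> str:
        return f"echo: {prompt}"

class ReverseProvider(Provider):
    def complete(self, prompt: str) -> str:
        return prompt[::-1]

# Registry keyed by a config value: swapping implementations is a config
# change, not a code change.
PROVIDERS = {"echo": EchoProvider, "reverse": ReverseProvider}

def make_provider(config: dict) -> Provider:
    return PROVIDERS[config["provider"]]()

agent = make_provider({"provider": "echo"})
print(agent.complete("hello"))  # echo: hello
```

In Zig the same effect is achieved with a struct of function pointers plus an opaque context pointer, which is what keeps the dispatch overhead (and binary size) so small.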
Gábor SEBESTYÉN (@[email protected]) 🇭🇺🇪🇺 retweeted
Reiner Pope
Reiner Pope@reinerpope·
We’re building an LLM chip that delivers much higher throughput than any other chip while also achieving the lowest latency. We call it the MatX One.

The MatX One chip is based on a splittable systolic array, which has the energy and area efficiency that large systolic arrays are famous for, while also getting high utilization on smaller matrices with flexible shapes. The chip combines the low latency of SRAM-first designs with the long-context support of HBM. These elements, plus a fresh take on numerics, deliver higher throughput on LLMs than any announced system while simultaneously matching the latency of SRAM-first designs. Higher throughput and lower latency give you smarter and faster models for your subscription dollar.

We’ve raised a $500M Series B to wrap up development and quickly scale manufacturing, with tapeout in under a year. The round was led by Jane Street, one of the most tech-savvy Wall Street firms, and Situational Awareness LP, whose founder @leopoldasch wrote the definitive memo on AGI. Participants include @sparkcapital, @danielgross and @natfriedman’s fund, @patrickc and @collision, @TriatomicCap, @HarpoonVentures, @karpathy, @dwarkesh_sp, and others. We’re also welcoming investors across the supply chain, including Marvell and Alchip.

@MikeGunter_ and I started MatX because we felt that the best chip for LLMs should be designed from first principles, with a deep understanding of what LLMs need and how they will evolve. We are willing to give up on small-model performance, low-volume workloads, and even ease of programming to deliver on such a chip.

We’re now a 100-person team with people who think about everything from learning rate schedules, to Swing Modulo Scheduling, to guard/round/sticky bits, to blind-mated connections, all in the same building. If you’d like to help us architect, design, and deploy many generations of chips in large volume, consider joining us.
English
123
201
2.2K
3M
Gábor SEBESTYÉN (@[email protected]) 🇭🇺🇪🇺 retweeted
Clément Molin
Clément Molin@clement_molin·
🇺🇦/🇷🇺 02/24/2022, 4:45 a.m. Four years ago, at around 4:45, the first images of the war reached us: this Ukrainian border guard fleeing the Kalanchak border post near Crimea, followed by a few civilians. These are the first images of the start of the full-scale invasion. Geoloc: 46.143365, 33.635930
Clément Molin tweet media (3 images)
English
9
114
844
55.5K
Gábor SEBESTYÉN (@[email protected]) 🇭🇺🇪🇺 retweeted
Swift Language
Swift Language@SwiftLang·
📐Announcing Swift System Metrics 1.0! 🎉 A stable API for process-level monitoring on Linux and macOS. Add it to your service in a few lines, plug into Prometheus or OTel, and start visualizing in Grafana. Contributions welcome! swift.org/blog/swift-sys…
Swift Language tweet media
English
3
46
261
18.3K
Gábor SEBESTYÉN (@[email protected]) 🇭🇺🇪🇺 retweeted
Sam Altman
Sam Altman@sama·
Peter Steinberger is joining OpenAI to drive the next generation of personal agents. He is a genius with a lot of amazing ideas about the future of very smart agents interacting with each other to do very useful things for people. We expect this will quickly become core to our product offerings. OpenClaw will live in a foundation as an open source project that OpenAI will continue to support. The future is going to be extremely multi-agent and it's important to us to support open source as part of that.
English
4.9K
4.3K
46.2K
16.8M
Gábor SEBESTYÉN (@[email protected]) 🇭🇺🇪🇺 retweeted
Claude
Claude@claudeai·
We're bringing some of Claude’s most-used features to the free plan. File creation, connectors, and skills are all now available without a subscription.
Claude tweet media
English
385
686
8.5K
636.5K
Gábor SEBESTYÉN (@[email protected]) 🇭🇺🇪🇺 retweeted
Swift Language
Swift Language@SwiftLang·
🗞️News from Swift: web apps, a new mail stack, 3D printing, embedded Swift updates, and some community events. Read all about it in the latest Swift blog post: swift.org/blog/whats-new…
Swift Language tweet media
English
5
17
150
13.7K
Gábor SEBESTYÉN (@[email protected]) 🇭🇺🇪🇺 retweeted
Swift Language
Swift Language@SwiftLang·
Windows developers, you're invited! 🪟 We just launched a Windows workgroup to make Swift even better on Windows: improving the toolchain, core packages, API bridging, and deployment experiences. Curious? Interested in contributing? You're welcome here. 👉 swift.org/blog/announcin…
English
9
62
329
23.1K
Gábor SEBESTYÉN (@[email protected]) 🇭🇺🇪🇺 retweeted
Swift Language
Swift Language@SwiftLang·
🎉 10 years of open source Swift! A decade ago today, we opened Swift to the world with a simple blog post: swift.org/blog/welcome What's grown since—thanks to an incredible community of contributors—has been extraordinary. Here's to the next ten years. 🧡
Swift Language tweet media
English
10
105
561
90.7K
Gábor SEBESTYÉN (@[email protected]) 🇭🇺🇪🇺 retweeted
Swift Language
Swift Language@SwiftLang·
1️⃣ Swift comes to FreeBSD Swift is now available in preview for FreeBSD 14.3 and later. We want your feedback, bug reports, and contributions to make Swift great on FreeBSD! 🧑‍💻Read the announcement here: forums.swift.org/t/swift-on-fre… #FreeBSD
Swift Language tweet media
English
2
29
151
14.5K
Gábor SEBESTYÉN (@[email protected]) 🇭🇺🇪🇺 retweeted
Republicans against Trump
Republicans against Trump@RpsAgainstTrump·
Massive anti-Orban rally in Budapest yesterday. Hungarians are sick and tired of the corrupt, pro-Putin regime. Polls now show Viktor Orban headed for a crushing defeat in next year’s election.
English
585
2.7K
12.9K
238.6K
Gábor SEBESTYÉN (@[email protected]) 🇭🇺🇪🇺 retweeted
Andrej Karpathy
Andrej Karpathy@karpathy·
Excited to release new repo: nanochat! (It's among the most unhinged I've written.) Unlike my earlier similar repo nanoGPT, which only covered pretraining, nanochat is a minimal, from-scratch, full-stack training/inference pipeline of a simple ChatGPT clone in a single, dependency-minimal codebase. You boot up a cloud GPU box, run a single script, and as little as 4 hours later you can talk to your own LLM in a ChatGPT-like web UI.

It weighs ~8,000 lines of imo quite clean code to:
- Train the tokenizer using a new Rust implementation
- Pretrain a Transformer LLM on FineWeb, evaluating CORE score across a number of metrics
- Midtrain on user-assistant conversations from SmolTalk, multiple-choice questions, and tool use
- SFT, then evaluate the chat model on world-knowledge multiple choice (ARC-E/C, MMLU), math (GSM8K), and code (HumanEval)
- Optionally RL the model on GSM8K with "GRPO"
- Run efficient inference on the model in an Engine with KV cache, simple prefill/decode, and tool use (Python interpreter in a lightweight sandbox), and talk to it over CLI or a ChatGPT-like web UI
- Write a single markdown report card, summarizing and gamifying the whole thing

Even for as low as ~$100 in cost (~4 hours on an 8XH100 node), you can train a little ChatGPT clone that you can kind of talk to, and which can write stories/poems and answer simple questions. About ~12 hours surpasses the GPT-2 CORE metric. As you further scale up towards ~$1000 (~41.6 hours of training), it quickly becomes a lot more coherent and can solve simple math/code problems and take multiple-choice tests. E.g. a depth-30 model trained for 24 hours (about equal to the FLOPs of GPT-3 Small 125M, and 1/1000th of GPT-3) gets into the 40s on MMLU, the 70s on ARC-Easy, the 20s on GSM8K, etc.

My goal is to get the full "strong baseline" stack into one cohesive, minimal, readable, hackable, maximally forkable repo. nanochat will be the capstone project of LLM101n (which is still being developed). I think it also has potential to grow into a research harness, or a benchmark, similar to nanoGPT before it. It is by no means finished, tuned, or optimized (actually, I think there's likely quite a bit of low-hanging fruit), but I think it's at a place where the overall skeleton is ok enough that it can go up on GitHub, where all the parts of it can be improved. Link to the repo and a detailed walkthrough of the nanochat speedrun is in the reply.
Andrej Karpathy tweet media
English
690
3.4K
24.2K
5.8M
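The "KV cache, simple prefill/decode" step mentioned in the nanochat post can be illustrated with a toy sketch. This is not nanochat's actual code; the names here are hypothetical stand-ins, and the real cache stores per-layer key/value tensors. The pattern it shows is real, though: prefill processes the whole prompt once and fills the cache, while each decode step computes keys/values for just one new token and attends over everything cached so far.

```python
class ToyKVCache:
    """Grows by one (key, value) pair per processed token."""
    def __init__(self):
        self.keys, self.values = [], []

    def append(self, k, v):
        self.keys.append(k)
        self.values.append(v)

def project(token):
    # Stand-in for an attention layer's real key/value projections.
    return float(token), float(token) * 2.0

def prefill(cache, prompt_tokens):
    # One pass over the full prompt, filling the cache.
    for t in prompt_tokens:
        cache.append(*project(t))

def decode_step(cache, new_token):
    # Only the new token's key/value are computed; attention would read
    # the entire cache. We return the visible context length.
    cache.append(*project(new_token))
    return len(cache.keys)

cache = ToyKVCache()
prefill(cache, [10, 11, 12])
print(decode_step(cache, 13))  # 4
```

The payoff is that decode cost per token stays roughly constant instead of re-encoding the whole sequence at every step, which is what makes interactive chat inference practical.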
Gábor SEBESTYÉN (@[email protected]) 🇭🇺🇪🇺 retweeted
Normarayr
Normarayr@normarayr·
BONUS: one-line CLI patch for @code and @cursor_ai, until this is fixed either by Apple or Electron: github.com/microsoft/vsco… (#issuecomment-3316457267)
English
0
4
12
2.5K
Gábor SEBESTYÉN (@[email protected]) 🇭🇺🇪🇺 retweeted
François Chollet
François Chollet@fchollet·
The 3rd edition of my book Deep Learning with Python is being printed right now, and will be in bookstores within 2 weeks. You can order it now from Amazon or from Manning. This time, we're also releasing the whole thing as a 100% free website. I don't care if it reduces book sales, I think it's the best deep learning intro around, and more people should be able to read it.
François Chollet tweet media
English
295
834
6.1K
829.2K