Nikolas Göbel
@NikolasGoebel

582 posts

Making mental models executable @RelationalAI

Zurich, Switzerland · Joined February 2010
401 Following · 454 Followers
Nikolas Göbel@NikolasGoebel·
That was fast. Do you have your own eval or are you the eval?
1
0
0
55
Richard Artoul@richardartoul·
I probably should have done literally any research before starting to write a custom harness. Turns out if you only give an LLM shell access as its means of interacting with the world, it ends up pretty stupid
6
0
10
1.9K
Richard Artoul@richardartoul·
@NikolasGoebel It's possible I just royally messed something up in my harness. I have a giant refactor cranking right now to add more traditional "actions" like read_file, patch_file, search_web etc., and I'll see if it's smarter after that
1
0
0
75
Nikolas Göbel@NikolasGoebel·
@richardartoul Right... Still a little surprising considering how thoroughly they seem to have mastered bash. Could be that there is too much "friction loss" from unlearning read/write/edit on-the-fly.
1
0
0
74
Richard Artoul@richardartoul·
@NikolasGoebel Beats me, just probably not how they’re trained. All the major harnesses have custom actions for writing/reading/editing files and I think you just have to contort yourself a lot to do that with simple command line utilities
1
0
1
146
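The custom harness actions discussed in this thread can be sketched as a minimal dispatch table. The action names read_file and patch_file come from the thread itself; the dispatch shape and function signatures below are assumptions for illustration, not any particular harness's API.

```python
from pathlib import Path

# Minimal sketch of harness "actions" an LLM agent could call instead of
# raw shell commands. Action names follow the thread (read_file, patch_file);
# everything else here is a hypothetical illustration.

def read_file(path: str) -> str:
    """Return the file's contents, so the model never shells out to cat."""
    return Path(path).read_text()

def patch_file(path: str, old: str, new: str) -> bool:
    """Replace one exact substring; safer than regenerating the whole file."""
    text = Path(path).read_text()
    if old not in text:
        return False  # signal the model to retry with a corrected snippet
    Path(path).write_text(text.replace(old, new, 1))
    return True

ACTIONS = {"read_file": read_file, "patch_file": patch_file}

def dispatch(name: str, **kwargs):
    """Route a model-issued tool call to its handler."""
    return ACTIONS[name](**kwargs)
```

The point of the exact-substring check in patch_file is that a failed match becomes structured feedback to the model, rather than a silent shell error.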
Laura Ruis@LauraRuis·
@NikolasGoebel @MinqiJiang Until we give that away as well 👀 I remember a time when we said we were gonna sandbox the AI; that didn't last long
1
0
1
32
Minqi Jiang@MinqiJiang·
Many think AI will automate away knowledge workers. Yet if you use these tools daily, it’s obvious that AI *increases* how much time you spend working. Why? There’s infinite work to be done. Work used to stall on expertise gaps and turnaround times, and models now massively reduce both.
3
2
26
1.8K
Nikolas Göbel@NikolasGoebel·
@LauraRuis @MinqiJiang Agreed. A lot of work is only "better" or "worse" because a human judges it so, based on their value system. While that is the case, there is always an opportunity for humans to better leverage and align the available raw intelligence. We're not off the hook yet, Laura!
1
0
1
42
Nikolas Göbel@NikolasGoebel·
@chamath And if everyone is converging on running inference 24/7 anyway, then there is less of a gap between peak and base load to smooth over
0
0
0
112
Chamath Palihapitiya@chamath·
Is on-premise the new cloud? I’m beginning to think yes. It’s the only way for companies to not blow themselves up and have some semblance of capability in an AI world…
419
125
2.4K
789.6K
Nikolas Göbel@NikolasGoebel·
Wondering what all those unix tools that nobody ever called before '25 are going through right now
0
0
2
107
Nikolas Göbel@NikolasGoebel·
@sarahcat21 Although on the current trajectory RAM may cost thousands per gigabyte again before we know it...
1
0
62
3.6K
Sarah Catanzaro@sarahcat21·
Postgres and MySQL were built when RAM cost thousands per gigabyte. CedarDB rebuilt every layer (optimizer, buffer manager, parallelism) for modern hardware. 1000x improvements aren’t magic, they’re engineering debt finally paid by brilliant German DB researchers
sisyphus bar and grill@itunpredictable

@cedar_db is incredibly cool and more people should know about it. They’re a team of PhDs in Munich building a new relational database, on top of almost 10 years of academic research, that crushes existing benchmarks and maybe (finally?) gets us to the HTAP grail.

The core idea is that existing RDBMSes like MySQL and Postgres were built more than 30 years ago, on assumptions about hardware constraints that are just not true anymore. These ecosystems have evolved admirably, but ultimately… it’s a database. It’s built not to change very much. Here are a few of the ways that CedarDB is rethinking every element of the database:

1) A better query optimizer. In the last 30 years we’ve made a lot of progress on how to optimize SQL queries, to the point where an optimized query can easily outperform a not-so-optimized query by a ton. But not many query optimization improvements have made the leap from research into databases today. CedarDB did a few things on this front: implemented the unnesting algorithm developed by Thomas Neumann (one of the leaders of the Umbra research project CedarDB came from), an improvement of more than 1000x; developed a novel approach to join ordering using adaptive optimization that can handle 5K+ relations; and created a statistics subsystem that tells the optimizer things that traditional databases can’t.

2) What if your database was actually a compiler? CedarDB doesn’t interpret queries; it generates code instead. For every SQL query that a user writes, CedarDB parses it, optimizes it, and generates machine code that the CPU can directly execute. This has been a holy grail for a while, and they implemented it via a custom low-level language that is cheap to convert into machine code via a custom assembler. Another way that CedarDB improves performance is through Adaptive Query Execution: they start executing each query immediately with a “quick and dirty” version, while working on better versions in the background.

3) Taking advantage of all cores / Amdahl’s law. Distributing work fairly between all available cores is notoriously difficult, and the CedarDB team would argue that most databases underutilize their hardware. Their clever approach to this problem is called morsel-driven parallelism. CedarDB breaks queries down into segments: pipelines of self-contained operations. Then, per segment, data is divided into “morsels”: small input chunks containing roughly ~100K tuples each. You can read more in the original paper here: db.in.tum.de/~leis/papers/m…

4) Rethinking the buffer manager. Modern systems come equipped with massive amounts of RAM; there’s actually much more “room at the club” than database developers initially assumed. So the idea of the revamped buffer manager in CedarDB is that you can (and should) expect variance not just in data access patterns, but in storage speed and location, page sizes and data organization, and memory hierarchy. CedarDB’s buffer manager is designed from the ground up to work in a heavily multi-threaded environment. It decentralizes buffer management with pointer swizzling: each pointer (memory address) knows whether its data is in memory or on disk, eliminating the global lock that throttles traditional buffer managers.

5) Building a database for change. Databases are built to not change. It’s exactly this stability that gives each generation the confidence to build their apps (no matter how different they are) on systems like Postgres: you know what you’re getting. But there’s also a clear downside to this rigidity. CedarDB’s storage class system employs pluggable interfaces where adding new storage types doesn’t require rewriting other components. E.g., if CXL becomes the go-to storage interface at some point in the future, you don’t need to write a whole new component, you just need another endpoint for the buffer manager.

Anyway, these are just a few of the ideas they’re bringing to the table. Maybe it’s because they’re in Germany, maybe it’s because they’re just really humble, but more people should know about this team!! Check out the full post here: amplifypartners.com/blog-posts/the…

29
57
1.1K
146.6K
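The morsel-driven parallelism described in the quoted thread can be sketched in a few lines: a toy Python version, not how CedarDB/Umbra actually implements it (they run compiled pipelines over a shared morsel pool). The ~100K-tuple morsel size is the figure quoted above; the tiny size here is just for illustration.

```python
from concurrent.futures import ThreadPoolExecutor

# Toy sketch of morsel-driven parallelism: split a pipeline's input into
# small fixed-size chunks ("morsels") that worker threads process, so all
# cores stay busy even when per-chunk work is skewed.
MORSEL_SIZE = 4  # real systems use roughly ~100K tuples per morsel

def morsels(data, size=MORSEL_SIZE):
    """Yield consecutive fixed-size chunks of the input."""
    for i in range(0, len(data), size):
        yield data[i:i + size]

def run_pipeline(data, operator, workers=4):
    """Apply one self-contained pipeline operator to every morsel in
    parallel, then merge the per-morsel partial results."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = list(pool.map(operator, morsels(data)))
    return sum(partials)

# Example pipeline segment: filter even values, compute a partial sum.
def even_sum(morsel):
    return sum(x for x in morsel if x % 2 == 0)
```

The key property mirrored here is that the unit of scheduling is the morsel, not the whole input partition, so a slow morsel delays only one worker.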
Nikolas Göbel@NikolasGoebel·
Security may be what ends up rebalancing the scales in favour of software reuse and OSS. Sure, anyone can spend the tokens to recreate functionality. But are you willing to spend the tokens needed to find all possible exploits? Attackers only need to find a single one.
Sean Heelan@seanhn

Blog post: On the Coming Industrialisation of Exploit Generation with LLMs sean.heelan.io/2026/01/18/on-… TL;DR: I ran an experiment with GPT-5.2 and Opus 4.5 based agents to generate exploits for a zero-day QuickJS bug. They're pretty good at it. Code: github.com/SeanHeelan/ana…

0
1
1
252
Nikolas Göbel@NikolasGoebel·
@rjs Now that writing the machine is so much easier, we can finally focus on writing the machine that writes the machine.
0
0
0
42
Ryan Singer@rjs·
We're seeing a bifurcation. Vibe coding: I don't understand it, just get me a prototype. CAD coding: Help me build something that meets all these requirements robustly.
2
1
18
1.9K
Nikolas Göbel@NikolasGoebel·
@FilArons If I'm interpreting your website correctly, the work instruction is not the source of truth though, is it? It is derived from the model?
1
0
1
36
Fil Aronshtein@FilArons·
Work instructions are the core atomic unit of information in the factory. Master the work instruction, master the factory. This is Dirac, the manifold for adaptive production.
2
0
31
1.9K
Nikolas Göbel@NikolasGoebel·
Seems like future operating systems for the era of Claude Code need to work more like databases. Strong isolation and the ability to validate integrity constraints on the before and after state of the entire system before committing to a change (or rolling back).
0
1
0
162
Jonathan Blow@Jonathan_Blow·
What are the best videos/papers/other-references about code vectorization, that we can watch on-stream?
23
3
350
47.7K
Nikolas Göbel reposted
Blue Yonder@BlueYonder·
We're teaming up with Snowflake and @RelationalAI to boost our Cognitive Solutions with a supply chain knowledge graph. 🤝 Securely stored in the Snowflake AI Data Cloud, our knowledge graph enhances intuitive data engagement. Learn more: okt.to/tiFs1q
1
3
5
306
Nikolas Göbel reposted
RelationalAI@RelationalAI·
🚀 RelationalAI’s Knowledge Graph Coprocessor is GA as a Snowflake Native App! 🧠 With 10x less code and complexity, customers can now build intelligent applications using a data-centric architecture based on relational knowledge graphs. ➡️ tinyurl.com/mrxx63d5
0
2
11
686
Nikolas Göbel@NikolasGoebel·
@joe_hellerstein @arntzenius Bit of both? Factorization is a form of white-box compression where the de-compression can be described in relational algebra. That makes it very easy for a relational system to push computations directly to the compressed representation.
0
0
4
45
Joe Hellerstein@joe_hellerstein·
@arntzenius I think the connection to tries is more intuitive. Factorized = prefix-compressed. The lazy part is more what you do with the representation. No?
2
0
3
202
rntz@arntzenius·
intuition: factorized representations in databases are about efficient (optimal?) use of laziness.
2
1
14
1.1K
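The factorization thread above (factorized = prefix-compressed, with computation pushed to the compressed representation) can be made concrete with a toy sketch. The trie-shaped layout and the names below are illustrative assumptions, not any real system's API.

```python
# Toy sketch of a factorized (prefix-compressed, trie-like) relation:
# the join R(a,b) ⋈ S(b,c) stored as nested levels a -> b -> [c values],
# so each shared prefix is represented exactly once.
R = [(1, "x"), (1, "y"), (2, "x")]
S = [("x", 10), ("x", 20), ("y", 30)]

def factorize(r, s):
    """Build the factorized join without materializing flat tuples."""
    s_index = {}
    for b, c in s:
        s_index.setdefault(b, []).append(c)
    trie = {}
    for a, b in r:
        if b in s_index:
            trie.setdefault(a, {})[b] = s_index[b]
    return trie

def count(trie):
    """COUNT(*) pushed to the compressed form: sum branch sizes,
    never enumerate the flat join result."""
    return sum(len(cs) for bs in trie.values() for cs in bs.values())
```

This is the "white-box compression" point: because de-compression is itself a relational-algebra expression, an aggregate like COUNT can be evaluated directly on the factorized form.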
Nikolas Göbel reposted
Andrea Lattuada@utaal·
@HerrDreyer @mpi_sws_ Thank you @HerrDreyer! I’m very excited to be starting at MPI-SWS in September. If you’re interested in working with me on making verification even more practical with Verus, and on combining SMT solving and Iris-inspired techniques, please reach out! -> andrea.lattuada.me
4
7
27
3.3K