csgm
@csgbwk

2.3K posts

french husband, robotics, ai, computer graphics

France · Joined April 2024
312 Following · 69 Followers
csgm @csgbwk:
@0x45o The website is undergoing maintenance right now, so here is my submission :)
[image]
csgm retweeted
tautologer @tautologer:
guy who has been completely sedentary and hasn't left the house: why am I so sleepy and anxious?
csgm @csgbwk:
@lapislagoons That's actually a pretty common activity around there
csgm @csgbwk:
I think the only issue with Vulkan is that you're in the dark for quite a long time, and it's hard to find bugs during that "darkness" period. Other than that, it's a pretty nice API to work with.
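In practice, the usual way to shorten that darkness is the Khronos validation layer, which turns silent API misuse into readable error messages. A minimal sketch of enabling it at instance creation; the function name createDebugInstance is just illustrative, and error handling is omitted:

```cpp
// Minimal sketch: enable the Khronos validation layer when creating
// the VkInstance, so misuse is reported instead of failing silently.
#include <vulkan/vulkan.h>

VkInstance createDebugInstance() {  // hypothetical helper name
    const char* layers[]     = { "VK_LAYER_KHRONOS_validation" };
    const char* extensions[] = { VK_EXT_DEBUG_UTILS_EXTENSION_NAME };

    VkApplicationInfo appInfo{};
    appInfo.sType      = VK_STRUCTURE_TYPE_APPLICATION_INFO;
    appInfo.apiVersion = VK_API_VERSION_1_3;

    VkInstanceCreateInfo createInfo{};
    createInfo.sType                   = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
    createInfo.pApplicationInfo        = &appInfo;
    createInfo.enabledLayerCount       = 1;
    createInfo.ppEnabledLayerNames     = layers;
    createInfo.enabledExtensionCount   = 1;
    createInfo.ppEnabledExtensionNames = extensions;

    VkInstance instance = VK_NULL_HANDLE;
    vkCreateInstance(&createInfo, nullptr, &instance);
    return instance;
}
```

With the layer active, most bugs that would otherwise only show up as a black screen are reported at the offending call.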
Taelin @VictorTaelin:
@_inception_ai @ArtificialAnlys This approach has enormous potential. I suppose with a larger run this would approach the intelligence of the leading models?
Inception @_inception_ai:
Mercury 2 is in a league of its own. 1,200 tok/s at comparable quality to speed-optimized autoregressive models, per @ArtificialAnlys.
[image]
csgm @csgbwk:
@IterIntellectus Even easier: terraform Earth in the places where it's annoying. We have most of the tech; we just need political will and scaling.
csgm @csgbwk:
@VictorTaelin Could it be that Anthropic and OpenAI trained on your own input for the past 3 years, which would be why they outperform so much? Also impressive to see Gemini perform so well on all benchmarks, even private ones, and still be completely unusable in real life.
Taelin @VictorTaelin:
Introducing LamBench.

You asked me to make a benchmark, so I made it. It is a simple, old-style Q&A consisting of 120 fresh λ-calculus programming questions. Some are easy, like "implement add for λ-encoded nats". Some are harder, like "derive a generic fold for arbitrary λ-encodings". It measures:
- intelligence (% tasks completed)
- elegance (BLC-length of solutions)
- speed (completion time)
Basically what I care about, other than long context.

I made it today because I was excited about GPT 5.5. It didn't do too well ): (My first-day impression is that I can't tell the difference between GPT 5.5 and GPT 5.4. I would be lying if I said otherwise. I wouldn't be able to tell them apart in a blind test. I need more time. It is much faster, though.)

This is a new, simple bench, so expect bugs, especially on OpenRouter models. I'll retest soon. Also, it was born saturated. V2 will be harder...

↓ Link and more charts below ↓
[image]
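For readers who haven't seen λ-encodings, here is roughly what the easy task above means. This is not LamBench's reference solution, just an illustrative sketch of Church-encoded naturals and their addition, written in C++ with generic lambdas:

```cpp
#include <iostream>

// Church numerals: a nat n is a function taking a successor s and a
// zero z, and applying s to z exactly n times.
auto zero = [](auto s, auto z) { return z; };
auto succ = [](auto n) {
    return [n](auto s, auto z) { return s(n(s, z)); };
};
// add m n = λs.λz. m s (n s z): apply s m more times on top of n.
auto add = [](auto m, auto n) {
    return [m, n](auto s, auto z) { return m(s, n(s, z)); };
};

int main() {
    auto two   = succ(succ(zero));
    auto three = succ(two);
    auto five  = add(two, three);
    // Convert back to a machine int by instantiating s and z.
    std::cout << five([](int x) { return x + 1; }, 0) << "\n"; // prints 5
}
```

The pure λ-calculus version is the same idea without the host language: add = λm.λn.λs.λz. m s (n s z).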
csgm retweeted
Markus Schütz @m_schuetz:
New paper 🙂 Nanite has shown that small triangles can be rendered fast in compute; we're exploring how fast this goes for large meshes with up to 18.9 billion triangles, without the need to precompute LOD structures. Paper: github.com/m-schuetz/CuRa… Source: github.com/m-schuetz/CuRa…
[image]
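As far as I know (an assumption about this family of techniques, not code from the paper), the core trick in such compute rasterizers is to pack depth and color into a single 64-bit word per pixel, so one atomic min performs the depth test and the color write together. A CPU-side C++ sketch; Framebuffer and writeFragment are hypothetical names:

```cpp
#include <atomic>
#include <cstdint>
#include <vector>

// One 64-bit word per pixel: depth in the high 32 bits, color in the
// low 32 bits, so comparing the packed words compares depth first.
struct Framebuffer {
    std::vector<std::atomic<uint64_t>> pixels;

    explicit Framebuffer(size_t n) : pixels(n) {
        for (auto& p : pixels) p.store(UINT64_MAX); // far depth, no color
    }

    // Depth-test-and-write for one fragment; smaller depth wins.
    void writeFragment(size_t index, uint32_t depth, uint32_t color) {
        uint64_t packed  = (uint64_t(depth) << 32) | color;
        uint64_t current = pixels[index].load(std::memory_order_relaxed);
        // Emulate an atomic fetch-min with a CAS loop; a failed CAS
        // reloads `current`, and we stop once we no longer win.
        while (packed < current &&
               !pixels[index].compare_exchange_weak(
                   current, packed, std::memory_order_relaxed)) {
        }
    }
};
```

On the GPU this would be a single 64-bit atomicMin per fragment; the CAS loop above only emulates that portably on the CPU.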
csgm @csgbwk:
@zuhaitz_dev I kind of got out of the compilers field, sadly :/ I chose computer graphics and robotics, but I could have gone either way, tbh.
Zuhaitz @zuhaitz_dev:
@csgbwk It takes a bit of time, but it's doable. I'm sure you would be able to (: