wontfix

18 posts
@DadMakingGames

Making games for our kids

Joined June 2020
59 Following · 6 Followers
wontfix
wontfix@DadMakingGames·
@Rafa_Schwinger It's excellent that they also included the base model for easy fine-tuning. The 0.8B is fantastic even on CPU.
1 · 0 · 1 · 674
Rafa Schwinger 🇻🇦
Rafa Schwinger 🇻🇦@Rafa_Schwinger·
People don't understand it yet, but the most important Qwen 3.5 model is not the 397B, or even the 27B, but the 0.8B.
61 · 52 · 1.4K · 113.4K
wontfix
wontfix@DadMakingGames·
@ID_AA_Carmack Love this approach. Even though it is more involved, I like the tiling solution for large ops (especially with MLIR) to break up execution and avoid watchdog kills, while still having a custom allocator that can pay the PCIe tax.
0 · 0 · 0 · 194
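The tiling idea in the reply above — splitting one large operation into bounded-size chunks so no single kernel launch runs long enough to be killed — can be sketched in pure Python. A toy matmul stands in for a real GPU op here; the function name and tile size are illustrative, not from any library:

```python
def matmul_tiled(a, b, tile=2):
    """Multiply matrices a (m x k) and b (k x n) one output tile at a time.

    Each (tile x tile) output block is a bounded unit of work, the way a
    tiled GPU kernel bounds each launch so long ops can't hit watchdog kills.
    """
    m, k, n = len(a), len(b), len(b[0])
    out = [[0] * n for _ in range(m)]
    for i0 in range(0, m, tile):              # iterate over output tiles
        for j0 in range(0, n, tile):
            for i in range(i0, min(i0 + tile, m)):
                for j in range(j0, min(j0 + tile, n)):
                    out[i][j] = sum(a[i][p] * b[p][j] for p in range(k))
    return out
```

The result is identical for any tile size; only the granularity of each unit of work changes, which is the point of tiling for schedulability.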
John Carmack
John Carmack@ID_AA_Carmack·
The glory work of GPU scheduling is in the frontier data centers with hundreds of thousands of GPUs, but a lot of research work is done with single-GPU jobs on modest clusters, and the scheduling leaves much to be desired. I wish there were a clean way to preempt GPU tasks, so long-running tasks could be transparently paused to give higher-priority tasks the minimum time-to-results. Manual checkpointing and cooperative multitasking are an option, but they complicate codebases and are fertile ground for bugs.

It feels like most of the pieces are present: everything goes through page tables on the GPUs already, Nvidia UVM (Unified Virtual Memory) allows demand paging to host memory, and MPS (Multi-Process Service) could act as a CUDA shim to force everything to use a different memory allocator. Memory page thrashing would be catastrophic for GPU tasks, but the idea would be to pause the host task of the low-priority process, then let the high-priority process force only the necessary pages out (or maybe none at all, if the memory pressure wasn't high enough) while it is running, then resume the low-priority task on completion, allowing it to page everything back in. Task switching at the level of tens of seconds, not milliseconds.

Even if it didn't handle absolutely all memory (kernel allocations and such) and had some overhead, that would be quite useful. Of course, Nvidia would prefer you to Just Buy More GPUs!
69 · 69 · 1.2K · 99K
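The host-side half of the tens-of-seconds task switching described above can be approximated today, crudely, with POSIX job-control signals: stop the low-priority host process, run the high-priority job, then resume. This is only a sketch of that one piece — it does nothing about GPU memory, which in the scheme above would have to come from UVM paging; the function and command names are illustrative:

```python
import signal
import subprocess
import sys
import time

def run_with_preemption(low_cmd, high_priority_job):
    """Start a low-priority worker process, pause it while a high-priority
    job runs alone, then resume it. SIGSTOP/SIGCONT pause only the host
    side; forcing GPU pages out (via UVM) is omitted here.
    """
    low = subprocess.Popen(low_cmd)
    time.sleep(0.2)                      # let the worker get going
    low.send_signal(signal.SIGSTOP)      # preempt: worker stops being scheduled
    result = high_priority_job()         # high-priority task runs alone
    low.send_signal(signal.SIGCONT)      # resume the paused worker
    low.wait()
    return result, low.returncode

if __name__ == "__main__":
    worker = [sys.executable, "-c", "import time; time.sleep(0.5)"]
    print(run_with_preemption(worker, lambda: sum(range(1000))))
```

POSIX-only, and real GPU tasks would still hold device memory while stopped — which is exactly the gap the UVM-based paging idea is meant to close.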
wontfix
wontfix@DadMakingGames·
@i2cjak I have been using KiCad via gitlab.com/kicad/code/kic… with Claude, which is a little off-putting because the window is being controlled, but it "works". It's not good at routing, but I recommend trying it for the novelty.
0 · 0 · 0 · 157
i2cjak
i2cjak@i2cjak·
What if I could vibe-route PCBs? Draw vaguely at the PCB and have it route. No, go back. Reroute this bit. Make this a zone. Without having to care about the minutiae? Stitching vias here. Could it be done? Constrained areas, telling it what trace width to use, what nets to connect.
21 · 0 · 96 · 6K
wontfix
wontfix@DadMakingGames·
@amypretzel If you use Claude skills with CadQuery, it works fairly well as long as you save the output screenshot. Nothing amazingly complex, but solid; adding OpenFOAM for electronics is great.
1 · 0 · 0 · 94
wontfix
wontfix@DadMakingGames·
@julien_c did you set up your lerobot skills?
0 · 0 · 0 · 118
Julien Chaumond
Julien Chaumond@julien_c·
spent the last 3 days perfecting my claude code setup, ama
54 · 1 · 112 · 33.1K
wontfix
wontfix@DadMakingGames·
@mfranz_on The Arduino CLI works well from Claude via skills: it communicates over gRPC and lets Claude verify builds (or upload, if you aren't scared), as well as use wokwi.com for testing. I hope it helps.
0 · 0 · 1 · 37
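A skill wrapping the Arduino CLI mostly needs to shell out to two subcommands. A minimal stdlib sketch of that command construction — the board FQBN and port values are illustrative defaults, and the shapes follow arduino-cli's documented `compile` and `upload` subcommands:

```python
def compile_cmd(sketch_dir, fqbn="arduino:avr:uno"):
    """arduino-cli compile builds/verifies the sketch without touching a board."""
    return ["arduino-cli", "compile", "--fqbn", fqbn, sketch_dir]

def upload_cmd(sketch_dir, port, fqbn="arduino:avr:uno"):
    """Upload only after a clean compile -- the "if you aren't scared" step."""
    return ["arduino-cli", "upload", "-p", port, "--fqbn", fqbn, sketch_dir]
```

Either list can be handed to `subprocess.run`; keeping verify and upload as separate steps is what lets an agent stop at verification by default.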
Marco Franzon
Marco Franzon@mfranz_on·
Arduino IDE has no LLM integration. No chat completion, no agentic integration. Why?
Marco Franzon tweet media
23 · 3 · 47 · 6.5K
wontfix
wontfix@DadMakingGames·
@kcimc Could it be that iteration-time improvements on the web will reduce the value of web-based applications and increase the value of native development?
0 · 0 · 1 · 165
Kyle McDonald
Kyle McDonald@kcimc·
after 2.5 years of vibe coding, my biggest takeaway? native apps are dead. iterating for the web is so much faster, has better tooling, and lower overhead. low-latency, multi-projector, 3d, spatial audio, custom hardware—ai will continue to have trouble with these.
66 · 7 · 327 · 43.2K
机器之心 JIQIZHIXIN
机器之心 JIQIZHIXIN@jiqizhixin·
Huge! @TianhongLi6 & Kaiming He (inventor of ResNet) just introduced JiT (Just image Transformers)! JiTs are simple large-patch Transformers that operate on raw pixels; no tokenizer, pre-training, or extra losses needed. By predicting clean data on the natural-data manifold, JiT excels in high-dimensional spaces where traditional noise-predicting models can fail. On ImageNet (256 & 512), JiT achieves competitive generative performance, showing that sometimes going back to basics is the key.
机器之心 JIQIZHIXIN tweet media
8 · 116 · 754 · 161.4K
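The "large-patch Transformer on raw pixels, no tokenizer" setup starts with plain non-overlapping patchification: the pixel patches themselves are the tokens. A pure-Python sketch of just that step (image as nested lists with one value per pixel; the function name and patch size are illustrative):

```python
def patchify(image, p):
    """Split an H x W image (nested lists, one value per pixel) into
    non-overlapping p x p patches, each flattened to a vector -- the
    raw-pixel "tokens" a large-patch Transformer consumes directly.
    Assumes H and W are divisible by p.
    """
    h, w = len(image), len(image[0])
    patches = []
    for i in range(0, h, p):
        for j in range(0, w, p):
            patches.append([image[i + di][j + dj]
                            for di in range(p) for dj in range(p)])
    return patches
```

A larger `p` means fewer, higher-dimensional tokens, which is the "large-patch" design choice the tweet highlights.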
wontfix
wontfix@DadMakingGames·
@GeorgeSiosi @SGRodriques For me, I run PaperQA over the papers on my subject, then summarize and discuss to validate the general information. I write the question and use DSPy to formulate a prompt for Gemini, asking for feedback on my prompt together with the paper summaries. I think it helps; I hope it helps you too.
1 · 0 · 1 · 41
Siosi
Siosi@GeorgeSiosi·
@SGRodriques There should be a button or prompts to help people formulate better prompts for your platform. I like the idea, but I can see how it needs more bridging for the average user.
1 · 0 · 1 · 160
Sam Rodriques
Sam Rodriques@SGRodriques·
I've heard from several people that they want to try Kosmos but can't figure out what to ask. Here are some notes on how to prompt it.

Firstly, start by just giving it a research objective you're working on currently. Think of Kosmos as a collaborator. Like a collaborator (and unlike chatbots), you don't need to specify exactly what you want it to do; but you also shouldn't expect it to go and automatically solve major problems in your field. Instead, the tasks you give it should require multiple steps and multiple rounds of iteration, while still being clearly achievable. For example, here's a Kosmos objective from one of the examples in our paper: "Investigate differences in transcriptional regulation, transcriptional entropy, and transcriptional noise during aging, comparing subclass '008 L2/3 IT ENT Glut' vs '007 L2/3 IT CTX Glut' and '003 L5/6 IT TPE-ENT Glut' vs '005 L5 IT CTX Glut', that could explain a higher propensity to accumulate proteins." This is a good objective because there is a clear goal and it is clearly doable, but you are not telling it exactly what to do. On the other hand, if you have a very specific analysis you want to do (e.g., "conduct a differential gene expression analysis between dataset X and Y" or "tell me what the average effect size is in clinical trials for XYZ"), that would be a better fit for our analysis agent or our literature agent, which you can also try for free on the platform.

Secondly, you should try running Kosmos both with and without a dataset. Without a dataset, Kosmos is basically an extremely sophisticated literature and meta-analysis agent, and it's great at generating new hypotheses. With a dataset, Kosmos is a fully fledged data-driven-discovery agent. If you're going to run it with a dataset, you want to use a dataset that is rich and complex. Omics data, complex animal study data, and financial data have all worked very well for us.

Finally, iterate! It will take a few tries to get used to how Kosmos works; but once you figure it out, it is extremely powerful.

The image is ChatGPT's impression of Kosmos. Pretty much on point...
Sam Rodriques tweet media
11 · 18 · 148 · 17.1K
wontfix
wontfix@DadMakingGames·
@SGRodriques Amazing release! Related to the API: would you consider creating an endpoint we could use to preflight the Kosmos prompt we want to run and get feedback on it? I have been using DSPy to prompt SmolLM3 + the pqa CLI (for context) to improve my Kosmos prompts, and it helps.
0 · 0 · 0 · 14
Sam Rodriques
Sam Rodriques@SGRodriques·
Extremely important point. Our API is now available for all of our agents and you can pay for higher rate limits, which was one of the most requested features for the FutureHouse Platform. Super excited to see what people build.
Andrew White 🐦‍⬛@andrewwhite01

We now have an API for:
– precedent search agent
– literature/clinical trials/patents search agent
– chemistry agent
– data analysis agent (that can find data)
You can generate an API key in the platform and add, for example, novelty detection

1 · 1 · 27 · 6.3K
wontfix
wontfix@DadMakingGames·
@SebAaltonen I highly recommend LLVM's toolchain if you use agents to write C code: in particular llvm-mca for perf checks, plus sanity-check pre-compiles (clang-check). I like to use those, and I set my CTest targets to build at least ASan and TSan variants to catch issues in tests.
0 · 0 · 1 · 168
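The checks in the reply above chain together as ordinary subprocess invocations an agent can run after each edit. A stdlib sketch of the command construction — file names are illustrative; the flags (`-fsyntax-only`, `clang -S -o -` piped into `llvm-mca`, `-fsanitize=...`) are standard Clang/LLVM usage:

```python
import shlex

def syntax_check(src):
    """Fast sanity-check pre-compile: parse and type-check only, no codegen."""
    return ["clang", "-fsyntax-only", src]

def mca_pipeline(src):
    """Perf check: emit assembly to stdout and pipe it into llvm-mca for a
    static throughput/latency estimate. No binary is produced."""
    return f"clang -O2 -S -o - {shlex.quote(src)} | llvm-mca"

def sanitizer_build(src, out, san="address"):
    """Instrumented test build; san is 'address' (ASan) or 'thread' (TSan)."""
    return ["clang", f"-fsanitize={san}", "-g", "-O1", src, "-o", out]
```

The list forms go straight to `subprocess.run`; the `mca_pipeline` string needs `shell=True` since it is a pipe.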
Sebastian Aaltonen
Sebastian Aaltonen@SebAaltonen·
AI-generated C is a real deal. C coders wrote fast & simple code: no high-frequency heap allocs, no abstractions slowing the compiler down. Lots of good C example code around. AI workflows need a language with fast iteration time. Why waste compile time and perf on modern languages?
68 · 16 · 609 · 70.3K
wontfix
wontfix@DadMakingGames·
@ClassicGamerTWR I don't mind the original author's frustration; I like your post, and I liked the explanation of why C uses the zero (JNZ) test and how C99 eventually added stdbool.h. It's fun, because I still find cases for JNZ-style branches when telling clang that branch prediction should assume normally true (or false).
0 · 0 · 1 · 15
wontfix
wontfix@DadMakingGames·
@tenderizzation I really appreciate what TensorFlow did to push away from writing one-off CUDA kernels everywhere; now, with mlir.llvm.org, I feel it'll be easier than ever to marry CPU and GPU across devices.
0 · 0 · 1 · 149
wontfix
wontfix@DadMakingGames·
@infraexplained I love C but find it challenging (especially debugging) when working with GPUs, even via CUDA C or OpenCL.
0 · 0 · 0 · 10
Infrastructure explained
Infrastructure explained@infraexplained·
C is THE programming language; everything else is just posers
3 · 1 · 18 · 3.1K
Vicharak
Vicharak@Vicharak_In·
Two decades ago, Arduino put a microcontroller in every student's hand, revolutionizing grassroots education. Today, as Arduino begins a new chapter in another domain, we're opening the next chapter in education: putting FPGAs into every student's hand.

We're delighted to introduce Shrike-Lite, the world's most affordable FPGA development board, priced at just ₹349 / $4. Shrike-Lite combines an MCU (RP2040) with an FPGA (ForgeFPGA – 1K LUT) on a single board, unlocking hands-on learning for thousands of students and makers.

Even with a 1K LUT FPGA, you can build:
– Custom UART / SPI / I²C cores
– LED and PWM drivers
– Simple robotics controllers
– Tiny accelerators and logic blocks
– Many more digital-logic projects

Since Shrike-Lite is a pet project at Vicharak, we're keeping everything open source — hardware, software, and toolchains — with complete software support from our team. We're opening pre-orders for the first 1,000 units in India, starting now; they will be delivered by 15th Nov, 2025.
Vicharak tweet media
179 · 510 · 3.7K · 290.7K
wontfix
wontfix@DadMakingGames·
Are you having success with C++ or C code generation from different LLMs in agentic workflows? I have found success switching to the LLVM toolchain and running things like LLVM MCA + clang-check + ASan builds checked with Catch2 tests. What am I missing?
0 · 0 · 0 · 98
wontfix
wontfix@DadMakingGames·
@sincethestudy Bracket Bot is so well done! Very nice work.
0 · 0 · 0 · 119
brian-machado-high-inference
brian-machado-high-inference@sincethestudy·
Research papers on SLAM all assume $300 cameras, or are impossible to replicate. Bracket Bot is doing SLAM research to bring perfect robotics SLAM to $10 sensors. DM me if you have experience writing SLAM; we are hiring.
16 · 12 · 231 · 17K
wontfix
wontfix@DadMakingGames·
@itsclivetime @giffmana With the improvements to LLVM MLIR, do you think we'll see a language better adapted for distributed programming, especially in large-scale machine learning applications where we mix logic across threads, CPU, GPU, and network? Similar to the work from zml.ai.
0 · 0 · 0 · 87
Clive Chan
Clive Chan@itsclivetime·
@giffmana Yeah, concurrency is fundamentally hard and there are definitely worse solutions than the GIL, but there are also far better ways (Go is wonderful). JavaScript, on the other hand, is godawful.
4 · 0 · 26 · 2.8K
Clive Chan
Clive Chan@itsclivetime·
No, Python is absolutely a bottleneck. One of my (least?) favorite debugs at my previous job was figuring out why a background PyTorch thread that deallocates memory-mapped tensors was blocking the main process. It was the GIL. github.com/pytorch/pytorc…
Lucas Beyer (bl16)@giffmana

Quite the contrary: We're using the language that was designed as a glue language for gluing pieces together that are written in the language(s) that were designed for peak performance. Everything working exactly as designed.

21 · 17 · 518 · 80.3K