danger laboratories

110 posts

danger laboratories banner
danger laboratories

@dangerlab

advanced capability research @_dangertech

Sparks, NV · Joined April 2025
6 Following · 144 Followers
Pinned Tweet
danger laboratories @dangerlab
I don't know if this was ever released, but 10M is actually doable now. If you're interested, I'm sure it would help with code generation and many other things. @spawn @cline @replit @cursor_ai @windsurf_ai @boltdotnew @scoutdotnew lmk, my dms are open
Magic @magicailabs

LTM-2-Mini is our first model with a 100 million token context window. That’s 10 million lines of code, or 750 novels. Full blog: magic.dev/blog/100m-toke… Evals, efficiency, and more ↓

1 · 0 · 4 · 4.8K
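Magic's equivalences above can be sanity-checked with back-of-envelope numbers. The ~10 tokens per line of code, ~100k words per novel, and ~4/3 tokens per word used below are my own assumptions, not figures from the announcement:

```python
context = 100_000_000  # tokens, per the LTM-2-Mini announcement

tokens_per_line = 10           # rough average for code (assumption)
lines_of_code = context // tokens_per_line
print(f"{lines_of_code:,} lines of code")   # 10,000,000 lines of code

words_per_novel = 100_000      # typical novel length (assumption)
tokens_per_word = 4 / 3        # common rule of thumb for English text
novels = context / (words_per_novel * tokens_per_word)
print(f"~{novels:.0f} novels")              # ~750 novels
```

Both headline numbers fall out directly, so the claim is internally consistent under ordinary tokenization assumptions.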
napling🌱 @napmaxxing
@Reiteller @snwy_me evidently we are acquainted with different cybersec crowds:) (+to discredit one source using the "more reliable source" - reddit. lol, lmao even)
3 · 0 · 29 · 2.2K
snwy @snwy_me
i tried this once after getting into an argument with my parents about whether your phone listens or not (this was in 2020, so it may be different now). two iphone 6s's, both wiped and connected to two different wifi networks w/ independent VPNs. said keywords to one and different ones to the other, never in earshot of each other, with multiple apps (insta, facebook, twitter, etc.) open with their own accounts and basic usage. and the ads were never what i said - they were basically always location-based/pretty basic things. my running theory is that whoever thinks their phone is listening is just predictable enough that it knows, not because it heard them
Karim Jedda @KarimJDDA

Introducing Gaslight Garage: a box where I put my phones and feed them AI-generated audio nonsense to make them think I want to buy stuff. Practical AI for the people. 👌 I'll report back if my ads change in the coming weeks.

229 · 183 · 5.9K · 788.9K
danger laboratories retweeted
Saad @sodakeyEatsMush
[blog] So I was exploring some very influential vision-language models, and while making notes along the way, it kind of turned into a mega blog. In this blog, I’ve covered the novelties and interesting aspects of models like Flamingo, BLIP, BLIP-2, and LLaVA. (There’s even a mini-blog inside this one about Perceiver by Google DeepMind.) Some of the common ideas I noticed across these papers were:
- The use of cross-attention to make visual and language information interact.
- The idea of using a mapping network to project from one embedding space into the LLM’s embedding space.
I’ll drop the link in the comments - do check it out, and I really hope you all will like it!!
Saad tweet media
6 · 43 · 336 · 20.3K
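The second idea in the tweet above, a mapping network that projects vision features into the LLM's embedding space, can be sketched in a few lines. This is a toy illustration, not code from any of the named models: the dimensions are made up, and the learned projection (an MLP in LLaVA, a Q-Former in BLIP-2) is reduced to a single random matrix.

```python
import numpy as np

# Hypothetical dimensions: the vision encoder emits 512-d patch features,
# while the LLM expects 768-d token embeddings.
rng = np.random.default_rng(0)
d_vision, d_llm, n_patches = 512, 768, 16

patch_features = rng.standard_normal((n_patches, d_vision))
# The "mapping network": in practice a trained module, here a random matrix.
W = rng.standard_normal((d_vision, d_llm)) / np.sqrt(d_vision)

visual_tokens = patch_features @ W              # (16, 768): now in the LLM's space
text_tokens = rng.standard_normal((4, d_llm))   # stand-in for embedded prompt text

# The projected visual tokens are prepended to the text tokens, so a frozen
# LLM attends over them like ordinary context.
llm_input = np.concatenate([visual_tokens, text_tokens], axis=0)
print(llm_input.shape)  # (20, 768)
```

The point of the pattern is that only the small mapping network needs training; both the vision encoder and the LLM can stay frozen.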
danger laboratories がリツイート
NVIDIA AI Developer @NVIDIAAIDev
.@vllm_project has quickly become a go-to open source engine for efficient large language model inference, balancing performance with a strong developer experience. At NVIDIA, direct contributions to projects like vLLM reflect a commitment to advancing open source AI infrastructure for everyone. In this Q&A, Benjamin Chislett, Senior Systems Software Engineer at NVIDIA and Committer for vLLM, shares his perspective on shaping the project’s future, his work on speculative decoding, and why open source collaboration matters for AI at scale. 🔗 nvda.ws/4o0m3ou
NVIDIA AI Developer tweet media
4 · 15 · 117 · 8.6K
Laura @lawaashley
Truth hurts
1 · 0 · 1 · 164
danger laboratories @dangerlab
xAI just added 2M, so I'm putting the call out again in case anyone is still interested in scaling in these pivotal moments
0 · 0 · 0 · 326
❄️ @__Tkat__
@pk_iv @Atlassian I am willing to settle for a small payment of 400m
5 · 0 · 422 · 30.8K
dany @efectual
ok when the waymo says good to see you Daniel my heart melts
2 · 0 · 8 · 324
danger laboratories retweeted
Aleksa Gordić (水平问题) @gordic_aleksa
New in-depth blog post - "Inside vLLM: Anatomy of a High-Throughput LLM Inference System". Probably the most in-depth explanation of how LLM inference engines, and vLLM in particular, work! Took me a while to get this level of understanding of the codebase and then to write this one up - I quickly realized I underestimated the effort. 😅 It could have easily been a book/booklet (lol). I covered:
* Basics of inference engine flow (input/output request processing, scheduling, paged attention, continuous batching)
* "Advanced" stuff: chunked prefill, prefix caching, guided decoding (grammar-constrained FSM), speculative decoding, disaggregated P/D
* Scaling up: going from smaller LMs that can be hosted on a single GPU all the way to trillion+ params (via TP/PP/SP) -> multi-GPU, multi-node setups
* Serving the model on the web: going from offline deployment to multiple API servers, load balancing, DP coordinator, multiple-engine setups :)
* Measuring perf of inference systems (latency (ttft, itl, e2e, tpot), throughput) and the GPU roofline model
Lots of examples, lots of visuals!
---
I realize I've been silent on social - many of you noticed, and thanks for reaching out! :) --> I'm so back! Lots of things happened. Also, in general, I'm a bit sick of superficial content; it really is the equivalent of junk food (h/t @karpathy). I want to do the best/deepest technical work of my life over the next years and write much more in depth (high-quality organic food ;)), so I might not be as frequent around here as I used to be (? we'll see). I'll make it a goal to share a few paper summaries a week, or stuff that's relevant / in the zeitgeist. If there are topics from the past few weeks/months you'd like covered, drop them in the comments - I might focus on some of those in my next posts.
---
Huge thank you to @Hyperstackcloud for giving me an H100 node to run some of the experiments and analysis I needed to write this up. The team there, led by Christopher Starkey, is amazing!
Also a big thank you to Nick Hill (who did a very thorough review of the post - basically a code review lol; Nick's a core vLLM contributor and principal SWE at Red Hat) and to my friends Kyle Krannen (NVIDIA Dynamo), @marksaroufim (PyTorch), and @ashVaswani (goat) for taking the time during the weekend when they didn't have to!
Aleksa Gordić (水平问题) tweet media
63 · 401 · 2.6K · 323.5K
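Paged attention, the first bullet in the post above, comes down to simple bookkeeping: the KV cache is carved into fixed-size physical blocks, and each sequence keeps a block table mapping its logical positions to physical blocks, so sequences of different lengths share one pool without fragmentation. A toy sketch of that bookkeeping, not vLLM's actual code; the class names and the block size of 4 are made up (vLLM defaults to 16 tokens per block):

```python
BLOCK_SIZE = 4  # tokens per KV-cache block (small to keep the toy readable)

class BlockAllocator:
    """Hands out fixed-size physical blocks from a shared pool."""
    def __init__(self, num_blocks):
        self.free = list(range(num_blocks))
    def alloc(self):
        return self.free.pop()

class Sequence:
    """Tracks one request's logical-to-physical block table."""
    def __init__(self, allocator):
        self.allocator = allocator
        self.block_table = []  # block_table[i] = physical block for logical block i
        self.num_tokens = 0
    def append_token(self):
        if self.num_tokens % BLOCK_SIZE == 0:  # current block full: grab a new one
            self.block_table.append(self.allocator.alloc())
        self.num_tokens += 1

pool = BlockAllocator(num_blocks=8)
a, b = Sequence(pool), Sequence(pool)
for _ in range(6):
    a.append_token()  # 6 tokens -> ceil(6/4) = 2 blocks
for _ in range(3):
    b.append_token()  # 3 tokens -> 1 block
print(len(a.block_table), len(b.block_table), len(pool.free))  # 2 1 5
```

Because allocation happens one block at a time as tokens arrive, memory is only ever over-allocated by at most one partially filled block per sequence, which is what makes continuous batching of many requests practical.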
danger laboratories retweeted
Earth Liberation Studio @EarthStvdio
Just poking my head in here to say
Earth Liberation Studio tweet media
223 · 1.8K · 11K · 601.2K
Perplexity @perplexity_ai
Comet is here. A web browser built for today’s internet.
496 · 635 · 6.6K · 1.7M
danger laboratories retweeted
KC @amphichrome_
> I log onto twitter
> sex choking controversy
> grok becomes MechaHitler
> hot blonde chick is apparently ugly
I am logging off twitter.
179 · 2.1K · 102.8K · 3.9M