Nicholas Bardy

5.3K posts

@NicholasBardy

Ocean Motion Artist. Formerly Research Scientist @Canva, @Adobe, @wandb

San Francisco, California · Joined November 2011
664 Following · 954 Followers
Pinned Tweet
Nicholas Bardy
Nicholas Bardy@NicholasBardy·
🎨🧵 AI Assisted Wave Studies

I found my creative process in blending my love of Waves and Generative Art. To celebrate I'm going to spend 52 consecutive weeks creating Art.

1 Wave Photoshoot a week
1 AI Generated Video
52 New things I'm proud of in 1 year.
3
2
38
0
Nicholas Bardy
Nicholas Bardy@NicholasBardy·
@okaris What data is opted in? For what usage? The HTTP requests?
0
0
0
2
ok
ok@okaris·
vercel just updated their terms. your data is now opted in for AI training by default unless you’re on the Pro plan.
1
0
1
109
François Chollet
François Chollet@fchollet·
For those saying "I couldn't do it either!" -- I *do* think that a software engineer should be able to look at a new programming language and start solving familiar problems in the language while consulting the docs. Even if it takes more effort and more iterations. You shouldn't need millions of hours of experience in a new language before being able to work with it.
15
8
243
12.9K
François Chollet
François Chollet@fchollet·
This is more evidence that current frontier models remain completely reliant on content-level memorization, as opposed to higher-level generalizable knowledge (such as metalearning knowledge, problem-solving strategies...)
Lossfunk@lossfunk

🚨 Shocking: Frontier LLMs score 85-95% on standard coding benchmarks. We gave them equivalent problems in languages they couldn't have memorized. They collapsed to 0-11%. Presenting EsoLang-Bench. Accepted to the Logical Reasoning and ICBINB workshops at ICLR 2026 🧵

134
266
2.5K
206.2K
Nicholas Bardy
Nicholas Bardy@NicholasBardy·
"too many instructions" is why I agree with the "llm psychosis" crowd. People finally realize how powerful LLMs are, adapt their workflows to work better with them, and accelerate even harder. And then... they attribute it all to some mega prompt they stuffed a bunch of stuff into, but it's really the workflow.

So sure, it's cool that Garry documented a try, critique, review, plan loop that works well for him. But I can almost guarantee nothing in gstack is that transcendent.
0
0
0
146
dex
dex@dexhorthy·
Tried plan-review-ceo from gstack yesterday. I'm not sure if this is good or bad, intentional or not, but when I felt like pushing back on the agent*, something in my brain feels like I'm arguing with Garry directly 🤣

Anyways, milestone 1 of a big feature shipping with RPI/QRSPI + Gstack shipping today, will report back

* (which @garrytan had stated is part of the process - "your job is to know when the model is gassing you up and call it out" or something)

I have some technical concerns with the sheer volume of instructions in the prompt and the amount of adherence you will actually get (@0xblacklight cited an interesting arxiv paper in the post linked below). I think we might be better served by a router that routes to specific modes, rather than explaining every single mode in a single monolithic prompt, but there are tradeoffs to consider in plumbing and UX for the end user.

I think some may complain that it's overly verbose and thoughtful and brings up things that are irrelevant, but I actually think that's good. I want a clean braindump of everything that might be relevant so I can edit and prune down to just what's important.
5
1
36
3.8K
Nicholas Bardy
Nicholas Bardy@NicholasBardy·
@zephyr_z9 There is information from SK Hynix that they diversified helium imports a while ago, during a previous crisis.
0
0
0
4
Nicholas Bardy
Nicholas Bardy@NicholasBardy·
@aleabitoreddit It's pretty wild. I wish I had rotated harder and earlier. I am mostly consolidated in Korean memory and missed the earlier leg up.
3
0
0
46
Serenity
Serenity@aleabitoreddit·
$TSEM and the Photonics Supercycle… Doesn't look like they're bothered by Iran or Silver and Gold crashing? Almost every name from $AAOI, $COHR, $SIVE, to $LITE is green. This is what happens when capacity is sold out for the next few years or it's in the center of scaling the AI buildout. Better to stay long rather than get distracted by macro.
63
38
660
68.3K
Nicholas Bardy
Nicholas Bardy@NicholasBardy·
If you think of it as test-time compute you can get value out of it. It can just output so many tokens. If you look at some of the Codex Mini curves you can see points where the smaller models actually eclipse the bigger models on lower settings, and those curves can be pushed very far out, especially in swarms:

xhigh -> plan 5 new directions
spark -> code 5 attempts on the same
xhigh -> review
0
0
1
10
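The swarm pattern above (a high-effort model fans out plans, a cheap fast model makes many attempts per plan, the high-effort model reviews) can be sketched as plain orchestration code. This is a minimal sketch: the `plan`, `attempt`, and `review` helpers are hypothetical stubs standing in for model calls, not any real API.

```python
# Hypothetical sketch of the xhigh -> spark -> xhigh fan-out loop.
# Each helper stands in for a model call (stubbed with strings here).

def plan(task, n=5):
    # High-effort model proposes n distinct directions for the task.
    return [f"{task}: direction {i}" for i in range(1, n + 1)]

def attempt(direction, n=5):
    # Cheap fast model makes n coding attempts at the same direction.
    return [f"attempt {i} for {direction}" for i in range(1, n + 1)]

def review(candidates):
    # High-effort model reviews all candidates and picks the best
    # (stubbed: just takes the first).
    return candidates[0]

def swarm(task):
    # 5 directions x 5 attempts = 25 candidates, then one review pass.
    candidates = [a for d in plan(task) for a in attempt(d)]
    return review(candidates)
```

The point of the shape is that the expensive model runs twice (plan, review) while the cheap model absorbs the 25-way fan-out in the middle.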
Jeffrey Emanuel
Jeffrey Emanuel@doodlestein·
@NicholasBardy @tenobrus I’m sure that workflow makes sense for many people but I’m just not interested in it. You never know in advance exactly when you’re going to need the extra intelligence. Maybe there’s a subtle problem, a bug in a library, etc. I don’t want to take chances.
2
0
1
56
Nicholas Bardy
Nicholas Bardy@NicholasBardy·
@menhguin @dounbug Not even close to how it is now, though. Information didn't spread so fast; it's a bigger weapon now. History used to be written by the victor, now it's written by whoever gets the most upvotes.
0
0
0
0
dounbug
dounbug@dounbug·
no second date, but at least he now understands how modern conflict is no longer confined to kinetic methods, encompassing cognitive, cyber, social, political, economic, and narrative dimensions where the most advanced military isn’t guaranteed to prevail
17
6
159
5.6K
Nicholas Bardy
Nicholas Bardy@NicholasBardy·
@michaelboegl Man, the color is amazing on these. How much of that is good post-processing, or a nice lens + camera?
0
0
0
1
Michael Boegl
Michael Boegl@michaelboegl·
California Dreamin‘
16
171
1.7K
34.2K
Nicholas Bardy
Nicholas Bardy@NicholasBardy·
@doodlestein @tenobrus You can get a lot out of GPT 5.4 Spark though with the right plans. Some work is just dumb execution, especially if you have big Codex spec out the math and algos.
1
0
2
53
Jeffrey Emanuel
Jeffrey Emanuel@doodlestein·
@tenobrus Exactly. All the alpha is on the margin. I have no use even for models like GPT 5.4 Spark which are way better than any open-weight models, because the increased speed and lower cost are far outweighed in my use case by the fact that it’s dumber at coding.
3
0
23
891
Nicholas Bardy
Nicholas Bardy@NicholasBardy·
@8teAPi Sort of, it's time to first token. So we're comparing AR streaming to whole-video diffusion.
0
0
1
2
Prakash
Prakash@8teAPi·
Real time HD quality video generation on Nvidia Vera Rubin… compressing what took minutes 3 months ago to 100ms. Holodeck this year..
Runway@runwayml

A breakthrough in real-time video generation. As a research preview developed with @NVIDIA and shared at @NVIDIAGTC this week, we trained a new real-time video model running on Vera Rubin. HD videos generate instantly, with time-to-first-frame under 100ms. Unlocking an entirely new creative paradigm and bolstering the foundations of our General World Model, GWM-1. Real-time generation opens a fundamentally different design space for video models and world simulation. We're investing in co-designing our models alongside advances in hardware to keep pushing this frontier.

1
5
42
6K
AprilNEA
AprilNEA@AprilNEA·
🧵 I just reverse-engineered the binaries inside Claude Code's Firecracker MicroVM and found something wild: Anthropic is building their own PaaS platform called "Antspace" (Ants + Space). It's a full deployment pipeline — hidden in plain sight inside the environment-runner binary. Here's what I found 👇
62
186
1.5K
217K
Nicholas Bardy
Nicholas Bardy@NicholasBardy·
@Citrini7 @PurpleDrink_LLC There are a lot of hedges to unwind at some point. But we're about to invade Kharg Island and have boots on the ground... War's gonna escalate before Trump calls a quick end to it when the stock market scares him enough. We barely wicked down.
0
0
0
4
PurpleDrinkCapital
PurpleDrinkCapital@PurpleDrink_LLC·
Citrini polled people and they’re 2/3 bearish So nobody worry Market gonna be fine
4
0
154
26.7K
Sigil Wen
Sigil Wen@0xSigil·
To be exceptional, you need to make exceptions
26
3
95
5.9K
Nicholas Bardy
Nicholas Bardy@NicholasBardy·
@Yuf_Zh Great work, get it on those Cerebras chips now. Current Spark is just a BIT too incoherent.
0
0
0
4
Nicholas Bardy
Nicholas Bardy@NicholasBardy·
@sporadica Nah, just live like a monk for 5-10 years until it starts to compound, and you are buying those $55k cars off the interest you saved. No car is as nice as compound interest. I had a shit car for years, but now I fly planes wherever I want.
0
0
0
2
Nicholas Bardy
Nicholas Bardy@NicholasBardy·
@cfryant I don't understand, why are you so much happier with nano banana? Can you expand a bit? Is it better alignment to your text prompts? Or better edits when you chain commands?
1
0
1
21
Christopher Fryant
Christopher Fryant@cfryant·
Just wanted to clarify my take here: I am judging Midjourney v8 on what it is and what it has improved - not how it stacks up against the competition. As far as professional use goes, there is no comparison. Nano Banana Pro absolutely destroys it for utility. Aesthetics alone can't fix the fact that it takes 10x longer to get a really specific result. I would love to see them make a comeback, but v8 alpha ain't it.
Christopher Fryant@cfryant

Midjourney v8 alpha is out now. Here are 20 examples using their new native HD, no SREFS, text only. My thoughts @ the end, plus 3 exclusive prompts for subscribers from these 3 below.

19
4
74
6.7K
Martin Shkreli
Martin Shkreli@MartinShkreli·
going to make an optical computer for AI $QCLS faster matmuls less energy QED
54
28
351
72.3K
Nicholas Bardy
Nicholas Bardy@NicholasBardy·
@aleabitoreddit Photonics going nuts across the board at open. Looks like SPY pulling back is not gonna hit memory or photonics. We might even see a recession and still have the data center buildout booming.
1
0
0
11
Serenity
Serenity@aleabitoreddit·
This thesis post aged well with $LITE. I feel like every photonics pick I make just keeps doubling in short time periods.
Serenity@aleabitoreddit

The $LITE thesis: the hidden monopoly in AI. Lumentum is up 316% YTD, but might be 1000%+ by 2027.

Micron ($300B) or TSM ($1.5T) sit in the center of every TPU/GPU deployed. But so does $LITE, and it's a $26B MC. In Every, Single, TPU from Google, $LITE makes unbelievable amounts of profit for their market cap. That's because it's the standard for Optical Circuit Switching (OCS) + optical networking. It's also in
- $NVDA Blackwell
- $AMZN Trainium
- and other hyperscaler ASICs

Lumentum sits in the holy trinity of every single chip deployment for photonics. And for every TPU capex dollar spent, $LITE takes 8-12%. For every Nvidia GPU, $LITE takes ~2-3% (split between Innolight and some others, so the math gets a bit complex).

But some napkin math on NVDA GPU deployments alone for BOM, NVIDIA Blackwell (GB200):
HBM memory: ~50–55% (SK Hynix (lead), Micron, Samsung)
Logic (GPU die): ~25-30% ($TSM 4NP)
CoWoS packaging: ~13-18% ($TSM)
Optics/network: ~3–5% (Innolight, Lumentum, Coherent)
PCB/power: ~5%

For Google "Ironwood" TPU v7:
HBM memory: 38-42% (Samsung / SK Hynix)
Logic die: ~28-33% (TSM)
Design/I/O: 8-10% (MediaTek)
Optical network: 10-14% ($LITE primary, $COHR secondary)
Optical switch: 2-4% ($LITE)
$LITE est. total cluster share: ~8–12%

Just an FYI, Google's "optical" BOM share (8–12%) is an anomaly due to their unique Optical Circuit Switch (OCS) monopoly.

Just for some napkin math: $40B Google TPU spend by 2027. $LITE captures 10% (30-40% margins), $1.5B+ FCF from Google alone, 17x earnings from just their primary customer (analysts are probably extremely off with projecting TPU spend scaling). Not even including their split from $AMZN Trainium, $NVDA Blackwell, $MSFT Maia, and other chip deployments.

$LITE is in the center of every single TPU/GPU future chip deployment for now and takes a cut. The only downside is they're the clear market leader now, but $AVGO and $COHR are likely set up to compete by 2027-2028. However...
People say "$26B, ATH, why are you buying now". This is the reason. They're involved in every single future TPU/GPU/ASIC deployed. $LITE could end up easily over $60B+ if Google TPUs and other chip spend ramp up, and LITE takes a 2-3% (from $NVDA, $AMZN, $MSFT) or 8-12% (from $GOOGL) cut of every single dollar spent.

40
40
505
105K
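The napkin math in the quoted thread can be checked in a few lines. Every input here is the thread's own assumption ($40B Google TPU spend, a 10% $LITE cut, 30-40% margins, $26B market cap), not a verified figure:

```python
# Sanity-check of the quoted thread's napkin math.
# All inputs are the thread's assumptions, not verified data.
google_tpu_spend = 40e9        # assumed 2027 Google TPU capex
lite_share = 0.10              # assumed $LITE cut of that spend
margin_low, margin_high = 0.30, 0.40
market_cap = 26e9              # $LITE market cap cited in the thread

revenue = google_tpu_spend * lite_share
fcf_low, fcf_high = revenue * margin_low, revenue * margin_high
print(f"revenue: ${revenue / 1e9:.1f}B")                             # $4.0B
print(f"FCF range: ${fcf_low / 1e9:.1f}B-${fcf_high / 1e9:.1f}B")    # $1.2B-$1.6B
print(f"multiple at $1.5B FCF: {market_cap / 1.5e9:.1f}x")           # 17.3x
```

So the thread's "$1.5B+ FCF" only holds at the top of its own margin range, while the "17x earnings" figure does follow from a $26B cap over $1.5B.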