Greg Mark
@Greg_Forged

96 posts
Conway, NH · Joined October 2015
122 Following · 144 Followers
Greg Mark @Greg_Forged ·
@zoink @untillabs Very cool! If they need hard-tech advice, lmk. I’m a huge fan of cryo.
Dylan Field @zoink ·
Whole body reversible cryogenics will allow humans to go to the stars. And as we work towards whole body, exploring this tech tree will save lives — enabling medical hibernation and organ transplants at scale. Thankful and excited to be a backer of @untillabs !
Until @untillabs

We’ve closed a $58M Series A, led by @foundersfund with @Lux_Capital and @Field_Ventures joining, to support development of an organ cryopreservation product. In the near term, this will help transplant patients and surgeons access the organs they need. This also serves as an important step in our long-term roadmap to medical hibernation.

Greg Mark @Greg_Forged ·
@zoink Congrats Dylan!! I hope you get a moment to slow down and soak it in.
Dylan Field @zoink ·
Good morning
Greg Mark @Greg_Forged ·
@DavidSHolz We won the planet lottery for sure. And it’s awesome.
David @DavidSHolz ·
we're hurtling through the void on top of a ball of lava. the lava has a very thin crust and a tiny bit of ice. the ice is being liquefied by a ball of plasma 94 million miles away as we spin around it at 15 miles a second. it's all so weird but you gotta feel lucky somehow
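A quick back-of-envelope check of those figures (my addition, assuming a circular orbit at the mean Earth-Sun distance) puts the orbital speed closer to 18.5 miles a second, so the tweet's "15" is the right order of magnitude:

```python
import math

# Rough check of the tweet's figures, assuming a circular orbit.
orbit_radius_miles = 93e6           # mean Earth-Sun distance (~94M at aphelion)
year_seconds = 365.25 * 24 * 3600   # one orbital period in seconds

speed = 2 * math.pi * orbit_radius_miles / year_seconds
print(f"{speed:.1f} miles/second")  # ~18.5; same ballpark as the tweet's 15
```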
Greg Mark @Greg_Forged ·
@zoink @figma It’s exciting for public investors to get access to such a transformative product / company. I’ll be buying shares :)
Dylan Field @zoink ·
Sharing an update on @figma: we publicly filed our S-1 with the SEC today, and have applied to list on the New York Stock Exchange under the symbol “FIG.” figma.com/blog/s1-public
Greg Mark @Greg_Forged ·
At MIT studying rocket engines, we learned from the Germans who invented them and the Russians who perfected them. For decades, US companies bought Russian RD-180 engines because they were the best. Now SpaceX makes the world’s best engines, and MIT students study American rocket technology. Thanks, @SpaceX. The Raptor is a work of art.
Dylan Field @zoink ·
SpaceX is (still) the most important company of the 21st century. This was true five years ago, one year ago, one day ago, and it remains true today 🚀
Mario Nawfal @MarioNawfal ·
🇺🇸 TESLA DOMINATES MARCH EV SALES

Tesla registered over 51,000 EVs in March, more than 6x its nearest competitor. Despite just a 1.1% monthly gain, it dwarfed Chevrolet (8,478) and Ford (7,361), the No. 2 and No. 3 brands. Chevrolet soared 274%, while Ford dipped -13%. Hyundai, BMW, and Rivian followed in the mid-5K range.

Among the breakout gainers: GMC (+185%), Subaru (+163%), Porsche (+137%), and Fiat (+208%). Polestar and Lucid posted strong gains too. Meanwhile, legacy brands like Jaguar, Maserati, and Ram trailed with fewer than 50 registrations each.

Source: @TeslaNewswire
[image attached]
Mario Nawfal @MarioNawfal

🚨 3 REASONS WHY TESLA WILL ALWAYS DOMINATE

Thinking Tesla’s just another electric car? Tesla isn’t about cars; it’s betting on the future. Tesla’s direct sales model, cutting-edge battery tech, and powerful brand loyalty aren’t just reshaping roads, they're redefining lifestyles.

Even amid stock volatility, Tesla’s resilience shines. Owners save serious cash, around $1,100 yearly on fuel if not more, and benefit from strong resale values. Investing in Tesla means joining a true renewable energy revolution that will carry us to new reaches as a civilization.

Bottom line: owning a Tesla isn’t just smart driving, it’s smart money.

Source: Nasdaq

Greg Mark @Greg_Forged ·
F1 Miami
[image attached]
Olivia Moore @omooretweets ·
Every podcast needs to be Ghibli going forward
Greg Mark retweeted
Balaji @balajis ·
AI OVERPRODUCTION

China seeks to commoditize their complements. So, over the following months, I expect a complete blitz of Chinese open-source AI models for everything from computer vision to robotics to image generation.

Why? I’m just inferring this from public statements, but their apparent goal is to take the profit out of AI software since they make money on AI-enabled hardware. Basically, they want to do to US tech (the last stronghold) what they already did to US manufacturing. Namely: copy it, optimize it, scale it, then wreck the Western original with low prices. I don’t know if they’ll succeed. But here’s the logic:

(1) First, China noticed that DeepSeek’s release temporarily knocked ~$1T off US tech market caps.

(2) Second, China’s core competency is exporting physical widgets, more than it is software.

(3) Third, China’s other core competency is exporting things at such massive scale that all foreign producers are bankrupted and they win the market. See what they’re doing to German and Japanese cars, for example.

(4) Fourth, China is well aware that it lacks global prestige, as it’s historically been a copycat. With DeepSeek, becoming #1 in AI is now something they actually consider achievable, and a matter of national pride.

(5) Fifth, DeepSeek has gone viral in China, and its open source nature means that everyone can rapidly integrate it, down to the level of local officials and obscure companies. And they are doing so, and posting the results for praise on WeChat.

(6) Finally, while DeepSeek was obscure before recent events, it’s now a household name, and the founder (Liang Wenfeng) has met with both Xi and China’s #2, Li Qiang. They likely have unlimited resources now.

So, if you put all that together, China thinks it has an opportunity to hit US tech companies, boost its prestige, help its internal economy, and take the margins out of AI software globally (at least at the model level). They will instead make their money by selling inexpensive AI-enabled hardware of increasing quality, from smart homes and self-driving cars to consumer drones and robot dogs.

Basically, China is trying to do to AI what they always do: study, copy, optimize, and then bankrupt everyone with low prices and enormous scale. I don’t know if they’ll succeed at the app layer. But it could be hard for closed-source AI model developers to recoup the high fixed costs associated with training state-of-the-art models when great open source models are available.

Last, I agree it’s surprising that the country of the Great Firewall is suddenly the country of open source AI. But it is consistent in a different way: China is just focused on doing whatever it takes to win, even to the point of copying partially-abandoned Western values like open source, which seemed like the hardest thing to adopt. On that point: they did build censorship into the released DeepSeek AI models, but in a manner that’s easily circumvented outside China. So you might conclude they don’t really care what non-Chinese people are saying outside China in other languages, so long as it doesn’t “interfere with China’s internal affairs.”

Anyway, this is an area I’ve been watching, and my reluctant conclusion is that China is getting better at software faster than the West is getting better at hardware.
Damien Ma @damienics

What's the best explanation you've heard for why China is leaning so hard into open source? It's now an official position from the foreign ministry apparently.

Greg Mark @Greg_Forged ·
@DavidSHolz This will be the most universally loved AI, and will bring people from different cultures together 🤍
David @DavidSHolz ·
can someone make a realtime voice-to-voice language translation ai yet? pls? what are we waiting for?
Palmer Luckey @PalmerLuckey ·
lmao, Caltrain's tweet claiming their trains are "100% Billionaire-free" got deleted after me and a bunch of other Caltrain-riding billionaires responded. Don't they know that techno-autists all love trains?
Hedra @hedra_labs ·
Hedra Studio and Character-3 are here: a new generation of AI-native video creation.

At its core is Character-3, the first omnimodal model in production, built to jointly reason across image, text, and audio for more intelligent video generation. This goes beyond multimodal; it’s the next evolution of AI-driven content creation.

With Hedra Studio, we’re making this technology easy to use. It brings together Character-3 and other leading AI tools in one platform, so creators can focus on ideas instead of complicated workflows.

With Hedra Studio, you get:
- Text-to-video and audio-to-video
- Character and general-purpose image and video creation
- Dynamic backgrounds, text-to-emotion, and top AI model integration

Character-3 sets a new standard for AI video. Hedra Studio makes it accessible. Both are available now.
Greg Mark retweeted
Greg Mark @Greg_Forged ·
@karpathy When Elon does something, he really does it. Thanks for the detailed review!
Andrej Karpathy @karpathy ·
I was given early access to Grok 3 earlier today, making me I think one of the first few who could run a quick vibe check.

Thinking

✅ First, Grok 3 clearly has an around-state-of-the-art thinking model ("Think" button) and did great out of the box on my Settlers of Catan question: "Create a board game webpage showing a hex grid, just like in the game Settlers of Catan. Each hex grid is numbered from 1..N, where N is the total number of hex tiles. Make it generic, so one can change the number of "rings" using a slider. For example in Catan the radius is 3 hexes. Single html page please." Few models get this right reliably. The top OpenAI thinking models (e.g. o1-pro, at $200/month) get it too, but all of DeepSeek-R1, Gemini 2.0 Flash Thinking, and Claude do not.

❌ It did not solve my "Emoji mystery" question, where I give a smiling face with an attached message hidden inside Unicode variation selectors, even when I give a strong hint on how to decode it in the form of Rust code. The most progress I've seen is from DeepSeek-R1, which once partially decoded the message.
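For anyone curious how a message hides inside an emoji, here is a minimal sketch of one plausible scheme; the byte-to-selector mapping is my assumption, since the review doesn't spell out the actual encoding:

```python
# One plausible scheme (an assumption, not the review's actual encoding):
# map each byte to one of the 256 Unicode variation selectors and append
# them after a base character. Variation selectors render invisibly, so
# the string still looks like a plain emoji. VS1-16 are U+FE00..U+FE0F;
# VS17-256 are U+E0100..U+E01EF.

def byte_to_vs(b: int) -> str:
    return chr(0xFE00 + b) if b < 16 else chr(0xE0100 + (b - 16))

def vs_to_byte(ch: str) -> int | None:
    cp = ord(ch)
    if 0xFE00 <= cp <= 0xFE0F:
        return cp - 0xFE00
    if 0xE0100 <= cp <= 0xE01EF:
        return (cp - 0xE0100) + 16
    return None

def encode(base: str, message: str) -> str:
    return base + "".join(byte_to_vs(b) for b in message.encode("utf-8"))

def decode(s: str) -> str:
    return bytes(b for b in map(vs_to_byte, s) if b is not None).decode("utf-8")

assert decode(encode("😊", "hello")) == "hello"
```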
❓ It solved a few tic-tac-toe boards I gave it with a pretty nice/clean chain of thought (many SOTA models often fail these!). So I upped the difficulty and asked it to generate 3 "tricky" tic-tac-toe boards, which it failed on (generating nonsense boards / text), but then so did o1-pro.

✅ I uploaded the GPT-2 paper. I asked a bunch of simple lookup questions, which all worked great. Then I asked it to estimate the number of training flops it took to train GPT-2, with no searching. This is tricky because the number of tokens is not spelled out, so it has to be partially estimated and partially calculated, stressing all of lookup, knowledge, and math. One example: 40GB of text ~= 40B characters ~= 40B bytes (assume ASCII) ~= 10B tokens (assume ~4 bytes/tok), at ~10 epochs ~= 100B token training run, at 1.5B params and with 2+4=6 flops/param/token, this is 100e9 x 1.5e9 x 6 ~= 1e21 FLOPs. Both Grok 3 and 4o fail this task, but Grok 3 with Thinking solves it great, while o1-pro (GPT thinking model) fails.
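That estimate is easy to replay; here is the same arithmetic as a few lines of Python, simply restating the numbers from the paragraph above:

```python
# Back-of-envelope GPT-2 training FLOPs, restating the estimate above.
dataset_bytes = 40e9              # "40GB of text" ~= 40B chars ~= 40B bytes (ASCII)
tokens = dataset_bytes / 4        # assume ~4 bytes per token -> ~10B tokens
training_tokens = tokens * 10     # assume ~10 epochs -> ~100B-token training run
params = 1.5e9                    # GPT-2 parameter count
flops_per_param_per_token = 6     # 2 (forward) + 4 (backward)

total_flops = training_tokens * params * flops_per_param_per_token
print(f"{total_flops:.0e} FLOPs")  # 9e+20, i.e. ~1e21
```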
I like that the model *will* attempt to solve the Riemann hypothesis when asked to, similar to DeepSeek-R1 but unlike many other models that give up instantly (o1-pro, Claude, Gemini 2.0 Flash Thinking) and simply say that it is a great unsolved problem. I had to stop it eventually because I felt a bit bad for it, but it showed courage and who knows, maybe one day...

The impression overall I got here is that this is somewhere around o1-pro capability, and ahead of DeepSeek-R1, though of course we need actual, real evaluations to look at.

DeepSearch

Very neat offering that seems to combine something along the lines of what OpenAI / Perplexity call "Deep Research", together with thinking. Except instead of "Deep Research" it is "Deep Search" (sigh). It can produce high quality responses to various researchy / lookupy questions you could imagine have answers in an article on the internet. A few I tried, which I stole from my recent search history on Perplexity, along with how it went:

- ✅ "What's up with the upcoming Apple Launch? Any rumors?"
- ✅ "Why is Palantir stock surging recently?"
- ✅ "White Lotus 3 where was it filmed and is it the same team as Seasons 1 and 2?"
- ✅ "What toothpaste does Bryan Johnson use?"
- ❌ "Singles Inferno Season 4 cast where are they now?"
- ❌ "What speech to text program has Simon Willison mentioned he's using?"

❌ I did find some sharp edges here. E.g. the model doesn't seem to like to reference X as a source by default, though you can explicitly ask it to. A few times I caught it hallucinating URLs that don't exist. A few times it said factual things that I think are incorrect and it didn't provide a citation (it probably doesn't exist). E.g. it told me that "Kim Jeong-su is still dating Kim Min-seol" of Singles Inferno Season 4, which surely is totally off, right? And when I asked it to create a report on the major LLM labs and their amount of total funding and estimate of employee count, it listed 12 major labs but not itself (xAI).

The impression I get of DeepSearch is that it's approximately around Perplexity's DeepResearch offering (which is great!), but not yet at the level of OpenAI's recently released "Deep Research", which still feels more thorough and reliable (though still nowhere near perfect, e.g. it, too, quite incorrectly excludes xAI as a "major LLM lab" when I tried with it...).

Random LLM "gotcha"s

I tried a few more fun / random LLM gotcha queries I like to try now and then. Gotchas are queries that are specifically on the easy side for humans but on the hard side for LLMs, so I was curious which of them Grok 3 makes progress on.

✅ Grok 3 knows there are 3 "r"s in "strawberry", but then it also told me there are only 3 "L"s in LOLLAPALOOZA. Turning on Thinking solves this.

✅ Grok 3 told me 9.11 > 9.9 (common with other LLMs too), but again, turning on Thinking solves it.

✅ A few simple puzzles worked ok even without thinking, e.g. *"Sally (a girl) has 3 brothers. Each brother has 2 sisters. How many sisters does Sally have?"*. E.g. GPT-4o says 2 (incorrectly).

❌ Sadly the model's sense of humor does not appear to be obviously improved. This is a common LLM issue with humor capability and general mode collapse; famously, e.g., 90% of 1,008 outputs asking ChatGPT for a joke were repetitions of the same 25 jokes. Even when prompted in more detail away from simple pun territory (e.g. give me a standup), I'm not sure that it is state of the art humor. Example generated joke: "*Why did the chicken join a band? Because it had the drumsticks and wanted to be a cluck-star!*". In quick testing, thinking did not help; possibly it made it a bit worse.

❌ The model still appears to be just a bit too overly sensitive to "complex ethical issues", e.g. it generated a 1-page essay basically refusing to answer whether it might be ethically justifiable to misgender someone if it meant saving 1 million people from dying.

❌ Simon Willison's "*Generate an SVG of a pelican riding a bicycle*". It stresses the LLM's ability to lay out many elements on a 2D grid, which is very difficult because LLMs can't "see" like people do, so they're arranging things in the dark, in text. Marking it as a fail because these pelicans are quite good, but still a bit broken (see image and comparisons). Claude's are best, but I suspect they specifically targeted SVG capability during training.

Summary

As far as a quick vibe check over ~2 hours this morning goes, Grok 3 + Thinking feels somewhere around the state-of-the-art territory of OpenAI's strongest models (o1-pro, $200/month), and slightly better than DeepSeek-R1 and Gemini 2.0 Flash Thinking. Which is quite incredible considering that the team started from scratch ~1 year ago; this timescale to state-of-the-art territory is unprecedented. Do also keep in mind the caveats: the models are stochastic and may give slightly different answers each time, and it is very early, so we'll have to wait for a lot more evaluations over the next few days/weeks. The early LM Arena results look quite encouraging indeed. For now, big congrats to the xAI team; they clearly have huge velocity and momentum and I am excited to add Grok 3 to my "LLM council" and hear what it thinks going forward.
[image attached]