Periwinkle, PhD

1.6K posts

@PeriwinkleID

CEO @mimirsystems $BTC since '11 🧡 Go Pack 💚💛

Bangkok, Thailand · Joined March 2009
688 Following · 657 Followers
Periwinkle, PhD@PeriwinkleID·
Similarly, spending days looking for the right data points in literature review is inefficient. @MimirSystems genuinely solves this. It doesn't replace thinking, but it does replace labor. That's conducive to education, not counterproductive.
Periwinkle, PhD@PeriwinkleID·
(the request was to use less silly names for my git branches) (I will not be complying)
Periwinkle, PhD@PeriwinkleID·
Every successful tech company needs a strong internal meme culture which is why as CEO I make sure to use images like this when declining reasonable requests
[attached image]
Periwinkle, PhD@PeriwinkleID·
This is why we combine LLMs with deterministic retrieval. If @MimirSystems cites a source, that source definitively exists. No hallucinated citations. Or you get "I don't have enough information to provide a good answer."

AI for scientists, by scientists. Public beta this month.
Nav Toor@heynavtoor

🚨BREAKING: OpenAI published a paper proving that ChatGPT will always make things up. Not sometimes. Not until the next update. Always. They proved it with math.

Even with perfect training data and unlimited computing power, AI models will still confidently tell you things that are completely false. This isn't a bug they're working on. It's baked into how these systems work at a fundamental level.

And their own numbers are brutal. OpenAI's o1 reasoning model hallucinates 16% of the time. Their newer o3 model? 33%. Their newest o4-mini? 48%. Nearly half of what their most recent model tells you could be fabricated. The "smarter" models are actually getting worse at telling the truth.

Here's why it can't be fixed. Language models work by predicting the next word based on probability. When they hit something uncertain, they don't pause. They don't flag it. They guess. And they guess with complete confidence, because that's exactly what they were trained to do.

The researchers looked at the 10 biggest AI benchmarks used to measure how good these models are. 9 out of 10 give the same score for saying "I don't know" as for giving a completely wrong answer: zero points. The entire testing system literally punishes honesty and rewards guessing. So the AI learned the optimal strategy: always guess. Never admit uncertainty. Sound confident even when you're making it up.

OpenAI's proposed fix? Have ChatGPT say "I don't know" when it's unsure. Their own math shows this would mean roughly 30% of your questions get no answer. Imagine asking ChatGPT something three times out of ten and getting "I'm not confident enough to respond." Users would leave overnight. So the fix exists, but it would kill the product.

This isn't just OpenAI's problem. DeepMind and Tsinghua University independently reached the same conclusion. Three of the world's top AI labs, working separately, all agree: this is permanent.

Every time ChatGPT gives you an answer, ask yourself: is this real, or is it just a confident guess?
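A minimal sketch of the scoring argument in the quoted post: under 1/0 benchmark grading where "I don't know" earns the same zero as a wrong answer, guessing always maximizes expected score. The confidence figure below is a hypothetical placeholder, not a number from the post.

```python
# Why binary benchmark grading rewards guessing over abstaining (hypothetical
# numbers): "I don't know" scores the same zero as a wrong answer, so any
# nonzero chance of being right makes guessing the better strategy.

def expected_score(p_correct: float, abstain: bool) -> float:
    """Expected per-question score under binary grading: 1 if correct, else 0."""
    return 0.0 if abstain else p_correct

p = 0.30  # hypothetical: the model is only 30% confident on a hard question
print(expected_score(p, abstain=True))   # 0.0 -> honesty earns nothing
print(expected_score(p, abstain=False))  # 0.3 -> guessing is strictly better
```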

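A rough sketch of the retrieval-plus-verification pattern described in the tweet above, assuming a pipeline where answers may only cite documents returned by deterministic retrieval and the system abstains otherwise. Mimir's actual implementation isn't public; every name and helper here is invented for illustration.

```python
# Hypothetical sketch of "LLM + deterministic retrieval" with a citation check
# and abstention; not Mimir's actual code.
import re
from dataclasses import dataclass

ABSTAIN = "I don't have enough information to provide a good answer"

@dataclass
class Document:
    doc_id: str
    text: str

def retrieve(query: str, corpus: list[Document]) -> list[Document]:
    """Deterministic keyword match standing in for a real search index."""
    terms = query.lower().split()
    return [d for d in corpus if any(t in d.text.lower() for t in terms)]

def generate_answer(query: str, evidence: list[Document]) -> str:
    """Placeholder for the LLM call; here it just quotes the top document."""
    top = evidence[0]
    return f"{top.text} [{top.doc_id}]"

def extract_citations(draft: str) -> set[str]:
    """Pull [doc_id] markers out of a drafted answer."""
    return set(re.findall(r"\[([^\]]+)\]", draft))

def answer_with_citations(query: str, corpus: list[Document]) -> str:
    evidence = retrieve(query, corpus)
    if not evidence:
        return ABSTAIN  # no grounding documents -> abstain, don't guess
    draft = generate_answer(query, evidence)
    if not extract_citations(draft) <= {d.doc_id for d in evidence}:
        return ABSTAIN  # every citation must point at a retrieved document
    return draft
```

In this pattern the citation check runs outside the model, so a fabricated source can never reach the user; the trade-off is the one discussed above, since some queries come back with the abstention message instead of an answer.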
Periwinkle, PhD@PeriwinkleID·
Anyone else find Opus 4.6 to be basically unusable outside of Claude Code because it consumes its context window too fast to finish a chat or Cowork task?
Periwinkle, PhD@PeriwinkleID·
Did the Gmail filters get noticeably worse in the last ~6 hours for anyone else?
Periwinkle, PhD@PeriwinkleID·
@guilleflorvs Background research is a critical piece of the scientific process that currently consumes hundreds of hours. It's not a skill issue, and it doesn't have to be that way; it's a tooling problem. Mimir's proprietary AI takes lit review from weeks to minutes.
Guillermo Flor@guilleflorvs·
We are looking to invest in the next Lovable, Spotify and Klarna. Reply to this with what you are building and your unique insight.

Example: Spotify's founder's unique insight was that piracy wasn’t a “people won’t pay” problem, it was a UX problem, and if you made music instant, searchable, and cheaper than illegal downloads, people would switch.
Periwinkle, PhD retweeted
Chris Albon@chrisalbon·
Alright listen, San Francisco. Everyone can't sell shovels. Someone has to mine something.
Periwinkle, PhD@PeriwinkleID·
The scientific method is just:
1. What do you want to find out
2. See other people's previous fuckings around
3. State how you intend to fuck around and what you think you'll find out
4. Fuck around
5. Find out (and share results)
Periwinkle, PhD@PeriwinkleID·
@christophersaum Advanced materials research is a $500B+ industry, but experts are hamstrung by the need to do manual lit review. We save researchers hundreds of hours per product cycle with proprietary AI.

Founders: PhD computer science, PhD materials science
Business model: AutoCAD
Chris Saum@christophersaum·
All I want for Christmas is to invest in one more cracked technical founder solving a real business problem they’re utterly convinced is big and pervasive. If that’s you, I want to meet you. We write $500k+ checks, first money in.
Periwinkle, PhD@PeriwinkleID·
Scientists love doing science. They want AI tools that work with them to do science faster, not AI that replaces them. That's why @MimirSystems is building tools to fix the most annoying part of the process: lit review. So that scientists can focus on science.
Julian Togelius@togelius

I was at an event on AI for science yesterday, a panel discussion here at NeurIPS. The panelists discussed how they plan to replace humans at all levels in the scientific process. So I stood up and protested that what they are doing is evil.

Look around you, I said. The room is filled with researchers of various kinds, most of them young. They are here because they love research and want to contribute to advancing human knowledge. If you take the human out of the loop, meaning that humans no longer have any role in scientific research, you're depriving them of the activity they love and a key source of meaning in their lives. And we all want to do something meaningful. Why, I asked, do you want to take the opportunity to contribute to science away from us?

My question changed the course of the panel, and set the tone for the rest of the discussion. Afterwards, a number of attendees came up to me, either to thank me for putting what they felt into words, or to ask if I really meant what I said. So I thought I would return to the question here.

One of the panelists asked whether I would really prefer the joy of doing science to finding a cure for cancer and enabling immortality. I answered that we will eventually cure cancer and at some point probably be able to choose immortality. Science is already making great progress with humans at the helm. We'll get fusion power and space travel some day as well. Maybe cutting humans out of the loop could speed up this process, but I don't think it would be worth it. I think it is of crucial importance that we humans are in charge of our own progress. Expanding humanity's collective knowledge is, I think, the most meaningful thing we can do. If humans could not usefully contribute to science anymore, this would be a disaster. So, no. I do not think it worth it to find a cure for cancer faster if that means we can never do science again.

Many of those who came up to talk to me last night, those who asked me whether I was being serious or just trolling, thought that the premise was absurd. Of course there would always be room for humans in science. There will always be tasks only humans can do, insight only humans have, and so on. Therefore, we should welcome AI. Research is hard, and we need all the help we can get.

I responded that I hoped they were right. That is, I truly hope there will always be parts of the research process which humans will be essential for. But what I was arguing against was not what we might call "weak science automation", where humans stay in the loop in important roles, but "strong science automation", where humans are redundant.

Others thought it was immature to argue about this, because full science automation is not on the horizon. Again, I hope they are right. But I see no harm in discussing it now. And I certainly don't think we need research on science automation to go any further.

Yet others remarked that this was a pointless argument. Science automation is coming whether we want it or not, and we'd better get used to it. The train is coming, and we can get on it or stand in its way. I think that is a remarkably cowardly argument. It is up to us as a society to decide how we use the technology we develop. It's not a train, it's a truck, and we'd better grab the steering wheel.

One of the panelists made a chess analogy, arguing that lots of people play chess even though computers are now much better than humans at chess. So we might engage in science as a kind of hobby, even though the real science is done by computers. We would be playing around far from the frontier, perhaps filling in the blanks that AI systems don't care about. That was, to put it mildly, not a satisfying answer. While I love games, I certainly do not consider game-playing as meaningful as advancing human knowledge. Thanks, but no thanks.

Overall, though, it was striking that most of those I talked to thanked me for raising the point, as I articulated worries that they already had. One of them remarked that if you work on automating science and are not even a little bit worried about the end goal, you are a psychopath. I would add that another possibility is that you don't really believe in what you are doing.

Some might ask why I make this argument about science and not, for example, about visual art, music, or game design. That's because yesterday's event was about AI for science. But I think the same argument applies to all domains of human creative and intellectual expression. Making human intellectual or creative work redundant is something we should avoid when we can, and we should absolutely avoid it if there are no equally meaningful new roles for humans to transition into.

You could further argue that working on cutting humans out of meaningful creative work such as scientific research is incredibly egoistic. You get the intellectual satisfaction of inventing new AI methods, but the next generation don't get a chance to contribute. Why do you want to rob your children (academic and biological) of the chance to engage in the most meaningful activity in the world?

So what do I believe in, given that I am an AI researcher who actively works on the kind of AI methods used for automating science? I believe that AI tools that help us be more productive and creative are great, but that AI tools that replace us are bad. I love science, and I am afraid of a future where we are pushed back into the dark ages because we can no longer contribute to science. Human agency, including in creative processes, is vital and must be safeguarded at almost any cost. I don't exactly know how to steer AI development and AI usage so that we get new tools but are not replaced. But I know that it is of paramount importance.

Periwinkle, PhD@PeriwinkleID·
@hthieblot Help material scientists save hundreds of hours on background research
Hubert Thieblot@hthieblot·
Explain your product in one sentence. Be clear about what it does. No buzzwords. If you can do that, I’ll consider investing. Hit me.
Periwinkle, PhD@PeriwinkleID·
@SurbhiTodi We're building AI-powered tooling for materials research, a $500B industry. Scientists currently spend hundreds of hours per product lifecycle on lit review; Mimir can cut this by 99%, allowing them to test more hypotheses and innovate faster.
Surabhi Todi@SurbhiTodi·
Who are the best early stage founders out there? I'm writing pre-seed and seed checks. Willing to be the first one in, will make sure you have a fantastic next round (I work with all of the top investors in the valley). I write 3 checks a quarter so you know I'm focused on making sure you crush it.