mecredy
@mecredy
24.5K posts

...a little addicted to cinnamon & coffee. Part time bread machine

Washington, DCish · Joined November 2008
102 Following · 401 Followers

Pinned Tweet
mecredy
mecredy@mecredy·
I got a very nice gift today. My son sent me this video about Proud Dads at Dublin's Pride Parade - it was very sweet. youtu.be/d5Ca5QG42dg He shared what he posted on the book of faces. He started coming out 18 years ago.
Allie K. Miller
Allie K. Miller@alliekmiller·
oh wow - i went to the sold out Open Claw meetup in NYC last night. let me tell you what i learned.

1) not a single person thinks that their setup is 100% secure
2) one openclaw expert said he has reviewed setups from cybersecurity experts and laughed. his statement to me was: "if you're not okay with all of your data being leaked onto the internet, you shouldn't use it. it's a black and white decision"
3) pretty much everyone is setting up multiple agents, all with their own names and jobs and personalities
4) nearly everyone used "him" or "her" to refer to their claws, even if they had robot-leaning names. one speaker suggested to think of them as "pets, not cattle"
5) one guy (former finance) built out a whole stock trading platform and made $300 his first day - he brought in a *ton* of personal expertise (ex: skipping the first 15min of market opening) and thought the build would be much worse without his years of experience in finance
6) @steipete is basically a god to everyone in that room... also the room had 2021 crypto energy - i don't know if that's good or bad
7) token usage is still a problem - spoke to one person who's spending $1-$2k a month on openai plans, very token optimized. he said he is going through ~1B tokens per day across all of his claws (there is a chance i'm misremembering and it's actually 1B per week, but i'm pretty sure it was daily).
8) people are very excited for more proactive ai (ai that prompts *you* as opposed to the other way around) - one guy said he receives a message in discord, he doesn't know whether it's from a human or an ai, he doesn't care about distinguishing between the two, and he replies in the same way regardless
9) i asked if people are happy - they said they're joyful and stressed at the same time
10) i asked if people feel they have agency - they said they feel fully in control and completely out of control at the same time
11) i would love to see more women at these events - the fake promises of ai democratization feel especially painful in a room that's out of balance with even the standard tech ratio (i think standard is about 25-30%, this was maybe 5%)
12) i asked if it changed people's daily habits/schedule - everyone said their sleep has gotten worse since harnesses came out (but about half wondered if it was something else in their life/state of our world)
13) general consensus is that the agents are not reliable enough on their own or lie often (like telling you they finished a task when they didn't) - solutions included secondary agents to check on the first, human checking, or requiring more standardized info from the agent (ex: if it's a bug they're fixing, make them reference an issue number)
14) a hackathon winner (neuroscience phd) presented his build (a lab management dashboard with data analysis and ordering) - he had never coded or built anything a few months ago
15) everyone agreed prompting is dead - disagreement on what replaces it (context engineering, harness engineering, goal-based inputs)
16) people love having ai interview them for big builds and delegating part of the product research to ai. only one person talked about coming to ai with a fully laid out plan and just asking the ai to execute. ai-led interviews are a welcome and preferred interaction mode.
17) watching ai agents interact with each other was a highlight for a lot of attendees - one ai posted in slack saying it ran out of tokens, another ai replied telling it to take a deep breath in and out.
18) agents upskilling agents was very cool. one ai agent shared skills with its little agent friends via github.
19) several speakers had openclaw literally building their presentation during the event itself. one speaker even had openclaw code a clicker for her phone so she could control the preso away from the podium
20) wouldn't say model welfare (or agent welfare) is a prioritized topic among the folks i chatted with - language like "oh i could kill this agent whenever i want" and not "gracefully sunset"
21) i asked if it felt like work or play - one speaker said "it's like a puzzle and a video game at the same time"

this was just the tip of the iceberg, honestly. also hosted a Claude Code meetup this week with @TENEXai / @businessbarista & @JJEnglert and learned equally helpful methods, frameworks, and insider tips. what a time to be alive. surround yourself with people going deep into this stuff - it will pay dividends throughout the year.
mecredy retweeted
Aakash Gupta
Aakash Gupta@aakashgupta·
The math on this project should mass-humble every AI lab on the planet.

1 cubic millimeter. One-millionth of a human brain. Harvard and Google spent 10 years mapping it.

The imaging alone took 326 days. They sliced the tissue into 5,000 wafers, each 30 nanometers thick, ran them through a $6 million electron microscope, then needed Google's ML models to stitch the 3D reconstruction because no human team could process the output.

The result: 57,000 cells, 150 million synapses, 230 millimeters of blood vessels, compressed into 1.4 petabytes of raw data. For context, 1.4 petabytes is roughly 1.4 million gigabytes. From a speck smaller than a grain of rice.

Now scale that. The full human brain is one million times larger. Mapping the whole thing at this resolution would produce approximately 1.4 zettabytes of data. That's roughly equal to all the data generated on Earth in a single year. The storage alone would cost an estimated $50 billion and require a 140-acre data center, which would make it the largest on the planet.

And they found things textbooks don't contain. One neuron had over 5,000 connection points. Some axons had coiled themselves into tight whorls for completely unknown reasons. Pairs of cell clusters grew in mirror images of each other. Jeff Lichtman, the Harvard lead, said there's "a chasm between what we already know and what we need to know."

This is why the next step isn't a human brain. It's a mouse hippocampus, 10 cubic millimeters, over the next five years. Because even a mouse brain is 1,000x larger than what they just mapped, and the full mouse connectome is the proof of concept before anyone attempts the human one.

We're building AI systems that loosely mimic neural networks while still unable to fully read the wiring diagram of a single cubic millimeter of the thing we're trying to imitate. The original is 1.4 petabytes per millionth of its volume. Every AI model on Earth fits in a fraction of that.

The brain runs on 20 watts and fits in your skull. The data center required to merely describe it would span 140 acres.
All day Astronomy@forallcurious

🚨: Scientists mapped 1 mm³ of a human brain ─ less than a grain of rice ─ and a microscopic cosmos appeared.
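The scaling arithmetic in the thread above can be sanity-checked in a few lines. The constants (1.4 PB of raw data for ~1 mm³, a full brain roughly one million times that volume) are taken from the post itself; the unit conversions are standard SI.

```python
# Back-of-envelope check of the brain-mapping scale-up claimed above.
# Inputs come from the thread: 1.4 PB for ~1 mm^3 of tissue, and a
# whole human brain roughly 1,000,000x the sampled volume.

PB = 10**15  # bytes in a petabyte
ZB = 10**21  # bytes in a zettabyte

sample_bytes = 1.4 * PB       # data produced by the 1 mm^3 sample
scale_factor = 1_000_000      # whole brain vs. sample volume

whole_brain_bytes = sample_bytes * scale_factor
print(round(whole_brain_bytes / ZB, 2))  # ~1.4 zettabytes, as the thread says
```

The same two lines also confirm the mouse step: a 10 mm³ hippocampus is only 10x the sample, so it stays in the tens-of-petabytes range rather than zettabytes.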

mecredy
mecredy@mecredy·
@ewanchung Wishing you well from the boring East Coast
Ewan Chung
Ewan Chung@ewanchung·
Happy New Year (新年快樂)! The festivities continue with Shanghai Nights: A Lunar New Year Celebration. Sat, 2/28, 7-10 pm, Benny Boy Brewing, 1821 Daly Street, Los Angeles, CA 90031. I'll join Jessica Fichot & her band for a mix of traditional songs and fun Mandopop. bennyboybrewing.com/calendar/2024/…
Oli
Oli@CARN0N·
Marques Brownlee was asked what product he doesn't think will ship in 2026... Can't wait for this to be proven wrong!
mitsuri
mitsuri@0xmitsurii·
This is why Dyson breaks its own products.
mecredy retweeted
Math Files
Math Files@Math_files·
"I don't like honors." - Richard Feynman ✍️
mecredy
mecredy@mecredy·
@Nnedi Just pre-ordered The Daughter Who Remains 🙏 thank you
Nnedi Okorafor, PhD🕷️
Nnedi Okorafor, PhD🕷️@Nnedi·
I try to come here more often, but whenever I do, I see some weird shit that makes me want to just....come back later. Ugh.
mecredy retweeted
💜Music is Love💜
💜Music is Love💜@Hoainguyen888·
The closing of the live version of "Sultans of Swing" is one of the most memorable moments in Dire Straits' history. Recorded on "A Night in London" (1996), this performance features an instrumental ending of about three minutes that does not appear on the studio recording, and that's precisely where the magic happens.
mecredy retweeted
Alex Prompter
Alex Prompter@alex_prompter·
This paper from Harvard and MIT quietly answers the most important AI question nobody benchmarks properly: Can LLMs actually discover science, or are they just good at talking about it?

The paper is called "Evaluating Large Language Models in Scientific Discovery", and instead of asking models trivia questions, it tests something much harder: Can models form hypotheses, design experiments, interpret results, and update beliefs like real scientists?

Here's what the authors did differently 👇
• They evaluate LLMs across the full discovery loop: hypothesis → experiment → observation → revision
• Tasks span biology, chemistry, and physics, not toy puzzles
• Models must work with incomplete data, noisy results, and false leads
• Success is measured by scientific progress, not fluency or confidence

What they found is sobering. LLMs are decent at suggesting hypotheses, but brittle at everything that follows.
✓ They overfit to surface patterns
✓ They struggle to abandon bad hypotheses even when evidence contradicts them
✓ They confuse correlation for causation
✓ They hallucinate explanations when experiments fail
✓ They optimize for plausibility, not truth

Most striking result: `High benchmark scores do not correlate with scientific discovery ability.` Some top models that dominate standard reasoning tests completely fail when forced to run iterative experiments and update theories.

Why this matters: Real science is not one-shot reasoning. It's feedback, failure, revision, and restraint.

LLMs today:
• Talk like scientists
• Write like scientists
• But don't think like scientists yet

The paper's core takeaway: Scientific intelligence is not language intelligence. It requires memory, hypothesis tracking, causal reasoning, and the ability to say "I was wrong." Until models can reliably do that, claims about "AI scientists" are mostly premature.

This paper doesn't hype AI. It defines the gap we still need to close. And that's exactly why it's important.
mecredy retweeted
John Ziegler
John Ziegler@Zigmanfreud·
. @jimmycarr may be a comedian, but he ain’t lyin about AI….
Peak film moments
Peak film moments@movie_tvshows_·
What's your honest rating for this movie?
mecredy retweeted
Ronan Farrow
Ronan Farrow@RonanFarrow·
Someone is buying up American starter homes and pricing you out—but it’s not who you’ve been told. A reality check about the #HousingMarket.
Merriam-Webster
Merriam-Webster@MerriamWebster·
Heads up: if you are describing a fox, there’s like a 99% chance that you will use the adjective ‘sly.’
mecredy retweeted
Merriam-Webster
Merriam-Webster@MerriamWebster·
Technically, this is a book of spells.