Connecting The Dots (...To Disruptions)

31.3K posts


@ConnectingODots

Great minds think alike, but (blessing or curse) I think different👽Engineer/Disruptions Analyst⚡CoOwner of Tesla🚀🔴https://t.co/YkjHxwoDmn

Surfing the 🌊🌊of Disruptions · Joined July 2020
453 Following · 19.4K Followers
Pinned Tweet
Connecting The Dots (...To Disruptions)
TESLA INSIDER: Elon Musk Deleted ALL Defaults at Tesla & SpaceX 🚀

New video out: youtu.be/9ePWIYadju4

We all know @elonmusk's first-principles thinking and his famous 5-step algorithm, but the real secret runs deeper. Elon doesn't just think different. He questions everything, even the "defaults" everyone else accepts without thinking.

After reading and thoroughly enjoying @alsahuquillo's outstanding book The Musk Way, Alejandro kindly shared his private treasure trove: unpublished interviews with former @Tesla, @SpaceX, @PayPal, @X, and @neuralink employees, Elon biographers, and industry experts.

The insights are mind-blowing. Nothing is sacred. Every process, assumption, and default gets challenged → that's how breakthroughs happen.

Interviews featured here: @jimmyasoni ("The Founders"), @robert_zubrin ("The Case for Mars"), @teardowntitan, @Autoline John McElroy, Sanjay Bhargava (PayPal, SpaceX), Brian Mosdell (SpaceX), @KevinDewald, Han Zhang (Neuralink), @RomainHedouin, @JoeJustice (Tesla)

I only scratched the surface, so if you want a follow-up episode diving into more gems and insights from these interviews and many others, let me know below!
YouTube video
0 replies · 10 reposts · 96 likes · 7K views
Connecting The Dots (...To Disruptions)
Focus is important, but so is removing future roadblocks and threats. Nobody at Tesla is working on a lunar factory right now - this is just an idea - but working on a private fab, like the one they've now announced (and which I predicted 4 years ago 😎), is crucial. And it's not like they have two people in the factory and can't do more than one thing at once. Starting a fab project won't delay autonomous driving one bit.
0 replies · 0 reposts · 0 likes · 23 views
Test La 🇺🇦
Test La 🇺🇦@KrestTest·
@ConnectingODots @elonmusk Can we just get a few thousand Robotaxis working first? I'm all for Moon stuff, but how about actually completing one mission at a time :p
2 replies · 0 reposts · 0 likes · 34 views
Connecting The Dots (...To Disruptions)
Call me nitpicky, @elonmusk but looks like an “R” is missing in Tesla's TERRAFAB 👀 (Gotta call it that if we're prepping for LUNAFAB on the Moon someday) 🌍+🌕=🚀🚀🚀
3 replies · 0 reposts · 22 likes · 3.4K views
Jeffrey Emanuel
Jeffrey Emanuel@doodlestein·
People are constantly asking me about my planning and execution methodology for creating software using my Agent Flywheel system of tooling, prompts, and workflows. As a result, I find myself posting the same link, often multiple times in a day, to a post of mine that includes links to 5 other X posts and threads I've made about my methodology. While this "works," in that a motivated person can read through each post and understand my approach pretty well, I realize that it's far from optimal, and a lot of people see that and just give up quickly.

So I finally decided to gather together all my materials on my method and turn them into two different articles with different target audiences. Perhaps unsurprisingly, I was able to extensively leverage my own tools to do this effectively. For one, I was able to use my xf tool (for searching your personal X post archive that you can download from X) to pull in all the various posts and my replies to people in those threads into a single large markdown document. Then, I had agents use my cass tool to search for my real-world usage of my various tools and to gain insights into my planning process from firsthand observation. I also had a lot of materials in the tutorials section of the Agent Flywheel website, as well as in various agent skills I've created.

All of this was woven together and synthesized into a single comprehensive document, The Flywheel Approach to Planning and Bead Creation: agent-flywheel.com/complete-guide

This is the new canonical and complete guide to my approach, with everything in one place and synthesized into a coherent whole so that you don't need to scrounge around for all the different posts. I will also be updating the article as my methodology evolves and in response to reader feedback on what is confusing or unclear (so please let me know in the comments).

Incidentally, as I got to the final stages of preparing this document, I found this prompt to be extremely useful: "Read the entire document again with fresh eyes all the way through, putting yourself in the position of a smart software developer who is new to agentic coding and doesn't know how to use the Flywheel or agent swarms effectively yet and who doesn't understand the planning process or beads, etc. What would be most confusing? How could we make it more engaging and intuitive without removing any content and without simplifying anything (think additively)?"

Beyond that big comprehensive guide, as the Flywheel system has grown to 20+ tools now, I've heard repeatedly from people that they find the entire system too overwhelming, because there are so many tools to understand. But the truth is, there is a "core" to the Flywheel approach which captures most of the value and just uses 3 tools:

* My Agent Mail project for coordination and communication of multiple agents of various types;
* beads_rust (br) for task management; and
* beads_viewer (bv) for automatically triaging the beads graph so that agents always work on the optimal next bead to maximize overall development velocity.

So to that end, I created a separate, shorter, more-focused article for beginners to the system, the Flywheel Core Loop Guide: agent-flywheel.com/core-flywheel

If you've previously been interested in the Flywheel but found it to be too hard to understand or had "information overload" (which is totally understandable... this stuff emerged organically over months of working on this stuff, so I'm sure it's a lot to take in all at once like that), I highly recommend checking it out.

Once you get the hang of it, you can then layer in additional utilities, starting with destructive_command_guard (dcg) to prevent agents from blowing up your projects or machine; coding_agent_session_search (cass) to search instantly across all your agent sessions, and give this power to your agents themselves; and ultimate_bug_scanner (ubs) for finding bugs and problems across most popular programming languages in a single tool that is heavily optimized for use by agents.
Jeffrey Emanuel tweet media
29 replies · 26 reposts · 250 likes · 17.8K views
Connecting The Dots (...To Disruptions)
The case for us living in a simulation is getting stronger by the day. We're already at the stage where it's hard to tell whether what we see online is real, and we're very close to saying the same about real-time video interactions. "Real life" (offline) encounters are still different, but who knows - maybe they're just another I/O device used at different times.
0 replies · 0 reposts · 3 likes · 344 views
Connecting The Dots (...To Disruptions)
@grok - 5 questions for you:
Question one: based on the methodology in the above post alone, what can and cannot be legitimately concluded? Where does the design fall short of the claim?
Question two: what is the methodology quietly assuming it never defends? What would have to be true for this design to produce valid results?
Question three: if this post were replicated in a different context with a different audience, what changes? How far do these conclusions actually travel?
Question four: what argument is this post entering? Who is it responding to and what would those people say back?
Question five: what is the most important paper or post that should have been added here, and what does its absence reveal?
1 reply · 0 reposts · 0 likes · 337 views
Muhammad Ayan
Muhammad Ayan@socialwithaayan·
I accidentally found out how to master a research paper in 40 minutes.

A first-year student at UCL showed me his Claude setup. I thought he was just skimming. Then I watched him dismantle a 60-page methodology section his professor had spent 10 years writing. Here's exactly what he did:

First: he didn't ask Claude to summarise anything. That's what everyone does. Paste the paper in. Ask for a summary. Get a clean paragraph back. Feel like you've read it. Move on.

That's not reading a paper. That's reading Claude's description of a paper. Those are not the same thing. And the difference between them is the difference between a student who can describe research and a researcher who can evaluate it.

He read the methodology himself first. All 60 pages. No Claude. Then he came back and asked one question: "Based purely on what this methodology describes, what can be legitimately concluded from this study and what cannot? Don't tell me what the authors claim. Tell me what the design actually allows them to say."

Most students read a methodology to understand what the researchers did. He read it to find the gap between what they did and what they claimed. That gap is where weak science lives. That gap is what peer reviewers spend entire careers learning to find. He found it in his first year because he asked the one question that makes it visible.

But the next part is what broke my brain. He asked: "What is this methodology quietly assuming that it never explicitly defends? What would have to be true about the world for this research design to produce valid results?"

His professor had spent 10 years building that methodology. Claude found two undefended assumptions in four minutes. Not because the professor was careless. Because when you live inside a methodology for a decade you stop seeing the beliefs buried underneath it. They become invisible. They feel like facts. A first-year student with the right question found them before Christmas of his first term.

Then he tested how far the conclusions actually travelled. He asked: "If this exact study was run again with a different population in a different country with a different research team, what would most likely change about the results? What does that tell me about whether these findings are general or specific to this exact context?"

Most published findings are presented as universal truths. Most are situational observations dressed up as universal truths. That question finds the line between the two every single time. Once you start asking it you cannot stop. Every paper you read after that you read differently. You stop asking what it found and start asking where it found it and whether it would find the same thing anywhere else.

Then he mapped where the paper sat in the field. He asked: "What argument is this paper entering and who is it arguing against? What would the researchers this paper challenges say in response? Where does this sit in the conversation that was already happening before it was written?"

Every paper is a move in an argument that started before it was written and continued after it was published. Most students never find out what argument. They read the paper and miss the entire reason it exists. He found out in five minutes. A paper you understand alone is a collection of findings. A paper you understand inside its argument is a position. Those are not the same thing. And the difference shows up every single time you open your mouth in a seminar.

Then he went after the bibliography. He asked: "What is the single most important paper missing from this bibliography that every serious researcher in this field would consider essential? What does its absence tell me about the blind spots in this author's thinking?"

He found a foundational study the professor had never cited. Not an obscure one. A widely known one. One that directly challenged the central claim of the paper he was reading. He brought it to the next seminar.
His professor stopped mid-sentence. Asked him where he had found the connection. He said he asked Claude what was missing from the bibliography and followed the answer. The room went quiet for a moment. His professor told him afterward that learning to ask what a paper isn't citing is something most researchers don't develop until years into their careers. He had been at UCL for eleven weeks.

The final question is the one I keep thinking about. Before closing every paper he asked: "What is this paper's single most vulnerable claim? Not the weakest evidence. The claim that if successfully challenged would unravel the most of what this paper is trying to argue."

He wrote it down every time. He walked into every seminar after that with the most vulnerable claim of every paper sitting on a notepad in front of him. His professor started calling on him first every week. Not because he had read more than his classmates. Because he had read differently than his classmates.

There is a version of reading that produces students who can describe what a paper says. And there is a version that produces researchers who understand what a paper does, what it assumes, where it sits, and exactly where it breaks. Universities teach the first version for three years and hope the second one develops on its own somewhere along the way.

Here is the actual workflow. Five questions. Every paper. In order.

Question one: based on the methodology alone, what can and cannot be legitimately concluded? Where does the design fall short of the claim?
Question two: what is the methodology quietly assuming it never defends? What would have to be true for this design to produce valid results?
Question three: if this study was replicated in a different context with a different population, what changes? How far do these conclusions actually travel?
Question four: what argument is this paper entering? Who is it responding to and what would those people say back?
Question five: what is the most important paper missing from the bibliography and what does its absence reveal?

Five questions. Forty minutes. Any paper. Claude didn't read the paper for him. It gave him the questions that experienced academics ask automatically after years inside a field. He just didn't wait years to learn them. The paper didn't change. The questions did.

Most students spend three or four years at university learning to describe research. He spent one afternoon learning to think about it. That is not a faster way to read a paper. It is a completely different thing to do with one. The researchers who figure that out early become the ones everyone else spends their careers citing. He figured it out at nineteen.
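The five-question workflow above can be sketched as a small helper that turns a paper's text into five prompts, to be sent to a model one at a time and in order. This is an illustrative assumption, not the student's actual setup: the question wording is taken from the post, while the function name `build_review_prompts` and the paper-text framing are hypothetical.

```python
# Sketch of the five-question reading workflow. The questions are quoted
# from the post; batching them into per-question prompts is an assumption.
REVIEW_QUESTIONS = [
    "Based on the methodology alone, what can and cannot be legitimately "
    "concluded? Where does the design fall short of the claim?",
    "What is the methodology quietly assuming that it never defends? What "
    "would have to be true for this design to produce valid results?",
    "If this study were replicated in a different context with a different "
    "population, what would change? How far do these conclusions travel?",
    "What argument is this paper entering? Who is it responding to, and "
    "what would those people say back?",
    "What is the most important paper missing from the bibliography, and "
    "what does its absence reveal?",
]

def build_review_prompts(paper_text: str) -> list:
    """Return one prompt per question, each pairing a question with the
    full paper text, to be asked of the model in order."""
    return [f"{q}\n\n--- PAPER TEXT ---\n{paper_text}" for q in REVIEW_QUESTIONS]
```

Asking the questions one at a time (rather than all at once) mirrors the post's "in order" advice and keeps each answer focused on a single failure mode of the paper.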
Muhammad Ayan tweet media
10 replies · 27 reposts · 115 likes · 13.3K views
Connecting The Dots (...To Disruptions)
This came out of @Grok Imagine. It's not perfect (yet), but unfathomably good. Like, how is this even possible right now? Grateful to be living in these days of wonder and awe, when Engineering makes magic real✨
4 replies · 0 reposts · 15 likes · 3K views
Connecting The Dots (...To Disruptions)
The 1st extension (20-30s, kissing) was smooth and easy. I tried a few times, but any one of them would do; they were all good. But the final extension (20-30s, slitting throat) took several trials, and even this final version isn't very good - lots of inconsistencies. But Grok is still in its infancy. It won't be long before full-length videos are easily possible.
0 replies · 0 reposts · 2 likes · 529 views
Connecting The Dots (...To Disruptions)
@grok PS the amazing part isn't that this is possible. Studios have been making this for years. It's that I was able to create this without hiring anyone or studying anything, for free, in a matter of minutes
1 reply · 0 reposts · 4 likes · 579 views
JoeJustice 💪🦾
JoeJustice 💪🦾@JoeJustice·
@elonmusk I uploaded a manga page from Demon Slayer (Kimetsu no Yaiba), asking Grok to teach me each kanji on the page. It hallucinated different kanji.
2 replies · 0 reposts · 0 likes · 208 views
Connecting The Dots (...To Disruptions)
Pro tip: FAILURE IS AN OPTION
To fix lying and doubling down on the lies, in important threads add something like: "Failure is an option. Be honest if you're unsure or can't do something."
This dramatically reduces hallucinations born from trying to provide an answer when no good one is found.
Where to add it:
• Claude: in Custom Instructions or Projects
• ChatGPT: in Custom Instructions
• Gemini: in custom instructions in the settings
• Grok: at the beginning of the chat
JoeJustice 💪🦾@JoeJustice

@elonmusk It’s not just that Grok is regularly incorrect (humans are too); it’s that Grok is confidently wrong and, when corrected, half the time rejects the correction or relapses into the same incorrect position. Humans often at least signal that they are unsure.
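For API use, the "failure is an option" tip above amounts to prepending the instruction as a system message so it governs the whole conversation. A minimal sketch, assuming the common system/user chat-message schema; the function name `with_honesty` is illustrative, and the resulting list would be passed to whatever chat client you use.

```python
# The instruction text is quoted from the tip above; placing it in a
# "system" message (an assumption about your client's message schema)
# makes it apply to every turn rather than a single prompt.
HONESTY_INSTRUCTION = (
    "Failure is an option. Be honest if you're unsure or can't do something."
)

def with_honesty(user_prompt: str) -> list:
    """Build a message list with the honesty instruction prepended
    as a system message, followed by the user's actual prompt."""
    return [
        {"role": "system", "content": HONESTY_INSTRUCTION},
        {"role": "user", "content": user_prompt},
    ]
```

In chat UIs without a system-prompt field (e.g. a fresh Grok thread), pasting the same sentence as the first user message is the closest equivalent.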

0 replies · 0 reposts · 9 likes · 1.4K views
Bill W
Bill W@BillW_Old_EE·
Out with the old ‘22 MYLR and in with the new ‘26 MY Premium RWD. Just picked up in Louisville.
Bill W tweet media (two images)
4 replies · 1 repost · 11 likes · 117 views
Connecting The Dots (...To Disruptions)
2/ The recursive loop doesn’t stop there. Scroll the replies: endless head-nodding, fist-shaking sub-threads cascading down to single-digit-IQ takes. A classic birds-of-a-feather echo chamber confirming its own bias - while the companies they rage against keep building the future. Oh, and speaking of dilution, @RealJimChanos - how much did YOU dilute YOUR investors' returns over the years before shutting down Kynikos at the end of 2023 and returning the remaining capital? No alpha left to report since then. Keep the reminders coming though! 🤓 __ @stevenmarkryan @ICannot_Enough @elonmusk
Connecting The Dots (...To Disruptions) tweet media
1 reply · 1 repost · 14 likes · 705 views
Tom Johnson
Tom Johnson@tomjohndesign·
Dithering is dead. Electro-static stippling is king.
62 replies · 200 reposts · 4.7K likes · 283.4K views