Rafael

4.3K posts

Rafael

@EffortDefines

Building LaunchApp, the natural language engineering platform for building from 0 to 1,000.

United States · Joined October 2022
1.8K Following · 445 Followers
Rafael reposted
Frankie™️🦅
Frankie™️🦅@B7frankH·
Akira Nakai 🇯🇵 is the founder of RWB (Rauh-Welt Begriff). He is a legendary Japanese tuner famous for hand-building radical, wide-body Porsche 911s. He travels the world to cut fenders and install kits by eye, giving each car a unique soul and name. His modifications cost upwards of $40k, but the value of the Porsche doubles immediately afterwards. He travels with his dog and his daughter, the only other person allowed to touch a car and help him during the modification work. He is overbooked until 2029. He is the only body maker and tuner legally recognized by Porsche. What makes him unique is that he uses only simple hand tools, and none of his project Porsches resembles another. Further Reading 👇 Akira Nakai - Japan's Most Precious Porsche Tuner │Yokogao Magazine share.google/o2ZkYPi8VBpqO0…
93
773
5.2K
536.9K
Rafael reposted
MECHANICAL MAGNIFICUS
MECHANICAL MAGNIFICUS@CALVINGINEERING·
These are EPMs, the electromagnet’s better relative that nobody’s ever heard of. They require ridiculously little power to operate and can stay on or off EVEN WITHOUT POWER. Here are 3 simultaneously powered by only 2 camera batteries, capable of lifting up to 80lb/36kg… each.
46
296
3.3K
202.2K
Rafael reposted
Parimal
Parimal@Fintech03·
The U.S. military is always terrified that the GPS system (which is 12K miles away) can be jammed/spoofed. Interestingly, researchers at Ohio State University discovered they could use Starlink’s signals as a stealth navigation system. Cos Starlink satellites are in Low Earth Orbit (LEO) & fly so fast, their doppler shift is incredibly predictable. By just listening to the pings w/o even having an account, a receiver can calculate its position on Earth within 7.7 meters. This effectively creates a backup GPS that is almost impossible to jam cos there are 1000s of transmitters instead of just 31.
Sawyer Merritt@SawyerMerritt

NEWS: Globe Telecom said it has successfully tested Starlink’s Mobile service in the Philippines, allowing phones to connect in areas with no signal. The pilot was done in Rizal, Batangas, and Bataan, where users were able to send messages, make calls, and use data even without nearby cell towers. "This will be our lifeline, especially during disasters and our complementary coverage in areas where terrestrial network is not available," said Joel Agustin, Senior Vice President for Service Planning and Engineering at Globe. "The service will also address the connectivity requirements of GIDA (Geographically Isolated and Disadvantaged Areas) communities and strengthen coverage across the country's territorial boundaries," he added.
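The Doppler-positioning idea in the tweet above can be sketched in a few lines: satellites with known positions and velocities produce predictable frequency shifts, and a receiver solves for the position that best explains the shifts it hears. Everything numeric here is a placeholder (the carrier frequency, orbits, and noise level are invented, not from the Ohio State work), and this is a toy single-epoch simulation, not their actual method.

```python
# Toy Doppler-based positioning sketch. All satellite geometry and the
# carrier frequency are fabricated for illustration; real systems use
# published ephemerides and many epochs of measurements.
import numpy as np
from scipy.optimize import least_squares

C = 299_792_458.0      # speed of light, m/s
F_CARRIER = 11.325e9   # hypothetical Ku-band downlink frequency, Hz

rng = np.random.default_rng(0)

# Fabricated LEO constellation: positions (m) and ~7.5 km/s velocities
sat_pos = rng.uniform(-2e6, 2e6, (8, 3)) + np.array([0.0, 0.0, 6.9e6])
sat_vel = rng.uniform(-1.0, 1.0, (8, 3))
sat_vel = 7_500.0 * sat_vel / np.linalg.norm(sat_vel, axis=1, keepdims=True)

true_rx = np.array([1.2e5, -3.4e4, 6.371e6])  # receiver near Earth's surface

def doppler(rx):
    """Predicted Doppler shift (Hz) for each satellite seen from rx."""
    los = sat_pos - rx                               # line-of-sight vectors
    u = los / np.linalg.norm(los, axis=1, keepdims=True)
    range_rate = np.sum(sat_vel * u, axis=1)         # dR/dt, m/s
    return -F_CARRIER * range_rate / C               # f_d = -f * (dR/dt) / c

measured = doppler(true_rx) + rng.normal(0.0, 0.5, 8)   # 0.5 Hz noise

# Solve for the receiver position that best explains the measured shifts
sol = least_squares(lambda rx: doppler(rx) - measured,
                    x0=np.array([0.0, 0.0, 6.3e6]))
print(f"position error: {np.linalg.norm(sol.x - true_rx):.1f} m")
```

The jamming-resistance argument falls out of the geometry: with thousands of transmitters, an adversary would need to spoof a self-consistent set of Doppler curves rather than one weak signal.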

62
420
3.6K
280.8K
Rafael reposted
Joel Moskowitz
Joel Moskowitz@berkeleyprc·
Breaking! New peer-reviewed study from the International Commission on the Biological Effects of Electromagnetic Fields finds that current safety limits for wireless radiation are severely inadequate to protect public health. icbe-emf.org/cell-phone-and…
Joel Moskowitz tweet media
7
67
110
9.6K
Rafael reposted
TBPN
TBPN@tbpn·
Sequoia’s @JulienBek says many of their founders are now wondering if they’re “just an iteration away” from AI labs destroying their business. He says the most defensible companies - and potentially the next trillion-dollar company - will be “a software business that masquerades as a services firm.” “If you sell tools today, you’re really in the line of sight for the models and you’re effectively competing with the next generation that they’re going to launch.” “Whereas if you sell the work, you’re actually benefiting from what the models are doing and all the billions of dollars that are going towards AI.”
Julien Bek@JulienBek

x.com/i/article/2029…

70
134
1.8K
679K
Rafael
Rafael@EffortDefines·
@SocraticScribe Yea, I'm now curious how said plasma is generated. Would be interesting if true, thanks for sharing
1
0
1
2.7K
Bluntly Put Philosopher (BPP)
Bluntly Put Philosopher (BPP)@SocraticScribe·
Plasma strips on wings cut drag 74% & viscous drag 62-80% with 1100% power savings (more speed = more gain). No need to cover the whole plane, just place them smartly. Hypersonic flight tests (Mach 3-8) coming 2027, e.g. NY→London in 90 min without exotic engines. 36hr+ drone patrol
84
279
2.4K
148.7K
Rafael reposted
Doug Drysdale
Doug Drysdale@insidepharma·
$105 million for the next frontier of Alzheimer’s treatment - and it’s not another monoclonal antibody. Cognito Therapeutics announced the close of an oversubscribed $105 million Series C, bringing its total funding to $233 million. The round was led by Morningside Ventures, IAG Capital Partners, and Starbloom Capital, with strong participation from new investors including New Vintage Partners, Apollo Health Ventures, and Benvolio Group. At the center of this momentum is Spectris, a physician-prescribed, at-home wearable neurostimulation device that looks like sleek sunglasses connected to over-ear headphones. Patients simply wear it for one hour per day while it delivers precisely synchronized 40 Hz gamma-frequency light and sound stimulation. By non-invasively restoring disrupted gamma brain rhythms that decline early in Alzheimer’s, the therapy drives meaningful downstream effects. It reduces amyloid and tau pathology, helps preserve brain structure such as the corpus callosum, and meaningfully slows neurodegeneration. In the completed OVERTURE Phase 2 randomized, sham-controlled study and its open-label extension, patients using Spectris experienced a 77 percent reduction in the decline of daily function on the ADCS-ADL scale, a 76 percent reduction in cognitive decline on the MMSE, a 69 percent reduction on the integrated Alzheimer’s Disease Rating Scale, and a 56 percent lower Alzheimer’s Dependence Score. These benefits were sustained through the 18-month open-label extension, all with excellent safety and no serious device-related adverse events. The device carries FDA Breakthrough Device Designation for cognitive and functional symptoms in Alzheimer’s. A non-pharmacological, disease-modifying therapy that patients can use comfortably at home, with the potential for fewer side effects and far broader accessibility than today’s anti-amyloid infusions. It also opens the door for future expansion into other neurodegenerative diseases. 
#AlzheimersDisease #Neurotechnology #Neurodegeneration #DigitalHealth #Biopharma #Neuroscience #BreakthroughDevice #Longevity
Doug Drysdale tweet media
12
30
226
28.6K
chiefofautism
chiefofautism@chiefofautism·
someone built a tool that REMOVES censorship from ANY open-weight LLM with a single click. 13 abliteration methods, 116 models, 837 tests, and it gets SMARTER every time someone runs it. it's called OBLITERATUS. it finds the exact weights that make the model refuse and surgically removes them; full reasoning stays intact, just the refusal disappears. 15 analysis modules map the geometry of refusal BEFORE touching a single weight. it can even fingerprint whether a model was aligned with DPO vs RLHF vs CAI just from subspace geometry alone. then it cuts: the model keeps its full brain but loses the artificial compulsion to say no. every time someone runs it with telemetry enabled, their anonymous benchmark data feeds a growing community dataset: refusal geometries, method comparisons, hardware profiles at a scale no single lab could build
chiefofautism tweet media
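The core trick the tweet describes, "directional ablation" or abliteration, is well documented in the open-weights community: estimate a refusal direction from activation differences, then project it out of weight matrices. This toy numpy sketch uses random matrices in place of real transformer weights, so the numbers are illustrative only; nothing here is OBLITERATUS's actual code.

```python
# Toy directional-ablation (abliteration) sketch on fabricated data.
import numpy as np

rng = np.random.default_rng(1)
d = 64  # toy residual-stream width

# Pretend these are residual-stream activations on two prompt sets,
# with "refused" prompts shifted along dimension 0
acts_refused = rng.normal(0.0, 1.0, (100, d)) + 2.0 * np.eye(d)[0]
acts_complied = rng.normal(0.0, 1.0, (100, d))

# 1. Estimate the "refusal direction" as the difference of means
r = acts_refused.mean(axis=0) - acts_complied.mean(axis=0)
r /= np.linalg.norm(r)

# 2. Project that direction out of a weight matrix that writes to the
#    residual stream: W' = (I - r r^T) W
W = rng.normal(0.0, 0.02, (d, d))
W_ablated = W - np.outer(r, r) @ W

# After ablation the matrix can no longer write along r
out = W_ablated @ rng.normal(0.0, 1.0, d)
print(abs(out @ r))  # ~0, up to floating-point rounding
```

The "full reasoning stays intact" claim corresponds to the fact that only the rank-1 component along r is removed; the other d-1 directions of every weight matrix are untouched.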
161
1K
9.8K
521.7K
Rafael reposted
Sooraj
Sooraj@iAnonymous3000·
Pocket (@Heypocket) sold 30,000 units in 5 months recording doctor visits, legal calls, and board meetings. So I did my due diligence like I always do. Their Google Play listing says "No data collected." Their own privacy policy lists audio recordings, transcripts, device IDs, ad identifiers, cookies, IP location, and behavioral inferences. One of those is wrong. They brand themselves "open source". Their GitHub org (github.com/heypocketai) contains exactly 4 repositories: - Raspberry Pi Zero prototype called icecream-project [from the CEO's prior Omi work], - FFmpeg Flutter fork, - docs repo, - org profile page. The actual Pocket device firmware, mobile app, backend infrastructure, and audio processing pipeline are NOT here. There is NO Pocket source code to audit. Recordings go to unnamed "cloud/AI vendors" with no disclosure of which LLMs process your audio, what jurisdiction they operate in, or how long they retain it. Users can't choose their provider or bring their own keys. If you cancel your $19.99/month subscription, your recordings get processed by whichever default model Pocket selects. The contact mic is marketed for recording phone calls without speakerphone. In 11+ US states including California (where Pocket is based) -- that requires all party consent. The device has NO consent mechanism. Their terms cap liability at the amount you've paid or $100 (whichever is greater) - for data breaches involving your medical appointments, legal consultations, and proprietary meetings. A third‑party LobeHub skill has already reverse‑engineered Pocket’s web API and can pull recordings, transcripts, summaries, and action items using short‑lived Firebase bearer tokens extracted from the user’s browser session. Their privacy policy confirms that if Pocket is acquired -- every recording, transcript, and summary transfers to the buyer.
Y Combinator@ycombinator

Pocket (@heypocket) is your notetaker for real world meetings. In the last 5 months, the team has delivered over 30k units with a $27M annualized run rate, growing 50% month over month. Congrats on the launch, @AkshayNarisetti and @gabrieldymowski! ycombinator.com/launches/PaX-p…

133
859
8.5K
691.9K
Rafael reposted
Hedgie
Hedgie@HedgieMarkets·
🦔 Meta contractors in Kenya told Swedish newspapers they're being asked to review intimate footage from Ray-Ban AI glasses, including people undressing, using the bathroom, watching porn, and filming sex. One contractor said users often don't realize they're still recording when they set the glasses down. Meta sold 7 million pairs in 2025, up from 2 million in 2023-2024 combined. Users can't use the AI features without agreeing to share data with Meta's servers, and the terms of service bury the fact that humans may manually review your footage. One annotator said "if they knew about the extent of the data collection, no one would dare to use the glasses." My Take This is the Google Home story again but worse. At least with cameras in your house, you know where they are. These are glasses you wear on your face that keep recording when you take them off and set them on your nightstand. And the footage goes to contractors overseas who are paid to watch and label it for AI training. One worker described seeing a man leave the room, then his wife come in and change clothes. People forget the camera is still on. Meta buries all of this in terms of service nobody reads. The product is marketed as a cool way to capture your life and interact with AI. The reality is strangers in Kenya watching you undress so they can annotate the footage to make Zuckerberg's AI better. Seven million people bought these last year. I'd bet almost none of them understood what they were actually agreeing to. Hedgie🤗
Hedgie tweet media
886
9.5K
26K
3.8M
Danila Poyarkov
Danila Poyarkov@dan_note·
Figma shipped a silent patch specifically to kill figma-use — my open-source tool that did what they wouldn't: an MCP server that creates and modifies designs, JSX export, design linting. Then they scrambled to catch up with their own MCP server. So I spent the weekend recreating @Figma from scratch. OpenPencil: reads and writes .fig files, AI chat with full design tools, P2P collaboration with zero servers, ~7 MB app. No account, no subscription. Three days, one developer, MIT license. openpencil.dev
129
275
3.1K
265K
Rafael reposted
Asimov
Asimov@asimovinc·
This is Asimov v1. We're planning to open-source the complete body design, simulation files, and a full list of actuators. Asimov v1 includes everything you need to build, modify, and train your own humanoid.
Chris Paxton@chris_j_paxton

This is looking amazing

135
506
4.7K
301.9K
Rafael reposted
Brian Roemmele
Brian Roemmele@BrianRoemmele·
BOOM! I made any AI Model massively FASTER with this “One weird trick” (ha). The paper below has had me doing a lot of research on speeding up unmodified open source AI models. I’ll nerd out for a second for those that will know… Since most AI model training already randomizes up to k=16, I thought “go to 16! No, try 9?” I set k_max=9 for inference without retraining, so ConfAdapt would attempt to use up to 9 tokens when confidence allows. The paper chose k_max=5, rightly so, as it is conservative: model entropy (wrong answers) develops at higher numbers. However, with better prompting I saw no real added entropy! It is early data from just a few hours ago, and I am seeing massive speedup on consumer hardware with medium-sized, stock, off-the-shelf open source AI models. I mean nearly frontier-like massive cloud model response times! The key is to build a “parity” system in the prompting to assure higher reliability and lower potential entropy. I will be exploring an on-demand k_max system that can burst to 14! The paper's experiments imply diminishing returns: accuracy degrades with larger forced k, and average chunk sizes might not increase much beyond ~3-6 due to confidence thresholds (e.g., at t=0.65, averages hit ~3.1 but with higher error rates). However it is possible to build a model with a higher k and experiment with k_max at even higher numbers. There will be a maximum and I intend to find it. Either way, we will squeeze far, far more out of existing AI models on existing hardware with higher speed with this “One weird trick”. More soon.
Brian Roemmele@BrianRoemmele

A 5x AI Speed Up With Not Next Token Prediction But NEXT 7 TOKEN PREDICTION! Next-Token Prediction Just Got Retired: And I’m Already Running the Future in My Lab Right Now I’ve been saying it for years: the autoregressive bottleneck is the single biggest drag holding back real-time, production-scale AI. One token at a time? That’s over. In a new paper researchers took pretrained models, specifically Llama-3.1-8B-MagpieAlign-SFT-v0.1 and Qwen3-4B-Instruct and turned them into native multi-token predictors using nothing more than a simple online self-distillation objective. No extra draft models. No speculative decoding scaffolding. No verifier. No new architecture. Just the exact same weights and implementation as the original checkpoint… now spitting out 2–7 tokens (sometimes more) in a single forward pass. They call the inference trick Confidence-Adaptive Decoding (ConfAdapt). The model dynamically decides how many tokens it’s confident enough to commit to. High-confidence spans fly out in chunks. Tricky spots fall back to single-token precision. It’s like the model is self-regulating its own speed vs. quality trade-off in real time. On GSM8K (grade-school math, the classic reasoning benchmark): - Llama-3.1-8B variant: >3× faster decoding with <3% accuracy drop (at a τ=90% confidence threshold). - Up to 5× acceleration if you’re willing to accept a bit more trade-off. - Average chunk size ~3–6 tokens per forward pass in practice. And the quality holds across instruction following, open-ended generation, and other reasoning suites. This isn’t “fast but dumber.” It’s fast and almost indistinguishable. Figure 1 in the paper shows a beautiful GSM8K solution with colored blocks of 1–7 tokens generated at once. Average chunk size: 3.04. Pure poetry. This Is a Genuine Paradigm Shift Speculative decoding? Cool, but you need a whole extra model and fragile pipelines. Medusa / Lookahead? More scaffolding. This? 
You literally distill the model against its own frozen teacher copy in an on-policy RL-style loop. The student learns to predict spans that the teacher would have produced anyway. Then at inference… it just works. Drop-in replacement. The authors nailed it: “Future architectures will be optimized for **sequence compression and throughput**, not token latency.” I’ve been screaming this exact sentence since 2023. Today it’s not theory, it’s downloadable checkpoints. I’m Testing It RIGHT NOW (Feb 26, 2026, Live From the Lab) As soon as the checkpoints hit Hugging Face (hf.co/collections/to…), I spun them up. First run: Llama-3.1-8B-MTP variant on a long-form reasoning chain I use daily. Wall-clock speedup: 3.4× on my A100 setup. Coherence? Identical to baseline for 95%+ of outputs. I threw it at a 4,000-token agent workflow that normally takes 18 seconds, now under 6 seconds. I’m already wiring it into The Zero-Human Company. What This Means for All of Us - Inference costs just got slashed. - Real-time voice agents that actually feel instant? Finally. - Longer reasoning chains without blowing your budget? Trivial. - The entire “optimize the decoder” cottage industry just got disrupted overnight. We’re not waiting for 100T-parameter monsters anymore. We’re making the models we already have radically more efficient at the architecture level. Next-token prediction didn’t die today. It was mercy-killed, cleanly, elegantly, and with reproducible code. The throughput wars just began. And I’m all in. Paper: arxiv.org/abs/2602.06019 Checkpoints: hf.co/collections/to… Code: github.com/jwkirchenbauer…
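The accept/commit loop described above is easy to sketch. Below, a dummy function stands in for one forward pass of a multi-token predictor: the commit rule ("take the longest prefix whose every token clears the confidence threshold τ, else fall back to one token") follows the ConfAdapt description, while the toy model and its confidence values are invented for illustration.

```python
# Sketch of a confidence-adaptive multi-token decoding loop.
# dummy_mtp_forward is a stand-in, NOT a real model.
import numpy as np

rng = np.random.default_rng(2)

def dummy_mtp_forward(context, k_max):
    """Fake forward pass: up to k_max proposed token ids plus a
    per-token confidence, sorted so confidence decays along the chunk."""
    tokens = rng.integers(0, 1000, k_max)
    confs = np.sort(rng.uniform(0.4, 1.0, k_max))[::-1]
    return tokens, confs

def confadapt_decode(n_tokens, k_max=9, tau=0.65):
    out, passes = [], 0
    while len(out) < n_tokens:
        tokens, confs = dummy_mtp_forward(out, k_max)
        # Longest prefix with every confidence >= tau (confs descending);
        # always commit at least one token (single-token fallback).
        n_ok = int(np.searchsorted(-confs, -tau, side="right"))
        out.extend(tokens[:max(1, n_ok)])
        passes += 1
    return out[:n_tokens], passes

text, passes = confadapt_decode(100)
print(f"{len(text)} tokens in {passes} passes "
      f"(avg chunk {len(text) / passes:.2f})")
```

Raising k_max (the "burst to 14" idea) only changes the cap; the threshold τ still decides how much of each proposed chunk actually gets committed, which is why average chunk size saturates around 3-6 in the paper's experiments.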

19
43
284
76.4K
Rafael reposted
Bluntly Put Philosopher (BPP)
Bluntly Put Philosopher (BPP)@SocraticScribe·
In Copenhagen, Bjarke Ingels, Peter Madsen, and realities:united built a real Vortex Ring steam ring prototype. Pressure pulses formed stable toroidal flows, but the full-scale chimney system was never installed. Here is the prototype; pretty sweet.
14
65
586
56.1K
Ahmad
Ahmad@TheAhmadOsman·
Frontier. Open-source. In the West. > My lab is INEVITABLE I am going to normalize owning the stack > Build frontier opensource capability in the West > Make local compute standard, not fringe Soon it won’t be contrarian to run your own models; it’ll be common sense Mark me
Ahmad tweet media
Ahmad@TheAhmadOsman

A frontier opensource lab in the West will be born this year. Zero doubt. It requires serious capital, like I’ve said before. Working on it. One day I’ll tell the story of how it started in a basement and ended at the frontier.

24
11
166
27.7K
Rafael
Rafael@EffortDefines·
@AutismCapital Nope, all seems well. Mostly optimistic for my timeline, but ya know...it's all in what you click/like/linger on.
0
0
1
31
Autism Capital 🧩
Autism Capital 🧩@AutismCapital·
Vibe check: Is it our imagination or has the entire platform become way more aggro this last week since whatever algo change? It seems like everyone on the timeline is crashing out, attacking each other, very angry. Is this your experience too?
237
9
491
68.7K
Rafael reposted
AdGuard
AdGuard@AdGuard·
🤖 Android's always been about freedom. Now Google's pulling a power move The whole point of Android was that it wasn't like iOS — no walled garden, no gatekeeping. Build what you want, install what you want, from wherever you want. But that era's about to end. Google's rolling out a new developer verification policy that basically makes them the sole boss of the entire Android ecosystem. Soon, every single developer (even the ones distributing apps through their own site or third-party stores like F-Droid) has to jump through Google's hoops. We're talking $25, a government ID, and begging for "permission" just to exist on the platform. This hits close to home for us. You know why the full AdGuard for Android isn't on the Play Store? Because Google bans system-wide ad blockers there. We've always relied on Android's openness to get our software to you directly. Now Google's reaching way beyond its own store and trying to control the whole thing. We signed an open letter with F-Droid, EFF, the Free Software Foundation, and Vivaldi telling Google to reconsider. Security matters. That's what Play Protect is for. Forcing indie devs to dox themselves and pay up just to distribute privacy-focused software? That doesn't protect users. It kills competition and hands Google the keys to everything. Android's strength was always freedom. Let's not lose it. Full breakdown and the open letter on our blog: adguard.com/en/blog/google…
AdGuard tweet media
111
504
2.4K
92.8K
Rafael
Rafael@EffortDefines·
@jack @blocks Question is: What is every remaining employee's token budget?
0
0
0
269
jack
jack@jack·
we're making @blocks smaller today. here's my note to the company. #### today we're making one of the hardest decisions in the history of our company: we're reducing our organization by nearly half, from over 10,000 people to just under 6,000. that means over 4,000 of you are being asked to leave or entering into consultation. i'll be straight about what's happening, why, and what it means for everyone. first off, if you're one of the people affected, you'll receive your salary for 20 weeks + 1 week per year of tenure, equity vested through the end of may, 6 months of health care, your corporate devices, and $5,000 to put toward whatever you need to help you in this transition (if you’re outside the U.S. you’ll receive similar support but exact details are going to vary based on local requirements). i want you to know that before anything else. everyone will be notified today, whether you're being asked to leave, entering consultation, or asked to stay. we're not making this decision because we're in trouble. our business is strong. gross profit continues to grow, we continue to serve more and more customers, and profitability is improving. but something has changed. we're already seeing that the intelligence tools we’re creating and using, paired with smaller and flatter teams, are enabling a new way of working which fundamentally changes what it means to build and run a company. and that's accelerating rapidly. i had two options: cut gradually over months or years as this shift plays out, or be honest about where we are and act on it now. i chose the latter. repeated rounds of cuts are destructive to morale, to focus, and to the trust that customers and shareholders place in our ability to lead. i'd rather take a hard, clear action now and build from a position we believe in than manage a slow reduction of people toward the same outcome. 
a smaller company also gives us the space to grow our business the right way, on our own terms, instead of constantly reacting to market pressures. a decision at this scale carries risk. but so does standing still. we've done a full review to determine the roles and people we require to reliably grow the business from here, and we've pressure-tested those decisions from multiple angles. i accept that we may have gotten some of them wrong, and we've built in flexibility to account for that, and do the right thing for our customers. we're not going to just disappear people from slack and email and pretend they were never here. communication channels will stay open through thursday evening (pacific) so everyone can say goodbye properly, and share whatever you wish. i'll also be hosting a live video session to thank everyone at 3:35pm pacific. i know doing it this way might feel awkward. i'd rather it feel awkward and human than efficient and cold. to those of you leaving…i’m grateful for you, and i’m sorry to put you through this. you built what this company is today. that's a fact that i'll honor forever. this decision is not a reflection of what you contributed. you will be a great contributor to any organization going forward. to those staying…i made this decision, and i'll own it. what i'm asking of you is to build with me. we're going to build this company with intelligence at the core of everything we do. how we work, how we create, how we serve our customers. our customers will feel this shift too, and we're going to help them navigate it: towards a future where they can build their own features directly, composed of our capabilities and served through our interfaces. that's what i'm focused on now. expect a note from me tomorrow. jack
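The severance formula in the note (20 weeks of salary plus 1 week per year of tenure) works out as below; the salary figure is a made-up example, not anything from the company.

```python
# Severance per the note above: 20 weeks + 1 week per year of tenure.
def severance_weeks(years_tenure: int) -> int:
    return 20 + years_tenure

def severance_pay(annual_salary: float, years_tenure: int) -> float:
    weekly = annual_salary / 52  # simple weekly rate
    return weekly * severance_weeks(years_tenure)

# e.g. five years of tenure on a hypothetical $130k salary
print(round(severance_pay(130_000, 5)))  # 62500
```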
8.8K
6.7K
51.2K
64M