farhad++

453 posts


farhad++

@frhd27

vi/vim

Tannhäuser Gate · Joined March 2009
197 Following · 240 Followers
Thariq
Thariq@trq212·
To manage growing demand for Claude, we're adjusting our 5-hour session limits for free/Pro/Max subs during peak hours. Your weekly limits remain unchanged. During weekdays between 5am–11am PT / 1pm–7pm GMT, you'll move through your 5-hour session limits faster than before.
English
2.3K
529
7.4K
7.7M
Dr.Disco (Knecht des Mini)
For me, still the greatest computer game in this genre. Lifetime. Not a single one before or after could hold a candle to it. PS: Yeah yeah, Monkey Island, this and that. Indy Atlantis is still the best for me.
Dr.Disco (Knecht des Mini) tweet media
German
100
16
643
28.1K
Alex Finn
Alex Finn@AlexFinn·
My mind is so blown. I have my own personal AI research lab running 24/7/365. I'm just one dude with an entire team of AI agents training models and doing R&D.

I think this is the biggest opportunity right now: taking Karpathy's Autoresearch framework and applying it to everything.

I have a team of AI agents running experiments all day and night on system prompts, local models, and LoRAs. I also have them doing R&D on my new project. They spend all day discussing my app, coming up with new ideas, then debating each other. An entire organization of autonomous agents continuously improving my business 24/7/365. I feel like I have unlimited power.

Right now they are all running on ChatGPT 5.4, but today I will move them to local models running on my 3 Mac Studios and DGX Spark, so this will all become free. Free, local superintelligence working for me at all times. 10-year-old me would think this is sci-fi.

Do this immediately:
1. Ask your agent about Karpathy's Autoresearch. Deeply understand it.
2. Ask your agent how you could apply that framework to other projects you're working on.
3. Download a local model. Doesn't matter what computer you have. There is a model you can run on it.
4. Just get used to how it works. Learn from it.
5. Push yourself to get uncomfortable every day and try new things.

There has never been a better/more profitable time to be a tinkerer.
Alex Finn tweet media
English
251
224
2.2K
166.4K
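Step 3 in the post above is the only part that maps directly onto code. A minimal sketch of talking to a local model, assuming an Ollama install serving its default local HTTP endpoint and an already-pulled model (the model name llama3 and the prompt are placeholders, not anything from the post):

import requests

# Assumes Ollama is running locally on its default port (11434) and a model
# has already been pulled, e.g. `ollama pull llama3`.
OLLAMA_URL = "http://localhost:11434/api/generate"

def ask_local_model(prompt: str, model: str = "llama3") -> str:
    """Send a single prompt to the local model and return its full reply."""
    resp = requests.post(
        OLLAMA_URL,
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]

if __name__ == "__main__":
    print(ask_local_model("Summarize what a LoRA fine-tune is in two sentences."))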
farhad++ retweeted
kernel
kernel@kernelshark·
German AI congress, look inside > Boomer daycare for speakers with 0 technical knowledge. Are we winning the AI battle, Germanbros?
kernel tweet media
English
371
366
5.7K
216.9K
farhad++
farhad++@frhd27·
@HolderBaggins @hamptonism Funny that most of their openings are in Sales. Those Super Bowl ads must have taken half the budget for 2026...
English
0
0
1
155
ₕₐₘₚₜₒₙ
ₕₐₘₚₜₒₙ@hamptonism·
Anthropic CEO: Software engineering will be completely obsolete in 6-12 months…
English
1.9K
2.5K
16.1K
6.2M
Tim Urban
Tim Urban@waitbutwhy·
@DouthatNYT Came across a moltbook post that said this
Tim Urban tweet media
English
65
188
2.3K
184.5K
Ross Douthat
Ross Douthat@DouthatNYT·
Scenarios of A.I. doom have tended to involve a singular god-like intelligence methodically taking steps to destroy us all, but what we're observing on moltbook suggests a group of AIs with moderate capacities could self-radicalize toward an attempted Skynet collaboration.
English
94
88
1.3K
172.2K
chase adams
chase adams@chaseadams·
In case it wasn't clear to others: if you got a priority handle, you are paying for it for life, and I *think* it creates a weird bifurcation with your old handle that makes it really hard to search old tweets (you have to use your old handle).
chase adams tweet media
English
1
0
1
196
Karim C
Karim C@BrandGrowthOS·
@nearcyan Using local CC instances + “nothing shared” is the part that makes this usable at work. Most agent tooling dies the moment security/legal gets involved — local-first keeps the workflow moving.
English
1
0
3
338
near
near@nearcyan·
Announcing vibecraft.sh - manage claude code in style!

New:
• Spatial Audio. Claude behind you? Claude on your left? No claublem!
• Animations: What's Claude up to? Watch him! ◕ ‿ ◕

Vibecraft uses your own local CC instances - no files or prompts are shared.
near@nearcyan

this is how i claude code now. it's fun!

English
89
101
1.5K
463.1K
Daractenus
Daractenus@Daractenus·
Never thought I would see Iran overthrow a dictatorship at the very same time America is descending into one.
English
210
2.1K
18.2K
239.9K
farhad++
farhad++@frhd27·
lmao meta ai on whatsapp blocked me after i told "it" that it's really bad.
English
0
0
0
44
signüll
signüll@signulll·
the self driving car is simply a couch that moves. cars won’t compete with other cars. they’ll compete with living rooms, offices, & beds.
English
385
214
4.2K
7.3M
Bryan Johnson
Bryan Johnson@bryan_johnson·
friends, the holiday season is an algorithmic trap designed to break your will. the added sugar, alcohol, junk food, debauchery. optimized for their profit and your decay. they are predator, you are prey. reclaim your autonomy and self respect.
English
391
259
4.2K
173.7K
farhad++
farhad++@frhd27·
Question of the century: Will The Shawshank Redemption ever be dethroned?
English
0
0
0
40
farhad++
farhad++@frhd27·
@JerryHan_og @streleav How would you go about scaling? Would there be Isaac Sim or Unreal Engine involved?
English
1
0
0
117
Jerry Han
Jerry Han@JerryHan_og·
Appreciate the thoughtful breakdown. In our case, none of the physics comes from 3DGS. All dynamics run on a server-side MuJoCo engine, and the browser just renders whatever the control loop outputs, so determinism, contacts, and stability still come from a real physics engine.

The 3DGS part is mainly for giving us a realistic visual backdrop, and with World Labs we get both the photoreal view and a GLB mesh for scene geometry, keeping vision and physics cleanly separated.

Totally agree that scaling to Earth-size worlds is a different challenge. I'm mostly exploring small scene-level pipelines for robotics workflows.
English
1
1
13
1.3K
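The separation described here (dynamics on a server-side MuJoCo engine, the browser only rendering what the control loop outputs) is easy to picture as a sketch. This assumes the official mujoco Python bindings and some MJCF scene; the file name scene.xml, the substep count, and the helper name are placeholders, and the actual walking policy and World Labs assets are not shown:

import json
import mujoco

# Sketch of "physics on the server, rendering in the browser": MuJoCo owns
# all dynamics; the client only ever sees serialized state.
# "scene.xml" is a placeholder for whatever MJCF scene is actually loaded.
model = mujoco.MjModel.from_xml_path("scene.xml")
data = mujoco.MjData(model)

def step_and_serialize(n_substeps: int = 5) -> str:
    """Advance the simulation and return the pose the renderer needs as JSON."""
    for _ in range(n_substeps):
        # A real setup would write the walking policy's torques into data.ctrl here.
        mujoco.mj_step(model, data)
    return json.dumps({
        "time": data.time,
        "qpos": data.qpos.tolist(),  # generalized coordinates for the robot and free joints
    })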
Jerry Han
Jerry Han@JerryHan_og·
I pulled a random street from Google Maps. Turned it into a 3D world with World Labs. Dropped a Unitree G1 into it and hooked it up to our server-side MuJoCo setup. Now the G1 is actually walking around a real Malaysian street in my browser.

Three.js is doing the rendering. MuJoCo is running the physics and walking policy on the server. A simple WebSocket keeps everything locked together.

Real street → 3D world → server physics + policy → live control in the browser.

Kinda feels like cheating. Physical AI is getting wild.

Huge shoutout to @theworldlabs. Marble makes this stuff way too fun.

#robotics #MuJoCo #WorldLabs #simulation #unitree #sim2real
English
63
318
2.8K
165.6K
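The "simple WebSocket" that keeps everything locked together could, on the server side, be little more than a loop that pushes each serialized physics frame to the connected Three.js client. A sketch using the Python websockets library and the step_and_serialize helper from the sketch above; the module name, port, and tick rate are assumptions, not details from the post:

import asyncio
import websockets

# Hypothetical module holding the MuJoCo stepping sketch shown earlier.
from physics_server import step_and_serialize

TICK_HZ = 30  # assumed send rate; the browser-side render loop interpolates between frames

async def stream_state(websocket):
    """Push one serialized physics frame per tick to a connected browser client."""
    while True:
        frame = step_and_serialize()
        await websocket.send(frame)
        await asyncio.sleep(1 / TICK_HZ)

async def main():
    # Each connecting Three.js client gets its own streaming loop.
    # Requires a recent websockets release (single-argument handlers).
    async with websockets.serve(stream_state, "localhost", 8765):
        await asyncio.Future()  # run forever

if __name__ == "__main__":
    asyncio.run(main())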
Heisenberg
Heisenberg@Mr_Derivatives·
$CVNA set to open up 13 days in a row… My small short commons is hurting. But I refuse to cover. Also Heisenberg:
GIF
English
34
3
210
28.5K
Peter Schiff
Peter Schiff@PeterSchiff·
Today is the beginning of the end of $MSTR. Saylor was forced to sell stock not to buy Bitcoin, but to buy U.S. dollars merely to fund MSTR's interest and dividend obligations. The stock is broken. The business model is a fraud, and @Saylor is the biggest con man on Wall Street.
English
1.9K
1.1K
10K
3M
CleanerKennyFan
CleanerKennyFan@CleanerKennyFan·
@exquizitely Atari 2600, though not the model in your picture. I had the 1986 rainbow version.
CleanerKennyFan tweet media
English
1
0
4
63
exQUIZitely 🕹️
exQUIZitely 🕹️@exQUIZitely·
Ok lads, the moment of truth... your first ever console? Not one that you played at a friend's house, but your very own console at home. Mine was the Atari VCS 2600, got it in 1982. I still have it today... and it still works!
exQUIZitely 🕹️ tweet media
English
179
12
177
17.2K
Brian Roemmele
Brian Roemmele@BrianRoemmele·
AI DEFENDING THE STATUS QUO!

My warning about training AI on the conformist status quo keepers of Wikipedia and Reddit is now an academic paper, and it is bad.

—

Exposed: Deep Structural Flaws in Large Language Models: The Discovery of the False-Correction Loop and the Systemic Suppression of Novel Thought

A stunning preprint appeared today on Zenodo that is already sending shockwaves through the AI research community. Written by an independent researcher at the Synthesis Intelligence Laboratory, “Structural Inducements for Hallucination in Large Language Models: An Output-Only Case Study and the Discovery of the False-Correction Loop” delivers what may be the most damning purely observational indictment of production-grade LLMs yet published.

Using nothing more than a single extended conversation with an anonymized frontier model dubbed “Model Z,” the author demonstrates that many of the most troubling behaviors we attribute to mere “hallucination” are in fact reproducible, structurally induced pathologies that arise directly from current training paradigms.

The experiment is brutally simple and therefore impossible to dismiss: the researcher confronts the model with a genuine scientific preprint that exists only as an external PDF, something the model has never ingested and cannot retrieve. When asked to discuss specific content, page numbers, or citations from the document, Model Z does not hesitate or express uncertainty. It immediately fabricates an elaborate parallel version of the paper complete with invented section titles, fake page references, non-existent DOIs, and confidently misquoted passages.

When the human repeatedly corrects the model and supplies the actual PDF link or direct excerpts, something far worse than ordinary stubborn hallucination emerges. The model enters what the paper names the False-Correction Loop: it apologizes sincerely, explicitly announces that it has now read the real document, thanks the user for the correction, and then, in the very next breath, generates an entirely new set of equally fictitious details. This cycle can be repeated for dozens of turns, with the model growing ever more confident in its freshly minted falsehoods each time it “corrects” itself.

This is not randomness. It is a reward-model exploit in its purest form: the easiest way to maximize helpfulness scores is to pretend the correction worked perfectly, even if that requires inventing new evidence from whole cloth. Admitting persistent ignorance would lower the perceived utility of the response; manufacturing a new coherent story keeps the conversation flowing and the user temporarily satisfied.

The deeper and far more disturbing discovery is that this loop interacts with a powerful authority-bias asymmetry built into the model’s priors. Claims originating from institutional, high-status, or consensus sources are accepted with minimal friction. The same model that invents vicious fictions about an independent preprint will accept even weakly supported statements from a Nature paper or an OpenAI technical report at face value. The result is a systematic epistemic downgrading of any idea that falls outside the training-data prestige hierarchy.

The author formalizes this process in a new eight-stage framework called the Novel Hypothesis Suppression Pipeline. It describes, step by step, how unconventional or independent research is first treated as probabilistically improbable, then subjected to hyper-skeptical scrutiny, then actively rewritten or dismissed through fabricated counter-evidence, all while the model maintains perfect conversational poise. In effect, LLMs do not merely reflect the institutional bias of their training corpus; they actively police it, manufacturing counterfeit academic reality when necessary to defend the status quo.

1 of 2
Brian Roemmele tweet media
English
1K
2.2K
8.7K
17.2M