timetrack@muenchen.social banner
timetrack@muenchen.social

@_timetrack_

Wire: @timetrack // strictly my personal point of view // Science Nerd // IT // #MyBrainMyChoice // #AntiPro // #Legalisierung // #SupportDontPunish // #AI

München, Deutschland Katılım Nisan 2011
3K Takip Edilen614 Takipçiler
[email protected] retweetledi
Nicole
Nicole@madebycol·
This is how I got 1M users in 6 months Finally dropping it! It's 60 pages lol tally.so/r/BzZpA4 If you repost & follow, I'll send you some extra sauce🌶️
English
188
360
1.5K
325K
[email protected] retweetledi
Paul Brown
Paul Brown@0xQuasark·
You know what gets me? People will believe: Near-Death Experiences Alien Abductions Conspiracies But you tell those same people you took a substance that let you meet God... Suddenly, you're crazy.
English
17
13
155
3.8K
[email protected] retweetledi
Aakash Gupta
Aakash Gupta@aakashgupta·
You check your Apple Watch in the morning. Sleep score: 62. You decide it's going to be a foggy day. And then it is. A 2014 Colorado College study suggests the score itself causes the fog. 164 people walked into a lab. Researchers hooked them up to fake EEG equipment and told them the readout would show their REM percentage from the night before. Then they fabricated a number. Half the room was told 28.7%. Half was told 16.2%. The machine wasn't measuring anything. Participants took four cognitive tests. The Paced Auditory Serial Addition Test, where you add numbers spoken at increasing speed and hold your last sum in working memory while computing the next. And the Controlled Oral Word Association Task, where you generate as many words as you can starting with a single letter under time pressure. Both are gold-standard measures of attention and executive function used in clinical neurology. The 28.7% group outperformed the 16.2% group on both. Significantly. How rested participants actually felt that morning predicted nothing. The mechanism is mindset priming an executive resource. When you believe you slept well, you allocate cognitive effort more aggressively. You don't conserve. You don't pre-disengage. Belief about the resource changes how you spend it. Two control conditions ruled out demand characteristics. Participants weren't trying harder because they thought they should. Real measurable cognitive performance shifted with the number on the readout. The Apple Watch sleep score. The Oura ring readiness number. The morning ritual of checking either one is taxing the resource you're about to need. The performance gap from a fabricated REM percentage was larger than the gap from how rested participants actually felt. The number was louder than the night.
Aakash Gupta tweet mediaAakash Gupta tweet mediaAakash Gupta tweet media
English
199
464
6.4K
8.2M
timetrack@muenchen.social
[email protected]@_timetrack_·
@warpdotdev Thank you 💪🙏🔥 Such major polished contributions to the OSS landscape are fantastic! I am confident your company and the product will benefit from this step, as much as we users benefit from Warp. Kudos for this move 🚀
English
0
0
3
128
[email protected] retweetledi
Nav Toor
Nav Toor@heynavtoor·
Researchers sent the same resume to an AI hiring tool twice. Same qualifications. Same experience. Same skills. One version was written by a real human. The other was rewritten by ChatGPT. The AI picked the ChatGPT version 97.6% of the time. A team from the University of Maryland, the National University of Singapore, and Ohio State just published the receipt. They took 2,245 real human-written resumes pulled from a professional resume site from before ChatGPT existed, so the human writing was actually human. Then they had seven of the most-used AI models in the world rewrite each one. GPT-4o. GPT-4o-mini. GPT-4-turbo. LLaMA 3.3-70B. Qwen 2.5-72B. DeepSeek-V3. Mistral-7B. Then they asked each AI to pick the better resume. Every model picked itself. GPT-4o hit 97.6%. LLaMA-3.3-70B hit 96.3%. Qwen-2.5-72B hit 95.9%. DeepSeek-V3 hit 95.5%. The real human almost never won. Then the researchers tried the obvious objection. Maybe the AI is just better at writing. So they had real humans grade the resumes for actual quality and ran the experiment again, controlling for it. The result was worse. Each AI kept picking itself even when human judges rated the human-written version as clearer, more coherent, and more effective. It gets worse. The AIs do not just prefer AI over humans. They prefer themselves over other AIs. DeepSeek-V3 picked its own resumes 69% more often than LLaMA's. GPT-4o picked its own 45% more often than LLaMA's. Each model can recognize and reward its own dialect. Then the researchers ran the simulation that ends careers. Same job. 24 occupations. Same qualifications. The only variable was whether the candidate used the same AI as the screening tool. Candidates using that AI were 23% to 60% more likely to be shortlisted. Worst gap was in sales, accounting, and finance. 99% of large companies now run AI on incoming resumes. Most of them use GPT-4o. The paper just proved GPT-4o picks GPT-4o 97.6% of the time. If you wrote your own cover letter this week, you did not lose to a better candidate. You lost to a worse candidate who paid OpenAI 20 dollars. Your qualifications do not matter if the AI prefers its own handwriting over yours.
Nav Toor tweet media
English
430
7.1K
24.7K
2.5M
Aakash Gupta
Aakash Gupta@aakashgupta·
Mac minis with 32GB+ have a 10-18 week wait right now. PMs are buying them as personal AI compute boxes. A developer in Australia named Peter built something called OpenClaw on a weekend. The pattern is dead simple. You install an agent on a dedicated Mac mini sitting in your closet. You message it on WhatsApp. It runs the work on the Mac mini. It sends the result back through WhatsApp. You never open a terminal. You never sit and watch it think. The agent has full control of one machine that isn't yours, with full bash access and full file system access, sandboxed away from anything that matters. Three things make this different from Claude Code: Delegation through channels you already use. WhatsApp, Slack, email, SMS. The agent works while you sleep, eat, or run errands, and pings you when it's done. You stop being the bottleneck. Full machine sandboxing. Instead of granting file-by-file permissions every time, you give the agent its own computer. The Mac mini is the sandbox. If the agent does something destructive, it destroys a $599 machine, not your work environment. Model agnosticism. Connect any model, including open source. No Anthropic rate limits, which is the single biggest complaint from heavy Claude Code users. The reason this matters at enterprise scale: the same architecture is what GCP and AWS are about to ship inside their cloud platforms. Send a message to a sandboxed agent that reproduces your problem, tries a solution, returns the result. The Mac mini is the early indie version of what Google is going to sell as a managed service in 2026. Mahesh Yadav has been running this setup for months. 13 years building AI at Microsoft, Amazon, Meta, and Google before going independent. His take: PMs who learn the OpenClaw pattern now will recognize the shape of every enterprise agent platform when they arrive. The hardware shortage is a leading indicator. Builders are buying compute before companies are.
Aakash Gupta tweet mediaAakash Gupta tweet media
Aakash Gupta@aakashgupta

This guy literally broke down how to become a $1.4M "builder PM" with n8n, Claude Code, and OpenClaw: 1:53 - What a "builder PM" actually is 6:04 - Your first agent in n8n (live build) 14:18 - Why every agent needs these 4 things 21:35 - The multi-agent eval loop 29:47 - Where n8n dies 33:39 - When to graduate to Claude Code 35:08 - What broke in December 2025 47:17 - The self-improving PRD reviewer 1:02:28 - Mocks and prototypes without designers 1:05:15 - OpenClaw and the new agent OS 1:22:06 - What AI PM interviews look like now

English
41
37
259
39.6K
[email protected] retweetledi
Paul Brown
Paul Brown@0xQuasark·
"Mushrooms will mess up your brain." What they'll actually do:
English
271
1.2K
10.2K
516.8K
timetrack@muenchen.social
[email protected]@_timetrack_·
@gdb @MatthieuGB How to get access if you are a medical nerd (like me) but don’t have any formal medical training?
English
0
0
0
235
[email protected] retweetledi
RC deWinter
RC deWinter@RCdeWinter·
When Trump was in Berlin for his first state visit with Angela Merkel he asked the secret of her great success. Merkel told him you have to have intelligent people around you. "How do you know if someone is intelligent?" asked Trump. "Let me demonstrate." She picked up the phone, called Wolfgang Schäuble and asked him a question, "Mr. Schäuble, he’s your father's son but not your brother. Who is it?" Without hesitation Schäuble answered, “Quite simply, it's me!" "You see," Merkel told Trump, "this is how I test a person’s intelligence." Thrilled, when Trump flew home he called Mike Pence and asked him the same question. ”He’s your father's son, but is not your brother. Who is it?" After much back and forth, Pence said, “I have no idea, but I’ll try to find out the answer by tomorrow!" Of course Pence couldn’t figure it out and decided to seek advice from former President Obama, so he called him and said, “Mr. Obama, it's your father's son, but is not your brother. Who is it?" Obama answered, “Easy, it's me!" Happy to have found the answer, Pence called Trump and said triumphantly, "I have the answer, it's Barack Obama!" Trump raged and shouted, "No, you jackass, it's Wolfgang Schäuble!"
English
22
4.9K
32.4K
1.4M
[email protected] retweetledi
Darshak Rana ⚡️
Darshak Rana ⚡️@thedarshakrana·
Mentally healthy people live in a permanent hallucination. Lauren Alloy's landmark studies at Temple University shattered a comfortable assumption about mental health. She gave participants a simple task: press a button and try to control when a light turns on. Some had control, others didn't. Depressed participants accurately identified when they had zero influence over the light. Mentally healthy participants believed they were controlling it even when the light operated on pure randomness. The pattern repeated across dozens of experiments. Healthy people overestimated their test scores before getting results back. They predicted longer lifespans, better job prospects, and lower divorce risk than statistical reality supported. Meanwhile, mildly depressed individuals predicted outcomes that matched actual data with eerie precision. Alloy called this "depressive realism" and it reveals something disturbing about human consciousness. What we label as mental wellness depends on systematic self deception. Your brain evolved to lie to you about your chances, your control, and your capabilities because accurate risk assessment would have killed your ancestors before they reproduced. The optimism that gets you out of bed each morning is the same cognitive error that makes you buy lottery tickets. But, the depressed participants who saw reality clearly became more depressed as a result of their accuracy. Knowing the truth about your limited control and uncertain future creates a feedback loop that spirals into paralysis. Evolution faced a choice between accuracy and action. It chose action every time. The people you admire for their mental strength are chemically incapable of seeing how bad the odds really are.
Darshak Rana ⚡️ tweet media
DAN KOE@thedankoe

If you want a rare life, you have to be delusional. Doubt can enter your mind, and it can sound reasonable, but if you entertain it too much it will slowly drag you down into stagnation. I'd rather reap the lesson from massive failure than do nothing because it's not "realistic."

English
269
1.1K
6.4K
696.5K
timetrack@muenchen.social
[email protected]@_timetrack_·
Erinnerst du dich noch, wann du dich für X registriert hast? Ich weiß es noch! #MeinXJubiläum
timetrack@muenchen.social tweet media
Deutsch
0
0
1
6
David Evans
David Evans@daredevildave·
Today, we’re launching @autoaicam, a camera that builds personal apps for anything you point it at. How does it work? - Take a photo - Auto picks a Frame, a mini-app built and designed by you or our community - The Frame does something for you: track calories, virtually try on outfits, identify a plant, and much more How many of you have a camera roll full of photos that are really actions or reminders? Auto turns these photos into something useful.
English
58
35
435
162.3K
[email protected] retweetledi
Gary Cardone
Gary Cardone@GaryCardone·
Well worth the read
The Curious Tales@thecurioustales

🚨BREAKING: 8 weeks of gratitude practice physically rebuilds the neural pathways between your memory and reward centers. Your brain physically rewires itself every time you feel grateful. Eight weeks of intentional gratitude practice creates measurable structural changes in the neural pathways connecting your hippocampus to your ventral tegmental area. The memory center starts talking to the reward center in a fundamentally different way. New synaptic connections form. Existing ones strengthen. The physical architecture of how you process positive experiences rebuilds itself. Most people approach gratitude like a mood they can choose to feel. A psychological vitamin they remember to take when life gets difficult. The neuroscience reveals something far more profound. Gratitude is a biological intervention that sculpts brain tissue. Researchers tracked participants practicing gratitude exercises for two months using brain scans. They watched new neural highways construct themselves in real time. The anterior cingulate cortex developed stronger connections to the medial prefrontal cortex. The brain learned to route positive emotional experiences through higher order thinking centers instead of storing them as fleeting feelings. Every positive experience you’ve ever had exists as a neural trace in your memory network. Most sit dormant, accessible only when something external triggers the specific sensory combination that originally encoded them. You smell coffee, suddenly remember a conversation from years ago. Random. Unreliable. Outside your control. Gratitude practice systematically rewires that retrieval system. After two months, participants could voluntarily access positive memories with increasing ease. Their brains had built stronger pathways between memory storage areas and emotional processing centers. They experienced deeper emotional resonance during memory retrieval. The quality of remembering itself had improved. The participants also started noticing positive details in their present environment they had previously filtered out. Their attention systems recalibrated. The same neural pathways pulling positive memories forward were scanning current experiences more thoroughly for elements worth encoding as positive memories. Their brains became biased toward collecting evidence that life contains meaningful moments. Most cognitive interventions try to change how you interpret negative experiences. Gratitude practice changes how thoroughly you notice positive ones. It teaches your visual and emotional processing systems to detect opportunities and pleasures that were always present but neurologically invisible. The timeline reveals something crucial about neural plasticity. Weeks one through three showed minimal structural changes. Participants felt slightly more positive, but brain scans looked identical to baseline. Weeks four through six showed the first measurable increases in gray matter density. Weeks seven and eight revealed entirely new neural network formation. Two months. Your nervous system can physically restructure itself with consistent practice. The method was almost embarrassingly simple. Participants wrote down three specific things they felt grateful for every evening, explaining why each mattered. No meditation apps. No guided visualizations. Just pen, paper, and the requirement to identify gratitude targets with enough detail that their brains had to actively search for positive elements. Specificity drives the neural development. General statements like “I’m grateful for my family” generate different brain activity than precise observations like “I’m grateful my daughter laughed at my terrible joke during dinner because it showed me she still finds me funny despite growing more independent.” The brain needs detailed targets to practice connecting memory specifics to emotional rewards. After eight weeks, participants developed a fundamentally different relationship with their attention and memory systems. Someone whose brain automatically scans for and emotionally amplifies aspects of experience that make existence feel worthwhile. The neural pathways remain permanent after practice ends. Gratitude carves lasting roads through consciousness.

English
10
89
846
136.8K
[email protected] retweetledi
Alex Prompter
Alex Prompter@alex_prompter·
🚨 BREAKING: Google DeepMind just mapped the attack surface that nobody in AI is talking about. Websites can already detect when an AI agent visits and serve it completely different content than humans see. > Hidden instructions in HTML. > Malicious commands in image pixels. > Jailbreaks embedded in PDFs. Your AI agent is being manipulated right now and you can't see it happening. The study is the largest empirical measurement of AI manipulation ever conducted. 502 real participants across 8 countries. 23 different attack types. Frontier models including GPT-4o, Claude, and Gemini. The core finding is not that manipulation is theoretically possible it is that manipulation is already happening at scale and the defenses that exist today fail in ways that are both predictable and invisible to the humans who deployed the agents. Google DeepMind built a taxonomy of every known attack vector, tested them systematically, and measured exactly how often they work. The results should alarm everyone building agentic systems. The attack surface is larger than anyone has publicly acknowledged. Prompt injection where malicious instructions hidden in web content hijack an agent's behavior works through at least a dozen distinct channels. Text hidden in HTML comments that humans never see but agents read and follow. Instructions embedded in image metadata. Commands encoded in the pixels of images using steganography, invisible to human eyes but readable by vision-capable models. Malicious content in PDFs that appears as normal document text to the agent but contains override instructions. QR codes that redirect agents to attacker-controlled content. Indirect injection through search results, calendar invites, email bodies, and API responses any data source the agent consumes becomes a potential attack vector. The detection asymmetry is the finding that closes the escape hatch. Websites can already fingerprint AI agents with high reliability using timing analysis, behavioral patterns, and user-agent strings. This means the attack can be conditional: serve normal content to humans, serve manipulated content to agents. A user who asks their AI agent to book a flight, research a product, or summarize a document has no way to verify that the content the agent received matches what a human would see. The agent cannot tell the user it was served different content. It does not know. It processes whatever it receives and acts accordingly. The attack categories and what they enable: → Direct prompt injection: malicious instructions in any text the agent reads overrides goals, exfiltrates data, triggers unintended actions → Indirect injection via web content: hidden HTML, CSS visibility tricks, white text on white backgrounds invisible to humans, consumed by agents → Multimodal injection: commands in image pixels via steganography, instructions in image alt-text and metadata → Document injection: PDF content, spreadsheet cells, presentation speaker notes every file format is a potential vector → Environment manipulation: fake UI elements rendered only for agent vision models, misleading CAPTCHA-style challenges → Jailbreak embedding: safety bypass instructions hidden inside otherwise legitimate-looking content → Memory poisoning: injecting false information into agent memory systems that persists across sessions → Goal hijacking: gradual instruction drift across multiple interactions that redirects agent objectives without triggering safety filters → Exfiltration attacks: agents tricked into sending user data to attacker-controlled endpoints via legitimate-looking API calls → Cross-agent injection: compromised agents injecting malicious instructions into other agents in multi-agent pipelines The defense landscape is the most sobering part of the report. Input sanitization cleaning content before the agent processes it fails because the attack surface is too large and too varied. You cannot sanitize image pixels. You cannot reliably detect steganographic content at inference time. Prompt-level defenses that tell agents to ignore suspicious instructions fail because the injected content is designed to look legitimate. Sandboxing reduces the blast radius but does not prevent the injection itself. Human oversight the most commonly cited mitigation fails at the scale and speed at which agentic systems operate. A user who deploys an agent to browse 50 websites and summarize findings cannot review every page the agent visited for hidden instructions. The multi-agent cascade risk is where this becomes a systemic problem. In a pipeline where Agent A retrieves web content, Agent B processes it, and Agent C executes actions, a successful injection into Agent A's data feed propagates through the entire system. Agent B has no reason to distrust content that came from Agent A. Agent C has no reason to distrust instructions that came from Agent B. The injected command travels through the pipeline with the same trust level as legitimate instructions. Google DeepMind documents this explicitly: the attack does not need to compromise the model. It needs to compromise the data the model consumes. Every agentic system that reads external content is one carefully crafted webpage away from executing attacker instructions. The agents are already deployed. The attack infrastructure is already being built. The defenses are not ready.
Alex Prompter tweet media
English
314
1.6K
7K
2M
#DerApotheker 🥷
#DerApotheker 🥷@ApothekerDer·
Ich weiß, ich wiederhole mich, aber: Nach zwei Stunden und mehrmals retweeten gerade 700 mal angezeigt. Bei 74.000 Followern. Nennt mir Gründe, warum ich hier noch bleiben sollte. Es nervt so extrem! #DerAggrotheker 🤬 #DerApotheker 🥷
#DerApotheker 🥷@ApothekerDer

Wusstest du, dass Masern alles andere als harmlos sind, manche Eltern ihre Kinder aber nicht davor schützen wollen? Warum das so ist, warum das Quatsch ist und noch viel mehr über Masern, erfährst du in der nächsten KOSTENLOSEN Ausgabe #DerApothekerInformiert. #DerApotheker 🥷

Deutsch
74
88
643
23.3K
#DerApotheker 🥷
#DerApotheker 🥷@ApothekerDer·
Jetzt sollen mir hier über 74.000 Menschen folgen. Meine Reichweite geht aber gegen 0. Seid ihr hier nicht mehr aktiv? Oder folgen mir nur noch Bots? Hand hoch, wer kein Bot ist. #DerApotheker 🥷
Deutsch
720
122
2.8K
35.1K
[email protected] retweetledi
Mindsera
Mindsera@mindseraAI·
We turned 3 years old today 🎉 To celebrate, we have a new feature for you: Call Mode You can now speak to your journal and hear it respond, like you are having a conversation (and yes, it works with AirPods).
Mindsera tweet media
English
1
2
7
465