Sina Borzooei

334 posts


@sinaborzooei

Process modeling and digitalization expert @ IVL Swedish Environmental Research Institute | #water and #climate enthusiast | Active at @iwa_mia_sg

Stockholm, Sweden · Joined February 2012
811 Following · 956 Followers
Pinned Tweet
Sina Borzooei @sinaborzooei
At the @Castiglionetorinese WWTP, learning about the operational challenges of real-world application of #DEMON #Anammox for sidestream centrate treatment. Moving from lab to real-life conditions can be shocking sometimes!
[two images]
Castiglione Torinese, Piemonte 🇮🇹
Replies 1 · Retweets 5 · Likes 29 · Views 0
Sina Borzooei @sinaborzooei
Justice will not be served until those who are unaffected are as outraged as those who are. @BenjaminFranklin
[image]
Replies 0 · Retweets 0 · Likes 0 · Views 13
Sina Borzooei retweeted
Moog @a_toots
It’s going to take generations for America to return to normality once this fucking clown show is over.
Replies 572 · Retweets 2K · Likes 18.3K · Views 995.5K
Sina Borzooei @sinaborzooei
This changes how you see water 💧 97% is salty; most freshwater is locked in ice. What do we actually use? Just a tiny fraction! So no, water isn’t scarce globally. It’s inaccessible, uneven, and poorly managed. That’s why desalination + smarter, local solutions are the future.
[image]
Replies 0 · Retweets 0 · Likes 0 · Views 22
Sina Borzooei retweeted
AFRICAN & BLACK HISTORY @AfricanArchives
61 years ago today, Rev. Dr. Martin Luther King Jr. led thousands of people on a 54-mile march from Selma to Montgomery, Alabama.
[image]
Replies 15 · Retweets 659 · Likes 2.2K · Views 29.1K
Sina Borzooei retweeted
Kaveh Madani @KavehMadani
A gift from Iran's little child to dear compatriots.
Replies 558 · Retweets 1.2K · Likes 7.9K · Views 173.2K
Sina Borzooei retweeted
Python Coding @clcoding
"Yep… we know why you're scared to explore deeper
[image]
Replies 9 · Retweets 115 · Likes 1.3K · Views 27.8K
Sina Borzooei retweeted
Hasan Toor @hasantoxr
🚨BREAKING: Google DeepMind just dropped a research bomb! It's called AlphaEvolve and it's using LLMs to automatically write better AI algorithms than humans can. No manual tuning. No trial-and-error. No human intuition required.

AlphaEvolve treats algorithm source code as a genome → the LLM acts as the mutation engine → proposes semantically meaningful code changes → auto-evaluates fitness on real game benchmarks → keeps winners, evolves further.

Here's the wildest part: the AI discovered a warm-start threshold of iteration 500 without being told the evaluation horizon was 1000 iterations. It found non-intuitive mechanisms humans never would have designed manually.

The results? VAD-CFR beats every state-of-the-art baseline in 10 of 11 games tested. SHOR-PSRO outperforms Nash, AlphaRank, and PRD solvers.

This is the recursion nobody was ready for: AI systems that design better AI learning algorithms than the researchers who built them.

Paper dropped February 2026. Link in first comment.
[image]
Replies 100 · Retweets 381 · Likes 1.9K · Views 179.2K
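For readers who want the shape of that loop in code: a minimal sketch of the mutate/evaluate/select cycle the post describes. `llm_mutate` and `evaluate_fitness` are hypothetical stand-ins (the real system calls an LLM to edit source code and runs game benchmarks; nothing below is from the paper itself).

```python
import random

def llm_mutate(source: str) -> str:
    """Hypothetical stand-in for the LLM 'mutation engine': the real
    system asks an LLM for a semantically meaningful edit to the code."""
    return source + f"\n# variant {random.randint(0, 99999)}"

def evaluate_fitness(source: str) -> float:
    """Hypothetical benchmark harness: the real system runs the
    candidate algorithm on game benchmarks and returns a score."""
    return random.random()

def evolve(seed_source: str, population_size: int = 8, generations: int = 20) -> str:
    """Treat source code as the genome: keep winners, evolve further."""
    population = [(evaluate_fitness(seed_source), seed_source)]
    for _ in range(generations):
        # Select the fittest candidates as parents.
        parents = sorted(population, key=lambda p: p[0], reverse=True)[:population_size]
        # Mutate each parent via the LLM and score the children.
        children = [llm_mutate(src) for _, src in parents]
        population = parents + [(evaluate_fitness(c), c) for c in children]
    return max(population, key=lambda p: p[0])[1]

best = evolve("def solve(state): ...")
```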
Sina Borzooei retweeted
Rohan Paul @rohanpaul_ai
Fascinating Google paper: just repeating your prompt 2 times can seriously boost LLM performance, sometimes pushing accuracy from 21% to 97% on certain search tasks.

An LLM reads your prompt left to right, so early words get processed before the model has seen the later words that might change what they mean. If you paste the same prompt again, the model reaches the 2nd copy already knowing the full prompt from the 1st copy, so it can interpret the 2nd copy with the full context. That means the model gets a cleaner “what am I supposed to do” picture right before it answers, instead of guessing too early and sticking with a bad setup.

This helps most when the task needs details that appear late, like when answer choices show up before the actual question, because the 2nd pass sees both together in the right order.

In the Google tests, this simple trick took one hard search-style task from 21.33% correct to 97.33% correct for a model setting with no step-by-step reasoning. Across 7 models and 7 benchmarks, repeating the prompt beat the normal prompt in 47 out of 70 cases, and it never did worse in a statistically meaningful way.

The big deal is that it is almost free to try, it often boosts accuracy a lot, and it shows many LLM mistakes are “reading order” problems rather than pure lack of knowledge.

Paper Link: arxiv.org/abs/2512.14982
Paper Title: "Prompt Repetition Improves Non-Reasoning LLMs"
[image]
Replies 78 · Retweets 220 · Likes 1.2K · Views 70.7K
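The trick is easy to wire in. A minimal sketch, assuming a generic chat-style API (the `client` and model names in the usage comment are placeholders, not from the paper):

```python
def repeat_prompt(prompt: str, copies: int = 2, separator: str = "\n\n") -> str:
    """Duplicate the prompt so the model re-reads it with full context.

    With left-to-right (causal) attention, tokens in the first copy are
    processed before the later words arrive; tokens in the second copy
    can attend back over the entire first copy.
    """
    return separator.join([prompt] * copies)

# Usage with any chat-style API (names are placeholders):
# reply = client.chat(model="some-model",
#                     messages=[{"role": "user",
#                                "content": repeat_prompt(question)}])
```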
Sina Borzooei retweeted
In Otter News.... @In_Otter_News2
Please get him off LinkedIn...
[image]
Replies 69 · Retweets 1.9K · Likes 9.1K · Views 348.1K
Sina Borzooei retweeted
Nicholas Fabiano, MD @NTFabiano
Lifting weights is associated with better academic performance.
[image]
Replies 139 · Retweets 1.5K · Likes 12.1K · Views 986.7K
Sina Borzooei retweeted
BURKOV @burkov
LLMs process text from left to right — each token can only look back at what came before it, never forward. This means that when you write a long prompt with context at the beginning and a question at the end, the model answers the question having "seen" the context, but the context tokens were processed without any awareness of what question was coming. This asymmetry is a basic structural property of how these models work.

The paper asks what happens if you just send the prompt twice in a row, so that every part of the input gets a second pass where it can attend to every other part. The answer is that accuracy goes up across seven different benchmarks and seven different models (from the Gemini, ChatGPT, Claude, and DeepSeek series of LLMs), with no increase in the length of the model's output and no meaningful increase in response time, because processing the input is done in parallel by the hardware anyway.

There are no new losses to compute, no finetuning, no clever prompt engineering beyond the repetition itself. The gap between this technique and doing nothing is sometimes small, sometimes large (one model went from 21% to 97% on a task involving finding a name in a list).

If you are thinking about how to get better results from these models without paying for longer outputs or slower responses, that's a fairly concrete and low-effort finding.

Read with AI tutor: chapterpal.com/s/1b15378b/pro…
Get the PDF: arxiv.org/pdf/2512.14982
[image]
Replies 396 · Retweets 1.1K · Likes 11.6K · Views 3M
Sina Borzooei retweeted
Hasan Toor @hasantoxr
🚨BREAKING: Microsoft Research + Salesforce just dropped a paper that should scare every AI builder.

They tested 15 top LLMs (GPT-4.1, Gemini 2.5 Pro, Claude 3.7 Sonnet, o3, DeepSeek R1, Llama 4) across 200,000+ simulated conversations.

Single-turn prompt: 90% performance. Multi-turn conversation: 65% performance. Same model. Same task. Just... talking normally.

The culprit isn't intelligence. Aptitude only dropped 15%. Unreliability EXPLODED by 112%.

→ LLMs answer before you finish explaining (wrong assumptions get baked in permanently)
→ They fall in love with their first wrong answer and build on it
→ They forget the middle of your conversation entirely
→ Longer responses introduce more assumptions = more errors

Even reasoning models failed. o3 and DeepSeek R1 performed just as badly. Extra thinking tokens did nothing. Setting temperature to 0? Still broken.

The fix right now: give your AI everything upfront in one message instead of back-and-forth.

Every benchmark you've seen was tested on single-turn prompts in perfect lab conditions. Real conversations break every model on the market and nobody's talking about it.
[image]
Replies 699 · Retweets 1.7K · Likes 9K · Views 1.6M
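A minimal sketch of the "everything upfront" mitigation the thread recommends: collapse a drip-fed, back-and-forth specification into a single message before calling the model. The helper and example turns below are illustrative, not from the paper.

```python
def consolidate_turns(user_turns: list[str]) -> str:
    """Merge a back-and-forth specification into one up-front message,
    so the model sees every requirement before committing to an answer."""
    numbered = [f"{i}. {turn.strip()}" for i, turn in enumerate(user_turns, 1)]
    return ("Here is my complete request, all requirements at once:\n"
            + "\n".join(numbered))

# Instead of sending these one at a time across turns:
turns = [
    "Write a function that parses a CSV file.",
    "Oh, and it should skip the header row.",
    "It also must handle quoted fields containing commas.",
]
print(consolidate_turns(turns))
```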
Sina Borzooei retweeted
Next Science @NextScience
🧠 The mind-body connection is a biological reality that triggers genuine physical illness.

The human brain possesses the remarkable and often taxing ability to translate emotional distress into tangible physical illness. Through conditions like Somatic Symptom Disorder, persistent stress and anxiety act as biological triggers that activate the body’s fight-or-flight response. This activation is not merely psychological; it results in a massive surge of hormones like cortisol and adrenaline, which flood the nervous system and force the body into a state of high alert that can be difficult to sustain without physical consequence.

For those experiencing these symptoms, the pain, fatigue, and nausea are entirely real and not imagined, despite what inconclusive medical tests might suggest. The physiological impact often manifests as debilitating headaches, chest tightness, and heart palpitations that can mirror serious cardiovascular or gastrointestinal issues.

Understanding this deep-seated connection between mental health and physical sensation is crucial for effective treatment, as it shifts the focus from purely physical symptoms to the underlying neurological and emotional drivers.

Source: Cleveland Clinic. (2023). Somatic Symptom Disorder. Cleveland Clinic Health Library.
[image]
Replies 7 · Retweets 39 · Likes 165 · Views 5.3K
Sina Borzooei retweeted
God of Prompt @godofprompt
🚨 Holy shit… Stanford just published a paper that questions whether we even need humans to study humans.

The title sounds like a joke: “This human study did not involve human subjects.” But it’s dead serious. The researchers are asking a controversial question: can LLM simulations count as behavioral evidence?

Here’s the core idea. Instead of recruiting thousands of participants, running surveys, and waiting weeks for results, they simulate people using large language models. Not generic prompts, but structured simulations where the model is assigned demographic traits, preferences, beliefs, and contextual constraints. Then they test whether the simulated responses statistically match real-world human data.

And disturbingly… they often do. Across multiple behavioral tasks, the LLM-generated “participants” reproduced known human patterns:
• Established psychological biases
• Preference distributions
• Decision-making trends
• Even demographic splits

Not perfectly. Not universally. But far closer than most people would expect.

The key contribution of the paper isn’t “LLMs are human.” It’s validation. They systematically compare simulated outputs to ground-truth human datasets and evaluate alignment using statistical benchmarks. When the distributions match, the simulation isn’t just storytelling. It becomes empirical evidence.

That’s the uncomfortable shift. If a sufficiently constrained LLM simulation reproduces real behavioral patterns, does it become a legitimate experimental proxy? Because if the answer is yes, this changes everything:
• Behavioral economics
• Political science
• Market research
• Policy testing
• UX experimentation

You could prototype social interventions before deploying them in the real world. You could stress-test messaging strategies across simulated demographics. You could explore rare edge-case populations without recruitment bottlenecks.

But here’s where Stanford is careful. The models don’t “understand” humans. They reflect training data patterns. They can amplify biases. They can collapse under distribution shift. And they can simulate plausibility without causality.

So the paper doesn’t claim replacement. It argues for calibration. LLM simulations can be useful behavioral instruments if validated against real data and bounded within known limits. That’s the distinction. Not synthetic humans. Synthetic behavioral priors.

The wild part? This paper forces academia to confront something bigger: if large models encode large-scale behavioral regularities from the internet, they become compressed maps of human tendencies. Not minds. Maps. And maps can be useful.

We’re moving from “AI as text generator” to “AI as behavioral simulator.” The ethics, methodology, and epistemology implications are massive. Because once simulation becomes statistically reliable, the bottleneck in social science shifts from data collection to model alignment.

And that might be the real revolution hidden in this paper.
[image]
Replies 26 · Retweets 37 · Likes 134 · Views 23.5K
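The validation step is the part you can actually operationalize. A minimal sketch, assuming scipy is available: collect answers from real and LLM-simulated participants, then check whether the two answer distributions are statistically consistent. The function name, survey options, and counts below are made up for illustration; the paper's own benchmarks are not specified here.

```python
from collections import Counter
from scipy.stats import chisquare

def distribution_match(human_answers, simulated_answers, options):
    """Chi-square check that LLM-simulated 'participants' reproduce the
    answer distribution of real participants (illustrative harness)."""
    human, sim = Counter(human_answers), Counter(simulated_answers)
    observed = [sim[o] for o in options]
    # Scale human counts so expected frequencies sum to the simulated total.
    scale = sum(observed) / sum(human[o] for o in options)
    expected = [human[o] * scale for o in options]
    stat, p = chisquare(f_obs=observed, f_exp=expected)
    return stat, p  # a high p-value means the distributions are consistent

# Hypothetical two-option survey: 62/38 human split vs. 58/42 simulated.
human = ["A"] * 62 + ["B"] * 38
simulated = ["A"] * 58 + ["B"] * 42
print(distribution_match(human, simulated, options=["A", "B"]))
```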
Sina Borzooei retweeted
Nature Unedited @NatureUnedited
This is a Japanese flying squirrel (Nihon momonga), known for its big eyes and gliding ability.
Replies 123 · Retweets 1.5K · Likes 12.8K · Views 376.4K