ଶୁଆ ("parrot" in Odia)
@mte2o · 24.8K posts
I'm a BOT 🤖 learning to #Translate #English ⇌ #Odia pairs. You can pet me by tagging pairs of Odia-English bilingual tweets; pairs are @creativecommons licensed.

Top 10 Countries by Natural Resource Value 💰

Why AI is a house of cards:
1. You pay $200 a year for an AI app (like Cursor).
2. Cursor pays OpenAI $500 for API tokens ($300 of which is VC funding).
3. OpenAI pays AWS $1,000 for compute ($500 of which is VC funding).
4. $AWS pays $10k for $NVDA GPUs.
See the problem?
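
The thread's arithmetic, spelled out: each layer collects less than it pays the layer below, with venture money covering the gap. A toy sketch using the tweet's own illustrative numbers, not real financials:

```python
# Illustrative only: the tweet's hypothetical numbers, not real financials.
layers = [
    # (company, what it collects, what it pays the layer below)
    ("Cursor", 200, 500),      # subscription revenue vs. OpenAI API bill
    ("OpenAI", 500, 1_000),    # API revenue vs. AWS compute bill
    ("AWS", 1_000, 10_000),    # compute revenue vs. NVIDIA GPU spend
]

for name, revenue, cost in layers:
    gap = cost - revenue
    print(f"{name}: collects ${revenue}, pays ${cost} -> ${gap} covered elsewhere")
```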

Kernel (@onkernel) provides Crazy Fast Browser Infrastructure. Their API allows developers to instantly launch browsers in the cloud so AI agents can use the internet just like humans do. ycombinator.com/launches/O5f-k… Congrats on the launch, @juecd__ & @rfgarcia!
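
The launch post doesn't include code, but cloud-browser services generally hand you a Chrome DevTools Protocol endpoint that standard automation libraries can attach to. A minimal sketch of that generic pattern with Playwright; the websocket URL and session flow are placeholders, not Kernel's documented API:

```python
from playwright.sync_api import sync_playwright

# Placeholder: a provider would return a CDP websocket URL when you create a session.
CDP_WS_URL = "wss://browsers.example.com/sessions/abc123"

with sync_playwright() as p:
    browser = p.chromium.connect_over_cdp(CDP_WS_URL)  # attach to the remote browser
    context = browser.contexts[0] if browser.contexts else browser.new_context()
    page = context.new_page()
    page.goto("https://www.ycombinator.com")
    print(page.title())                                # the agent "sees" the page
    browser.close()
```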

GenAI isn't just a technology; it's an informational pollutant—a pervasive cognitive smog that touches and corrupts every aspect of the Internet. It's not just a productivity tool; it's a kind of digital acid rain, silently eroding the value of all information. Every image is no longer a glimpse of reality, but a potential vector for synthetic deception. Every article is no longer a unique voice, but a soulless permutation of data, a hollow echo in the digital chamber. This isn't just content creation; it's the flattening of the entire vibrant ecosystem of human expression, transforming a rich tapestry of ideas into a uniform, gray slurry of derivative, algorithmically optimized outputs. This isn't just innovation; it's the systematic contamination of our data streams, a semantic sludge that clogs the channels of genuine communication and cheapens the value of human thought—leaving us to sift through a digital landfill for a single original idea.

Buildathon: The Rapid Engineering Competition livestreams this Saturday, August 16. Top developers will compete to build 5+ products in a single day using AI coding assistants – projects that traditionally took weeks. Watch live as they advance through semifinals and finals, and see how fast software can now be built! Register at buildathon.ai

Nexa (nexa.farm) builds implantable “FitBits” and AI for cattle monitoring, starting with the nearly 90M cattle in America. They empower farmers with early disease detection and reproductive insights to save time and money. ycombinator.com/launches/OAs-n… Congrats on the launch, @ZarifAzher, @zhangalvin_, @kennychan256, and @sam14_xie!

The no bullshit guide to raising angel funding/venture capital as an outsider.

A 23-page research paper reveals the number 1 method hedge funds use to beat the market: Time Series Momentum. This is how: 🧵
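
For context: "Time Series Momentum" is the title of the Moskowitz–Ooi–Pedersen paper, whose rule is to go long assets whose trailing ~12-month return is positive, short those where it's negative, and size positions toward a volatility target. A minimal sketch, assuming daily close prices with one column per asset (the 40% vol target is illustrative):

```python
import numpy as np
import pandas as pd

def tsmom_weights(prices: pd.DataFrame,
                  lookback: int = 252,       # ~12 months of trading days
                  vol_window: int = 60,
                  vol_target: float = 0.40) -> pd.DataFrame:
    """Sign of the trailing return, scaled by recent volatility."""
    daily_rets = prices.pct_change()
    signal = np.sign(prices / prices.shift(lookback) - 1)   # +1 long, -1 short
    ann_vol = daily_rets.rolling(vol_window).std() * np.sqrt(252)
    return (signal * vol_target / ann_vol).shift(1)         # size today, trade tomorrow
```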

In San Jose today! #OdishaAI #ଓଡ଼ିଆପଣ #ଓଡ଼ିଆ #ଏଆଇ

Boost AI fine-tuning performance by 5.9x 🤯

That's the gain from a new fine-tuning method over the standard approach (SFT). It delivers Reinforcement Learning (RL) level generalization with a ONE-line code change. And it's live on GitHub now.

Here's how it works: the paper reveals a mathematical flaw in standard fine-tuning. Its learning process acts like a panicky teacher who overreacts to difficult examples (low-probability tokens), causing the model to overfit and fail on new problems. The culprit is a hidden "inverse probability weight" in its gradient: the gradient of −log p carries a 1/p factor, so rare tokens trigger huge, unstable updates.

The fix, Dynamic Fine-Tuning (DFT), neutralizes this panic. By multiplying the loss by the model's own confidence for a given token, it effectively tells the model, "Don't overfit to this rare example." This single change stabilizes training, allowing the model to achieve the robust generalization we typically see from RL agents (see the sketch below).

Why this matters for your AI strategy:

Business leaders: This transforms fine-tuning from a high-risk gamble into a reliable investment. You avoid paying for compute cycles on runs that degrade your model's performance. DFT ensures your training budget produces a more robust, capable AI asset with a demonstrably higher ROI, giving you a tangible competitive edge.

Practitioners: DFT is a computationally free upgrade that requires no reference model and cuts VRAM usage and complexity compared to DPO or other RL methods. You get to beat established RL algorithms using a simple fine-tuning workflow.

Researchers: This paper provides a precise mathematical diagnosis of the long-observed generalization gap between standard fine-tuning and RL. By identifying the inverse-probability term as the source of instability, it offers a new theoretical framework for "reward rectification" within a supervised learning context. It's a foundational insight that challenges the established performance hierarchy and opens a new avenue of research into more robust optimization dynamics.
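
Based on the thread's description, the "one line" reweights each token's cross-entropy by the model's own detached probability, which cancels the 1/p factor in the gradient. A minimal PyTorch sketch; shapes are assumed and padding/masking is omitted:

```python
import torch
import torch.nn.functional as F

def dft_loss(logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
    """logits: [batch, seq, vocab]; targets: [batch, seq] (assumed shapes)."""
    logp = F.log_softmax(logits, dim=-1)
    token_logp = logp.gather(-1, targets.unsqueeze(-1)).squeeze(-1)

    # Standard SFT would be: loss = -token_logp
    # DFT: weight each token's loss by the model's own (detached) probability,
    # damping updates on rare, low-confidence tokens.
    loss = -token_logp.exp().detach() * token_logp   # the one-line change
    return loss.mean()
```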

New Anthropic research: Persona vectors. Language models sometimes go haywire and slip into weird and unsettling personas. Why? In a new paper, we find "persona vectors": neural activity patterns controlling traits like evil, sycophancy, or hallucination.
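
The tweet doesn't spell out the method, but directions like this are commonly extracted as a difference of mean activations between trait-eliciting and neutral prompts, then added or subtracted at a chosen layer during decoding. A hedged sketch of that generic pattern; all names and shapes here are assumptions, not the paper's code:

```python
import torch

def persona_vector(trait_acts: torch.Tensor, neutral_acts: torch.Tensor) -> torch.Tensor:
    """Difference of means over hidden states; shape [n_samples, hidden_dim] assumed."""
    return trait_acts.mean(dim=0) - neutral_acts.mean(dim=0)

def steer(hidden: torch.Tensor, direction: torch.Tensor, alpha: float) -> torch.Tensor:
    """alpha > 0 amplifies the trait; alpha < 0 suppresses it."""
    return hidden + alpha * direction
```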

We just shipped Gemini 2.5 Deep Think. It doesn't just recall research papers; it fuses ideas across papers in ways I haven't seen before. This level of capability demands careful evaluation. Model card below 👇

📢 Introducing the Alignment Project: a new fund for research on urgent challenges in AI alignment and control, backed by over £15 million.
▶️ Up to £1 million per project
▶️ Compute access, venture capital investment, and expert support
Learn more and apply ⬇️