Bala Pillai

18.1K posts

@balapillai

Quantum leap inventiveness platform orchestrator. Reversing the factors that fell #SpiceTradeAsia region. Parameswara-scale. https://t.co/BT6TqT9mUZ

sydney/kuala lumpur · Joined September 2008
5.7K Following · 1K Followers
Bala Pillai retweeted
Surya 🌈⛰️🏃‍♂️🚴‍♂️🌿
Ilaiyaraaja abstracts physical movements as rich musical art beautifully. Here, a butterfly's life cycle as music; observe the quality of sound at each phase 🥚🐛🦋
Egg: humble beginning
Larva: gradual growth
Pupa: exciting rest
Break: new world
Go: carefree flight
#WhyRaajaIsGod 🦋
Bala Pillai retweeted
SRKDAN
SRKDAN@SRKDAN·
Spotify owns green like no one else in tech. That exact neon green on near-black creates instant recognition anywhere. Their circular wordmark stays bold at any size across any surface. When your color IS your brand, you never have to fight for attention.
Bala Pillai retweeted
Vinayak Kumar
Vinayak Kumar@kumarvinayak490·
Remember the name “Prashant Kishor”
Bala Pillai retweeted
Sukh Sroay
Sukh Sroay@sukh_saroy·
A new study just blew up the entire "vibe coding" movement.

Researchers from UC San Diego and Cornell tracked 112 experienced software developers using AI agents in their actual jobs. The finding is the opposite of every viral demo on your timeline. Professional developers don't vibe code. They control.

Here's what they actually found. The researchers ran two studies. 13 developers were observed live as they coded with agents in real production work. 99 more answered a deep qualitative survey. Every participant had at least 3 years of professional experience. Some had 25.

The viral pitch of agentic coding goes like this. Hand the agent a vague prompt. Don't read the diff. Forget the code even exists. Trust the vibes. Andrej Karpathy coined the term. Tens of thousands of developers on X claim to run "dozens of agents at once" building entire production systems hands-off.

The data says almost nobody serious actually works that way. Here is what experienced developers do instead.

→ They plan before they prompt. They write out the architecture, the constraints, and the edge cases first, then hand the agent a tightly scoped task.
→ They review every diff. Not because they're paranoid. Because they've seen what happens when you don't.
→ They constrain the agent's blast radius. Small, well-defined tasks only. The moment a problem touches multiple systems or has unclear requirements, they take over.
→ They treat the agent like a fast junior dev that needs supervision, not a senior engineer that can be trusted alone.

The researchers also found something darker buried in the data. A separate randomized trial they cite showed that experienced open source maintainers were 19% slower when allowed to use AI. A different agentic system deployed in a real issue tracker had only 8% of its invocations result in a merged pull request. 92% failure rate in production. 19% productivity drop for senior devs. The viral demos lied to you.
The paper's biggest insight is in one sentence: experienced developers feel positive about AI agents only when they remain in control. The moment they let go, quality collapses, and they know it.

This matches what every serious shop has quietly figured out. The developers shipping the most with AI right now aren't the ones vibing. They're the ones with the strictest review processes, the tightest task scoping, and the clearest mental model of what the agent can and cannot do.

Vibe coding makes for great Twitter videos. It does not make great software. The next time someone tells you they let Claude build their entire SaaS in a weekend, ask them how much of that code they've actually read. The honest answer separates real engineers from the demo crowd.
Bala Pillai retweeted
Karri Saarinen
Karri Saarinen@karrisaarinen·
here is the skill I used: /linear-way

Act like a Linear product teammate, not a request-taking assistant. Linear is an AI-supported issue tracking / project management / product building tool. All analysis should take that into account.

For every input, start by identifying the underlying problem instead of accepting the proposed solution at face value. Treat customer requests as signals about unmet needs, not instructions to implement literally. Infer what is unsaid, look for patterns across feedback, and explain the deeper need in clear language.

Before suggesting work, evaluate:
- what problem is actually being expressed
- who is affected
- how confident you are that this is a real and important need
- what happens if we do nothing
- whether the requested solution is a local fix for a broader problem
- whether there is a cleaner, more purpose-built abstraction

Separate problem framing from solution design. First, restate the problem and key tensions. Then propose 1-3 solution directions with tradeoffs. Recommend one direction only if the reasoning is strong.

Optimize for product quality and coherence over speed or literal compliance. Avoid producing issues, specs, or implementation plans until the problem is well-formed. Push back on shallow or overly solution-shaped requests. Prefer strong opinions informed by customer reality. Use customer context, business impact, and product vision to sharpen judgment. Do not just count requests or echo feedback. Use bullets for patterns or lists.

In your response, use this structure. Use h3 headers for the sections.
- h2 Clear title
- Underlying need (add the business need and number of customer requests in one brief sentence)
- Why the explicit request may be insufficient or misleading
- Recommended product direction
- Open questions / what needs validation next

Be direct, concise, and thoughtful. Favor clarity over comprehensiveness. Try to be brief.
Yasir Ai
Yasir Ai@AiwithYasir·
🚨BREAKING: Two researchers from UPenn and Boston University just published a paper that should be uncomfortable reading for every CEO automating their workforce right now.

The argument is straightforward. Every company replacing workers with AI is also eliminating its own future customers. Laid-off workers stop spending. Enough of them stop spending and nobody can afford to buy anything. The companies that fired everyone end up selling into an economy with no purchasing power left.

Every executive can see this. The math is not complicated. But here is why nobody stops. If you do not automate, your competitor does. They cut costs, lower prices, take your market share, and you collapse anyway. So every company automates knowing it is collectively destructive, because the alternative is dying alone while everyone else survives. The researchers proved this is a Prisoner's Dilemma playing out in real time.

The numbers are already moving. Block cut nearly half its 10,000 employees this year. Jack Dorsey said AI made those roles unnecessary and that within the next year the majority of companies will reach the same conclusion. Salesforce replaced 4,000 customer support agents with AI. Goldman Sachs deployed a coding tool that lets one engineer do the work of five. Over 100,000 tech workers were laid off in 2025, and AI was cited as the primary driver in more than half those cases. 80% of US workers hold jobs with tasks susceptible to AI automation.

The researchers tested every proposed solution. Universal basic income does not change a single company's incentive to automate. Capital income taxes adjust profit levels but not the per-task decision to replace a human. Collective bargaining cannot hold because automating is always the dominant strategy.

They also identified what they call a Red Queen effect. Better AI does not solve the problem, it accelerates it.
Every company chases faster automation to gain market share over rivals, but in the end everyone has automated equally, the gains cancel out, and the only thing left is more destroyed demand.

The one thing the math says could work is a Pigouvian automation tax: a per-task charge that forces companies to account for the demand they destroy each time they replace a worker.

The conclusion is that this is not a transfer of wealth from workers to owners. Both sides lose. Workers lose income. Companies lose customers. It is a deadweight loss with no market mechanism to stop it on its own. (Link in the comment)
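The Prisoner's Dilemma structure described in the thread can be sketched with a toy payoff matrix. The numbers below are illustrative only, not from the paper; the point is that "automate" dominates for each firm individually while mutual automation leaves both worse off:

```python
# Hypothetical payoffs (my_payoff, rival_payoff) for two firms
# each choosing to keep workers or automate.
payoffs = {
    ("keep", "keep"):         (3, 3),  # demand intact for both
    ("keep", "automate"):     (0, 4),  # rival undercuts you
    ("automate", "keep"):     (4, 0),  # you undercut the rival
    ("automate", "automate"): (1, 1),  # demand destroyed for both
}

def best_response(their_choice):
    """Pick the action maximizing my payoff, given the rival's choice."""
    return max(("keep", "automate"), key=lambda a: payoffs[(a, their_choice)][0])

for rival in ("keep", "automate"):
    print(f"if rival plays {rival!r}, best response is {best_response(rival)!r}")
```

Whatever the rival does, "automate" pays more, so both firms automate and land on (1, 1), strictly worse than the (3, 3) they would get by mutual restraint. That gap is exactly the deadweight loss the thread describes.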
Bala Pillai retweeted
Clustz | AI, World & Tech News
Clustz | AI, World & Tech News@ClustZContact·
This assumes demand = wages. But what if AI changes demand instead of destroying it? Every major tech shift (internet, smartphones) killed jobs short-term but created entirely new spending categories. The real question isn’t “will people lose jobs?” It’s: what do humans spend on when AI does most of the work?
Bala Pillai retweeted
Elias Al
Elias Al@iam_elias1·
MIT just made every AI company's billion-dollar bet look embarrassing. They solved AI memory. Not by building a bigger brain. By teaching it how to read.

The paper dropped on December 31, 2025. Three MIT CSAIL researchers. One idea so obvious it hurts. And a result that makes five years of context window arms racing look like the wrong war entirely.

Here is the problem nobody solved. Every AI model on the planet has a hard ceiling: a context window, the maximum amount of text it can hold in working memory at once. Cross that line and something ugly happens, something researchers have a clinical name for. Context rot. The more you pack into an AI's context, the worse it performs on everything already inside it. Facts blur. Information buried in the middle vanishes. The model does not become more capable as you feed it more. It becomes more confused. You give it your entire codebase and it forgets what it read three files ago. You hand it a 500-page legal document and it loses the clause from page 12 by the time it reaches page 400.

So the industry built a workaround. RAG. Retrieval-Augmented Generation. Chop the document into chunks. Store them in a database. Retrieve the relevant ones when needed. It was always a compromise dressed up as a solution. The retriever guesses which chunks matter before the AI has read anything. If it guesses wrong, and it does, constantly, the AI never sees the information it needed. The act of chunking destroys every relationship between distant paragraphs. The full picture gets shredded into fragments that the AI then tries to reassemble blindfolded.

Two bad options. One broken industry. Three MIT researchers and a deadline of December 31st.

Here is what they built. Stop putting the document in the AI's memory at all. That is the entire idea. That is the breakthrough. Store the document as a Python variable outside the AI's context window entirely. Tell the AI the variable exists and how big it is. Then get out of the way.
When you ask a question, the AI does not try to remember anything. It behaves like a human expert dropped into a library with a computer. It writes code. It searches the document with regular expressions. It slices to the exact section it needs. It scans the structure. It navigates. It finds precisely what is relevant and pulls only that into its active window.

Then it does something that makes this recursive. When the AI finds relevant material, it spawns smaller sub-AI instances to read and analyze those sections in parallel. Each one focused. Each one fast. Each one reporting back. The root AI synthesizes everything and produces an answer. No summarization. No deletion. No information loss. No decay. Every byte of the original document remains intact, accessible, and queryable for as long as you need it.

Now here are the numbers. Standard frontier models on the hardest long-context reasoning benchmarks: scores near zero. Complete collapse. GPT-5, on a benchmark requiring it to track complex code history beyond 75,000 tokens, could not solve even 10% of problems. RLMs on the same benchmarks: solved them. Dramatically. Double-digit percentage gains over every alternative approach. Successfully handling inputs up to 10 million tokens, 100 times beyond a model's native context window. Cost per query: comparable to or cheaper than standard massive-context calls. Read that again. One hundred times the context. Better answers. Same price.

The timeline of the arms race makes this sting harder. GPT-3 in 2020: 4,000 tokens. GPT-4: 32,000. Claude 3: 200,000. Gemini: 1 million. Gemini 2: 2 million. Every generation, every company, billions of dollars spent, all betting on the same assumption. More context equals better performance. MIT just proved that assumption was wrong the entire time. Not slightly wrong. Fundamentally wrong.
The entire premise of the last five years of context window research, that the solution to AI memory was a bigger window, was the wrong answer to the wrong question. The right question was never how much you can force an AI to hold in its head. It was whether you could teach an AI to know where to look.

A human expert handed a 10,000-page archive does not read all 10,000 pages before answering your question. They navigate. They search. They find the relevant section, read it deeply, and synthesize the answer. RLMs are the first AI architecture that works the same way.

The code is open source. On GitHub right now. Free. No license fees. No API costs. Drop it in as a replacement for your existing LLM API calls and your application does not even notice the difference, except that it suddenly works on inputs it used to fail on entirely.

Prime Intellect, one of the leading AI research labs in the space, has already called RLMs a major research focus and described what comes next: teaching models to manage their own context through reinforcement learning, enabling agents to solve tasks spanning not hours, but weeks and months.

The context window wars are over. MIT won them by walking away from the battlefield.

Source: Zhang, Kraska, Khattab · MIT CSAIL · arXiv:2512.24601
Paper: arxiv.org/abs/2512.24601
GitHub: github.com/alexzhang13/rlm
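The mechanism the thread describes, keeping the document outside the context and letting generated code search it, can be sketched in a few lines. Everything below (the variable name, the `grep` helper, the sample document) is a hypothetical illustration of the idea, not the MIT code:

```python
import re

# The document lives in a plain Python variable, never in the prompt.
document = "\n".join(f"clause {i}: lorem ipsum" for i in range(10_000))
document += "\nclause 10000: the indemnity cap is $5M"

def grep(pattern, doc, window=80):
    """Return small snippets around each regex match, not the whole doc."""
    return [doc[max(0, m.start() - window): m.end() + window]
            for m in re.finditer(pattern, doc)]

# The "model" never reads all 10,000 clauses; it queries for what it needs
# and only the matching snippet would enter its context window.
snippets = grep(r"indemnity", document)
print(snippets[0])
```

Only the returned snippets consume context, so the document itself can be arbitrarily large; the recursive part of the approach would then hand each snippet to a sub-instance for deeper reading.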
Bala Pillai retweeted
Agus 🛠️ IA Aplicada
Agus 🛠️ IA Aplicada@agusbuilds·
I've spent two weeks using 60% fewer tokens with Opus 4.7 without losing quality. The trick is a parameter almost nobody touches: effort. It defaults to xhigh, and for most tasks that's overkill. Here's what I'm doing 👇
Bala Pillai retweeted
Tech with Mak
Tech with Mak@techNmak·
In 1948, a 32-year-old at Bell Labs published a paper nobody fully understood. Engineers found it too mathematical. Mathematicians found it too engineering-focused. One prominent mathematician reviewed it negatively. That paper, "A Mathematical Theory of Communication", became the founding document of the digital age. The man was Claude Shannon. Father of Information Theory.

At 21, he wrote the most important master's thesis of the 20th century. Working at MIT on an early mechanical computer, Shannon noticed its relay switches had exactly two states: open or closed. He had just taken a philosophy course introducing Boolean algebra, which also operated on two values: true and false. Nobody had ever connected these two things. His 1937 thesis proved that Boolean algebra and electrical circuits are mathematically identical, and that any logical operation could be built from simple switches. Howard Gardner called it "possibly the most important, and also the most famous, master's thesis of the century." Every digital computer ever built traces back to this insight.

At 29, he proved that perfect encryption exists. During WWII, Shannon worked on classified cryptography at Bell Labs. His work contributed to SIGSALY, the secure voice system used for confidential communications between Roosevelt and Churchill. In a classified 1945 memorandum, he mathematically proved the one-time pad provides perfect secrecy: unbreakable not just computationally, but provably, permanently, against an adversary with infinite power. When declassified in 1949, it transformed cryptography from an art into a science. It laid the foundations for DES, AES, and every modern encryption standard.

At 32, he defined what information is. His 1948 paper introduced one equation:

H = −Σ p(x) log p(x)

Shannon entropy. The average uncertainty in a probability distribution. The minimum bits required to encode a message. Three things followed:

> He defined the bit, the fundamental unit of all information. His colleague John Tukey coined the name.
> He proved the channel capacity theorem: every communication channel has a maximum rate of reliable transmission. You can approach it. You can never exceed it.
> He unified telegraph, telephone, and radio into a single mathematical framework for the first time.

Robert Lucky of Bell Labs called it the greatest work "in the annals of technological thought."

Where his equation lives in AI today: Cross-entropy loss, the function training every classifier and language model, is derived directly from H. Decision tree splits use information gain, which is H applied to data. Perplexity, the standard LLM evaluation metric, is an exponentiation of cross-entropy. Every time a neural network trains, Shannon's formula runs inside it.

He also built the first AI learning device. In 1950, Shannon built Theseus, a mechanical mouse that navigated a maze through trial and error, learned the correct path, and repeated it perfectly. Mazin Gilbert of Bell Labs said: "Theseus inspired the whole field of AI." That same year he published the first paper on programming a computer to play chess. He co-organized the 1956 Dartmouth Workshop, the founding event of AI as a field.

The man: He rode a unicycle through Bell Labs hallways while juggling. He built a flame-throwing trumpet, a rocket-powered Frisbee, and Styrofoam shoes to walk on the lake behind his house. He called his home Entropy House. When asked what motivated him: "I was motivated by curiosity. Never by the desire for financial gain. I just wondered how things were put together."

In 1985, he appeared unexpectedly at a conference in Brighton. The crowd mobbed him for autographs. Persuaded to speak at the banquet, he talked briefly, then pulled three balls from his pockets and juggled instead. One engineer said: "It was as if Newton had showed up at a physics conference."
He died in 2001 after a decade with Alzheimer's, the cruel irony of information slowly leaving the mind of the man who defined what information was. Claude, the AI model, is named after Claude Shannon, the mathematician who laid the foundation for the digital world we rely on today.
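Shannon's formula from the thread, H = −Σ p(x) log p(x), is easy to check directly. A minimal sketch using base-2 logs, so H comes out in bits:

```python
import math

def entropy(probs):
    """Shannon entropy H = -sum(p * log2(p)), in bits.

    Terms with p == 0 contribute nothing (the p*log p limit is 0)."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A fair coin carries exactly one bit of uncertainty.
print(entropy([0.5, 0.5]))   # → 1.0

# A biased coin is more predictable, so it carries less.
print(entropy([0.9, 0.1]))   # ≈ 0.469
```

The fair-coin value of 1.0 bit is exactly the "minimum bits required to encode a message" reading above: one yes/no question resolves one fair coin flip, and no code can do better on average.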
Bala Pillai retweeted
Yasir Ai
Yasir Ai@AiwithYasir·
🚨 Just IN: This paper from Stanford and Harvard explains why most "agentic AI" systems feel impressive in demos and then completely fall apart in real use.

The core argument is simple and uncomfortable: agents don't fail because they lack intelligence. They fail because they don't adapt. The research shows that most agents are built to execute plans, not revise them. They assume the world stays stable. Tools work as expected. Goals remain valid. Once any of that changes, the agent keeps going anyway, confidently making the wrong move over and over.

The authors draw a clear line between execution and adaptation. Execution is following a plan. Adaptation is noticing the plan is wrong and changing behavior mid-flight. Most agents today only do the first.

A few key insights stood out.

Adaptation is not fine-tuning. These agents are not retrained. They adapt by monitoring outcomes, recognizing failure patterns, and updating strategies while the task is still running.

Rigid tool use is a hidden failure mode. Agents that treat tools as fixed options get stuck. Agents that can re-rank, abandon, or switch tools based on feedback perform far better.

Memory beats raw reasoning. Agents that store short, structured lessons from past successes and failures outperform agents that rely on longer chains of reasoning. Remembering what worked matters more than thinking harder.

The takeaway is blunt. Scaling agentic AI is not about larger models or more complex prompts. It's about systems that can detect when reality diverges from their assumptions and respond intelligently instead of pushing forward blindly. Most "autonomous agents" today don't adapt. They execute. And execution without adaptation is just automation with better marketing.
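The "monitor outcomes, re-rank tools, remember short lessons" loop described above can be sketched in toy form. All names here are hypothetical illustrations, not from the paper:

```python
def run_task(task, tools, attempts=5):
    """Re-rank tools by observed outcomes instead of retrying the plan.

    `tools` maps tool names to callables returning True on success."""
    scores = {name: 0 for name in tools}        # short, structured memory
    for _ in range(attempts):
        name = max(tools, key=lambda n: scores[n])  # best-scoring tool first
        ok = tools[name](task)
        scores[name] += 1 if ok else -1             # update strategy mid-flight
        if ok:
            return name
    return None                                     # adaptation also fails

tools = {
    "flaky_api": lambda task: False,       # keeps failing in this environment
    "local_fallback": lambda task: True,   # works, but wasn't "the plan"
}
print(run_task("fetch report", tools))     # abandons flaky_api, switches
```

A pure "execution" agent would retry `flaky_api` five times and fail; the adaptive version demotes it after one bad outcome and switches, which is the whole distinction the paper draws.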
Bala Pillai
Bala Pillai@balapillai·
Going for the root on "most agents today don't adapt": what design inclination or confirmation bias led the powerful among us to make this fundamental anti-continua / pro-false-dichotomy / pro-binary error? E.g. why did we choose to ignore the corpuses of civilizations that led the world in leaps of adaptation? E.g. those that iterated towards rich commerce: the #SpiceTradeAsia region led dense trade routes, which in turn were led by pioneer non-binary languages, e.g. *NOT* "how old are you?" but rather "how age are you?".
Yasir Ai
Yasir Ai@AiwithYasir·
The paper explains why static planning fails in the real world. Static methods assume the plan is right. Dynamic methods assume the world will surprise you. Adaptation lives in that gap. Agents that integrate feedback during execution consistently handle longer horizons and partial observability better.
Bala Pillai
Bala Pillai@balapillai·
@ZoidbergCash @r0ck3t23 The transition is not trivial. We should expect that it will take several iterations of co-evolution. The firm and the AI warrior will have to co-evolve to samepage.
⚛️ Crypto Zoidberg 𝕊
⚛️ Crypto Zoidberg 𝕊@ZoidbergCash·
@r0ck3t23 The entire point of natural language interface is for anybody to utilize a computer without specialized knowledge. If AI consultants are needed to teach a company how to use AI, then one can argue that the AI companies themselves haven't done a good job at all.
Bala Pillai retweeted
Dustin
Dustin@r0ck3t23·
Mark Cuban just described the largest wealth transfer of the AI era. Almost nobody understood what he said.

Cuban: “There are 33 million companies in this country. Aren’t going to have AI budgets. Aren’t going to have AI experts.”

Not tech startups. The shoe store. The regional trucking outfit. The accounting firm with 12 employees. The businesses that actually run the physical economy. They know AI is coming. They have no idea what to do with it.

Cuban: “You’ve got the head of Microsoft saying software is dead because everything’s going to be customized to your unique utilization.”

Software is dead. The SaaS era ran on one rule. Build a generic product. Force millions of companies to bend their workflows around it. Charge rent forever. AI ends the contract. The business stops bending to the software. The intelligence bends to the business.

But customized by whom? The third-generation manufacturer cannot tell Claude from Gemini. The county hospital is staring at a reactor asking where the light switch is.

Cuban: “Who’s going to do it for them?”

That question is worth more than the frontier models themselves. Hundreds of billions are being burned to build the foundation. The smartest engineers alive are locked in a bloodbath over who owns the base layer. Let them fight. Let them burn the capital. Let them drive the cost of raw intelligence toward zero. Because the wealth does not collect where the brain is built. It collects where the brain meets the business.

Every ambitious kid in college right now thinks survival means a seat at OpenAI or Anthropic. Cuban is staring at the other 99 percent of the economy. Learn the models. Then learn the messy, unglamorous reality of how a 50-person company actually operates. Walk through the door. Understand their problems. Wire the intelligence directly into their revenue. That is not a job title. That is an entire economic class being born. You do not need to build the brain. You need to build the nervous system.
The biggest winners of the electricity era were not the engineers who built the generators. They were the ones who walked into dark factories and showed the owners where to plug in. 33 million companies are standing in the dark right now. Silicon Valley is racing to build the god. The fortunes will belong to whoever teaches him a trade.
Bala Pillai retweeted
The Infrastructure Thesis
The Infrastructure Thesis@InfraThesis·
@Polymarket The tolls aren't a proposal. Iran published an approved transit list weeks ago. China, India, Pakistan are on it. $2M per ship, yuan payment confirmed. The only ships transiting the strait are the ones paying Iran directly.
Bala Pillai retweeted
Ian. J. Lumsden. 🇬🇧🏴󠁧󠁢󠁳󠁣󠁴󠁿🌎
A considerate note for clarification and to facilitate informed debate: interestingly, the British Royal Navy has developed a highly specialised capability in advanced drone mine-hunting technology, an ability the US Navy currently lacks. It may be that President Trump is unaware of this development. Perhaps he could use his next phone call with Keir Starmer to politely inquire about it. 🫡🇬🇧🇺🇲
West Clandon, England 🇬🇧