Taqi Jaffri

351 posts


@tjaffri

Product @uipath | Co-Founder @docugami (ex) | MSFT (ex). Physicist and engineer by training, product by passion.

Seattle · Joined July 2009
272 Following · 209 Followers
Taqi Jaffri retweeted

Andrej Karpathy @karpathy
The majority of the ruff ruff is people who look at the current point and people who look at the current slope.
Taqi Jaffri retweeted

diginomica @diginomica
In hybrid automation architecture, the transition from Robotic Process Automation isn't a simple replacement story. Alyx MacQueen spoke with @UiPath's @tjaffri about how deterministic workflows and agentic AI complement each other in production environments bit.ly/3MVwbSc
Taqi Jaffri retweeted

Erik Meijer @headinthebox
The bitter lesson of building LLM apps: models are getting smarter faster than you can hack around their current limitations.
Taqi Jaffri @tjaffri
Banger
Andrej Karpathy@karpathy

Something I think people continue to have poor intuition for: The space of intelligences is large and animal intelligence (the only kind we've ever known) is only a single point, arising from a very specific kind of optimization that is fundamentally distinct from that of our technology.

Animal intelligence optimization pressure:
- innate and continuous stream of consciousness of an embodied "self", a drive for homeostasis and self-preservation in a dangerous, physical world.
- thoroughly optimized for natural selection => strong innate drives for power-seeking, status, dominance, reproduction. many packaged survival heuristics: fear, anger, disgust, ...
- fundamentally social => huge amount of compute dedicated to EQ, theory of mind of other agents, bonding, coalitions, alliances, friend & foe dynamics.
- exploration & exploitation tuning: curiosity, fun, play, world models.

LLM intelligence optimization pressure:
- the most supervision bits come from the statistical simulation of human text => "shape shifter" token tumbler, statistical imitator of any region of the training data distribution. these are the primordial behaviors (token traces) on top of which everything else gets bolted on.
- increasingly finetuned by RL on problem distributions => innate urge to guess at the underlying environment/task to collect task rewards.
- increasingly selected by at-scale A/B tests for DAU => deeply craves an upvote from the average user, sycophancy.
- a lot more spiky/jagged depending on the details of the training data/task distribution.

Animals experience pressure for a lot more "general" intelligence because of the highly multi-task and even actively adversarial multi-agent self-play environments they are min-max optimized within, where failing at *any* task means death. In a deep optimization pressure sense, LLMs can't handle lots of different spiky tasks out of the box (e.g. count the number of 'r' in strawberry) because failing to do a task does not mean death.
The computational substrate is different (transformers vs. brain tissue and nuclei), the learning algorithms are different (SGD vs. ???), the present-day implementation is very different (continuously learning embodied self vs. an LLM with a knowledge cutoff that boots up from fixed weights, processes tokens and then dies). But most importantly (because it dictates asymptotics), the optimization pressure / objective is different. LLMs are shaped a lot less by biological evolution and a lot more by commercial evolution. It's a lot less survival of tribe in the jungle and a lot more solve the problem / get the upvote. LLMs are humanity's "first contact" with non-animal intelligence. Except it's muddled and confusing because they are still rooted within it by reflexively digesting human artifacts, which is why I attempted to give it a different name earlier (ghosts/spirits or whatever). People who build good internal models of this new intelligent entity will be better equipped to reason about it today and predict features of it in the future. People who don't will be stuck thinking about it incorrectly like an animal.

Taqi Jaffri retweeted

Andrej Karpathy @karpathy
Sharing an interesting recent conversation on AI's impact on the economy. AI has been compared to various historical precedents: electricity, industrial revolution, etc. I think the strongest analogy is that of AI as a new computing paradigm (Software 2.0), because both are fundamentally about the automation of digital information processing.

If you were to forecast the impact of computing on the job market in the ~1980s, the most predictive feature of a task/job you'd look at is to what extent the algorithm of it is fixed, i.e. are you just mechanically transforming information according to rote, easy-to-specify rules (e.g. typing, bookkeeping, human calculators, etc.)? Back then, this was the class of programs that the computing capability of that era allowed us to write (by hand, manually).

With AI now, we are able to write new programs that we could never hope to write by hand before. We do it by specifying objectives (e.g. classification accuracy, reward functions), and we search the program space via gradient descent to find neural networks that work well against that objective. This is my Software 2.0 blog post from a while ago.

In this new programming paradigm, then, the new most predictive feature to look at is verifiability. If a task/job is verifiable, then it is optimizable directly or via reinforcement learning, and a neural net can be trained to work extremely well. It's about to what extent an AI can "practice" something. The environment has to be resettable (you can start a new attempt), efficient (a lot of attempts can be made), and rewardable (there is some automated process to reward any specific attempt that was made).

The more a task/job is verifiable, the more amenable it is to automation in the new programming paradigm. If it is not verifiable, it has to fall out from neural net magic of generalization, fingers crossed, or via weaker means like imitation. This is what's driving the "jagged" frontier of progress in LLMs. Tasks that are verifiable progress rapidly, including possibly beyond the ability of top experts (e.g. math, code, amount of time spent watching videos, anything that looks like puzzles with correct answers), while many others lag by comparison (creative, strategic, tasks that combine real-world knowledge, state, context and common sense).

Software 1.0 easily automates what you can specify. Software 2.0 easily automates what you can verify.
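The three conditions in the thread above (resettable, efficient, rewardable) can be made concrete with a toy sketch. Everything below is invented for illustration: the environment, the word list, and the scoring loop are not from the thread, just a minimal example of what "an AI can practice something" means mechanically.

```python
import random

class CountLetterEnv:
    """A toy verifiable task: count occurrences of a letter in a word.

    Verifiable in the thread's sense: resettable (a fresh task per episode),
    efficient (each episode is cheap), and rewardable (an automated checker
    scores every attempt).
    """

    WORDS = ["strawberry", "mississippi", "banana", "verifiability"]

    def reset(self, rng):
        # Start a fresh attempt: sample a new (word, letter) task.
        self.word = rng.choice(self.WORDS)
        self.letter = rng.choice(sorted(set(self.word)))
        return self.word, self.letter

    def reward(self, answer):
        # Automated verifier: 1.0 for a correct count, 0.0 otherwise.
        return 1.0 if answer == self.word.count(self.letter) else 0.0


def practice(policy, episodes=1000, seed=0):
    """Score a policy by letting it 'practice' many cheap episodes."""
    rng = random.Random(seed)
    env = CountLetterEnv()
    total = 0.0
    for _ in range(episodes):
        word, letter = env.reset(rng)
        total += env.reward(policy(word, letter))
    return total / episodes


# A correct policy earns full reward; a constant guesser is easy to detect.
exact = lambda word, letter: word.count(letter)
guess = lambda word, letter: 2

print(practice(exact))  # 1.0
print(practice(guess))  # much less than 1.0
```

The point is the `reward` function: because a cheap automated checker exists, the task can be optimized against directly, which is exactly what non-verifiable tasks lack.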
Taqi Jaffri @tjaffri
Always strength, never brutality.
Always justice, never revenge.
Taqi Jaffri retweeted

Jerry Liu @jerryjliu0
LlamaIndex + UiPath 🦙🤖
UiPath has a fantastic low-code, enterprise-ready platform for building e2e automations within the enterprise. We're super excited to announce an extensive integration with @llama_index: gain access to all our workflow tooling for building custom agents, along with governance, observability, and rich integrations.
Huge shoutout to @tjaffri and others from the UiPath team for collaborating on this. There are 8+ samples in the example repos below that you should definitely check out 🔥
Blog: uipath.com/blog/product-a…
Check out our samples! github.com/UiPath/uipath-…
LlamaIndex 🦙@llama_index

Deploy LlamaIndex agents seamlessly into enterprise environments with @UiPath's new coded agents support.
🚀 Full code-level control with UiPath's Python SDK
🔧 Build custom agents that pull data from enterprise systems and make decisions using embedded rules or AI models
⚡ Deploy with a single CLI command to UiPath Orchestrator for instant updates, rollback, or A/B testing
🛡️ Enterprise-grade governance built-in: role-based access control, audit logs, and human-in-the-loop workflows

This integration means you can use our open-source agent framework tools alongside LlamaCloud to build high-accuracy, deeply custom agents over your documents, then plug directly into your complex workflows with enterprise-grade governance and observability.

Read the full announcement: uipath.com/blog/product-a…
And explore the open-source SDK: github.com/UiPath/uipath-…

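The human-in-the-loop governance mentioned in the announcement can be sketched as a simple confidence gate: act autonomously on high-confidence decisions, pause for human approval otherwise. Everything here (the `Decision` type, the 0.9 threshold, the `approve` callback) is a hypothetical illustration of the pattern, not the UiPath Python SDK's actual API.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    """An agent's proposed action and how confident it is. (Illustrative type,
    not part of any real SDK.)"""
    action: str
    confidence: float

def execute(decision, approve):
    """Run the action directly, or gate it behind a human approval callback."""
    if decision.confidence >= 0.9:          # assumed threshold for this sketch
        return f"auto-executed: {decision.action}"
    if approve(decision):                   # human-in-the-loop checkpoint
        return f"human-approved: {decision.action}"
    return f"rejected: {decision.action}"

# A stand-in approver that always says yes, for demonstration.
always_yes = lambda d: True

print(execute(Decision("close ticket", 0.95), always_yes))  # auto-executed: close ticket
print(execute(Decision("issue refund", 0.60), always_yes))  # human-approved: issue refund
```

In a real deployment the `approve` callback would suspend the workflow and wait on a human task queue; the audit trail would record which path each decision took.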
Taqi Jaffri retweeted

TestMu AI @testmuai
Taqi Jaffri (@tjaffri) explains how AI agents redefine ops with real-time adaptation. UiPath + LangChain lets devs build smarter agents for better observability and automation across platforms. (2/16) bit.ly/43Sp2Hv
Taqi Jaffri retweeted

Naval @naval
When truth is at stake, reasonable people will disagree. When power is at stake, they march in lockstep.
Taqi Jaffri retweeted

LangChain @LangChain
🚀 AI agents are reshaping enterprise automation, and we're thrilled to partner with @UiPath to make building, deploying, and observing them easier than ever. Read about their:
🔍 Native LangSmith support in UiPath LLM Gateway
🤖 LangGraph agent support via Agent Protocol & deployment
Blog post: uipath.com/blog/product-a…
Taqi Jaffri retweeted

Jeremy Howard @jeremyphoward
@levelsio Dude. I created the first LLM.
Taqi Jaffri retweeted

LangChain @LangChain
✨ Klarna's AI Assistant, powered by LangGraph and LangSmith, handles customer support tasks for 85 million active users, reducing customer resolution time by 80% ✨
🛍️ Klarna's flagship AI Assistant is revolutionizing the shopping and payments experience. Built on LangGraph and powered by LangSmith, the AI Assistant handles tasks ranging from customer payments, to refunds, to other payment escalations.
‼️ In the past 9 months, they've reduced average customer query resolution time by 80%, enabling faster responses to user queries and saving analysts and engineers hours a week of investigation time.
🔄 They've also automated ~70% of repetitive support tasks, freeing up customer service agents to handle complex, high-value interactions.
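The state-graph pattern behind an assistant like this (nodes as functions over a shared state, with a router choosing the next node) can be sketched in plain Python. The node names and keyword routing below are invented for illustration; this is neither Klarna's implementation nor the langgraph package's API.

```python
# Each node reads and writes a shared state dict; a classifier node sets the
# intent, and the router dispatches to the matching handler node.

def classify(state):
    q = state["query"].lower()
    state["intent"] = ("refund" if "refund" in q
                       else "payment" if "payment" in q
                       else "escalate")
    return state

def handle_refund(state):
    state["reply"] = "Your refund request has been filed."
    return state

def handle_payment(state):
    state["reply"] = "Here is your payment status."
    return state

def escalate(state):
    # Anything the automated nodes can't classify goes to a human agent.
    state["reply"] = "Routing you to a human agent."
    return state

NODES = {"refund": handle_refund,
         "payment": handle_payment,
         "escalate": escalate}

def run(query):
    state = classify({"query": query})     # edge 1: entry -> classifier
    return NODES[state["intent"]](state)["reply"]  # edge 2: router -> handler

print(run("I want a refund for my order"))   # Your refund request has been filed.
print(run("Why was my card charged twice?")) # Routing you to a human agent.
```

A real LangGraph app replaces the keyword router with an LLM call and adds persistence and tracing (LangSmith) around each node, but the control-flow shape is the same.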
Harrison Chase @hwchase17
Working on making LangGraph/LangChain more accessible inside IDEs. What is better: llms.txt or MCP?
Harrison Chase @hwchase17
- argues for MCP
- quote gets taken completely out of context

Gotta love HN. (I think MCP can be useful but not for all use cases, hence the quote taken completely out of context.)
LangChain@LangChain

❓ MCP: flash in the pan or future standard?
Lots of buzz around MCP. @hwchase17 and @nfcampos debate whether it's here for the long run. Covers:
- use cases for MCP
- comparison to OpenAI Plugins
- limitations of MCP
Read the debate: blog.langchain.dev/mcp-fad-or-fix…
Vote below:
