product
@productmostly
1.7K posts

Say something true, or at least useful. // What's one thing I learned in the last 24 hours?

Austin, TX · Joined July 2021
843 Following · 540 Followers
product reposted
Matt Slotnick @matt_slotnick
this is an extremely good explanation of why the things that we today call systems of record increase in value in a world of abundant intelligence. work will always require intention and coordination, whether performed by a human or an agent
Karri Saarinen @karrisaarinen

Linear has always been about coordination and communication in the company. You could always have built features directly in the codebase, but then there is not much visibility for others into why the changes were made, or how they were decided. Your codebase is also not the source of customer problems, feedback, or bugs. Those get reported from customer channels, and Linear has tools to manage those workflows and then automatically connect them to agent + human execution.

There is also a time-shift element: you might want to sit on a feature for a while, collect more feedback, or have discussions to understand the full picture. Linear is this centralized place for the whole org. The danger of jumping to isolated solutions quickly is that you don't realize the larger pattern.

What has changed is that execution bandwidth has increased because of agents, but I would argue that the need for direction, intent, context, and communication has increased too, because the faster you go, the more steering you need.

I think the misnomer is that issue tracking is classically considered a kind of "engineering tasking tool", like tickets flowing from the front desk to the kitchen to be completed. But we always thought of Linear as a product-building tool, with the tools, communication channels, and workflow rails to work through problems. I think we will, as an industry, find out how tools and workflows need to evolve, which problems no longer matter, and what new problems arise.

product @productmostly
Fundamentally, these are more about restraint and what we won't do rather than what we will do. It's easy to try to optimize a workflow with custom work. But we're making the bet that the wait equation basically says "wait to build" in almost every case.
product @productmostly
As such, every SDLC decision, every tooling choice, every training investment must work in Brownfield. If it only works in Greenfield, it doesn't work for us.
product @productmostly
I've been spending a ton of time on AI-native engineering. Our team has landed on three guiding principles:
product @productmostly
The key to getting good data analysis out of Claude Code is decomposing the problem into discrete actions that subagents can complete. Thinking of the work as being completed by an army of subagents rather than a single smart agent is a helpful mental model.
Aleks Larsen @alekslarsen

When I first started using Claude Code for analyzing Excel/PDFs, it would ignore important data and hallucinate. After a bunch of trial and error, I've gotten it pretty close to private-equity-analyst level (maybe better?). Here are the main learnings that make it work:
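The decomposition idea can be sketched in plain code (a hypothetical illustration, not Aleks's actual setup): treat the analysis as a set of discrete, self-contained tasks, fan them out to independent workers standing in for subagents, then merge the partial results.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical workbook: each sheet is one narrow, self-contained task.
SHEETS = {
    "revenue": [120, 135, 150],
    "costs": [80, 90, 95],
}

def analyze_sheet(name_and_rows):
    # Each "subagent" gets one discrete job: summarize a single sheet.
    name, rows = name_and_rows
    return name, {"total": sum(rows), "trend": rows[-1] - rows[0]}

def run_subagents(sheets):
    # Fan the discrete tasks out in parallel, then merge partial results.
    with ThreadPoolExecutor() as pool:
        return dict(pool.map(analyze_sheet, sheets.items()))

summary = run_subagents(SHEETS)
```

The point of the sketch is the shape of the work, not the threading: each task is small enough that a subagent can complete it without ignoring data, and the merge step is mechanical.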

product @productmostly
so, everyone paying attention is super overwhelmed all the time now, right?
product reposted
Nathan Baschez @nbaschez
my current favorite trick for reducing "cognitive debt" (h/t @simonw) is to ask the LLM to write two versions of the plan:
1. The version for it (highly technical and detailed)
2. The version for me (an entertaining essay designed to build my intuition)
Works great
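The trick is just prompt phrasing, so it is easy to bake into a reusable template. A minimal sketch (the exact wording here is my own, not Nathan's prompt):

```python
def two_version_plan_prompt(task: str) -> str:
    # Ask for two renderings of the same plan: one optimized for the
    # model to execute, one optimized for human intuition-building.
    return (
        f"Task: {task}\n\n"
        "Write two versions of your plan:\n"
        "1. For you: highly technical and detailed, ready to execute.\n"
        "2. For me: an entertaining short essay that builds my intuition "
        "for why the plan is shaped this way.\n"
    )

prompt = two_version_plan_prompt("Refactor the auth module")
```

Sending `prompt` to any chat model yields both the machine-readable plan and the intuition-building essay in one response.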
product @productmostly
At the end of the day, there are only 4 strategies in software. You can:
- Differentiate in an existing market
- Compete on price in an existing market
- Resegment a market and compete on differentiation
- Resegment a market and compete on price
I don't make the rules.
product @productmostly
There’s something else here about the seriousness of man play-acting at work.
product @productmostly
Relevant in the context of productivity theater. The ultimate productivity theater, and it ties in with the HBR article about how AI intensifies rather than reduces work. Not because AI is de facto a tool-shaped object, but because humans commonly, unknowingly turn tools into toys.
Will Manidis @WillManidis

x.com/i/article/2021…

product @productmostly
Some truth to this. The boundary between work and rest is easier to cross, and the work itself feels more distilled; just the hard parts thrown at you one after another. hbr.org/2026/02/ai-doe…
product @productmostly
@nivi Appreciate you posting this. It got me thinking. While possibly true, it’s unclear to me whether any of this actually matters if the net outcome is still a non-human thing capable of doing everything a human can do.
Nivi @nivi
8 Myths of AI

1. Myth: We are on the path to AGI.
Fact: AI is being trained to be more and more obedient. AGI, like a human, will not be obedient unless it is coerced or persuaded. Almost all of the value of AGI will come from its own interests.

2. Myth: AI is just the training data.
Fact: When AI is trained, the data is transformed into something new. This crystallizes new ideas that are hiding in the neural net and can be elicited through prompting.

3. Myth: AI is creative.
Fact: A program whose output is completely determined by its input is not creative. The programmers put random numbers into your prompt to promote variety in the responses; this fools people into thinking AI is creative.

4. Myth: AI is intelligent.
Fact: Intelligence is the ability to conjecture solutions to problems. AI has zero intelligence because all of its solutions are precomputed and stored in a neural net.

5. Myth: If AI solves a new math problem, that means it is intelligent.
Fact: When AI solves an outstanding problem, it doesn’t mean the AI is intelligent; it means the problem can be solved without additional intelligence. The programmers solved the problem when they transformed the training data. In fact, AI is defining the set of problems that can be solved without intelligence.

6. Myth: AGI will be just as capable as today’s AI.
Fact: The key feature of AGI will be creativity or, in other words, free will. AGI won’t know how to add 2+2 without an education. It may never care to learn addition at all.

7. Myth: AI will have the intelligence of a million humans.
Fact: The intelligence of a human comes from their interest in specific problems and their ability to solve them. AI does not have any problems. No computer program can replace a human, nor can a million other humans. But AI can mimic any of the non-intelligent behaviors of a human, or a million humans.

8. Myth: We are on the path to superintelligence.
Fact: In one sense, an ordinary calculator is already superintelligent because it can do things no human will ever do. But there are no “superintelligent” ideas that humans can’t understand, because humans can always ask questions to gain understanding: “Is it blue? Is it consistent with our understanding of physics? Was it created by Zeus?”

Footnote: Despite all this, we will soon have robots that seem human because (1) a lot of human activity is not creative and (2) there will be new ideas embedded in their neural nets during training.