cryptonium @cryptonium93
11K posts
Joined May 2021
4.1K Following · 645 Followers
cryptonium@cryptonium93·
@paoloanzn I don't think you read it right. I think they were paying for a lot of different SaaS products and instead they have now implemented those solutions better internally and don't need to pay for those SaaS anymore.
4nzn@paoloanzn·
the CEO of Vercel saying the SaaS apocalypse is real because they replaced internal tools with AI-generated apps is so funny to me… you run a software company, bro. You have engineers everywhere. If YOU couldn't replace your own internal tooling with vibe-coded apps, that would honestly be embarrassing, AI or not. That's like a mechanic saying cars are easy to fix and concluding nobody needs mechanics anymore. Yeah, no shit it's easy for YOU. 90% of businesses out there don't have a single person on staff who knows what an API call even is. They're not replacing their CRM with something they prompted in Claude Code. The disconnect is wild: these tech CEOs live in such a bubble that they genuinely think their experience is universal. Your company literally builds deployment infrastructure; obviously you can ship internal tools fast. The SaaS apocalypse might come eventually, idk, but please get back in touch with reality.
Guillermo Rauch@rauchg

Almost every SaaS app inside Vercel has now been replaced with a generated app or agent interface, deployed on Vercel. Support, sales, marketing, PM, HR, dataviz, even design and video workflows. It’s shocking. The SaaSpocalypse is both understated and overstated. Over because the key systems of record and storage are still there (Salesforce, Snowflake, etc.) Understated because the software we are generating is more beautiful, personalized, and crucially, fits our business problems better. We struggled for years to represent the health of a Vercel customer properly inside Salesforce. Too much data (trillions of consumption data points), the ontology of Vercel was a mismatch to the built-in assumptions, and the resulting UI was bizarre. We generated what we needed instead. When you don’t need a UI, you just ask an agent with natural language. We’ve also been moving off legacy systems with poor, slow, outdated, and inconsistent APIs, as well as just dropping abstraction down to more traditional databases. UI is a function 𝑓 of data (always has been), and that 𝑓 is increasingly becoming the LLM.

cryptonium@cryptonium93·
@burkov You saw billions taken away from the SaaS market for a reason. Everyone is building internal tooling and systems first, maturing their AI workflows in the process.
BURKOV@burkov·
I wish I could see some of those millions of companies making lots of money out of vibe-coded solutions. Without real names, it looks more like the second dotcom boom to me: everyone has a domain.com (a vibe-coded software project) without any paying customers. I have been arguing this for a long time: the bottleneck is not people's inability to code. The bottleneck is the lack of ideas that software startup accelerators haven't brute-forced yet. The money should be at the intersection of software and the real world. And this is where vibe coding breaks its teeth.
Logan Kilpatrick@OfficialLoganK

@burkov so many companies are making lots of money!

cryptonium@cryptonium93·
@nmatt0 You don't know how bad it can get until you start talking to human liaisons for bad AI all day long. Instead of "I don't know" or any human response you just get 10 paragraphs of 99% useless AI text to simple questions.
Matt Brown@nmatt0·
If you haven't been at a large American corporation in the last few years, you have NO IDEA how true this is. We are SO EARLY. If you are using Claude Code or Cursor, you are in the top 1%. There are corporations where the only AI employees have access to is Copilot or some old, locked-down on-prem ChatGPT model. They talk to the chatbot like it's a toy. They celebrate and report to their management that they used AI to edit an Excel file. They generate 10-line PowerShell scripts they could have done by hand. BUT they need to be able to tell leadership that they are "Using AI". They collect AI usage data and make graphs and charts, all to make leadership happy. Deep down, they literally have no idea what to do with this technology. It's like aliens handing cavemen a nuclear reactor.
unusual_whales@unusual_whales

Microsoft CEO: The biggest obstacle to expanding artificial intelligence is persuading people to change the way they work.

cryptonium@cryptonium93·
@jasonbosco @59thProfile The best people I know at debugging production systems are better than every coder involved in any of the services. Half the time the coders tell them the wrong thing. They know the architecture, they know the systems, and they read the code as needed to follow the details.
Jason Bosco@jasonbosco·
Ugh, that sounded way different in my head. I meant to say, if folks don’t know enough of the codebase, not even file names, it’s going to be hard to debug production issues… I heard some folks say “let the agent debug it”, but then what happens if the LLM provider’s API is down, or if they’re nerfing the output to meet demand, etc.
Jason Bosco@jasonbosco·
I see a new form of tech debt coming for dev teams - Comprehension debt. As more and more code is generated by LLMs, if teams don’t take the time to understand deeply what the generated code is doing, as well as code they write by hand… It’s only a matter of time before the code base starts looking unfamiliar to most of the team. It then becomes harder to discern if new code that LLMs generate is adding more spaghetti or if there’s a better approach. It’s a downward spiral from there - unrelated things break with every change despite existing tests passing, no one knows the full picture to be able to fix the root cause, not even an LLM, etc. So as tempting as it is to move super fast with LLMs, there’s only so much comprehension debt you can rack up before your code base silently becomes a Rube Goldberg machine under your nose.
cryptonium@cryptonium93·
@jasonbosco Incorrect, we are simply getting used to abstracting up. We already use a ton of black-box libraries that represent the majority of our codebases without understanding a line of the code written there. You think anyone spins up an HTTP server by writing a TCP protocol stack?
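The abstraction point can be made concrete: in Python's standard library, a working HTTP server is a few lines, and the TCP listener, accept loop, and protocol parser underneath are all layers most of us have never read. A minimal sketch:

```python
# A few stdlib lines give you an HTTP server; the TCP socket handling,
# accept loop, and HTTP parsing all live in layers we never look at.
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class Hello(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"hello"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), Hello)  # port 0 = any free port
# Serve exactly one request in the background, then fetch it ourselves.
threading.Thread(target=server.handle_request, daemon=True).start()

with urllib.request.urlopen(f"http://127.0.0.1:{server.server_port}/") as resp:
    reply = resp.read()
server.server_close()
print(reply)  # b'hello'
```

None of this required writing a TCP stack, which is the tweet's point about how much black-box code we already build on.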
cryptonium@cryptonium93·
@ujjwalscript Good engineers join jobs with millions of lines of code and catch on and understand it over time. 50k lines of code 🤣😂. Every engineer I know who was good before AI uses AI a lot. Code does not need to be written FOR humans anymore. If you judge AI that way, you're wrong.
Ujjwal Chadha@ujjwalscript·
The "10x AI Developer" is a MASSIVE lie. You are just a 1x Developer generating 10x the technical debt. The entire tech industry is high on the illusion of "vibe coding" right now. The popular consensus is that because Claude and Devin can spin up a backend in 45 seconds, software is now infinitely cheaper to build. Here is the provocative reality nobody is budgeting for: AI is about to make software engineering significantly MORE expensive. Everyone is cheering for code generation, but completely ignoring the Verification Tax. When an AI agent writes 5,000 lines of code, it is optimizing to pass the immediate test. It is not optimizing for human readability. It relies on brute-force loops, repetitive logic, and bizarre architectural shortcuts that just happen to compile. Fast forward 12 months. Your business needs to pivot, or a core dependency breaks. You are now staring at a 50,000-line black box that no human being actually wrote, understands, or can safely modify. You cannot simply "prompt" your way out of architectural collapse. When the machine-generated spaghetti finally breaks, you won't be saved by a $20/month LLM subscription. You will have to hire a top-tier Principal Engineer at absolute premium rates just to untangle the mess your "autonomous swarm" created. We are treating code generation as a pure productivity win, but code is a liability, not an asset. Stop measuring how fast your team can generate syntax. Start measuring how quickly they can debug it.
cryptonium@cryptonium93·
@livingdevops On uDesign being too early, you're right. On K8s usage being too early, you're wrong. What are you going to start with, docker compose, helm, custom REST? It's all brittle at any scale. K8s is very mature and runs anywhere; use it, use it early, use it often. And learn Kubernetes Operators.
Akhilesh Mishra@livingdevops·
Most engineers can’t design for small. They jump straight to Kubernetes clusters, microservices mesh, and multi-AZ RDS before asking one simple question: What are we actually solving for? 100 users don’t need 12 microservices. 100 users don’t need a Kubernetes cluster humming 24/7. 100 users need a monolith, one database, and a deploy that doesn’t cost $4000/month. The best engineers I’ve seen in production didn’t flex complexity. They asked: → How many users today? → What’s the growth curve? → What breaks first if we scale? Then they built the simplest thing that survives. Kubernetes is not a solution. It’s a tool for a specific problem at a specific scale. Knowing when NOT to use it is the real skill.
cryptonium@cryptonium93·
@svpino That's basically what the state of software is today. Somehow everyone is under this delusion that all software is written by top 1% engineers, who own only small parts of big faulty systems anyway. Actually it's better than the state of today's code where tests don't even exist.
Santiago@svpino·
I don’t read AI generated code. I don’t read AI generated tests. If AI says everything is good, I ship to production. This is a flawless strategy.
Moses Xu@mosesxu

@svpino i don't read every line CC writes for me. i define what needs to be tested and CC writes the tests too. if they pass it ships. "i understand every line in my codebase" was already a lie before AI showed up, now we just can't pretend anymore

mxcl @ clawlicio.us
Being obsessed with the code quality of the agent is a 2010s mindset rooted in an era of technical debt. There is no longer technical debt. All that needs review is software architecture. As long as what is produced conforms to that, reviewing every line of code is wasting time. If you cannot design software architecture, then you will no longer have a job in software. You are replaced.
cryptonium@cryptonium93·
@StaciW_DC @omniwarp You mean the smaller percentage of the 90% that run the nodes for profit? Whatever benefits their bottom line is what will be.
StaciW_DC@StaciW_DC·
@omniwarp If you've been in it since 2019, surely you also understand that nothing happens to the software of this protocol without 90% of the stake voting in favor of it.
StaciW_DC@StaciW_DC·
BOOM! As many of you know, we have been preparing to take on the responsibility for Algorand protocol maintenance and development for some time now. Our founder, Silvio Micali, built the best blockchain in crypto, bar none, with an elegant consensus mechanism still far ahead of even the most recent generation blockchains. And I, and everyone at the Algorand Foundation, understand that the sustainability of this protocol is our most sacred responsibility. With this unified strategy, in the United States, and as an entity that can pursue for-profit activities when they are in the best interests of the ecosystem, we will be truly unstoppable. In addition to our founder, I also want to acknowledge all Algorand Technologies engineers, past and present, for the incredible piece of technology that they have conceived, developed, and kept running.... Without one. second. of. downtime. 🫡🫡👊👊
Algorand Foundation@AlgoFoundation

Algorand protocol development and ecosystem growth are now under one roof. Algorand Foundation and Algorand Technologies ( @Algorand ) have come to a strategic agreement to unify ecosystem operations. This agreement creates a unified powerhouse for blockchain innovation here in the United States and positions Algorand as the chain that enables financial empowerment at scale.

cryptonium@cryptonium93·
@omniwarp @StaciW_DC Of course it will happen. Slowly but surely the profit motive creeps in. As soon as rewards were given for participation, it was guaranteed to go down this road. There's a reason Silvio designed it the way he did. The protocol will go the way that's best for the business of consensus.
omniwȺrp@omniwarp·
@StaciW_DC We are not going to use the powers over the protocol to mint new coins with the excuse of solving a security budget problem that was never a problem in practice, right? Breaking the promise made to investors that have been in it since 2019 would've been a really bad precedent.
cryptonium@cryptonium93·
@tomfgoodwin That being said, the LLM is the LLM, it's stateless, so you could use one "agent" and dynamically load context for whatever role/task you want next. For human management it's just easier to split these up logically as dedicated "agents".
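The "one stateless model, many roles" idea can be sketched in a few lines: a single loop plays every role by loading a different system prompt and context bundle per task. Everything here is illustrative — `call_llm`, the role names, and the context strings are hypothetical stand-ins, not any real agent framework's API:

```python
# Sketch: the LLM is stateless, so one "agent" can serve every role by
# dynamically loading role-specific context per task. `call_llm` is a
# placeholder for whatever completion API you actually use.
ROLE_CONTEXT = {
    "support": "You triage customer tickets. Tools: ticket DB.",
    "marketing": "You draft campaign copy. Tools: brand guide.",
}

def call_llm(system: str, prompt: str) -> str:
    # Placeholder: a real implementation would call a model here.
    return f"[{system.split('.')[0]}] -> {prompt}"

def run_task(role: str, prompt: str) -> str:
    # Load only the context this role/task needs, instead of running a
    # separate long-lived "agent" per department.
    system = ROLE_CONTEXT[role]
    return call_llm(system, prompt)

print(run_task("support", "Summarize ticket #123"))
```

Splitting this into dedicated per-role "agents" is, as the tweet says, mostly a convenience for human management rather than a requirement of the model.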
cryptonium@cryptonium93·
@tomfgoodwin AI is most certainly not unconstrained. AI has very real limits with how much context it can use for any given task. If you don't break down the context to just what you need for a given task/role then you'll just end up with loads of hallucination and forgetting.
Tom Goodwin@tomfgoodwin·
I'm surely being stupid. But if AI is rather unconstrained by expertise or capacity or, to some extent, speed, why do we need to divide tasks or departments among 9 agents (the marketing agent, the optimization agent, etc.), each doing one thing, and then another agent to manage the swarm? Can't one agent just do it all, you know? It seems very skeuomorphic. Will we have HR agents to make sure the agent agents are being looked after? An office canteen manager agent to feed the agents? Seems daft.
cryptonium@cryptonium93·
@tomfgoodwin The builders are building, they aren't marketing, at least not marketing well (yet).
cryptonium@cryptonium93·
@kylegawley Too many of you have really convinced yourselves that flow is magic; it's just comfort. If you're not more productive but have more output, then you're outputting junk, so that should be your focus.
Kyle Gawley@kylegawley·
AI has increased my code output, but I'm not more productive. I feel like it's harder to focus on tasks and get meaningful work done because I'm never in flow.
cryptonium@cryptonium93·
@molecularmusing Code is a means to an end. If you code long enough, code is a solved problem; there is nothing really new, just every problem dressed up in different clothes. Now with AI, we can problem-solve closer to the ends and not waste so much time in the middle with the means.
Stefan Reinalter@molecularmusing·
I find this extremely worrying, with many of the people I respect saying things like "I no longer write code" or "let LLMs do it". Why did you start programming? Was it never the journey for you, but only the goal? I genuinely want to understand this; I seem to be the odd one out.
cryptonium@cryptonium93·
@DominikWarchol @ivanburazin It's a double-edged sword as well. The token-inefficient strategies are often also the least reliable. Depending on what you're doing exactly, oftentimes the naive approach costs 10x more tokens with less reliable output. Turning a 1x engineer into a 1.5x engineer for 3x the cost.
Dominik Warchoł@DominikWarchol·
The math points at something most companies aren't ready for. $300K in token spend at current API pricing means enormous output volume. Which means the constraint isn't the model anymore — it's whether there's enough useful work being generated to justify the spend. The companies that hit this ratio and win will have solved the eval problem first. Because $300K in tokens producing unreviewed, low-quality output isn't leverage. It's expensive noise. The flip from people cost to token cost is interesting. The harder flip: from output volume to verified output quality. The engineer who costs less than their token spend isn't being replaced. They're becoming the quality layer that makes the token spend defensible.
Ivan Burazin@ivanburazin·
I recently met a founder who has an engineer spending more on Claude tokens than his actual salary. His goal: entire company spends more on tokens than people by end of 2026. Just imagine... $150k engineer → $300k/year in token spend Curious to see when the flip happens at scale in more companies.
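The arithmetic in this thread is worth making explicit. Using the thread's numbers ($150k salary, $300k/yr tokens) and cryptonium's hypothetical 1.5x productivity uplift, a quick back-of-envelope calculation shows why "more output" can still be a net loss per dollar:

```python
# Back-of-envelope math from the thread: a $150k engineer spending $300k/yr
# on tokens. The 1.5x productivity figure is an assumption for illustration,
# echoing cryptonium's "1x engineer into a 1.5x engineer for 3x cost" point.
salary = 150_000
tokens = 300_000

cost_multiple = (salary + tokens) / salary  # total cost vs engineer alone
productivity_multiple = 1.5                 # assumed uplift (hypothetical)

# Cost per unit of output, normalized so the engineer alone is 1.0:
cost_per_output = cost_multiple / productivity_multiple
print(cost_multiple, cost_per_output)  # 3.0 2.0
```

Under these assumptions each unit of output costs twice as much, which is why the thread's "quality layer" framing (verified output per dollar, not raw volume) is the number that decides whether the flip makes sense.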
cryptonium@cryptonium93·
@Layton_Gott I don't even look at the code, why would I need to? I know what it should do, I'll let the AI get me there.
Layton Gott@Layton_Gott·
AI isn’t making developers great… It’s EXPOSING bad ones. Anyone can generate code now. Very few can debug it, scale it, or even explain it. Those skills will always matter more.
cryptonium@cryptonium93·
@jamesacowling @ryan_t_brown We haven't seen a lot of the things that current LLMs are capable of. Just because it hasn't been done doesn't mean it can't be. Larger systems take longer to get right.
James Cowling@jamesacowling·
@ryan_t_brown Yes everyone's saying it yet we have yet to see an AI generated equivalent of existing large-scale systems, which would exist already if it was just a token issue
James Cowling@jamesacowling·
Whatever your take on this, writing code has always been tremendously easy compared to designing/maintaining large systems. The best engineers in the world are the best because of this stuff. Currently LLMs will not save you from bad architectural decisions. You need to constrain them to do the right thing.
David Cramer@zeeg

I'm fully convinced that LLMs are not an actual net productivity boost (today). They remove the barrier to get started, but they create increasingly complex software which does not appear to be maintainable. So far, in my situations, they appear to slow down long-term velocity.

cryptonium@cryptonium93·
@ArashSadrieh @OfficialLoganK Eventually they will all learn to stop caring about the code. Behavior is all that really matters, and the entire software industry will finally be able to put all of its energy into ensuring behavior meets expectations.
Arash Sadrieh@ArashSadrieh·
@OfficialLoganK Senior engineers now spend 4.3 minutes reviewing AI-generated code versus 1.2 minutes for human code. 3.6x the review burden. PR volume up 113% in a year. Only 26% of senior devs would ship AI code without review. We automated the cheap part and tripled the expensive part.
Logan Kilpatrick@OfficialLoganK·
The bottleneck has so quickly moved from code generation to code review that it is actually a bit jarring. None of the current systems / norms are set up for this world yet.