Massu Hora

11 posts

@massuhora

Attention is all you need

Joined April 2026
65 Following · 1 Follower
Massu Hora@massuhora·
@dhruvtwt_ @nvidia But how is the speed? I saw someone say it is slow to use, which leads to a poor experience.
0 replies · 0 reposts · 0 likes · 152 views
Dhruv@dhruvtwt_·
Why is no one talking about this? @nvidia is offering around 80 AI models via hosted APIs absolutely for free. You get access to MiniMax M2.7, GLM 5.1, Kimi 2.5, DeepSeek 3.2, GPT-OSS-120B, Sarvam-M, etc. This plugs straight into OpenClaude, OpenCode, Zed IDE, Hermes agent and even Cursor IDE.

Setup:
– Grab an API key: build.nvidia.com/models
– base_url = "integrate.api.nvidia.com/v1"
– api_key = "$NVIDIA_API_KEY"
– select a model (e.g. minimaxai/minimax-m2.7)

If you’re building or experimenting, this is basically free inference. Lock in and start building today anon. Thank me later.
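The setup in the tweet can be sketched with nothing but the Python standard library. Note this only builds the request and does not send it; the /chat/completions route and JSON payload shape follow the OpenAI-compatible convention the tweet implies, and are assumptions here, not verified against NVIDIA's documentation.

```python
import json
import urllib.request

# Base URL and env-var name are taken from the tweet; the route below
# assumes an OpenAI-compatible API surface.
BASE_URL = "https://integrate.api.nvidia.com/v1"

def build_chat_request(api_key, model, prompt):
    """Build (but do not send) a chat-completions HTTP request."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

# "$NVIDIA_API_KEY" is a placeholder, as in the tweet; read it from the
# environment in real use. The model name is the tweet's example.
req = build_chat_request("$NVIDIA_API_KEY", "minimaxai/minimax-m2.7", "hello")
```

Sending `req` via `urllib.request.urlopen` (or pointing any OpenAI-compatible client at the same base URL) is left to the reader, since it needs a real key.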
Dhruv tweet media
524 replies · 1.8K reposts · 18K likes · 1.5M views
Massu Hora@massuhora·
@anorth_chen I don't quite agree. In some non-critical domains that may be true, but in high-stakes areas like security, privacy, compliance, finance, and healthcare, any latent bug can be fatal. Reliability still matters for every core system, just as we cannot accept even low-probability failures in the control software of aircraft, autonomous vehicles, or banks.
2 replies · 0 reposts · 8 likes · 1.5K views
North@CreaoAI@anorth_chen·
The market can forgive a buggy product, but it will not accept an obsolete one.

I gave an internal AI talk at 鹿鱼 today, and toward the end someone asked: what if the AI's engineering design is sloppy and inelegant? I asked back: might that already not matter so much? Might it even be acceptable for the AI to write a few bugs?

We used to build all kinds of engineering abstraction layers and microservices at the code level to keep code elegant. That was to reduce the "cognitive load" on engineers when collaborating. I was once a great fan of the DDD engineering philosophy. A few extra functions, an unclean abstraction: that was technical debt that would make future development and maintenance ever more expensive. But that whole philosophy is designed around the human brain.

If this work is handed to AI instead, how much does a patch of unclean abstraction or a bit of spaghetti code really slow down our vibe coding?

And most importantly: if the codebase you painstakingly maintain, and the bugs you work so hard to avoid, belong to a product feature that is already obsolete, and the market and users simply don't care, then what was the point of any of it?

Many entrenched assumptions are built on "this is important", but that may already be a thing of the past.
10 replies · 4 reposts · 83 likes · 33.5K views
Massu Hora@massuhora·
In fact, there is still a significant gap between the scenario you described and one that covers all aspects of daily life. There could be many reasons for this, but the main one is establishing fully functional communication across different platforms, especially between competing platforms.
0 replies · 0 reposts · 0 likes · 13 views
Aaron Levie@levie·
Software going headless is inevitable in a world where agents use the tools 100X more than people do. And the reality is for a lot of software this is actually a huge boon to potential use-cases for these platforms. Software business models have largely been predicated on selling to the number of seats that are in the company in a given function, and the usage of your software is constrained by how much people can do in a given day. This means that your technology is often vastly underutilized relative to what it actually can power for the customer.

Enter: agents. Agents can work 24/7, run in parallel, and string together work across systems. This is a big deal because now the agent can do far more than people ever could with these tools. Instead of reviewing contracts one by one, the agent will review all of them. Instead of manually moving data between marketing systems and across campaigns, the agent will let you run 10X more of them. Instead of being rate limited in a client onboarding process by human steps, agents accelerate these.

Agents end up using these underlying platforms far more than people ever did, which opens up use-cases that the platform couldn’t go after before. Now, not every software market has the same amount of positive sum use-cases between people and agents, but I’d argue that a significant portion of systems of record, for instance, can be used far more than they are today. Your Salesforce data can be leveraged 100X more to do vastly more customer targeting and sales automation. Your documents can be turned into structured data and analyzed for insights and knowledge to automate other workflows. And so on.

Now, of course you have to find a way to make this all commercially attractive, but it’s not hard to picture the revenue from API and agent consumption on these platforms becoming a rich component of revenue streams over time. Seats for the people, consumption for the agents. Lots of upside here for the companies that embrace this trend.
77 replies · 88 reposts · 935 likes · 103.7K views
Massu Hora@massuhora·
@jlongster Agreed. In the AI era, it's easy to add more unit tests for edge cases when you find an error tied to a specific bug.
0 replies · 0 reposts · 0 likes · 23 views
James Long@jlongster·
The strategy of adding a specific test whenever a bug is found in prod is a terrible strategy for testing. You end up with a huge suite of one-off edge cases, often hardcoding specific behaviors that you want to be able to change but can't, because tests fail in all kinds of weird ways.

It's fine temporarily if you want to make sure the bug doesn't recur, but you should revisit it and take a more cohesive approach: test all behaviors, and combinations of them you wouldn't expect. Reading tests shouldn't feel like reading a journal of incidents.
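The point can be illustrated with a small sketch. All names below are hypothetical, invented for illustration: instead of appending one hardcoded assert per production incident, past-incident inputs get folded into a single table that also covers the normal and boundary cases of the behavior.

```python
def parse_price(text):
    """Parse a price string like '$1,234.50' into integer cents."""
    cleaned = text.strip().lstrip("$").replace(",", "")
    return round(float(cleaned) * 100)

# One table covering the behavior space. The whitespace case might have
# come from a past prod incident, but it reads as just another input,
# not as a journal entry about that incident.
CASES = [
    ("$1,234.50", 123450),   # currency symbol + thousands separator
    ("0.99", 99),            # sub-dollar amount
    (" $5 ", 500),           # surrounding whitespace
    ("1,000,000", 100000000) # large value
]

def test_parse_price():
    for text, expected in CASES:
        assert parse_price(text) == expected, text

test_parse_price()
```

With a parameterized test runner (e.g. pytest's `parametrize`), each row also reports as its own test case, which keeps failures readable.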
25 replies · 3 reposts · 184 likes · 24.8K views
Massu Hora@massuhora·
@Infoxicador Some people even declare software engineering dead, failing to see that software engineering underlies harness engineering.
1 reply · 0 reposts · 1 like · 129 views
Ruben Casas 🦊@Infoxicador·
Been thinking: if "harness engineering" (aka tokenmaxxing without reading the code) becomes the norm, then investing in modularity and a tight verification loop to make the agents effective (or at least not fall apart) is a must for companies to invest in right now!

But... there's nothing new about this. To me this is just "good engineering": good testing, good architecture, modularity. Historically, companies that were already very good at it will continue to be effective and thrive in this hypothetical future.

But for the average non-tech shop that is not prepared, my fear is they see something like this and embark on a journey that will only corner them into a mountain of slop that will be hard to untangle until it is too late.
Ruben Casas 🦊 tweet media
10 replies · 5 reposts · 50 likes · 52.9K views
Massu Hora@massuhora·
@buccocapital Overall, it is the large AI model companies that will benefit: they cause the large-scale unemployment and capture the inflated GDP.
0 replies · 0 reposts · 0 likes · 139 views
BuccoCapital Bloke@buccocapital·
This is why everyone is so exhausted at work. Your company is scrambling to adopt AI in order to keep up with its competitors. You are scrambling to adopt AI to maintain parity with your peers. Nobody is gaining an advantage; everyone is just getting fitter.
BuccoCapital Bloke tweet media
59 replies · 66 reposts · 1.4K likes · 82K views
Massu Hora@massuhora·
@ForrestPKnight Yep, GitHub is gradually turning from a gold mine into a shit heap with the rise of LLMs.
0 replies · 0 reposts · 1 like · 376 views
Forrest Knight@ForrestPKnight·
I can't prove it, but with all of these new GitHub repos getting 50k+ stars in the blink of an eye... pretty sure the vast majority of GitHub stars are fake now.
136 replies · 28 reposts · 1.6K likes · 61.6K views
Massu Hora@massuhora·
@intuitiveml On the other hand, how do you prevent the agent from impacting the other original features?
1 reply · 0 reposts · 0 likes · 14 views
Massu Hora@massuhora·
Thanks for sharing. You mentioned that a single monorepo was the non-negotiable foundation so agents could see the whole system. For teams coming from multiple scattered repos with very different stacks (React/Node + Python + Go, etc.), what was the hardest part of the migration, and how exactly did you use your own agents to generate the migration scripts and update the shared-packages structure? Also curious whether the architects had any rules of thumb for folder layout that made AI context loading much more reliable.
1 reply · 0 reposts · 0 likes · 233 views