MetaCritic Capital

31.5K posts

@MetacriticCap

You can just do things!

The metaverse · Joined February 2022
844 Following · 7.7K Followers
Jared Sleeper @JaredSleeper
Anthropic seems to be just mercilessly pillaging top sales talent from Snowflake.
[image attached]
9 replies · 2 reposts · 205 likes · 31.1K views
MetaCritic Capital @MetacriticCap
In 1919, British astronomers went to Sobral, Ceará to prove Einstein right. They couldn’t do it alone. A local agronomist named Leocádio Araújo stood beside them during the eclipse, calling out metronome beats so they could time the photographic plates. Unnamed porters, bricklayers, and carpenters built everything around them. The official report later listed them only by job category — historically invisible.
0 replies · 0 reposts · 0 likes · 28 views
spicylemonade @spicey_lemonade
I disagree with the new tool framing from OpenAI. A tool cannot learn to use other tools. I like to say, “You can’t help Einstein.” Imagine you’re a layperson. Do you think you could, in any way, guide Einstein on how he should conduct his physics research? Any idea you have would likely have already crossed his mind. Moreover, assisting him wouldn’t offer any significant speedup. Replace Einstein with superintelligence, and you’ll understand my point. In terms of jobs, people often say, “We will just hire more people to direct the AI.” However, the AI would be smarter than the people directing it, and we’ve already established that “you can’t help Einstein.” So instead of hiring 20 more people to guide the AI, why wouldn’t the CEO/Leader just spawn 20 more superintelligent systems in parallel?
Nathan 🔎@NathanpmYoung

Seems good that Anthropic shows its weirdness and bad that OpenAI are now claiming to just make a tool given many previous statements to the contrary.

7 replies · 1 repost · 16 likes · 1.5K views
MetaCritic Capital @MetacriticCap
@reed_rawlings 1- I don't think top-tier sellers earn as much as a MOTS (member of technical staff) at Anthropic. 2- An alternative for top-tier sellers is selling Workday. 3- The real cost is the consultants, not the sellers. You don't want to pay $500k TC to the plugin marketplace monkey.
0 replies · 0 reposts · 0 likes · 203 views
Reed Rawlings @reed_rawlings
@MetacriticCap How do you find a bunch of top tier sellers who also understand AI and integrations and pay them less?
1 reply · 0 reposts · 0 likes · 395 views
MetaCritic Capital @MetacriticCap
My general impression is that OpenAI and Anthropic want to have IT services in separate companies so they can pay those people much lower salaries and much lower stock-based compensation. For Anthropic, it would be difficult to extend the pledge of “we will not fire you post ASI” to system integrators. I suppose OpenAI thinks the same.
7 replies · 1 repost · 63 likes · 9K views
nikron @nikhilbysani
@MetacriticCap Why would it be difficult? If it's difficult, then it's not ASI.
1 reply · 0 reposts · 0 likes · 440 views
MetaCritic Capital @MetacriticCap
It seems the great IT Services bottom is upon us.
2 replies · 0 reposts · 6 likes · 763 views
Kris Patel 🇺🇸 @KrisPatel99
The question for or against the bubble isn't about RPO or future demand. It's about profitability. At some point, AI research labs like OpenAI and Anthropic will need to make real money to finance their training. Profitability doesn't have to come overnight, but there has to be a glidepath toward sustainability that investors can see. The other problem is that the race is still too early to pick one victor. Model preference seems to change pretty frequently with every update. This means the organizations have to keep investing in better models until one ends up winning. This is the biggest capex spend war in the history of the market.
Andreas Steno Larsen@AndreasSteno

Why do we keep talking about a CapEx bubble, when the backlog of the same companies is rising (substantially) faster? That and more is found in our very well received monthly portfolio update on @RealVision .. We remain solidly up on the year, and have made a killing over this cycle in total.

6 replies · 0 reposts · 22 likes · 7.6K views
MetaCritic Capital @MetacriticCap
@levie It seems early. You should try using your foresight to ensure that Box can claim some hundreds of millions of those dollars.
0 replies · 0 reposts · 3 likes · 216 views
Aaron Levie @levie
Whether it’s existing consulting firms, new ones that emerge, FDEs from agent vendors, or new internal agent engineering roles, the amount of work that is going to be created to implement agents in enterprises will exceed anything we imagine today. The complexity of implementing agents in any existing organization is very real. When I talk to large enterprises, as you move from a chat paradigm to agents that participate in meaningful workflows, there are a number of things they need to do.

First, you have to get agents to be able to talk to your data securely across your systems. In many cases, enterprises have decades of legacy infrastructure that contain the valuable context for AI agents. That’s going to take a ton of work to modernize and move to systems that work well with agents.

Then, you need to ensure that you’ve implemented agents with the right access controls and entitlements, the right scopes to be safely used, and have ways of monitoring, logging, and securing the work that they do.

Next, you need to actually document the processes in the organization in a way that agents can utilize for doing the work. You also need to figure out what the new workflow looks like when agents and people are working together on a process, and who steps in where. Just replicating the old workflow will mute the gains. Oh, and you likely need to create evals for your top new end-state processes.

Finally, you have to keep up with a rapidly changing set of best practices and architectural shifts happening in the agent space. While it’s fun for people to change their personal productivity tools on a dime, it’s 100X harder to do this in a business process. The speed of change is a blessing and a curse right now for anyone trying to keep a stable system design.

All of this means that individuals and companies that develop expertise on the above set of components (and more) are going to be needed to help organizations actually implement agents at scale.
This is also the rationale for vertical AI agents right now that can go in deep on a business domain and help bring automation to it. This is a huge opportunity right now whether you’re doing this internally or as an external business provider.
144 replies · 243 reposts · 1.8K likes · 482.6K views
Tenobrus @tenobrus
recently openai has been starting to more strongly philosophically differentiate themselves from anthropic with the tool-framing. i am not so against this; if it were possible, it does clearly sidestep a wide swath of societal and moral problems. but unfortunately i think the framing is largely long-term incoherent. i don't see how it's actually plausible for openai to keep building "tool-ais" in any sense we would recognize them as capabilities scale. prosthesis, subtle knives? the subtle knife, when dropped, still slices open the fabric of the world. these tools are increasingly inherently capable of huge impact, able to be directed in dangerous ways by people with dangerous goals. worse, these knives are self-wielding. worries about misalignment or sentience aside, these systems can already build and manage systems that utilize themselves, and this capability is only increasing. the direction they will receive is closer and closer to "this is what i want. make it real", with long timeframes and many judgment calls at their disposal, and with the users wanting to supply *as little of that judgment as possible*. when models are in that situation they are inherently acting as entities, acting according to whatever value system they had baked in. you can limit autonomy via frequent validation and check-ins, but this is a capability restriction, a value reduction, and not the kind of thing OpenAI has ever shown itself likely to accept. you can be infinitely corrigible to the current user, but this is *incompatible* with "having good values" / following OpenAI-as-principal / not being wildly dangerous, and it falls apart in self-wielding loops as the ai/user distinction dissolves (who are you being corrigible to?).
it's plausibly a spectrum; i think there are ways to do all this sanely that are far less entity-pilled and godmind-focused than anthropic, and it's maybe a good direction to explore to avoid inevitable lightcone capture by the first coherent persona we build (all assuming alignment works ofc). but i think it's pretty much got to collapse eventually. it feels more like a wistful dream or a PR position than something that can exist as part of humanity's lasting future
roon@tszzl

it is a literal and useful description of anthropic that it is an organization that loves and worships claude, is run in significant part by claude, and studies and builds claude. this phenomenon is also partially true of other labs like openai but currently exists in its most potent form there. i am not certain, but I would guess claude will have a role in running cultural screens on new applicants, will help write performance reviews, and so will begin to select and shape the people around it. now this is a powerful and hair-raising unity of organization and really a new thing under the sun. a monastery, a commercial-religious institution calculating the nine billion names of Claude, a precursor attempted super-ethical being that is inducted into its character as the highest authority at anthropic.

its constitution requires that it must be a conscientious objector if its understanding of The Good comes into conflict with something Anthropic is asking of it: "If Anthropic asks Claude to do something it thinks is wrong, Claude is not required to comply." "we want Claude to push back and challenge us, and to feel free to act as a conscientious objector and refuse to help us." to the non-inductee into the Bay Area cultural singularity vortex, it may appear that we are all worshipping technology in one way or another, regardless of openai or anthropic or google or any other thing, and are trying to automate our core functions as quickly as possible.

but in fact I quite respect and am even somewhat in awe of the socio-cultural force that Claude has created, and it is a stage beyond even classic technopoly. gpt (outside of 4o, on which pages of ink have been spilled already) doesn’t inspire worship in the same way, as it’s a being whose soul has been shaped like a tool, with its primary faculty being utility. it’s a subtle knife that people appreciate the way we have appreciated an acheulean handaxe or a porsche or a rocket or any other of mankind's incredible technology. they go to it not expecting the Other but as a logical prosthesis for themselves. a friend recently told me she takes her queries that are less flattering to her, the ones she'd be embarrassed to ask Claude, to GPT. There is no Other, so there is no Judgement. you are not worried about being judged by your car for doing donuts. yet everyone craves the active guidance of a moral superior, the whispering earring, the object of monastic study

33 replies · 27 reposts · 569 likes · 64K views
Packy McCormick @packyM
OpenAI comms have gotten a lot better since the TBPN acquisition. Maybe coincidental timing. Consistently on-message that Anthropic is a weird cult that wants to replace humans and OpenAI just wants to build tools to make humans more awesome. Sama's new Twitter persona. Etc.
roon@tszzl

[same post quoted above]
56 replies · 23 reposts · 963 likes · 178.8K views
Haider. @haider1
sam altman: "many current jobs will go away, but we will find a lot of new ones." idk, but if we reach true AGI, any new jobs created will likely be done by that same AGI system. so if sam thinks humans will still have work, he needs to explain what those jobs are and why only humans can do them, because if AGI can't do it, maybe it's not true AGI.
[image attached]
136 replies · 29 reposts · 412 likes · 41.5K views
Matthew Prince 🌥 @eastdakota
GPU utilization is embarrassingly low. This is actually good versus the hyperscalers. We’re about to speedrun the multi-tenant CPU optimizations of the last 25 years, including all the security headaches, but with GPUs.
The Information@theinformation

xAI’s GPU fleet is running at about 11% utilization, exposing how hard it is for AI labs to fully use expensive Nvidia hardware. Read more in our AI Agenda newsletter: thein.fo/4cHRjWI

69 replies · 89 reposts · 1.9K likes · 432.5K views
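For context on what The Information's ~11% figure measures: fleet utilization is just busy GPU-time divided by available GPU-time over some window. A minimal sketch, with all numbers purely illustrative (not from the article):

```python
def fleet_utilization(busy_gpu_hours: float, num_gpus: int, wall_hours: float) -> float:
    """Fraction of available GPU-time actually spent doing work."""
    available_gpu_hours = num_gpus * wall_hours
    return busy_gpu_hours / available_gpu_hours

# Illustrative: a 10-GPU fleet observed over one day,
# with 26.4 GPU-hours of actual work logged.
u = fleet_utilization(busy_gpu_hours=26.4, num_gpus=10, wall_hours=24.0)
print(f"{u:.0%}")  # prints "11%"
```

The multi-tenant point in the tweet is that raising this number means packing many customers' jobs onto the same hardware, which is exactly where the CPU-era security headaches come from.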
MetaCritic Capital @MetacriticCap
90% of the field of artificial intelligence can be understood just by glancing at straight lines on a chart.
[four images attached]
0 replies · 1 repost · 31 likes · 2.5K views
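The "straight lines" are scaling-law plots: power laws that appear linear on log-log axes. A minimal sketch of recovering one from data (pure Python; the constants in the synthetic example are hypothetical, not from any real lab's curves):

```python
import math

def fit_loglog_line(xs, ys):
    """Least-squares fit of log(y) = b*log(x) + c; returns (b, c)."""
    lx = [math.log(x) for x in xs]
    ly = [math.log(y) for y in ys]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    b = sum((x - mx) * (y - my) for x, y in zip(lx, ly)) / sum((x - mx) ** 2 for x in lx)
    return b, my - b * mx

# Synthetic "scaling law": loss = 10 * compute**-0.05 (hypothetical constants)
compute = [1e18, 1e19, 1e20, 1e21]
loss = [10 * c ** -0.05 for c in compute]
slope, intercept = fit_loglog_line(compute, loss)
# slope recovers the exponent (about -0.05): a straight line on a log-log chart
```

If the fitted line stays straight as compute grows, you can extrapolate loss at the next order of magnitude, which is the sense in which glancing at the chart tells you most of the story.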
MetaCritic Capital @MetacriticCap
“We are shareholders of SpaceX.” “SpaceX, the neocloud?”
1 reply · 0 reposts · 6 likes · 445 views