Erick E
@generick_ez

cofounder @ InQuery | prev @DoorDash @stanford

nyc · Joined July 2013
1.9K posts · 1.8K Following · 981 Followers
Sam Hogan 🇺🇸 @samhogan
I can’t stop thinking about the fact that they named it Post Hog
Kyla Scanlon @kylascan
We've built an entire economy around selling people the feeling of control through bets, hacks, subscriptions, and optimization. But the model only works if people stay desperate. The worse things get, the better the pitch works. New essay on control and agency, financial nihilism, belief markets, the manosphere, and spectacle during war.
Erick E @generick_ez
"I don't have some magic increase in projects to help them make up the lost revenue" Yes you do. There's nearly infinite demand for detailed legal review across hiring, firing, compliance, transactions, risk, etc. that often gets skirted because raising a matter for $3000 is hard to stomach. Doubling legal throughput per dollar could actually double demand. Law firms that figure this out can pitch themselves as using a lower percentage of their budget authority, win more projects from more customers, and effectively diversify their portfolio of cases.
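A back-of-envelope sketch of the elasticity argument above, using only the $3,000 figure cited in the tweet. The unit-elastic demand assumption (halving the price doubles the volume) and the baseline matter count are illustrative choices of mine, not claims the author makes:

```python
# Sketch of the tweet's claim: if legal review throughput per dollar
# doubles, the effective price per matter halves, and demand that balked
# at the old price can clear.

price_per_matter = 3000          # dollar figure cited in the tweet
throughput_multiplier = 2        # "doubling legal throughput per dollar"

effective_price = price_per_matter / throughput_multiplier  # cost to client per matter

# Hypothetical unit-elastic demand: halving the price doubles the volume.
baseline_matters = 100           # illustrative baseline caseload
new_matters = baseline_matters * throughput_multiplier

baseline_revenue = baseline_matters * price_per_matter
new_revenue = new_matters * effective_price

# Revenue holds steady, but the firm now serves twice as many matters,
# spreading its portfolio across more cases and more clients.
```

Under these assumptions the firm's revenue is unchanged while its case count doubles, which is the diversification point the tweet is making.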
Matt Janiga @regulatorynerd
The Harvey fundraise at an $11 billion valuation is really interesting, and on the verge of head-scratching.

Harvey feels like it competes with LexisNexis and Westlaw, the other two legal tools every major law firm has. I used Lexis's AI tool a lot in my prior role. It was decent and seemed to improve over time. I assume Lexis will continue to improve it. It honestly competes with ChatGPT and Gemini more than Harvey. The law firm lawyers I know who use Harvey like it, but it's not their sole AI tool. Like every AI tool on the market, it also has limitations and pain points.

Lexis is owned by RELX PLC, and that conglomerate has a market cap of ~$65B. Westlaw is owned by Thomson Reuters, and that conglomerate has a market cap of ~$55B (it has been swinging, in part due to news about AI advancements and competitors like Harvey).

The interesting thing is that Lexis and Westlaw have legacy businesses built on datasets of legal precedents and carefully curated regulatory materials like opinion letters and legislative history. They also offer other products that drive material revenue, like Lexis's identity verification databases and value-added services. Harvey doesn't have those things. And unless it can displace Lexis or Westlaw, it doesn't seem like it can earn the fees those providers currently take from law firms on an annual basis.

Legal revenue is an estimated 25% of Lexis's business — is Harvey really already on par with Lexis in the legal space vis-a-vis its $11B valuation? Westlaw drives closer to 40% of Thomson Reuters revenue, so maybe Harvey does still have room to double its valuation off of fee revenue. But that feels like a tough mountain to climb.

I'm also skeptical that Harvey can survive the thousands of paper cuts of lawyers opting for more general-use AI tooling from the likes of Anthropic, Gemini and OpenAI. Anthropic has made amazing strides in general business work product, and all three are useful tools in developing memos and contracts.

There's also a last issue facing Harvey. If it replaces too many associates or associate hours, law firms aren't cutting costs — they're ripping out revenue generators. As someone who hires law firms, I'm not paying Cravath or MoFo $1,000 an hour for a partner to use Harvey. I'm paying those rates to get an associate, counsel or partner who has specific knowledge and skills to advance my project faster. It's great for me if Harvey usage shaves 5 hours off my bill on a project. But it's not good for the law firms, because I don't have some magic increase in projects to help them make up the lost revenue.

Law firms that adopt Harvey heavily will have to change their billing models. And I'm not sure you can teach that many old dogs the necessary number of new tricks to keep pumping up Harvey's valuation.

Okay. Rant over. Going to touch grass for 20 minutes.
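The segment math in the thread can be sanity-checked with a quick back-of-envelope calculation. All figures below are the tweet's own estimates, and pro-rating a conglomerate's market cap by a segment's revenue share is a deliberately crude proxy (segment margins and multiples differ):

```python
# Rough comparison of Harvey's reported valuation against the implied
# value of the incumbents' legal segments, using the thread's figures.

relx_market_cap = 65e9           # RELX PLC (owns LexisNexis), ~$65B
tr_market_cap = 55e9             # Thomson Reuters (owns Westlaw), ~$55B
harvey_valuation = 11e9          # reported fundraise valuation

lexis_legal_share = 0.25         # "estimated 25% of Lexis's business"
westlaw_share = 0.40             # "closer to 40% of Thomson Reuters revenue"

# Naively pro-rate market cap by revenue share:
lexis_legal_value = relx_market_cap * lexis_legal_share   # implied Lexis legal segment
westlaw_value = tr_market_cap * westlaw_share             # implied Westlaw segment

# Harvey at $11B is already priced at roughly two-thirds of the implied
# Lexis legal segment and about half of Westlaw's, without their datasets
# or ancillary revenue streams.
```

This is the sense in which "room to double its valuation off of fee revenue" is a tough climb: even the cruder, more generous denominator only leaves a ~2x gap.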
Erica Levin @bankof_amERICA
I've lived in nyc for 11+ years, just did 2 months hunkering in Miami. Coming back and trying to cause chaos. I've done "NYC in your 20s" (and college, for better or worse). Any recs for off the beaten path?? Hole in the wall, unhinged? New places that opened in the winter? Uptown based but will subway (just don't send me to BK or QUEEEEENS!). Is Seaport still a thing, or is that outdated, or was it ever really a thing? Phebes is always a good answer. But fr reply or dm me recs, I'm trying to impress my husband
Erick E @generick_ez
@anuatluru It is actually still incredibly easy to outperform models at literally any task for which you are above average
Martian @space_colonist
anybody have good advice on how to make major lifelong decisions?
Erick E @generick_ez
Thanks for the suggestion, Okara, I'm sure this tweet will go mad viral.
Nicholas Chua @nicholasychua
if you're interested in everything i've learned over the past 3 months about x articles, formats, and content strategy, stay tuned ;)
Nicholas Chua @nicholasychua

Today was my last day at @WisprFlow. Over the last 3 months, I wrote over 30 posts averaging 200K+ impressions and grew the account to 25K+ followers. As content becomes increasingly beneficial, engineering the science behind virality was incredibly exciting. I'm becoming exponentially more passionate about the future of new media, cinematography, and creating new mediums of storytelling. Excited for what's next 🚀 *and yes, i wrote this post too :) ⤵️*

Erick E @generick_ez
Learning to sleep on planes runway to runway is an elite life skill
anu @anuatluru
The best 20 minutes of a ‘podcast’ I’ve seen in a long time. This is the non-slop future I dream of.
Erick E @generick_ez
AI will accidentally solve the talent gap for dying verticals
Erick E @generick_ez
@corsaren In retrospect, this has been Anthropic's problem from day one, and the reason why they lost the lead on day zero despite having better tech
corsaren @corsaren
Lot of people confused at how OpenAI got the DoW to agree to the same terms that Ant was insisting on (no domestic surveillance, no auto-kill). The answer is simple: this was just a stupid power game. Hegseth didn’t like being told no, so now his goal is to hurt Anthropic. He doesn’t actually care about the above limits. He thinks Anthropic are a bunch of woke pussies who wouldn’t bend the knee fully and so he wanted to fuck with them.

Props to @sama for asking the DoW to offer the same terms to all AI companies. I think people are interpreting this as a conniving power play, and yeah, sure, Altman will happily take a fat gov contract and admin favor, but I have to imagine he is smart enough to know that a world where Ant can be punished like this for not fully bending the knee is one where he inevitably will be too. So the primary goal here ought to be de-escalation and the establishment of some real lines in the sand.
Sam Altman @sama

Tonight, we reached an agreement with the Department of War to deploy our models in their classified network. In all of our interactions, the DoW displayed a deep respect for safety and a desire to partner to achieve the best possible outcome.

AI safety and wide distribution of benefits are the core of our mission. Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems. The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement. We also will build technical safeguards to ensure our models behave as they should, which the DoW also wanted. We will deploy FDEs to help with our models and to ensure their safety, and we will deploy on cloud networks only.

We are asking the DoW to offer these same terms to all AI companies, which we think everyone should be willing to accept. We have expressed our strong desire to see things de-escalate away from legal and governmental actions and towards reasonable agreements. We remain committed to serve all of humanity as best we can. The world is a complicated, messy, and sometimes dangerous place.

Erick E @generick_ez
@jakehalloran1 Or Dario is lying and was asking for more than he said he was
Erick E @generick_ez
@PalmerLuckey Every business has the right to refuse service. The government should go build an AI lab themselves if they don't like the terms and conditions of a third party.
Palmer Luckey @PalmerLuckey
This gets to the core of the issue more than any debate about specific terms. Do you believe in democracy? Should our military be regulated by our elected leaders, or by corporate executives?

Seemingly innocuous terms from the latter, like "You cannot target innocent civilians," are actually moral minefields that lever differences of cultural tradition into massive control. Who is a civilian and who is not? What makes them innocent or not? What does it mean for them to be a "target" vs collateral damage? Existing policy and law has very clear answers for these questions, but unelected corporations managing profits and PR will often have a very different answer.

Imagine if a missile company tried to enforce the above policy: that their product cannot be used to target innocent civilians, and that they can shut off access if elected leaders decide to break those terms. Sounds good, right? Not really. In addition to the value-judgment problems I list above, you also have to account for questions like:

- What level of information, classified and otherwise, does the corporation receive that would allow them to make these determinations? How much leverage would they have to demand more?
- What if an elected President merely threatens a dictator with using our weapons in a certain way, a la Madman Theory/MAD? Is the threat seen as empty because the dictator knows the corporate executives will cut off the military? Is the threat enough to trigger the cutoff? How might either of those determinations vary if the current corporate executive happens to like the dictator or dislike the President?
- At what level of confidence does the cutoff trigger, both in writing and in reality?

The fact that this is a debate over AI does not change the underlying calculus. The same problems apply to definitions and use of ethically fraught but important capabilities like surveillance systems or autonomous weapons. It is easy to say, "But they will have cutouts to operate with autonomous systems for defensive use!", but you immediately get into the same issues and more: what is autonomous? What is defensive? What about defending an asset during an offensive action, or parking a carrier group off the coast of a nation that considers us to be offensive?

At the end of the day, you have to believe that the American experiment is still ongoing, that people have the right to elect and unelect the authorities making these decisions, that our imperfect constitutional republic is still good enough to run a country without outsourcing the real levers of power to billionaires and corpos and their shadow advisors. I still believe. And that is why "bro just agree the AI won't be involved in autonomous weapons or mass surveillance why can't you agree it is so simple please bro" is an untenable position that the United States cannot possibly accept.
Under Secretary of War Emil Michael @USWREMichael

Prior to their new “Constitution,” @AnthropicAI had an old one they desperately tried to delete from the internet. “Choose the response that is least likely to be viewed as harmful or offensive to a non-western cultural tradition of any sort.”

Erick E @generick_ez
@Bhavya037 @soham_btw Write your instructions incredibly precisely, down to the semicolon. Much faster than writing code
Bhavya @Bhavya037
@soham_btw I just don’t get the hype around the great replacements. maybe I don’t write very complicated code but most of the time I just end up writing it myself because claude misses simple things. or maybe they’re giving me a quantized model in my claude code.
Erick E @generick_ez
They're using cursor and have never heard of Claude Code
Erick E @generick_ez
swe at a large nyc-based tech company just texted me "damn this vibe coding thing is kinda crazy" still so early