Amir Ghavi (aghavi.ETH)

603 posts

@aghavi

I am a NY-based lawyer focusing on IP, Media and Tech. ❤️ international affairs. Opinions purely personal & definitely not legal advice. Really.

New York, NY · Joined March 2010
596 Following · 270 Followers
Amir Ghavi (aghavi.ETH)
@emollick Absolutely yes to this. I’ve been saying for some time now that prompt engineering is really a function of the effectiveness of the model. It’s a semantic reasoning engine, so the better it gets, the more the UX changes and the UI (prompt engineering) melts away.
Ethan Mollick @emollick
Prompt engineering is going away for most people, and here is the latest of many signs: Anthropic, like some of the other AI companies, has released a tool that automatically generates good prompts for you based on intent. It works pretty well!
Amir Ghavi (aghavi.ETH)
@stealcase @mer__edith I think you’re conflating copyright with protection of labor that hasn’t — at least historically — been the motivation for copyright law. I get your point but I’m suggesting copyright is the wrong instrument to achieve it. Artists could not create art without robust fair use.
Benjamin (leaving for 🦋@benjami.no)
@aghavi @mer__edith Multiple things can be true at once, and [re]generative AI definitely plays its role in propaganda for the wonders of AI 'progress' while being built off the labour of workers that should have been protected by copyright, while threatening those same workers.
Amir Ghavi (aghavi.ETH) retweeted
Jason Fried @jasonfried
I’m going to draw a parallel that isn’t quite straight, but it’s close enough. It’s between old (analog) cars and new (digital) cars, simple (analog) businesses and complicated (digital) businesses.

Lately I’ve been driving two distinctly different cars. One from 1970, and the other from 2023. They essentially have nothing in common other than they’re cars. Some who don’t care much for cars might say that’s everything in common. On paper I can see the argument, but get behind the wheel of both, and go for a drive, and you wouldn’t say the experiences were equivalent. They’re as different as different can be.

The 1970s experience is analog. Your leg strength determines your stopping power. The road surface below clearly communicates with your ass. The steering wheel actually feels like it’s turning the wheels. The windows roll down by turning a crank that literally rolls them down. Switches snick, dials click. This is a direct experience, with mechanical feedback, requiring full attention and real effort.

The 2023 experience is digital. Nearly everything is abstracted. The brakes are heavily mechanically assisted. Steering even more so. You can’t really feel the road surface because the suspension masterfully absorbs every little detail. Even the buttons have been replaced by touchscreens with haptics. This is an indirect experience, with simulated feedback, requiring some attention and little effort.

Which you prefer is up to you. I happen to love both for different reasons. And that’s just driving. The gulf is wider when it comes to repair.

With an old analog car, a problem’s cause is more obvious. Linkages are visible, this connects to that, teeth mesh to move gears which move shafts which move wheels. When something’s broken it’s generally confirmable with a common tool and a set of eyeballs. Any mechanic with a basic understanding can diagnose any problem relatively easily, sometimes solely by ear. There are only so many things it could be, and they’re all right in front of you.

With a modern new car, computers enter the picture. Engines are covered, sometimes even inaccessible. Cars crash, but so does firmware. This doesn’t turn that, this sends electrons to that, which are then interpreted by circuits and software and systems. Which one’s to blame when something doesn’t work? It’ll take a while to find out, and at least $1000. Almost anything can be wrong, and you’ll have to leave the car at the shop for a while so they can plug it in and run diagnostics. The human defers to the machine to tell us. Pros and cons for sure.

Now for the parallels to business. Through this car experience lens, I’ve come to see businesses as either analog or digital too. I’m not describing their product or what they make. I’m talking about how they’re structured, how they run. An analog business can make software, and a digital business can make pizza.

An analog business is direct. It’s clear what does what, and how it does it. When something changes, you typically know what changed. There’s some suspension to absorb the bumps, but it’s basic: you’re never too far removed from the root cause. There are fewer managers and more doers. There’s less distance between the customer and the maker. Feedback is heard, not interpreted and translated. No decks, just conversations and writing.

A digital business is indirect and abstracted. The structure is obscured, riddled with departments and groups run by other groups. Decisions are complicated because one too many people are involved. What should be simple has become complex, the result of process that serves the prospect, not the purpose. Everything is padded. Getting to someone requires going through something. Customers are a concept, sliced by demographics. Everything’s a presentation rather than a conversation.

As usual, the analogies and metaphors don’t map perfectly, but hopefully you feel what I’m getting at. Does it resonate?
Amir Ghavi (aghavi.ETH)
@neilturkewitz @Dan_Jeffries1 Neil - with all respect, you’re being a bit of a bully and you’re also wrong. As you know, a work is only a derivative work if it is infringing. You also know that the copyright guidance documents are not sacrosanct, nor do they have the force of law; they’re just rough guidance.
neil turkewitz @neilturkewitz
@Dan_Jeffries1 Hey Dan. If I were you, I’d refrain from advising people about copyright. You write the following. I have attached some materials from the Copyright Office that directly contradict your statement. I guess they don’t understand copyright. Pity that! copyright.gov/circs/circ14.p…
Daniel Jeffries @Dan_Jeffries1
If you want to understand why the Times case has a near zero probability of winning, then read this thread. This fellow does a nice write up and he seems sincere in his belief that what he is saying about the suit is accurate and correct when in fact it's basically just a lot of wishful thinking, misunderstanding of copyright law and red herrings. He's really hopeful that this case will cement the media's right to charge machines to learn, something not even remotely covered by copyright law. The text does not say what he thinks it says and it does not even come close to a "slam dunk." In fact, the opposite.

First, as I've noted before, trying to get everyone to license training data is not going to work because that's not what copyright is about. We all learn for free. We learn from the world around us and so do machines. Writers at the NYT did not pay the Hemingway estate for learning to write short, sharp sentences as young people studying journalism. Young quarterbacks do not have to call up Tom Brady to get permission to study his throwing motion to learn to throw a football. Copyright law is about preventing people from producing exact copies or near exact copies of content and posting it for commercial gain. Period. Anyone who tells you otherwise is lying or simply does not understand how copyright works.

What else does he cite in the write up? The amount of money Microsoft makes? 1 trillion in new value on their stock price! Equating it to a fraction of the training data is utterly preposterous. The NYT claiming the value of reporting on wars and murders and politics as somehow relevant to the case? Not even remotely related. Pointless to even include except as a red herring. It's an attempt to ascribe nebulous public good value to actual value in stock price. No. Just no.

Even the most damning thing, the prompts they cite as evidence of exact output by GPT of Times content, are obviously manipulated. Anyone in AI can see this in under a second. Nobody seems to be able to recreate the verbatim output with the BS prompts they provided. Why? Because the verbatim output almost certainly did not come from memorization, but from retrieval augmented generation (RAG) with web browsing. A programmer probably deliberately prompted it via the API to fetch a specific article and asked it to output part of the text, and they provided only a fraction of the prompt instead of the whole prompt. If I ask it to go fetch a Times article and output that for me then it's on me, not the model. I don't need machine learning to do this. I can do it with programming libraries from decades ago. This is nonsense. And including it will kill this case dead because the lawyers will not be able to reproduce this in the real world.

Almost everything this fellow cites as evidence is sleight of hand, misdirection and not relevant at all to proving actual copyright violations, which are dependent on output not input. This case is going to get eaten alive, just like the Sarah Silverman case and others that were filed with a complete lack of understanding of how AI works, along with grandiloquent claims about copyright and violations that are absurd to even the most basic sniff test.

The most likely outcome for this case is it being settled out of court with MS and OpenAI paying a licensing fee for ongoing training data, which is what this is really about. It will be a bad precedent for everyone, everywhere, because there is no actual ruling and it gives the illusion that they won and people should be held to ransom for training data.
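The retrieval claim in the thread above can be sketched in a few lines: if the verbatim text is handed to the model at request time, echoing it back requires no memorization and no machine learning at all. This is a minimal illustration, not OpenAI's actual pipeline; the names `ARTICLES`, `retrieve`, and `answer` are hypothetical, and a local dictionary stands in for the web-browsing tool.

```python
# Toy "retrieval-augmented" pipeline. Any verbatim output comes from the
# retrieval step, not from anything a model memorized during training.
ARTICLES = {  # hypothetical stand-in for a web-browsing/fetch tool
    "nyt/example": "Verbatim article text the model never trained on.",
}

def retrieve(doc_id: str) -> str:
    """Fetch a document by ID — a plain lookup, no ML involved."""
    return ARTICLES[doc_id]

def answer(doc_id: str, instruction: str) -> str:
    """Echo the retrieved context when the caller asks for it verbatim."""
    context = retrieve(doc_id)
    if "verbatim" in instruction.lower():
        # Copying retrieved context reproduces the source exactly.
        return context
    return f"(summary of {len(context)} characters)"
```

If only the instruction is shown and the retrieval step is hidden, the output looks like memorization even though it is a copy made at request time, which is the sleight of hand the thread alleges.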
Jason Kint @jason_kint

ok, I've now read the full NYT complaint filed this morning vs OpenAI and Microsoft. I'm impressed - it's future-focused around fair value for work vital to democracy. It also contains 220k pages of exhibits although the pages of Ex J stood out to me. more on that in a minute. /1

Aaron Moss @copyrightlately
Judge Orrick's tentative is to dismiss almost all of the claims against Stability AI, Midjourney and DeviantArt with leave to amend. Seems skeptical that Stable Diffusion plausibly incorporates the plaintiffs' works given how small the model is vs the 5B images it was trained on.
Amir Ghavi (aghavi.ETH)
@paulg Because AI is tech and not an app, I suspect existing public cos will start integrating it into their tech stack (with varying levels of veracity) and claim they are AI cos.
Paul Graham @paulg
There are only two ways to satisfy the demand for public AI companies. 1. Companies that are already public will try to claim they're AI companies, possibly by buying actual AI startups. 2. AI startups will go public faster than they would have otherwise.
Paul Graham @paulg
AI is the first big new wave of technology we've had since startups started going public much later. The result is a unique problem: public market investors who want to invest in AI have few options. Most of the good investments are still private.
𝖦𝗋𝗂𝗆𝖾𝗌 ⏳
Ok hate this part but we may do copyright takedowns ONLY for rly rly toxic lyrics w grimes voice: imo you'd rly have to push it for me to wanna take smthn down but I guess plz don't be *the worst*. as in, try not to exit the current Overton window of lyrical content w regards to sex/violence. Like no baby murder songs plz. I think I'm Streisand effecting this now but I don't wanna have to issue a takedown and be a hypocrite later. ***That's the only rule. Rly don't like to do a rule but don't wanna be responsible for a Nazi anthem unless it's somehow in jest a la producers I guess. - wud prefer avoiding political stuff but If it's a small meme with ur friends we prob won't penalize that. Probably just if smthn is viral and anti abortion or smthn like that. Rly rly don't like adding rules so I apologize but this is the only thing
Amir Ghavi (aghavi.ETH) retweeted
Roberto Nickson @rpnickson
Listen to this AI generated song featuring Drake & The Weeknd. It goes so damn hard. It's by "Ghostwriter977" on TikTok and it's blowing up on socials + streaming platforms. UMG, which controls around 1/3 of the global music market, has already asked streaming platforms to ban AI. A modern Napster moment. Will be fascinating to watch this all unfold in real-time.
Amir Ghavi (aghavi.ETH)
@jjvincent It’s not mutually exclusive. Researchers have noted that in many cases OpenAI has used the same technology as Stable Diffusion - they just have not made it public that they do so.
Emad @EMostaque
aka “Your margin is my opportunity”
Emad @EMostaque
Some folk don't realise that copyrighting styles is horrible for artistic freedom and entrenches large content owners' powers. Glad it isn't in visual media.
Amir Ghavi (aghavi.ETH)
@curious_founder I suspect it’s not just limited to NO2, but also radon and other gases. The idea that we would voluntarily (and literally) pipe subterranean gases into our immediate atmosphere makes no sense to me.
Michael Thomas @curious_founder
Researchers just found that gas stoves are responsible for 12.7% of childhood asthma cases. Recently I read dozens of studies about gas stoves and indoor air quality. I also installed monitors in our home and ran my own tests. Here's what I learned.
Amir Ghavi (aghavi.ETH) retweeted
Sergei Galkin @sergeyglkn
A new level of interactivity with Art. Pictures generated by #AI Stable Diffusion + Real time morph in #AR Invite me to an exhibition or something…
Richmond, London 🇬🇧
Amir Ghavi (aghavi.ETH) retweeted
Meredith Whittaker @mer__edith
OK! let’s talk about That Op-ed. The one that insisted not only that privacy is dangerous, but that not affirmatively building surveillance into communication tools is a radical ideological position. 1/ web.archive.org/web/2023010119…
Amir Ghavi (aghavi.ETH) retweeted
Sterling Crispin 🕊️ @sterlingcrispin
Imagine an AI model that's 3x larger and more powerful than GPT3 aka ChatGPT Google already built that in April, called PaLM, on their own TPU hardware competing with NVIDIA. People think ChatGPT will replace Google but they basically invented transformers in '17 (the T in GPT)
Amir Ghavi (aghavi.ETH) retweeted
Daniel Jeffries @Dan_Jeffries1
Very soon AI art generators won't be inspired by any art (already less than 2% of the dataset) + it will still generate art with ease + the whole "stolen remixer" narrative will collapse. Also AI will have no "tells" like crummy hands + it'll be perfectly coherent. Then what?
Amir Ghavi (aghavi.ETH) retweeted
Percy Liang @percyliang
📣 CRFM announces PubMedGPT, a new 2.7B language model that achieves a new SOTA on the US medical licensing exam. The recipe is simple: a standard Transformer trained from scratch on PubMed (from The Pile) using @mosaicml on the MosaicML Cloud, then fine-tuned for the QA task.
Amir Ghavi (aghavi.ETH) retweeted
SMB Attorney @SMB_Attorney
Lawyers we must kill this technology immediately! Someone call Congress and make it illegal.