Robert Luciani
@r3tex
593 posts

I am nothing.

Stockholm, Sweden · Joined September 2009
76 Following · 289 Followers
Robert Luciani @r3tex
@elonmusk Mathematicians see the solution immediately. Just invert the ethical valence and the problem becomes trivial. Pick up five passengers on your normal trolley route, or leave them and pick up the philosopher that has been waiting 2000 years for the AI ethics train...
Danderyd, Sweden 🇸🇪
Elon Musk @elonmusk
Grok answers correctly
Robert Luciani retweeted
Dr Sam Burgess 🌍🌡🛰 @OceanTerra
Provisional ERA5 global temperature for 17th November from @CopernicusECMWF was 1.17°C above 1991-2020 - the warmest on record. Our best estimate is that this was the first day when global temperature was more than 2°C above 1850-1900 (or pre-industrial) levels, at 2.06°C.
Robert Luciani retweeted
Leon Simons 🌍 @LeonSimons8
🌎🌡📈 ΔT +2.06 °C !! This is what uncharted territory looks like. The first time that the global temperature anomaly has broken through +2.0 °C, for a single day in observational history. Much more to come in the months ahead.
Leon Simons 🌍 tweet media
Quoting Dr Sam Burgess 🌍🌡🛰 @OceanTerra:
Provisional ERA5 global temperature for 17th November from @CopernicusECMWF was 1.17°C above 1991-2020 - the warmest on record. Our best estimate is that this was the first day when global temperature was more than 2°C above 1850-1900 (or pre-industrial) levels, at 2.06°C.
Robert Luciani @r3tex
@ednewtonrex This whole discussion is really about protecting an industry. Speaking as someone who helped establish a genre of music, I can assure you that what is bad for record labels and "pros" does not translate to being bad for art or consumers of art.
Ed Newton-Rex @ednewtonrex
I’ve resigned from my role leading the Audio team at Stability AI, because I don’t agree with the company’s opinion that training generative AI models on copyrighted works is ‘fair use’.

First off, I want to say that there are lots of people at Stability who are deeply thoughtful about these issues. I’m proud that we were able to launch a state-of-the-art AI music generation product trained on licensed training data, sharing the revenue from the model with rights-holders. I’m grateful to my many colleagues who worked on this with me and who supported our team, and particularly to Emad for giving us the opportunity to build and ship it. I’m thankful for my time at Stability, and in many ways I think they take a more nuanced view on this topic than some of their competitors.

But, despite this, I wasn’t able to change the prevailing opinion on fair use at the company. This was made clear when the US Copyright Office recently invited public comments on generative AI and copyright, and Stability was one of many AI companies to respond. Stability’s 23-page submission included this on its opening page: “We believe that AI development is an acceptable, transformative, and socially-beneficial use of existing content that is protected by fair use”.

For those unfamiliar with ‘fair use’, this claims that training an AI model on copyrighted works doesn’t infringe the copyright in those works, so it can be done without permission, and without payment. This is a position that is fairly standard across many of the large generative AI companies, and other big tech companies building these models — it’s far from a view that is unique to Stability. But it’s a position I disagree with.

I disagree because one of the factors affecting whether the act of copying is fair use, according to Congress, is “the effect of the use upon the potential market for or value of the copyrighted work”. Today’s generative AI models can clearly be used to create works that compete with the copyrighted works they are trained on. So I don’t see how using copyrighted works to train generative AI models of this nature can be considered fair use.

But setting aside the fair use argument for a moment — since ‘fair use’ wasn’t designed with generative AI in mind — training generative AI models in this way is, to me, wrong. Companies worth billions of dollars are, without permission, training generative AI models on creators’ works, which are then being used to create new content that in many cases can compete with the original works. I don’t see how this can be acceptable in a society that has set up the economics of the creative arts such that creators rely on copyright.

To be clear, I’m a supporter of generative AI. It will have many benefits — that’s why I’ve worked on it for 13 years. But I can only support generative AI that doesn’t exploit creators by training models — which may replace them — on their work without permission.

I’m sure I’m not the only person inside these generative AI companies who doesn’t think the claim of ‘fair use’ is fair to creators. I hope others will speak up, either internally or in public, so that companies realise that exploiting creators can’t be the long-term solution in generative AI.
Robert Luciani @r3tex
You can tell when engineers with a passion for computer science are working on ML. This is really inspiring. Great job @AdeptAILabs. Time for me to wrap the C++ code in Julia :) adept.ai/blog/fuyu-8b
Robert Luciani @r3tex
@karpathy Seems reasonable to assume that training encodes a directed A->B edge without considering that the English word "is" implies equality symmetry, which would require two edges. At inference time, the mesa-optimizer would have to infer that missing edge.
Andrej Karpathy @karpathy
LLM knowledge is a lot more "patchy" than you'd expect. I still don't have great intuition for it. They learn any thing in the specific "direction" of the context window of that occurrence and may not generalize when asked in other directions. It's a weird partial generalization. The "reversal curse" (cool name) is imo a special case of this.
Quoting Owain Evans @OwainEvans_UK:
Does a language model trained on “A is B” generalize to “B is A”? E.g. When trained only on “George Washington was the first US president”, can models automatically answer “Who was the first US president?” Our new paper shows they cannot!
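The directed-edge picture in the exchange above can be made concrete with a toy sketch (illustrative only; the storage scheme and names here are assumptions, not how any real model represents facts). If "A is B" is stored as a single A->B edge, the forward query is a direct lookup, while the reverse query only succeeds via extra work that stands in for the "missing edge" the reply describes.

```python
# Toy model of the reversal asymmetry: one directed edge per statement.
# All names and structures here are illustrative, not from a real LLM.

facts = {}  # subject -> object, a single directed A->B edge


def learn(subject, obj):
    """Store the statement "subject is obj" as one directed edge."""
    facts[subject] = obj


def query_forward(subject):
    """"<subject> is ...?" follows the stored edge directly."""
    return facts.get(subject)


def query_reverse(obj):
    """"Who is <obj>?" has no stored B->A edge, so we must scan —
    the extra inference step the missing symmetric edge would avoid."""
    for subject, o in facts.items():
        if o == obj:
            return subject
    return None


learn("George Washington", "the first US president")

print(query_forward("George Washington"))       # direct hit
print(query_reverse("the first US president"))  # recovered only by scanning
```

The point of the sketch: forward retrieval is free once the edge exists, but reverse retrieval requires either a second stored edge or inference-time search, mirroring the "weird partial generalization" described above.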
Robert Luciani @r3tex
GPT-4 is completely unable to write in iambic pentameter without rhyming couplets :)
Robert Luciani @r3tex
@sirbayes It's always disappointing when they just present a hypothesis.
Kevin Patrick Murphy @sirbayes
Interesting paper. I agree that language models are not models of the world, they are ways of communicating partial knowledge about world states. We learn world models from vision and action, and language builds on top of (and expands) this. arxiv.org/abs/2306.12672
Robert Luciani retweeted
John Carmack @ID_AA_Carmack
I sometimes play with the constraints of building a big AI cluster out of consumer hardware in case datacenter regulatory oversight got silly, but I am consciously LARPing as a rebel, to contrast with the AI doomers LARPing as imperials.
Bryan Johnson @bryan_johnson
I’d ask you what you think, but I really don’t care 🫶🏻
Robert Luciani @r3tex
LLMs will be training themselves soon because, just as with P vs NP, validating that an answer is good is much easier than producing a good answer.
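The verify-versus-generate gap the tweet leans on can be sketched with a classic example (my choice of problem, not the tweet's): for subset-sum, checking a proposed certificate is a single cheap pass, while producing one may require searching all 2^n subsets.

```python
# Sketch of the verify/generate asymmetry via subset-sum.
# Checking a candidate answer is O(n); finding one is brute-force
# over up to 2^n subsets. The instance below is illustrative.
from itertools import combinations


def verify(nums, subset, target):
    """Cheap: confirm the certificate's elements come from nums
    and sum to the target."""
    return all(x in nums for x in subset) and sum(subset) == target


def solve(nums, target):
    """Expensive: exhaustive search over all subsets of nums."""
    for r in range(len(nums) + 1):
        for subset in combinations(nums, r):
            if sum(subset) == target:
                return list(subset)
    return None


nums = [3, 9, 8, 4, 5, 7]
answer = solve(nums, 15)          # slow to find...
print(verify(nums, answer, 15))   # ...fast to check: True
```

In the tweet's framing, a model acting as its own verifier exploits exactly this gap: grading a candidate answer is the easy direction even when generating it is the hard one.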
Robert Luciani @r3tex
2023 - people start torrenting Neural Network weights like we used to torrent movies and games.
Robert Luciani retweeted
Yann LeCun @ylecun
Hotter take: ML would have advanced faster if another front-end language had been available and widely adopted instead of Python. One that is interactive yet fast & compilable, multithreaded (no GIL), isn't bloated, doesn't care about white spaces,... E.g. Julia or some Lisp.
Quoting Bojan Tunguz @tunguz:
Hot take: Machine Learning would not have been nearly as advanced now were it not for Python. Python’s two main virtues in the context of ML: 1. Lowering barriers to entry. 2. As a scripting language, it encourages and enables experimental workflow.
Robert Luciani @r3tex
Whether an AI can be "factual" is an epistemological question, not a technical question.
Robert Luciani retweeted
FINAL FANTASY VII @finalfantasyvii
We’re celebrating the 26th anniversary of Final Fantasy VII with specially curated playlists for each main character in the game! 🎵 Please take your time and enjoy listening to them all here: sqex.link/vwep