Daniel Riek
@RiekDaniel
Free Software (not as in Beer). Opinions still are mine unless explicitly stated otherwise.
Boston, MA · Joined July 2019
96 Following · 267 Followers
497 posts
Daniel Riek retweeted
Yann LeCun @ylecun
SB-1047 would definitely have a chilling effect on open source AI and the entire AI ecosystem. Very much hoping that Governor @GavinNewsom will veto it.
Andrew Ng @AndrewYNg

A decision on SB-1047 is due soon. Governor @GavinNewsom has said he's concerned about its "chilling effect, particularly in the open source community". He's right, and I hope he will veto this. If you agree, please like/retweet this to show your support for VETOing SB-1047!

Daniel Riek retweeted
Julien Chaumond @julien_c
anyone remember Google Bard?
Daniel Jeffries @Dan_Jeffries1
I've also come to realize that there is a pattern with these doomer academics who have never actually touched a real AI system. They live in theory land. It's the difference between an experimental physicist and a theorist. Experimental physics means getting your hands dirty in the real world with a messy, hard-to-design experiment. It doesn't just magically pop into reality. They never lived it. They aren't builders.

It's straight up hard to set up these training systems and inference systems. They are literal supercomputers, and they require specialized hardware and software, not to mention the skills to buy, set up, and configure them, along with absurd cash and purchasing negotiation skills. These folks have never even sniffed one of these systems themselves. Their DevOps team (when they moonlight at companies) or students do it all for them and hide the complexity.

It's the only possible way anyone could believe in systems that magically "copy themselves" or "self replicate." You can't just copy the multi-terabyte weights of a foundation model and run them on a gamer PC or grandma's laptop or another supercomputer, because, you know, someone will notice another process running on that billion-dollar supercomputer with their monitoring system and say, ah, what the hell is this thing? A botnet might replicate to your crappy home router, but your foundation model won't be replicating to home PCs anytime soon.

It's also absurd that these mathematically minded people can believe in something like intelligence that can improve recursively, infinitely, as if there are no constraints whatsoever in terms of precision/ability/time/data, etc. All of these naive fantasy opinions come from being divorced from the actual real world of boring, hard, dirty, slow, painstaking hardware and software work to make these little snowflake models run and train reliably.
martin_casado @martin_casado
Leave us alone dude. Yoshua is a Canadian academic who has no sensitivity for protecting what has made California the greatest innovation hub in history. Yoshua isn't even accountable under SB1047. We have our own AI experts who would be accountable, and they have overwhelmingly spoken out against it, far more than in support. And even more privately, to the Governor directly.
Yoshua Bengio @Yoshua_Bengio

I read California Governor @GavinNewsom's comments about SB1047 yesterday: "The governor said he is weighing what risks of AI are demonstrable versus hypothetical." bloomberg.com/news/articles/…

Here is my perspective on this: Although experts don't all agree on the magnitude and timeline of the risks, they generally agree that as AI capabilities continue to advance, major public safety risks such as AI-enabled hacking, biological attacks, or society losing control over AI could emerge.

Some reply to this: "None of these risks have materialized yet, so they are purely hypothetical". But (1) AI is rapidly getting better at abilities that increase the likelihood of these risks, and (2) we should not wait for a major catastrophe before protecting the public.

Many people at the AI frontier share this concern, but are locked in an unregulated rat race. Over 125 current & former employees of frontier AI companies have called on @CAGovernor to #SignSB1047.

I sympathize with the Governor's concerns about potential downsides of the bill. But the California lawmakers have done a good job at hearing many voices – including industry, which led to important improvements. SB 1047 is now a measured, middle-of-the-road bill. Basic regulation against large-scale harms is standard in all sectors that pose risks to public safety.

Leading AI companies have publicly acknowledged the risks of frontier AI. They've made voluntary commitments to ensure safety, including to the White House. That's why some of the industry resistance against SB 1047, which holds them accountable to those promises, is disheartening.

AI can lead to anything from a fantastic future to catastrophe, and decision-makers today face a difficult test. To keep the public safe while AI advances at unpredictable speed, they have to take this vast range of plausible scenarios seriously and take responsibility.
AI can bring tremendous benefits – but only if we steer it wisely, instead of just letting it happen to us and hoping that all goes well. I often wonder: Will we live up to the magnitude of this challenge? Today, the answer lies in the hands of Governor @GavinNewsom.

Daniel Riek retweeted
Amjad Masad @amasad
How could computers multiplying numbers lead to "literal human extinction?" Doomers never say how, and politicians never press them because they imagine something like Terminator. The whole thing is an absurd clownshow.
CSPAN @cspan

AI expert Helen Toner (@hlntnr): "If they succeed in building computers that are as smart as humans or perhaps far smarter than humans, that technology will be at a minimum extraordinarily disruptive and at a maximum could lead to literal human extinction."

Daniel Riek @RiekDaniel
@austin_rief They are paying the millions for an external party to tell the company what it cannot say for political/cultural/relationship reasons internally.
Austin Rief ☕️ @austin_rief
Consulting is dead. I’m on a plane sitting next to a woman who works at a top consulting firm, making a presentation for a fortune 50 client. ChatGPT has written every single word in this deck. What happens when clients find out they’re paying millions for ChatGPT?
Daniel Riek @RiekDaniel
@timothysc I am actually interested in trying that. Why? Because I want a standardized way to rebuild all dependencies for a specific ISA target. Right now I am rebuilding a bunch of Fedora packages - but that is painful. So thinking about moving to Gentoo or Nix.
Daniel Riek @RiekDaniel
"AI may eliminate millions of jobs, but it also may help us cure cancer" Do you know how many well paying jobs depend on cancer? So are you suggesting we keep that around? If not, what is the point?
Daniel Riek retweeted
Daniel Jeffries @Dan_Jeffries1
Maybe if we red teamed legislation as fiercely as we red team AI, we'd get better legislation. Sadly, many people who propose legislation are actively hostile to any and all feedback. This post from a lawyer, engineer, and former FTC employee looks at the unintended side effects of legislation like SB1047 and how "reasonable" is always in the mind of the beholder and the enforcer.
Neil Chilson ⤴️⬆️🆙📈 🚀 @neil_chilson

x.com/i/article/1827…

Daniel Riek retweeted
Daniel Jeffries @Dan_Jeffries1
1. These are not whistleblowers. They exposed no crime. This is not Enron. They didn't even expose bad security practices, just practices they felt were not good enough for them.
2. OpenAI just publicly stated that they oppose this legislation, because it is poorly designed. They also said they favor federal legislation.
3. Just because someone supports legislation in general does not mean they have to support bad legislation.
Daniel Riek retweeted
Yann LeCun @ylecun
An important op-ed by Mark Zuckerberg (Meta) and Daniel Ek (Spotify) in The Economist about the fuzzy and disparate regulatory landscape in the EU and how it is becoming an obstacle to the development and deployment of technology in the EU, particularly when it comes to AI and open source.
The Economist @TheEconomist

Europe is “particularly well placed” to make the most of a coming wave in open-source AI, argue the tech CEOs. Yet fragmented regulation is “hampering innovation and holding back developers” econ.st/4fWLHHU Illustration: Sam Kerr

Daniel Riek retweeted
Bindu Reddy @bindureddy
Maybe the doomers can shut up now! An official-sounding study that proclaims AI is not an existential threat to humanity just dropped! 😃
Daniel Riek @RiekDaniel
@tegmark And then they find out how bad any of this regulation actually would be. Better late than never.
Alex @alex_avoigt
🇪🇺 Europe is 🔥warming at 2x the global average. People are dying every day, every week, and every month because of heat, and some still claim that we have many years to go, should continue burning fossil fuels, and that it's not that bad yet. No, there is no time to lose. It's perfectly fine if you don't believe me or think I'm exaggerating the situation, but then I suggest you travel to the black areas on the map below and experience it yourself.
[attached image: map]
Greenpeace International @Greenpeace

“Europe is warming at twice the rate of the global average – we can’t rest on our laurels.” Fossil fuels are turbo-charging extreme heat-related weather events, risking people's health and lives. It's time for rich polluters to #StopDrillingStartPaying! act.gp/46GHvHT

Daniel Riek retweeted
Daniel Jeffries @Dan_Jeffries1
@tsarnick This is an absolutely clownish take that gets more unhinged as we go along. When this is the kind of commentary coming from respected tech leaders is it even surprising that people are confused? Like go make infinite TikToks and steal all the users? What the actual hell?
Daniel Riek retweeted
Michael A. Arouet @MichaelAArouet
Even if we deindustrialized Europe back into the Middle Ages, as some degrowth weirdos dream of, it wouldn't help the climate much. The problem is somewhere else; unfortunately, even throwing tomato soup at the Mona Lisa has fairly limited impact in this regard.
Daniel Riek retweeted
Daniel Jeffries @Dan_Jeffries1
People whose primary approach to life is preventing all risk and preventing all harm do more harm than the harm they were trying to prevent.