Sheila B

315 posts

@RealSheilaB

Technology Consultant | Technical Due Diligence | Artificial Intelligence Auditor | Views are my own, RT are not Endorsements

Joined September 2021
703 Following · 148 Followers
Sheila B @RealSheilaB
What's up @AnthropicAI #anthropic #claudeai - I'm having a hard time getting into my account, and when I do, I get the "Taking longer than usual..." message and nothing happens, even when I try again and again shortly after! #CustomerService
0 replies · 0 reposts · 2 likes · 97 views
Sheila B reposted
Michael Moor @Michael_D_Moor
🚀 New PhD position in my group: jobs.ethz.ch/job/view/JOPG_… We're hiring a doctoral student at ETH Zurich (located in D-BSSE, Basel) to work on medical reasoning. 🌍 Fully funded PhD as part of the EU-funded Marie Curie project "MLCARE". Our group has access to high-end GPU clusters, is embedded in the life science hub of Switzerland in Basel, and is involved with the ETH AI Center and SwissAI projects. For more details on the position, check out the link. 📩 Please share!
12 replies · 45 reposts · 252 likes · 40K views
Sheila B @RealSheilaB
Everyone (and their grandpa) is publishing ‘State of #AI’ reports right now. Does anyone actually read them, or are they just gathering slideware generation points?
0 replies · 0 reposts · 0 likes · 10 views
Sheila B @RealSheilaB
Bonjour #Paris! ❤️ More on my adventure and the wonderful people I met, coming soon! Hello from beautiful Paris! 🇫🇷👩‍🎨 #adventure #france #ai
0 replies · 0 reposts · 1 like · 38 views
Sheila B reposted
Peter Wang 🦋 @pwang
When humanity does create AGI, it will be named Untitled14.ipynb
41 replies · 263 reposts · 2.5K likes · 220.3K views
Sheila B @RealSheilaB
@burkov LOL, must have an EU link; they are on strike every other day.
0 replies · 0 reposts · 0 likes · 13 views
BURKOV @burkov
GPT-4 is officially annoying. You ask it to generate 100 entities. It generates 10 and says "I generated only 10. Now you can continue by yourself in the same way." You change the prompt by adding "I will not accept fewer than 100 entities." It generates 20 and says: "I stopped after 20 because generating 100 such entities would be extensive and time-consuming." What the hell, machine?
498 replies · 215 reposts · 4.9K likes · 1.5M views
Sheila B @RealSheilaB
@AndrewYNg @geoffreyhinton A scientist should never claim their truth is the absolute truth, regardless of how they discovered it, especially when there are so many variables and such a high level of complexity. I'd say scientists who claim this have a god complex!
0 replies · 0 reposts · 1 like · 215 views
Andrew Ng @AndrewYNg
I'd like to respectfully point out that the logic in this argument is based on a flawed model for how scientists think. Scientists don't just take a weighted average of others' opinions to form their own. A good scientist takes as input lots of data, including others' opinions, and then ultimately has to reason, build their own internal model of the world, and draw their conclusions from that model. I give your opinion a lot of weight. And, after having heard many opinions including yours, my internal model tells me that there is essentially no risk of AI human extinction. So I don't follow the logic that because @ylecun or anyone else disagrees with you (and other AI extinctionists, which IMO are in the minority) that they gave your opinion very little weight, unless we think scientists arrive at conclusions by taking a weighted average of what everyone else thinks. Since you've gone against the grain many times in your own career -- often brilliantly so -- I assume you're also familiar with what it feels like to give someone's opinion weight, but then ultimately to draw a different conclusion!
220 replies · 588 reposts · 7.1K likes · 1.1M views
Geoffrey Hinton @geoffreyhinton
Yann LeCun thinks the risk of AI taking over is minuscule. This means he puts a big weight on his own opinion and a minuscule weight on the opinions of many other equally qualified experts.
606 replies · 483 reposts · 4.3K likes · 2.9M views
Sheila B @RealSheilaB
@ylecun Needs to come with a disclaimer: this is a paid advertisement by M***
0 replies · 0 reposts · 0 likes · 14 views
Yann LeCun @ylecun
There is at least one industry research lab where the leadership believes that (super)human-level AI:
- is attainable
- is a scientific research question, not just a question of more compute and more data
- is not "just around the corner"; it will take a while
- is not an existential risk
- requires contributions from the entire research community, because no one has a monopoly on good ideas
- hence requires open source platforms and open research
- is going to change the human condition for the better
195 replies · 429 reposts · 4K likes · 923K views
Sheila B reposted
Geoffrey Hinton @geoffreyhinton
New paper: managing-ai-risks.com Companies are planning to train models with 100x more computation than today’s state of the art, within 18 months. No one knows how powerful they will be. And there’s essentially no regulation on what they’ll be able to do with these models.
154 replies · 682 reposts · 3.1K likes · 1.4M views