joël @joelstc
2K posts
mr worldwide · Joined July 2016
945 Following · 327 Followers

Pinned Tweet
joël @joelstc ·
read the resources being shared. don’t just like them. read the resources being shared. don’t just like them. read the resources being shared. don’t just like them. read the resources being shared. don’t just like them. read the resources being shared. don’t just like them. read
1 reply · 6 reposts · 19 likes · 0 views
joël retweeted
Tem @temnco ·
someone posted this genius on linkedin
Tem tweet media
100 replies · 5K reposts · 42K likes · 2.5M views
joël @joelstc ·
If the issue is with small boats perhaps one might make the boats larger such that they are easier to catch in nets or, alternatively, make the country smaller so the boats are easier to see in comparison?
0 replies · 0 reposts · 0 likes · 144 views
joël retweeted
Karl Hansen @karl_fh ·
"Under 'policies', you've just put 'growth'" "Yeah" "That's your job, though. I was sort of looking for your policies within the job, so is there anything else you could put there?"
Karl Hansen tweet media
5 replies · 149 reposts · 1.3K likes · 41.2K views
joël retweeted
Scott Bryan @scottygb ·
Ed Davey has just done an interview about the UK rejoining the single market whilst on the teacups
158 replies · 637 reposts · 7.9K likes · 1.2M views
joël retweeted
Olivia Moore @omooretweets ·
@a16z @illscience We see use cases for voice agents across B2B and B2C. For businesses - replace labor with software! For consumers - provide access to previously expensive services (therapy, coaching, etc.) + unlock new types of conversations. Read our full thesis here: gamma.app/docs/a16z-Real…
1 reply · 3 reposts · 30 likes · 8.2K views
joël retweeted
Liam Thorp @LiamThorpECHO ·
Rishi Sunak had a terrible start to his campaign, took an unscheduled day off to work with his team and came up with the single worst policy of modern times. Fair play.
86 replies · 975 reposts · 9.1K likes · 363.4K views
joël @joelstc ·
This is odd because it draws further attention to the issue but is without clear next steps or understanding of the critique - why tweet about it at all??
Quoted tweet: Greg Brockman @gdb

We’re really grateful to Jan for everything he's done for OpenAI, and we know he'll continue to contribute to the mission from outside. In light of the questions his departure has raised, we wanted to explain a bit about how we think about our overall strategy.

First, we have raised awareness of the risks and opportunities of AGI so that the world can better prepare for it. We've repeatedly demonstrated the incredible possibilities from scaling up deep learning and analyzed their implications; called for international governance of AGI before such calls were popular; and helped pioneer the science of assessing AI systems for catastrophic risks.

Second, we have been putting in place the foundations needed for safe deployment of increasingly capable systems. Figuring out how to make a new technology safe for the first time isn't easy. For example, our teams did a great deal of work to bring GPT-4 to the world in a safe way, and since then have continuously improved model behavior and abuse monitoring in response to lessons learned from deployment.

Third, the future is going to be harder than the past. We need to keep elevating our safety work to match the stakes of each new model. We adopted our Preparedness Framework last year to help systematize how we do this.

This seems like as good of a time as any to talk about how we view the future. As models continue to become much more capable, we expect they'll start being integrated with the world more deeply. Users will increasingly interact with systems — composed of many multimodal models plus tools — which can take actions on their behalf, rather than talking to a single model with just text inputs and outputs. We think such systems will be incredibly beneficial and helpful to people, and it'll be possible to deliver them safely, but it's going to take an enormous amount of foundational work. This includes thoughtfulness around what they're connected to as they train, solutions to hard problems such as scalable oversight, and other new kinds of safety work.

As we build in this direction, we're not sure yet when we’ll reach our safety bar for releases, and it’s ok if that pushes out release timelines.

We know we can't imagine every possible future scenario. So we need to have a very tight feedback loop, rigorous testing, careful consideration at every step, world-class security, and harmony of safety and capabilities. We will keep doing safety research targeting different timescales. We are also continuing to collaborate with governments and many stakeholders on safety.

There's no proven playbook for how to navigate the path to AGI. We think that empirical understanding can help inform the way forward. We believe both in delivering on the tremendous upside and working to mitigate the serious risks; we take our role here very seriously and carefully weigh feedback on our actions.

— Sam and Greg
0 replies · 0 reposts · 1 like · 190 views
joël retweeted
JJ @JosephJacks_ ·
IMHO, Anthropic and OpenAI are playing a losing game trying to continue scaling compute by hemorrhaging equity capital losses (massively underwater CAC) as their core competitive strategy. Let me unpack this view:

Hyperscalers (specifically MSFT, META and GOOG) have orders of magnitude more resources (talent), users (billions in MAUs) and capital ($200B+ of working capital between them, to be exact.. and this could be 2-3X that quite easily with their liquid stock factored in) to build and scale SOTA models themselves.

Additionally, they are in the lovely position to not need to monetize the models whatsoever - they can infuse them in their existing cash cow businesses and transform those value propositions for customers, boosting engagement, revenue expansion, data acquisition and a lot more. "How do we get customers to pay for the models??" is not even a question. You give the models away entirely. META does this explicitly with zero issue in the beautiful absence of a business model conflict.

As a result, I don't see a rosy future for Anthropic or OpenAI at all. Even in the enterprise market. They serve a useful purpose to give interesting product ideas to the hyperscale incumbents. And to entertain the industry. But not much else.

So... where is the real opportunity? Radically more compute efficient approaches for achieving whatever approximates as "AGI" or "ASI"... which cannot be defined by anyone, but will indeed bring immense value for humanity. Some risks, but vastly more benefits than risks.

In the absence of massively (and I mean orders of magnitude) more compute efficient approaches being unlocked by a startup for scaling machine intelligence without training runs that cost tens of billions of dollars (which is the current regime in the next 6 months), the hyperscaler incumbents will absolutely take 90%+ of the market share. There is no other path that I can realistically see happening.
67 replies · 89 reposts · 853 likes · 405.6K views
joël retweeted
Kevin → Plant Daddy @KevinEspiritu ·
I want my take on the record: something is fishy about OpenAI. It's been obvious to me since November '23 and all of the bizarre drama that ensued, in cagey interviews given by Sam/etc., and the recent no-info resignations of big players. I'm a dumb gardener, but this is my take
74 replies · 18 reposts · 848 likes · 407.9K views
joël @joelstc ·
Prank where influencer kills ur dog but buys you a new one so it’s all good 👍🏾
0 replies · 0 reposts · 3 likes · 158 views
joël @joelstc ·
@bigstrongthumbs agree it sounds more human and (probably!) more accurate - i think there's just a broader sense in which this isn't really solving a problem that exists in the way in which it used to - there must be more interesting applications of what's been developed than a slightly faster GT
1 reply · 0 reposts · 0 likes · 34 views
Ioana @bigstrongthumbs ·
@joelstc no clue when you'd use this & not GT or a human lmao. I guess it's about the inflections of the voice, the ability to learn and improved accuracy? similar to the diff bw GPT and a chatbot from a decade ago?
1 reply · 0 reposts · 1 like · 70 views
joël @joelstc ·
lord grant me the confidence of someone who only understands 30% of the topic they are talking about
0 replies · 0 reposts · 1 like · 130 views
joël retweeted
Joe Weisenthal @TheStalwart ·
*GAMESTOP TRADING HALTED FOR VOLATILITY AGAIN, SHARES UP 69%
57 replies · 41 reposts · 508 likes · 361.7K views
joël @joelstc ·
tired of donald glover and his shenanigans
0 replies · 0 reposts · 1 like · 156 views