JesseBridgewater
@drbridgewater

3.2K posts

VP of Data Science at SoFi. Previously @livongo @twitter @ebay @msft... #blm he/him @[email protected]

Atlanta and Silicon Valley · Joined April 2009
932 Following · 666 Followers

Pinned Tweet
JesseBridgewater @drbridgewater
2023 is the year data scientists start using the term Artificial Intelligence without feeling like they are trying to sell something.
JesseBridgewater @drbridgewater
I'm gradually phasing out my use of this account and am migrating to threads and Bluesky.
JesseBridgewater @drbridgewater
I'll definitely take the other side on this one. No one wants a monopoly on intelligent software. Especially govts. I predict there will be meaningful share for a number of large, closed models and the center of gravity will keep shifting to models that people and orgs control.
Nick Dobos @NickADobos:

Prediction: near 0% chance open source LLMs compete with OpenAi Y’all think open source can match training data from millions of people asking new questions every day? Or an open source LLM hosting platform can handle their scale? OpenAi wins. They have customers & data

JesseBridgewater @drbridgewater
Bold hotel wayfinding design choice. My room, 735, is the same distance whether I go left or right. So I *guess* it makes sense to give me no advice??
JesseBridgewater retweeted
Ethan Mollick @emollick
Great paper for teaching & learning. Tell students: “Your goal is to feel awkward and uncomfortable.” Giving an explicit goal of aiming to feel uncomfortable in order to grow makes folks persist in classes, write better, seek out more info & learn more from political opponents.
JesseBridgewater retweeted
SpaceX @SpaceX
Tracking footage of Falcon 9 first stage returning to Earth after launching the Ax-2 mission to orbit
JesseBridgewater retweeted
Emmett Shear @eshear
@Duderichy This is my favorite animation!!! I have it bookmarked on my phone for parties. Such a whitepill on the past 40 years.
JesseBridgewater retweeted
JosH100 @josh_wills
@pdrmnvd @matsonj @Rustlysses I feel this acutely in my coding projects already; like I spend an enormous amount of time thinking and tinkering with what a particular interface should look like and then outsource most of the actual coding/testing/docs to gpt4
JesseBridgewater retweeted
Connor Shorten @CShorten30
SQL-PaLM: Improved Large Language Model Adaptation for Text-to-SQL
arxiv.org/pdf/2306.00739…
“By automating the query generation, Text-to-SQL enables the development of conversational agents with advanced data-analytics abilities [...] outperforms previous fine-tuning based SOTA”
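The Text-to-SQL setup the retweet describes can be sketched as schema-in-prompt generation: the model is shown the table definitions plus a natural-language question and asked to emit one SQL query. A minimal sketch, assuming a hypothetical `orders` table and hard-coding the SQL a model would be expected to return (a real system would send `prompt` to an LLM such as SQL-PaLM):

```python
import sqlite3

def build_text_to_sql_prompt(schema: str, question: str) -> str:
    # Schema-in-prompt style: table DDL plus the question, asking for SQL.
    return (
        "Given the database schema:\n"
        f"{schema}\n"
        f"Question: {question}\n"
        "SQL:"
    )

# In-memory demo database (hypothetical, for illustration only).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)", [(1, 10.0), (2, 32.5)])

schema = "CREATE TABLE orders (id INTEGER, amount REAL)"
question = "What is the total order amount?"
prompt = build_text_to_sql_prompt(schema, question)

# A real system would generate this from `prompt` via the LLM;
# here we stub in the query such a model would be expected to produce.
generated_sql = "SELECT SUM(amount) FROM orders"
total = conn.execute(generated_sql).fetchone()[0]
print(total)  # 42.5
```

Executing the generated query against the live database is what turns the language model into the data-analytics agent the quoted abstract mentions.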
JesseBridgewater retweeted
Shreyas Doshi @shreyas
There is a popular Apple Pie Position, which goes: “If you don’t measure it, you cannot improve it.” This belief is often harmful on early-stage teams. This is more accurate: “If you don’t measure it, you cannot prove to some people that you improved it.” Impact vs. Optics.

Implications:

1) You need leaders who have good taste. Without good taste, everything becomes an exercise to “prove it conclusively”, which slows teams down.

2) As an org grows, it becomes increasingly hard to hire / retain enough people with good taste. Plus, because optics are now more important in such orgs, people end up having to do all sorts of useless work to prove the unprovable.

3) Such orgs can maintain (and hopefully grow) existing mature products (because mature products lend themselves well to management by extensive measurement). But these orgs struggle to produce new products and make them successful. These orgs tend to be uninspiring and, for some people, boring.

4) When you go from such an org to a smaller team or company, you must adjust your approach. If you yourself don’t have good taste, you must first build awareness of that and hire / listen to people who do have good taste.

5) In all cases, understand that there is a difference between *evaluating* how something is going and *measuring* how it is going. In some cases, you can simply evaluate without performing crazy contortions to measure. Of course, such evaluation requires good taste and judgment.

6) If you find yourself feeling upset by any of these words, consider perhaps that you might have tied too much of your identity to a certain way of operating and want to remain convinced that yours is the only right way.

7) If, after considering this, you still don’t feel any differently, that’s okay. Not everything will resonate with everyone at all times. Maybe come back to this idea in a few years and see how it feels then.

P.S. Some people proudly repeat the “if you can’t measure it, you can’t manage it” quote, not realizing that they are actually mis-quoting Deming, who actually said the exact opposite: “It is wrong to suppose that if you can’t measure it, you can’t manage it – a costly myth.” 🙂
JesseBridgewater @drbridgewater
ML / DS people: just a reminder that this is our moment and the next few years are going to be huge. Don't settle for low-impact work. Your work should have line-of-sight to revolutionize something.
JesseBridgewater retweeted
Hal Daumé III @haldaume3
Today is the official first day of @trails_ai: the Institute for Trustworthy AI in Law & Society! 🎉🥂🥳 It was a long path getting here, and I'm really excited for the next 5+ years working with an amazing team of scholars, educators and communities. trails.umd.edu >
JesseBridgewater @drbridgewater
Is there a credible list of doom scenarios that AI risk researchers maintain (and try to estimate the likelihood of)? My guess is that the probability of any foom AGI scenario is pretty close to zero.
JesseBridgewater @drbridgewater
Strongly agree. Any scientific breakthrough could produce unforeseen risk. It doesn't mean we regulate science except in very narrow areas where the risk is very clear (e.g. bio weapons).
François Chollet @fchollet:

To be clear, at this time and for the foreseeable future, there does not exist any AI model or technique that could represent an extinction risk for humanity. Not even in nascent form, and not even if you extrapolate capabilities far into the future via scaling laws.

JesseBridgewater retweeted
Jessy Lin @realJessyLin
How can agents like LLMs become decision-making partners for humans? 💬 Excited to share a new paper + suite of envs for decision-oriented dialogues, where agents + humans collab to solve hard everyday problems. [1/n] Site: collaborative-dialogue.github.io
JesseBridgewater @drbridgewater
Opportunity is much more inspiring than fear. That alone is reason to look at AI risk differently than the doomers. I think we will find the energy to build safe AI systems by building powerful AI tools that we want to use. Fear of extinction will motivate very few.
JesseBridgewater retweeted
Jacob Austin @jacobaustin132
Super super happy to be able to talk about DIDACT, the first code LLM trained to model real software developers editing code, fixing builds, and doing code review end-to-end. Developers don't write code in one go and neither should our models! 1/n