david🤓

2.3K posts

@dwhdai

📈 machine learner · 🏒 #TML fan · @[email protected] · Opinions are my own.

Toronto, Ontario · Joined August 2014
92 Following · 140 Followers
Jowanza Joseph (@Jowanza):
Got a new MacBook Pro, lemme see if I can get Python 3.7 installed
172 replies · 1.4K reposts · 8.1K likes · 0 views
Collin Rugg (@CollinRugg):
JUST IN: Boeing offers condolences after a passenger was killed on a Boeing 777 plane, says their “thoughts” are with the passengers and crew. In total 30 people were injured and a 73-year-old British man was killed. The incident happened after the plane fell a whopping 6000 feet. “Suddenly the aircraft starts tilting up and there was shaking so I started bracing for what was happening, and very suddenly there was a very dramatic drop so everyone seated and not wearing seatbelt was launched immediately into the ceiling,” a passenger said. “Some people hit their heads on the baggage cabins overhead and dented it, they hit the places where lights and masks are and broke straight through it.”
4.2K replies · 11.4K reposts · 55.5K likes · 31.1M views
Michael Fralick (@FralickMike):
One-pager on a handful of non-antibiotic meds we prescribe often on GIM. Feedback welcome 🤓 @GIMtoronto. If you want to learn more about SGLT2: sglt2rx.com [needs some updates, i know]
[image]
2 replies · 1 repost · 10 likes · 2.1K views
david🤓 retweeted
Ai2 (@allen_ai):
OLMo is here! And it’s 100% open. It’s a state-of-the-art LLM and we are releasing it with all pre-training data and code. Let’s get to work on understanding the science behind LLMs. Learn more about the framework and how to access it here: blog.allenai.org/olmo-open-lang…
[GIF]
28 replies · 332 reposts · 1.4K likes · 358K views
david🤓 (@dwhdai):
i've been particularly enjoying the inline copilot command, where i can ask copilot to do specific things
0 replies · 0 reposts · 0 likes · 18 views
david🤓 (@dwhdai):
@DynamicWebPaige @ImportFrom >2 years later, i'm using copilot again (woohoo work got a license!) and it's taught me this neat pathlib syntactic sugar for combining Paths
[image]
1 reply · 0 reposts · 1 like · 27 views
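The attached screenshot isn't preserved, but the pathlib sugar for combining Paths being described is most likely the overloaded `/` operator, which joins path components without manual separator handling. A minimal sketch (the file names are made up):

```python
from pathlib import Path

# pathlib overloads the / operator, so path components compose
# without worrying about separators:
config = Path("projects") / "app" / "config.yaml"

# Equivalent to the more verbose joinpath form:
assert config == Path("projects").joinpath("app", "config.yaml")

print(config)  # projects/app/config.yaml (on POSIX)
```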
david🤓 (@dwhdai):
happy 1 year birthday to ChatGPT! til that you can provide custom instructions to ChatGPT to have its responses be tailored to your needs. improves the experience dramatically imo
[image]
Quoting Greg Brockman (@gdb):
Happy birthday ChatGPT! One year ago we released what we intended to be a “low-key research preview” expecting that the real moment of excitement would be GPT-4 launch. Became all-hands-on-deck scaling effort—GPU efficiency, db, even auth. Thank you everyone for your passion!
0 replies · 0 reposts · 0 likes · 96 views
david🤓 (@dwhdai):
@pika_labs this is the most hype product launch video ever
0 replies · 0 reposts · 0 likes · 14 views
Pika (@pika_labs):
Introducing Pika 1.0, the idea-to-video platform that brings your creativity to life. Create and edit your videos with AI. Rolling out to new users on web and discord, starting today. Sign up at pika.art
1.3K replies · 4.9K reposts · 24.4K likes · 20.7M views
david🤓 (@dwhdai):
maybe satya orchestrated sam’s dismissal just so msft could acquihire openai for nothing
0 replies · 0 reposts · 0 likes · 49 views
david🤓 retweeted
Sam Altman (@sama):
i loved my time at openai. it was transformative for me personally, and hopefully the world a little bit. most of all i loved working with such talented people. will have more to say about what’s next later. 🫡
6.1K replies · 8.8K reposts · 89.9K likes · 26M views
david🤓 retweeted
Mike Coutermarsh (@mscccc):
Software developer hobby ladder:
1. Text editors from the 70s
2. Pour over coffee
3. Writing a book
4. Mechanical keyboards
5. Endurance sports
47 replies · 72 reposts · 709 likes · 136.3K views
Harrison Chase (@hwchase17):
🔥Really excited to do another course with @DeepLearningAI! "Functions, Tools, and Agents" goes over the basics of OpenAI function calling as well as how to use it to do tagging, extraction, tool selection, and we even build up to a conversational agent!
Quoting DeepLearning.AI (@DeepLearningAI):
🚨 New short course alert! "Functions, Tools, and Agents with LangChain" reflects new capabilities of LLMs and @langchain. Explore advancements like OpenAI’s function calling and a new syntax called LCEL, and build a conversational agent. Enroll now ➡️ hubs.la/Q026Kcrf0
13 replies · 25 reposts · 183 likes · 34.9K views
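For readers unfamiliar with the function calling the course covers: the core idea is that each tool is described to the model as a JSON Schema, and the model replies with a structured call (a function name plus JSON arguments) instead of prose. A minimal sketch of the schema shape; the weather tool here is a hypothetical example, not one from the course:

```python
# An OpenAI-style function/tool schema is plain JSON-serializable data.
# The "parameters" field is standard JSON Schema describing arguments.
get_weather_tool = {
    "type": "function",
    "function": {
        "name": "get_current_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
            },
            "required": ["city"],
        },
    },
}

# Passed to the chat completions endpoint roughly as:
#   client.chat.completions.create(model=..., messages=..., tools=[get_weather_tool])
# The reply then carries tool_calls with the chosen function name and a
# JSON-encoded arguments string for the caller to dispatch on.
```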
Rosanne Liu (@savvyRL):
What's the real difference between "pre-training" and "finetuning" these days? It used to be that the 2 phases have totally different objectives, tasks, data, and even modifications of model (new heads). But in LLMs it seems to mean just new data? Why not call it "more training"?
31 replies · 8 reposts · 176 likes · 82.4K views
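Rosanne's point can be made concrete with a toy model: in the simplest reading, "pre-training" and "fine-tuning" are the same optimization loop, run first on a broad dataset and then continued on a narrower one. A sketch with plain gradient descent on least squares (all data synthetic, not from any real LLM pipeline):

```python
import numpy as np

def train(w, X, y, lr=0.1, steps=200):
    """One shared loop: plain gradient descent on squared error."""
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

rng = np.random.default_rng(0)

# "Pre-training": fit from scratch on a broad dataset A.
X_a = rng.normal(size=(100, 3))
y_a = X_a @ np.array([1.0, -2.0, 0.5]) + 0.01 * rng.normal(size=100)
w = train(np.zeros(3), X_a, y_a)

# "Fine-tuning": the *same* loop with the *same* objective,
# just continued from the pre-trained weights on new data B.
X_b = rng.normal(size=(20, 3))
y_b = X_b @ np.array([1.2, -1.8, 0.4])
w = train(w, X_b, y_b)

print(w)  # drifts from dataset A's solution toward dataset B's
```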
david🤓 (@dwhdai):
@savvyRL it's not cool unless we anthropomorphize AI
0 replies · 0 reposts · 0 likes · 44 views
Rosanne Liu (@savvyRL):
[PAPER POLICE AT WORK] Pretty cool vis, and solid paper, but do we really have to blow it up as to say LLMs "learn a world model"?? The result basically says all similar words (wrt location or time) are well clustered in the latent space—a finding already known from word2vec.
Quoting Wes Gurnee (@wesg52):
Do language models have an internal world model? A sense of time? At multiple spatiotemporal scales? In a new paper with @tegmark we provide evidence that they do by finding a literal map of the world inside the activations of Llama-2!
15 replies · 96 reposts · 850 likes · 163.5K views
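For context on the debated claim: the "map of the world" in the quoted paper is recovered with a linear probe, i.e., ordinary least squares from hidden activations to (lat, lon). A toy sketch of what such a probe measures, using synthetic activations (no real Llama-2 activations involved; the linear embedding is assumed by construction, which is exactly Rosanne's point about the result):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for LLM activations: suppose each city's hidden
# state linearly encodes its (lat, lon) plus unrelated noise.
n_cities, d_hidden = 500, 64
coords = rng.uniform(-90, 90, size=(n_cities, 2))   # true (lat, lon)
W_true = rng.normal(size=(2, d_hidden))             # how coords embed
acts = coords @ W_true + rng.normal(scale=5.0, size=(n_cities, d_hidden))

# A "linear probe" is just least squares from activations to coordinates.
probe, *_ = np.linalg.lstsq(acts, coords, rcond=None)
pred = acts @ probe
r2 = 1 - ((pred - coords) ** 2).sum() / ((coords - coords.mean(0)) ** 2).sum()
print(f"probe R^2: {r2:.3f}")
```

A high R² here says the coordinates are linearly decodable from the representation, which is the same kind of geometric structure word2vec already exhibited; whether that amounts to a "world model" is the interpretive leap being policed.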
david🤓 (@dwhdai):
@mysticaltech @tunguz love when people who don't know what they're talking about confidently talk about things they shouldn't
0 replies · 0 reposts · 1 like · 513 views
The Canaanite (@mysticaltech):
@tunguz Completely outdated skill. Now GPT-4's "Advanced Data Analysis" aka "Code Interpreter", can do this in a snap. The only remaining job in tech in the future will be "AI Engineer".
26 replies · 0 reposts · 39 likes · 473.9K views
Bojan Tunguz (@tunguz):
Being able to handle messed up CSV files is actually one of the best screenings for Data Science roles tbh.
77 replies · 71 reposts · 1.4K likes · 761.9K views
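The kind of cleanup Tunguz's screening question probes is easy to sketch with only the standard library. A hedged example with invented data, tolerating ragged rows, stray whitespace, and thousands separators:

```python
import csv
import io

# A hypothetical messy export: inconsistent quoting, whitespace in
# headers and fields, a short row, and a comma-grouped number.
raw = """name, city ,population
"Toronto",Toronto,"2,930,000"
Montreal , Montreal
Vancouver,Vancouver,675218
"""

rows = []
for rec in csv.reader(io.StringIO(raw)):
    rec = [field.strip() for field in rec]   # normalize whitespace
    rec += [""] * (3 - len(rec))             # pad ragged/short rows
    rows.append(rec)

header, body = rows[0], rows[1:]

def to_int(s):
    s = s.replace(",", "")                   # drop thousands separators
    return int(s) if s.isdigit() else None   # tolerate missing values

pops = {r[0]: to_int(r[2]) for r in body}
print(pops)  # {'Toronto': 2930000, 'Montreal': None, 'Vancouver': 675218}
```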
👩‍💻 Paige Bailey (@DynamicWebPaige):
😅 Controversial opinion, so buckle in: It's impossible to replace the camaraderie and the productivity gains of being colocated with a team, in an office, in the same time zone. Far and away my favorite way to work. ❤️ (I get that there are other benefits to being remote, and that often remote work is preferred for folks with families - but holy moly, it's so nice to see people in the office everyday.)
[GIF]
79 replies · 7 reposts · 326 likes · 194.2K views
david🤓 retweeted
Peyman Milanfar (@docmilanfar):
Six mathematical objects you don't need. 1/6: Klein Wine Bottle, which you have to pour from the bottom.
[image]
17 replies · 124 reposts · 1.2K likes · 215.1K views
azul (she/her) (@azulgarza_):
@dwhdai @nixtlainc Absolutely! Those associated predictions can be used as exogenous variables for added context. TimeGPT can manage them, ensuring forecasts take these predictions into account.
1 reply · 0 reposts · 1 like · 72 views
nixtla (@nixtlainc):
🎉 Update: TimeGPT (Beta) 🎉

Last week, we introduced the private beta of TimeGPT-1, the first large pre-trained model for time series data. Original post in the comments. The response in the past 5 days has been both heartening and overwhelming. The number of requests for the private beta exceeded our expectations. We kindly ask for your patience as we work through them. 🙏

For our first cohort of beta testers, we selected a diverse group, including startups and time series specialists familiar with our open-source tools. It's been an eventful phase, with over 200K calls to our API and the emergence of some intriguing results. We're delighted that several notable figures participated in this phase. A big shoutout to @didier_lopes, @timoschowski, @seanjtaylor, @predict_addict, @TorresJorgeML, @paxcema, Jonathan Farland and Han Wang. They all gave us very valuable feedback. 💪

Key insights from our initial beta testers include:
1️⃣ TimeGPT-1 consistently delivers precise forecasts.
2️⃣ In zero-shot forecasting scenarios, TimeGPT-1 matches or surpasses the accuracy of SOTA models directly trained on hourly, daily, weekly, and monthly data.
3️⃣ Fine-tuning TimeGPT-1 with domain-specific data enhances the forecasting accuracy.
4️⃣ Incorporating exogenous factors, like calendar attributes, allows TimeGPT-1 to discern specific signal patterns better.
5️⃣ Uncertainty estimations based on conformal prediction have been precise and well-calibrated.

⚠️ However, TimeGPT-1 is far from perfect. While it's a promising start, we've identified some limitations that echo challenges observed in other large models for language and images. For instance, the model can sometimes offer plausible yet off-target predictions. (Then again, what does it even mean to hallucinate the future?) Limitations include:
1️⃣ Increased propensity for errors or "hallucinations" in forecasts with longer horizons or at very high frequencies (like millisecond data). Predictions can become flat, missing specific signal nuances.
2️⃣ Some prediction intervals don't account for domain constraints. A few users highlighted intervals suggesting negative values for time series that only take positive values.
3️⃣ Forecasts can become erratic when many exogenous variables are introduced.

In summary, the feedback has been largely positive. Most of our beta testers find TimeGPT-1 a possible practical tool for their forecasting needs, delivering accurate results efficiently and with streamlined processes. We're already looking into enhancing TimeGPT-1, exploring its potential for even higher frequency data and more extended forecasting horizons.

The enthusiasm for TimeGPT-1 has been palpable. While we're inundated with requests, rest assured we're pushing the envelope to onboard more users. Interested parties can still express their interest via the form on nixtla.io to access the TimeGPT-1 beta and our preliminary research paper.
[image]
9 replies · 44 reposts · 293 likes · 98K views
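The conformal-prediction intervals nixtla mentions, and the negative-lower-bound limitation in item 2️⃣, can both be illustrated in a few lines. Split conformal prediction takes a quantile of absolute residuals on a held-out calibration set as the interval half-width. A minimal sketch with entirely synthetic data and forecasts (nothing here comes from TimeGPT itself):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy setup: positive-valued series, point forecasts with noise,
# and a held-out calibration set (model and data are made up).
y_cal_true = rng.gamma(5.0, 2.0, size=500)          # positive targets
y_cal_pred = y_cal_true + rng.normal(0, 1.5, 500)   # point forecasts
y_test_pred = np.array([3.0, 10.0, 0.8])            # new forecasts

# Split conformal: the 90% interval half-width is the (finite-sample
# corrected) 0.9 quantile of absolute calibration residuals.
alpha = 0.1
resid = np.abs(y_cal_true - y_cal_pred)
n = len(resid)
q = np.quantile(resid, np.ceil((n + 1) * (1 - alpha)) / n)

lo_raw, hi = y_test_pred - q, y_test_pred + q

# The beta testers' complaint: plain conformal bands ignore domain
# constraints, so a small forecast (0.8 here) gets a negative lower
# bound. Clamping at zero is the simple fix for a positive series.
lo = np.clip(lo_raw, 0.0, None)
```

The symmetric-residual form shown here is the simplest variant; quantile-based and normalized variants avoid the one-width-fits-all behavior but share the domain-constraint issue.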