Andrew Chang

173 posts

@achang1618

Building @runalph My knowledge graphs @neode_ai

San Francisco, CA · Joined August 2019
128 Following · 158 Followers
Andrew Chang@achang1618·
@dasanil Immediately? Even with rule of 72, you had to divide twice and know ~2^8-2^9 off the top of your head. Impressive to those who get it. Magic to those who don't.
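The mental math in this exchange is easy to verify; a quick sketch (the function name is mine):

```python
# Sanity-check the claim: $1 compounded at 10%/yr for 63 years.

def rule_of_72_estimate(rate_pct, years):
    """Rule of 72: money doubles roughly every 72/rate years."""
    doublings = years / (72 / rate_pct)  # 63 / 7.2 = 8.75 doublings
    return 2 ** doublings

exact = 1.10 ** 63                       # true compounded value, ~405
estimate = rule_of_72_estimate(10, 63)   # 2**8.75, between 2**8=256 and 2**9=512
```

So "about $500" is the right ballpark: the rule of 72 puts the answer between 256 and 512, and the exact value is around $405.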
Anil Das@dasanil·
Random dude I met at golf is talking investments. A second dude is 63, so the investment guy goes, “if your parents had invested a dollar when you were born and it grew at 10% annually, it would now be …” and pauses to think. I say immediately, “about $500”.
Andrew Chang@achang1618·
@jeremyphoward @levelsio Yikes. I’m all for AI chat for question answering, but those results seem way too opinionated and biased for such a nuanced question. Lol at the “research ppls” too. Also confused about the outrage over llms.txt? Seems obvious from the nature of LLMs..especially instruct models
Jeremy Howard@jeremyphoward·
I'm glad @levelsio checked this, but sad our contrib has been erased by later big tech co's. Alec Radford said ULMFiT inspired GPT. ULMFiT's first demo predated BERT. Today's 3-stage LLM approach of general corpus pretraining and 2 stages of fine-tuning was pioneered by ULMFiT.
levelsio@levelsio

So, I asked every AI model if this was true. Then I checked Wikipedia. Then I asked AI researcher ppl I knew, to be sure. And not a single source confirms that you created the first LLM. You had a massive impact creating the ULMFiT algorithm, which pioneered transfer learning and fine-tuning techniques, though. Happy to be corrected if you did create the first LLM, of course, but I can't find the sources anywhere

Andrew Chang@achang1618·
We're able to launch our Hugging Face notebook environment on 200+ GPU options from 10+ providers, ranging from Lambda Labs to Crusoe! Our mix ranges from 1xA100s to 8xH100 clusters 🚀 We built out an API to launch notebook servers to make it easier to start using GPU tools like PyTorch and Transformers: docs.amdatascience.com/api-reference/…
Julien Chaumond@julien_c·
We need more knowledge sharing about running ML infrastructure at scale! Here's the mix of AWS instances we currently run our serverless Inference API on. For context, the Inference API is the infra service that powers the widgets on @huggingface Hub model pages + PRO users and Enterprise orgs can use it programmatically. And you? what's your current mix of instances?
Andrew Chang@achang1618·
Create a python networkx graph with one line of ...text? @americandatasci's Agent Alph can create and run code in notebooks with your simple text directions. However, it's not perfect. Anyone see any issues with some of the data that GPT-4o provided in this tiny knowledge graph? #notebooks #ai
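For readers unfamiliar with networkx, the kind of one-liner an agent might generate from a text direction looks like this (the edge list is invented for illustration, not taken from the demo):

```python
import networkx as nx

# One line of "text": an edge list of (subject, object) pairs
# becomes a tiny knowledge graph.
G = nx.from_edgelist([("Agent Alph", "notebooks"), ("notebooks", "Python"), ("networkx", "Python")])
```

The resulting graph has four nodes and three edges; a real knowledge graph would also carry relation labels as edge attributes.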
Andrew Chang@achang1618·
@jeremyphoward The number of blog posts I see criticizing notebooks for functions it wasn’t even meant to be used for is a testament to how useful it actually is 😂
Jeremy Howard@jeremyphoward·
If you haven't got the time to learn basic notebook-engineering, don't worry, there's a fix: Just record a video, or write a blog post, telling everyone else not to use notebooks. If you're successful, then you'll never have to learn, and no-one need know about your problem.
Andrew Chang@achang1618·
@judegomila Funny and maybe ironic because sigmoid activation functions were a commonly used layer for deep learning neural networks that allowed models to scale in capability by introducing nonlinearity. Observable universe is finite so chained sigmoid progression may not be a bad thing
Jude Gomila@judegomila·
The existence of chained sigmoids in technology development curves and any natural system is why the singularity won't really happen and will be much slower than people think (ie not like the movies). Ever speedier chained sigmoid functions, rather than a pure single exponential, is the more realistic 'singularity'. The plurality of mutually dependent sigmoid functions is the reason we don't have flying cars, yet. You can use sigmoid dynamics to your advantage by knowing when to leave a certain path and when the diminishing returns are starting. No singularity, but the sigmoidreality. This is also why I'm not worried about AGI risk at this point in time.
Yann LeCun@ylecun

Yes, I've made this point many times. The beginning of a sigmoid looks like an exponential. Not only can we "never be fully certain that what we are observing isn't in fact following a logistic trend before the inflection point", we can always be fully certain that *every* *single* *exponential* *trend* eventually passes an inflection point and saturates into a sigmoid. Continuing an exponential trend beyond that inflection point requires a paradigm shift.

No physical process can grow indefinitely. There are always friction terms in the dynamics equation that eventually become dominant (energy consumption, heat dissipation, quantum effects, thermal fluctuations, communication bandwidth, mass/energy density....). Even processes that *appear* exponential on a long time scale are actually a succession of sigmoids, in which each new sigmoid is caused by a paradigm shift.

A good example is Moore's Law. It is saturating right now. But the exponential progress of the last 7 decades is due to a succession of technological paradigm shifts that weren't pre-ordained. Each paradigm behaved like a sigmoid. Each new sigmoid overtook the previous one. The envelope turned out to be exponential. We haven't seen similar paradigm shifts in, say, airplane speed or space travel. Technological paradigm shifts require scientific breakthroughs.
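The Moore's-Law point, that a stack of saturating sigmoids can trace an exponential envelope, is easy to simulate; a toy sketch with made-up heights and timescales:

```python
import math

def sigmoid(t, midpoint, scale, height):
    """A logistic curve that saturates at `height`."""
    return height / (1 + math.exp(-(t - midpoint) / scale))

def envelope(t):
    # Each "paradigm" k is a sigmoid 10x taller than the last,
    # with midpoints 10 time units apart; the envelope is the
    # best technology available at time t.
    return max(sigmoid(t, 10 * k, 1.5, 10 ** k) for k in range(8))

# Sampled at successive midpoints, the envelope grows by a
# near-constant factor: every component saturates, yet the
# succession of sigmoids looks exponential overall.
ratios = [envelope(10 * k) / envelope(10 * (k - 1)) for k in range(2, 7)]
```

Here each ratio comes out close to 10, i.e. constant multiplicative growth, even though no single sigmoid keeps growing.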

Andrew Chang@achang1618·
@immad $5M FDIC insurance, that's 20x the coverage of a typical personal checking account. Haven't seen any other banks making sweep networks with personal accounts this easy
immad@immad·
1/ MERCURY PERSONAL is finally here! Get all the power of Mercury — now for your personal account. Up to 5.00% APY savings*, $5M FDIC insurance**, free domestic wires, for a flat $240/yr fee. Get on the waitlist: mercury dot com/personal-banking
Andrew Chang@achang1618·
@judegomila One of my favorite fun facts to share when it pops up; I like to say in a room of 50 there’s >95% chance of two sharing the same birthday
Jude Gomila@judegomila·
The birthday paradox in action: ‘In probability theory, the birthday problem asks for the probability that, in a set of n randomly chosen people, at least two will share a birthday. The birthday paradox refers to the counterintuitive fact that only 23 people are needed for that probability to exceed 50%.’ Point being it’s very likely for people to share birthdays.
Gary Marcus@GaryMarcus

Fun, possibly mind-boggling fact that Sam Altman knows that you might not, h/t @ihorgowda: Altman and Oppenheimer share a birthday, April 22.
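Both numbers in this thread, 23 people for >50% and 50 people for >95%, fall out of the standard product formula; a short check:

```python
def p_shared_birthday(n):
    """P(at least two of n people share a birthday), assuming 365
    equally likely birthdays and ignoring Feb 29."""
    p_all_distinct = 1.0
    for k in range(n):
        p_all_distinct *= (365 - k) / 365
    return 1 - p_all_distinct

p23 = p_shared_birthday(23)  # just over 50%
p50 = p_shared_birthday(50)  # about 97%
```

With 50 people the probability is roughly 97%, so the ">95% in a room of 50" claim holds.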

Andrew Chang reposted
Dalton Caldwell@daltonc·
Smart people being compensated to make correct predictions within their areas of expertise can still be dead wrong. Even when (especially when!) they are acting in good faith. In addition, it’s insidiously hard to make laws which seek to ensure that experts don’t make mistakes.
Andrew Chang reposted
Jeremy Howard@jeremyphoward·
There's a new programming language in town - it's Mojo! I'm more than a little excited about it. It's Python, but with none of Python's problems. You can write code as fast as C, and deploy small standalone applications like C. My post is below, and a 🧵 fast.ai/posts/2023-05-…
Andrew Chang@achang1618·
@grigoriy_kogan @judegomila Latency on pinecone is great; we're still testing it out at scale, but I'm very impressed. I've worked pretty extensively with in-house vector search tools like faiss and scann, so pinecone was a nice surprise
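For context on what tools like faiss, ScaNN, and Pinecone are speeding up, here is the brute-force baseline they approximate (toy data, not any vendor's API):

```python
import numpy as np

rng = np.random.default_rng(0)
db = rng.normal(size=(10_000, 128)).astype(np.float32)  # indexed corpus vectors
query = rng.normal(size=(128,)).astype(np.float32)

# Exact k-nearest-neighbor search by L2 distance: compute the
# distance to every vector, then take the k smallest. Approximate
# indexes trade a little recall for avoiding this full scan.
k = 5
dists = np.linalg.norm(db - query, axis=1)  # distance to every vector
top_k = np.argsort(dists)[:k]               # indices of the k closest
```

At 10K vectors this scan is instant; at hundreds of millions it isn't, which is where the latency of a managed index starts to matter.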
Greg Kogan@gregkogan·
@judegomila Where’s the latency bottleneck? If it’s Pinecone we can see what could be done.
Jude Gomila@judegomila·
1/ I’d like to share a fun little demo we have been hacking on @golden We attempt to answer factual questions in a more accurate way using the Golden Knowledge Graph by providing a retrieval enhancement to GPT-3. Golden Retriever: ai.golden.com Thread👇
Andrew Chang@achang1618·
Check out our new AI demo that leverages the @Golden Knowledge Graph to enhance fact retrieval with GPT-3! Our web3 knowledge graph is continuously verified by the open community...data quality keeps improving as the graph grows #AI #NLP
Jude Gomila@judegomila

1/ I’d like to share a fun little demo we have been hacking on @golden We attempt to answer factual questions in a more accurate way using the Golden Knowledge Graph by providing a retrieval enhancement to GPT-3. Golden Retriever: ai.golden.com Thread👇
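The retrieval-enhancement pattern described in this thread, in miniature; the facts and the word-overlap scorer below are invented stand-ins, not Golden's actual retriever:

```python
# Toy retrieval augmentation: pick the most relevant stored facts
# and prepend them to the model prompt as grounding context.
FACTS = [
    "Golden is building a protocol for knowledge curation.",
    "GPT-3 is a large language model from OpenAI.",
    "Knowledge graphs store entities and their relationships.",
]

def retrieve(question, facts, k=2):
    """Rank facts by naive word overlap with the question (a
    stand-in for a real dense-vector retriever)."""
    q_words = set(question.lower().split())
    scored = sorted(facts, key=lambda f: -len(q_words & set(f.lower().split())))
    return scored[:k]

def build_prompt(question):
    context = "\n".join(retrieve(question, FACTS))
    return f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
```

The language model then answers from the supplied context rather than from parametric memory alone, which is the accuracy win the demo is after.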
