Mat McGann

5.9K posts


@MatMcGann

Above all, grow knowledge ✍🔧👨‍💻🚀 @healthhorizon (CEO) https://t.co/eTnatQrpRQ (blog)

Canberra, Australia · Joined June 2010
504 Following · 780 Followers
Pinned Tweet
Mat McGann @MatMcGann ·
Using "right and wrong" or "true and false" in a given scenario is an attempt to guide decision making on a foundation. There are no foundations. Instead of "right and wrong" consider what's "outsourced on trust" or "stood up to in conscience"
1 · 0 · 9 · 0
Mat McGann @MatMcGann ·
Prediction: Typos will be an indication of manual effort and therefore a status thing as AIs saturate our communication
0 · 0 · 1 · 63
Mat McGann @MatMcGann ·
@svembu A common use case is to generate reliable RAG of known unknowns from the outside world. Works for arbitrary data models. Horiz.in
1 · 0 · 1 · 12
Mat McGann @MatMcGann ·
@svembu Agreed. We are building data pipelines that use only the reasoning of LLMs to create and maintain structured knowledge bases. The process can extract latent information from unstructured sources. As a result, the data has full provenance. Often used to provide reliable RAG.
1 · 0 · 1 · 145
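A toy sketch of the provenance idea in the tweet above: every structured fact the pipeline emits keeps a pointer back to the exact source passage it came from. The extractor below is a rule-based stand-in for an LLM call, and all names and record shapes are illustrative, not Horiz.in's actual API.

```python
# Hedged sketch: structured extraction with per-fact provenance.
# A real pipeline would replace extract_facts() with an LLM reasoning step;
# the point shown here is only the record shape, not the extraction quality.
from dataclasses import dataclass

@dataclass
class Fact:
    subject: str
    predicate: str
    value: str
    source_id: str   # which document the fact came from
    span: str        # the exact text that supports it

def extract_facts(doc_id: str, text: str) -> list[Fact]:
    """Toy rule-based stand-in for an LLM extraction step."""
    facts = []
    for sentence in text.split(". "):
        if " is headquartered in " in sentence:
            subj, loc = sentence.split(" is headquartered in ")
            facts.append(Fact(subj.strip(), "headquartered_in",
                              loc.strip(" ."), doc_id, sentence))
    return facts

kb = extract_facts("press-release-7",
                   "Acme is headquartered in Canberra. Revenue grew 40%.")
# Because each fact carries source_id and span, it can be audited or
# re-derived when the source changes, which is what "full provenance" buys.
```

The same record shape works for arbitrary data models: only the predicates change, not the provenance fields.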
Sridhar Vembu @svembu ·
All the LLMs and other deep-learning models are based on neural networks. We can think of them as mathematical functions with hundreds of billions of parameters. Those parameters (weights) are determined during training, and we train these networks with trillions of tokens (text, images, and videos that are split up into tokens to be ingested by the models). We can say that every one of the trillions of tokens played a part in determining the value of each of the hundreds of billions of parameters. The image I have in mind is a giant lake in which we dissolve trillions of cubes of salt, sugar, etc. After the dissolution we cannot know which of the cubes of sugar went where in the lake: every cube of sugar is everywhere!

Therein lies a problem: if we use a business database, such as customer-relationship data, to train a neural-network model (i.e. to determine its parameters), then when the customer changes or deletes that data, we do not know how to alter the weights of the model to account for the change. Even if the model were dedicated to that customer, we still cannot guarantee the customer that their changes to the data will be reflected in the model. In that sense, neural networks (and therefore LLMs) are NOT a suitable database. This is a fundamental limitation of the current scientific-mathematical approach and cannot be fixed by technological fine-tuning alone.

The RAG (retrieval-augmented generation) architecture keeps the business database separate and augments the user prompt with data fetched from the database. In that case, the model itself is not trained on the (potentially changing) customer data, because that data is only used in the prompt. But RAG can only go so far. I personally have come to believe more foundational work is needed. What does that look like? All I have right now are hunches. That is the exciting part of scientific work!
186 · 466 · 3.3K · 285.9K
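The RAG architecture described in the tweet above can be sketched in a few lines: the customer's data lives in an external store (here a plain dict), and only the retrieved rows are spliced into the prompt, so the model's weights never see them. The function names and the keyword-overlap retriever are illustrative placeholders for a real retriever and LLM call.

```python
# Minimal sketch of the RAG pattern: external store + prompt augmentation.
# No model is trained on the data; editing a record changes future prompts
# immediately, with no retraining, which is the architecture's whole point.

def retrieve(store: dict[str, str], query: str, k: int = 2) -> list[str]:
    """Rank stored records by naive word overlap with the query."""
    q_words = set(query.lower().split())
    scored = sorted(
        store.items(),
        key=lambda kv: len(q_words & set(kv[1].lower().split())),
        reverse=True,
    )
    return [text for _, text in scored[:k]]

def build_prompt(store: dict[str, str], question: str) -> str:
    """Augment the user question with retrieved context for the LLM."""
    context = "\n".join(retrieve(store, question))
    return f"Context:\n{context}\n\nQuestion: {question}"

crm = {
    "rec1": "Acme Corp renewed their contract in March",
    "rec2": "Globex cancelled their subscription in June",
}

prompt = build_prompt(crm, "When did Acme Corp renew their contract?")
# Updating the record is reflected in the very next prompt built:
crm["rec1"] = "Acme Corp renewed their contract in April"
```

This also shows the limitation Vembu raises: RAG fixes the mutability problem for prompt-visible data, but anything baked into the weights during pretraining still cannot be edited this way.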
Mat McGann @MatMcGann ·
I bet the JFK files show incompetence and corruption rather than conspiracy
0 · 0 · 2 · 139
Mat McGann @MatMcGann ·
Clear-as-day availability bias. People overestimate how likely civil war is in open societies and underestimate how likely it is everywhere else
0 · 0 · 2 · 136
Mat McGann @MatMcGann ·
@averykimball Axioms and theories do seem different, but I'm not sure they are. They're all just ideas. When an experiment doesn't give you what you expect, it could be your theories or your axioms that are wrong
0 · 0 · 1 · 25
avery @averykimball ·
@MatMcGann Theories underlying an instrument being wrong doesn't falsify determinism. As you say, it's a ground assumption, an axiom; 'falsification' doesn't apply to it, because it's used *for* falsifying (like the other suite of consequences inside Realism)
1 · 0 · 0 · 25
Mat McGann @MatMcGann ·
It's cool that it works when it does. And it obviously gets at something important and true about reality. But it is an assumption about the world. It is by no means self-evident, and every experiment that has had any kind of experimental error (which is all of them) falsifies it.
2 · 0 · 0 · 53
Mat McGann @MatMcGann ·
Determinism states that things must be fully deterministically caused and it applies to every aspect of reality at all times. Determinism is just the theory-est theory. It's maximum constraint and maximum reach.
1 · 0 · 0 · 50
Mat McGann @MatMcGann ·
In my opinion it's a Dunning-Kruger effect. When you learn enough physics you see determinism underlying all its success and infer its truth. But when you learn more physics you realise that our understanding of the world is waaay patchier.
0 · 0 · 1 · 33
Mat McGann @MatMcGann ·
Determinism is a fertile theory where it works but people seem to think it's more than a theory.
1 · 0 · 0 · 33
Mat McGann @MatMcGann ·
"Causal reasoning [and hence determinism] is a heuristic" 👌👌👌 i.e. determinism is a theory about the world, not a ground truth. youtu.be/nh1Z3UTobrY?si…
1 · 0 · 1 · 180
Mat McGann @MatMcGann ·
I've listened to hundreds of In Our Time episodes. They've covered every kind of famous person imaginable, except actors. I can't remember a single IOT episode about an actor.
0 · 0 · 0 · 61
Mat McGann @MatMcGann ·
This would explain the drastic rise in allergies over the last few decades. Feedback loop:
- Some kids are born allergic
- More awareness of this > less exposure
- Less exposure causes emergent allergies
- More awareness > decreased exposure > ...

Crémieux @cremieuxrecueil

Over a decade ago, researchers started a trial to see if they could prevent peanut allergies. They gave a few hundred kids peanuts from ages one to five and told parents of another group to have their kids avoid the stuff. Peanut consumption reduced peanut allergy rates by a lot:

0 · 0 · 2 · 154
Mat McGann @MatMcGann ·
Wolfram is the most correct-about-AI person living, imo
0 · 0 · 0 · 63