Roberto Musso
@RobertoMussoM
2.1K posts

Freedom, equity, effort, and evolution. Those I believe in. I value affection, loyalty, and humility. I admire talent. At Digevo my trade is entrepreneurship.

Joined August 2011
1.2K Following · 2.5K Followers
Roberto Musso retweeted
Nicholas A. Christakis @NAChristakis
Extraordinary. We are stardust. Asteroid Bennu contains all 5 of the nucleobases that form DNA and RNA on Earth and 14 of the 20 amino acids found in living organisms (though Bennu contains equal amounts of these structures and their right-handed counterparts). nature.com/articles/s4155…
8 replies · 85 reposts · 327 likes · 26.9K views
Roberto Musso retweeted
Oscar Hoole @theoscarhoole
In 1965, Singapore was forced out of Malaysia. No army. No resources. No fresh water. A tiny island of 2M people living in slums. Then ONE man's ruthless vision built modern Asia's greatest success... 🧵
Oscar Hoole tweet media
2.7K replies · 28K reposts · 223.4K likes · 40.4M views
Roberto Musso retweeted
Andrej Karpathy @karpathy
It's a bit sad and confusing that LLMs ("Large Language Models") have little to do with language; it's just historical. They are a highly general-purpose technology for statistical modeling of token streams. A better name would be Autoregressive Transformers or something.

They don't care if the tokens happen to represent little text chunks. It could just as well be little image patches, audio chunks, action choices, molecules, or whatever. If you can reduce your problem to that of modeling token streams (for any arbitrary vocabulary of some set of discrete tokens), you can "throw an LLM at it".

Actually, as the LLM stack becomes more and more mature, we may see a convergence of a large number of problems into this modeling paradigm. That is, the problem is fixed at that of "next token prediction" with an LLM; it's just the usage/meaning of the tokens that changes per domain.

If that is the case, it's also possible that deep learning frameworks (e.g. PyTorch and friends) are way too general for what most problems want to look like over time. What's up with thousands of ops and layers that you can reconfigure arbitrarily if 80% of problems just want to use an LLM? I don't think this is true, but I think it's half true.
565 replies · 1.2K reposts · 10.6K likes · 1.3M views
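Karpathy's point can be sketched in a few lines: any discrete stream becomes "LLM-able" once it is mapped to integer token ids, and training data is just (context, next-token) pairs drawn from that id stream. The action vocabulary below is hypothetical, purely for illustration.

```python
# Any discrete stream works as "tokens" once mapped to integer ids;
# the vocabulary need not be text. Here: a made-up action vocabulary.
vocab = {"LEFT": 0, "RIGHT": 1, "JUMP": 2, "WAIT": 3}
stream = ["LEFT", "LEFT", "JUMP", "RIGHT", "WAIT"]
token_ids = [vocab[a] for a in stream]

# An autoregressive model is then trained on next-token prediction:
# each example pairs a growing context with the token that follows it.
pairs = [(token_ids[:i], token_ids[i]) for i in range(1, len(token_ids))]
```

Swap the vocabulary for image-patch codes, audio codec tokens, or molecule fragments and the training objective stays identical; only the meaning of the ids changes.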
Roberto Musso retweeted
Elon Musk @elonmusk
Wise words from a recent interview with @geoffreyhinton, one of the smartest people in the world on AI
6.6K replies · 25.6K reposts · 103.4K likes · 37.8M views
Roberto Musso retweeted
Informa Cosmos @InformaCosmos
When your budget falls short but your ingenuity doesn't.
Informa Cosmos tweet media
493 replies · 1.9K reposts · 36.4K likes · 7.1M views
Roberto Musso retweeted
Bindu Reddy @bindureddy
When you cheat on your exam using ChatGPT and aren't paying too much attention
Bindu Reddy tweet media
21 replies · 24 reposts · 163 likes · 28.1K views
Roberto Musso retweeted
Gisela Baños @gisbanos
On the morning of December 16, 1947, this contraption changed the history of computing, opening the door to miniaturization. It is the first transistor and, though you may not know it, it has a very close relationship with a certain literary genre about Martians... 🛸 Thread inside! 🧵👇
Gisela Baños tweet media
31 replies · 816 reposts · 2.9K likes · 458.7K views
Roberto Musso retweeted
Itamar Golan 🤓 @ItakGol
"Take a deep breath and work on this problem step-by-step." 😌 It may sound like a joke 😄, but this is the bottom line of Deepmind's recent paper... Deepmind released a paper, titled 'Large Language Models as Optimizers,' where they used LLMs to optimize the prompts given to another LLM in order to improve its output. In one of their experiments 🧪, they used Google's Palm as an optimizer, and eventually, it converged into a prefix that increased the accuracy of the following prompt by approximately 9% on average, and it was not other than: 'Take a deep breath and work on this problem step-by-step.' I find it shocking that mathematical optimization converged to this very human empathic-like suggestion - "take a breath..." 😅. On the other hand, it was trained on what we human write everywhere, and this writing probably reflects us mindset more than we might think. So I'll take a deep breath and move to my next meeting... Readmore - arxiv.org/pdf/2309.03409…
Itamar Golan 🤓 tweet media
29 replies · 132 reposts · 700 likes · 201.1K views
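The loop described in the tweet can be caricatured in a few lines. Everything below is a stub: the candidate list stands in for the optimizer LLM's proposed prefixes, and the score table stands in for actually running the downstream LLM on an eval set — those accuracy numbers are invented purely to make the sketch runnable.

```python
# Toy sketch of the prompt-optimization idea from
# "Large Language Models as Optimizers" (OPRO).
# In the real setup, an optimizer LLM proposes new prompt prefixes
# conditioned on the scored history; here a fixed list plays that role.
candidates = [
    "",
    "Let's solve the problem.",
    "Let's think step by step.",
    "Take a deep breath and work on this problem step-by-step.",
]

def eval_accuracy(prefix):
    """Stub for scoring a downstream LLM on a benchmark with this prefix.
    The numbers are illustrative only, not from the paper."""
    scores = {
        "": 0.62,
        "Let's solve the problem.": 0.64,
        "Let's think step by step.": 0.67,
        "Take a deep breath and work on this problem step-by-step.": 0.71,
    }
    return scores[prefix]

# Keep the best-scoring prefix, as the optimizer does across iterations.
best_prefix = max(candidates, key=eval_accuracy)
```

The real method iterates: scored prefixes are fed back into the optimizer LLM's meta-prompt so it can propose better candidates next round.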
Roberto Musso retweeted
Bindu Reddy @bindureddy
Bayesian Neural Networks - Capturing the Uncertainty of the Real World

Life is inherently uncertain and probabilistic, and Bayesian Neural Networks (BNNs) are designed to capture and quantify that uncertainty.

In many real-world applications, it's not sufficient to make a prediction; you also want to know how confident you are in that prediction. For example, in healthcare, a model that says a patient has a 70% chance of having a particular disease is less informative than one that says there's a 70% chance with a margin of error of ±10%.

BNNs are less prone to overfitting, can be more data-efficient since they can incorporate priors, and can output a probability distribution for each prediction. Knowing the uncertainty, or the probability that a particular prediction is accurate, builds trust and confidence with business users.

So how do Bayesian Neural Networks work?

The core idea is to replace the fixed weights w in a standard neural network with probability distributions P(w).

The famous equation from Bayes is:

P(A|B) = P(B|A) P(A) / P(B)

In the context of BNNs:
- A is the model parameters (weights and biases).
- B is the observed data.
- P(A|B) is the posterior distribution of the parameters given the data.
- P(B|A) is the likelihood of the data given the parameters.
- P(A) is the prior distribution of the parameters.
- P(B) is the evidence, often treated as a normalizing constant.

Prior distribution - You start with a prior distribution P(w) over the weights. This represents your initial belief about the model parameters before seeing any data.

Posterior distribution - The goal is to compute the posterior distribution P(w|D), which represents the updated belief about the weights after observing data D. Bayes' theorem, along with approximation methods, is used to calculate this distribution.

Prediction - Finally, to make a prediction for a new input x, you average over all possible weights, weighted by their posterior probabilities:

P(y|x, D) = ∫ P(y|x, w) P(w|D) dw

This gives you not just a point estimate but a distribution over the possible outputs y, capturing the model's uncertainty.

For example, BNNs can be applied to a dataset of MRI scans where each scan is labeled either "Cancer" or "No Cancer." The goal is to build a model that predicts these labels for new, unlabeled MRI scans. A BNN can say, "I'm 80% sure this is cancer, but there's a 20% chance it's not," which is valuable information for clinicians.

BNNs are useful wherever uncertainty quantification matters, including disease diagnosis, risk assessment, energy forecasting, and real-time decision-making.
Bindu Reddy tweet media
25 replies · 443 reposts · 2K likes · 436.2K views
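The prediction integral P(y|x, D) = ∫ P(y|x, w) P(w|D) dw is typically approximated by averaging over posterior weight samples. A minimal pure-Python sketch of that averaging, assuming a one-weight "network" with a logistic likelihood and a crude importance-sampling posterior (the dataset and all names are invented for illustration — real BNNs use variational inference or MCMC over full weight tensors):

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def posterior_samples(data, n_samples=2000, prior_std=2.0, seed=0):
    # Crude posterior approximation via self-normalized importance sampling:
    # draw weights from the prior P(w), then reweight each by the data
    # likelihood P(D|w) so the pair (sample, weight) approximates P(w|D).
    rng = random.Random(seed)
    ws = [rng.gauss(0.0, prior_std) for _ in range(n_samples)]
    def log_lik(w):
        return sum(math.log(sigmoid(w * x)) if y == 1
                   else math.log(1.0 - sigmoid(w * x))
                   for x, y in data)
    logs = [log_lik(w) for w in ws]
    m = max(logs)  # subtract the max for numerical stability
    raw = [math.exp(l - m) for l in logs]
    total = sum(raw)
    return ws, [r / total for r in raw]

def predict(x, ws, probs):
    # P(y=1 | x, D) ≈ Σ_s sigmoid(w_s · x) · P(w_s | D): the posterior-
    # weighted average of each sampled network's prediction.
    mean = sum(p * sigmoid(w * x) for w, p in zip(ws, probs))
    var = sum(p * (sigmoid(w * x) - mean) ** 2 for w, p in zip(ws, probs))
    return mean, math.sqrt(var)  # point estimate plus uncertainty

data = [(1.0, 1), (2.0, 1), (-1.0, 0), (-2.0, 0)]  # toy (x, label) pairs
ws, probs = posterior_samples(data)
mean, std = predict(1.5, ws, probs)
```

The returned pair is exactly the "70% chance with a margin of error" shape the tweet describes: a probability plus a spread that reflects how much the sampled networks disagree.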
Roberto Musso retweeted
CRCP Valpo @CRCPValpo
#27ee |⌚️ The countdown to the 27ee Encuentro de Ecosistemas has begun: 5 days to go! Have you registered yet? ☝🏻 You're invited to the seminar "Impact of Artificial Intelligence on EBCT" 📅 Thursday, August 3 ⏰ 3:00 PM 🔗 27ee.cl See you there!
CRCP Valpo tweet media
0 replies · 2 reposts · 5 likes · 433 views
Roberto Musso retweeted
Aadit Sheth @aaditsh
NVIDIA recently hit $1T in market cap. The CEO Jensen Huang flew to Taipei to give a commencement speech at NTU. Not Stanford. Not Harvard. NTU. Here are 5 key insights on AI from Jensen's speech 👇
Aadit Sheth tweet media
68 replies · 360 reposts · 3K likes · 1.9M views
Roberto Musso retweeted
Rowan Cheung @rowancheung
AI developments this week were insane. We got massive announcements from Epic Games, OpenAI, Amazon, multiple AI robots, the US Senate, Hippocratic AI, Zapier, Zoom, Elon Musk, Meta, DragGAN, and Apple. Here's EVERYTHING you need to know and why it's important:
54 replies · 306 reposts · 1.6K likes · 1M views
Darrell Lerner @DarrellLerner
ChatGPT Plugins are the fastest way to get rich in 2023. I’ve created a step-by-step guide showing you how to earn $10k/month using ChatGPT & No-Code Tools. Want it for FREE? Like + Comment “send” (Must be following so I can DM)
Darrell Lerner tweet media
5.4K replies · 537 reposts · 6.4K likes · 963.7K views
Roberto Musso retweeted
Rowan Cheung @rowancheung
March of 2023 will go down as one of the most revolutionary months in history. AI developments released this month changed the world forever. Here's the rundown of the biggest events that happened 👇
Rowan Cheung tweet media
257 replies · 1.5K reposts · 7.3K likes · 1.7M views
Roberto Musso @RobertoMussoM
Entrepreneurship is not a fad. It is a consequence of technological change. Preparing all university students to learn is key. lnkd.in/eq94fdfY
0 replies · 0 reposts · 0 likes · 185 views
Roberto Musso @RobertoMussoM
Collaborating with artificial intelligence. That is what universities must train all future professionals to do, no matter what degree they pursue. To do so, they must decisively incorporate AI into the educational process. #intlnkd.in/eqYEx4jf lnkd.in/e3SKs4y3
0 replies · 0 reposts · 0 likes · 141 views
Roberto Musso retweeted
Codie Sanchez @Codie_Sanchez
The average person thinks having a job is safer than starting a company. They're wrong.
176 replies · 210 reposts · 2.3K likes · 559.3K views