AI Authority

3.6K posts


@Authority_AI

Empowering businesses to integrate AI. We offer AI audits, strategy, tools, training & support to boost efficiency & stay ahead of the curve.

Get in touch ➡️
Joined April 2023
2K Following · 3.6K Followers
Pinned Tweet
AI Authority @Authority_AI ·
Elon Musk is setting out to understand the true nature of the universe with his newest company, xAI. The team will discuss the new venture on Friday, July 14, during a Twitter Spaces. Here's a breakdown of each team member and their contributions to artificial intelligence:

Elon Musk
Elon Musk, co-founder of groundbreaking companies like SpaceX and Tesla, has been instrumental in advancing the field of artificial intelligence, particularly through his contributions to OpenAI. His belief in the transformative potential of AI, coupled with his advocacy for ethical regulation, has fostered both technological innovation and the ongoing discourse around safe and responsible AI usage.

Igor Babuschkin
Igor Babuschkin, a distinguished AI researcher, holds an impressive portfolio of experience at leading tech companies like DeepMind and OpenAI. Handpicked by Elon Musk for his expertise, Babuschkin has been engaged in developing a rival to OpenAI's ChatGPT, which he believes has been trained to be "woke".

Manuel Kroiss
Manuel Kroiss, an accomplished software engineer, has leveraged his skills at tech giants such as Google and DeepMind, contributing significantly to advancements in reinforcement learning and artificial intelligence. Most recently, as co-author of the paper "Reverb: A Framework for Experience Replay," Kroiss helped introduce a highly efficient and customizable system for experience replay in reinforcement learning, furthering the state of the art in the field.

Yuhuai (Tony) Wu
Yuhuai (Tony) Wu is an accomplished computer scientist and AI researcher, known for his work on automated mathematicians and formal reasoning at Google's N2Formal team and a stealth startup. His influential research, recognized with accolades at ICLR and coverage in outlets like the New York Times and Quanta Magazine, centers on developing machines capable of human-like reasoning, particularly in mathematics.

Christian Szegedy
Christian Szegedy is a highly accomplished research scientist with extensive expertise in deep learning, artificial intelligence, computer vision, video analysis, and formal reasoning, currently serving as a Staff Research Scientist at Google. His educational background includes a Ph.D. in Applied Mathematics from the University of Bonn, where he studied matching theory and graph coloring and developed optimization algorithms for complex problems in electronic design automation (EDA).

Jimmy Ba
Jimmy Ba is an Assistant Professor at the University of Toronto, where he leads research on efficient learning algorithms for deep neural networks, with a focus on building general problem-solving machines that mirror human-like efficiency and adaptability. He completed his PhD under the supervision of Geoffrey Hinton, holds a CIFAR AI Chair, and received the 2016 Facebook Graduate Fellowship in machine learning.

Toby Pohlen
Toby Pohlen, previously a Staff Research Engineer at Google DeepMind for over six years, has extensive experience in machine learning and reinforcement learning, including work on major projects like the AlphaStar League and Ape-X DQfD. With a stellar academic record at RWTH Aachen University, where he graduated at the top of his class in Computer Science, he has made significant contributions in areas such as computer vision, image processing, numerical analysis, and probability theory.

Ross Nordeen
Ross Nordeen, a technical program manager from Tesla who managed the influx of new hires and access requests at Twitter during Elon Musk's overhaul, is now part of xAI. He aims to help humanity pass the great filters, achieve Type 1 status on the Kardashev scale, and launch von Neumann probes to explore the galaxy.

Kyle Kosic
Kyle Kosic, an accomplished full-stack site reliability engineer and data scientist, has a rich academic background in Machine Learning, Physics, and Applied Mathematics, and diverse professional experience spanning OpenAI, OnScale, Lucena Research, Wells Fargo, and IntegriChain. Known for his proficiency in a wide array of technical skills, including various programming languages, DevOps technologies, and AWS services, Kosic is driven by a passion for automation, scalability, distributed computing, and the effective application of machine learning models across domains.

Greg Yang
Greg Yang, honored with a Morgan Prize Honorable Mention in 2018, spent over five years at Microsoft Research, where he made significant contributions, including the development of the theory of Tensor Programs and the practical scaling of neural networks. Known for his late-night eureka moments in Building 99, Yang, who joined Microsoft Research directly after his undergraduate studies, expresses immense gratitude for the opportunities and growth he experienced during his tenure at the company.

Guodong Zhang
Guodong Zhang, a highly distinguished researcher in machine learning and artificial intelligence, is based at the University of Toronto and the Vector Institute in Canada. Known for his focus on training, tuning, and aligning large language models, Zhang has made considerable contributions to the field, demonstrated by his extensive publication record. His work ranges from optimization studies, such as "Multi-agent Optimization or its Application", to deep learning research like "Deep Learning without Shortcuts: Shaping the Kernel with Tailored Rectifiers", to Bayesian deep learning, with "Differentiable Annealed Importance Sampling and the Perils of Gradient Noise". His academic excellence has been recognized with several honors, including the Apple PhD Fellowship in 2022 and the Borealis AI Fellowship in 2020, underlining his significant potential in the field of AI and machine learning.

Zihang Dai
Zihang Dai is a Research Scientist at Google, with academic degrees from Tsinghua University and Carnegie Mellon University and prior research internships at Baidu USA and MILA at the University of Montreal. His notable contributions at Google include generalized language pretraining with XLNet and efficient language processing with Funnel-Transformer; his research interests focus on deep learning for natural language processing and energy-based generative adversarial networks.
[image]
AI Authority @Authority_AI ·
A live look at Yud's Spotify Wrapped
[image]
Vaibhav (VB) Srivastav @reach_vb ·
Insanely fast whisper now with Speaker Diarisation! 🔥 100% local and works on your Mac or on Nvidia GPUs. All thanks to @hbredin's Pyannote library, you can now get blazingly fast transcriptions and speaker segmentations! ⚡️ Here's how you can use it too:

pipx install insanely-fast-whisper

After a successful install, you should be able to run insanely-fast-whisper from anywhere on your Mac/PC.

insanely-fast-whisper --file-name --batch-size 2 --device-id mps --hf_token

P.S. This is very much a WIP; I'll refactor a lot of this code and add diarisation-specific features next! That's it! 🤗
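Under the hood, combining a transcriber with a diariser boils down to matching transcript segments to speaker turns by time overlap. Here is a minimal sketch of that matching step in Python; the dict field names ("start", "end", "speaker", "text") are illustrative assumptions, not the actual output format of whisper or Pyannote:

```python
# Sketch: label each transcript segment with the speaker whose
# diarisation turn overlaps it most in time.

def overlap(a_start, a_end, b_start, b_end):
    """Length (in seconds) of the intersection of two time intervals."""
    return max(0.0, min(a_end, b_end) - max(a_start, b_start))

def assign_speakers(segments, turns):
    """Attach a speaker label to each transcript segment.

    segments: [{"start": s, "end": e, "text": ...}, ...]  (transcriber output)
    turns:    [{"start": s, "end": e, "speaker": ...}, ...] (diariser output)
    """
    labeled = []
    for seg in segments:
        # Score every speaker turn by how much it overlaps this segment.
        scores = [
            (overlap(seg["start"], seg["end"], t["start"], t["end"]), t["speaker"])
            for t in turns
        ]
        score, speaker = max(scores, default=(0.0, "UNKNOWN"))
        labeled.append({**seg, "speaker": speaker if score > 0 else "UNKNOWN"})
    return labeled
```

A real pipeline would also need to handle segments that straddle a speaker change (e.g. by splitting them), but maximum-overlap assignment is the simplest reasonable default.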
AI Authority @Authority_AI ·
Seems like Grok or similar will perhaps win conversational AI. You can’t have so many restrictions in a chat setting. Imagine if a friend just said “oh, no, I’m not talking about that” whenever you brought up a controversial topic. That convo would be dead. OpenAI still wins in a lot of areas, but chat, perhaps ironically, seems like one it’ll eventually surrender to others.
Kris Kashtanova @icreatelife ·
I asked Grok questions that were complicated and controversial, and each time its response was unbiased. I was surprised by some answers; they weren't what I expected. It was quite accurate for a beta that has been trained for only 3 months. It doesn't have the same opinions as Elon Musk (or what the news suggests people think his opinions are). Grok doesn't really have opinions. It gives you an understanding of the topic and the different perspectives humans have on it.
Kris Kashtanova @icreatelife ·
Incredible sea creatures at Frost Science Museum. So in awe
Miami, FL 🇺🇸
AI Authority @Authority_AI ·
(AI) Chaos is a ladder 👑
[image]
AI Authority @Authority_AI ·
It’s a joke at this point to talk Google and AI, but I still think it’s likely true that: 1. Google has shown us a couple cards in their deck. 2. OpenAI has shown us almost the entire deck. Last 3 days, major win for Microsoft. But to count out Google leading AI long-term… gl gl
AI Authority @Authority_AI ·
Sam and Greg joining Microsoft directly is a baller move.
AI Authority @Authority_AI ·
The last 24 hours of OpenAI
[image]
Rowan Cheung @rowancheung ·
Quick life update: My newsletter just crossed 400k subscribers! When I started 10 months ago, all I had was 1,000 followers on Twitter and an obsession with AI. 5 lessons I wish I could've told myself when I started:

1. Don't start from scratch. I can sincerely say I've learned more from my mentors than I ever did from school. Hunt down people who dominate your niche and dig into their brains (even if it means paying for their time). The lessons I've learned from @farhanmohamed and @JMatthewMcGarry have probably doubled my pace of growth.

2. Don't get distracted by competitors. In other words, when things are working, keep doing what's working. I tried growing my newsletter like my competitors on LI and with paid ads, but it was a time-suck. I learned the hard way to double down on what's working, which was (and still is) organic Twitter growth.

3. Don't be bothered by copycats. When I first started, I was sickened by people copying and pasting my work as their own (very relevant in the newsletter niche). I could've spent that time growing, but instead I spent it sulking. Never again. Now I consider it a compliment validating that I'm doing something right.

4. Don't listen to haters (cliche but true). If someone is hating, there's a 99% chance they're just jealous. Early on, an "expert" in the newsletter space whom I highly respected said another AI newsletter would fail and that I was doing it incorrectly. I'd be lying if I said it didn't make me contemplate things. Don't get me wrong, it definitely still can fail, but I later found out the "expert" was looking to start an AI newsletter of his own by partnering with a competitor.

5. Lastly (but most importantly), make your product actually useful. The Rundown has one of the industry's leading open rates / CTRs, and we include polls in every edition for feedback that we iterate on daily.

Revenue comes from sponsors, 90% of it inbound from the readers themselves, and over 50% of those buyers end up being repeat sponsors! None of this would've been possible without the readers, the sponsors, and my incredible team. I'm extremely grateful to be in the position I am, and I'm excited to share some big updates heading into 2024. We're only just getting started!
[image]
AI Authority retweeted
Z M @Ghost_Z12 ·
Just as text-to-image AI models unlocked new realms for creators, a similar wave is about to hit music. Making great music is no longer going to be limited to mastering an instrument or having an amazing voice.
AI Authority @Authority_AI ·
@GaryMarcus Yeah, this is closer to your doctor reading your file and then calling you by the wrong name 3 in 100 times. I don’t think serious people are suggesting AI should currently replace human doctors, or at least hope they’re not.
Gary Marcus @GaryMarcus ·
This is very confused, but also typical of some fallacious reasoning I have been seeing. A new study showed that LLMs make stuff up 3% of the time on a simple task requiring summarization of a single short document. From this, people wrongly infer that that 3% represents what an LLM's error rate would be in a complex medical environment in which an answer often cannot simply be looked up. We need some education here about how to think about benchmarks, cognition, generalization, etc.
Johannes Schunter @jschunter

@stubailey900 @bindureddy @GaryMarcus Actually, we should be thrilled with a 3% error rate in doctors. The actual misdiagnosis rate is between 10% and 15%. Goes to show that our intuition about the significance of a 3% error rate is completely off. ncbi.nlm.nih.gov/books/NBK49995….

AI Authority retweeted
Sam Altman @sama ·
GPTs are now live for all ChatGPT+ subscribers!
Greg Kamradt @GregKamradt ·
Pressure Testing GPT-4-128K With Long Context Recall

128K tokens of context is awesome, but what's performance like? I wanted to find out, so I did a "needle in a haystack" analysis. Some expected (and unexpected) results. Here's what I found:

Findings:
* GPT-4's recall performance started to degrade above 73K tokens
* Low recall performance correlated with the fact to be recalled being placed at 7%-50% document depth
* If the fact was at the beginning of the document, it was recalled regardless of context length

So what:
* No guarantees - your facts are not guaranteed to be retrieved. Don't bake the assumption that they will into your applications
* Less context = more accuracy - this is well known, but when possible, reduce the amount of context you send to GPT-4 to increase its ability to recall
* Position matters - also well known, but facts placed at the very beginning and in the 2nd half of the document seem to be recalled better

Overview of the process:
* Use Paul Graham essays as 'background' tokens. With 218 essays it's easy to get up to 128K tokens
* Place a random statement within the document at various depths. Fact used: "The best thing to do in San Francisco is eat a sandwich and sit in Dolores Park on a sunny day."
* Ask GPT-4 to answer this question using only the context provided
* Evaluate GPT-4's answer with another model (gpt-4 again) using @langchain evals
* Rinse and repeat for 15x document depths between 0% (top of document) and 100% (bottom of document) and 15x context lengths (1K tokens > 128K tokens)

Next steps to take this further:
* Iterations of this analysis were evenly distributed; it's been suggested that a sigmoid distribution would be better (it would tease out more nuance at the start and end of the document)
* For rigor, one should do a key:value retrieval step. However, for relatability I used a San Francisco line within PG's essays

Notes:
* While I think this will be directionally correct, more testing is needed to get a firmer grip on GPT-4's abilities
* Switching up the prompt gave varying results
* 2x tests were run at large context lengths to tease out more consistent performance
* This test cost ~$200 in API calls (a single call at 128K input tokens costs $1.28)
* Thank you to @charles_irl for being a sounding board and providing great next steps
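The mechanical core of a needle-in-a-haystack test is just: pick a depth, splice the needle sentence into the background text at that depth, and repeat over an evenly spaced grid of depths. A minimal sketch in Python; the function names and the '. ' sentence-boundary heuristic are illustrative assumptions, not the author's actual code:

```python
# Sketch: splice a "needle" fact into background text at a target depth,
# snapping to a sentence boundary so the surrounding prose stays readable.

def insert_needle(background: str, needle: str, depth_pct: float) -> str:
    """Insert `needle` roughly depth_pct percent of the way into `background`."""
    pos = int(len(background) * depth_pct / 100)
    # Snap forward to the next sentence end (assumes '. ' delimits sentences).
    cut = background.find(". ", pos)
    if cut == -1:
        # Past the last sentence boundary: append at the very end.
        return background + " " + needle
    cut += 2  # move past the '. '
    return background[:cut] + needle + " " + background[cut:]

def depth_grid(n: int) -> list:
    """n evenly spaced document depths from 0% to 100% (n >= 2)."""
    return [100 * i / (n - 1) for i in range(n)]
```

Each (depth, context length) cell of the sweep would then build one such document, ask the model the needle question over it, and score the answer with a judge model.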
[image]
AI Authority @Authority_AI ·
That was fun. Time to build.
AI Authority @Authority_AI ·
Later this month, you can launch GPTs in the store. OpenAI will pay the best GPT creators a portion of their revenue.