Laura Elena Mardones
@LauraMardones
1.7K posts

No matter where I go, I always end up focusing on #continuousimprovements. Passions: #lean, #agile, #barbershop (@ronningeshow), #Zumba, #fitness, #health

Stockholm, Sweden · Joined April 2011
240 Following · 90 Followers
Laura Elena Mardones retweeted
Andrej Karpathy @karpathy
It’s possible that NotebookLM podcast episode generation is touching on a whole new territory of highly compelling LLM product formats. Feels reminiscent of ChatGPT. Maybe I’m overreacting.
331 replies · 383 reposts · 5.9K likes · 994.2K views
Laura Elena Mardones retweeted
Ethan Mollick @emollick
🤯 So, real agents came faster than I thought. Without any Python ability, I signed up for Replit and used their new AI coder. I was able to build a working app that identifies sentiment at the paragraph level in 23 minutes. I only interacted 8 times; it did the design and debugging.
[images attached]
39 replies · 218 reposts · 2.5K likes · 284.3K views
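Mollick doesn’t share the app’s code, but the core task he describes, labeling sentiment paragraph by paragraph, can be sketched in a few lines of plain Python. The tiny word lexicons and scoring rule below are illustrative stand-ins, not the logic of his Replit-built app:

```python
# Toy paragraph-level sentiment scorer: splits text on blank lines and
# labels each paragraph by counting lexicon hits. The lexicons are
# illustrative stand-ins for whatever model the real app used.
POSITIVE = {"great", "love", "excellent", "happy", "win", "improve"}
NEGATIVE = {"bad", "hate", "terrible", "sad", "fail", "bug"}

def paragraph_sentiment(text):
    results = []
    for para in filter(None, (p.strip() for p in text.split("\n\n"))):
        # Normalize each word: strip trailing punctuation, lowercase.
        words = {w.strip(".,!?").lower() for w in para.split()}
        score = len(words & POSITIVE) - len(words & NEGATIVE)
        label = "positive" if score > 0 else "negative" if score < 0 else "neutral"
        results.append((label, score))
    return results
```

A real version would swap the lexicon lookup for an LLM or classifier call per paragraph, which is presumably what the generated app did.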
The Best @Thebestfigen
If this dog was yours, what would you name it?
[image attached]
52.9K replies · 3.7K reposts · 75K likes · 10.4M views
Laura Elena Mardones retweeted
Rowan Cheung @rowancheung
Exclusive: Meta just released Llama 3.1 405B, the first-ever open-sourced frontier AI model, beating top closed models like GPT-4o across several benchmarks. I sat down with Mark Zuckerberg, diving into why this marks a major moment in AI history.
Timestamps:
00:00 Intro
00:38 Meta’s Llama 3.1 rundown
03:44 Real-world use cases for Llama 3.1
06:15 Educating developers on open-source AI tools
09:43 Societal implications of open-source AI
13:00 Balancing power and managing bad actors
14:40 Open source and global competition
16:59 Accelerating innovation and economic growth
20:04 Zuck on Apple and lessons from the past
24:22 Future of AI: Llama 3 and beyond
26:43 Prediction: Billions of personalized AI agents
31:32 Factors in changing anti-AI sentiment
491 replies · 1.4K reposts · 8.6K likes · 2.6M views
Laura Elena Mardones @LauraMardones
RT @alliekmiller: BREAKING: @Meta just dropped their most talked about AI model—405B. One of the highlights is an incredible list of laun…
0 replies · 4 reposts · 0 likes · 1 view
Laura Elena Mardones retweeted
Ethan Mollick @emollick
This is a big one. A frontier-class model available for all. Expect very cheap intelligence (of a sort) applied to all sorts of key problems as fine-tuned models are developed. But also expect governments & scammers to quickly breach the guardrails and use it in sketchier ways.
Quoting Aston Zhang @astonzhangAZ:
Our Llama 3.1 405B is now openly available! After a year of dedicated effort, from project planning to launch reviews, we are thrilled to open-source the Llama 3 herd of models and share our findings through the paper:
🔹 Llama 3.1 405B, continuously trained with a 128K context length following pre-training with an 8K context length, supports multilinguality and tool usage. It offers performance comparable to leading language models, such as GPT-4, across a range of tasks.
🔹 Compared to previous Llama models, we have enhanced the preprocessing and curation pipelines for pre-training data, as well as the quality assurance and filtering methods for post-training data.
🔹 Pre-training 405B on 15.6T tokens (3.8x10^25 FLOPs) was a significant challenge. We optimized our entire training stack and used over 16K H100 GPUs.
🔹 To support large-scale production inference for the 405B model, we quantized from 16-bit (BF16) to 8-bit (FP8), reducing compute requirements and enabling the model to run on a single server node.
🔹 We leveraged the 405B model to improve the post-training quality of our 70B and 8B models.
🔹 In post-training, we refined chat models with multiple rounds of alignment involving supervised fine-tuning (SFT), rejection sampling, and direct preference optimization. We generate most SFT examples using synthetic data.
🔹 We integrated image, video, and speech capabilities into Llama 3 using a compositional approach, enabling models to recognize images and videos and support interaction via speech. They are under development and not yet ready for release.
🔹 We've updated our license to allow developers to use outputs from Llama models to enhance other models.
There is nothing more rewarding than working at the forefront of AI development alongside some of the brightest minds in the field and publishing our research transparently. I'm excited about the innovations our open-source models enable and the potential of the future herd of Llamas!

12 replies · 66 reposts · 496 likes · 41.8K views
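The pre-training compute Aston Zhang quotes can be sanity-checked with the common C ≈ 6·N·D rule of thumb for dense-transformer training (roughly 6 FLOPs per parameter per token; an approximation, not Meta’s exact accounting):

```python
# Sanity-check the quoted pre-training compute with the common
# C ≈ 6 * N * D rule of thumb for dense transformers.
params = 405e9     # 405B parameters
tokens = 15.6e12   # 15.6T training tokens
flops = 6 * params * tokens
print(f"{flops:.2e}")  # 3.79e+25, matching the thread's 3.8x10^25
```

The estimate lands within 1% of the figure in the thread, which suggests the quoted number is just this rule of thumb applied to the stated parameter and token counts.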
Laura Elena Mardones retweeted
Rohan Paul @rohanpaul_ai
A very nice system prompt for Claude 3.5 Sonnet
[image attached]
48 replies · 313 reposts · 4.6K likes · 902.9K views
Laura Elena Mardones retweeted
Allie K. Miller @alliekmiller
This. THIS is my favorite Claude use case. Take an ungodly amount of data and preferences, shove it into Claude, ask for an interactive decision-making bot, ask for scoring and reward mechanism, personalize as necessary. Brands will now calibrate for human+AI decisions.
[images attached]
113 replies · 311 reposts · 3.9K likes · 485K views
Laura Elena Mardones retweeted
MARC REBILLET @marcrebillet
HOW TO FUNK IN TWO MINUTES
Manhattan, NY 🇺🇸
218 replies · 3.5K reposts · 14.4K likes
Laura Elena Mardones retweeted
Allie K. Miller @alliekmiller
113,000 people joined live to watch the OpenAI new spring release livestream. Now, I’ll tell you right off the bat, the social media response (based on MY feed) is mixed. The super techies are disappointed that they don’t have some holographic laser beam that shoots out of their phone and reads their minds, and the wider business population didn’t seem to watch and weigh in. I want to bridge that gap. I’ll call out the big releases and the two features that I think are the winners.

New model. It’s called GPT-4o (terrible name) and has GPT-4-level intelligence. It’s 2x faster, 50% cheaper, with 5x higher rate limits (compared to GPT-4-Turbo).
💡 What everyone is focused on: it’s cheaper! And faster! And with API access!
👀 What I am focused on: free users just got a mega performance boost, which will likely reduce churn, and the model architecture is a WINNER worthy of attention. They went from three models to one; according to Andrej Karpathy, formerly of OpenAI, they have built a “combined text-audio-vision model that processes all three modalities in one single neural network.”

GPTs for all. Now every user, even free users, can access the “mini task bot” GPTs.
💡 What everyone is focused on: yay, now everyone can use the GPT I built!
👀 What I am focused on: a completely new user base, tens of millions of new users testing and breaking capabilities.

More Voice. Now far more real-time (previously there was a 2-3 second lag). You can interrupt it mid-sentence. The voice assistant “picks up on emotion” (like fast breathing); I hate that phrase and would rather call it speech nuances. Fast multi-language translation. Performance improved for 50 languages (97% of the world’s population).
💡 What everyone is focused on: wow, it sounds like Scarlett Johansson!
👀 What I am focused on: I’m already talking to ChatGPT Voice every morning. This is going to massively increase voice-first experiences. I think office spaces need to think about this asap. Think about the acoustics with EVERYONE talking to an AI assistant at once. It’s already an issue on my team.

Vision on desktop. Now the desktop version can “see” your screen, but only when you permission it to, not all the time. Sort of like generative AI alt text + chat. So you can ask it to describe a graph on your screen, or presumably ask it questions about an article on your screen, without a big lift.
💡 What everyone is focused on: lots of privacy concerns (I agree), and why do we need voice for code?
👀 What I am focused on: HOLY MOLY, THIS IS THE WINNING FEATURE. It’s basically a coworker on screen share with you 24/7, with no fatigue. I can imagine people working for hours straight with this on.

Rollouts over the next few weeks. If you like voice features (like talking to Siri but smarter), upgrade to Plus when it releases.
19 replies · 19 reposts · 105 likes · 28.8K views
Laura Elena Mardones retweeted
OpenAI @OpenAI
We’ll be streaming live on openai.com at 10AM PT Monday, May 13 to demo some ChatGPT and GPT-4 updates.
545 replies · 1.7K reposts · 10K likes · 5.9M views
Laura Elena Mardones retweeted
Sam Altman @sama
not gpt-5, not a search engine, but we’ve been hard at work on some new stuff we think people will love! feels like magic to me. monday 10am PT.
Quoting OpenAI @OpenAI:
We’ll be streaming live on openai.com at 10AM PT Monday, May 13 to demo some ChatGPT and GPT-4 updates.
1.1K replies · 2.7K reposts · 26.2K likes · 4.5M views
Laura Elena Mardones retweeted
Mike Cardona | Automation Alchemist 🧪🔧
I have a @Zapier automation so anytime I have a problem I can't solve at the moment, I'll dictate it with AudioPen → send it to AI and have it generate 3 potential solutions → send it to my database so it's automatically stored and scheduled for me to review and take action (or not). Example "problem" output:
[images attached]
4 replies · 1 repost · 9 likes · 1.1K views
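The tweet names the tools but not the glue, so the shape of the pipeline (capture a dictated problem, generate three candidate solutions, persist them for later review) can be sketched as below. `ask_model` is a hypothetical stand-in for the AudioPen → AI step, and a JSON file stands in for his database:

```python
import datetime
import json

def ask_model(problem):
    # Hypothetical stand-in for the AI step in the tweet's Zapier flow;
    # a real version would call an LLM API here.
    return [f"Option {i}: possible approach {i} to '{problem}'" for i in (1, 2, 3)]

def capture_problem(problem, db_path="problems.json"):
    """Store a dictated problem plus three generated solutions for review."""
    record = {
        "problem": problem,
        "solutions": ask_model(problem),
        "captured_at": datetime.datetime.now().isoformat(),
        "status": "to-review",
    }
    # Append to a JSON file acting as the "database" from the tweet.
    try:
        with open(db_path) as f:
            db = json.load(f)
    except FileNotFoundError:
        db = []
    db.append(record)
    with open(db_path, "w") as f:
        json.dump(db, f, indent=2)
    return record
```

In the actual Zapier setup, each arrow in the tweet would be a Zap step rather than a function call, but the data flow is the same.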
Laura Elena Mardones retweeted
Ethan Mollick @emollick
As someone who has spent the past 15 years building interactive teaching sims and games, let me tell you: we can totally build the Young Lady’s Illustrated Primer with today’s AI technology. (Though it would be best if we altered the idea somewhat to make the pedagogy better.)
18 replies · 13 reposts · 157 likes · 32.3K views
Laura Elena Mardones retweeted
shy kids @shykids
@shykids_ have always told stories with and about new technology (see filmography). with ‘air head’ we wanted to show people that the most important ingredient in a story is (and will always be) humanity. *also key when commenting online* #sora #openAI
46 replies · 38 reposts · 250 likes · 38.7K views
Laura Elena Mardones retweeted
Rowan Cheung @rowancheung
AI NEWS: A new open-source LLM that beats Grok, Llama 2, and Mixtral is here. Plus, more developments from Anthropic's Claude 3, Amazon, MIT, HeyGen, OpenAI, and Hume AI. Here's everything going on in AI right now:
70 replies · 343 reposts · 3.1K likes · 1.5M views
Linus ✦ Ekenstam @LinusEkenstam
@hume_ai would love early access, been following your journey since day 0 and this is super exciting
2 replies · 0 reposts · 11 likes · 2.2K views
Hume AI @hume_ai
Meet Hume’s Empathic Voice Interface (EVI), the first conversational AI with emotional intelligence.
155 replies · 451 reposts · 2.5K likes · 875K views
Laura Elena Mardones retweeted
Satya Nadella @satyanadella
Together with @Adobe, we're connecting Adobe Experience Cloud with Microsoft Copilot to reimagine how marketers approach their daily work—from campaign briefs to content creation and approvals.
Quoting Microsoft News and Stories @MSFTnews:
At Adobe Summit, Adobe and Microsoft announced plans to bring marketers new integrated AI capabilities to reimagine their daily work, increasing collaboration and efficiency: msft.it/6012cs2Fe
89 replies · 348 reposts · 2.9K likes · 449.5K views