Lawrence Spracklen

183 posts

@spracklen

Head of AI, https://t.co/ZBvqdi3BKf

Menlo Park, CA · Joined March 2009
92 Following · 151 Followers
Lawrence Spracklen retweeted
Vinod Khosla
Vinod Khosla@vkhosla·
80% of 80% of all economically valuable jobs will be capable of being done by AI. Years ago I asked -- to the consternation of many -- whether we needed doctors. Then I asked whether we needed teachers. High time we ask whether we need financial analysts, bankers, traders, etc. Welcome, @Arkifi
117 replies · 97 reposts · 780 likes · 354.1K views
Lawrence Spracklen
Lawrence Spracklen@spracklen·
Excited to be part of this! My team is hiring for multiple AI positions!
1 reply · 0 reposts · 1 like · 121 views
Lawrence Spracklen retweeted
Intel
Intel@intel·
Let’s get beyond the hype of ChatGPT, and get down to the details. Learn about Large Language Models (LLMs) and more from our panel of experts from @Numenta and Intel. Watch now. intel.ly/41NmmYM #IntelAI #IntelXeon
5 replies · 10 reposts · 54 likes · 31.9K views
Lawrence Spracklen retweeted
Intel Business
Intel Business@IntelBusiness·
Our LinkedIn Live discussion with @Numenta is happening tomorrow at 10am PST. Hear from @spracklen at Numenta and Intel’s Ronak Shah and Sancha Norris to learn about Large Language Models (LLMs) based on 4th Gen #IntelXeon technology.
1 reply · 4 reposts · 11 likes · 4.5K views
Lawrence Spracklen retweeted
Intel Business
Intel Business@IntelBusiness·
Are Large Language Models and ChatGPT right for your enterprise? Join us next week on April 19th for a discussion with @spracklen from @Numenta and Intel’s Ronak Shah and Sancha Norris to learn about 4th Gen #IntelXeon and #GenerativeAI.
0 replies · 17 reposts · 50 likes · 26.1K views
Lawrence Spracklen
Lawrence Spracklen@spracklen·
A more detailed overview of the results that were shared during the Intel 4th Gen processor launch earlier this year. This particular demonstration is focused on longer sequence lengths, and uses an Intel system with HBM (High Bandwidth Memory) support. #deeplearning #AI
Numenta 🧠@Numenta

Check out our joint blog with @IntelSoftware on accelerating Large Language Models with long sequence lengths. @Numenta running on the #IntelXeon CPU Max Series delivers up to 20x inference acceleration compared to other CPUs. Learn more 👇 #LLMs #HBM #Intel #AI

0 replies · 1 repost · 5 likes · 2.2K views
Lawrence Spracklen retweeted
Intel Software
Intel Software@IntelSoftware·
Whether running short text snippets or long documents through natural language processing models, #IntelXeon CPU Max Series with #OpenVINO toolkit and #oneAPI enables excellent performance boosts over other CPUs. Learn more here. intel.ly/43dOqWJ #HPC
1 reply · 3 reposts · 14 likes · 6K views
Lawrence Spracklen
Lawrence Spracklen@spracklen·
Move from running your existing Transformer models from Hugging Face on Ice Lake processors to Numenta Transformer models on Intel's Sapphire Rapids, and improve inference throughput by over 120X. Seamless integration into common MLOps solutions.
Numenta 🧠@Numenta

New on our blog: our VP ML @spracklen explains why the new 4th Gen #IntelXeon is a great fit for our neuroscience-based technology, and dives deeper into our dramatic throughput and latency performance improvements. numenta.com/blog/2023/01/1…

0 replies · 0 reposts · 1 like · 1K views
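The 120X figure above is a throughput ratio: samples processed per second by the optimized model divided by the baseline's rate. A minimal, hypothetical timing harness for measuring such a ratio is sketched below; `infer_fn`, `slow`, and `fast` are illustrative stand-ins (not from the original thread) for real model forward passes, e.g. a Hugging Face pipeline call.

```python
import time

def measure_throughput(infer_fn, batch, n_iters=10, warmup=2):
    """Return inference throughput in samples/second for a callable."""
    for _ in range(warmup):          # discard cold-start iterations
        infer_fn(batch)
    start = time.perf_counter()
    for _ in range(n_iters):
        infer_fn(batch)
    elapsed = time.perf_counter() - start
    return (n_iters * len(batch)) / elapsed

# Toy stand-ins for a baseline and an optimized model, purely for illustration.
slow = lambda batch: [time.sleep(0.001) for _ in batch]
fast = lambda batch: [x for x in batch]

baseline = measure_throughput(slow, list(range(8)))
optimized = measure_throughput(fast, list(range(8)))
print(f"speedup: {optimized / baseline:.1f}x")
```

In a real comparison, the same batch of inputs would be fed to both model variants, and warmup iterations matter on CPU because of cache and allocator effects.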
Lawrence Spracklen
Lawrence Spracklen@spracklen·
Great to see @IntelAI showcasing the benefits of our @Numenta technology running on their latest server processors. 62X throughput improvement compared with standard BERT-large on Ice Lake, and 123X compared with AMD Milan. intel.com/content/www/us…
0 replies · 0 reposts · 1 like · 144 views
Lawrence Spracklen retweeted
Intel News
Intel News@intelnews·
The #XeonMax CPUs help @Numenta dramatically accelerate large language models that can analyze, categorize, and translate large collections of text documents. For long sequences, Numenta achieves 20x throughput speed-up on #XeonMax compared to other CPUs. intel.ly/3WZnoyR
0 replies · 10 reposts · 38 likes · 3.8K views
Lawrence Spracklen retweeted
Numenta 🧠
Numenta 🧠@Numenta·
⚠️ MEDIA ALERT ⚠️ Numenta Achieves 123X Inference Performance Improvement for BERT Transformers on @intel's new Xeon Processor. A thread👇(1/4) numenta.com/press/2023/01/…
2 replies · 13 reposts · 66 likes · 12.1K views
Lawrence Spracklen retweeted
Numenta 🧠
Numenta 🧠@Numenta·
Announcing the Numenta Private Beta Program 🚀 Built on decades of neuroscience research🧠, our technology enables powerful and scalable deep learning applications. Apply and get early access to our solutions for NLP and Computer Vision today at numenta.com/beta
0 replies · 6 reposts · 22 likes · 2K views
Lawrence Spracklen retweeted
Numenta 🧠
Numenta 🧠@Numenta·
We’re currently running a limited closed Beta Program for our brain-based AI products. Check out our November #newsletter for more details, job openings, new content, and more! mailchi.mp/numenta/newsle…
1 reply · 3 reposts · 6 likes · 0 views
Lawrence Spracklen retweeted
Christy Maver
Christy Maver@cdmaver·
Closing the event with a keynote on deep learning at the edge, @spracklen shows how @Numenta is getting 100x performance improvements. #EdgeAISummit
0 replies · 3 reposts · 8 likes · 0 views