Tim Sears
@csTimSears · 1.1K posts
Groq On
Palo Alto, CA · Joined March 2019
461 Following · 197 Followers
Tim Sears reposted
Groq Inc
Groq Inc@GroqInc·
With over 284K developers using GroqCloud™, the Groq Speed Read focuses on developers and highlights new features, great apps, and what’s coming up. Sign up for the Speed Read here so you don't miss out: hubs.la/Q02GqlrH0
[image]
1 reply · 5 reposts · 21 likes · 4.1K views
AT
AT@BaseballWRLD_·
@adaquano I don’t think we can comprehend how ugly they would’ve gotten
5 replies · 1 repost · 386 likes · 154.8K views
AT
AT@BaseballWRLD_·
The agony that is Angel Hernandez’s umpiring has finally come to a close, so what better time to remember his legacy than right now. Some of Angel Hernandez’s worst calls and most iconic moments, a thread 🧵
[image]
870 replies · 5.4K reposts · 56.2K likes · 18.1M views
Tim Sears reposted
Jay Scambler
Jay Scambler@JayScambler·
Who wants to go all in on an LPU cluster with me?
[image]
5 replies · 2 reposts · 28 likes · 2.1K views
Tim Sears reposted
Matt Shumer
Matt Shumer@mattshumer_·
I’m sorry to all the engineers at @GroqInc I should’ve waited till tomorrow to post about you… Hopefully the servers stay online till the morning!
9 replies · 2 reposts · 168 likes · 44.1K views
Tim Sears reposted
@levelsio
@levelsio@levelsio·
Try groq.com now. Hyperfast LLM running on custom-built GPUs. Answers in milliseconds, not seconds. How? 🤯
158 replies · 195 reposts · 2.4K likes · 602.3K views
Tim Sears reposted
Matt Shumer
Matt Shumer@mattshumer_·
The first public demo using Groq: a lightning-fast AI Answers Engine. It writes factual, cited answers with hundreds of words in less than a second. More than 3/4 of the time is spent searching, not generating! The LLM runs in a fraction of a second. …6591-00-1rsd2y84t464l.worf.replit.dev
Quoting Matt Shumer @mattshumer_: "Wild tech you have to try: groq.com They are serving Mixtral at nearly 500 tok/s. Answers are pretty much instantaneous. Opens up new use-cases, and completely changes the UX possibilities of existing ones."
28 replies · 65 reposts · 374 likes · 126.6K views
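The search-vs-generation split in the demo above is essentially an Amdahl's-law situation: once the LLM itself runs in a fraction of a second, end-to-end latency is dominated by retrieval. A minimal sketch of that bound, assuming an illustrative 0.75 s / 0.25 s split consistent with the tweet's "more than 3/4" figure (not a measured value from the demo):

```python
# If >3/4 of end-to-end time is search, speeding up generation alone
# can never make the whole pipeline more than ~1.33x faster.

def total_latency(search_s: float, generate_s: float) -> float:
    """End-to-end latency of a search-then-generate answer pipeline."""
    return search_s + generate_s

# Assumed split: 3/4 searching, 1/4 generating, ~1 s total.
search_s, generate_s = 0.75, 0.25
before = total_latency(search_s, generate_s)
after = total_latency(search_s, 0.0)  # generation made infinitely fast
print(f"speedup bound from faster generation: {before / after:.2f}x")
# prints "speedup bound from faster generation: 1.33x"
```

This is why the thread emphasizes that further gains would have to come from faster search, not a faster model.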
Tim Sears reposted
Groq Inc
Groq Inc@GroqInc·
@Uncensored_AI @levelsio Good guess, but nope. Completely original hardware / software solution built from the ground up. We say it's "sand to sky" because it's our custom silicon GroqChip, an LPU provided as a system via our cloud solution. We haven't even started with tricks used by others yet. :D
0 replies · 1 repost · 3 likes · 147 views
Tim Sears
Tim Sears@csTimSears·
@TaliaGold @GroqInc We did that. It’s fun. You can talk to Mixtral running on Groq in minutes with zero code over at vapi.ai.
2 replies · 7 reposts · 40 likes · 10.3K views
Talia Goldberg
Talia Goldberg@TaliaGold·
Faster than thought. Now imagine this with real time voice 🤯 @GroqInc
5 replies · 1 repost · 12 likes · 4.1K views
Tim Sears reposted
diicell
diicell@0xdiicell·
oi, m8. @GroqInc fastest resp alive
[image]
0 replies · 2 reposts · 8 likes · 1.8K views
Tim Sears reposted
Dhruv Sahu
Dhruv Sahu@DhruvSahu98·
@chamath @GroqInc I tried this to summarize book text, with cross-questioning and a summary. It's faster and more accurate than big tech's LLMs.
0 replies · 3 reposts · 6 likes · 1.7K views
Tim Sears reposted
Matt Shumer
Matt Shumer@mattshumer_·
Wild tech you have to try: groq.com They are serving Mixtral at nearly 500 tok/s. Answers are pretty much instantaneous. Opens up new use-cases, and completely changes the UX possibilities of existing ones.
66 replies · 156 reposts · 1.5K likes · 421K views
Jay Scambler
Jay Scambler@JayScambler·
Groq is serving the fastest responses I've ever seen. We're talking almost 500 T/s! I did some research on how they're able to do it. Turns out they developed their own hardware that utilizes LPUs instead of GPUs. Here's the skinny:

Groq created a novel processing unit known as the Tensor Streaming Processor (TSP), which they categorize as a Linear Processor Unit (LPU). Unlike traditional GPUs, which are parallel processors with hundreds of cores designed for graphics rendering, LPUs are architected to deliver deterministic performance for AI computations.

The LPU's architecture is a departure from the SIMD (Single Instruction, Multiple Data) model used by GPUs and favors a more streamlined approach that eliminates the need for complex scheduling hardware. This design allows every clock cycle to be utilized effectively, ensuring consistent latency and throughput. For developers, this means that performance can be precisely predicted and optimized, which is critical in real-time AI applications.

Energy efficiency is another area where LPUs shine. By reducing the overhead of managing multiple threads and avoiding the underutilization of cores, LPUs can deliver more computations per watt.

Groq's innovative chip design allows multiple TSPs to be linked together without the traditional bottlenecks found in GPU clusters, making them extremely scalable. This enables linear scaling of performance as more LPUs are added, simplifying the hardware requirements for large-scale AI models and making it easier for developers to scale their applications without rearchitecting their systems.

So what does this all mean? LPUs could provide a massive improvement over GPUs for serving AI applications in the future! If anything, it will be great to have alternative high-performing hardware, since A100s and H100s are so in demand.
61 replies · 220 reposts · 1.3K likes · 317.9K views
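The ~500 tok/s figure in the thread above can be turned into felt latency with a little arithmetic. A minimal sketch, where the 500 tok/s rate comes from the thread and the 50 tok/s comparison rate is an assumed illustrative baseline, not a measured GPU number:

```python
# Rough latency model: time to stream an answer of num_tokens
# at a steady decode rate.

def generation_time(num_tokens: int, tokens_per_second: float) -> float:
    """Seconds to generate num_tokens at a constant decode rate."""
    return num_tokens / tokens_per_second

answer_tokens = 300  # roughly a few hundred words

lpu_time = generation_time(answer_tokens, 500.0)       # rate from the thread
baseline_time = generation_time(answer_tokens, 50.0)   # assumed baseline rate

print(f"LPU: {lpu_time:.2f}s  baseline: {baseline_time:.2f}s")
# prints "LPU: 0.60s  baseline: 6.00s"
```

At that rate a full multi-paragraph answer streams in under a second, which is why the replies above describe it as "pretty much instantaneous."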
Ted Sanders
Ted Sanders@sandersted·
As a former materials physicist, I am genuinely unsure of what to make of this superconductivity paper. 🧵 👇
55 replies · 252 reposts · 3.3K likes · 1.3M views
Mark Heaps
Mark Heaps@lifebypixels·
Any insight on what 𝕏 is going to call posts? Obvi not going to be tweets...so....? Brand strategic rollout at its finest. :/
6 replies · 0 reposts · 3 likes · 465 views
Tim Sears
Tim Sears@csTimSears·
@vpatryshev It’s like Theory but not just for clothing.
0 replies · 0 reposts · 0 likes · 92 views
Vlad Patryshev 🇫🇷 🇺🇦 🇺🇸@vpatryshev·
Yesterday a cashier at a gas station asked me what category theory is. Imagine: you have 5 seconds to think and 15 seconds to explain.
4 replies · 0 reposts · 13 likes · 944 views
Tim Sears
Tim Sears@csTimSears·
@oldfriend99 @sclv 0 + 0 is my favorite but Dad said it would never amount to anything.
0 replies · 0 reposts · 0 likes · 63 views
josh (oldfriend99)
josh (oldfriend99)@oldfriend99·
2 + 2 is often held up as an example of a very easy math problem. But I find 2 + 1 is even easier
79 replies · 765 reposts · 20.8K likes · 762.3K views
Tim Sears
Tim Sears@csTimSears·
Math. We have no moat around it. How do we build one?
0 replies · 0 reposts · 0 likes · 75 views