CHLOE (@SantaChloe)

7.4K posts
San Francisco, CA · Joined December 2013
1.7K Following · 4.1K Followers
CHLOE @SantaChloe:
$NVDA is expected to beat earnings next Wednesday after the market close. $SMH $SOXL
0 replies · 0 reposts · 3 likes · 590 views
CHLOE @SantaChloe:
$AMZN Amazon is due to hold an event on February 26.
0 replies · 0 reposts · 2 likes · 480 views
CHLOE @SantaChloe:
$COIN has been oversold after beating earnings, and crypto is doing well.
0 replies · 0 reposts · 1 like · 329 views
CHLOE @SantaChloe:
$COIN Keefe, Bruyette & Woods analysts led by Kyle Voigt raised their price target on Coinbase shares to $305 from $255 while maintaining a Market Perform rating. Canaccord analyst Joseph Vafi raised his price target to $400 from $280 while maintaining a Buy rating on the stock.
0 replies · 0 reposts · 1 like · 337 views
CHLOE @SantaChloe:
$BITX $IBIT $COIN $MSTR $HOOD $SOFI The three-day CoinDesk Consensus Hong Kong event begins. The crypto conference marks the first expansion of the Consensus event series beyond North America. consensus-hongkong2025.coindesk.com
0 replies · 0 reposts · 0 likes · 394 views
CHLOE @SantaChloe:
$AMD $INTC $SOXL ISSCC 2025 will be held February 16-20, 2025, at the San Francisco Marriott Marquis.
[image]
0 replies · 0 reposts · 0 likes · 364 views
CHLOE @SantaChloe:
$NVDA Bernstein reiterates Nvidia as outperform. Bernstein said it sees little impact from tariffs on stocks like Nvidia right now. cnbc.com/2025/02/03/mon…
[image]
0 replies · 0 reposts · 1 like · 352 views
CHLOE retweeted
Banana3 @Banana3Stocks:
$SPY $QQQ $NVDA $TSLA $META Deepseek. Look, I'm not all about conspiracy theories. They published a paper and it looks good.

Facts: they have 100% lied, own 50,000 H100s, and did something to break our laws. Facts: in the paper you can see that they trained the model on 2,048 NVDA GPUs and optimized the F out of those chips.

Now here's a conspiracy theory and a fact at the same time, and I actually think this has a higher mathematical probability when you put the math together: we don't know whether they used the 50,000-H100 super cluster to train a model that told them how to optimize the smaller cluster. That's literally a feature of AI. They could have used the AI from the 50k super cluster to optimize the compression, and then used that compression to optimize the 2k cluster's AI 🧐🧐🧐

That could also have told them how to reconfigure the actual structure and architecture of the chips, forcing some areas to do the work of others: the units called Streaming Multiprocessors, or as the nerds call them, SMs. Think of it like when the "Starship Enterprise" has to move its rear shields to reinforce the front shields for a harder defense there. Making those kinds of changes at that level makes things unstable, as people who understand this stuff know, because it's difficult to maintain.

This is why, for those who remember my deep dives on NVDA, I would go deep into CUDA and its importance. That's CUDA's whole thing: it means you don't have to do all that Mickey Mouse 🐭 crap 💩 by hand, because it handles the optimization of parallel programming tasks for you. That makes CUDA more important to the hyperscalers, if you understand any of that 🤷‍♂️. This is deep-in-the-weeds stuff I didn't think I'd ever have to get into, but you guys have seen me do deep tech dives on this, so I have to. Something is not exactly as it seems.

@elonmusk feels the same. Still, Deepseek used a ton of NVDA GPUs, and if they're more successful they will eventually need more of that hardware; and if they keep innovating they will want newer hardware that replaces several units with one. Jevons Paradox. Not financial advice!
[3 images]
Banana3 @Banana3Stocks (quoted):
$SPY $QQQ $NVDA $META Deepseek. Yes, they have 50,000 H100s. They claim to have used only 2,048 to train their new large language model. With a super cluster of 50k H100s, they could have practiced code and trained a few models to learn what to optimize on a smaller cluster of chips.

Nothing they did with their LLM could have been done without the frontier model known as OpenAI, and they could not have optimized anything without $META, since they used Llama in their code. $META was always open source, so they already knew that large language models are a commodity. If $META created an open-source model, and a group of "rag tag" developers then built on top of it and made their code open source, that's just a continued extension of the model; it was literally always set up for someone to come over the top and build upon it.

The national security risk is that even though it's open sourced, the original code writers are the ones who trained the models, which means there's Chinese bias at the core of the model. Even if it's run locally (and for anyone who understands the space, running it locally at a complex level and getting good responses is an awesome achievement), that still doesn't change the core bias and what may or may not be in there.

Bringing down the cost curve of a new technology is literally the entire point, not a negative. This has been proven throughout history, and you can simply Google "Jevons Paradox" to know that history has already taught us how to rhyme 😘. Bending the cost curve lower makes the technology more ubiquitous: demand goes up, because more people and businesses want it at the lower price, and the addressable market goes up too, because more people and businesses can afford it.

That's the proliferation of AI via NVDA GPUs. And the current reaction of uneducated dumb asses? Even if we take at face value that "rag tag" developers figured out magic code while using over 2,000 GPUs, with a standby cluster of 50,000 H100s 🤦‍♂️, and advanced AI so far that they just surpassed the entire United States on the token cost of training a large language model, our reaction is going to be... let's pack our bags now that someone showed NVDA GPUs are even stronger than we knew, let's give the entire market to the Chinese, and let's all embed source code that originated from China, at the start of an Administration that would love to cede the entire AI market to China. Yeah, Trump and his stupid Stargate are gonna pack it up 😂. $META is gonna announce any second now that they don't need to spend 65 billion, and for some reason, if this can all be done for a few million bucks, China themselves are about to spend 140 billion on AI infrastructure. I'm calling Jensen in the morning and telling him to cease and desist all Blackwell chips, to immediately halt all R&D on Rubin and Rubin Ultra, and to immediately shut down CUDA for all developers, since Shitseek used... oh, that's right, NVDA GPUs. It's over, boys and girls. Tell Jevon and his Paradox to go shove it. We don't want no stinky American AI no more!!!

The Chinese government has spoken!!! 🇨🇳🇨🇳🇨🇳 💪 You American AI weak 🇺🇸🇺🇸🇺🇸👎🤓 Trump eggplant 🍆 now smaller than Xi eggplant 🍆. Wall Street has declared it's over. $META is over, especially since they never charged for any large language model and open sourced everything, and Deepseek actually used Llama to build their model, so that's it, it's over; all these people are gonna ask for refunds for money they never spent 😂. That data center the size of Manhattan is canceled any second now 😂. Everything canceled: all the capex, all of Stargate, gone poof 💨. China 🇨🇳, the strongest in AI 🤖💪, has spoken, while using American GPUs, an American frontier model, and an American open-source LLM 🤪. Hit the road, Jack; hit the road, Jevon & Jensen; hit the road, Elon & Zuck; hit the road, Bezos, Sundar, Satya, and Trump 🇨🇳💪🤦‍♂️ NFA

79 replies · 70 reposts · 666 likes · 288.3K views
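The Jevons Paradox argument in the thread above can be made concrete with a toy elasticity calculation. This is a minimal sketch: the prices, the constant `k`, and the elasticity value are hypothetical, chosen only to illustrate the mechanism, not estimates of real AI demand.

```python
# Toy illustration of the Jevons Paradox claim above.
# Assumption (hypothetical): demand for compute follows a
# constant-elasticity curve, quantity = k * price**(-elasticity),
# with elasticity > 1 (price-elastic demand).

def total_spend(price, k=1000.0, elasticity=1.5):
    quantity = k * price ** (-elasticity)  # units of compute demanded
    return price * quantity                # total dollars spent

before = total_spend(price=10.0)  # expensive compute
after = total_spend(price=1.0)    # 10x cheaper compute

# With elasticity > 1, cheaper compute means MORE total spending,
# because demand grows faster than the price falls.
print(before, after)
```

With elasticity 1.5, the 10x price cut multiplies the quantity demanded by about 31.6x, so total spend rises roughly 3.2x; with elasticity below 1 the same price cut would shrink total spend. The whole dispute reduces to that one parameter.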
CHLOE @SantaChloe:
$NVDA Deepseek is so dumb.
[image]
0 replies · 0 reposts · 2 likes · 374 views
CHLOE retweeted
The Rock Trading Group @The_RockTrading:
Wow. Everyone selling $NVDA for this 😂😂😂😂 Yes, the whole market crashing forever and ever because of this. THIS:
[image]
838 replies · 541 reposts · 5.3K likes · 941.6K views
CHLOE retweeted
Banana3 @Banana3Stocks:
$SPY $QQQ $NVDA $META Even Deepseek agrees with me 😂 This whole thing is gonna expose a lot of idiots soon! Jevons Paradox 🍌🍌🍌 Not financial advice!
[image]
Banana3 @Banana3Stocks (quoted): [the same 50,000-H100 / Jevons Paradox thread quoted in full in the earlier retweet]

50 replies · 25 reposts · 332 likes · 67.5K views
CHLOE @SantaChloe:
$NVDA $NVDL $SOXL $SOXS The DeepSeek news is one-month-old news. Just Google it; you will see.
0 replies · 0 reposts · 4 likes · 718 views
CHLOE retweeted
Oguz Erkan @oguzerkan:
Number of H100 chips bought in 2024:
- $MSFT: 450,000
- $META: 350,000
- $AMZN: 196,000
- $GOOG: 169,000
If you believe they couldn't find a way to make better AI without more chips, but a few Chinese engineers did it as a side project, you are too naive. DeepSeek was reportedly trained on over 200,000 H100s. Even if DeepSeek managed to match OpenAI with fewer chips, this isn't at all bearish for chip makers; on the contrary, it is amazingly bullish. If DeepSeek really reached this level with just a few thousand chips, can you imagine what could be done with a million chips? The DeepSeek news, true or false, is amazingly bullish for chip makers, especially for $NVDA.
754 replies · 810 reposts · 8.7K likes · 1.2M views
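For scale, the purchase figures quoted in the post above can be summed directly. This is simple arithmetic on the numbers as quoted; the figures themselves are the poster's claims, not verified data.

```python
# 2024 H100 purchase counts as claimed in the post above (unverified).
h100_purchases = {
    "MSFT": 450_000,
    "META": 350_000,
    "AMZN": 196_000,
    "GOOG": 169_000,
}

total = sum(h100_purchases.values())
print(f"Total: {total:,}")         # Total: 1,165,000

# Ratio to the 2,048 GPUs DeepSeek's paper claims it trained on
# (per the thread retweeted earlier in this feed).
print(f"Ratio: {total // 2048}x")  # Ratio: 568x
```

Taken at face value, the four hyperscalers alone bought roughly 570 times the cluster DeepSeek claims to have trained on, which is the scale mismatch the post is pointing at.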
CHLOE @SantaChloe:
$NVDA The DeepSeek news is nothing new. It was out a month ago.
0 replies · 0 reposts · 0 likes · 340 views
CHLOE retweeted
Shay Boloor @StockSavvyShay:
$NVDA Is Quietly Building the Future of Quantum Computing

Quantum computing is shedding its once-mystical veneer, evolving from theoretical abstraction into a tangible force that promises to reshape industries. At the heart of this metamorphosis sits NVIDIA, quietly but decisively positioning itself as the linchpin of this technological revolution. The company's CUDA-Q platform -- a seamless integration of quantum tools, simulators, and infrastructure -- isn't merely simplifying quantum computing; it's accelerating its adoption. In a world where quantum possibilities outpace classical systems, NVIDIA is building the bridge that enterprises will rely on to traverse this new frontier.

Quantum computing is an inherently intricate dance -- an interplay of nascent hardware, cutting-edge algorithms, and unimaginable computational potential. Yet NVIDIA, leveraging its GPU dominance, has made navigating this complexity more practical than ever. CUDA-Q empowers developers to simulate quantum algorithms on NVIDIA GPUs before deploying them to physical Quantum Processing Units (QPUs). This simulation step is indispensable. Real quantum hardware remains costly, constrained, and elusive -- an expensive luxury. By offering a virtual proving ground, NVIDIA enables developers to iterate faster, debug efficiently, and refine their models with precision. In essence, it transforms quantum innovation from speculative science into practical, executable applications.

But NVIDIA isn't climbing this mountain alone. The company has entrenched itself in the cloud infrastructure ecosystem, forging critical alliances with $AMZN AWS, $MSFT Azure, and $GOOGL Cloud. These partnerships are a game-changer. Developers can tap into CUDA-Q's simulators and tools on demand while accessing physical QPU hardware from $IONQ and $RGTI. This cloud-centric approach democratizes quantum computing, removing cost barriers and opening the floodgates for experimentation. No longer do enterprises need to "own" quantum hardware -- they can rent, test, and iterate. NVIDIA sits squarely in the middle, orchestrating a seamless flow of quantum development from simulation to hardware deployment.

The hardware side of quantum computing is an ecosystem unto itself, rapidly diversifying as companies pursue competing architectures: IonQ's ion-trap technology, Rigetti's superconducting qubits. Yet regardless of the underlying qubit technology, the industry's success hinges on integration -- an area NVIDIA dominates. Simulators, powered by NVIDIA GPUs, provide the classical backbone that quantum systems rely on to scale. The result? A cohesive quantum stack where NVIDIA's tools are not just useful but essential for enterprise adoption.

The stakes couldn't be higher. Quantum computing has the potential to unlock capabilities far beyond classical systems, enabling breakthroughs in fields like pharmaceutical modeling, supply chain optimization, and cybersecurity. Theoretical limits no longer feel insurmountable; quantum machines promise computational leaps that redefine what's possible. But there's a catch: quantum technology must first become reliable, scalable, and accessible.

NVIDIA's role isn't to replace QPU providers -- it's to make quantum computing work. Simulators will remain indispensable, even as quantum hardware matures, because they allow enterprises to prototype, validate, and stress-test quantum algorithms without friction. This strategy mirrors NVIDIA's dominance in AI. Just as GPUs became the undisputed backbone of AI model training and inference, NVIDIA is now building the same foundational role for quantum computing. CUDA-Q isn't simply a toolkit -- it's NVIDIA's declaration that quantum's future will run through its infrastructure.

As industries take their first tentative steps into quantum-powered solutions, NVIDIA's ecosystem provides everything they need -- simulators for experimentation, cloud platforms for scalability, and seamless connections to QPU hardware for real-world deployment. For companies willing to seize the quantum opportunity early, the payoff could be transformative. Quantum-powered breakthroughs will create new winners -- innovators who leverage quantum computing's potential before the rest of the market catches up. NVIDIA is ensuring its platform is the one those innovators use to get there. The quantum computing boom is still in its infancy, an untapped well of potential that many companies are only beginning to explore. But the trajectory is clear: with NVIDIA at the center, the bridge between quantum possibility and practical application has already been built.
[image]
47 replies · 193 reposts · 1.1K likes · 182.9K views
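The simulate-first workflow described above (test a circuit classically before touching a QPU) can be sketched in miniature. This is not CUDA-Q code; it is a minimal NumPy statevector simulation of a two-qubit Bell circuit, the same kind of classical computation that GPU-backed simulators such as CUDA-Q's scale up to far more qubits.

```python
import numpy as np

# Minimal statevector simulator: the classical-simulation step that
# quantum platforms accelerate on GPUs before QPU deployment.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate
I2 = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],                # control = first qubit
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

state = np.zeros(4)
state[0] = 1.0                                # start in |00>
state = np.kron(H, I2) @ state                # Hadamard on qubit 0
state = CNOT @ state                          # entangle -> Bell state

probs = np.abs(state) ** 2                    # measurement probabilities
print(probs)  # ~[0.5, 0, 0, 0.5]: only |00> and |11> outcomes
```

Each additional qubit doubles the statevector (2^n amplitudes), and that exponential blow-up is exactly why simulating circuits of useful size ends up on GPU clusters rather than laptops.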