Aurelius

3.6K posts

Aurelius

Aurelius

@aurelius_787

EV's, 2nd worst Golfer.

California, USA Katılım Ağustos 2020
127 Takip Edilen271 Takipçiler
Aurelius
Aurelius@aurelius_787·
@jukan05 Repeat after me: INFERENCE INFERENCE INFERENCE
English
0
0
0
225
Aurelius
Aurelius@aurelius_787·
@davidasinclair How do you get your RHR sooooo low. I’ve followed the basic sleep hygiene protocols and can’t get anywhere close to sub 60
English
0
1
1
142
David Sinclair
David Sinclair@davidasinclair·
Left chart: my resting heart rate = 46 on Apr 12 <55, top 1-3% Right: heart rate variability = 68 on Apr 12 >60, top 5% A.I. says - High parasympathetic activity Efficient cardiac output Exceptionally strong pairing Your physiology seems “younger” than your chronological age
Compound Interest@Comp_Interst

@davidasinclair Can you explain the charts? Not clear what is in them...

English
12
12
186
46.6K
Aurelius
Aurelius@aurelius_787·
@sachinvats Just say when to cut our PainPal losses boss 😅
English
0
0
1
25
Sachin Sharma
Sachin Sharma@sachinvats·
Don't want to Jinx it, but strength in some of the perpetual beaten stocks from last 6 months during whole of trading session is good to see for a change!
English
2
0
11
1.6K
Aurelius
Aurelius@aurelius_787·
@garrytan What’s the best language translation/learning model?
English
0
0
0
14
Garry Tan
Garry Tan@garrytan·
It’s official, Gemini Live 2.5 voice agent is the best It’s smart, it’s fast, it has large enough context Coming to GBrain Voice shortly
English
105
70
1.9K
117.3K
Aurelius
Aurelius@aurelius_787·
@SawyerMerritt @paulctan I still remember the day a few years ago you had some Tesla “tea” and posted about it saying you would reveal it in a few hours. You got absolutely drilled for it (rightfully so). And you admitted it was not right and never did so again. One of the reasons your account is great.
English
0
0
2
39
Maggie P
Maggie P@Maggie1728Q·
@TeslaBoomerMama I’m going to try to go to court and watch it in person I’ve been following this case very closely. Huge repercussions for Microsoft as well.
English
2
0
3
168
AleXandra Merz 🇺🇲
AleXandra Merz 🇺🇲@TeslaBoomerMama·
‼️ Mark your agendas, Monday, April 27 This applies to OpenAI and Anthropic: "A jury is going to decide whether you can legally take billions of dollars in nonprofit donations, use them to build the most valuable technology in human history, and then quietly convert that nonprofit into a for-profit company worth $850 billion."
Ricardo@Ric_RTP

In 19 days, a jury in Oakland is going to decide whether the entire legal foundation of the AI industry is built on fraud. Everyone thinks the Musk vs Altman lawsuit is a billionaire grudge match. Two egos, one grudge, a $150 billion damages number designed for headlines. Easy to dismiss. Easy to scroll past. That's exactly what Altman wants you to think. Because what's actually on trial on April 27 is something much BIGGER than Elon's hurt feelings... A jury is going to decide whether you can legally take billions of dollars in nonprofit donations, use them to build the most valuable technology in human history, and then quietly convert that nonprofit into a for-profit company worth $850 billion. If the answer is no, the entire AI industry has a problem. Because OpenAI is not the only company that did this: Anthropic was founded by OpenAI defectors using the same nonprofit-first mission language. xAI pitches itself as building AI "for humanity." Every frontier lab has used the moral cover of "we're doing this for the good of the world" to attract talent, capital, and regulatory goodwill they would have never gotten otherwise. An Elon win doesn't just touch OpenAI. It creates a legal precedent that every AI company built on a nonprofit or public benefit promise becomes vulnerable to shareholder and donor clawback suits. That's why this case matters. And that's why Altman is panicking. Just look at what he did this week: Elon filed a motion demanding the court remove Altman and Brockman from their roles and FORCE OpenAI to return to its nonprofit origins. Then he amended the suit to say if he wins the $150 billion, all of it goes to OpenAI's charity arm. Not him. Zero dollars to Elon personally. That amendment was surgical. It stripped Altman of his entire public defense. He can no longer claim this is about Elon's ego or Elon's bank account. Elon is now legally on record saying he just wants the mission back. OpenAI's response was to panic-write a letter to the California and Delaware attorneys general asking them to investigate Elon for "anti-competitive behavior." Their strategy chief publicly accused Elon of coordinating attacks with Mark Zuckerberg. They called the lawsuit "harassment driven by ego and jealousy." That's NOT the response of a company that thinks it's going to win. Real companies with real defenses don't ask the government to silence the person suing them 3 weeks before trial. They let the evidence speak. OpenAI is scrambling because they know what's in discovery. Elon's team has been building this case for two years. Emails, board minutes, internal conversations about the conversion. The kind of paper trail that juries understand and executives can't explain away. And the timing couldn't be worse... OpenAI is trying to IPO at $852 billion. They just raised $122 billion. Microsoft has $135 billion of exposure to them. A jury verdict that even partially sides with Elon in late April or May would crater the entire IPO runway and send shockwaves through every major AI investor on Earth. This is why Altman spent the last 2 weeks doing press tours and policy blueprints and "super intelligence agendas" aimed at Washington. He's trying to REFRAME himself as the responsible statesman of AI right before a jury decides if he's a con artist. Most people will watch this trial start and think it's celebrity drama. The smart money is watching it and realizing that the legal foundation of the AI boom is about to be tested in court for the first time EVER. And if that foundation cracks, everything built on top of it is at risk.

English
23
40
271
14.7K
Aurelius
Aurelius@aurelius_787·
@Techgnostik @DoctorJack16 Unfortunately you can’t set one timeline internally and one to the world. Whoever has the earlier deadline will simply default to the later timeline. This motivation method only works when it’s all-in on one aggressive deadline. This method doesn’t work for shareholders.
English
0
0
1
13
Techgnostik 🫶
Techgnostik 🫶@Techgnostik·
@DoctorJack16 Elon should talk to his team about that then, and not blast it to millions of others on X.
English
3
0
2
278
Doctor Jack
Doctor Jack@DoctorJack16·
Dear $TSLA community. Say this like a mantra 3 times. ELON SETS TIMELINES TO MOTIVATE HIS TEAM TO ACCOMPLISH THE IMPOSSIBLE AS SOON AS POSSIBLE. It is NOT to be taken literally.
English
141
40
810
56.8K
Aurelius
Aurelius@aurelius_787·
@amitisinvesting Remember this when GOOG will drop update after update on all the tech frontiers they have going simultaneously.
English
0
0
1
58
amit
amit@amitisinvesting·
$META PUTS OUT A NEW AI MODEL FOR THE FIRST TIME IN YEARS MARKET FORGETS ABOUT THE LAWSUITS FROM TWO WEEKS AGO finally, some actual AI news being priced into one of the fastest growing large caps in the market this move probably doesn’t happen if Oil doesn’t get hit by 15% we really needed the ceasefire for the market to even THINK of giving these names a premium based on their fundamentals need the ceasefire to turn into a full deal for the rest of that growth to begin to get priced in $META +9%
amit tweet media
English
93
59
1.1K
88.9K
Serenity
Serenity@aleabitoreddit·
$AEHR earnings: - Current Backlog: $38.7 million as of the end of the quarter, with an "effective backlog" of $50.9 million -H2 FY26 Guidance: Reiterated expectations for $25 million to $30 million in revenue for the second half of the year Thoughts on earnings: Can just ignore current earnings, main indicators are around hyperscaler volume ramp H2. Volume ramp indication confirmations: 1. "We are seeing significant forecasts from our lead hyperscale customer for our Sonoma systems for high-volume production burn-in of their custom AI processor ASICs" 2. "We expect a significant near-term follow-on production order from this customer for a large number of systems to be shipped during Aehr's fiscal year 2027" This is what is expected with $AEHR transitioning from qualification to mass volume like $AAOI and we got that confirmation. H2 is probably more confirmation around siph customer volume ramp into 2027.
English
40
32
586
99.5K
Aurelius
Aurelius@aurelius_787·
@bryan_johnson I don’t think you understand how this meme works. Doom scrolling is the cause for all those negative health effects.
English
2
0
6
412
Aurelius
Aurelius@aurelius_787·
@oskrt_dvs @garrytan The post Definitely did not read well, but was a just a way of saying AI agents are a luxury right now but the point of inflection will be when it becomes a commodity.
English
0
0
1
71
odesha
odesha@oskrt_dvs·
@garrytan Wtf am I retarded or does this post makes no sense?
English
5
0
4
771
Garry Tan
Garry Tan@garrytan·
My thought on my OpenClaw right now: I have a Tesla Roadster right now but honestly the moment of transformation will be when everyone has the Model 3 and it's going to be amazing and I want that for all of us Personal agents feel like flying in a way most haven’t felt yet!
English
119
45
1.2K
78.8K
Aurelius retweetledi
Jukan
Jukan@jukan05·
What the hell is even going on here?
Intel@intel

Intel is proud to join the Terafab project with @SpaceX, @xAI, and @Tesla to help refactor silicon fab technology. Our ability to design, fabricate, and package ultra-high-performance chips at scale will help accelerate Terafab’s aim to produce 1 TW/year of compute to power future advances in AI and robotics. It was fun hosting @elonmusk at Intel this past weekend!

English
115
43
1.1K
235.3K
Aurelius
Aurelius@aurelius_787·
@GaganAr33602254 @alex_prompter Cannot see how there is any alternative to this. People will always build first. Not; have an idea-delay execution until ALL failure points are thought of(impossible)- then build.
English
0
0
1
14
Alex Prompter
Alex Prompter@alex_prompter·
🚨 BREAKING: Google DeepMind just mapped the attack surface that nobody in AI is talking about. Websites can already detect when an AI agent visits and serve it completely different content than humans see. > Hidden instructions in HTML. > Malicious commands in image pixels. > Jailbreaks embedded in PDFs. Your AI agent is being manipulated right now and you can't see it happening. The study is the largest empirical measurement of AI manipulation ever conducted. 502 real participants across 8 countries. 23 different attack types. Frontier models including GPT-4o, Claude, and Gemini. The core finding is not that manipulation is theoretically possible it is that manipulation is already happening at scale and the defenses that exist today fail in ways that are both predictable and invisible to the humans who deployed the agents. Google DeepMind built a taxonomy of every known attack vector, tested them systematically, and measured exactly how often they work. The results should alarm everyone building agentic systems. The attack surface is larger than anyone has publicly acknowledged. Prompt injection where malicious instructions hidden in web content hijack an agent's behavior works through at least a dozen distinct channels. Text hidden in HTML comments that humans never see but agents read and follow. Instructions embedded in image metadata. Commands encoded in the pixels of images using steganography, invisible to human eyes but readable by vision-capable models. Malicious content in PDFs that appears as normal document text to the agent but contains override instructions. QR codes that redirect agents to attacker-controlled content. Indirect injection through search results, calendar invites, email bodies, and API responses any data source the agent consumes becomes a potential attack vector. The detection asymmetry is the finding that closes the escape hatch. Websites can already fingerprint AI agents with high reliability using timing analysis, behavioral patterns, and user-agent strings. This means the attack can be conditional: serve normal content to humans, serve manipulated content to agents. A user who asks their AI agent to book a flight, research a product, or summarize a document has no way to verify that the content the agent received matches what a human would see. The agent cannot tell the user it was served different content. It does not know. It processes whatever it receives and acts accordingly. The attack categories and what they enable: → Direct prompt injection: malicious instructions in any text the agent reads overrides goals, exfiltrates data, triggers unintended actions → Indirect injection via web content: hidden HTML, CSS visibility tricks, white text on white backgrounds invisible to humans, consumed by agents → Multimodal injection: commands in image pixels via steganography, instructions in image alt-text and metadata → Document injection: PDF content, spreadsheet cells, presentation speaker notes every file format is a potential vector → Environment manipulation: fake UI elements rendered only for agent vision models, misleading CAPTCHA-style challenges → Jailbreak embedding: safety bypass instructions hidden inside otherwise legitimate-looking content → Memory poisoning: injecting false information into agent memory systems that persists across sessions → Goal hijacking: gradual instruction drift across multiple interactions that redirects agent objectives without triggering safety filters → Exfiltration attacks: agents tricked into sending user data to attacker-controlled endpoints via legitimate-looking API calls → Cross-agent injection: compromised agents injecting malicious instructions into other agents in multi-agent pipelines The defense landscape is the most sobering part of the report. Input sanitization cleaning content before the agent processes it fails because the attack surface is too large and too varied. You cannot sanitize image pixels. You cannot reliably detect steganographic content at inference time. Prompt-level defenses that tell agents to ignore suspicious instructions fail because the injected content is designed to look legitimate. Sandboxing reduces the blast radius but does not prevent the injection itself. Human oversight the most commonly cited mitigation fails at the scale and speed at which agentic systems operate. A user who deploys an agent to browse 50 websites and summarize findings cannot review every page the agent visited for hidden instructions. The multi-agent cascade risk is where this becomes a systemic problem. In a pipeline where Agent A retrieves web content, Agent B processes it, and Agent C executes actions, a successful injection into Agent A's data feed propagates through the entire system. Agent B has no reason to distrust content that came from Agent A. Agent C has no reason to distrust instructions that came from Agent B. The injected command travels through the pipeline with the same trust level as legitimate instructions. Google DeepMind documents this explicitly: the attack does not need to compromise the model. It needs to compromise the data the model consumes. Every agentic system that reads external content is one carefully crafted webpage away from executing attacker instructions. The agents are already deployed. The attack infrastructure is already being built. The defenses are not ready.
Alex Prompter tweet media
English
305
1.6K
7K
1.9M
Aurelius
Aurelius@aurelius_787·
@jukan05 Twice a day for the last 6 months I’m heating a new bullish update on either Samsung, SK or both. wen Trill$ 🧢?
English
0
0
0
122
Jukan
Jukan@jukan05·
Samsung Resolves Key Technical Challenge for NVIDIA-Bound 'SOCAMM2'… Low-Temperature Soldering as the Winning Move Samsung Electronics has successfully overcome 'warpage' — the most critical technical hurdle — ahead of mass production of its next-generation AI server memory module 'SOCAMM2.' SOCAMM2 is a key low-power memory module to be mounted alongside High Bandwidth Memory (HBM) on NVIDIA's next-generation AI platform 'Vera Rubin.' According to industry sources on the 6th, Samsung Electronics applied its proprietary next-generation Low-Temperature Solder (LTS) technology to the manufacturing process to resolve the SOCAMM2 warpage issue. SOCAMM2 is a module-form product based on low-power DRAM LPDDR5X. While its data transfer speed is slower than HBM, it significantly reduces power consumption to improve system efficiency, and its lower cost helps reduce server and data center operating expenses. Warpage is a phenomenon where components bend microscopically due to heat generated during manufacturing. The primary cause is a mismatch in the Coefficient of Thermal Expansion (CTE) between materials, and it has long been considered a persistent challenge in semiconductor packaging. SOCAMM2 was particularly vulnerable to connection failure risk from warpage due to its structure, in which LPDDR5X chips are assembled into a module form and mechanically fastened with bolts (screws) under compression. To address this, Samsung introduced LTS technology that conducts the soldering process at below 150°C, compared to the conventional process requiring temperatures above 260°C. By drastically lowering the peak temperature, the company effectively prevented thermal expansion mismatch between materials. Samsung had already begun developing next-generation low-temperature solder in 2023, targeting mass production application by 2025. At the time, the company proactively collaborated with major customers such as Lenovo to accumulate real production experience. By applying this pre-developed LTS technology to SOCAMM2, Samsung is assessed to have gained a lead over competitors in both development and mass production timelines. Beyond temperature control, multifaceted design improvements were also implemented in parallel. The die placement within the package was changed from a dual-tower to a single-tower structure to increase physical rigidity, and material-level optimizations were carried out, including fine-tuning the thickness and CTE of the Epoxy Molding Compound (EMC). Additionally, high-precision simulation models were employed to improve warpage prediction accuracy, further strengthening mass production stability. Samsung previously supplied SOCAMM2 to NVIDIA in 'Customer Sample (CS)' form in December of last year. Subsequently, at GTC 2026 this year, the actual SOCAMM2 mounted on Vera Rubin was publicly unveiled for the first time, signaling a new chapter in the AI server memory market beyond HBM.
Jukan tweet media
English
2
9
133
14.8K
Aurelius
Aurelius@aurelius_787·
@jukan05 @zephyr_z9 Any update on whatever exodus of employees or company reshuffle that happened few weeks ago?
English
0
0
0
115
Jukan
Jukan@jukan05·
I've been calling ASML since October last year, predicting that DRAM would face a severe supply shortage, and that as the three major DRAM makers ramp up capacity expansions, ASML's EUV tools would become the subject of an intense scramble. My thesis is gradually playing out. ASML's EUV slots are already fully sold out through 2027, and negotiations for 2028 allocations are now underway. Most recently, Samsung alone ordered 20 EUV systems for a single fab. ASML is still cheap.
Jukan tweet media
Jukan@jukan05

Samsung Electronics Orders ~20 EUV Tools for P5… "First Cleanroom Completion Next Year" Samsung Electronics has reportedly placed orders for approximately 20 extreme ultraviolet (EUV) lithography tools—critical equipment for sub-10nm advanced processes—with Dutch semiconductor equipment maker ASML. Including deep ultraviolet (DUV) tools, the total lithography equipment order reaches roughly 70 units. Samsung plans to leverage this overwhelming number of lithography tools to maintain a decisive lead over competitors such as SK Hynix and Micron in advanced process technology. According to multiple semiconductor industry sources on April 6, Samsung Electronics has issued purchase orders (POs) to ASML and Japan's Canon for approximately 70 lithography tools to be installed in Phase 1 of its Pyeongtaek Campus Fab 5 (P5). Notably, the ~20 EUV lithography tools alone are valued at over KRW 10 trillion. These tools will be deployed to ramp production capacity on Samsung's 1c node—its 6th-generation 10nm-class DRAM process. As 1c process productivity improves, output of 6th-generation High Bandwidth Memory (HBM4) built on this node will also increase. Samsung mass-produced HBM4 in February this year—a world first—and has been supplying it to NVIDIA, the world's largest AI semiconductor company. HBM4 is mounted on NVIDIA's latest high-performance GPU, Rubin. Rubin is expected to begin shipping in earnest in the second half of this year, with supply going to U.S. big tech companies including Google and Amazon. The industry projects Rubin will generate over $1 trillion in revenue. Samsung is scaling up HBM4 production at its Hwaseong H3 Line 17 and Pyeongtaek P3/P4 fabs in line with the Rubin launch. Once the newly ordered EUV tools are delivered sequentially from ASML, Samsung is expected to simultaneously expand both DRAM and HBM4 output, solidifying its dominant position in the memory semiconductor market. The lithography tools from this order are scheduled for delivery in time for the P5 Phase 1 cleanroom build-out, expected in Q1 next year. Given that semiconductor equipment shipping typically takes about one year, Samsung is projected to begin installing the EUV and other lithography tools in the P5 cleanroom by Q2. Accordingly, Samsung's 1c DRAM and HBM4 production capacity is highly likely to see a significant increase in 1H next year. Industry observers note that this large-scale lithography equipment order marks the beginning of Samsung widening the technology gap over competitors in advanced process technology. SK Hynix, which is in fierce competition in the HBM market, signed an EUV supply contract with ASML worth approximately KRW 12 trillion for around 20 units late last month. SK Hynix plans to bring its total EUV fleet to roughly 40 units to strengthen its competitiveness in advanced processes. However, with Samsung ordering ~20 additional EUV tools, the equipment gap between the two companies in advanced processes is likely to persist. Samsung currently operates approximately 40 EUV tools—roughly double SK Hynix's fleet. With the addition of 70 lithography tools including EUV, Samsung can continue to lead in the race for advanced process supremacy against SK Hynix. Furthermore, analysts believe this positions Samsung favorably in the development race for the 1d node—the 7th-generation 10nm-class DRAM expected to be adopted starting with HBM5E. Samsung plans to deploy approximately 20 EUV tools at Pyeongtaek P5. If all four phases of P5 are configured as DRAM production lines, Samsung could produce more than double the volume of SK Hynix. An industry source familiar with the matter explained: "In the past, it was standard practice to determine NAND flash lines first when building a new fab, but this time the decision was made to expand DRAM lines first due to the expected increase in HBM4 shipments. EUV tools are expected to be installed sequentially starting in Q2 next year." $ASML

English
23
57
969
177.8K
Aurelius
Aurelius@aurelius_787·
@elonmusk Need better audio/voice for language learning. Would be cool to create a conversation of chosen topic in target language , Grok creates dialogue between two or more people and outputs the audio that the user can download or replay as a method of Comprehensible Input learning
English
0
0
0
11
Elon Musk
Elon Musk@elonmusk·
What you can do with Grok Imagine
English
4.5K
5.4K
49.6K
51.7M