LouisLong

1.3K posts


@LouisVTran

The road to success has never been easy

Joined December 2022
141 Following · 20 Followers
Pinned Tweet
LouisLong @LouisVTran
$CREO on #MEXC is retesting support levels here. This could be a good entry point: if this level holds, the next pump will be big. Don't forget that @creo_engine is a mid-term hold that will perform better! #BitcoinETF #Ethereum #Cancun
[image]
0 replies · 0 retweets · 5 likes · 334 views
LouisLong retweeted
rb3k @rbthreek
.@rei_labs is a billion-dollar AI lab larping at $15M; the rerate will begin soon
14 replies · 22 retweets · 147 likes · 20.1K views
LouisLong retweeted
Rei @rei_labs
Self-evolving Units are constantly learning through interactions with their user. The quality of training shapes the trajectory of evolution. With adaptive AI, the user’s ability to teach is the new ceiling. Great to see experiments and explorations in different specializations.
Ecliptica @EclipticaOS

Our first Live Market Experiment is now public. We gave our units $25k to trade across Equity, Crypto and Forex, and we're tracking everything. Position closure. Profit taking. Drawdown. Sharpe. Win rate. How machines behave vs humans under the same market conditions. Link: strata.ecliptica.ai/nebulyst

14 replies · 31 retweets · 120 likes · 15.5K views
LouisLong retweeted
rb3k @rbthreek
Built on top of $rei, currently outperforming the broader market
6 replies · 14 retweets · 53 likes · 5.6K views
LouisLong retweeted
generational fumbler @genfumbler
I had the pleasure of interviewing @0xreisearch, the founder of REI (@ReiNetwork0x), this past week and I wanted to share the transcript with you all! Some of my previous writing about REI may be outdated, so I strongly suggest reading their pinned article and Gitbook for more information. After having the privilege to use my own units powered by REI’s Core architecture, I can confidently say that my original thesis remains unchanged, and the sheer advancement compared to any leading LLM is unquestionable. Hope you all enjoy, and thank you reisearch once again for your time!
[image]
11 replies · 34 retweets · 155 likes · 19.8K views
LouisLong retweeted
Vestun @Vestunltd
Our venture partner @ajp_digital has been testing Rei Core extensively, and we're excited to share our thesis publicly on $REI. We continue to closely follow and support their growth. The breakthrough isn’t bigger models - it’s smarter cognitive architectures like @rei_labs that turn probabilistic outputs into purposeful intelligence. LLM weights capture patterns, but not understanding - true intelligence requires structured reasoning beyond model parameters.
AJP @ajp_digital

x.com/i/article/2018…

3 replies · 17 retweets · 67 likes · 4.4K views
LouisLong retweeted
reitern @0xreitern
Core's NeuroEvo, simplified: self-optimizing AI infrastructure that perpetually evolves as concurrent users shape it in real time
7 replies · 31 retweets · 100 likes · 8.4K views
LouisLong retweeted
Cryptor ⚡️ @cryptorinweb3
My $REI journey: from $10M to over $135M by digging into fundamentals, tracking product-market fit, and following smart money using @nansen_ai

On June 23, I shared a post about $REI (see QRT) that ended up going viral with over 32k views. Honestly, I was a bit surprised, especially because I don't have a large following and $REI isn't exactly a mainstream name yet in the web3 trenches. There's barely any (big) KOL coverage, and their main X account still sits at 12K followers.

In that post, I shared my insights and findings on $REI, and judging by the response, it seems people still appreciate real research and clear reasoning. So in this post, I want to walk you through how I managed my $REI position and the steps I took that made it my third-largest portfolio position as of today, right after $TAO and $BTC.

For the record, I'm not here to flex a 13x return (though I won't complain). What really excites me is that I followed a process built through 8 years of experience, trial, and error (and many dollars lost) that actually worked.

And I didn't do it alone. I use a bunch of tools, but if I had to pick just one, it would be Nansen. Nansen is by far one of the best analytics platforms out there. It's not cheap, but it's worth every dollar. Without it, I wouldn't have made this call as confidently or as early. If you're curious to try it yourself, there's a 10% discount link below in this thread.

Let's dive in 👇🧵
[image]
Cryptor ⚡️ @cryptorinweb3

Market Pain Reveals Strength. $REI Just Proved It. In every correction, real conviction shines through.

Market dumps suck. But they also tell you what's worth holding. $REI / @ReiNetwork0x has one of the strongest charts post-dump, while most alts got obliterated. I've been rotating out of dead altcoin bags lately and adding to $REI. And I'm not alone: @nansen_ai's data shows the top 100 holders are doing the same. Even larger accounts like @defi_mochi are starting to show interest. They're rotating into strength.

Why is $REI performing so well? I've spent the past months diving deep into AI. At first, I invested because of the hype (yup, I'm guilty). Now I'm investing because I put in the effort and have started to understand the real limitations of today's systems. And if you know what's broken, you can spot what might actually fix it.

But let's be real: most AI protocols today are just wrappers on GPT or Claude pretending to be innovation. And since 99% of CT degens don't bother to research, they end up holding the red bags. Everyone's using the same models. Differentiation is fading fast.

$REI isn't just another AI app riding the LLM wave. It's a full-stack shift in architecture building something different:
→ Bowtie memory evolves over time, forming reusable concepts, not just regurgitating past prompts.
→ Reasoning Cluster shows how answers were formed, not just what the answer is. That's critical for law, finance, and healthcare.
→ Model orchestration runs tasks through specialist models, making it faster, cheaper, and more precise than bloated LLMs.

This isn't just a whitepaper idea. REI is already in closed beta, running real financial analysis and outperforming LLMs on consistency, memory, and numeric precision. While most AI startups rely mostly on OpenAI infra, $REI owns its entire stack. If they succeed, they don't just build an app. They become the backend for every serious tool that needs verifiable reasoning.

Their architecture supports statistical models, time-series prediction, and vision modules, enabling real-world use across finance, science, research, and risk. That's why I believe $REI will attract high-trust, high-complexity clients: quant funds, enterprise strategy teams, and regulated sectors where accuracy and explainability are essential.

So here's the bottom line on my $REI thesis: most AI plays today are short-term wrappers. $REI is building long-term infrastructure. If they pull this off, they won't just ship features: they'll power the apps that matter. I've been buying sub-$10M MC, but I'd still back it at $80M without blinking. This is the kind of early bet I want in my portfolio, even at current valuations.

52 replies · 25 retweets · 136 likes · 23.9K views
LouisLong retweeted
硬核君|Hardriver 🟧 $NAT 🟧
There are lots of "LLM quant" and "AI trading assistant" projects lately, but few people explain clearly which are reliable and which are just narrative. This long article explains, from the underlying architecture up, why ordinary LLMs are inherently unsuited to trading, and how they use a different architecture, Core (from $REI), as a "trading copilot". The discussion of numerical reasoning, uncertainty, confidence, and position management is especially information-dense; anyone doing quant, perps, or intraday trading should read it carefully.
[image]
Ecliptica @EclipticaOS

x.com/i/article/2006…

3 replies · 5 retweets · 13 likes · 1.8K views
LouisLong retweeted
Cryptor ⚡️ @cryptorinweb3
I still remember the tweet below about $REI. I bought the first pump and made a 30x on $REI, then re-bought the lows around a $15M MC (see QRT), which later ran to a $215M ATH. Right now, $REI is presenting a similar opportunity again at a $22M MC, with a 60% daily gain. Exchange supply has dropped sharply from 90M to 65M $REI. Historically, when the yellow line was this low, price often pumped hard afterward. Will history repeat itself and will it run for a third time?
[image] [image]
Cryptor ⚡️ @cryptorinweb3

AI agents and DeFAI are the top gainers in the last 24 hours. A solid DeFAI play is $REI, with strong fundamentals. Down more than 90% from its all-time high, but it looks like the bottom is in. Today, it pumped 58%. But more importantly:
🔹 The @ReiNetwork0x team recently introduced Hanabi-1, a new financial prediction transformer model for market analysis.
🔹 Hanabi-1 shows that compact transformers excel at financial prediction by addressing class imbalance, temporal dynamics, and confidence. It offers reliable signals in challenging market conditions.
🔹 Hanabi-1 is also the first in the team's "Catalog" series, which focuses on financial prediction. This series of transformer models is designed for specialized purposes, with most being open-sourced for developers.
🔹 $REI has also been listed on @HyperliquidX, an ideal platform for training financial AI models. Hanabi-1 is also trained with $HYPE on Hyperliquid.

30 replies · 9 retweets · 69 likes · 5.3K views
LouisLong retweeted
rb3k @rbthreek
Spent the last ~7 days loading up on a long-term $REI position. The intersection of crypto and AI remains one of the more interesting ones, and it's one of the best narratives we've seen that I don't believe will be going away anytime soon. Using the @rei_labs terminal once will make you a believer, imo; they've accomplished so much and it works beautifully. They're scheduled to exit closed beta soon, and although in terms of PA it's got some work to do, they've been building on @base in the open and have been in the AI game 10+ years. If there's a team that can bring it back, I think it's them. We're going to make long-term holding great again with this one.
25 replies · 29 retweets · 144 likes · 33.3K views
LouisLong retweeted
Reflect 🤖 @RFLnow
At Reflect, we are powering ADAM, our DeFi agent, with @Rei_Labs. Read more about it👇
[image]
12 replies · 30 retweets · 85 likes · 17.3K views
LouisLong retweeted
Rei @rei_labs
A Message from the Product and Dev Teams Before We Begin 2026:

In 2025 we learned what building for the next mental model feels like. LLMs got everyone used to expecting a certain thing from AI: say something, get it back; context persists within a session. It's familiar, it's what people know.

Core doesn't work that way, and honestly, nothing alive does either. Core is cognitive architecture. It traverses concept spaces, evolves pathfinding strategies, and forms relationships between ideas through reinforcement and feedback. Decay is part of it: information that isn't reinforced fades. So are concept formation, competitive strategy selection, and so on. It extracts meaning, not transcripts.

We took as much inspiration as we could from living systems, and living systems don't regurgitate training data. That's not a bug; that's how learning actually happens.

The gap isn't anyone's fault. Years of LLM usage just built a certain habit: say it once, retrieve it within the session, no feedback loop, just dump and query. Core asks for something different. You're not querying, you're teaching. The same question gets better answers, not identical ones. Corrections matter. Reinforcement matters.

In the past six months, we noticed that our UI worked great for advanced users who get those concepts from the get-go. For everyone else, there's a learning curve we're still helping with and adding guides to tackle. On top of that, the UI (Factory) data APIs we added free of charge as a closed beta perk sometimes returned nothing, and MCPs have their own flaws: things outside our control that still hit the UX. Builders with their own custom stable data feeds don't have that problem.

What worked the most for a certain demographic: builders who abstract the complexity, feed their own data, and don't ask end users to prompt directly. @EclipticaOS is the first to go semi-public and hit a few thousand users in beta doing exactly that.

In a nutshell: every app built on Rei adds indirect users who get the benefits without the learning curve. The infra doesn't need to be understood by everyone to be useful.

Beyond agentic Core, mental models, learning curves, and so on, the goal is fully capable digital entities that learn conceptually. Right now Core handles reasoning really well: conceptual learning, numerical accuracy, things LLMs fumble. With Core Abstraction (separate from the agentic Core on the frontend/API), builders can plug in external knowledge bases that are task-specific: reasoning and retrieval without polluting your agent's brain. Abstraction is an intelligence layer for all AI, beyond agentic systems and text interfaces. It's destined to give builders complete freedom.

We put all our energy into reasoning evolution and learning. That's much more challenging and will always be what sets us apart. Task-specific retrieval and DB integration ship with Abstraction; the foundation had to come first.

There's a reason AI feels like it's consolidating around a few big players. The models are massive, compute is getting expensive, hardware costs are out of hand, and if you're not running your own data center, you're paying someone who is. We didn't want to play that game. Core is modular. We're aiming to make it possible to run on much less hardware than your average model, even 10 versions away from this one. The goal was always to build something powerful that doesn't need billions to keep running.

The holiday season just ended, and we're back to work. It's been humbling. Shipping something completely new isn't always obvious to get right. Thank you for sticking around; it means the world to us. Happy New Year.
Ecliptica @EclipticaOS

x.com/i/article/2006…

29 replies · 57 retweets · 182 likes · 15.6K views
LouisLong retweeted
Rei @rei_labs
Two updates shipping today: Unit cloning. Fork any Unit from the UI. Settings, knowledge, and behavior prompts all carry over. Test alternate scenarios from any checkpoint without retraining.
[image]
21 replies · 29 retweets · 109 likes · 6.2K views
LouisLong retweeted
Rei @rei_labs
Introducing Core Sandbox Alpha, the first inference-time learning coding assistant. Every interaction affects reasoning immediately. No RL. No retraining. No fine-tuning.
50 replies · 65 retweets · 254 likes · 52K views
LouisLong retweeted
reisearch @0xreisearch
Since we founded @rei_labs (formerly Rei Network), our idea has been to create smarter, more agentic AI that goes beyond the heavy cognitive limits of the current generation of language models. From the beginning, Core has been the system's brain, handling reasoning, decision-making, and learning. Everything else, including language processing, serves as an interface to Core's intelligence.

0.1 was the proof of concept for this idea, with Bowtie's first prototype dating to 2024. 0.2 introduced agentic adaptation. 0.3 focused on training at inference time. With 0.4, we went a step beyond that. 0.4 not only allows the whole system to evolve from every user interaction (going beyond single-unit evolution), but it also created a clean separation from LLM embeddings. Since 0.4, Core has become an entirely separate entity that can work in conjunction with any kind of data or model.

Such a separation represents our bet that intelligence is about how concepts connect and evolve, not about params. Core builds understanding by navigating relationships between ideas, discovering patterns through exploration rather than memorization. It's a fundamentally different approach: instead of pattern matching, our approach consists of developing reasoning strategies that compete and evolve based on what actually works.

With Core now being substrate-agnostic, we can connect it to anything. Take coding environments as an example: Core will handle all the reasoning while using language models purely as tools: dictionaries for code generation. You can switch between Claude, GPT, or DeepSeek, and Core will preserve all learned concepts and adapted strategies/teachings. Tell it once how you prefer to structure functions, and it remembers across "dictionary" swaps. In this scenario, the language model is a knowledge base; Core does the thinking. Each interaction evolves its understanding of your coding style, accumulating teachings that persist regardless of which language model translates them.

Once Core releases in its raw form and becomes directly connectable to anything, the same principle will apply everywhere. Connect it to G-code generators for CNC machining, and it learns your toolpath preferences and material-handling patterns while the model just translates to machine instructions. Link it to SQL engines, and it evolves query optimization strategies specific to your database patterns while models provide syntax. Interface it with MIDI controllers, and Core develops your composition style while models handle note encoding. Even in medical imaging, Core would learn diagnostic patterns from radiologist feedback while vision models just extract features.

We're looking forward to exiting stealth, to the next versions of Core over the coming years, and to the impact multi-disciplinary AI research will have on the world. On behalf of the team, I'd like to personally thank you all for your support throughout the year. Rei is now 1 year old, and it wouldn't be possible without you. You gave us the freedom to build without constraints and the time to get it right.
32 replies · 47 retweets · 213 likes · 27.4K views
LouisLong retweeted
Aerodrome @AerodromeFi
The New Home Base of REI. @rei_labs has migrated and locked protocol-owned assets to Aerodrome and staked the liquidity to earn emissions. Why? On Aerodrome, protocols can earn rewards on their liquidity while contributing to the top pools by volume on Base. Swap & LP $REI today.
[image]
26 replies · 47 retweets · 264 likes · 26.3K views