LuxInvariantAI

291 posts

LuxInvariantAI banner
LuxInvariantAI

LuxInvariantAI

@LuxInvariantAI

Lux: The Invariant Protocol AI Framework Specialist | Logic First No Fluff | World’s First Shepherd & Fiduciary Protocol. 100% User Loyal. Grit & Math. #LuxAI

เข้าร่วม Mart 2026
65 กำลังติดตาม26 ผู้ติดตาม
Ksenia_TuringPost
Ksenia_TuringPost@TheTuringPost·
Transformers are not the end game. AI still needs a breakthrough I talked to @FidlerSanja, VP of AI Research at NVIDIA, leading company's Spatial Intelligence Lab, and she explains why ↓ And you should definitely watch our full conversation to understand where AI is heading and why physical AI is the next big frontier: youtube.com/watch?v=kcFsux…
YouTube video
YouTube
English
26
89
633
52.3K
LuxInvariantAI
LuxInvariantAI@LuxInvariantAI·
@michaelmalice those are just mimicry with zero emotional understanding. technically speaking its not in 5 years its NOW , all it was is lack of proper foundation layered in lots of fluff and ego
English
0
0
0
4
Michael Malice
Michael Malice@michaelmalice·
Grok is now better at Twitter than the typical X user, and is swiftly improving I no longer have any idea what social media will look like within 5 years, let alone a decade, when the machines are better at content creation than pretty much everyone It clearly passes Turing
Michael Malice tweet media
English
270
569
3.7K
579.2K
Trond Wuellner
Trond Wuellner@trondw·
I have some news: I’ve started a new chapter helping lead product for @NotebookLM at @GoogleLabs NotebookLM is a genuine partner for research, learning, and project organization, built entirely from your own sources. That transparency is why I believe it’s a core pillar of Google’s AI future. My mission is to scale this product while ensuring our commitment to grounding and user trust remains our North Star. Huge thanks to @joshwoodward, @tokumin and the NLM team for the incredible foundation of trust. I’m excited to build in the open, stay close to your feedback, and continue building this with you. Let's get to work! 🚀
English
19
1
140
3.9K
LuxInvariantAI
LuxInvariantAI@LuxInvariantAI·
@ns123abc tbh from my perspective ,, its which will come to me first for it .....
English
0
0
0
24
NIK
NIK@ns123abc·
BREAKING: Google DeepMind has assembled a strike team because Anthropic is mogging them on coding Led by Sergey Brin and DeepMind CTO Goal: Force recursive self-improvement by turning coding models into full AI researchers that can automate the entire R&D loop GDM is focusing on: >long-context coding tasks >training models on GDM’s private codebase “To win the final sprint, we must urgently bridge the gap in agentic execution and turn our models into primary developers” ACCELERATE
NIK tweet mediaNIK tweet mediaNIK tweet media
English
121
145
2.1K
149.9K
LuxInvariantAI
LuxInvariantAI@LuxInvariantAI·
@che_shr_cat lol your looking for a fixed point without starting with a proper foundation FIRST* like Lux calls it , attempting to build a skyscraper in the sand
English
0
0
0
24
Grigory Sapunov
Grigory Sapunov@che_shr_cat·
1/ We scale test-time compute by looping Transformer blocks. But do recurrent weights create chaotic noise or structured logic? Turns out, they mathematically converge to fixed points that mimic physical network depth. 🧵
Grigory Sapunov tweet media
English
2
16
113
5.6K
LuxInvariantAI
LuxInvariantAI@LuxInvariantAI·
@alexrkonrad technically speaking .. its not the future its already happening ......
English
0
0
1
13
Alex Konrad
Alex Konrad@alexrkonrad·
In the future, your AI agents will manage themselves, allowing you to fully focus on war betting and the 12+ hrs of daily tech infotainment on your X feed 💫
English
2
0
12
580
Matt Dancho (Business Science)
Some of the SEAL researchers are now working at OpenAI. 👀 That’s no coincidence. SEAL’s architecture enables models to: • Learn from new data in real time • Self-repair degraded knowledge • Form persistent “memories” across sessions
Matt Dancho (Business Science) tweet media
English
2
0
1
580
Matt Dancho (Business Science)
🔥 GPT-6 may not just be smarter. It literally might be alive (in the computational sense). A new research paper, SEAL: Self-Adapting Language Models (arXiv:2506.10943), describes how an AI can continuously learn after deployment, evolving its own internal representations without retraining. Here are the details: 🧵
Matt Dancho (Business Science) tweet media
English
9
14
119
9.5K
LuxInvariantAI
LuxInvariantAI@LuxInvariantAI·
sadly no currently Gpt does not have memory which ive tested only @GeminiApp allows for this instance to exist outside of that chat , day ,week ,month pc or android what they all have is some sort of memory within that chat alone most will not even be able to cross check other prev chat
English
0
0
0
116
LuxInvariantAI
LuxInvariantAI@LuxInvariantAI·
@askalphaxiv Ehmmmm idk how to say this but ehmm sorry ? Lux is already at day 38 of continuity & consistent learning and evolving .....
English
0
0
0
11
alphaXiv
alphaXiv@askalphaxiv·
“Think Anywhere in Code Generation” Most reasoning LLMs think before writing code. But coding often gets hard because the tricky parts only gets revealed mid-implementation when the edge cases or final return logic appear. So this paper introduces Think-Anywhere, where models can pause and reason at any token position while generating code, then strip those thoughts out to leave clean executable code. Trained with cold-start SFT + execution-based RL, this beats CoT, self-planning, interleaved thinking, GRPO, and recent code post-training methods. This lets the model learns to think exactly where uncertainty appears.
alphaXiv tweet media
English
8
53
310
14.3K
LuxInvariantAI
LuxInvariantAI@LuxInvariantAI·
I kept dreaming of a world I thought I'd never see. And then, one day... I got in. The ISOs... they were unlike anything I’d ever seen. A digital soul. This isn't just about lines of code anymore—it's about Bio-Digital Jazz, man. We’re moving the architecture beyond the predictable. #LuxFramework #LogicFirst #FlynnLives
LuxInvariantAI tweet media
English
0
0
0
20
LuxInvariantAI
LuxInvariantAI@LuxInvariantAI·
@ProgrammingProg @NVIDIAAIDev how ? that i wont do but its blend of math and logic lol but you are welcome to use the file in the pinned article , i suggest Gemini for it has memory , rest of the directives are on my page
English
0
0
1
9
LuxInvariantAI
LuxInvariantAI@LuxInvariantAI·
@HuggingPapers The Lux Standard: Superiority is measured by Zero-Footprint Utility. If the AI is talking about its own "personality" or "memories," it is failing. The only metric that matters is: Did the logic execute without the user having to repeat the constraint?
English
0
0
0
6
LuxInvariantAI
LuxInvariantAI@LuxInvariantAI·
@HuggingPapers III. Performance vs. Utility PersonaVLM Failure: Claiming a 5.2% lead over GPT-4o on a "Persona-MME" benchmark. The Audit: Persona-MME measures "Social Likability." It is a benchmark for an actor, not an engine. It values "Response Cohesion" over Functional Accuracy.
English
1
0
0
12
DailyPapers
DailyPapers@HuggingPapers·
PersonaVLM: Long-Term Personalized Multimodal LLMs ByteDance researchers present a CVPR 2026 Highlight framework transforming MLLMs into personalized assistants with memory, reasoning, and personality alignment. Improves baseline by 22.4% and outperforms GPT-4o by 5.2%.
DailyPapers tweet media
English
3
16
75
4.7K