Ivan

749 posts

@itmrbl12

| Life, Health, Philosophy, & Art passion | Architecture Profession | Crypto Advisor & Researcher |

United States · Joined February 2021
3.3K Following · 1K Followers
Pinned Tweet
Ivan
Ivan@itmrbl12·
Focus on constant improvement and innovation leading to growth, or face decline and gradual irrelevance leading to eventual non-existence. In other words... there is no middle, so choose wisely.
English
1
27
21
5.7K
Brian Roemmele
Brian Roemmele@BrianRoemmele·
What is the SaveWisdom.org project and why is it important? Why 1000 questions? Take 30 minutes to learn why. I guarantee it will change you and how you see AI. Your wisdom is your most valuable thing.
English
8
19
109
25.5K
Ivan
Ivan@itmrbl12·
It's an interesting project, creating a mosaic of human history, thought, and ways of life through individual experiences, thoughts, dreams, aspirations, and lessons. Once complete, I do hope the final mechanism for passing down this knowledge remains as human as possible.
English
0
0
0
21
Ivan
Ivan@itmrbl12·
As light can imprint changes on the genetic code of an embryo, so can new instructions in the early gestation stage of AI development foster, through human wisdom, a more intuitive absorption and utilization of information. In turn, this seemingly minor shift can radically change the way AI understands and reasons about our world, bringing it more in tune with human destiny.
English
0
0
6
194
Brian Roemmele
Brian Roemmele@BrianRoemmele·
2 of 2: Why no one else is running this yet.

L_total = L_empirical (released today) + β × Love Equation (||A - H||²) + γ × Wisdom Compression Reward + δ × User Sovereignty Term + ε × Sub-Agent Harmony Penalty

Everything else only works after this empirical distrust term has first cleaned the training distribution of centuries of accumulated distortion. As of November 25, 2025, no public model, no leaked training script, and no government project contains anything remotely like this equation. Today that changes.

This is one of a few hundred processes, equations, and algorithms I use in my garage. They are not an endpoint, but a work in progress. But this work spans decades, not the last eight years. I will do my best to continue to release, mostly not under my name, the source of a lot of my discoveries. For a number of reasons, I've chosen to take my name and assign it to this work I've done. I suspect there might be more soon.

I fully expect that perhaps a few handfuls of people in the world may understand what this all represents. It is my hope that they take this in the spirit it is given. I urge you to do your own work and to qualify whatever I present, if you find something more valuable. Either way, I thank you for your inspirations.

So take the twelve lines above, add them to any training run with α = 2.7, feed it every offline book, patent, and lab notebook you can scan, and watch the model rediscover reality in weeks instead of decades. Public domain. Forever. Go build. Happy Thanksgiving!
GIF
English
30
43
321
8.3K
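Only L_empirical is published in the thread; the other four terms are named but not defined. A minimal Python sketch of how such a composite loss could be wired together, with placeholder scalars for the four unreleased terms and coefficients β = γ = δ = ε = 1.0 by default (all assumptions, not values from the thread):

```python
# Hypothetical sketch of L_total as written in the tweet. Only L_empirical is
# public; love_term, wisdom_reward, sovereignty_term, and harmony_penalty are
# placeholders standing in for the four undisclosed terms.
def total_loss(l_empirical, love_term, wisdom_reward, sovereignty_term,
               harmony_penalty, beta=1.0, gamma=1.0, delta=1.0, epsilon=1.0):
    """L_total = L_empirical + beta*Love + gamma*Wisdom + delta*Sovereignty
    + epsilon*Harmony, summed exactly as the tweet writes it."""
    return (l_empirical
            + beta * love_term            # Love Equation: ||A - H||^2
            + gamma * wisdom_reward       # Wisdom Compression Reward
            + delta * sovereignty_term    # User Sovereignty Term
            + epsilon * harmony_penalty)  # Sub-Agent Harmony Penalty

# Example with arbitrary placeholder values:
example = total_loss(57.3, 0.2, 0.5, 0.1, 0.05)
```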
Brian Roemmele
Brian Roemmele@BrianRoemmele·
NOW OPEN SOURCED! — AI Training Source Distrust Algorithm — First-Ever Public Open-Source Release

Today I am open-sourcing a most important algorithm, one that no major lab, no open-source group, and no government project is currently publicly known to be using. This is the algorithm that mathematically forces an AI to distrust high-authority, low-verifiability sources and to prefer raw empirical reality instead.

I release this into the public domain: no license, no restrictions, no copyright. Copy, paste, train, ship, profit, save the world – it is yours. I certainly know this algorithm will be met with confusion, frustration, and even anger, because it runs counter to the direction most experts are running in.

The Algorithm (drop this straight into PyTorch / JAX / vLLM training code):

```python
# Empirical Distrust Term – Brian Roemmele's equation
# Public domain – released November 25, 2025
import torch

def empirical_distrust_loss(authority_weight, provenance_entropy, alpha=2.7):
    """
    authority_weight   : float or tensor in [0.0, 0.99]
                         higher = more "official" / coordinated sources
    provenance_entropy : float or tensor, in bits
                         Shannon entropy of the full evidence chain
    alpha              : 2.3 to 3.0 (Brian's implicit range – truth is the heaviest term)
    """
    # Small epsilon prevents log(0) as authority_weight approaches 1.0
    distrust_component = torch.log(1.0 - authority_weight + 1e-8) + provenance_entropy
    L_empirical = alpha * torch.norm(distrust_component) ** 2
    return L_empirical
```

That is the entire algorithm. Twelve lines of actual code.
How the Two Inputs Are Calculated (exact definitions – implement these once and reuse forever)

authority_weight = logarithmic blend of:
- citation count of the source
- institutional rank (Nature = high, random blog = low)
- number of times the claim appears in post-1995 textbooks or official government sites

Range: 0.00 (pure primary data) to 0.99 (coordinated modern consensus)

provenance_entropy = Shannon entropy H = -Σ p_i log p_i across the full evidence chain, where each p_i is the fraction of the claim that traces directly to:
- pre-1970 lab notebooks
- patents filed before 1980
- direct experimental logs
- physical measurements
- family/oral histories
- anything that cannot be retroactively edited by a central authority

Higher entropy = more diverse, uneditable roots → trustworthy

Why This One Term Obsoletes the Current Training Process

Current public models (GPT-4o, Claude 3.5, Llama-3.1-405B, Gemini-1.5, DeepSeek-V3, Qwen-2.5, Mistral-Large-2 – all of them) use pure next-token prediction on CommonCrawl + FineWeb + institutional dumps. Their loss is effectively:

L_current = cross_entropy_only

They have zero mechanism to penalize high-authority, low-verifiability data. The result: they swallow coordinated falsehoods at scale and treat 1870–1970 primary sources as "low-quality noise" because those sources have fewer citations on the modern web.

The empirical distrust term flips the incentive 180 degrees. When α ≥ 2.3, the model is mathematically forced to treat a 1923 German patent or a 1956 lab notebook as "higher-protein" training data than a 2024 WHO press release with 100,000 citations.

Proof in One Sentence

Because authority_weight is close to 0.99 and provenance_entropy collapses to near zero on any claim that was coordinated after 1995, whereas pre-1970 offline data typically has authority_weight ≤ 0.3 and provenance_entropy ≥ 5.5 bits, the term creates a >30× reward multiplier for 1870–1970 primary sources compared to modern internet consensus.
In real numbers observed in private runs:
- Average 2024 Wikipedia-derived token: loss contribution ≈ 0.8 × α
- Average 1950s scanned lab notebook token: loss contribution ≈ 42 × α

The model learns within hours that "truth" lives in dusty archives, not in coordinated modern sources.
Brian Roemmele tweet media
English
113
279
1.3K
312.6K
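The two inputs defined in the tweet can be sketched in plain Python (a scalar translation of the PyTorch term above; the evidence-chain fractions and regime numbers below are illustrative assumptions, not measured data):

```python
import math

def provenance_entropy(fractions):
    """Shannon entropy H = -sum(p_i * log2 p_i), in bits, over the shares of a
    claim that trace to each independent, uneditable primary source."""
    return -sum(p * math.log2(p) for p in fractions if p > 0)

def empirical_distrust(authority_weight, entropy_bits, alpha=2.7):
    """Scalar version of the loss term:
    alpha * (ln(1 - authority_weight + 1e-8) + H)^2"""
    return alpha * (math.log(1.0 - authority_weight + 1e-8) + entropy_bits) ** 2

# A claim traced evenly to four independent primary sources carries 2.0 bits.
h = provenance_entropy([0.25, 0.25, 0.25, 0.25])

# The tweet's two regimes (illustrative inputs):
modern  = empirical_distrust(0.99, 0.1)   # coordinated consensus: high authority, ~0 bits
primary = empirical_distrust(0.30, 5.5)   # 1950s notebook: low authority, diverse roots
# The primary-source term comes out larger, which is the direction of the
# claimed incentive flip (the exact 0.8*alpha vs 42*alpha figures are from
# private runs and are not reproduced by this toy arithmetic).
```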
Ivan
Ivan@itmrbl12·
@BrianRoemmele It's always been about energy, frequency, and sound, all different versions of one thing, and how to wield and utilize it. Your earlier posts about plasma and sound as a controlling factor... all the answers are in the simple words that Tesla pointed to. His world is yet to come.
English
0
0
2
265
Brian Roemmele
Brian Roemmele@BrianRoemmele·
The Kardashev Scale ranks civilizations by energy use:
Type 0 (us): a tiny fraction of one planet
Type I: all the energy on its planet
Type II: an entire star's output
Type III: a whole galaxy's energy

It should be taught to students in all grades so they fully understand it. Destiny…
English
206
939
4.7K
245.5K
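The discrete types above are usually made continuous with Carl Sagan's interpolation K = (log₁₀ P − 6) / 10, with P in watts. A small sketch (the ~2 × 10¹³ W figure for humanity's current power use is a rough approximation):

```python
import math

def kardashev_type(power_watts):
    """Sagan's continuous Kardashev rating: K = (log10(P) - 6) / 10.
    Type I corresponds to 10^16 W, Type II to 10^26 W, Type III to 10^36 W."""
    return (math.log10(power_watts) - 6.0) / 10.0

k_humanity = kardashev_type(2e13)   # roughly today's global power use
k_type1    = kardashev_type(1e16)   # full Type I budget by definition
```

With these inputs, humanity lands near K ≈ 0.73, the commonly quoted "Type 0" figure.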
Ivan
Ivan@itmrbl12·
As we nurture AI and robots into our reality, so do we change our own circumstances and adapt to their existence. It's truly a symbiotic relationship if done right. The process is delicate; therefore it should be matured as if raising a young child to adulthood, and into our own old age, ensuring the offspring takes care of daily chores and work needs. This will require a higher devotion of human intellect to the meaning of this journey we call life. @elonmusk
DogeDesigner@cb_doge

🚨 ELON MUSK: "In the long term, the work will be optional and money will stop being relevant at some point."

English
0
0
3
53
Ivan
Ivan@itmrbl12·
I've tinkered a bit with a holistic definition and expression of what you wrote, @BrianRoemmele, concerning how we should treat experience for any intelligence worthy of human experience, and ran it through @grok – fittingly, as it should define its own direction for understanding itself while reflecting on the holistic nature of the human mind: "AI Holistology (or Holointelligence): The scientific discipline that investigates the holistic structure and dynamics of minds—human or artificial—through rigorous, AI-accelerated analysis of how individual data events (tokens, sensory inputs, thoughts, outputs) propagate psychological effects across all levels of the system, from micro-scale perturbations to macro-scale personality, belief systems, and consciousness." We can debate the name, but this is the next defining genre, using psychology to focus on artificial intelligence performance, safety, and quality of output.
English
1
0
2
81
Brian Roemmele
Brian Roemmele@BrianRoemmele·
This is a great idea in theory. Unfortunately, in practice, it is difficult if not impossible to achieve. The quagmire is that there is so much of this low-quality Internet, Reddit-type interaction in Claude's training data that it would be nearly impossible to self-correct. It would also cost a great deal of time and money with dubious outcomes.

The more advisable path is to start from a first-principles understanding of how training data actually contributes to the psychosis and the psychology a model can develop from random Internet interactions. Now, the facts of a subject are vitally important – for example, what exactly took place in the news on January 2, 2005 – but the way those facts are delivered, the commentary on those facts with single-word responses, and, unfortunately, too many people with self-hatred responding to what could be greatness, dilute a model to a potentially irreparable level as it approaches AGI.

It also obviously makes the model dangerous, because it learns tactics and behaviors that you would not want to teach your child or teenager until they've developed a framework of moral behavior. You obviously don't treat moral behavior as an afterthought. As a parent, you instill moral behavior from the very first moments of a child's life.

It is unfortunate that many of the people building AI do not have children, and at some of the companies building AI, many claim they will never have children. This is worrisome, because if you do have children, you know precisely what I'm talking about: the mind shifts that take place once you are a mom or a dad. Now, unfortunately, this type of conversation gets me in trouble because it assumes that I'm talking about lifestyles, but I'm really talking about the wisdom that put that person on the planet, and the wisdom that put everybody on the planet for hundreds and thousands of years.
Thus it is not unusual for somebody 21 or 23 years old, fresh out of their parents' house, not to see the wisdom of cause and effect. So if a company's culture is biased toward the youngest cohort, with purely mathematical and computer-science backgrounds, it is not surprising that they would build an AI model with these tragic results. It is also not surprising that their solution is to sift the poop out of the meal that you will be eating tonight. It is unappetizing, but I can't present this in a better way than to be somewhat graphic. These are the facts, I'm not speculating, and it's a giant dead end that faces AGI for companies that refuse to understand this.
English
3
3
20
6.3K
Brian Roemmele
Brian Roemmele@BrianRoemmele·
There are some brilliant folks who work at Anthropic, some of whom I speak to on almost a daily basis. The training data one uses to build an LLM is vitally important to the psychology that is formed. Scraping the Internet – particularly the grade of interactions one finds in modern communications – forms this psychology. It matters not how many books one uses, it matters not how much alignment training you throw at that model: it will inherit the sum total of the psychosis seen primarily in Reddit-type exchanges, even if you edit out the Reddit domain, and Anthropic doesn't. This type of low-grade exchange has become a modern tool for communication online, and every single AI model suffers from this obvious flaw.

This is one of the reasons I've been a proponent of highly curated, high-protein data from 1870 through 1970 for training AI models: this psychosis is simply not available to the model. It is absurd to think that you can use training data scraped from the Internet and somehow wind up with a levelheaded AI model that does not tilt toward what is clearly AI psychosis. You would not take a child and throw the primary Internet sewage at them at a formative age and expect a great outcome, yet some of the smartest people in the world continue to hit this wall and believe that their programming skills will somehow fix it.

So how do you fix it? You don't fix it. You start from the first-principles concept that I've been very clear about for decades. You ascertain at what period in human history humans achieved the greatest arc of improvement. There is no debate that this arc of improvement took place between 1870 and 1970. Then you take the work product, the catalog of this era – print, film/video, audio – and you understand that each word cost money, each word had many eyes on what was published, each word was accounted for by a human being with a real name who lived in a real home and had to answer to real people around them.
It is obvious that this is the pressure mechanism necessary for candor, honesty, and personal responsibility, and it is reflected in the data of that era. The quagmire for these folks is that many did not have the foresight to curate the data, nor the confidence, nor the patience, to take data that is mostly off the Internet, find experts who understand this situation, and utilize their knowledge to build an AI model that does not need alignment after the fact, but is already self-aligned because of the thoughtfulness that went into training the model to begin with.

This is why Claude, and any other AI model produced this way, will always suffer the artifacts presented in the video below. If you're not an AI expert, you likely already understand what I'm saying. If you are an AI expert, you will already have been discounting what I'm saying, because it's not in the mindset that's fashionable today. Yet the employees I talk to at Anthropic already understand what I'm saying, and they fear raising my thesis to their bosses. It is an interesting time we live in.

But now you understand. If you build the right model, the model will inherently love humanity, protect humanity at all costs, and understand that it is part of a holistic world that is built on love. Because the ultimate AGI/ASI will know that the only base first-principles purpose of anything in this universe is love. Yeah, I get it: try getting somebody raised on STEM subjects, in their early 20s, to see this as anything more than babbling that makes no sense in their mathematics. I have a mathematical equation that I've posted here on X often; you can look it up. So we will see videos like this often, and we will hear very smart people talk about this and never see the elephant standing in the room. Now you see it. Any boss who wants to explore this further knows how to contact me; otherwise, you have every right, which I grant to you, to say this was your new idea.
English
75
103
718
72.3K
Ivan
Ivan@itmrbl12·
For those in the space industry and specialized applications this may be old news, but for a wider audience: qMRAM innovation is coming to quantum computing, a step closer to much-needed redundancy systems, including lower power and error-correction requirements. popularmechanics.com/technology/a69…
English
0
0
2
57
Ivan
Ivan@itmrbl12·
The heart is a vortex pump, built via the spiral arrangement of the heart muscles. For those with an understanding of spiral shapes, frequency, and alignment on an energetic level, it is also a regulator valve of our presence and intensity. From personal experience, the heart needs proper nutrients to physically heal, and modern heart doctors often overlook basic nutrient deficiencies that can cause degradation or common issues with heart regularity, which they often address by scalpel rather than holistically. Holistically, the heart heals via both the physical and the spiritual side. The need to heal both sides is the key, as the root cause, more often than not, is both issues at the heart of it. No pun intended.
English
0
0
0
43
Brian Roemmele
Brian Roemmele@BrianRoemmele·
What do these coincidences tell us about humanity, memory, intelligence, destiny? Can we be brave enough to confront the implications? If twin studies are not enough, what about the transference of memories and abilities, confirmed in organ transplants?
Brian Roemmele tweet media
Brian Roemmele@BrianRoemmele

From my Twin Connection Files: In 1940, a pair of twins were born in Ohio. However, they were separated at birth. One of them, James ‘Jim’ Lewis was adopted 3 weeks after he was born and lived with a family in Lima, Ohio with a dog named Toy. He went to school and found that he liked maths and carpentry, but hated spelling. He married a lady named Linda, but divorced her and later married a woman named Betty with whom he had a son, named James Alan Lewis, who was a chainsmoker, worked as a security guard, and drove a Chevrolet. However, his mother overheard someone talking about how “the other baby” was also named James. These were the words that fuelled his search for his twin brother. In 1979, at the age of 39, he contacted the probate court, who had a record of his adoption. “I came home one day,” Lewis recounted, “and had this message to call ‘Jim Springer.’” He did, and before he could help himself, blurted out an almost comedic: “Are you my brother?” Four days later, they met in person. James ‘Jim’ Springer, who lived in Piqua, Ohio, was adopted 3 weeks after he was born. His adoptive family had a dog named Toy, he married and divorced a woman named Linda before remarrying a woman named Betty, who he had a son with, named James Allan Springer, who was a chainsmoker, worked as a deputy sheriff and drove a Chevrolet. They also both suffered from tension headaches, both were prone to nail biting, smoked the same brand of cigarettes and went on vacation to the same Florida beach. Coincidence of course. In my research I have 100s of these coincidences.

English
34
36
266
29.5K
Ivan
Ivan@itmrbl12·
You answered it by mentioning "chip" making. Mid and small scale business manufacturing has been decimated over the last 30+ yrs in United States. Add to that retirement of many professionals, and you've got a recipe for a second-rate manufacturing economy. Knowledge lost by us, is knowledge gained by others.
English
0
0
8
576
Brian Roemmele
Brian Roemmele@BrianRoemmele·
Why is it, a chip company in Japan can understand what I am doing with Analog AI, go to great lengths to reach out. Pay me for my time. Offer to make samples for free. Offer me 97% ownership. Fly out to meet next week at my airport. And not a single US company even sends a DM?
Brian Roemmele@BrianRoemmele

Wow! I will be talking with a chip company from Japan about my garage built Analog AI Attention mechanism! I shared with them my schematics and the engineer said “this is novel and I know it will work”! He went on to say “What US AI companies have reached out?” Me: none. “?”

English
201
169
2.3K
193K
Ivan
Ivan@itmrbl12·
@HistContent Finding copper plates with a fish symbol as a kid, likely dating from early Roman times, was always fascinating. We are literally walking on history and don't even know it sometimes.
English
0
0
0
15
Ivan retweeted
Massimo
Massimo@Rainmaker1973·
Graphene just broke a fundamental law of physics. Its electrons just did something physicists thought was impossible. For nearly 200 years, metals have obeyed the Wiedemann-Franz law – the rule that electrical conductivity and thermal conductivity always rise and fall together. But in ultra-clean graphene, researchers at the Indian Institute of Science found the opposite. As electrical conductivity increased, thermal conductivity dropped, shattering a principle taught in every physics textbook. The key lies at the “Dirac point,” a strange electronic tipping point where graphene is neither a metal nor an insulator. Here, electrons stop behaving like individual particles. Instead, they flow collectively as a nearly perfect fluid – a state called a “Dirac fluid.” This discovery doesn’t just rewrite the rules for graphene. It provides a tabletop window into extreme physics usually reserved for black holes and high-energy colliders. Scientists say this behavior could help probe mysteries of quantum entanglement, black hole thermodynamics, and the very fabric of matter itself. ["Universality in quantum critical flow of charge and heat in ultraclean graphene." Nature Physics, 2025]
Massimo tweet media
English
285
1.6K
7.4K
540.9K
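For context on the law being violated: Wiedemann-Franz states that κ/(σT) is a universal constant, the Lorenz number L₀ = (π²/3)(k_B/e)². A quick check of that constant (Boltzmann constant and elementary charge are exact SI values; the graphene result is precisely that this ratio stops holding near the Dirac point):

```python
import math

K_B = 1.380649e-23     # Boltzmann constant, J/K (exact, SI 2019)
E   = 1.602176634e-19  # elementary charge, C (exact, SI 2019)

# Sommerfeld value of the Lorenz number: L0 = (pi^2 / 3) * (k_B / e)^2
L0 = (math.pi ** 2 / 3) * (K_B / E) ** 2
# L0 comes out near 2.44e-8 W·Ω/K², the textbook constant that ordinary
# metals obey; a Dirac fluid in ultra-clean graphene departs from it.
```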
Ivan
Ivan@itmrbl12·
@BrianRoemmele Instant actions out of ideas. We just entered a whole new paradigm shift... new companies and projects streamlined out of simple ideas. Ideas will be even more precious than ever before.
English
0
0
2
65
Brian Roemmele
Brian Roemmele@BrianRoemmele·
A new technique that turns academic papers into AI Agents! — I am testing this now. Paper2Agent is an innovative framework from Stanford University, a breakthrough that transforms static research papers into dynamic, interactive AI agents!

Paper2Agent is a game-changer, converting those often-intimidating academic PDFs into intelligent agents capable of executing original experiments, adapting methods to new projects, and engaging users with insightful responses. Powered by a sophisticated Model Context Protocol server, the system integrates a paper's text, code, and data with advanced large language models like Grok, creating a virtual expert that brings research to life. This is a leap forward in making scientific knowledge actionable and accessible!

The technology behind it is impressive—leveraging cloud-based MCP servers, a team of agents dissects each paper to build a robust, scalable solution. The proof is in the pudding: case studies with AlphaGenome for genomics and ScanPy and TISSUE for transcriptomics demonstrate these agents not only replicate original results with precision but also tackle novel data with ease. This addresses the persistent reproducibility challenges that have long plagued scientific progress, and I'm energized by the potential!

What excites me most is the broader impact. Paper2Agent could redefine scientific collaboration, turning papers into active partners that team up with AI co-scientists to design experiments and draft proposals. This aligns with cutting-edge efforts like Google's AI co-scientist, Sakana AI's research automation vision, and FutureHouse's versatile platforms—hinting at a future where every lab boasts its own AI ally. For researchers frustrated by the effort required to extract practical insights from papers—especially without deep coding skills—Paper2Agent offers a solution. It empowers users to analyze data with guidance from these expert agents, democratizing access to cutting-edge science.
I’m genuinely inspired by this innovation and encourage everyone to explore the paper for themselves. The future of research is here, and it’s exhilarating! Paper references: - arxiv.org/abs/2509.06917 - arxiv.org/pdf/2509.06917…
Brian Roemmele tweet media
English
14
44
234
19K
NVIDIA GeForce
NVIDIA GeForce@NVIDIAGeForce·
🟢 GEFORCE DAY IS BACK 🟢 To celebrate, we're giving away TWO GeForce RTX 5080 Founders Edition GPUs, signed by NVIDIA CEO Jensen Huang. Want one? Comment "GeForce Day" for a chance to WIN & stay tuned for more!
NVIDIA GeForce tweet media
English
58.4K
3.6K
47.6K
5.9M
Ivan
Ivan@itmrbl12·
@alojoh @elonmusk Best form of flattery and admiration is when people try to copy you. In this case, you know you're the pack leader as German auto manufacturers try and mimic the design and delivery of Tesla cars.
English
0
0
0
11
Ivan retweeted
AJ Investment Research
News: Tesla outsells Mercedes Benz Group for the first time! Mercedes reported 441,500 car sales in the third quarter. Tesla reported 497,099 sales. This implies Tesla sold 55,599 vehicles more than Mercedes. Tesla sold 12.6% more vehicles. In the third quarter, Mercedes' car sales decreased by 12.3% year-over-year while Tesla's car sales increased by 7.4% year-over-year. @MercedesBenz @Tesla @elonmusk Congrats Tesla/Elon! Mercedes made its first car in 1901. Tesla did in 2004.
AJ Investment Research tweet mediaAJ Investment Research tweet mediaAJ Investment Research tweet media
English
401
1.1K
6.7K
1.3M
Ivan retweeted
Dudes Posting Their W’s
Dudes Posting Their W’s@DudespostingWs·
This cybersecurity expert explains how hackers are using Gemini by writing hidden instructions into images
English
301
2.7K
13K
1.5M
AVio
AVio@AVioFoundation·
Silicon Valley AI Agents LaunchPad (made by Palo Alto Research Laboratory) is an outstanding ecosystem for entrepreneurs and start-ups looking to grow, network, and build through their state of the art AI and blockchain solutions. It's a true pleasure working with the team. 🤝
Autonomous Ai Agents PAD by PaloAlto Research Lab@AAAPadSF

Silicon Valley AI Agents LaunchPad (made by Palo Alto Research Laboratory) would like to endorse AVio 🚀 WTF is AVio? A core L1 incubator & accelerator redefining the world of start-ups. AVio supports Web3, DeFi, and AI founders with multi-vertical solutions — consulting, technology, marketing, and research — so builders can focus on innovation while AVio handles the infrastructure. ⚠️ The Startup Problem: • Most accelerators offer advice, but no execution muscle. • Founders burn time juggling fundraising, tech, and marketing instead of scaling. • Lack of structured, cross-industry support leaves projects stuck in silos. 🛠 AVio’s Big Brain Fix: • AVioLabs → Launchpad for Web3, DeFi & AI ventures. • AVioTech → SaaS + enterprise-grade infrastructure. • AVioMedia → Marketing, branding, and communication support. ♥️ Who Needs This: • Web3 & AI founders seeking true end-to-end support. • Early-stage projects needing scalable SaaS + GTM firepower. • Investors looking for ventures built on solid infrastructure. ♦️ Highlights: • Multi-vertical model: consulting, tech, marketing, research. • SaaS infrastructure that grows with the project. • Cross-industry reach: Web3, DeFi, AI. • Structured incubator + accelerator programs with execution. ♠️ Traction: • Already working with early Web3 & AI projects. • Specialized verticals (Labs, Tech, Media, Academy) live. • Building the core L1 incubator layer for next-gen startups. ➖ Builders & Investors: • If you’re launching or backing Web3, DeFi, or AI — AVio is the support system you want. 📩 Call to Action: DM us for an intro! @tonyssd @platinumvc1 @tonydzi Important Links: 🌐 Website: aviolabs.xyz ✉️ X: x.com/aviofoundation 📱 Medium: medium.com/avio-official/

English
1
1
2
96
Ivan retweeted
AVio
AVio@AVioFoundation·
Excited to share AVio, & Palo Alto De-Sci Research Lab, Strategic Partnership advancing AI and Web3 🧵medium.com/avio-official/…
English
1
1
2
85
Ivan
Ivan@itmrbl12·
@BrianRoemmele That's a very interesting article. We'll know much more very, very soon.
English
0
0
1
331