Jillian Jones
@JKJones_

SEC Chair Paul Atkins: 🇺🇸“All U.S. markets will be on chain within two years.”
Unlocking an age of amazing abundance! Goal: a trillion watts of compute per year.

Elon just announced:
- Tesla, SpaceX, and xAI together are building the largest chip manufacturing facility ever (1 TW/year), combining logic, memory, and advanced packaging under one roof.
- Two kinds of chips will be made: edge compute chips for vehicles and Optimus (100–200 GW per year), and high-power chips for space (1 TW/year).
- That’s more than all the chip manufacturers in the world combined can provide today, or even by 2030.
- He expects humanoid robot production to be between 1 billion and 10 billion a year.
- The cost of deploying AI in space will drop below the cost of deploying terrestrial AI in the next 2–3 years. In space, you don’t need much battery power, it’s always sunny, and heavy glass is not needed for weather protection.
- Space solar has increasing economies of scale. On Earth, by contrast, solar is hard to scale because you run out of space.
- To harness as much power as possible from the Sun, we need to send 100 million tons of solar capture into space per year.

The mission is to unlock an age of amazing abundance. Money won’t exist in the future. If you want something, you can simply have it.

Super long-term project: mass drivers on the Moon shooting AI satellites into space.

🚨 BREAKING: Someone just open-sourced a full offline survival computer with AI, Wikipedia, and maps built in.

Project N.O.M.A.D. is an open-source offline survival computer. Self-contained. Zero internet required after install. Zero telemetry. Everything runs locally on your hardware.

What it includes:
→ Full Wikipedia archives via Kiwix
→ Offline maps via OpenStreetMap
→ Local AI models via Ollama + Open WebUI
→ Calculators, reference tools, resource libraries
→ A management UI to control everything from a browser

One curl command installs the entire system on any Debian-based machine. It runs headless as a server, so any device on your local network can access it.

Minimum specs for the base system: dual-core processor, 4 GB RAM, 5 GB storage. To run local LLMs offline, you want 32 GB RAM and an NVIDIA RTX 3060 or better.

No accounts. No authentication by default. No cloud dependency. No phone-home behavior. Built to function when nothing else does.

The grid, the cloud, the API you depend on: none of it is guaranteed. The people building local-first systems right now are the ones who won’t be asking for help when access disappears.
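The post doesn’t include the installer itself, but a similar local-first stack (offline Wikipedia + local LLM + browser UI) can be sketched with Docker Compose. This is a hypothetical sketch, not the actual N.O.M.A.D. installer: the service wiring is an assumption based on the stock `kiwix/kiwix-serve`, `ollama/ollama`, and `open-webui` images, and you'd still need to download ZIM archives and pull a model yourself.

```yaml
# Hypothetical N.O.M.A.D.-style local stack (not the project's real config).
# Assumes ZIM archives have been downloaded into ./zim beforehand.
services:
  kiwix:                           # serves offline Wikipedia from .zim files
    image: ghcr.io/kiwix/kiwix-serve
    command: "*.zim"
    volumes:
      - ./zim:/data                # put downloaded ZIM archives here
    ports:
      - "8080:8080"                # browse at http://<host>:8080

  ollama:                          # local LLM runtime, no cloud calls
    image: ollama/ollama
    volumes:
      - ollama:/root/.ollama       # persistent model storage
    ports:
      - "11434:11434"

  open-webui:                      # browser chat UI in front of Ollama
    image: ghcr.io/open-webui/open-webui:main
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434
    ports:
      - "3000:8080"                # chat at http://<host>:3000
    depends_on:
      - ollama

volumes:
  ollama:
```

After `docker compose up -d`, any device on the LAN can reach these services by the host’s IP; nothing in the stack needs internet access once the images, models, and archives are fetched.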
Think different