So much idle compute that could be used for inference one day.
Model intelligence density is hopefully increasing with each frontier-model distillation.
This update focuses on critical stability improvements, memory management, and the initial infrastructure for I2P integration. We've also addressed several UI bugs and search logic inconsistencies to provide a smoother experience.
I have so much gratitude for the people who wrote extremely complex software character by character. It already feels difficult to remember how much effort that really took.
Thank you for getting us to this point.
Would you rather have FrostWire store a large language model on disk and use 64–256 GB of RAM to answer LLM inference requests for $1 per million tokens (10x cheaper than OpenAI, Anthropic, Gemini...)?
(64 GB for smaller models, at $0.25 per million tokens.)
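To make the "10x cheaper" claim concrete, here's a minimal cost-arithmetic sketch. The $1 and $0.25 per-million-token figures come from the paragraph above; the $10 per-million-token frontier API price is an assumption chosen only so the ratio works out to 10x, not a quoted vendor price.

```python
# Dollar figures: $1/M and $0.25/M are from the post above;
# the $10/M frontier API price is an assumed illustrative number.
LOCAL_PRICE_LARGE = 1.00    # $ per million tokens (64-256 GB RAM setup)
LOCAL_PRICE_SMALL = 0.25    # $ per million tokens (64 GB, smaller models)
FRONTIER_API_PRICE = 10.00  # $ per million tokens (assumed, for illustration)

def cost(tokens: int, price_per_million: float) -> float:
    """Dollar cost of serving `tokens` tokens at the given rate."""
    return tokens / 1_000_000 * price_per_million

tokens = 50_000_000  # e.g. 50M tokens of inference traffic
print(cost(tokens, LOCAL_PRICE_LARGE))         # 50.0
print(cost(tokens, FRONTIER_API_PRICE))        # 500.0
print(FRONTIER_API_PRICE / LOCAL_PRICE_LARGE)  # 10.0 -> "10x cheaper"
```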
LLMs do not meaningfully "refactor" at anything beyond a junior engineering level. They can do some window dressing and move code around between files. True refactoring means creating new abstractions, which LLMs can't do because they can't form world models.