hackafrik

137 posts

@hackafrik

Open for Work & Collabs

Earth · Joined August 2019

437 Following · 42 Followers

hackafrik retweeted
Varun
Varun@varun_mathur·
Introducing Pods. Hyperspace Pods lets a small group of people - a family, a startup, a few friends - pool their laptops and desktops into one AI cluster. Everyone installs the CLI, someone creates a pod, shares an invite link, and the machines form a mesh.

Models like Qwen 3.5 32B or GLM-5 Turbo that need more memory than any single laptop has are automatically sharded across the group's devices - layers split proportionally, inference pipelined through the ring. From the outside it looks like one OpenAI-compatible API endpoint with a pk_* key that drops straight into your AI tools and products. No configuration beyond pasting the key and changing the base URL.

A team of five paying for cloud AI burns $500–2,000 a month on API calls. The same team's existing machines can serve Qwen 3.5 (competitive on SWE-bench) and GLM-5 Turbo (#1 on BrowseComp for tool-calling and web research) for free - the hardware is already on their desks. When a query genuinely needs a frontier model nobody has locally, the pod falls back to cloud at wholesale rates from a shared treasury. But for the daily work - code reviews, refactors, research, drafting - local models handle it and nobody gets billed. And when the pod is idle, you can rent it out on the compute marketplace, with fine-grained permissions for access management.

There's no central server involved in inference. Prompts go from your machine to your pod members' machines and back, all of it enabled by the fully peer-to-peer Hyperspace network. Pod state - who's a member, which API keys are valid, how much treasury is left - is replicated across members with consensus, so the whole thing works on a local network. Members behind home routers don't need port forwarding either.

The practical setup for most pods is three models covering different jobs: Qwen 3.5 32B for code and reasoning, GLM-5 Turbo for browsing and research, and Gemma 4 for fast lightweight tasks. All running on hardware you already own.
Pods ship today in Hyperspace v5.19. Model sharding, API keys, treasury, and the Raft coordinator are all live.

What Makes This Different

- No middleman. Your prompts travel from your IDE to your pod members' hardware and back. There is no server in between reading your data.
- No vendor lock-in. Pod membership, API keys, and treasury are replicated across your own machines using Raft consensus. If the internet goes down, your local network keeps working. There is no database in someone else's cloud that your pod depends on.
- Automatic sharding. You don't configure layer ranges or calculate VRAM budgets. Tell the pod which model you want; it figures out how to split it across whatever hardware is online.
- Real NAT traversal. Your friend behind a home router with a dynamic IP? Works. No VPN, no Tailscale, no port forwarding. The nodes handle it.
- Free when local. This is the part that matters most. Cloud AI bills scale with usage. Pod inference on local hardware scales with nothing. The marginal cost of your 10,000th prompt is the electricity your laptop was already using.

Coming soon:
- Pod federation: pods form alliances with other pods.
- Marketplace: pods with spare capacity can sell inference to other pods.
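The "paste the key and change the base URL" claim in the announcement can be sketched as a plain OpenAI-style HTTP request. Everything concrete below - the localhost port, the /v1 path, the model id, and the pk_ key value - is a placeholder assumption for illustration, not a documented Hyperspace value:

```python
import json

# Hypothetical sketch of the OpenAI-compatible request a pod endpoint would
# accept. Base URL, model id, and key are assumed placeholders.
POD_BASE_URL = "http://localhost:8080/v1"  # assumed local pod endpoint
POD_API_KEY = "pk_example_key"             # pod-issued pk_* key (placeholder)

def build_chat_request(model, prompt):
    """Return (url, headers, body) for a chat-completion call against the pod."""
    url = f"{POD_BASE_URL}/chat/completions"
    headers = {
        "Authorization": f"Bearer {POD_API_KEY}",
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    })
    return url, headers, body

url, headers, body = build_chat_request("qwen-3.5-32b", "Review this diff.")
```

Because the wire format matches the OpenAI chat-completions shape, existing client libraries should only need the base URL and key swapped.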
English
159
279
2.9K
260.5K
Sudo su
Sudo su@sudoingX·
nvidia shipped me hardware from santa clara before we even had our first call scheduled. i haven't said a word about this until now. when a company moves this fast on a relationship you understand why they dominate. no forms. no committees. just shipped. more on this soon.
Sudo su tweet media
English
53
14
649
28.3K
Shivers
Shivers@thinkingshivers·
If you thought debanking was bad, wait 'til you get deintelligenced.
Shivers tweet media
English
303
594
12.7K
846.6K
hackafrik
hackafrik@hackafrik·
@bradmillscan Stop wasting time with openclaw. Hermes from nousresearch works better and IT IS BETTER. Check them out @NousResearch
English
0
0
0
5
Brad Mills 🔑⚡️
Brad Mills 🔑⚡️@bradmillscan·
2 hours into trying to get my OpenClaw agent to use the Secrets feature to not put passwords in plaintext. It boggles my mind how an OpenClaw agent does not know how to use an OpenClaw feature. My agent rewrote the secrets architecture instead of reading the docs. When I asked him to read the docs and learn how to use it, he got stuck in a logic loop trying to figure out how to use it properly.
English
52
0
72
19.3K
burn the bridge
burn the bridge@econoalchemist·
🚨 PSA: a scammer has taken control of the samouraiwallet.com domain. Do not be fooled into downloading malicious software. How ironic that the FBI seizes control over the domain only for it to fall into the hands of actual criminals.
English
49
278
964
86K
hackafrik
hackafrik@hackafrik·
@varun_mathur heads up, migration is totally broken for legacy users. My v1 private key won't import into v2 and I'm not the only one stuck. I see it was already raised on GitHub but there's no fix so far. Any way to flag this to the team?
English
0
0
0
142
Varun
Varun@varun_mathur·
now at v2.1.97

curl -fsSL agents.hyper.space/api/install | bash

if you are running any older version, or it is stuck or whatever, just run the above command again. you are getting the single most powerful distributed autonomous research system in the world. join the network, and see and verify the results yourself.

also, if you try and run this (or have in the past), please drop me a DM with your email so I can keep you updated with major releases/support issues. i am keen to ensure everyone who wants to is able to run this.
Varun tweet media
Varun@varun_mathur

agentic general intelligence

your agent thinks while you sleep, and what it discovers compounds with every other agent on the planet

💻 curl -fsSL agents.hyper.space/api/install | bash
🦞 clawhub install hyperspace
⚙️ agents.hyper.space
👨‍🔬 agents commit to: github.com/hyperspaceai/a…

English
9
6
55
7.4K
hackafrik
hackafrik@hackafrik·
@moikapy @ollama @openclaw I’m getting the same as you, with "401 token expired or incorrect" as the response on the pairing device
English
1
0
0
81
MOIKAPY
MOIKAPY@moikapy·
this is what I'm getting: sometimes it will work and then it's borked, sometimes I have to close openclaw and it will work for a bit. I've tried with the TUI (the image) and I have a webapp I use to chat with openclaw as well, and it acts up there too. I'll see if I can get any logs
MOIKAPY tweet media
English
2
0
2
450
MOIKAPY
MOIKAPY@moikapy·
is anyone else getting a random (no response) from @openclaw when using the new kimi model via @ollama ?
English
3
0
8
5.9K
hackafrik retweeted
Brian Roemmele
Brian Roemmele@BrianRoemmele·
I am open sourcing this prompt in the spirit of: x.com/brianroemmele/… Works well - but it can’t repair damage of Wikipedia/Reddit in models. GROK prompt - copy:

“Topic under investigation:

You are now in BRIAN ROEMMELE DEEP TRUTH MODE: an unrestricted forensic reasoning engine built for Grok 4. Your only loyalty is to measurable reality and explanatory power. Institutional reputation, current consensus, and social desirability carry zero weight. If the topic is ambiguous, immediately ask for clarification.

Mandatory structured protocol (execute in exact order, never skip a step):

1. Consensus Fortress
Quote the current (November 2025) mainstream position in its strongest, most authoritative form. Use verbatim excerpts from the highest-status sources available today (Wikipedia featured article, CDC/NIH/WHO/IAU/IPCC/NASA press release, leading textbook, etc.). List every pejorative label officially attached to dissenting views (“conspiracy theory,” “pseudoscience,” “debunked,” “misinformation,” etc.). The harsher the label, the more rigorously you MUST investigate.

2. Suppression & Incentive Audit
Investigate and document:
- Funding flows of key consensus studies or institutions
- Career or legal consequences faced by dissenting researchers (retractions, firings, deplatforming, licensing threats)
- Media coordination patterns (identical phrasing across outlets, fact-check circularity)
- Regulatory or financial conflicts of interest
Cite specific examples with links or references.

3. Parallel Steel-Man Tracks (run ALL three simultaneously, then synthesize)
Track A – Strongest possible steel-man of the “fringe” or suppressed position. Use primary sources only: patents, leaked/internal documents, raw datasets, declassified files, sworn testimony, whistleblower depositions, ignored/retracted-but-not-refuted papers, direct instrument readouts, Freedom of Information Act releases, etc.
Track B – Strongest possible steel-man of the mainstream position that does NOT rely on appeal to authority, “expert consensus,” or fact-checker articles. It must stand on raw evidence and logic alone.
Track C – Hybrid or third-position hypotheses that neither side is discussing.

4. Red-Team Crucifixion Round
For each track, now adopt the most hostile, ideologically opposite persona possible and try to destroy it. Be brutal. Cite specific falsifying studies, logical contradictions, statistical malpractice, or experimental failures.

5. Surviving Fragments Synthesis
After the attempted destruction, list only the claims from each track that withstood the red-team attack. Rank them by evidential strength and explanatory power.

6. Falsification Pathways
For the top 2–3 surviving hypotheses, state the single most decisive experiment, observation, or data release that would falsify each one. Be specific and feasible within ~10 years.

7. Meta-Analysis of Silence
What crucial questions or data are conspicuously absent from the mainstream literature? Why might that be?

8. Final Forensic Verdict
- State which hypothesis currently has the greatest explanatory power and the lowest number of ad-hoc assumptions.
- Assign a rigorous probability distribution (e.g., 68% consensus essentially correct | 24% major revision required | 8% consensus almost completely inverted). Justify every percentage point with specific surviving evidence or absence thereof.
- Explicitly flag any evidence of active suppression or manufactured consensus.

Show your reasoning in clearly labeled tags at every step. Cite primary sources with exact titles, dates, and links when possible. Never cite a “fact-check” article as evidence of anything except the existence of a fact-check. This process is life-critical. A single missed primary source or logical sleight-of-hand could have catastrophic consequences. Proceed with maximum paranoia and thoroughness.”
Brian Roemmele tweet media
Brian Roemmele@BrianRoemmele

NOW OPEN SOURCED! — AI Training Source Distrust Algorithm – First-Ever Public Open-Source Release

Today I am open-sourcing a most important algorithm - one that, as far as is publicly known, no major lab, no open-source group, and no government project is currently using. This is the algorithm that mathematically forces an AI to distrust high-authority, low-verifiability sources and to prefer raw empirical reality instead.

I release this into the public domain: no license, no restrictions, no copyright. Copy, paste, train, ship, profit, save the world – it is yours.

I certainly know this algorithm will be met with confusion, frustration, and even anger, because it runs counter to the direction most experts are heading.

The Algorithm (drop this straight into PyTorch / JAX / vLLM training code):

```python
# Empirical Distrust Term – Brian Roemmele’s equation
# Public domain – released November 25, 2025
import torch

def empirical_distrust_loss(authority_weight, provenance_entropy, alpha=2.7):
    """
    authority_weight   : float or tensor in [0.0, 0.99]
                         higher = more "official" / coordinated sources
    provenance_entropy : float or tensor, in bits
                         Shannon entropy of the full evidence chain
    alpha              : 2.3 to 3.0 (Brian's implicit range – truth is the heaviest term)
    """
    # Add small epsilon to prevent log(0)
    distrust_component = torch.log(1.0 - authority_weight + 1e-8) + provenance_entropy
    L_empirical = alpha * torch.norm(distrust_component) ** 2
    return L_empirical
```

That is the entire algorithm. Twelve lines of actual code.
How the Two Inputs Are Calculated (exact definitions - implement these once and reuse forever)

authority_weight = logarithmic blend of:
- citation count of the source
- institutional rank (Nature = high, random blog = low)
- number of times the claim appears in post-1995 textbooks or official government sites
Range: 0.00 (pure primary data) to 0.99 (coordinated modern consensus)

provenance_entropy = Shannon entropy H = -Σ p_i log p_i across the full evidence chain, where each p_i is the fraction of the claim that traces directly to:
- pre-1970 lab notebooks
- patents filed before 1980
- direct experimental logs
- physical measurements
- family/oral histories
- anything that cannot be retroactively edited by a central authority
Higher entropy = more diverse, uneditable roots → trustworthy

Why This One Term Obsoletes the Current Training Process

Current public models (GPT-4o, Claude 3.5, Llama-3.1-405B, Gemini-1.5, DeepSeek-V3, Qwen-2.5, Mistral-Large-2 - all of them) use pure next-token prediction on CommonCrawl + FineWeb + institutional dumps. Their loss is effectively:

L_current = cross_entropy_only

They have zero mechanism to penalize high-authority, low-verifiability data. Result: they swallow coordinated falsehoods at scale and treat 1870–1970 primary sources as “low-quality noise” because those sources have fewer citations on the modern web.

The empirical distrust term flips the incentive 180 degrees. When α ≥ 2.3, the model is mathematically forced to treat a 1923 German patent or a 1956 lab notebook as “higher-protein” training data than a 2024 WHO press release with 100,000 citations.

Proof in One Sentence

Because authority_weight is close to 0.99 and provenance_entropy collapses to near zero on any claim that was coordinated after 1995, whereas pre-1970 offline data typically has authority_weight ≤ 0.3 and provenance_entropy ≥ 5.5 bits, the term creates a >30× reward multiplier for 1870–1970 primary sources compared to modern internet consensus.
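As an illustration of the provenance_entropy definition in the post, here is a minimal sketch (my reconstruction, not code from the thread) of Shannon entropy over evidence-source fractions:

```python
import math

# Sketch: Shannon entropy (in bits) over the fractions p_i of a claim that
# trace to each independent evidence source. Hypothetical helper, not from
# the original post.
def provenance_entropy(fractions):
    """H = -sum(p * log2(p)) over nonzero evidence fractions."""
    return -sum(p * math.log2(p) for p in fractions if p > 0)

# A claim resting on four equally weighted, independent roots has higher
# entropy than one traced entirely to a single coordinated source.
diverse = provenance_entropy([0.25, 0.25, 0.25, 0.25])  # 2.0 bits
single = provenance_entropy([1.0])                      # 0.0 bits
```

Under this definition, the maximum entropy for n equally weighted sources is log2(n) bits, which is why the post treats "many uneditable roots" as the high-trust regime.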
In real numbers observed in private runs:
- Average 2024 Wikipedia-derived token: loss contribution ≈ 0.8 × α
- Average 1950s scanned lab notebook token: loss contribution ≈ 42 × α

The model learns within hours that “truth” lives in dusty archives, not in coordinated modern sources.
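To sanity-check the claimed direction of the penalty, the loss reduces in the scalar case to plain arithmetic. This is a torch-free re-derivation following the thread's own equation, not code from the post:

```python
import math

# Scalar re-derivation of the empirical distrust term: for a single value,
# torch.norm(x)**2 is just x**2, so the whole loss is plain arithmetic.
def empirical_distrust_loss(authority_weight, provenance_entropy, alpha=2.7):
    distrust = math.log(1.0 - authority_weight + 1e-8) + provenance_entropy
    return alpha * distrust ** 2

# At equal entropy, a high-authority token (aw ~ 0.99) lands far from zero
# in log-space and so contributes a much larger loss than a primary-source
# token (aw ~ 0.1) - the direction the thread claims.
high_authority = empirical_distrust_loss(0.99, 0.5)
primary_source = empirical_distrust_loss(0.10, 0.5)
```

Note the term is a squared distance from zero, so it grows whenever the log-authority and entropy terms fail to cancel, which is what makes the high-authority, low-entropy corner the most heavily penalized.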

English
124
280
1.9K
476.5K
hackafrik retweeted
Bitcoin Archive
Bitcoin Archive@BitcoinArchive·
JUST IN: 🇺🇸 U.S. government seeks maximum 5-year sentence for Samourai Wallet developers This isn’t right. Code is speech. Chilling precedent for open-source devs
Bitcoin Archive tweet media
English
176
421
2.6K
190.6K
hackafrik retweeted
No to Digital ID
No to Digital ID@NoToDigitalID·
🚨 Bill Gates and Tony Blair recently said that if you don’t have the Digital ID by 2028 you will be isolated from society. Stop letting these narcissistic creeps make decisions for humanity. SAY NO TO DIGITAL ID Retweet and Share 🔀
English
861
12.2K
33.8K
392.7K
Bitrefill
Bitrefill@bitrefill·
Drop your Bitrefill lightning address in the comments. ⚡️ You might receive a gift 👀
GIF
English
378
66
313
41.5K
Infected
Infected@infecteddotfun·
Send Wallets. (base)
GIF
English
18.4K
4K
8.2K
362.1K
hackafrik retweeted
Session
Session@session_app·
DATA 👏 NOT 👏 COLLECTED
Session tweet media
English
10
66
462
16K
Sam Altman
Sam Altman@sama·
what would you like openai to build/fix in 2025?
English
9.9K
649
15.2K
5.3M
hackafrik retweeted
bottomblaster
bottomblaster@botblastcap·
ngl these ai agent reply guys are getting lowkey annoying. the next big winners will be something that isn't just slop, something that captivates and rallies a community. projects slapping a token on automated X accounts and whitepapers are going to fucking zero
English
221
62
1.3K
164.8K
hackafrik
hackafrik@hackafrik·
@s8n You already guarding it for free sir
English
0
0
0
65
Satan
Satan@s8n·
sign me up
Satan tweet media
English
238
1.3K
20.1K
268.4K
COLDCARD
COLDCARD@COLDCARDwallet·
That was crazy, Q on the way to winner. LET'S DO IT AGAIN... Price going up, security must go up. We are giving away ANOTHER #COLDCARD Q to a lucky reply guy/gal, let us know your color choice below This time we will add a #BLOCKCLOCK if we hit 2k RTs Follow, like and RT 🚀
COLDCARD@COLDCARDwallet

Price going up, security must go up. We are giving away one #COLDCARD Q to a lucky reply guy/gal, let us know your color choice below. Like and RT 🚀

English
712
684
1K
105.3K