blessedstaff | Goldsand

669 posts

@blessedstaff

building the halal replacement for your savings account. 2x dad and chai enthusiast. 🇵🇰🇺🇸

Singapore · Joined July 2021
632 Following · 969 Followers
blessedstaff | Goldsand
blessedstaff | Goldsand@blessedstaff·
@syntakhs @thesubmitter_ exposing a fraud? a fraud that you haven't even seen? i offered you a chance to actually take a look and see how it works, instead you've leveled accusations, curses, and profanity during the holy month. i'm sorry brother, but you are being blocked. blessings.
📜
📜@syntakhs·
@blessedstaff @thesubmitter_ My work here is done, exposing the fraud for what it is. You ‘operate’ Ethereum and Solana networks for fees but it’s ‘not staking’? That’s a lie, those are PoS (Proof-of-Stake) networks, you little PoS (Piece of Shit); earning fees by running nodes is literally staking
blessedstaff | Goldsand retweeted
Tariq | Goldsand
Tariq | Goldsand@thesubmitter_·
We're definitely entering a new era in Muslim innovation. Case in point: we’ve brought together Muslim Silicon Valley investors & engineers, Apple-quality designers, and PhD level financial experts and Islamic law experts to build a revolutionary fintech app that beats interest-based savings. The sheer talent + ihsaan in one project would have been unthinkable a few years ago, but now it's possible.
Saufiyah Ali@saufiyvh

I don't think Muslim creatives understand how rare their position is. You have access to both worlds: the deen and the dunya's tools. You can build, design, write, film, code, and every single thing you create can be an act of worship if the intention is right. Most people never get that clarity. Most people spend their lives chasing meaning through work that has no connection to anything eternal. You already have the answer. Use your skills for Allah. That's it.

blessedstaff | Goldsand
blessedstaff | Goldsand@blessedstaff·
@syntakhs @thesubmitter_ blockchain tx fees don't mean staking. that's an easy way to understand it for normies, which is who this product is targeted toward. if you want to try out the product, you can see for yourself that it's not staking. or do you prefer to just criticize and curse people?
📜
📜@syntakhs·
@blessedstaff @thesubmitter_ 😂😂 BS If this is ‘not that,’ why does your own FAQ say you earn by operating Ethereum and Solana? Fear Allah ﷻ man That is high risk staking, you’re marketing a volatile crypto protocol as a ‘zero volatility’ bank replacement May Allah ﷻ’s curse be upon you as well
📜 tweet media
blessedstaff | Goldsand
blessedstaff | Goldsand@blessedstaff·
@syntakhs @thesubmitter_ instead of cursing you back, I ask Allah SWT to guide you and protect your tongue from harming others without first giving them the benefit of the doubt. we have never scammed anyone. we have many happy users, and we've reached over $15M in TVL
blessedstaff | Goldsand
blessedstaff | Goldsand@blessedstaff·
@syntakhs @thesubmitter_ what a way to start the last 10 nights by cursing people without even investigating what they're working on. first of all, @thesubmitter_ is not even Egyptian. second, it's true that we prev built shariah-compliant staking products for crypto-natives, but this is not that.
Abbas Khan ⟠
Abbas Khan ⟠@KhanAbbas201·
Lost 20 bucks over this bet. Imagine thinking Indonesian cuisine is better than Afghan, @sama fix this pls.
Abbas Khan ⟠ tweet media
Ali
Ali@analyticalali·
Gmove Pakistan 🇵🇰 It's been 10+ years since I visited, so long overdue. The place has changed quite a bit, but the one thing that remains is the crazy level of hospitality people show you. If any Movers or CT people are around Karachi or Islamabad, let me know, I'm always down for Quetta chai.
Ali tweet media
Chris AR Blauvelt
Chris AR Blauvelt@arblauvelt·
Considering switching my cell phone service. Been with T-Mobile for a long time, which I love most for its free overseas service. But reception by my house is terrible, and I'm noticing more plans include free global roaming. Anyone tried Xfinity Mobile or Google Fi? Or others?
blessedstaff | Goldsand
blessedstaff | Goldsand@blessedstaff·
Gone is the age when religion was the source of civilization. Now your sins have become the foundation of your culture. وہ زمانہ گیا کہ دین تھا فقط منشائے تہذیب، تمدّن کے سبب ہو گئے تمہارے گناہ! - Allama Iqbal
Joe
Joe@joebradford·
A Review of "Bitcoin, Fiat, and Islamic Finance" (Summary) [1]

I was excited to read Harris Irfan and Allen Farrington's paper "Bitcoin, Fiat, and Islamic Finance." The authors perform a valuable service by questioning whether modern banking, including Islamic banking, truly aligns with Islamic principles. Their identification of interest-based structures, central bank moral hazard, and wealth concentration as problematic is important and well-articulated. However, the paper's central thesis faces significant challenges.

First, the authors conflate the monetary medium with contractual structure, treating fiat currency as the fundamental problem when Islamic law actually prohibits riba and exploitative risk-transfer arrangements, not the monetary base itself. The claim that "fiat is haram" remains inadequately supported. Fiqh defines money functionally, not by supply characteristics, and the prohibited structures can exist under any monetary system, including Bitcoin.

Second, while the authors emphasize that fiat inflation extracts wealth from currency holders, they fail to address that Bitcoin's deflationary nature creates an opposite but equally problematic wealth transfer: from debtors to creditors, and from entrepreneurs to passive capital holders. Combined with any fixed obligations, hard-money deflation can be even more regressive than fiat inflation, concentrating wealth with those who already possess capital rather than those creating economic value, as evidenced by historical gold standard crises.

Critically, the authors advocate for profit-sharing instruments like mudaraba and musharaka as proper Islamic alternatives. However, these very instruments create monetary claims exceeding the hard currency base. When a mudarib generates returns through productive activity, economic value grows beyond the fixed Bitcoin supply, creating the same deflationary pressure they critique in fiat systems.

This reveals that the problem is not credit expansion itself, but rather the structure of risk allocation: partnerships that allocate risk equitably versus creditor/debtor relationships that cause debt spirals. By treating "money creation" as the disease rather than interest-based contracts, the authors mistake symptoms for causes. Bitcoin may change monetary tradeoffs, but it doesn't eliminate them. What matters is not the monetary base but whether we build inherently just financial structures and market mechanisms on top of it. As I've argued elsewhere, "Sound monetary policy stems from values and ethical framework, not the specific form of money." [2]

links below 👇👇
Axiom@axiombtc

Bitcoin, Fiat, And Islamic Finance by @harris_irfan and @allenf32. Links to follow, available online and in pdf in English and Arabic.

blessedstaff | Goldsand
blessedstaff | Goldsand@blessedstaff·
AI researchers: my 10M token context window LLM is on its way to becoming AGI Father of reinforcement learning: “LLMs aren’t bitter lesson pilled” Please AI researchers, get a grip
Andrej Karpathy@karpathy

Finally had a chance to listen through this pod with Sutton, which was interesting and amusing. As background, Sutton's "The Bitter Lesson" has become a bit of a biblical text in frontier LLM circles. Researchers routinely talk about and ask whether this or that approach or idea is sufficiently "bitter lesson pilled" (meaning arranged so that it benefits from added computation for free) as a proxy for whether it's going to work or is worth even pursuing. The underlying assumption is that LLMs are of course highly "bitter lesson pilled" indeed; just look at LLM scaling laws, where if you put compute on the x-axis, number go up and to the right.

So it's amusing to see that Sutton, the author of the post, is not so sure that LLMs are "bitter lesson pilled" at all. They are trained on giant datasets of fundamentally human data, which is both 1) human generated and 2) finite. What do you do when you run out? How do you prevent human bias? So there you have it: bitter lesson pilled LLM researchers taken down by the author of the bitter lesson - rough!

In some sense, Dwarkesh (who represents the LLM researchers' viewpoint in the pod) and Sutton are slightly speaking past each other, because Sutton has a very different architecture in mind and LLMs break a lot of its principles. He calls himself a "classicist" and evokes Alan Turing's original concept of building a "child machine": a system capable of learning through experience by dynamically interacting with the world. There's no giant pretraining stage of imitating internet webpages. There's also no supervised finetuning, which he points out is absent in the animal kingdom (it's a subtle point, but Sutton is right in the strong sense: animals may of course observe demonstrations, but their actions are not directly forced/"teleoperated" by other animals).
Another important note he makes is that even if you just treat pretraining as the initialization of a prior before you finetune with reinforcement learning, Sutton sees the approach as tainted with human bias and fundamentally off course, a bit like when AlphaZero (which has never seen human games of Go) beats AlphaGo (which initializes from them).

In Sutton's world view, all there is is interaction with a world via reinforcement learning, where the reward functions are partially environment specific but also intrinsically motivated, e.g. "fun", "curiosity", and related to the quality of the prediction in your world model. And the agent is always learning at test time by default; it's not trained once and then deployed thereafter. Overall, Sutton is a lot more interested in what we have in common with the animal kingdom than in what differentiates us. "If we understood a squirrel, we'd be almost done."

As for my take... First, I should say that I think Sutton was a great guest for the pod, and I like that the AI field maintains entropy of thought and that not everyone is exploiting the next local iteration of LLMs. AI has gone through too many discrete transitions of the dominant approach to lose that. And I also think that his criticism of LLMs as not bitter lesson pilled is not inaccurate. Frontier LLMs are now highly complex artifacts with a lot of humanness involved at all stages: the foundation (the pretraining data) is all human text, the finetuning data is human and curated, the reinforcement learning environment mixture is tuned by human engineers.

We do not in fact have an actual, single, clean, truly bitter lesson pilled, "turn the crank" algorithm that you could unleash upon the world and watch learn automatically from experience alone. Does such an algorithm even exist? Finding it would of course be a huge AI breakthrough. Two "example proofs" are commonly offered to argue that such a thing is possible.
The first example is the success of AlphaZero, which learned to play Go completely from scratch with no human supervision whatsoever. But the game of Go is clearly such a simple, closed environment that it's difficult to see the analogous formulation in the messiness of reality. I love Go, but algorithmically and categorically, it is essentially a harder version of tic tac toe.

The second example is that of animals, like squirrels. And here, personally, I am also quite hesitant about whether it's appropriate, because animals arise by a very different computational process and via different constraints than what we have practically available to us in the industry. Animal brains are nowhere near the blank slate they appear to be at birth. First, a lot of what is commonly attributed to "learning" is imo a lot more "maturation". And second, even that which clearly is "learning" and not maturation is a lot more "finetuning" on top of something clearly powerful and preexisting.

Example: a baby zebra is born, and within a few dozen minutes it can run around the savannah and follow its mother. This is a highly complex sensory-motor task, and there is no way in my mind that this is achieved from scratch, tabula rasa. The brains of animals and the billions of parameters within have a powerful initialization encoded in the ATCGs of their DNA, trained via the "outer loop" optimization of evolution. If the baby zebra spasmed its muscles around at random, as a reinforcement learning policy would have you do at initialization, it wouldn't get very far at all.

Similarly, our AIs now also have neural networks with billions of parameters. These parameters need their own rich, high-information-density supervision signal. We are not going to re-run evolution. But we do have mountains of internet documents. Yes, it is basically supervised learning, which is ~absent in the animal kingdom.
But it is a way to practically gather enough soft constraints over billions of parameters, to try to get to a point where you're not starting from scratch. TLDR: pretraining is our crappy evolution. It is one candidate solution to the cold start problem, to be followed later by finetuning on tasks that look more correct, e.g. within the reinforcement learning framework, as state-of-the-art frontier LLM labs now do pervasively.

I still think it is worthwhile to be inspired by animals. I think there are multiple powerful ideas that LLM agents are algorithmically missing that can still be adapted from animal intelligence. And I still think the bitter lesson is correct, but I see it more as something platonic to pursue, not necessarily to reach, in our real world, practically speaking. And I say both of these with double-digit-percent uncertainty, and I cheer the work of those who disagree, especially those a lot more ambitious bitter lesson wise.

So that brings us to where we are. Stated plainly, today's frontier LLM research is not about building animals. It is about summoning ghosts. You can think of ghosts as a fundamentally different kind of point in the space of possible intelligences. They are muddled by humanity. Thoroughly engineered by it. They are these imperfect replicas, a kind of statistical distillation of humanity's documents with some sprinkle on top. They are not platonically bitter lesson pilled, but they are perhaps "practically" bitter lesson pilled, at least compared to a lot of what came before.

It seems possible to me that over time we can further finetune our ghosts more and more in the direction of animals; that it's not so much a fundamental incompatibility as a matter of initialization in the intelligence space. But it's also quite possible that they diverge even further and end up permanently different, un-animal-like, but still incredibly helpful and properly world-altering. It's possible that ghosts:animals :: planes:birds.
Anyway, in summary, overall and actionably, I think this pod is solid "real talk" from Sutton to the frontier LLM researchers, who might be gear shifted a little too much in the exploit mode. Probably we are still not sufficiently bitter lesson pilled and there is a very good chance of more powerful ideas and paradigms, other than exhaustive benchbuilding and benchmaxxing. And animals might be a good source of inspiration. Intrinsic motivation, fun, curiosity, empowerment, multi-agent self-play, culture. Use your imagination.
