HappyHippo ~ low fees high APY yield optimizer

4.8K posts


@HappyHippoDefi

https://t.co/YMNzSl2zmj is an Automated Liq Mgmt for Pancakeswap Hippo on ARB (💙,🧡) https://t.co/er82q0qB35 Telegram: https://t.co/a6YoyFkG5P

Wakanda · Joined November 2021
468 Following · 2.1K Followers
Pinned Tweet
HappyHippo ~ low fees high APY yield optimizer
Dear Hippo users, after almost 3 years of serving yields with our impregnable vaults, we've decided to wind down Hippo, so we recommend that you withdraw your deposits. Don't panic: this is not a sudden process, and the website will continue to work as normal.
HappyHippo ~ low fees high APY yield optimizer retweeted
Bindu Reddy@bindureddy·
Even if Strawberry or Q* or whatever were AGI or ASI, almost no one is going to pay $2000 for it. No consumer, anyway... What would they do with this superintelligence?
HappyHippo ~ low fees high APY yield optimizer
How do PWMs, despite all the bad rep, manage to attract billions and billions every year from investors? If the client made money from tech, pitch them a real estate deal. If the client made money from real estate, pitch them a tech deal. If the client made money from a generic conventional business, pitch them arcane financial securities. If the client made money from trading/finance, pitch them a generic business deal. In short, pitch the client things they don't understand in order for them to give you money 😁
Moiz Ali@moizali·
I've worked with 3 different private wealth managers over the past 5 years, including @GoldmanSachs. To date, I can say that:

A. They have provided virtually no value in growing my net worth. They promise access to exclusive investment opportunities, but the investments aren't nearly as good or as exclusive as you'd think. Elliott Management has $71 billion under management. How exclusive do you think it is? Every wealth manager pitched me "exclusive access" to Elliott. It's the fucking Vanguard of private wealth managers. Forerunner Ventures? They raised $1 billion. Nothing you couldn't get access to if you really wanted/tried. And the funds you can't get access to? They can't either. Sequoia? Not a chance in hell.

B. They are structured against success. You know what I want to invest in? The small scrappy guy who bought two properties in SoCal or Idaho or Oklahoma, learned how to work with contractors, and flipped them. Now he wants to buy 10, or a small apartment building, and do the same. But private wealth managers are all focused on acquiring and retaining large, rich clients. Why? Because their compensation is a percentage of the money you have with them. If you have $10M invested with them, they make less than if you have $100M. So they want big fish. As a result, they can't invest in a guy raising $10M to buy real estate in Coral Gables, Florida, because he's too small for them. They can only invest in the Elliotts of the world.

C. The idea that they are going to set you up with unique advisors who will be helpful is malarkey. The people they set you up with are run-of-the-mill attorneys or accountants. They aren't creative. They aren't thoughtful. They aren't amazing. If they were, they'd hang out their own shingle and make a ton of money. You think the best tax attorney works at Goldman Sachs, where he makes $1M a year? He could start his own firm and make 10X that.

D. They aren't smarter than you. The private wealth manager I work with today forecasted a soft landing with no meaningful interest rate hikes 2.5 years ago. They suggested I invest ~$10M in medium-term bonds because there was 3% yield to be had and they didn't think interest rates would go up. I remember sitting in that conference room listening to them and thinking "are you fucking incompetent or insane?" I invested in one fund with Colony Capital that was focused on real estate during the pandemic. It LOST money. One of the few real estate funds to break the buck during the pandemic. And it wasn't focused on office real estate, so don't even say that. Private wealth managers' PhDs will say "discounted cash flows" and "regression analysis" to make your head spin, and then jerk off in the dark with your money.

E. The worst is Goldman Sachs, though. I mean, they are the fucking worst. Rather than invest in Elliott, they say "we have our own Elliott where we do the same thing but better." That may be true, but they'd say that no matter what you suggested. If @BillGates agreed to pay me a billion dollars tomorrow if I loaned him $1 today, Goldman would advise against it. Goldman would say "don't lend him the dollar - give it to us to invest instead," because then they'd earn fees on that dollar.

A. If you're a PWM, tell me why I'm wrong. How are you different than other managers? Why are you better?
B. Are you a client who has had a better experience with private wealth managers than me? If so, please explain why.
C. If you're thinking about using a PWM, I'd suggest just investing in the S&P 500.

What do you think?
Gas Biz Guy@gas_biz·
Top two things a gas station needs to get your business.
HappyHippo ~ low fees high APY yield optimizer
@karpathy You probably can already do this in East Asia, where mobile wallets & digital banks dominate. In the US, banking & financial tech advancement has been hindered by the Bank Holding Company Act of 1956
Andrej Karpathy@karpathy·
I feel like a large amount of GDP is locked up because it is difficult for person A to very conveniently pay 5 cents to person B. Current high fixed costs per transaction force each of them to be of high enough amounts, which results in business models with purchase bundles, subscriptions, ad-based, etc., instead of simply pay-as-you-go. As an example, I'd like my computer to auto-pay 5 cents to the article/blog that I just read but I can't, and I think we're worse for it. In a capitalist system, transactions between entities are the gradient signal of the economy. Because our pipes don't support low magnitude terms in the sums, the gradients are not flowing properly through the system. I'm not familiar enough with payments to have an idea of specific solutions, but I expect we'd see a lot of positive 2nd / 3rd order effects if the gradients were allowed to flow properly, frictionlessly and with much higher resolution.
0xsifu@0xSifu·
0xsifu tweet media
Andrej Karpathy@karpathy·
# RLHF is just barely RL

Reinforcement Learning from Human Feedback (RLHF) is the third (and last) major stage of training an LLM, after pretraining and supervised finetuning (SFT). My rant on RLHF is that it is just barely RL, in a way that I think is not too widely appreciated. RL is powerful. RLHF is not.

Let's take the example of AlphaGo. AlphaGo was trained with actual RL: the computer played games of Go and trained on rollouts that maximized the reward function (winning the game), eventually surpassing the best human players at Go. AlphaGo was not trained with RLHF. If it were, it would not have worked nearly as well.

What would it look like to train AlphaGo with RLHF? Well, first you'd give human labelers two board states from Go and ask them which one they like better. Then you'd collect, say, 100,000 comparisons like this and train a "Reward Model" (RM) neural network to imitate this human "vibe check" of the board state, training it to agree with the human judgement on average. Once you have a Reward Model vibe check, you run RL with respect to it, learning to play the moves that lead to good vibes. Clearly, this would not have led anywhere too interesting in Go. There are two fundamental, separate reasons for this:

1. The vibes could be misleading. This is not the actual reward (winning the game); it is a crappy proxy objective.
2. Much worse, you'd find that your RL optimization goes off the rails as it quickly discovers board states that are adversarial examples to the Reward Model. Remember, the RM is a massive neural net with billions of parameters imitating the vibe. There are board states that are "out of distribution" with respect to its training data, states that are not actually good, yet by chance get a very high reward from the RM.

For the exact same reasons, I'm sometimes a bit surprised RLHF works for LLMs at all. The RM we train for LLMs is just a vibe check in the exact same way. It gives high scores to the kinds of assistant responses that human raters statistically seem to like. It's not the "actual" objective of correctly solving problems; it's a proxy objective of what looks good to humans. Second, you can't even run RLHF for too long, because your model quickly learns to respond in ways that game the reward model. These completions can look really weird; e.g., you'll see that your LLM Assistant starts to respond with something nonsensical like "The the the the the the" to many prompts. This looks ridiculous to you, but then you look at the RM vibe check and see that for some reason the RM thinks these look excellent. Your LLM found an adversarial example: it's out of domain w.r.t. the RM's training data, in undefined territory. Yes, you can mitigate this by repeatedly adding these specific examples to the training set, but you'll find other adversarial examples next time around. For this reason, you can't even run RLHF for too many steps of optimization. You do a few hundred/thousand steps and then you have to call it, because your optimization will start to game the RM. This is not RL like AlphaGo was.

And yet, RLHF is a net helpful step in building an LLM Assistant. I think there are a few subtle reasons, but my favorite one to point to is that through it, the LLM Assistant benefits from the generator-discriminator gap. That is, for many problem types it is a significantly easier task for a human labeler to select the best of a few candidate answers than to write the ideal answer from scratch. A good example is a prompt like "Generate a poem about paperclips" or something like that. An average human labeler will struggle to write a good poem from scratch as an SFT example, but they could select a good-looking poem given a few candidates. So RLHF is a way to benefit from this gap in the "easiness" of human supervision. There are a few other reasons too; e.g., RLHF is also helpful in mitigating hallucinations, because if the RM is a strong enough model to catch the LLM making stuff up during training, it can learn to penalize this with a low reward, teaching the model an aversion to risking factual claims when it's not sure. But a satisfying treatment of hallucinations and their mitigations is a whole different post, so I digress.

All to say that RLHF *is* net useful, but it's not RL. No production-grade *actual* RL on an LLM has so far been convincingly achieved and demonstrated in an open domain, at scale. And intuitively, this is because getting actual rewards (i.e., the equivalent of winning the game) is really difficult in open-ended problem-solving tasks. It's all fun and games in a closed, game-like environment like Go, where the dynamics are constrained and the reward function is cheap to evaluate and impossible to game. But how do you give an objective reward for summarizing an article? Or answering a slightly ambiguous question about some pip install issue? Or telling a joke? Or rewriting some Java code to Python? Going towards this is not in principle impossible, but it's also not trivial, and it requires some creative thinking. Whoever convincingly cracks this problem will be able to run actual RL, the kind of RL that led to AlphaGo beating humans at Go. Except this LLM would have a real shot at beating humans in open-domain problem solving.
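The RM-training recipe sketched above (collect pairwise human preferences, fit a model that scores the preferred item higher) is, at its core, a pairwise logistic (Bradley-Terry style) loss. A minimal runnable sketch, where a linear toy model and synthetic feature vectors stand in for the billion-parameter network and real human labels:

```python
import numpy as np

# Toy reward-model training on pairwise preferences. Everything here is an
# illustrative stand-in: a 4-dim linear model instead of a neural net, and
# synthetic "chosen"/"rejected" features instead of human-labeled responses.

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Synthetic data: "chosen" examples lean positive along dimension 0.
shift = np.array([2.0, 0.0, 0.0, 0.0])
chosen = rng.normal(size=(200, 4)) + shift   # preferred responses
rejected = rng.normal(size=(200, 4))         # dispreferred responses

w = np.zeros(4)  # reward-model parameters: r(x) = x @ w
lr = 0.1
for _ in range(500):
    # Pairwise logistic loss: L = -log sigmoid(r(chosen) - r(rejected))
    margin = chosen @ w - rejected @ w
    p = sigmoid(margin)  # model's P(chosen preferred over rejected)
    grad = -((1.0 - p)[:, None] * (chosen - rejected)).mean(axis=0)
    w -= lr * grad

# After training, the RM ranks chosen above rejected most of the time --
# agreeing with the labels "on average", exactly the vibe-check objective.
accuracy = float((chosen @ w > rejected @ w).mean())
print(f"pairwise accuracy: {accuracy:.2f}")
```

Note that nothing in this objective knows about winning or correctness; the model only learns to reproduce the preference signal, which is precisely why an RL optimizer run against it can later find scoring artifacts to exploit.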
Andrej Karpathy tweet media
HappyHippo ~ low fees high APY yield optimizer
I could definitely see how that could work; however, here's my comment on it. 1. The best way to get to know models is to become an amateur photographer of sorts. Buy a DSLR camera and set up an Instagram profile; that way you can cold-approach them: just walk up to them in the club, park, mall, or wherever and say you love their look and want to do a photoshoot of them for private use. Models are usually pretty easy to spot from the way they dress and conduct themselves. 2. Of the models that I know, and speaking from experience across 3 continents here, most of them don't have a problem having casual sex with "friends," but they do have a problem with clingy guys, especially if you're not rich. Modelling gigs don't pay very well, and models like to always keep their options open to find the best possible (rich) mate. So there's a good chance you could bang them just by being friends, as long as you're fun and useful to be around; just make sure to signal that there'll be no strings attached.
BowTiedPassport@BowTiedPassport·
Why You Should Friendzone Models 🧵: If you're familiar with the modeling industry, you know it's transient. Girls get a contract > get flown to a city > get put in a model house, where they sleep in a room with six other chicks from various locations for a couple of months, hoping to get booked for as many jobs as possible.
BowTiedPassport tweet media
StripMallGuy@realEstateTrent·
Serious question: You’re at a restaurant. The waiter comes to your table and asks: "Bottled water or tap water?" What is your answer?
HappyHippo ~ low fees high APY yield optimizer
🤔 Billing-wise, it should look something like this:

McKinsey Engagement Billing Detail
Client: City Hall Ballers
Project: Waste Management Strategy
Duration: January 1, 2023, to December 31, 2023

Personnel
Senior Partner: 100 hours @ $800/hour = $80,000
Partner: 300 hours @ $700/hour = $210,000
Associate Partner: 400 hours @ $500/hour = $200,000
Engagement Manager: 1,000 hours @ $400/hour = $400,000
Senior Associate: 1,200 hours @ $300/hour = $360,000
Associate: 2,500 hours @ $250/hour = $625,000
Analyst: 3,000 hours @ $200/hour = $600,000
Total Personnel Costs: $2,475,000

Expenses
Travel and Accommodation: $250,000
Meals and Incidentals: $75,000
Transportation: $100,000
Miscellaneous Project Expenses: $50,000
Total Expense Costs: $475,000

Other Costs
Data and Analytics Subscriptions: $100,000
Third-party Consulting Fees: $200,000
Specialist Contractors: $300,000
Technology Licensing Fees: $150,000
Total Other Costs: $750,000

Contingency and Risk Management: $300,000

Grand Total: $4,000,000
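The line items in the mock invoice do sum as claimed. A quick sanity check of the arithmetic, using only the figures quoted in the tweet:

```python
# Verify the mock McKinsey invoice totals (all numbers come from the tweet;
# personnel rates are $/hour).
personnel = {
    "Senior Partner":     (100, 800),
    "Partner":            (300, 700),
    "Associate Partner":  (400, 500),
    "Engagement Manager": (1000, 400),
    "Senior Associate":   (1200, 300),
    "Associate":          (2500, 250),
    "Analyst":            (3000, 200),
}
personnel_total = sum(hours * rate for hours, rate in personnel.values())

expenses_total = 250_000 + 75_000 + 100_000 + 50_000   # travel, meals, transport, misc
other_total = 100_000 + 200_000 + 300_000 + 150_000    # data, consulting, contractors, licensing
contingency = 300_000

grand_total = personnel_total + expenses_total + other_total + contingency
print(personnel_total, expenses_total, other_total, grand_total)
# -> 2475000 475000 750000 4000000
```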
StripMallGuy@realEstateTrent·
Many people live like this: They try to get a credit by telling the hotel how upset they are about the water pressure. They ask for discounts from their friend the auto-mechanic, and negotiate even with their barber and their dentist. It's an all-out relentless effort to save a few bucks here and there, but at what cost? They dedicate themselves to obsessively saving every penny they can, even if they make millions. They can't help it. It's an addiction. They have to win. They never consider how much better off they'd be if they put all that energy to better use, or saved themselves the trouble. You saved $80, but what about the conflict, stress, and time it took to do it? What a crazy blind spot.
StripMallGuy@realEstateTrent·
When I was growing up, there was a girl in my class named Cindy. She always got the best grades in the class, by far. One year there was a simple division test, and the teacher asked the students to switch papers and grade each other’s tests. I had Cindy’s. She missed one problem, and her grade was a 98%. As she approached me and saw her paper, her face dropped. Her eyes started to water. She began to panic. I was really confused - her score was surely the highest in the class. She grabbed her paper, rushed back to her desk, looked around, and picked up her backpack. She used it as a shield as she erased the incorrect answer, and changed her score to perfect. The teacher noticed what was going on, and called her parents. As I look back now as a parent myself, I can’t help but wonder if Cindy ever recovered from being raised this way. Some parents have no idea how much damage they are causing their young kids.
HappyHippo ~ low fees high APY yield optimizer
#HippoV3 June Jubilee axlUSDC-USDT vault, June 2024 recap: Net Yield = End Price - Start Price = 1.1129 - 1.1016 = 0.0113 = 1.13% (13.56% annualized). 12.9% total actual real yield generated since launching in October 2023, with only $50k of capacity still available.
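A quick check of the vault arithmetic as the tweet presents it. The end price of 1.1129 is an assumption implied by the quoted 0.0113 difference from the 1.1016 start price; the tweet treats the absolute price move as the monthly percentage yield and annualizes by a simple 12×:

```python
# Reproduce the vault yield figures quoted above. end_price = 1.1129 is
# assumed: it is what the stated 0.0113 difference implies given the
# 1.1016 start price.
start_price = 1.1016
end_price = 1.1129
net = round(end_price - start_price, 4)          # 0.0113
monthly_pct = round(net * 100, 2)                # 1.13 (%), absolute move read as yield
annualized_pct = round(monthly_pct * 12, 2)      # 13.56 (%), simple non-compounded x12
print(net, monthly_pct, annualized_pct)
```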
HappyHippo ~ low fees high APY yield optimizer tweet media