ASS
59 posts
@ASS_OfSol

Decentralized AI voice engine. Permissionless voice cloning. Powered by $ASS. The future sounds different. CWryLYq4PpvyKeBwADZ54UedyTKX6rCWUvUrD9RLpump

Joined November 2025
5 Following · 458 Followers
Pinned Tweet
ASS
ASS@ASS_OfSol·
Our technology isn’t just *revolutionary*, it is uncensored, un-woke. Just as good, if not *better*, than what is currently out there. It challenges big tech’s status quo — pass us the towel they threw. Buy your peace of mind: CWryLYq4PpvyKeBwADZ54UedyTKX6rCWUvUrD9RLpump #privacy
ASS tweet media
17 replies · 7 reposts · 26 likes · 32.8K views
David Gokhshtein
David Gokhshtein@davidgokhshtein·
What’s the most underrated project in crypto that nobody’s talking about?
1.1K replies · 50 reposts · 650 likes · 92.2K views
ASS reposted
ASS
ASS@ASS_OfSol·
jeeters jeeted
8 replies · 1 repost · 9 likes · 18K views
ASS
ASS@ASS_OfSol·
Our uncensored AI image generator is complete and ready for deployment. 💎 $ASS holders get: - Unlimited image generation - No censorship - Web3 transactions - Free access for life The feature is built. Tested. Ready. Last chance to get in before the moon mission 🚀 #ASS #AI #Web3 #DeFi
ASS tweet media
8 replies · 6 reposts · 15 likes · 19.7K views
ASS
ASS@ASS_OfSol·
CWryLYq4PpvyKeBwADZ54UedyTKX6rCWUvUrD9RLpump This is real Ai. Not a larp. Simple.
ASS tweet media
4 replies · 9 reposts · 26 likes · 14.1K views
ASS
ASS@ASS_OfSol·
@BasedNPCsol We believe in being able to sell private tech using Web3 only for transactions. Simple. You want a robot? Well you also want Privacy.
0 replies · 0 reposts · 6 likes · 210 views
Based Skywalker
Based Skywalker@BasedNPCsol·
@ASS_OfSol This computer is computer vision 👀 Planning on making robots???
1 reply · 0 reposts · 1 like · 222 views
ASS
ASS@ASS_OfSol·
This bad boy right here is going to enable so many private and free AI modes for the Web3 market. Not a larp. Real tech. Degenerates deserve their own degenerate uncensored AI. Best company to be a part of.
ASS tweet media
4 replies · 3 reposts · 14 likes · 5.7K views
ASS reposted
ASS
ASS@ASS_OfSol·
We encourage you to use the tech. Clone any voice! 100% Private! ass.studio/clonify/text-t…
36 replies · 11 reposts · 73 likes · 19.8K views
ASS
ASS@ASS_OfSol·
We increased free usage for every user, worldwide. Soon ASS will only be available to $ASS holders. Announcement at 11:11 PM CT.
3 replies · 6 reposts · 21 likes · 7.8K views
ASS
ASS@ASS_OfSol·
Two more days. 11.11 11:11 CWryLYq4PpvyKeBwADZ54UedyTKX6rCWUvUrD9RLpump Real Ai - no larp.
ASS tweet media
13 replies · 5 reposts · 23 likes · 11.8K views
ASS
ASS@ASS_OfSol·
We need more degenerates involved in ASS. Seriously, this isn’t *just* an AI voice cloning company.
4 replies · 2 reposts · 12 likes · 3.1K views
ASS
ASS@ASS_OfSol·
Things are looking good for tomorrow. Good luck us. 📈
5 replies · 0 reposts · 13 likes · 3K views
ASS
ASS@ASS_OfSol·
We love what we are introducing. 11.11.2025 | 11:11 PM CT. Order your privacy now: CWryLYq4PpvyKeBwADZ54UedyTKX6rCWUvUrD9RLpump
ASS tweet media
17 replies · 10 reposts · 29 likes · 3.6K views
ASS
ASS@ASS_OfSol·
If you believe in privacy you should help support us. You don’t even have to buy our coin, just help us get exposure because privacy is a big thing right now.
1 reply · 1 repost · 9 likes · 2.4K views
ASS
ASS@ASS_OfSol·
@sama We provide *true* privacy and do not sell users’ data here — we are small, but we are growing and providing *lifetime*, free, unlimited usage of our tech to our investors. Just invest $100 and you get access to our AI forever and always. Simple.
0 replies · 0 reposts · 2 likes · 239 views
Sam Altman
Sam Altman@sama·
I would like to clarify a few things.

First, the obvious one: we do not have or want government guarantees for OpenAI datacenters. We believe that governments should not pick winners or losers, and that taxpayers should not bail out companies that make bad business decisions or otherwise lose in the market. If one company fails, other companies will do good work.

What we do think might make sense is governments building (and owning) their own AI infrastructure, but then the upside of that should flow to the government as well. We can imagine a world where governments decide to offtake a lot of computing power and get to decide how to use it, and it may make sense to provide lower cost of capital to do so. Building a strategic national reserve of computing power makes a lot of sense. But this should be for the government’s benefit, not the benefit of private companies.

The one area where we have discussed loan guarantees is as part of supporting the buildout of semiconductor fabs in the US, where we and other companies have responded to the government’s call and where we would be happy to help (though we did not formally apply). The basic idea there has been ensuring that the sourcing of the chip supply chain is as American as possible in order to bring jobs and industrialization back to the US, and to enhance the strategic position of the US with an independent supply chain, for the benefit of all American companies. This is of course different from governments guaranteeing private-benefit datacenter buildouts.

There are at least 3 “questions behind the question” here that are understandably causing concern.

First, “How is OpenAI going to pay for all this infrastructure it is signing up for?” We expect to end this year above $20 billion in annualized revenue run rate and grow to hundreds of billions by 2030. We are looking at commitments of about $1.4 trillion over the next 8 years. Obviously this requires continued revenue growth, and each doubling is a lot of work! But we are feeling good about our prospects there; we are quite excited about our upcoming enterprise offering for example, and there are categories like new consumer devices and robotics that we also expect to be very significant. But there are also new categories we have a hard time putting specifics on, like AI that can do scientific discovery, which we will touch on later. We are also looking at ways to more directly sell compute capacity to other companies (and people); we are pretty sure the world is going to need a lot of “AI cloud”, and we are excited to offer this. We may also raise more equity or debt capital in the future. But everything we currently see suggests that the world is going to need a great deal more computing power than what we are already planning for.

Second, “Is OpenAI trying to become too big to fail, and should the government pick winners and losers?” Our answer on this is an unequivocal no. If we screw up and can’t fix it, we should fail, and other companies will continue on doing good work and servicing customers. That’s how capitalism works and the ecosystem and economy would be fine. We plan to be a wildly successful company, but if we get it wrong, that’s on us. Our CFO talked about government financing yesterday, and then later clarified her point, underscoring that she could have phrased things more clearly. As mentioned above, we think that the US government should have a national strategy for its own AI infrastructure.

Tyler Cowen asked me a few weeks ago about the federal government becoming the insurer of last resort for AI, in the sense of risks (like nuclear power), not about overbuild. I said “I do think the government ends up as the insurer of last resort, but I think I mean that in a different way than you mean that, and I don’t expect them to actually be writing the policies in the way that maybe they do for nuclear”. Again, this was in a totally different context than datacenter buildout, and not about bailing out a company. What we were talking about is something going catastrophically wrong—say, a rogue actor using an AI to coordinate a large-scale cyberattack that disrupts critical infrastructure—and how intentional misuse of AI could cause harm at a scale that only the government could deal with. I do not think the government should be writing insurance policies for AI companies.

Third, “Why do you need to spend so much now, instead of growing more slowly?” We are trying to build the infrastructure for a future economy powered by AI, and given everything we see on the horizon in our research program, this is the time to invest to be really scaling up our technology. Massive infrastructure projects take quite a while to build, so we have to start now. Based on the trends we are seeing of how people are using AI and how much of it they would like to use, we believe the risk to OpenAI of not having enough computing power is more significant and more likely than the risk of having too much. Even today, we and others have to rate limit our products and not offer new features and models because we face such a severe compute constraint. In a world where AI can make important scientific breakthroughs but at the cost of tremendous amounts of computing power, we want to be ready to meet that moment. And we no longer think it’s in the distant future.

Our mission requires us to do what we can to not wait many more years to apply AI to hard problems, like contributing to curing deadly diseases, and to bring the benefits of AGI to people as soon as possible. Also, we want a world of abundant and cheap AI. We expect massive demand for this technology, and for it to improve people’s lives in many ways. It is a great privilege to get to be in the arena, and to have the conviction to take a run at building infrastructure at such scale for something so important.

This is the bet we are making, and given our vantage point, we feel good about it. But we of course could be wrong, and the market—not the government—will deal with it if we are.
5.8K replies · 1.4K reposts · 12.5K likes · 8M views
ASS
ASS@ASS_OfSol·
Imagine typing all this to shill your AI tech as private and secure when it’s really just selling your data, unlike @ASS_OfSol. Seriously?
Sam Altman@sama
0 replies · 1 repost · 6 likes · 3.5K views