ali

6.9K posts


ali

@AliSarialtun

That's life

Turkey · Joined January 2012
2K Following · 252 Followers
Pinned Tweet
ali
ali@AliSarialtun·
I got into the crypto coin game with 1, and now it's at 9. Because along the way I deposited another 8.
Turkish
1
2
22
0
ali
ali@AliSarialtun·
@ilker_kocael Right from the start, is the aim of psychoanalysis for a person to feel good? What does a person need in order to commit themselves, and why can't they form a bond with those things? What is blocking their motivation, why do they feel alienated? Without knowing themselves, how are they supposed to know?
Turkish
0
0
1
988
İlker Kocael
İlker Kocael@ilker_kocael·
In this video the thinker Slavoj Žižek takes on two clichés about psychoanalysis: 1) Psychoanalysis is a process of "getting to know yourself." 2) The aim of psychoanalysis is to relieve our suffering. Building on psychotherapist Adam Phillips's observations on the subject, Žižek tears both clichés apart, arguing that the real illness is "the obsessive desire to know oneself."
Turkish
14
123
827
92K
ali
ali@AliSarialtun·
@EliBenSasson People simply do not tweet on things they don’t have opinions about
English
0
0
0
29
Eli Ben-Sasson | Starknet.io
Eli Ben-Sasson | Starknet.io@EliBenSasson·
When was the last time you read a tweet and thought "I don't have an opinion about that"?
English
28
2
45
3.5K
ahmet gümüştekin
ahmet gümüştekin@ahmetgumustekin·
The real chaos in the crypto world isn't on centralized exchanges; it's on-chain. Across dozens of different networks, hundreds of swap transactions, and liquidity pools, it is very easy to lose the trail. If you trade on-chain, especially in DeFi (decentralized finance), you can use this "Transparent Cash-Out" guide I have put together to put your on-chain operations on a sound legal footing.

⛓️ "Traceable" Cash-Outs for On-Chain Activity
No matter how complex your on-chain activity is, you need the following discipline to be able to prove the source of the money entering your bank account:

1. Project-Based Isolated Wallets
Set up a separate wallet for each on-chain project or strategy (for example, a liquidity pool or a new token). Why? When all your transactions are mixed together in a single wallet, it becomes impossible to tell the tax office or the bank, "These 10,000 dollars are the profit from that project." Wallet separation is the transaction's digital fingerprint.

2. On-Chain Consolidation
Don't make piecemeal withdrawals to a Turkish exchange from dozens of different wallets. Exit Gate: designate a single main on-chain wallet where only "clean" stablecoins are collected. Move the profits from your other wallets into this main wallet. That way you build an "on-chain hierarchy" in which the money heading to the exchange leaves from a single point.

3. Exchange Deposits and Declaring "Intent"
The moment you move money from on-chain to a Turkish exchange, it becomes a traceable financial transaction. Open record: in the exchange's deposit/withdrawal forms or in the bank transfer description, state clearly whether the money is "commercial profit from crypto assets" or a "return of principal" that you previously sent from the bank. Your on-chain history (Etherscan, Solscan, etc.) will be the strongest evidence for that declaration.

💡 Why Does This Matter So Much?
On-chain transactions may look anonymous, but the anonymity ends the moment money lands in your bank account. With this system:
Tax review: if you are ever asked about the "source," each project's wallet and its realized profit are already organized in a file.
Tax compliance: under possible future regulations, you won't make calculation errors, because you separated profit and principal from the very start.
Remember: chaos on-chain means trouble at the bank. Simplify, leave a trail, sleep easy.
#OnChain #DeFi #KriptoPara #Web3 #KriptoVergi #Blockchain #Bitcoin #Ethereum #FinansalOkuryazarlık
ahmet gümüştekin tweet media
Turkish
6
0
26
9.5K
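The cash-out guide above boils down to keeping a per-project paper trail you could hand to a bank or tax office. Below is a minimal sketch of that record-keeping, assuming hypothetical project labels, wallet addresses, output file name, and API key (none of them from the original post): it tags each isolated wallet with its project, pulls its transaction history from the public Etherscan API, and writes everything to one CSV. Treat it as an illustration of the idea, not a compliance tool; a Solana-side equivalent would follow the same pattern against a Solana explorer.

```python
# Sketch only: per-project audit trail for the "Transparent Cash-Out" approach above.
# The project->wallet mapping, output path, and ETHERSCAN_API_KEY are hypothetical
# placeholders; check current Etherscan docs and rate limits before relying on this.
import csv
import requests

ETHERSCAN_API = "https://api.etherscan.io/api"
ETHERSCAN_API_KEY = "YOUR_API_KEY"  # hypothetical placeholder

# One isolated wallet per project/strategy, as the guide recommends.
PROJECT_WALLETS = {
    "liquidity-pool-A": "0x1111111111111111111111111111111111111111",
    "new-token-B": "0x2222222222222222222222222222222222222222",
    "exit-gate": "0x3333333333333333333333333333333333333333",  # consolidation wallet
}

def fetch_transactions(address: str) -> list[dict]:
    """Pull the normal-transaction history for one wallet from Etherscan."""
    params = {
        "module": "account",
        "action": "txlist",
        "address": address,
        "startblock": 0,
        "endblock": 99999999,
        "sort": "asc",
        "apikey": ETHERSCAN_API_KEY,
    }
    response = requests.get(ETHERSCAN_API, params=params, timeout=30)
    response.raise_for_status()
    payload = response.json()
    # Etherscan returns {"status": "1", "result": [...]} on success.
    return payload.get("result", []) if payload.get("status") == "1" else []

def export_audit_trail(path: str = "onchain_audit_trail.csv") -> None:
    """Write one row per transaction, tagged with its project label."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["project", "wallet", "tx_hash", "timestamp", "from", "to", "value_wei"])
        for project, wallet in PROJECT_WALLETS.items():
            for tx in fetch_transactions(wallet):
                writer.writerow([project, wallet, tx["hash"], tx["timeStamp"],
                                 tx["from"], tx["to"], tx["value"]])

if __name__ == "__main__":
    export_audit_trail()
```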
ali
ali@AliSarialtun·
@TobbyKitty It's interesting how so many of our nationalists end up admiring socialist leaders once the subject turns to Spain or Colombia. That's what social justice does, but you also need to take a look at yourself.
Turkish
0
0
0
144
tobbykitty.eth
tobbykitty.eth@TobbyKitty·
Is this guy's decency for real or what 🇪🇸🇪🇸🇪🇸
ibrahim Haskoloğlu@haskologlu

🔴#SONDAKİKA | Spanish Prime Minister Pedro Sánchez: • 23 years ago, another US administration dragged us into a war in the Middle East. • Back then, too, they claimed the war was about eliminating Saddam Hussein's weapons of mass destruction, and it was said the attack was being carried out to bring democracy. • In reality, it triggered the greatest wave of insecurity our continent has seen since the fall of the Berlin Wall. • The Iraq War led to a serious rise in jihadist terrorism, a major migration crisis in the Eastern Mediterranean, and a general increase in energy prices. • If you answer one illegality with another, that is how humanity's great catastrophes begin. • Spain is against this catastrophe and this war. Because we understand that governments exist to improve people's lives and to produce solutions to problems, not to make people's lives worse. • And it is absolutely unacceptable for leaders who fail at that task to cover up their failures with the smoke of war and to fill a few people's pockets along the way. • Spain is against this war, and no one will change its position.

Turkish
5
3
218
20.3K
ali retweeted
Yunus Emre Erdölen
Yunus Emre Erdölen@yunuserdolen·
Jane Fonda, who was once blacklisted by the industry for opposing the Vietnam War, has now joined protests condemning the attack on Iran. She reportedly said, "It's America that really needs regime change."
Turkish
85
2.8K
18.3K
535.1K
David Hoffman
David Hoffman@TrustlessState·
@NYCMayor @NYCMayor I encourage you to talk to your Iranian constituents. They feel safer today than on any other day in the last 47 years. It seems you might be confusing the Islamic Regime, which has occupied Iranian land and which we are striking, with actual Iranians.
English
15
16
412
20.8K
Mayor Zohran Kwame Mamdani
Mayor Zohran Kwame Mamdani@NYCMayor·
Today’s military strikes on Iran — carried out by the United States and Israel — mark a catastrophic escalation in an illegal war of aggression. Bombing cities. Killing civilians. Opening a new theater of war.  Americans do not want this. They do not want another war in pursuit of regime change. They want relief from the affordability crisis. They want peace. I am focused on making sure that every New Yorker is safe. I have been in contact with our Police Commissioner and emergency management officials. We are taking proactive steps, including increasing coordination across agencies and enhancing patrols of sensitive locations out of an abundance of caution. Additionally, I want to speak directly to Iranian New Yorkers: you are part of the fabric of this city — you are our neighbors, small business owners, students, artists, workers, and community leaders. You will be safe here.
English
67.2K
60.3K
406.5K
39.5M
ali
ali@AliSarialtun·
@TobbyKitty Unless you know what you are working for and adopt proper ideals, it's of no use.
Turkish
0
0
0
570
tobbykitty.eth
tobbykitty.eth@TobbyKitty·
We should be thanking Gazi Mustafa Kemal Atatürk day and night. These sons of bitches wanted to crush us too back in the day, and we kicked them the hell out. As a nation we have to work like dogs; if we fall, who's to say our turn won't come?
ibrahim Haskoloğlu@haskologlu

🔴#SONDAKİKA | US PRESIDENT TRUMP ANNOUNCED THAT THEY WILL CHANGE THE IRANIAN REGIME BY MILITARY MEANS. Trump, to Iranians: "When our job is done, take over your government; it will be yours. This will probably be your only chance for generations to come."

Turkish
22
12
381
26.9K
ali
ali@AliSarialtun·
@HoldstationW The draining still continues, though. The hacker refills gas and drains more tokens. The loss must be bigger by now. Is this going to be updated?
English
1
0
0
369
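The report above describes a still-active drain, where the attacker keeps topping up gas and moving more tokens out. Below is a minimal sketch of how an affected user might watch a compromised address for further outgoing ERC-20 transfers. The RPC endpoint and the address are hypothetical placeholders, and web3.py is assumed to be installed; this only alerts, it cannot stop anything.

```python
# Sketch only: poll for new outgoing ERC-20 transfers from a compromised address.
# The RPC endpoint and COMPROMISED address are hypothetical placeholders.
import time
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("https://YOUR_RPC_ENDPOINT"))  # hypothetical endpoint
COMPROMISED = "0x4444444444444444444444444444444444444444"   # hypothetical address

# ERC-20 Transfer(address,address,uint256) event signature, plus the "from" address
# padded to 32 bytes so we only match transfers leaving the compromised wallet.
TRANSFER_TOPIC = w3.keccak(text="Transfer(address,address,uint256)").hex()
FROM_TOPIC = "0x" + COMPROMISED[2:].lower().zfill(64)

def poll_outgoing_transfers(poll_seconds: int = 15) -> None:
    last_block = w3.eth.block_number
    while True:
        head = w3.eth.block_number
        if head > last_block:
            logs = w3.eth.get_logs({
                "fromBlock": last_block + 1,
                "toBlock": head,
                "topics": [TRANSFER_TOPIC, FROM_TOPIC],
            })
            for log in logs:
                print(f"outgoing transfer: token={log['address']} tx={log['transactionHash'].hex()}")
            last_block = head
        time.sleep(poll_seconds)

if __name__ == "__main__":
    poll_outgoing_transfers()
```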
Holdstation - The DeFAI Smart Wallet
🚨 HOLDSTATION INCIDENT UPDATE Total confirmed loss: 462,000 USDT We take this situation extremely seriously. Our team is actively investigating the root cause and strengthening all security layers. A compensation plan is being prepared and will be shared with the community. We appreciate your patience and will continue to provide transparent updates. Holdstation Team
English
49
15
86
27.4K
Nolan
Nolan@0x_nolan·
@AliSarialtun @zachxbt @codeesura I suspect it's some kind of VS Code extension. Could you check if you have any dangerous extensions installed?
English
1
0
0
45
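Following up on the suggestion in the reply above, here is a minimal sketch of how an affected user might enumerate installed VS Code extensions for review. It assumes the `code` command-line launcher (which ships with VS Code) is on PATH; the allow-list is a made-up example, and anything it flags still needs manual investigation of the publisher and recent updates.

```python
# Sketch only: list installed VS Code extensions and flag ones not on a personal allow-list.
# Assumes the `code` CLI is available; KNOWN_GOOD is a hypothetical example list.
import subprocess

KNOWN_GOOD = {
    "ms-python.python",
    "esbenp.prettier-vscode",
}

def installed_extensions() -> list[str]:
    """Return the extension IDs reported by the VS Code CLI."""
    result = subprocess.run(
        ["code", "--list-extensions"],
        capture_output=True, text=True, check=True,
    )
    return [line.strip() for line in result.stdout.splitlines() if line.strip()]

if __name__ == "__main__":
    for ext in installed_extensions():
        marker = "ok    " if ext in KNOWN_GOOD else "REVIEW"
        print(f"{marker}  {ext}")
```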
ali
ali@AliSarialtun·
@HoldstationW It has been horrible. Even though I was lucky enough to save a significant portion, I still can't imagine what's happening
English
2
0
0
445
Holdstation - The DeFAI Smart Wallet
We are actively investigating the incident and will provide further updates, including a remediation and compensation plan for affected users, as soon as possible. In the meantime, please safeguard your assets by transferring funds to a secure wallet. Thank you for your patience and cooperation.
English
5
1
9
6.2K
Holdstation - The DeFAI Smart Wallet
We recently experienced a security incident with our product. The extent of any impact is still being assessed, and our team is actively investigating. We'll notify affected users as soon as possible. Please stay calm and await the latest updates and remediation steps from our official X page. Thank you for your patience.
English
43
12
55
48.8K
ali
ali@AliSarialtun·
@HoldstationW I am having the worst night ever...
English
3
0
1
851
ali
ali@AliSarialtun·
@HoldstationW I am one of the affected users. Please see the information below.
English
0
0
0
21
ali
ali@AliSarialtun·
@tether please take a look
English
1
0
0
14
Tory | io.net 🦾
Tory | io.net 🦾@MTorygreen·
@AnthropicAI You trained on the open internet and then call it "distillation attacks" when others learn from you. Labs that like to preach "open research" are suddenly crying about open access. This is what happens when intelligence sits behind a centralized API with subsidized tokens.
English
28
58
2.8K
132.4K
Anthropic
Anthropic@AnthropicAI·
We’ve identified industrial-scale distillation attacks on our models by DeepSeek, Moonshot AI, and MiniMax. These labs created over 24,000 fraudulent accounts and generated over 16 million exchanges with Claude, extracting its capabilities to train and improve their own models.
English
7.3K
6.2K
54.6K
33.7M
Rob Bensinger ⏹️
Rob Bensinger ⏹️@robbensinger·
Hundreds of scientists, including 3/4 of the most cited living AI scientists, have said that AI poses a very real chance of killing us all. We're in uncharted waters, which makes the risk level hard to assess; but a pretty normal estimate is Jan Leike's "10-90%" of extinction-level outcomes. Leike heads Anthropic's alignment research team, and previously headed OpenAI's. This actually seems pretty straightforward. There's literally no reason for us to sleepwalk into disaster here. No normal engineering discipline, building a bridge or designing a house, would accept a 25% chance of killing a person; yet somehow AI's engineering culture has corroded enough that no one bats an eye when Anthropic's CEO talks about a 25% chance of research efforts killing every person. A minority of leading labs are dismissive of the risk (mainly Meta), but even the fact that “will we kill everyone if we keep moving forward?” is hotly debated among researchers seems very obviously like more than enough grounds for governments to internationally halt the race to build superintelligent AI. Like, this would be beyond straightforward in any field other than AI. Obvious question: How would that even work? Like, I get the argument in principle: “smarter-than-human AI is more dangerous than nukes, so we need to treat it similarly.” But with nukes, we have a detailed understanding of what’s required to build them, and it involves huge easily-detected infrastructure projects and rare materials. Response: The same is true for AI, as it’s built today. The most powerful AIs today rely on extremely specialized and costly hardware, cost hundreds of millions of dollars to build,¹ and rely on massive data centers² that are relatively easy to detect using satellite and drone imagery, including infrared imaging.³ Q: But wouldn’t people just respond by building data centers in secret locations, like deep underground? Response: Only a few firms can fabricate AI chips — primarily the Taiwanese company TSMC — and one of the key machines used in high-end chips is only produced by the Dutch company ASML. This is the extreme ultraviolet lithography machine, which is the size of a school bus, weighs 200 tons, and costs hundreds of millions of dollars.⁴ Many key components are similarly bottlenecked.⁵ This supply chain is the result of decades of innovation and investment, and replicating it is expected to be very difficult — likely taking over a decade, even for technologically advanced countries.⁶ This essential supply chain, largely located in countries allied to the US, provides a really clear point of leverage. If the international community wanted to, it could easily monitor where all the chips are going, build in kill switches, and put in place a monitoring regime to ensure chips aren’t being used to build toward superintelligence. (Focusing more efforts on the chip supply chain is also a more robust long-term solution than focusing purely on data centers, since it can solve the problem of developers using distributed training to attempt to evade international regulations.⁷) Q: But won’t AI become cheaper to build in the future? Response: Yes, but — (a) It isn’t likely to suddenly become dramatically cheaper overnight. If it becomes cheaper gradually, regulations can build in safety margin and adjust thresholds over time to match the technology. Efforts to bring preexisting chips under monitoring will progress over time, and chips have a limited lifespan, so the total quantity of unmonitored chips will decrease as well. 
(b) If we actually treated superintelligent AI like nuclear weapons, we wouldn’t be publishing random advances to arXiv, so the development of more efficient algorithms and more optimized compute would happen more slowly. Some amount of expected algorithmic progress would also be hampered by reduced access to chips. (c) You don’t need to ban superintelligence forever; you just need to ban it until it’s clear that we can build it without destroying ourselves or doing something similarly terrible. A ban could buy the world many decades of time. Q: But wouldn’t this treaty devastate the economy? A: It would mean forgoing some future economic gains, because the race to superintelligence comes with greater and greater profits until it kills you. But it’s not as though those profits are worth anything if we’re dead; this seems obvious enough. There’s the separate issue that lots of investments are currently flowing into building bigger and bigger data centers, in anticipation that the race to smarter-than-human AI will continue. A ban could cause a shock to the economy as that investment dries up. However, this is relatively easy to avoid via the Fed lowering its rates, so that a high volume of money continues to flow through the larger economy.⁸ Q: But wouldn’t regulating chips have lots of spillover effects on other parts of the economy that use those chips? A: NVIDIA’s H100 chip costs around $30,000 per chip and, due to its cooling and power requirements, is designed to be run in a data center.⁹ Regulating AI-specialized chips like this would have very few spillover effects, particularly if regulations only apply to chips used for AI training and not for inference.¹⁰ But also, again, an economy isn’t worth much if you’re dead. This whole discussion seems to be severely missing the forest for the trees, if it’s not just in outright denial about the situation we find ourselves in. Some of the infrastructure used to produce AI chips is also used in making other advanced computer chips, such as cell phone chips; but there are notable differences between these chips. If advanced AI chip production is shut down, it wouldn’t actually be difficult to monitor production and ensure that chip production is only creating non-AI-specialized chips. At the same time, existing AI chips could be monitored to ensure that they’re used to run existing AIs, and aren’t being used to train ever-more-capable models.¹¹ This wouldn't be trivial to do, but it's pretty easy relative to many of the tasks the world's superpowers have achieved when they faced a national security threat. The question is whether the US, China, and other key actors wake up in time, not whether they have good options for addressing the threat. Q: Isn't this totalitarian? A: Governments regulate thousands of technologies. Adding one more to the list won’t suddenly tip the world over into a totalitarian dystopia, any more than banning chemical or biological weapons did. The typical consumer wouldn’t even necessarily see any difference, since the typical consumer doesn’t run a data center. They just wouldn’t see dramatic improvements to the chatbots they use. Q: But isn’t this politically infeasible? A: It will require science communicators to alert policymakers to the current situation, and it will require policymakers to come together to craft a solution. But it doesn’t seem at all infeasible. Building superintelligence is unpopular with the voting public,¹² and hundreds of elected officials have already named this issue as a serious priority. 
The UN Secretary-General and major heads of state are routinely talking about AI loss-of-control scenarios and human extinction. At that point, the cat has already firmly left the bag. (And it's not as though there's anything unusual about governments heavily regulating powerful new technologies.) What's left is to dial up the volume on that talk, translate that talk into planning and fast action, and recognize that "there's uncertainty how much time we have left" makes this a more urgent problem, not less. Q: But if the US halts, isn’t that just ceding the race to authoritarian regimes? A: The US shouldn’t halt unilaterally; that would just drive AI research to other countries. Rather, the US should broker an international agreement where everyone agrees to halt simultaneously. (Some templates of agreements that would do the job have already been drafted.¹³) Governments can create a deterrence regime by articulating clear limits and enforcement actions. It’s in no country’s interest to race to its own destruction, and a deterrence regime like this provides an alternative path. Q: But surely there will be countries that end up defecting from such an agreement. Even if you’re right that it’s in no one’s interest to race once they understand the situation, plenty of people won’t understand the situation, and will just see superintelligent AI as a way to get rich quick. A: It’s very rare for countries (or companies!) to deliberately violate international law. It’s rare for countries to take actions that are widely seen as serious threats to other nations’ security. (If it weren't rare, it wouldn't be a big news story when it does happen!) If the whole world is racing to build superintelligence as fast as possible, then we’re very likely dead. Even if you think there's a chance that cautious devs could stay in control as AI starts to vastly exceed the intelligence of the human race (and no, I don't think this is realistic in the current landscape), that chance increasingly goes out the window as the race heats up, because prioritizing safety will mean sacrificing your competitive edge. If instead a tiny fraction of the world is trying to find sneaky ways to build a small researcher-starved frontier AI project here and there, while dealing with enormous international pressure and censure, then that seems like a much more survivable situation. By analogy, nuclear nonproliferation efforts haven’t been perfectly successful. Over the past 75 years, the number of nuclear powers has grown from 2 to 9. But this is a much more survivable state of affairs than if we hadn’t tried to limit proliferation at all, and were instead facing a world where dozens or hundreds of nations possess nuclear weapons. When it comes to superintelligence, anyone building "god-like AI" is likely to get us all killed — whether the developer is a military or a company, and whether their intentions are good or ill. Going from "zero superintelligences" to "one superintelligence" is already lethally dangerous. The challenge is to block the construction of ASI while there's still time, not to limit proliferation after it already exists, when it's far too late to take the steering wheel. So the nuclear analogy is pretty limited in what it can tell us. But it can tell us that international law and norms have enormous power. Q: But what about China? Surely they’d never agree to an arrangement like this. A: The CCP has already expressed interest in international coordination and regulation on AI. 
E.g., Reuters reported that Chinese Premier Li Qiang said, "We should strengthen coordination to form a global AI governance framework that has broad consensus as soon as possible."¹⁴ And, quoting The Economist:¹⁵ "But the accelerationists are getting pushback from a clique of elite scientists with the Communist Party’s ear. Most prominent among them is Andrew Chi-Chih Yao, the only Chinese person to have won the Turing award for advances in computer science. In July Mr Yao said AI poses a greater existential risk to humans than nuclear or biological weapons. Zhang Ya-Qin, the former president of Baidu, a Chinese tech giant, and Xue Lan, the chair of the state’s expert committee on AI governance, also reckon that AI may threaten the human race. Yi Zeng of the Chinese Academy of Sciences believes that AGI models will eventually see humans as humans see ants. "The influence of such arguments is increasingly on display. In March an international panel of experts meeting in Beijing called on researchers to kill models that appear to seek power or show signs of self-replication or deceit. A short time later the risks posed by AI, and how to control them, became a subject of study sessions for party leaders. A state body that funds scientific research has begun offering grants to researchers who study how to align AI with human values. [...] "In July, at a meeting of the party’s central committee called the 'third plenum', Mr Xi sent his clearest signal yet that he takes the doomers’ concerns seriously. The official report from the plenum listed AI risks alongside other big concerns, such as biohazards and natural disasters. For the first time it called for monitoring AI safety, a reference to the technology’s potential to endanger humans. The report may lead to new restrictions on AI-research activities. "More clues to Mr Xi’s thinking come from the study guide prepared for party cadres, which he is said to have personally edited. China should 'abandon uninhibited growth that comes at the cost of sacrificing safety', says the guide. Since AI will determine 'the fate of all mankind', it must always be controllable, it goes on. The document calls for regulation to be pre-emptive rather than reactive." The CCP is a US adversary. That doesn't mean they're idiots who will destroy their own country in order to thumb their nose at the US. If a policy is Good, that doesn't mean that everyone Bad will automatically oppose it. Policies that prevent human extinction are good for liberal democracies and for authoritarian regimes, so clueful people on all sides will endorse those policies. The question, again, is just whether people will clue in to what's happening soon enough to matter. My hope, in writing this, is to wake people up a bit faster. If you share that hope, maybe share this post, or join the conversation about it; or write your own, better version of a "wake-up" warning. Don't give up on the world so easily.
Rob Bensinger ⏹️ tweet media
English
81
201
704
105.5K
ali
ali@AliSarialtun·
@tiarazen Had you looked at this?
ali tweet media
Turkish
1
0
0
81
Esra K.
Esra K.@tiarazen·
Right before going to sleep, I try to find and download PDFs of the printed books I own so I can read in the dark. It's also nice for reading here and there when I get the chance and have left my book at home. Since I couldn't find a PDF of my current book, I went to bed in the dark and I'm posting this tweet :)
Turkish
3
0
12
2.2K