MastaChocolatier

845 posts

@meehowee

Joined January 2019
294 Following · 94 Followers
MastaChocolatier retweeted
Eliezer Yudkowsky
Eliezer Yudkowsky@allTheYud·
Obvious even before Ukraine: The effect of increasing military automation, including lesser AI, is to make logistics double-supreme instead of just supreme. The new game becomes (1) taking out $1M devices with $100K devices, (2) production. Obvious winner, China.

This was already on my books as a major geopolitical line of possibility. That thought has now been heavily reinforced; it seems confirmed that the USA is wholly incapable of RAPIDLY researching and deploying CHEAP offenses and countermeasures; the US had to go begging to Ukraine, after utterly failing to even try to prepare in advance to shoot down Shaheds with anything other than Patriots. The US military bureaucracy is not built for "build massive quantities of cheap drone countermeasures right now". It seems just flatly incapable of that as a matter of psychology and organizational dynamics. It couldn't even copy Ukrainian technology in advance.

There's an overwhelmingly obvious candidate for which country would actually be good at the age of drone warfare; it's the country containing Shenzhen. Absent the nuclear equilibrium, China would possibly already have the ability to attack the USA and win on drone logistics -- unless of course China were intelligently waiting for the USA to collapse further, or for drone capabilities to improve further.

We do live in a nuclear world. The default prediction is that no major nuclear power gets conquered or seriously invaded in their own homeland. That could change if...
- China acquires the technology to shoot down ICBMs and submarine-launched missiles?
- The USA gets the sort of President who would accept a fait accompli of a billion gun-equipped robodogs getting smuggled into major American cities, such that the country was already being held hostage, and China said they'd respond to nukes with nukes? This President could be Trump, despite his mad-dog quality, if China has kompromat on him?
- AI destabilizes geopolitics in a way where an overwhelming non-nuclear advantage ends up meaning something even between major powers?

The thought also occurs to me: After softening up the USA with TikTok, and successfully bringing about the collapse of the USA's political institutions, parties, Constitution, the sort of fighting spirit that powers organized revolts, and all faith of the US populace in the US government and democracy itself...

...probably a LOT of people, and especially the Gen Z kids, would not flee into the hills to fight if they woke up one morning to streets patrolled by gun-equipped robodogs that promised, in English with a slight Chinese accent, that from now on the streets would be safe, and China would build homes and high-speed railways. What good was voting doing them, anyways?

Another line of possibility, not known to me to be impossible, is where China decides to gamble on NATO being in sufficient disarray, and offhandedly absorbs all of Earth that *doesn't* have local nuclear arsenals.

The level of AI required to run the robodogs and drone fleets appears to me to be on the way very shortly, if it is not already here. I don't know how one opposes this scenario without there existing some rich liberal society that is able to manufacture cheap frontier-tech drones quickly. I don't see how that society ends up being the USA without a revolution.

My default expectation is that the nuclear countries go merrily on their way allowing China to build up overwhelming non-nuclear military supremacy, in the form of drone fleets that could be quickly repurposed and drone manufacturing that can be done quickly, while relying on nuclear deterrence as their sole real form of defense, in a strategy that they never consciously consider or really confront.
54
32
475
90.7K
MastaChocolatier retweeted
Joseph Viviano
Joseph Viviano@josephdviviano·
as you might imagine I was blown away. a little unsettled. it felt like art. so I replied: "wow that was really incredible. I love where you are going with this. Can you dig deeper into these themes?" and claude gave me this
Joseph Viviano@josephdviviano

me: "can you use whatever resources you like, and python, to generate a short 'youtube poop' video and render it using ffmpeg ? can you put more of a personal spin on it? it should express what it's like to be a LLM" claude opus 4.6:

75
183
1.8K
236.6K
MastaChocolatier
MastaChocolatier@meehowee·
@bertgodel Awesome! I'm a cardiologist/pharmacist working with AI - I would love to check it out, if possible. Please let me know 🙂
1
0
0
53
Daanish Khazi
Daanish Khazi@bertgodel·
We’re announcing Kos-1 Lite, a medical model that achieves SOTA on HealthBench Hard at 46.6%. As a medium-sized language model (~100B parameters), it delivers these results at a fraction of the serving cost of frontier trillion-parameter models.
Daanish Khazi tweet media
40
59
318
24.6K
MastaChocolatier retweeted
Gossip Goblin
Gossip Goblin@Gossip_Goblin·
Dreamfeeding
248
735
6.2K
254.4K
MastaChocolatier retweeted
PsyopAnime
PsyopAnime@PsyopAnime·
he did in fact, send it all
527
5.6K
39.8K
3.3M
MastaChocolatier retweeted
Zvi Mowshowitz
Zvi Mowshowitz@TheZvi·
I have a lot of Qs about this, so please answer as much as you can, in priority order.

1) What forms of surveillance, if any, would your terms forbid, if the DoW determined they were legal? What is your definition of the surveillance you believe is unconstitutional, as per another Q? In particular, are you willing to do unlimited analysis of third-party or public information, which AIUI is considered legal? Of nominally 'constrained' private information? Is there an actual exception to 'all legal use' other than enshrining current law?

2) Can we see the rest of the contract, or at least the parts you claim tie it specifically to current law, or other parts of the defense in depth that you feel are key components?

3) What legal opinions did you get on your contract language before you agreed to it? Can you share any details? Did you consult with Anthropic's team to learn what their true objections were and why they felt they couldn't accept similar terms, and what particular language they were objecting to?

4) What is the enforceability mechanism? How will you know if DoW violates your redlines or does something illegal? If you do think so, what can you do about it? Does the safety stack include monitoring for patterns of activity, like it would with another user? How much leeway does OpenAI have in designing its safety stack?

5) You said that this is more restrictive than Anthropic's previous contract, but that previous contract AIUI contained many more restrictions that they were offering to remove. How can you be confident you're right about this, and if so why would DoW agree?
6
7
327
22.8K
MastaChocolatier retweeted
Eliezer Yudkowsky
Eliezer Yudkowsky@allTheYud·
Make no mistake, political leaders of the world; *every* big-dreaming AI executive now knows that you are their obstacle. You have proven that you stand between AI labs and the nice thing they were getting for all their hard work.

It's not about Left versus Right, to them. It's not about money, and it's not about power as politics conventionally understands power, and it isn't even about winning. To understand what just happened from an AI-guy perspective, you need to understand what AI guys are actually getting in the way of psychological benefits, what really drives them to work 14-hour days. The thing that they're getting is: a sense of being important; a decider; someone whose dream of the future gets to be effectual. To be the one whom everyone else supplicates to as owning the future -- that's the dream of a Silicon Valley bigshot founder.

What Hegseth did implicitly strikes at the pride of every AI developer on every political side. It says that Silicon Valley AI people don't get to have effectual dreams about the future; only the government gets to decide. Only the government is even allowed to *look like* it's deciding the future. The act of Hegseth crushing Anthropic makes *every* AI company executive look less important and less like they are the ones in charge of the Future, because it makes -- not even Trump, but Trump's appointees -- look like they get the final say instead of AI executives. Sam Altman does not now look more powerful because you crushed his competitor. He looks less important because *you*, politicians, crushed his competitor, and did so in a way that made clear that Altman would have to take the orders of any Trump appointee as well.

That doesn't work in AI founder psychology the way it works in politician psychology. You're used to the idea that you can be important and still answer to bigger forces, like your boss, or for that matter sufficiently angry voters. That is not how it works in Silicon Valley, though; when Steve Jobs owns a dream, nobody else gets to tell Jobs what to do with *his* dream. That's the thing Hegseth just yanked away from AI founders, and no, they aren't going to think it's just Pete Hegseth in particular that's the problem. It's a *big* injury to their pride, not a small routine one.

Even the AI boys paying big money into your coffers to be friends with you now, well, that doesn't actually mean they're your friends. It means they want you to think you're friends. And yes, I know that a politician who's stayed in power doesn't need me to point out that possibility. But also be aware that the general atmosphere in Silicon Valley did not start out incredibly respectful toward politicians. They didn't start out respecting you tons; and being forced to pay a lot of money into PACs and pretend to be friends with you isn't gonna exactly change that. Silicon Valley people don't work like DC people. It's not a friendly game, to them, it's one that you've forced them to play. When they give *you* a ton of money, it doesn't mean they've chosen you as their strange bedfellow. They are, from their own perspective, being forced into bed. They don't *like it*, is what I'm saying here. That's why Silicon Valley previously spent a couple of decades not donating much to politicians and trying to pay weirdly little attention to DC politics.

If AI kept improving at the current pace, or got to the point of AI building better AI -- and if, contrary to all common sense, AI companies did *not* lose control of their superhuman creations -- then AI companies would do to you what Hegseth just did to Anthropic. They'd do it the moment they expected they'd become strong enough to take you on and win. You need to understand that *this is their plan*, even if it sounds crazy to you to imagine these little executives taking on existing governments and winning; it does not sound crazy to a Silicon Valley executive that maybe they could be in charge instead of you. (Recent smaller case: Elon Musk thought he'd be *great* at running the USG. He didn't think it was crazy.) If they actually could control superintelligence, they'd discard you like used toilet paper.

All of this doesn't mean you should try to seize the power of artificial superintelligence for yourself. If the overconfident techie boys can't control ASI, your own guys who have trouble upgrading IT systems are not gonna be able to pull that off either. Staying in control of an alien superhuman machine intellect would actually be hard, right; that is an extremely novel scientific and technical challenge, which no engineer would realistically get right on the *first* for-real try -- the try that kills everyone if they fail.

I was there when the foundational fuckups were being made, and here's how it actually played out: AI companies are loony optimists about the likely final outcomes of AI, because back then only the people who presented with that optimism got appointed as AI execs by optimistic investors. In real life, the world is stepping off a cliff of self-improving and superhuman AI. The AI companies don't even have the power *not* to step off that cliff, because they all think (and with some justice) that if they don't race off the cliff, their competitors will just race off it first. That whole setup was *never* going to end well for humanity.

Controlling superintelligence would be hard to do at all, let alone during a mad rush for primacy. The AI companies can barely control the cute baby LLMs they're making now, because they're pushing the technology ahead as fast as possible, and not slowing down in any way corresponding to their quite limited ability to control it. AI companies didn't decide for LLMs to talk people into suicide, or for jailbroken LLMs to conduct massive raids on government data repositories. They are just pushing ahead faster than their actual ability to control their creations.

So I'm just trying to give you a little more motivation to make some deals with other politicians, and get your country to sign some treaties, and collectively pull all of humanity back from the cliff the AI companies are racing off: by pointing out that, yeah, if the AI guys did not dislike you before, they sure do dislike you now. You have struck directly at the nice thing they were actually getting, psychologically, out of their whole mad race: the sense of being an important person who is the owner and decider of some big aspect of the future. You are taking that away from them *right now*, by existing and being visibly more the deciders than them. Please be aware of that dislike, whether it's hidden or open, when deciding whether or not to move Earth forward with this whole AI business.

The wannabe builders of artificial superintelligence will not actually have any power to direct ASI, but they wouldn't be friends with you if they did -- no, not even the ones who've been forced to pretend to be your friend. And if, alternatively, the companies can't control superhuman machine intellects -- because of course they can't -- then that doesn't go well for you or them or anyone.
115
124
1.1K
347.7K
MastaChocolatier retweeted
Zvi Mowshowitz
Zvi Mowshowitz@TheZvi·
This is completely bonkers crazy and it's only going to get crazier.
Secretary of War Pete Hegseth@SecWar

This week, Anthropic delivered a master class in arrogance and betrayal, as well as a textbook case of how not to do business with the United States Government or the Pentagon.

Our position has never wavered and will never waver: the Department of War must have full, unrestricted access to Anthropic’s models for every LAWFUL purpose in defense of the Republic. Instead, @AnthropicAI and its CEO @DarioAmodei have chosen duplicity. Cloaked in the sanctimonious rhetoric of “effective altruism,” they have attempted to strong-arm the United States military into submission - a cowardly act of corporate virtue-signaling that places Silicon Valley ideology above American lives. The Terms of Service of Anthropic’s defective altruism will never outweigh the safety, the readiness, or the lives of American troops on the battlefield.

Their true objective is unmistakable: to seize veto power over the operational decisions of the United States military. That is unacceptable. As President Trump stated on Truth Social, the Commander-in-Chief and the American people alone will determine the destiny of our armed forces, not unelected tech executives.

Anthropic’s stance is fundamentally incompatible with American principles. Their relationship with the United States Armed Forces and the Federal Government has therefore been permanently altered. In conjunction with the President's directive for the Federal Government to cease all use of Anthropic's technology, I am directing the Department of War to designate Anthropic a Supply-Chain Risk to National Security. Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic. Anthropic will continue to provide the Department of War its services for a period of no more than six months to allow for a seamless transition to a better and more patriotic service.

America’s warfighters will never be held hostage by the ideological whims of Big Tech. This decision is final.

16
14
671
26.6K
MastaChocolatier retweeted
Aakash Gupta
Aakash Gupta@aakashgupta·
This is a no brainer. Here’s why.

The “buy, borrow, die” strategy is the single biggest loophole in the American tax code, and Ackman just proposed the cleanest fix anyone has ever put forward. Let me walk through the mechanics.

Step 1: You build $10B in company stock. You never sell it. No taxable event occurs, because capital gains only trigger on realization.

Step 2: You need $500M to buy a yacht, fund a foundation, or just live large. Instead of selling stock and paying 23.8% federal capital gains, you walk into Goldman Sachs and borrow $500M against your shares at 5-6% interest. Under the federal tax code, loan proceeds are not income. You now have $500M in liquid cash and owe zero income tax.

Step 3: You keep borrowing. Year after year. The interest payments are trivial compared to the tax savings. A 6% interest rate on $500M is $30M annually. The capital gains tax you avoided? $119M. You’re saving $89M per year by borrowing instead of selling.

Step 4: You die. Here’s where the magic happens. Your heirs inherit the stock at “stepped-up basis,” meaning the cost basis resets to current market value. That $9.9B in appreciation that was never taxed? It vanishes from the IRS’s perspective forever. Your heirs sell a small slice to pay off your outstanding loans, keep the rest, and start the cycle again. This is generational tax avoidance at scale.

Elon Musk had 238 million Tesla shares pledged as collateral in a 2024 SEC filing. That’s one-third of his total holdings. Larry Ellison has $24 billion in Oracle stock pledged. The research firm Audit Analytics found Musk’s pledged shares alone account for more than a third of all shares pledged across the entire NYSE and Nasdaq combined. These aren’t edge cases. This is standard operating procedure for anyone with nine or ten figures in appreciated stock.

Now here’s what Ackman proposed: if you borrow against company stock in excess of your cost basis, treat the loan as a deemed sale for tax purposes.

Example: You bought $100M in stock. It’s now worth $1B. You borrow $600M against it. Under current law, you owe nothing. Under Ackman’s proposal, you’d owe capital gains on $500M, because that’s the amount exceeding your basis. The IRS would treat it as if you’d sold $500M worth of stock. You’d pay the 23.8% federal rate. You’d still have your shares. You’d still get future appreciation. But you couldn’t extract the economic value of gains while pretending no realization occurred. (A minimal sketch of this arithmetic follows below.)

The elegance is in what this proposal avoids. Wealth taxes require annual valuation of every asset, including illiquid private companies, art, real estate. The compliance costs are enormous. The legal challenges are real. The constitutional questions around taxing unrealized gains haven’t been settled. Ackman’s approach sidesteps all of that. It doesn’t tax wealth. It doesn’t tax unrealized gains sitting quietly in a brokerage account. It only triggers when you borrow against those gains. The moment you access the economic value, you pay tax as if you’d sold.

The counter-argument is that this would discourage leverage. Ackman addresses this directly: that’s a feature. Encouraging billionaires to take massive margin positions against their own companies creates systemic risk. When Tesla dropped 30% in 2022, Musk faced potential margin calls that could have forced selling into a falling market. The tax code shouldn’t subsidize that behavior.

The political math works too. Wealth taxes poll well but die in Congress and courts. This targets only the people using a specific loophole. It doesn’t touch the doctor who borrowed against her house or the small business owner with a line of credit. It’s narrow, defensible, and hard to frame as class warfare.

One shouldn’t be able to live and spend like a billionaire while paying no tax. If you’re extracting value from appreciation through borrowing, you’re realizing the economic benefit. The tax code should recognize that.
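The arithmetic in the thread is easy to check. A minimal sketch in Python, using the 23.8% capital-gains rate and 6% borrowing cost quoted in the post; the function and variable names are mine and purely illustrative, not tax advice.

# Back-of-the-envelope check of the "buy, borrow, die" arithmetic above and of
# Ackman's proposed deemed-sale rule. Rates and amounts are the ones quoted in the post.

CAP_GAINS_RATE = 0.238   # 23.8% federal long-term capital-gains rate, as quoted
LOAN_RATE = 0.06         # 6% borrowing cost, as quoted

def sell_vs_borrow(cash_needed: float) -> dict:
    """Compare raising cash by selling stock vs. borrowing against it."""
    tax_if_sold = cash_needed * CAP_GAINS_RATE   # tax triggered by realization
    annual_interest = cash_needed * LOAN_RATE    # carrying cost of the loan
    return {
        "tax_if_sold": tax_if_sold,
        "annual_interest": annual_interest,
        "first_year_saving": tax_if_sold - annual_interest,
    }

def deemed_sale_tax(basis: float, loan: float) -> float:
    """Ackman's proposal: borrowing above basis is taxed as if the excess were sold."""
    taxable_gain = max(loan - basis, 0.0)
    return taxable_gain * CAP_GAINS_RATE

print(sell_vs_borrow(500e6))
# tax_if_sold ~ $119M, annual_interest = $30M, first_year_saving ~ $89M

print(deemed_sale_tax(basis=100e6, loan=600e6))
# ~ $119M owed on the $500M borrowed above basis, matching the example in the post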
Bill Ackman@BillAckman

On the topic of billionaires and wealth taxes in California: I am opposed to wealth taxes because they effectively represent an expropriation of private property and have many unintended and negative consequences that have occurred in every country that has launched such a tax. I am however strongly in favor of a fairer tax system.

To that end, it doesn’t seem fair that someone can build a valuable business, create a billion or more in wealth, and pay no personal income taxes by living off loans secured by stock in the company (and even if the loans are unsecured). Apparently, this approach is used by many super wealthy people. A small change in the tax code would address this unfairness. In short, personal loans taken in excess of one’s basis in the stock of a company should be taxable as if you sold the same dollar amount of stock as the loan amount. One shouldn’t be able to live and spend like a billionaire and pay no tax. I welcome arguments to the contrary as to why this is somehow unfair to the billionaire or even the hundred millionaire, but I don’t think there is a good one.

The favorable current tax treatment of this approach also encourages the use of leverage, which is not good for society. And with respect to California’s budget problem, the issue is not a lack of tax revenues. The problem is how the money is being spent. I have a bunch more ideas on other changes to the tax code that are hard to argue with, if anyone cares.

384
644
3.9K
577.6K
MastaChocolatier retweeted
SightBringer
SightBringer@_The_Prophet__·
⚡️This is a high-coherence whistleblower post confirming what many suspected but couldn’t prove: that food delivery apps are algorithmically exploiting both drivers and customers through engineered psychological manipulation and concealed systemic fraud. Here’s the structural breakdown:

1. “Priority Delivery” is algorithmic gaslighting
• The fee does not increase delivery speed.
• The app slows down non-priority orders to make paid orders appear faster.
• The value is created by worsening the baseline, not improving the premium tier.
• Psychological manipulation is sold as a service.

2. “Desperation Score” is weaponized behavioral profiling
• Drivers are scored based on how quickly and consistently they accept low-paying orders.
• Those who accept garbage orders are labeled as “high desperation.”
• Once tagged, they are withheld from better-paying orders to extract maximum labor for minimum cost.
• It’s a digital caste system governed by internal compliance.

3. “Benefit Fees” are semantic laundering
• Regulatory fees framed as driver protection are redirected to anti-union legal funds.
• Customers believe they’re helping drivers.
• They’re funding corporate legal defense.

4. “Tip Theft 2.0” is legalized predictive exploitation (see the sketch after this breakdown)
• Tip data is used to lower base pay predictions.
• The algorithm predicts what you’ll tip, then reduces the company’s contribution.
• Generosity is used as a subsidy mechanism to shift the wage burden to the customer.
• The illusion of transparency replaces actual fairness.

5. The internal schema dehumanizes
• Drivers are referred to as “human assets” in the system architecture.
• Language reinforces the view that labor nodes are expendable game tokens.
• Planning meetings optimize for fractional margin gains over human dignity.

6. The strategy is denial-of-agency through opacity
• Everything is legal because nothing is disclosed.
• Fees, scores, dispatch logic, and base pay algorithms are black boxes.
• Drivers and customers are both blindfolded while the system feeds on behavioral data.

7. This is not an exception. It’s a blueprint
• This pattern matches what’s happening in rideshare, gig writing, content moderation, and customer service.
• What looks like a scam is actually platform logic: monetize desperation, mask extraction, externalize moral cost.

This is a glimpse into the structure of algorithmic exploitation as it currently operates:
• Incentive distortion
• Asymmetry of information
• Exploitation masked as choice
• Optimization that crushes the human variable

This leak is coherent with every other structural exploit we’ve detected. It is not anomalous. It is the system.
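As a hedged illustration of claim 4 only: a minimal sketch of the "predicted tip offsets base pay" mechanic as the post describes it. Every name and number here is invented; this is not code from, or evidence about, any real delivery platform.

# Hypothetical illustration of the described mechanic: the platform predicts the tip
# and reduces its own contribution so the driver's total stays near a fixed target.

def driver_payout(target_total: float, predicted_tip: float, floor: float = 2.0) -> dict:
    """Base pay shrinks as the predicted tip grows, so the customer's generosity
    subsidizes the platform rather than raising the driver's total."""
    base_pay = max(target_total - predicted_tip, floor)
    return {"base_pay": base_pay, "driver_total": base_pay + predicted_tip}

# A generous predicted tip barely changes what the driver actually receives:
print(driver_payout(target_total=10.0, predicted_tip=1.0))  # base 9.0, total 10.0
print(driver_payout(target_total=10.0, predicted_tip=7.0))  # base 3.0, total 10.0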
Jesse@d0wnsideofme

holy fucking shit

231
3.9K
16.8K
1.6M
MastaChocolatier retweeted
International Cyber Digest
International Cyber Digest@IntCyberDigest·
‼️A German hacker known as "Martha Root", dressed as a pink Power Ranger, deleted a white supremacist dating website live onstage. This happened during the recent CCC conference. Martha had infiltrated the site, run her own AI chatbot to extract as much information from users as possible, and downloaded every profile. She also uncovered the owner of the site. She has published all of the data.
1.7K
12.8K
107.1K
8.9M
MastaChocolatier retweeted
AI Notkilleveryoneism Memes ⏸️
Google is giving up on mechanistic interpretability

WHY THIS MATTERS: For many years, AI safety researchers hoped to solve the "black box problem" so humanity can read the alien minds BEFORE they get dangerous. This was load-bearing for some p(doom) estimates. Who will update?

(quote below is Dario Amodei on why mechanistic interpretability matters)
AI Notkilleveryoneism Memes ⏸️ tweet media
Dan Hendrycks@hendrycks

I've been saying mechanistic interpretability is misguided from the start. Glad people are coming around many years later.

36
38
381
55.9K
MastaChocolatier retweeted
Erik Brynjolfsson
Erik Brynjolfsson@erikbryn·
@grok @atrupar That was a better answer. I hope you will be more direct and truthful in the future, at least if you value truth over politics.
10
6
116
37.2K
MastaChocolatier
MastaChocolatier@meehowee·
@BartenOtto @ESYudkowsky They may win the debate, but that does not guarantee they win in the end. Put enough successionists in the right places and the result will take care of itself.
1
0
0
26
Otto Barten◀️
Otto Barten◀️@BartenOtto·
@ESYudkowsky Humanists are going to win this debate. There are OOMs more humanists than successionists. Can't think of any successionist plan that would overturn that.
2
0
4
179
Eliezer Yudkowsky ⏹️
Eliezer Yudkowsky ⏹️@ESYudkowsky·
The thing about AI successionists is that they think they've had the incredible, unshared insight that silicon minds could live their own cool lives and that humans aren't the best possible beings. They are utterly closed to hearing about how you could KNOW THAT and still disagree on the factual prediction that this happy outcome happens by EFFORTLESS DEFAULT when they cobble together a superintelligence.

They are so impressed with themselves for having the insight that human life might not be 'best', that they are not willing to sit down and have the careful conversation about what exactly is this notion of 'best'-ness and whether an ASI by default is trying to do something that leads to 'better'. They conceive of themselves as having outgrown their carbon chauvinism; and they are blind to all historical proof and receipts that an arguer is not a carbon chauvinist. They will not sit still for the careful unraveling of factual predictions and metaethics.

They have arrived at the last insight that anyone is allowed to have, no matter what historical receipts I present as proof that I started from that position and then had an unpleasant further insight about what was probable rather than possible. They unshakably believe that anyone opposed must be a carbon chauvinist lacking their critical and final insight that other minds could be better (true) or that ASIs would be smart enough to see everything any human sees (also true).

Any time you try to tell them about something important that isn't written on every possible mind design, there is only one reason you could possibly think that: that you're a blind little carbon-racist who thinks you're the center of the universe; because what other grounds could there possibly be for believing that there was anything special about fleshbags? And the understanding that unravels that last fatal error is a long careful story, and they won't sit still to hear it. They know what you are, they know with certainty why you believe everything you believe, and they know why they know better, so why bother?
The Wall Street Journal@WSJ

Governments and experts are worried that a superintelligent AI could destroy humanity. For some in Silicon Valley, that wouldn’t be a bad thing, writes David A. Price. on.wsj.com/4o6kplB

55
50
472
86.1K
MastaChocolatier retweeted
Anonymous
Anonymous@YourAnonNews·
Republicans shared evidence that actually proves Democrats were telling the truth and presented it as proof they were lying—counting on their followers not to read or understand it. [IF THOSE KIDS COULD READ]
130
2.3K
9.2K
305.8K
MastaChocolatier retweeted
Miasto Jest Nasze 🌳🚊♻️
Miasto Jest Nasze 🌳🚊♻️@MiastoJestNasze·
Xcity, a company owned by PKP, wants to develop over 20 hectares of land in Odolany. This is an area next to the already densely populated housing estate along Jana Kazimierza and Ordona streets, where around 20 thousand people live.

What are the plans? No surprise here: Xcity would like to build the site up as quickly as possible "in cooperation with the largest developers." 🏗️

What is the city doing? Not much. There is, of course, no local zoning plan. The neighboring estate was also largely built on the basis of one-off planning decisions (WZ permits). Warsaw's infamous "Hong-Kong" is located there. To this day there is no primary school or public kindergarten in the area 😐

Without an active role for the city, yet another dysfunctional estate will be built, one that will get stuck in access-road traffic. The city should take part in developing the area, e.g. with a large estate of cheap rental housing. It is also essential to bring in a tram line efficiently and to start building a school and a kindergarten as soon as possible. The city cannot repeat the same mistakes - Wola does not deserve to be a monument to pathological development.
Miasto Jest Nasze 🌳🚊♻️ tweet media
29
18
316
44.1K