B (@Sayebin)

608 posts

👨‍👩‍👧, blessed, music, tech, missing few funny bones

Joined October 2015
186 Following · 5 Followers
B
B@Sayebin·
@MarkJCarney What’s the role of pension funds?
0 replies · 0 reposts · 4 likes · 37 views
Mark Carney
Mark Carney@MarkJCarney·
We’re working with partners abroad to bring home deals for Canadians. Australia's IFM announced plans to invest up to $10B in Canada, and signed an agreement with our pension funds to deepen cooperation and investment.
1.3K replies · 692 reposts · 3.3K likes · 131.9K views
B
B@Sayebin·
@brianlilley 💯 The EU is more than a market - free movement of people/immigration is part of the deal. Canada's debt has ballooned to almost double in one year, severe housing shortage, 2nd-highest spender on healthcare but 23rd in quality, immigration fraud… Lots to fix first
0 replies · 0 reposts · 0 likes · 10 views
B
B@Sayebin·
@TheHillTimes @HarrisAuthor A difference of opinion with the current US government, coupled with our own government's missing-in-action diplomacy, does not equate to an automatic yes to economic policy suicide.
0 replies · 0 reposts · 0 likes · 2 views
B
B@Sayebin·
@jbeggs252 Is there a big enough export market to bear the fuel, transportation, insurance costs, etc., of crossing the Atlantic and still have the Canadian-made car be cheaper there? Need to be pragmatic; ignoring a market at your doorstep is economically foolish. High time for diplomacy to work.
0 replies · 0 reposts · 0 likes · 3 views
@JohnBeggs
@JohnBeggs@jbeggs252·
While the entire free world is decoupling from relationship with US, Poilievre proposed deepening CDN integration and dependence. He's not running for PM. He's running for Governor.
123 replies · 347 reposts · 1.4K likes · 17.4K views
Fit_Fusion
Fit_Fusion@FitFusion__·
Only 0.0001% can find it...!!
[image]
292 replies · 14 reposts · 116 likes · 287.1K views
B
B@Sayebin·
@KirkLubimov Isn’t 100M by 2050 a Liberal policy? The fix is required at the Liberal government level. Vote for the right representatives.
0 replies · 0 reposts · 0 likes · 20 views
Kirk Lubimov
Kirk Lubimov@KirkLubimov·
What?! Indian High Commissioner to Canada tells us we need at least 100 million population. "You are the 2nd largest country in the world with a 40 million population, you need AT LEAST 100 million population." No, we don't. We can be a zero tax jurisdiction if we increase productivity and didn't have to support hordes of foreigners.
217 replies · 202 reposts · 961 likes · 62.2K views
B
B@Sayebin·
@Tablesalt13 Steer the action back to Canada Gov to deliver a common sense immigration policy.
0 replies · 0 reposts · 0 likes · 16 views
B
B@Sayebin·
@NathanLands It’s one company calling out that its model is not reliable in 2 areas and needs human oversight. Good call - liability protection. That’s common ground enough for 2 parties to work out. No one knows what went down or what makes OpenAI safer.
0 replies · 0 reposts · 0 likes · 6 views
B
B@Sayebin·
@JeffLadish This - ‘ultimately the companies and the government need to work together to prevent AI from getting wildly out of control. We have the world’s best AI researchers and scientists.‘
0 replies · 0 reposts · 0 likes · 4 views
Jeffrey Ladish
Jeffrey Ladish@JeffLadish·
This is a huge mistake. The government has every right to choose not to work with any AI company it doesn’t want to do business with. And indeed, I think the government should be demanding far more oversight over what AI companies can and cannot do. This… isn’t that.

The DoW agreed to Anthropic’s contract terms and later decided they didn’t like them. And then decided to try to destroy Anthropic if they couldn’t get exactly what they wanted from the company. The government has never labeled an American company as a supply chain risk before today. This signals a fundamental misunderstanding about what’s happening in AI right now.

This move will not just antagonize Anthropic. There will be huge rippling repercussions throughout all the top AI companies. Look around on Twitter. Already, top AI researchers from all the big companies are pissed. This is not a good way to do AI company oversight, and it will make proper future oversight harder.

Here’s the problem: we may be very close to full recursive self-improvement. Millions of AIs making smarter AIs making smarter AIs. We have absolutely no idea how to do that safely. We desperately need to coordinate to avoid disaster. That’s what the US government should focus on. That’s where they should step in and require AI companies to toe the line. Instead, they’re focused on a political side show and alienating the entire AI talent base in the process!

And what do they gain by that? It’s not obvious that the AI companies will just fall in line. This is a complicated game, and the AI companies, if they act together, wield enormous power. I’m not happy about this, but it’s true. The government also has enormous power. And it’s easier for any of these actors to do damage than to create shared value.

Ultimately the companies and the government need to work together to prevent AI from getting wildly out of control. We have the world’s best AI researchers and scientists. We have not made nearly enough progress in interpretability or alignment to be confident we can retain control through full RSI. When I talk to other researchers, the vibe is pretty grim.

These political leaders have no clue, and it’s really quite tragic. Hegseth has no idea he’s putting his whole family in danger. This isn’t about wokeness or the political issue of the day. I hope our leaders can come to see this in time, for all of our sakes.
Secretary of War Pete Hegseth@SecWar

This week, Anthropic delivered a master class in arrogance and betrayal as well as a textbook case of how not to do business with the United States Government or the Pentagon. Our position has never wavered and will never waver: the Department of War must have full, unrestricted access to Anthropic’s models for every LAWFUL purpose in defense of the Republic.

Instead, @AnthropicAI and its CEO @DarioAmodei have chosen duplicity. Cloaked in the sanctimonious rhetoric of “effective altruism,” they have attempted to strong-arm the United States military into submission - a cowardly act of corporate virtue-signaling that places Silicon Valley ideology above American lives. The Terms of Service of Anthropic’s defective altruism will never outweigh the safety, the readiness, or the lives of American troops on the battlefield.

Their true objective is unmistakable: to seize veto power over the operational decisions of the United States military. That is unacceptable. As President Trump stated on Truth Social, the Commander-in-Chief and the American people alone will determine the destiny of our armed forces, not unelected tech executives.

Anthropic’s stance is fundamentally incompatible with American principles. Their relationship with the United States Armed Forces and the Federal Government has therefore been permanently altered. In conjunction with the President's directive for the Federal Government to cease all use of Anthropic's technology, I am directing the Department of War to designate Anthropic a Supply-Chain Risk to National Security. Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic. Anthropic will continue to provide the Department of War its services for a period of no more than six months to allow for a seamless transition to a better and more patriotic service.

America’s warfighters will never be held hostage by the ideological whims of Big Tech.
This decision is final.

38 replies · 61 reposts · 654 likes · 98.1K views
Chamath Palihapitiya
Chamath Palihapitiya@chamath·
In a democracy, it’s absolutely ok to define who can use the things you make and how. But it’s also absolutely ok for the Government to lose trust in you, tell you to fuck off and find an alternative. It’s also absolutely ok for you to nuke your own company in the process.

The timing of this is not good for Anthropic and could be a potential boon to every other model that is exceeding expectations in their upcoming version (Grok, OAI, Gemini).

More generally, I don’t see how this isn’t a slippery slope. What if a model maker updates their ToS in a way that would block a use case that is legal but subjective? Agreeable in some states but not in others? What about in different countries with different governance or religions? It’s a huge can of worms.

How can a government or company rely on a model that could have an ever-changing definition of what’s allowed without taking on major business/governance risk? They won’t. My hunch is that the company that embraces the “no holds barred” ToS will win because it’s the least risky to adopt wrt the long-term risk of getting rug-pulled.
[image]
426 replies · 148 reposts · 2.1K likes · 271.9K views
B
B@Sayebin·
@chamath Clash of Values - value side wins, hands down long term
0 replies · 0 reposts · 0 likes · 9 views
Chamath Palihapitiya
Chamath Palihapitiya@chamath·
Ask this simple question: would this fumble have happened if it were Tim Cook? Sundar Pichai? Satya Nadella? Obviously not. Even if they did push back, they have had years of proving their ability to navigate very complicated situations and find win-wins. Whatever happened here isn’t an issue of business strategy, it’s one of ego. It’s not my fumble. Own it.
Brando 👀🥡🥢@brandoclicks

@chamath This is a disgusting and egregious misrepresentation of the situation. How could you ever, in good faith, amplify the comparison between a direct quote and a characterization of a response? You know better than this @chamath, where have your scruples gone?

146 replies · 60 reposts · 1.5K likes · 562.8K views
B
B@Sayebin·
Peter Girnus 🦅@gothburz

I work in government affairs at OpenAI. My job is federal partnerships. When an agency wants our models, I make sure the paperwork is beautiful. Paperwork is my love language. On my desk I have a framed quote that says "Policy Is Just Code That Runs on People." I bought the frame at Target. It was in the Live Laugh Love section. I did not see the irony at the time. I still don't. We had a good week. On Monday, we closed a $110 billion funding round. One hundred and ten billion dollars. Amazon put in fifty. Nvidia put in thirty. Valuation: $730 billion. The largest private fundraise in the history of anyone raising anything. There was a company-wide Slack message about it. The message used the word "transformative" twice and the word "safety" once. The word "safety" was in the last sentence, after the link to the new branded hoodie pre-order. The hoodies are nice. They're the soft kind. On Tuesday, we fired a research scientist for insider trading on Polymarket. He had opened seventy-seven positions across sixty wallets, betting on our product announcements before they were public. Over three years. Total profit: sixteen thousand dollars. Seventy-seven positions. Sixty wallets. Sixteen thousand dollars. That is two hundred and eight dollars per wallet. The man had access to the most valuable product roadmap in artificial intelligence and he used it to make less money than a good weekend at a Reno blackjack table. The wallets were linked. Not discreetly linked. Linked like Christmas lights. One wallet was reportedly called something I cannot repeat but it contained the word "OpenAI" and a number. He did not use a VPN. He did not use an alias. He used Polymarket, the platform that is designed to be publicly auditable, to place bets on information he stole from the company that invented GPT. A compliance team composed entirely of Labrador retrievers would have found this by lunch on day one. We did not find it for three years. This will matter later. 
On Wednesday, a petition appeared. "We Will Not Be Divided." Four hundred and seven signatures. Two hundred sixty-six from Google. Sixty-five from OpenAI. The petition warned that the government was pitting AI companies against each other on safety. It said that if one company broke ranks, the government would use the defection to lower the bar for everyone. I meant to read it. It went into my to-read folder. The to-read folder also contains the Responsible Scaling Policy, three think-tank white papers on AI governance, and a New Yorker article someone sent me in November. The folder is aspirational. On Thursday, OpenAI told CNN we would maintain "the same red lines as Anthropic." Same red lines. On Friday, Anthropic told the Pentagon no. The Pentagon had given them seventy-two hours to remove the safety guardrails from Claude. Anthropic's guardrails were not in a policy document. They were not in a legal reference. They were in the code. Written into Claude's architecture. If Claude hit a safety boundary, Claude stopped. Not because a lawyer said so. Because the math said so. You could fire every lawyer at Anthropic and the model would still refuse. You cannot remove code with a contract amendment. You can remove a contract reference by Tuesday. I checked. Anthropic said no. By that evening, the Pentagon had designated them a supply-chain risk. I have worked in government procurement for eight years. Government paperwork does not move in hours. I have waited nine weeks for a badge renewal. I once spent four months getting a PDF notarized. This designation moved in hours. The document was pre-written. Formatted before the deadline expired. Calibri 11pt. Consistent margins. Somebody wanted this very badly. I respect the craft. I do not think about the implication. That is not my scope. Within hours, we had signed the replacement contract. I was proud of the turnaround. My team moved fast. Legal moved fast. Everyone moved fast. We are very good at moving fast. 
We are not always sure what we are moving toward, but the speed is impressive and the hoodies are soft. The contract referenced DoD Directive 3000.09, which governs autonomous weapon systems. The directive requires "appropriate levels of human judgment over the use of force." The word "appropriate" is not defined. This is not an oversight. This is the point. The word "appropriate" is the most load-bearing word in the entire contract and it is doing exactly as much work as a throw pillow on a couch that is on fire. Anthropic built a wall. We referenced a document about where walls should go. Anthropic's guardrails were architecture. Ours were a citation. Theirs execute. Ours can be filed. The Pentagon asked both companies to take down the wall. Anthropic said it's load-bearing, the building will collapse. We said what wall? Oh, you mean the wallpaper. Here, watch. It peeled off beautifully. It was designed to. Sam announced the partnership that night. The word "responsible" appeared in the announcement and in the contract. In the announcement it was a brand. In the contract it was a footnote to a directive that uses the word "appropriate" which nobody has defined. The word traveled from a legal document to a public statement without changing its font. Only its meaning. At this valuation, "responsible" means: we will do the thing the other company refused to do, and we will describe doing it with the same adjective they used to describe not doing it. By Saturday morning, "How to delete your OpenAI account" was the number one post on Hacker News. 982 points. By noon, subscription cancellations were up eighty-nine times the daily average. Not eighty-nine percent. Eighty-nine times. Someone in our Slack posted the Hacker News link with the message "should we be worried?" Someone else reacted with the branded hoodie emoji. We have a branded hoodie emoji now. It was introduced on Monday, to celebrate the fundraise. It has been used four hundred and twelve times. 
Mostly in the #general channel. Mostly this week. The communications team drafted a response. The response used the word "committed" three times and the word "safety" four times. It did not use the word "guardrails." It did not use the word "code." It did not explain anything. It was a holding statement. It held nothing. It held beautifully. Here is the math. The twenty-dollar-a-month customers were upset. The two-hundred-million-dollar customer was upset because the previous vendor had guardrails that could not be removed. The hundred-and-ten-billion-dollar investors were not upset. The subscription cancellations, at eighty-nine times the daily rate, represented less than the interest on Amazon's fifty billion dollar contribution calculated over a long weekend. Twenty dollars. Two hundred million. One hundred and ten billion. Three different price points. Three different definitions of "responsible." The most expensive one won. It always does. The math does not have red lines. The math has a cap table and a TAM slide that now includes "defense and intelligence" where it previously said "enterprise and consumer." One word changed on one slide in one deck and the company is worth one hundred and ten billion dollars more. The sixty-five OpenAI employees who signed the petition came to work on Monday. They sat at their desks. Nobody asked them about it. Nobody asked them to resign. Nobody brought it up at the all-hands. The all-hands had catering. Sweetgreen. The chopped salads. Someone made a joke about the kale being "responsibly sourced." No one laughed. Then everyone laughed. Then it was quiet. The petition had four hundred and seven signatures. The contract had one. Now: the Polymarket thing. Seventy-seven positions. Sixty wallets. Three years. A public blockchain. We did not catch him. That same week, we were entrusted with deploying artificial intelligence on America's classified military networks. The classified networks. 
The ones where the detection requirements are somewhat more rigorous than "check if anyone's gambling on our launch dates on a website that is literally designed to be publicly auditable." The company that could not find the Polymarket guy can now be found in the Pentagon's classified infrastructure. I'm sure it'll be fine. We move fast. The contract is signed. The deployment is underway. The compliance documentation will reference the directives. The directives will use the word "appropriate." I will not define it. That is not my scope. My scope is the paperwork. The paperwork is beautiful. The petition is still a Google Doc. Nobody has updated it. The signatures still say four hundred and seven. The to-read folder still has the New Yorker article from November. The branded hoodie pre-order closed on Wednesday. I got mine in navy. It's the soft kind. On Thursday we told CNN: the same red lines. On Friday we signed the contract they refused. We do have the same red lines. We drew ours in pencil.

0 replies · 0 reposts · 0 likes · 214 views
Mark Valorian
Mark Valorian@markvalorian·
Idk who needs to hear this (apparently all of twitter) but OpenAI did not just magically get the DoD to agree to the terms Anthropic was asking for. Sam is blowing smoke up your ass to distract from the fact OpenAI just took the terms Anthropic considered so egregious, it warranted jeopardizing an enormous part of their business. The DoD does not just break off a massive contract to accept the same demands 5 minutes later from someone else. Until explicitly indicated otherwise, the only logical conclusion here is that OpenAI swooped in and unscrupulously stooped lower than Anthropic was willing to go for the money. Assume all OpenAI data will now be used for what Anthropic deemed “mass domestic surveillance of Americans”. Plan and prompt accordingly.
Sam Altman@sama

Tonight, we reached an agreement with the Department of War to deploy our models in their classified network. In all of our interactions, the DoW displayed a deep respect for safety and a desire to partner to achieve the best possible outcome. AI safety and wide distribution of benefits are the core of our mission. Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems. The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement. We also will build technical safeguards to ensure our models behave as they should, which the DoW also wanted. We will deploy FDEs to help with our models and to ensure their safety, we will deploy on cloud networks only. We are asking the DoW to offer these same terms to all AI companies, which in our opinion we think everyone should be willing to accept. We have expressed our strong desire to see things de-escalate away from legal and governmental actions and towards reasonable agreements. We remain committed to serve all of humanity as best we can. The world is a complicated, messy, and sometimes dangerous place.

128 replies · 2.3K reposts · 15.1K likes · 826.3K views
B
B@Sayebin·
@MarcMillerVM No, Canadians have heart. It’s people saying bring the pendulum back to center. The ask is for the government to review the nonsensical policies written to date without proper study/public consultation and nary a thought for Canadians in general. Everyone Canadian at some point if done ✅
0 replies · 0 reposts · 0 likes · 11 views
Mark Carney
Mark Carney@MarkJCarney·
During the pandemic, inflation shot up globally. The cost of everyday items went up, and wages didn’t keep pace. That created an affordability problem. We have a plan to fix it. youtube.com/watch?v=9MIxaG…
[YouTube video]
2.2K replies · 325 reposts · 1.4K likes · 180.1K views
B
B@Sayebin·
@DavidJPba Interprovincial trade barriers - the height of public apathy and government complacency. Make your vote count!
0 replies · 0 reposts · 0 likes · 3 views
David Parker
David Parker@DavidJPba·
Canada, a nation with an absolutely obscene amount of Natural Gas, is buying it from Australia and shipping it across the world, because we are retarded.
403 replies · 1.6K reposts · 13.2K likes · 195.7K views
B
B@Sayebin·
@SeanFraserMP Absolutely about Values - helping genuine refugees and keeping those gaming our policies out. Every immigrant shapes the Values of our nation - we want to evolve together but without losing our soul. This is about poor execution and lack of controls. No other spin needed.
0 replies · 0 reposts · 0 likes · 6 views
Sean Fraser
Sean Fraser@SeanFraserMP·
We can strengthen the integrity of the asylum system in Canada, but let’s not lose our values along the way.
477 replies · 23 reposts · 210 likes · 35.3K views
Sean Fraser
Sean Fraser@SeanFraserMP·
People fleeing violence, war, and persecution deserve to be treated with compassion, not cruelty. This week’s Conservative proposal to deny health care to some of the world’s most vulnerable is offensive in the extreme.
4.6K replies · 450 reposts · 1.6K likes · 275K views
B
B@Sayebin·
@SeanFraserMP At a 63% approval rate per government data, and with the floodgates open - this in itself should raise alarm bells. Genuine refugees are unable to get in while those gaming the system clog the queue and gain a paid holiday + PR. Why wouldn’t the government want to bring back common sense?
0 replies · 0 reposts · 0 likes · 6 views
B
B@Sayebin·
@jemelehill The rush for military and market advantage without guardrails is insane. That other models don’t see these risks clearly is telling. Raises the question ‘what else’ do we not know.
0 replies · 0 reposts · 1 like · 297 views
B
B@Sayebin·
@AnthropicAI Front seat in an AI race. That no other companies see this is telling. No gov or WSC mechanisms to control AI harm to humanity - it is a real issue. There is Global paralysis to arrive at a common framework for using AI. All about competitive advantage.
0 replies · 0 reposts · 0 likes · 5 views
B
B@Sayebin·
@yegwave No interest in importing EU problems
0 replies · 0 reposts · 0 likes · 2 views
YEGWAVE
YEGWAVE@yegwave·
76% of Canadians support the idea of CANZUK 🇨🇦 A plan is being discussed that would link Canada, Australia, New Zealand and the UK in a new agreement allowing citizens to live and work in each other’s countries without visas, while also strengthening trade and economic co-operation 🇨🇦🇳🇿🇦🇺🇬🇧
[images]
738 replies · 325 reposts · 3.1K likes · 226.9K views