chartitec

chartitec reposted

I honestly don't know how to tell my boss that "AI is a tool, not magic," so I'm turning to you wise netizens for advice.
Maybe play the part of a helpless, clueless AI novice and raise the difficulty so the boss has to cast the spell himself; once he sees how hard it is, he may back off.
洛图@soavocado
@dotey "AI is a tool, not magic": how can I get this line across to my boss? 宝玉, any suggestions? 😂
chartitec reposted

Mohnish Pabrai explains how studying airplane crashes completely changed his approach to investing:
"The checklist is a very useful tool, and it's helped us in aviation a lot. Air travel has become extremely safe mainly because of the use of checklists by pilots. The aviation checklist came about from examining failures. When a plane crashed, the FAA would always go in and try to figure out why the crash happened."
He details the surprisingly pragmatic mathematics behind FAA safety protocols:
"They have a definition of what a human life is worth. It used to be around $10 million a few years back. When they see a plane crash, they try to figure out: if nothing was done, how many lives would be lost over the next 5, 10, 20 years? They multiply that by that $10 million number. Then they look at the cost of asking the industry to make changes to aircraft design. They will only make those changes if the cost is less than the projected loss. You don't want air travel to be so expensive that we cannot get on an airplane, and you don't want planes crashing all the time."
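The decision rule Pabrai attributes to the FAA boils down to a simple cost-benefit comparison. A minimal sketch of that rule, using only the $10 million figure from the quote (the function name, example numbers, and time horizon are illustrative assumptions, not FAA methodology):

```python
# Illustrative sketch of the cost-benefit rule described above.
# The $10M "value per life" comes from the quote; everything else is assumed.
VALUE_PER_LIFE = 10_000_000  # dollars

def should_mandate_change(projected_lives_lost: int, cost_of_change: float) -> bool:
    """Mandate a design change only if it costs less than the projected loss."""
    projected_loss = projected_lives_lost * VALUE_PER_LIFE
    return cost_of_change < projected_loss

# E.g. 30 projected deaths over 20 years vs. a $250M industry-wide fix:
# 30 * $10M = $300M in projected losses, so a $250M fix clears the bar.
print(should_mandate_change(30, 250_000_000))
```

The asymmetry Pabrai highlights falls out of the comparison: a very expensive fix for a rare failure mode fails the test, which is how the rule keeps air travel both safe and affordable.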
Pabrai applied this exact "post-crash" analysis to the stock market:
"I did the same thing when I developed a pre-investment checklist. I looked at great minds of investing who had made investments that didn't work, but where the reason they didn't work may have been obvious before the investment was made. For example, Berkshire buying Dexter Shoes: the possibility that low-cost foreign manufacturing might erode the moat should have been visible to Warren and Charlie."
He reveals the surprising results of categorizing the greatest failures of the world's best investors:
"I looked at many, many investments that had failed by great minds and recategorized them into different buckets. It was really interesting. The single greatest reason why investments failed was because the businesses were leveraged. The second reason was some misunderstanding of the nature of the moat."

Tonight, we reached an agreement with the Department of War to deploy our models in their classified network.
In all of our interactions, the DoW displayed a deep respect for safety and a desire to partner to achieve the best possible outcome.
AI safety and wide distribution of benefits are the core of our mission. Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems. The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement.
We will also build technical safeguards to ensure our models behave as they should, which the DoW also wanted. We will deploy FDEs to help with our models, and, to ensure their safety, we will deploy on cloud networks only.
We are asking the DoW to offer these same terms to all AI companies, terms we think everyone should be willing to accept. We have expressed our strong desire to see things de-escalate away from legal and governmental actions and toward reasonable agreements.
We remain committed to serve all of humanity as best we can. The world is a complicated, messy, and sometimes dangerous place.
