MIT FutureTech

341 posts


@MITFutureTech

An interdisciplinary research group working to understand the economic and technical foundations of progress in computing @MIT_CSAIL & @mit_ide

Cambridge, Massachusetts · Joined May 2022
160 Following · 725 Followers
MIT FutureTech
MIT FutureTech@MITFutureTech·
@ylecun this should help - anyone can see for themselves how automation will affect occupations in our new tool: expertise.mit.edu. Based on the expertise paper by @davidautor and @ProfNeilT. tl;dr: in some jobs, automation eliminates expert tasks, reduces wages, and permits entry of less expert workers. In others, it eliminates inexpert tasks, boosts wages, and raises barriers to entry. #AI #Automation #FutureOfWork #LaborEconomics
Yann LeCun@ylecun

Dario is wrong. He knows absolutely nothing about the effects of technological revolutions on the labor market. Don't listen to him, Sam, Yoshua, Geoff, or me on this topic. Listen to economists who have spent their career studying this, like @Ph_Aghion , @erikbryn , @DAcemogluMIT , @amcafee , @davidautor

MIT FutureTech retweeted
Google Public Policy
Google Public Policy@googlepubpolicy·
How can we ensure AI's economic benefits are widely shared? At the AI for the Economy Forum, @Google and @MITFutureTech are convening policymakers & experts to discuss the partnerships needed to support workers in the AI transition. Learn more: goo.gle/4dGy7cN
MIT FutureTech retweeted
Epoch AI
Epoch AI@EpochAIResearch·
For example, @MITFutureTech found that shifting from LSTMs (green) to Modern Transformers (purple) yields an efficiency gain that depends on the compute scale:
- At 1e15 FLOP, the gain is 6.3×
- At 3e16 FLOP, the gain is 26×
Naively extrapolating to 1e23 FLOP, the gain is 20,000×!
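The "naive extrapolation" above can be sketched as a two-point power-law fit in log-log space. This is an assumption about the method: the tweet does not specify the fit, and this simple version lands around 1.4e4×, the same order of magnitude as (but not exactly) the quoted 20,000×.

```python
import math

# Two measured efficiency gains from the LSTM -> Transformer comparison
# (compute in FLOP, gain as a multiplier), as quoted in the tweet.
points = [(1e15, 6.3), (3e16, 26.0)]

# Fit gain(C) = a * C^b through the two points (linear in log-log space).
(c1, g1), (c2, g2) = points
b = math.log(g2 / g1) / math.log(c2 / c1)  # power-law exponent
a = g1 / c1 ** b                           # scale factor

def extrapolated_gain(compute_flop: float) -> float:
    """Naively extend the fitted power law to a new compute scale."""
    return a * compute_flop ** b

gain_1e23 = extrapolated_gain(1e23)
print(f"exponent b = {b:.3f}, gain at 1e23 FLOP ~ {gain_1e23:,.0f}x")
```

The exact headline figure depends on fit choices not given in the tweet; the point of the sketch is only that efficiency gains compound strongly with compute scale.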
MIT FutureTech retweeted
Dr. Peter Slattery
Dr. Peter Slattery@PeterSlattery1·
Great to see the MIT AI Incident Tracker, led by Simon Mylius, featured in @TIME: time.com/7346091/ai-har…

The AI Incident Tracker maps incidents in the AI Incident Database according to the MIT AI Risk Repository's causal and domain taxonomies, and assigns each incident a harm-severity score. Using an LLM, it processes raw incident reports, providing a scalable methodology that can be applied cost-effectively across much larger datasets as the number of reported incidents grows.

In the dashboard you can explore trends such as:
- distribution of incident classifications by year
- distribution of incident sub-domains by year
- incidents with high direct harm-severity scores by year
- incidents causing severe harm in more than one harm category
- distribution of harm-severity scores by year

Our last update added new evaluation fields for each incident, including:
- 5 categories of national security (NatSec) impact: Physical Security & Critical Infrastructure / Information Warfare & Intelligence Security / Sovereignty & Government Functions / Economic & Technological Security / Societal Stability & Human Rights
- a Fishbone/Ishikawa diagram presenting potential causes for each incident
- the primary goal of the AI system involved

Visit our website to explore the data.

Congratulations also to Daniel Atherton and the AI Incident Database (@IncidentsDB) for the coverage. We are lucky to be able to build on their critical work. Thanks to Harry Booth for the write-up (@HarryBooth59643).
MIT FutureTech
MIT FutureTech@MITFutureTech·
The world’s top AI researchers are paid millions for their expertise—but how much do individual breakthroughs really drive AI progress, compared to simply building bigger datacenters? Our new paper, On the Origins of Algorithmic Progress in AI, suggests the literature overestimates algorithms and underestimates compute. Deep dive via our substack: substack.com/home/post/p-18… #AIResearch #ComputeScaling #AlgorithmicProgress
MIT FutureTech retweeted
Rob Seamans
Rob Seamans@robseamans·
I'm excited to share an article I wrote for the @WSJ on AI-related jobs of the future. e.g., the “AI explainer,” an expert who understands AI deeply and can translate it into plain language for managers and others wsj.com/tech/ai/new-jo… via @WSJ
MIT FutureTech
MIT FutureTech@MITFutureTech·
Are you looking to gain direct experience in technical and governance AI safety research? Sharing this fantastic opportunity for a full-time, paid, in-person Spring Research Fellowship with the Cambridge Boston Alignment Initiative. MIT FutureTech's @aksaeri and @PeterSlattery1 are among the mentors supporting fellows in developing impactful, rigorous AI safety research: cbai.ai