Gregory Miller 🇺🇸 🇺🇦 🇮🇱 🇵🇸

469 posts

@OSET_Greg

Co-founder, COO of the @OSET Institute & co-host of @DeadMenDontVote Podcast. | Slava Ukraini! | Retweets ≠ Endorsement.

Joined February 2022
247 Following · 113 Followers
Gregory Miller 🇺🇸 🇺🇦 🇮🇱 🇵🇸 retweeted
BadYorky
@BadYorky
@ElectionBabe @ordinarytimemag @four4thefire @OSET_Greg @OSET AI is shaping up to be a very useful tool for many data applications. The company I am working for is using it right now. But, AI is in no way sentient at this point. I worry about the insane computing/memory power needed to sustain it. This may lead to monopoly power.
Gregory Miller 🇺🇸 🇺🇦 🇮🇱 🇵🇸
@ElectionBabe @KGWNews Yep, we're certainly aware of forest fires, but thankfully nothing close to a metropolitan area approaching 2M people. All are in central and eastern Oregon. It finally disappeared (probably low on fuel). About the only conclusion I can draw is aerial photography/surveying.
Genya
@ElectionBabe
@OSET_Greg @KGWNews There are several forest fires in your area. Might be a small recon plane checking for hot spots or unauthorized campfires.
Gregory Miller 🇺🇸 🇺🇦 🇮🇱 🇵🇸
Hey @KGWNews, so for the past 1.5 hours a single-engine fixed-wing aircraft (e.g., a Cessna 172) has been circling over Portland West Hills/Forest Park, centered roughly at NW Skyline Blvd and NW Thompson Rd, counterclockwise at an estimated ~2,500' in slow arcing circles. Wassup?
Gregory Miller 🇺🇸 🇺🇦 🇮🇱 🇵🇸 retweeted
TrustTheVote® Project 🇺🇸🇺🇦
Luiza Jarovsky, PhD
@LuizaJarovsky

🚨 [AI REGULATION] The definition and societal impacts of open foundation models should be at the core of AI policy & regulation discussions. Why? In many jurisdictions, there are more lenient rules for these AI models. Read this:

➡️ The EU AI Act, for example, in its Article 53, which covers General-Purpose AI Models, establishes that: "2. The obligations set out in paragraph 1, points (a) and (b), shall not apply to providers of AI models that are released under a free and open-source licence that allows for the access, usage, modification, and distribution of the model, and whose parameters, including the weights, the information on the model architecture, and the information on model usage, are made publicly available. This exception shall not apply to general-purpose AI models with systemic risks."

➡️ In the US, the @FTC has recently published an article covering open-weights foundation models, highlighting their potential benefits (including enabling greater innovation, driving competition, improving consumer choice, and reducing costs) and possible risks to consumers when compared to centralized closed models (link to the article below).

➡️ Given the high stakes and the fact that regulatory authorities will look at these AI models differently, both their definition and their societal impacts - especially in comparison to closed ones - must be closely scrutinized.

➡️ In this context, the paper "On the Societal Impact of Open Foundation Models" by @sayashk, @RishiBommasani, @kevin_klyman, @ShayneRedford, @ashwinforga, @pcihon, @aspenkhopkins, @KevinBankston, @BlancheMinerva, @mbogen, @ruchowdh, Alex Engler, @PeterHndrsn, @YJernite, @sethlazar, @smaffulli, @alondra, @jpineau1, @aviskowron, @dawnsongtweets, @victorstorchan, @dzhang105, Daniel E. Ho, @percyliang & @random_walker is a must-read for everyone in AI, especially those focused on regulation and policymaking (link below).

Quotes:

➡️ The paper discusses the distinctive properties of open foundation models, their benefits, and a risk assessment framework to evaluate risks and threats. On the recommendations and calls to action, they state: "Researchers investigating AI risks: Our preliminary analysis of the misuse risk of open foundation models reveals significant uncertainty for several misuse vectors due to incomplete or unsatisfactory evidence. In turn, researchers investigating AI risks should conduct new research to clarify the marginal risks for misuse of open foundation models. In particular, in light of our observations regarding past work, greater attention should be placed on articulating the status quo, constructing realistic threat models (or arguments for why speculative threat models yield generalizable evidence), and considering the full supply chain for misuse."

➡️ And to policymakers: "(....) Policies that place obligations on foundation model developers to be responsible for downstream use are intrinsically challenging, if not impossible, for open developers to meet. If recent proposals for liability (Blumenthal & Hawley, 2023b) and watermarking (Executive Office of the President, 2023; Chinese National Information Security Standardization Technical Committee, 2023; G7 Hiroshima Summit, 2023) are interpreted strictly to apply to foundation model developers, independent of how the model is adapted or used downstream, they would be difficult for open developers to comply with (Bommasani et al., 2023a), since these developers have little ability to monitor, moderate, or prohibit downstream usage."

➡️ AI liability is still an open topic in many jurisdictions worldwide. At this point, it is extremely important that policymakers and regulators invest time and effort in understanding the different risk profiles, the best way to regulate them, and who is responsible for the harm after it happens.

➡️ Find all relevant links below.

➡️ To stay up to date with the latest developments in AI policy & regulation, join 28,500 people who subscribe to my weekly newsletter (link below).

Gregory Miller 🇺🇸 🇺🇦 🇮🇱 🇵🇸
This week the media continued a relentless, disproportionate focus on President Biden’s age & health, while mostly ignoring the nonsensical ramblings of Trump. Yet awareness of Project 2025 is growing as more & more voters realize it's a dangerous blueprint for a Trump Admin v2.0
Gregory Miller 🇺🇸 🇺🇦 🇮🇱 🇵🇸
Hey @portlandgeneral, any insight on the power outage over an hour ago in 97229, in the midst of a record heatwave? We feared this. Your automated phone attendant claims this is scheduled maintenance?! 🤯 That can't be right, right? 😳 Can you please update us on the situation?
Gregory Miller 🇺🇸 🇺🇦 🇮🇱 🇵🇸
3] Or was it worse? 🤔 Did DJT have that conversation with Putin closer to the invasion date, AFTER he lost the election? If so, it seems that would be a violation of 18 USC 953 (the Logan Act). I think this is one of many 💩 declarations DJT made last eve worth investigating. /END
Gregory Miller 🇺🇸 🇺🇦 🇮🇱 🇵🇸
2] When did he have this conversation? As early as the meeting where he ousted the stenographers & expunged the meeting minutes? If so, then it seems DJT withheld info from national security re: an impending invasion. Query: was that in violation of any US or international law?
Gregory Miller 🇺🇸 🇺🇦 🇮🇱 🇵🇸
While there's no doubt last night's debate was a hot mess all the way around, I'm shook by one thing in particular: Trump's claim to have spoken with Putin prior to the invasion, learning that it was Putin's "dream" to do so. Wait, what?! 😳 Short 📷