
Gregory Miller 🇺🇸 🇺🇦 🇮🇱 🇵🇸
@OSET_Greg
Co-founder, COO of the @OSET Institute & co-host of @DeadMenDontVote Podcast. | Slava Ukraini! | Retweets ≠ Endorsement.





Interesting article in Germany's leading business & finance newspaper today, @handelsblatt, to which our COO @OSET_Greg (a 1st generation U.S. German) contributed a couple of thoughts. handelsblatt.com/technik/it-int…




Let’s be clear: This is illegal and unconstitutional. The American people had voted. The courts had ruled. The Electoral College had met and voted. The Governor in every state had certified the results and sent a legal slate of electors to the Congress to be counted. The Vice President has no constitutional authority to tell states to submit alternative slates of electors because his candidate lost. That is tyranny. Our institutions held on Jan 6 because Mike Pence refused to violate his oath to the Constitution. Trump picked JD Vance because Vance will do whatever Trump wants, including violating the Constitution. They are both far too dangerous to serve. It’s our duty to stop them.

A brief history of the AI hype:





🚨 [AI REGULATION] The definition and societal impacts of open foundation models should be at the core of AI policy & regulation discussions. Why? In many jurisdictions, there are more lenient rules for these AI models. Read this:

➡️ The EU AI Act, for example, in its Article 53, which covers General-Purpose AI Models, establishes that: "2. The obligations set out in paragraph 1, points (a) and (b), shall not apply to providers of AI models that are released under a free and open-source licence that allows for the access, usage, modification, and distribution of the model, and whose parameters, including the weights, the information on the model architecture, and the information on model usage, are made publicly available. This exception shall not apply to general-purpose AI models with systemic risks."

➡️ In the US, the @FTC has recently published an article covering open-weights foundation models, highlighting their potential benefits (including enabling greater innovation, driving competition, improving consumer choice, and reducing costs) and possible risks to consumers when compared to centralized closed models (link to the article below).

➡️ Given the high stakes and the fact that regulatory authorities will look at these AI models differently, both their definition and their societal impacts - especially in comparison to closed ones - must be closely scrutinized.

➡️ In this context, the paper "On the Societal Impact of Open Foundation Models" by @sayashk, @RishiBommasani, @kevin_klyman, @ShayneRedford, @ashwinforga, @pcihon, @aspenkhopkins, @KevinBankston, @BlancheMinerva, @mbogen, @ruchowdh, Alex Engler, @PeterHndrsn, @YJernite, @sethlazar, @smaffulli, @alondra, @jpineau1, @aviskowron, @dawnsongtweets, @victorstorchan, @dzhang105, Daniel E. Ho, @percyliang & @random_walker is a must-read for everyone in AI, especially those focused on regulation and policymaking (link below).
Quotes:

➡️ The paper discusses the distinctive properties of open foundation models, their benefits, and a risk assessment framework to evaluate risks and threats. On the recommendations and calls to action, they state: "Researchers investigating AI risks: Our preliminary analysis of the misuse risk of open foundation models reveals significant uncertainty for several misuse vectors due to incomplete or unsatisfactory evidence. In turn, researchers investigating AI risks should conduct new research to clarify the marginal risks for misuse of open foundation models. In particular, in light of our observations regarding past work, greater attention should be placed on articulating the status quo, constructing realistic threat models (or arguments for why speculative threat models yield generalizable evidence), and considering the full supply chain for misuse."

➡️ And to policymakers: "(....) Policies that place obligations on foundation model developers to be responsible for downstream use are intrinsically challenging, if not impossible, for open developers to meet. If recent proposals for liability (Blumenthal & Hawley, 2023b) and watermarking (Executive Office of the President, 2023; Chinese National Information Security Standardization Technical Committee, 2023; G7 Hiroshima Summit, 2023) are interpreted strictly to apply to foundation model developers, independent of how the model is adapted or used downstream, they would be difficult for open developers to comply with (Bommasani et al., 2023a), since these developers have little ability to monitor, moderate, or prohibit downstream usage."

➡️ AI liability is still an open topic in many jurisdictions worldwide. At this point, it is extremely important that policymakers and regulators invest time and effort in understanding the different risk profiles, the best way to regulate them, and who is responsible for the harm after it happens.

➡️ Find all relevant links below.
➡️ To stay up to date with the latest developments in AI policy & regulation, join 28,500 people who subscribe to my weekly newsletter (link below).

Not correct, for many reasons:
1. The articles referenced are from 2015 & 2016, and this equipment is retired.
2. Most US voters use a paper ballot with post-election auditing procedures. Mail ballots are paper ballots!
3. Mail ballots are secure, reliable, & accessible. By design.

Electronic voting machines and anything mailed in is too risky. We should mandate paper ballots and in-person voting only.
