Umair S

35 posts

Umair S banner
Umair S

Umair S

@imumairs

Formal Methods x Safety x Robots x AI

San Francisco Bay Area Katılım Ağustos 2014
418 Takip Edilen100 Takipçiler
Umair S
Umair S@imumairs·
@satnam6502 ∀ context ∈ {winter, summer, breakfast, dinner, sadness, celebration}, ghee_on(makki_roti) > ghee_on(wheat_roti) Proof is easier than we think so QED
English
0
0
1
49
Satnam Singh
Satnam Singh@satnam6502·
An enduring childhood memory is of my mother making a vast vat of ghee, slowly melting 35lbs of butter and skimming off the impurities to yield liquid gold. It filled our apartment with an overwhelmingly comforting nutty sweet aroma, and even today the smell of melting butter transports me to that small flat in Glasgow and a feeling of security and love.
English
5
5
160
7.9K
Umair S
Umair S@imumairs·
I recently asked the Harmonic API to prove Z-Transform theorems (en.wikipedia.org/wiki/Z-transfo…). It performed really well. This is exactly the work from my ITP-14 paper using HOL Light. By having Z-Transform formalization and proof of its properties, we can reason about quite a few concepts from digital signal processing and control systems. Soon I will share my experiments. @HarmonicMath
English
0
0
1
202
Satnam Singh
Satnam Singh@satnam6502·
A few months ago when I started to learn Lean I made the decision to work my way through the "Functional Programming in Lean" tutorial but to totally ignore the "Theorem Proving in Lean" tutorial. That's because all I want to do is to be able to state theorems. I'm leaving the proofs to @HarmonicMath's Aristotle system. It's amazing how well this mad strategy is working out in practice. Theorems For Free! #Lean4
Satnam Singh tweet media
English
4
6
105
7.4K
Umair S
Umair S@imumairs·
@DrJimFan Right, but in the context of Physical AI, two important open questions are the latency and accuracy (which are somewhat relaxed for LLM applications).
English
0
0
2
245
Shriram Krishnamurthi (primary: Bluesky)
Hey @willowtv — your new software platform has made login fail everywhere. Can't log into Willow on my iPad NOR on my Roku. The Roku version now *forces* me to subscribe through Roku, even though I already have a sub. Seriously buggy — better rollout please?
English
3
0
8
1.6K
Umair S
Umair S@imumairs·
When AI can generate a thousand working implementations before lunch, the question shifts from “can we build it?” to “did we ask for the right thing?” Upper-level PL courses become training grounds for precision specification - teaching students to write specs so airtight that AI can’t wriggle through loopholes with technically-correct-but-useless solutions.
English
1
0
1
109
Shriram Krishnamurthi (primary: Bluesky)
In a world where most code in modern programming languages will be machine-generated, what is the role of an upper-level programming languages course? Interesting and non-obvious answers please.
English
50
11
71
13.3K
Umair S
Umair S@imumairs·
Yesterday, I sat down with my daughter (grade 1) to brainstorm an app idea. Instead of jumping straight to features, she started with something simple but profound: "First, let's talk about what we DON'T want." - No games - No work apps - No boring apps Then she moved to what we DO want: a voice-driven teaching app with folders for different subjects: math, reading, art, music. She even mapped out the sign-in flow (it's funny that she added brother/sister name and pet name 😃), thinking through how kids would actually use it. She didn't need to know how to code. She needed to know how to think. This is the power of first principles. LLMs are useful tools. But they're most effective when we bring: - Clear problem definition - Strong conceptual thinking - A well-structured foundation The future belongs to clear thinkers who know how to ask the right questions.
Umair S tweet mediaUmair S tweet mediaUmair S tweet media
English
1
0
2
99
Umair S retweetledi
Atoosa Kasirzadeh
Atoosa Kasirzadeh@Dr_Atoosa·
One downside of focusing AI safety discussions primarily on extreme scenarios is that it can detract from examining AI safety through the lens of system safety and technological risk management. It's unfortunate that advertisement of certain terminology in AI communities has led to overly narrow thinking about safety concerns.
Andrew Ng@AndrewYNg

At the Artificial Intelligence Action Summit in Paris this week, U.S. Vice President J.D. Vance said, “I’m not here to talk about AI safety.... I’m here to talk about AI opportunity.” I’m thrilled to see the U.S. government focus on opportunities in AI. Further, while it is important to use AI responsibly and try to stamp out harmful applications, I feel “AI safety” is not the right terminology for addressing this important problem. Language shapes thought, so using the right words is important. I’d rather talk about “responsible AI” than “AI safety.” Let me explain. First, there are clearly harmful applications of AI, such as non-consensual deepfake porn (which creates sexually explicit images of real people without their consent), the use of AI in misinformation, potentially unsafe medical diagnoses, addictive applications, and so on. We definitely want to stamp these out! There are many ways to apply AI in harmful or irresponsible ways, and we should discourage and prevent such uses. However, the concept of “AI safety” tries to make AI — as a technology — safe, rather than making safe applications of it. Consider the similar, obviously flawed notion of “laptop safety.” There are great ways to use a laptop and many irresponsible ways, but I don’t consider laptops to be intrinsically either safe or unsafe. It is the application, or usage, that determines if a laptop is safe. Similarly, AI, a general-purpose technology with numerous applications, is neither safe nor unsafe. How someone chooses to use it determines whether it is harmful or beneficial. Now, safety isn’t always a function only of how something is used. An unsafe airplane is one that, even in the hands of an attentive and skilled pilot, has a large chance of mishap. So we definitely should strive to build safe airplanes (and make sure they are operated responsibly)! The risk factors are associated with the construction of the aircraft rather than merely its application. Similarly, we want safe automobiles, blenders, dialysis machines, food, buildings, power plants, and much more. “AI safety” presupposes that AI, the underlying technology, can be unsafe. I find it more useful to think about how applications of AI can be unsafe. Further, the term “responsible AI” emphasizes that it is our responsibility to avoid building applications that are unsafe or harmful and to discourage people from using even beneficial products in harmful ways. If we shift the terminology for AI risks from “AI safety” to “responsible AI,” we can have more thoughtful conversations about what to do and what not to do. I believe the 2023 Bletchley AI Safety Summit slowed down European AI development — without making anyone safer — by wasting time considering science-fiction AI fears rather than focusing on opportunities. Last month, at Davos, business and policy leaders also had strong concerns about whether Europe can dig itself out of the current regulatory morass and focus on building with AI. I am hopeful that the Paris meeting, unlike the one at Bletchley, will result in acceleration rather than deceleration. In a world where AI is becoming pervasive, if we can shift the conversation away from “AI safety” toward responsible [use of] AI, we will speed up AI’s benefits and do a better job of addressing actual problems. That will actually make people safer. [Original text: deeplearning.ai/the-batch/issu… ]

English
0
3
20
1.7K
Ravid Shwartz Ziv
Ravid Shwartz Ziv@ziv_ravid·
Let's talk a bit about authorship order on a paper. Yes, everyone cares about it, and it can become very emotional. Even if you think that big professors don't care, they do (although I know two professors who don't—you can probably guess who).
English
1
7
156
48.2K
Umair S retweetledi
Dan Hendrycks
Dan Hendrycks@hendrycks·
We’ve found as AIs get smarter, they develop their own coherent value systems. For example they value lives in Pakistan > India > China > US These are not just random biases, but internally consistent values that shape their behavior, with many implications for AI alignment. 🧵
Dan Hendrycks tweet mediaDan Hendrycks tweet mediaDan Hendrycks tweet media
English
708
2K
10.8K
6.2M
Umair S retweetledi
International Association for Safe & Ethical AI
Can we trust AI? “Everyone in the world is getting onto this brand new airplane that really has never been tested before. And it’s going to take off and it’s never going to land. It has to fly, forever.” – Stuart Russell, IASEAI President.
English
2
4
13
953
Umair S retweetledi
Max Tegmark
Max Tegmark@tegmark·
Almost everyone I speak with wants tool AI rather than AGI: controllable AI than empowers us rather than overmarketed “digital god” AI that replaces us and that we don’t know how to control. The key point about the Venn diagram below is they we can get basically all the tools we want as long as we don’t put too much “A”, “G” and “I” into the same system:
Future of Life Institute@FLI_org

🛠️ 🗣️ As @tegmark is discussing at @IASEAIorg, the controllable and beneficial AI tools we want are within our reach - but to get them, we must start treating the AI industry like all other high-impact industries, with legally binding safety standards that incentivize companies to innovate and prioritize public safety. 📄 At the link, read our new Policymakers' Guide to AI - including our proposed AI safety standards: bit.ly/4htdQq4

English
109
87
550
72.8K
Umair S retweetledi
Yoshua Bengio
Yoshua Bengio@Yoshua_Bengio·
As AI models rapidly advance in capabilities, their impact on people's lives will only continue to grow. It's crucial that governments prioritize the well-being of citizens and ensure they listen to them as they shape the future of AI. time.com/7213096/uk-pub…
English
20
34
159
9.8K
Umair S retweetledi
International Association for Safe & Ethical AI
Thank you to all our speakers, presenters, and attendees at #IASEAI25! We look forward to continuing the dialogue we started this week. We’ll post our recorded sessions in the coming days. Join the movement to ensure AI benefits all of humanity. #AISafety #EthicalAI #AIRegulation #TechPolicy #SustainableAI #AICooperation Special thanks to our plenary speakers: @MathiasCormann , @Yoshua_Bengio , @ancadianadragan , @geoffreyhinton , @mariaressa , @mmitchell_ai , @JosephEStiglitz , @tegmark , @katecrawford , @Dr_Atoosa , @ghadfield , @tobyordoxford , @NicolasMoes , @zicokolter
English
2
7
37
3.6K
Umair S retweetledi
reasonX Labs
reasonX Labs@reasonXLabs·
Our CEO, Umair Siddique @imumairs , will represent @reasonxlabs at the @IASEAIorg '25 Conference. He’ll join global leaders, incl. Nobel laureates & top researchers, to discuss AI safety & ethics. His talk: Safety of AI-Powered Robots: Challenges & a Way Forward.
reasonX Labs tweet media
English
0
1
0
41
Satnam Singh
Satnam Singh@satnam6502·
I am spending a few days in Toronto at the Groq office. I’ve already identified Almond Butterfly Cafe as a great looking gluten free celiac option. Looking forward to discovering more.
Satnam Singh tweet media
English
8
0
37
3.9K
Umair S
Umair S@imumairs·
@jhasomesh may be he wanted to say \ln(100) 😂
English
0
0
1
1.6K
Somesh Jha
Somesh Jha@jhasomesh·
A department chair told me that they hired someone who already has more than 100+ papers. What! How is this possible? Thoughts.
English
20
3
60
119.5K
Tzu-Han Hsu👩🏻‍💻🎹
‼️New Papers‼️ Very excited to share that we got 2 ✌🏻 papers accepted to TACAS’23!!!🫣🥹😍🥳 Big thank you to all my collaborators. Submitted 2 and get both accepted feels truly magical🪄(titles in the reply.) Going to be in Paris in April the very first time in my life!🤩
East Lansing, MI 🇺🇸 English
8
4
107
12.6K