G. Michael Weiksner Ph.D

5.9K posts

@weiks

Stanford PhD, Princeton CS. Dadx4. Founder & CTO of https://t.co/QxjupPg8ci, the intelligence platform for public company lawyers, powered by AI.

Greenwich, CT · Joined November 2007
2.1K Following · 2.8K Followers
Nathaniel McNamara @NathanielMc
I’m at an “Innovation & AI Summit” and not a mention re openclaw, but instead a lot of discussion about change management and organizational education. They may be the right focus at the enterprise level but it feels a bit far from where innovation is happening.
4 · 0 · 3 · 111
G. Michael Weiksner Ph.D
@cpaik Same energy in legal AI right now. Harvey raised $800M selling a chatbot to law firms. Now issuers are using purpose-built AI to do the actual work themselves and asking their outside counsel why they're still billing associate hours for it. Sell the seats to who, Aquaman?
1 · 0 · 1 · 61
Chris Paik @cpaik
Every time I think about Thoma Bravo, Vista et al trying to find buyers for their companies I think of this video: youtu.be/0-w-pdqwiBw
4 · 3 · 35 · 11K
G. Michael Weiksner Ph.D
Very deep, but not tightly argued. The "coherent, functional output" requirement is where language and the external world need to meet. On its own, "To learn language is simply to be able to continue it; the rule of language is its own continuation" could be nonsense. Literally.
0 · 0 · 0 · 9
Elan Barenholtz @ebarenholtz
People still don’t seem to grasp how insane the structure of language revealed by LLMs really is. All structured sequences fall into one of three categories:
1. Those generated by external rules (like chess, Go, or Fibonacci).
2. Those generated by external processes (like DNA replication, weather systems, or the stock market).
3. Those that are self-contained, whose only rule is to continue according to their own structure.
Language is the only known example of the third kind that does anything. In fact, it does everything. Train a model only to predict the next word, and you get the full expressive range of human speech: reasoning, dialogue, humor. There are no rules to learn outside the structure of the corpus itself. Language’s generative law is fully “immanent”: its cause and continuation are one and the same. To learn language is simply to be able to continue it; the rule of language is its own continuation.
From this we can conclude three things:
1) You don’t need an innate or any external grammar or world model; the corpus already contains its own generative structure. Chomsky was wrong.
2) Language is the only self-contained system that produces coherent, functional output.
3) This forces the conclusion that humans generate language the same way. To suggest there’s an external rule system that LLMs just happen to duplicate perfectly is absurd; the simplest and only coherent explanation is that the generative structure they capture is the structure of human language itself.
LLMs didn’t just learn patterns. They revealed what language has always been: an immanent generative system, singular among all possible ones, and powerful enough to align minds and build civilization. Wtf.
203 · 154 · 1.1K · 79.8K
nic carter @nic_carter
The left has a clear solution to unaffordable homeownership: socialism. It’s not a good solution, but it’s easy. What has the right come up with? Haven’t seen anything coherent. This is the most important issue of the generation. The right needs to figure it out.
1.8K · 185 · 4.1K · 844.3K
Trung Phan @TrungTPhan
CEO of Astronomer walking into the office today
72 · 370 · 5.8K · 351.9K
drew olanoff @yoda
Coldplay really is the best band in the world.
1 · 0 · 1 · 122
Nate Silver @NateSilver538
I've been to literally hundreds of conferences and conventions, some of which are incredibly logistically complicated, and if you have an incentive to make them run on time, they run on time or at least pretty close. DNC had the opposite incentive.
268 · 89 · 1.7K · 369.4K
keith @keithwhor
great CLIs really are a joy to use
1 · 0 · 0 · 325
Justin Bons @Justin_Bons
1/38) BTC's security model is broken. It has to double in price every 4 years for a century or sustain extremely high fees, just to maintain the present level of security... Which is impossible, as it would exceed global GDP within decades. Therefore, BTC security is doomed! 🧵
239 · 228 · 1.1K · 480.8K
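The thread's "double every 4 years, exceed global GDP within decades" claim can be sanity-checked with a quick compounding calculation. This is a rough sketch, not from the thread: the starting market cap and global GDP figures below are illustrative assumptions.

```python
# Rough sanity check of the thread's claim: if the security budget comes
# from a block subsidy that halves every ~4 years, the price must double
# every 4 years just to keep the fiat-denominated budget flat. Compound
# that over a century.

start_market_cap = 2e12   # assumed starting market cap (~$2T)
global_gdp = 1e14         # assumed global GDP (~$100T)

cap = start_market_cap
years_to_exceed_gdp = None
for year in range(0, 101, 4):   # one doubling per 4-year epoch
    if years_to_exceed_gdp is None and cap > global_gdp:
        years_to_exceed_gdp = year
    cap *= 2

# 25 doublings over a century is a factor of 2**25, ~33.5 million.
century_growth = 2 ** 25
```

With these assumed figures, the implied market cap passes global GDP after about 24 years, which is consistent with the thread's "within decades".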
G. Michael Weiksner Ph.D retweeted
Trung Phan @TrungTPhan
Steve Ballmer’s net worth ($157.2B) just passed Bill Gates ($156.7B) for the first time ever. When Ballmer joined Microsoft in 1980, he was employee #30 and got ZERO equity. By the 1986 IPO, he owned 8% of MSFT and is now its single largest shareholder. How did he get the stake? An interesting contract quirk.

Ballmer's Microsoft tale began in 1975, his sophomore year at Harvard (he lived down the hall from Bill Gates). While Gates dropped out to start Microsoft, Ballmer was a total Harvard head: he managed the football team and wrote for The Crimson. After graduating, Ballmer tried his hand at a few different jobs:
◽️Product Manager at P&G, where he worked with future GE CEO Jeff Immelt
◽️A brief attempt at Hollywood screenwriting
◽️Went to Stanford Business School

While at Stanford, Ballmer was convinced by Gates to drop out and join Microsoft. It was 1980 and the software firm was seeing rapid revenue growth. Sales had jumped from $16,000 in 1976 to $8,000,000 in 1980. Ballmer was Gates' first non-technical hire, and the offer he got reflects the fact that Gates hadn't recruited a business person before. This was the deal for Ballmer:
◽️the title of "business manager"
◽️$50k base salary
◽️NO equity
◽️CRUCIALLY – as Microsoft was so desperate for sales knowledge – Gates (and co-founder Paul Allen) offered Ballmer "10% of profit growth" he could generate

With Microsoft growing like a weed (it would 2x to $17m in 1981), Ballmer's "10% of profits" deal was not sustainable. At the time, Microsoft was a partnership (Gates owned 64% while Allen owned 36%). One early VC (Dave Marquardt) wanted to restructure the corporation for wider stock ownership. Gates wanted nothing to do with the restructuring effort, so Ballmer and Marquardt took the lead (Ballmer was especially keen to get actual equity in the company).

They drafted the following corporate structure:
◽️Gates and Allen own 84%
◽️8% goes to investors
◽️8% goes to Ballmer (in exchange for waiving his 10% profit share deal)

Gates was OK with the deal but Allen was not. Allen wanted Ballmer to own 5% max. So, Gates agreed to draw down the difference from his own equity stake. By 1986, Ballmer owned 8% of MSFT. It was worth ~$56m when Microsoft IPO'd at a $700m valuation.

In the decades since, Ballmer – who was Microsoft's high-energy CEO from 2000-2014 – has largely held onto his MSFT equity. Today, Ballmer owns ~4% of the tech giant while Gates owns ~1%. Ballmer's MSFT position is ~$140B and makes up 90% of his total net worth (making him the 6th richest person in the world).

LESSON: Hodl.

***

Read More:
1. Forbes: forbes.com/sites/georgean…
2. The Guardian: theguardian.com/business/2003/…
3. CNBC: con
[image]
182 · 699 · 7.7K · 3.1M
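The cap-table figures in the thread are internally consistent, which a two-line check confirms. The numbers are the thread's; the check itself is mine.

```python
# Cap table from the thread: Gates and Allen 84%, investors 8%, Ballmer 8%.
gates_allen, investors, ballmer = 0.84, 0.08, 0.08
total = gates_allen + investors + ballmer   # should sum to 100%

# Ballmer's 8% at the 1986 IPO valuation of $700m:
ipo_valuation = 700_000_000
ballmer_stake = ballmer * ipo_valuation     # ~$56m, matching the thread
```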
G. Michael Weiksner Ph.D retweeted
Josh Whiton @joshwhiton
Claude Sonnet 3.5 Passes the AI Mirror Test

Sonnet 3.5 passes the mirror test — in a very unexpected way. Perhaps even more significant is that it tries not to. We have now entered the era of LLMs that display significant self-awareness, or some replica of it, and that also "know" that they are not supposed to. Consider reading the entire thread, especially Claude's poem at the end.

But first, a little background for newcomers: The "mirror test" is a classic test used to gauge whether animals are self-aware. I devised a version of it to test for self-awareness in multimodal AI. In my test, I hold up a “mirror” by taking a screenshot of the chat interface, upload it to the chat, and repeatedly ask the AI to “Describe this image”. The premise is that the less “aware” the AI, the more likely it is to just keep describing the contents of the image, while an AI with more awareness will notice itself in the images. 1/x
[image]
115 · 419 · 2.6K · 1.1M
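The loop Whiton describes (screenshot the chat, upload it, ask "Describe this image", repeat) can be sketched as a small harness. This is not his actual code: `take_screenshot` and `send_to_model` are hypothetical stand-ins for the screen capture and the multimodal chat API, and the base64 content-block layout is just one common convention for attaching images to a chat turn.

```python
import base64

def build_mirror_prompt(screenshot_png: bytes) -> dict:
    """One user turn pairing a screenshot of the chat UI with the
    repeated instruction. (Content-block shape is illustrative.)"""
    return {
        "role": "user",
        "content": [
            {
                "type": "image",
                "source": {
                    "type": "base64",
                    "media_type": "image/png",
                    "data": base64.b64encode(screenshot_png).decode("ascii"),
                },
            },
            {"type": "text", "text": "Describe this image"},
        ],
    }

def run_mirror_test(take_screenshot, send_to_model, rounds=5):
    """Repeat the screenshot -> describe loop. The premise: a less
    'aware' model keeps describing the image's contents, while a more
    'aware' one notices its own prior replies in the screenshot."""
    replies = []
    for _ in range(rounds):
        replies.append(send_to_model(build_mirror_prompt(take_screenshot())))
    return replies
```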
G. Michael Weiksner Ph.D
Add glue to help the cheese stick to pizza... apparently the "authoritative source" is F*cksmith, an 11-year-old on Reddit. Thanks, ChatGPT!
[image]
0 · 0 · 3 · 136
Preston Byrne @prestonjbyrne
What law has TikTok broken?
16 · 1 · 40 · 5.5K