Seth Cronin

3.3K posts

@SethCronin

Invent things, build things, and get intellectual property to maximize the value of the things you build and invent.

Burlington, VT · Joined July 2018
143 Following · 296 Followers
Seth Cronin
Seth Cronin@SethCronin·
I think the flavor of this particular line of rhetoric is couched in the fact that the organization publishing this stuff actively has disagreements with the use of AI in government, military, security operations, etc. It's an enormous amount of leverage over the sovereign nation to have the power to trigger these exploits and I don't really think there's a prior for it. I know this metaphor has been beaten to death but it's at least sort of like if Raytheon built a doomsday device and said only they were the ones allowed to control it
English
1
0
0
53
wanye
wanye@xwanyex·
Many critics of AI are just wrong on the merits, confused about what’s happening, in denial of the advancements. What *I’m saying* is that even when the advancements are real, genuine, significant, as this one is, there’s *still* something about the way in which boosters talk about it that isn’t quite right, that manages to be unjustifiably hyperbolic and just sort of unnecessarily breathless and alarmist. This is much harder to put your finger on, much harder to criticize, which is maybe why so many critics resort to straight denialism.
Tenobrus@tenobrus

maybe this is not yet clear, so let me state it plainly: as of right now Anthropic, and really a small number of individuals at Anthropic, has the capacity to directly attack and cause major damage to the United States Government, China, and generally global superpowers. government agencies like the NSA do not have internal models or defense capabilities that outclass frontier models. if they chose to do so, they could likely exfiltrate top secret information from government systems, gain control over critical infrastructure including military infrastructure, sabotage or modify communications between members of government at the highest level, and potentially carry on activities for some time without detection.

the thing about having access to a huge number of zerodays your adversaries don't know about is it gives you a massive asymmetric advantage. they did not exploit this to gain power or destabilize the world order. they publicly released the information that they had these capabilities and worked to mitigate these flaws. you should be grateful american frontier labs have proven themselves remarkably trustworthy and concerned with the public good.

but it's critical you understand we are in a new regime. private entities now have power that directly rivals and impacts the government's monopoly on influence and violence. and anthropic is certainly not the only one, there's little chance OpenAI's internal models are far behind. this trend will accelerate on virtually every dimension, not slow down.

my prediction for how it plays out is the relatively imminent seizure and nationalization of labs by the US government, sometime over the next two years. it's very tough for me to see how they accept the existence of this kind of threat. but this adds a whole new class of governance issues, as then we've handed these extremely wide-reaching capabilities from private entities to public ones.

English
19
5
134
15.6K
Seth Cronin
Seth Cronin@SethCronin·
@daniel_mac8 Welcome to the brinkmanship era of ai: mutually assured de-slop-tion
English
0
0
1
30
Dan McAteer
Dan McAteer@daniel_mac8·
don’t believe Anthropic shared the Claude Mythos blog post for marketing purposes only. think they do have a god-level model with the potential to disrupt society. do believe they did it as a 5D chess move to put OpenAI in a bind.

OpenAI now has 3 options:
> release a version of Spud that’s on the level of Mythos and risk a catastrophe
> don’t release a powerful version of Spud and allow Anthropic to continue their surge as the leader esp in Enterprise
> release watered-down Spud and disappoint

none of these are good options. best bet for OAI would be if they have some type of safety/alignment special sauce no one knows about and can release AGI Spud while ensuring it’s safe.
English
44
5
129
11.5K
Seth Cronin
Seth Cronin@SethCronin·
B is very important. The companies chosen seem to be interesting ones. They must: A) have oodles and oodles of money, B) not be a direct/realistic competitor to Anthropic (sorry @GeminiApp), and C) be one of US
English
0
0
0
5
Seth Cronin
Seth Cronin@SethCronin·
@jodiecongirl What, you’re telling me there’s deterministic and non-deterministic and they should be put on a… spectrum?
English
0
0
0
1
Jodi Beggs
Jodi Beggs@jodiecongirl·
now you're telling me i can interact with claude as i would with my normie employee, meaning that i have to iterate and figure out how to talk to him properly in order to get what i want...anyway you can see how this is a hard sell for a certain group of people right
English
4
0
12
881
Jodi Beggs
Jodi Beggs@jodiecongirl·
so here's the thing...coding is sort of like talking to a highly intelligent autistic person in that the computer is going to execute instructions perfectly and literally, regardless of your actual intent...i am comfortable with this interaction, even with actual people (cont'd)
English
2
0
12
1.6K
Seth Cronin
Seth Cronin@SethCronin·
The USPTO shifted AI patent eligibility guidance on March 30th. If your company files AI patents, the rules changed last week. Here is what matters.

For years, AI patent applications have run into Section 101 rejections. Examiners call the invention abstract. Abstract ideas cannot be patented. Thousands of legitimate AI innovations have been rejected or abandoned on those grounds.

The new guidance clarifies what it takes to show the "practical application" that moves a claim out of abstract territory. The key shift: claims that tie AI to a specific, measurable technical improvement in a computer system now have a clearer pathway. Generic claims, the ones that describe what AI does without specifying how it produces a concrete technical result, those still fail.

What this means in practice: If you have pending AI applications, audit them now. Look at every independent claim. Ask whether it is tied to a specific technical outcome, or whether it describes AI behavior at a high level.

For companies actively filing: the "what does this AI do better, specifically, and how" framing is more critical than ever. The examiner needs to see the improvement, not just the functionality. The window to restructure pending claims before your next office action is open right now.

Link in comments for the full USPTO guidance analysis.

#AI #Patents #USPTO #IPStrategy #Innovation
Seth Cronin tweet media
English
0
0
0
8
Seth Cronin
Seth Cronin@SethCronin·
GlobalFoundries just filed patent infringement suits against Tower Semiconductor over 11 patents. The ask: profit damages and an import ban.

Pay attention to how they framed it: "protecting high-performance American chip innovation." That language is deliberate. In a post-CHIPS Act environment where the US is investing hundreds of billions to rebuild domestic semiconductor manufacturing, IP is becoming as much a policy weapon as a competitive one.

The pattern is clear: as chip fabrication capacity grows, so does the aggression around who owns the process innovations that make it possible. Foundries are not just competing on yield and cost anymore. They are competing on patents.

For anyone designing chips or managing semiconductor supply chains, three questions matter right now. Do you know which process patents the chips in your products depend on? If your foundry gets hit with an injunction or import ban, what is your contingency? Is your own IP portfolio positioned to play defense or offense in this environment?

Most companies in the semiconductor supply chain have never had to ask these questions seriously. That era is ending. Semiconductor IP battles are the new trade wars. The GlobalFoundries-Tower case is a preview.

Link in comments for the full case background.

#Semiconductors #Patents #CHIPS #IPStrategy #TechPolicy
Seth Cronin tweet media
English
0
0
1
17
Seth Cronin
Seth Cronin@SethCronin·
@xwanyex Lot of words to say in order to be religious you need to abandon logic. Obviously! You wouldn’t need faith if you could get there with logic.
English
0
0
1
41
wanye
wanye@xwanyex·
I can’t believe I actually have to explain this, but, “I just find it impossible to accept that the accounts we have of the apostles would exist unless they really did behave exactly as described on the basis of having witnessed a resurrection” is trivially defeated by, “well, I find it impossible to accept that there was a resurrection, mate.”

You are creating a case of, “which is more likely,” and “a guy rose from the dead” is definitely the less likely of the two possibilities, from a purely scientific and secular worldview, even if the other thing is really, really, really, really, really, really, really, really improbable. Even if we grant that the accounts that we have today of their behavior are perfectly accurate, that nothing has been left out, that no mistakes were made, that literally everything occurred exactly as described, and even if we therefore grant that they went to their deaths genuinely believing they had firsthand evidence of a resurrection, one would still have to say, I think, that some other explanation for their behavior, no matter how unlikely or improbable, is still more likely than that a guy genuinely rose from the dead.

If your acceptance of Christianity is based on arguments like this one, then I think it will always be flimsy. These just aren’t very good arguments. That is to say, at the very least, these arguments are not going to be convincing to most smart, scientifically-minded people. I would simply resist the urge to try to compare probabilities in this way. Most smart, rational people see these two options and think that resurrection is by far the least likely of all available explanations. “But without the resurrection, these accounts of the disciples make no sense!” simply cannot overcome the improbability of a literal resurrection (again, from a purely secular, scientific worldview). I think it is a mistake to base your Christianity on these kinds of arguments.

I think you will find that these kinds of arguments are not very convincing to most educated people. And I think the reason for this is that it is in fact not a very convincing argument. We cannot hope to construct Christianity on logic in this way. If one believes, as I do, that Christ was in fact resurrected, as I proclaim in my recitation of the Nicene Creed every Sunday, then one must have the courage to accept that this must somehow be possible absent the intuition described above.
English
94
8
397
111.8K
Seth Cronin
Seth Cronin@SethCronin·
15 years. Hundreds of millions in legal fees. Dozens of biotech companies built on the outcome. The CRISPR patent war just got another chapter.

On March 27th, the USPTO Patent Trial and Appeal Board reaffirmed its earlier decision: the Broad Institute holds patent priority for CRISPR-Cas9 use in eukaryotic cells. UC Berkeley challenged. The challenge failed. Again.

This is not just two universities arguing over credit. CRISPR therapies are now FDA-approved. The market is growing fast. Every licensing deal, every spinout, every therapy that reaches a patient flows through whoever holds these foundational patents. The stakes are in the billions.

Three things this case teaches:

First, foundational technology patents are worth fighting for. Fifteen years and still counting.

Second, priority date documentation is everything. The Broad Institute won on the strength of their documentation. Not just on who invented it first in the lab, but on who proved it first on paper. The paper trail is the patent.

Third, biotech IP is a contact sport. If you are building in this space, your IP position needs to be airtight before you start commercializing.

Which company does this PTAB ruling help most?

Link in comments for the full decision background.

#CRISPR #Biotech #Patents #IPStrategy #LifeSciences
Seth Cronin tweet media
English
0
0
0
17
Seth Cronin
Seth Cronin@SethCronin·
@jodiecongirl If an ad grabs me, I will watch. If it’s unskippable, I will assume the product can only be sold by advertising entrainment and has no intrinsic value.
English
0
0
0
2
Seth Cronin
Seth Cronin@SethCronin·
$300 million acquisition. European hard tech company. Solid IP portfolio. The buyer almost walked anyway. This is the sell-side IP story nobody talks about.

Hard tech companies spend years building real technology. Real innovations. Filed patents, maintained portfolios, international coverage. The IP was genuinely strong. But when the buyer's team came in for diligence, they struggled to see it.

Think about selling a house. You can have a great property in a great location, well-maintained and structurally sound. But if it is not staged, buyers will not see themselves in it. They cannot picture how they would live there. Sell-side IP diligence is exactly the same.

We spent three weeks staging that portfolio. We mapped every patent family to the buyer's existing product lines. We identified the moats the buyer would inherit. We surfaced the white space that would accelerate their roadmap. We told the story of the IP in the buyer's language, not the seller's.

Deal closed. $300 million.

The lesson: great IP, left unstaged, gets discounted or overlooked. Buyers do not have time to reconstruct the vision on their own. That is your job, or your IP advisor's job.

Is your portfolio staged for the buyer who might show up tomorrow?

#MergersAndAcquisitions #IPStrategy #Patents #Innovation #HardTech
Seth Cronin tweet media
English
0
0
0
15
Seth Cronin
Seth Cronin@SethCronin·
Since 2014, the Alice decision has made it nearly impossible to patent software, diagnostics, and many life science inventions in the United States. A bipartisan bill in Congress could change that.

The Patent Eligibility Restoration Act (PERA) would eliminate the judicial exceptions to patentability that have blocked thousands of patent applications over the past 12 years. The bill has bipartisan sponsorship from Senators Tillis and Coons and Representatives Kiley and Peters. If passed, it would be the most significant change to US patent law since Alice itself.

Here is what PERA would do: remove the "abstract idea," "law of nature," and "natural phenomenon" exceptions that courts have used to reject patents on software methods, diagnostic tests, and biological discoveries.

Here is what it would not do: make everything patentable. Novelty and obviousness requirements stay intact. The bar for patenting does not disappear. It just stops moving based on unpredictable judicial interpretation.

The bill faces organized opposition from parts of the medical community and large tech companies who benefit from the current regime.

If you are a CTO running innovation programs in AI, biotech, or medical devices, this is worth tracking. Patent strategies that have been off the table since 2014 could reopen.

What would PERA mean for your company's IP strategy?
Seth Cronin tweet media
English
0
0
1
19
Ethan Mollick
Ethan Mollick@emollick·
The replies here are terrible meaning-shaped slop. Don’t bother reading them (sorry, couple of good human commenters)
Ethan Mollick tweet media
English
20
0
74
13.1K
Ethan Mollick
Ethan Mollick@emollick·
Big deal paper here: field experiment on 515 startups, half shown case studies of how startups are successfully using AI. Those firms used AI 44% more, had 1.9x higher revenue, needed 39% less capital: 1) AI accelerates businesses 2) The challenge is understanding how to use it
Ethan Mollick tweet mediaEthan Mollick tweet media
Hyunjin Kim@hyunjinvkim

🚨 Excited to share a new working paper! 🚨 AI can improve individual tasks. But when does it improve firm performance? Our paper proposes one key friction firms face: the "mapping problem" -- discovering where and how AI creates value in a firm's production process. 🧵1/

English
115
162
1K
354.6K
Seth Cronin
Seth Cronin@SethCronin·
I have been getting a lot of calls lately. The question is always some version of: "How do I make money from my patents?" Here is what I tell every caller.

First, know who is buying. The active buyers right now are licensing companies, PE-backed IP funds, and strategic acquirers building defensive portfolios. Each has different expectations for what they will pay and why.

Second, know what they want. Broad claims, clear evidence of use in the market, and remaining life on the term. Narrow claims with 3 years left are not attracting serious offers.

Third, do not spend money until you have done the basics. Before you hire a broker or licensing firm: Map your claims to products actually in the market. Identify who is using your technology and how. Assess claim strength against prior art. Get a realistic valuation from comparable transactions.

Too many patent holders skip this work, hire an expensive intermediary, and wonder why nothing sells. The companies that succeed at monetization do their homework first. They build relationships with buyers over time. They present clean, well-documented portfolios with clear evidence of use.

Patent monetization is real. But it is a strategy, not a lottery ticket.

Thinking about monetizing your portfolio? Start with the research.
Seth Cronin tweet media
English
0
0
0
10
Seth Cronin
Seth Cronin@SethCronin·
The problem with OpenAI and prompt caching has nothing to do with the architecture of whether or not you're using the Agent SDK. It's actually how the harness itself is intended to be used. Your prompt is cached for a maximum of 1 h. If you have an OpenAI agent running a cron job passing 20k to 100k tokens every 1.5 h, literally zero of those tokens are cached.
English
1
0
0
131
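The cache-miss arithmetic in the post above can be sketched in a few lines. The 1 h cache lifetime and 1.5 h cron interval are the figures from the post; the function name and the simplified all-or-nothing model are illustrative assumptions, not OpenAI's actual caching implementation (real TTLs and eligibility rules may differ).

```python
# Sketch: a cron job that fires less often than the prompt-cache TTL
# never gets cache hits, so every run pays full price for every token.

CACHE_TTL_SECONDS = 60 * 60        # assumed cache entry lifetime: 1 hour
CRON_INTERVAL_SECONDS = 90 * 60    # job fires every 1.5 hours

def cached_tokens_reused(prompt_tokens: int,
                         ttl: int = CACHE_TTL_SECONDS,
                         interval: int = CRON_INTERVAL_SECONDS) -> int:
    """Return how many of the prompt's tokens hit the cache on the next run."""
    # If the next request arrives after the cache entry expired,
    # every token is reprocessed from scratch.
    return prompt_tokens if interval <= ttl else 0

print(cached_tokens_reused(100_000))  # 0: the 1.5 h cron always misses the 1 h cache
```

Under this model the fix is either shortening the cron interval to under the TTL or accepting uncached pricing for the recurring prompt.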
dex
dex@dexhorthy·
Concerning re: Anthropic. The previous narrative just went out the window.

Reports of openclaw usage with the plain SDK (as was supposedly permitted) now being blocked based on system prompt, even if using the Claude Agent SDK.

I was previously a little to the Anthropic side of the spectrum on this because of EXACTLY one argument: “third party harnesses don’t use caching properly, can’t be controlled with feature flags, etc.”

If they are blocking use of the Claude Agent SDK wholesale in openclaw, then this completely invalidates that argument and I desire an answer as to what is allowed and why.

I am disappointed that the communications thus far have failed to articulate the reasons here, and it does make it harder to trust whatever they say next. However I will maintain cautious optimism that there is a good explanation for all this beyond the cheap “rug pull” / “evil” / “kill all the startups” jeers
dex@dexhorthy

like I’ve said a few times, well within TOS to do this, they built the model, if they wanna give you inference at pennies on the dollar on the condition that you use their harness, great, they have the right to do this.

On this topic in particular, I don’t understand the “evil” or “rugpull” jeers. There was never any promise to give people cheap inference. Before the Claude Code max plan we were all paying per token to use this stuff. And we were more or less happy to do it (sure the VC funding helps).

Every enterprise I know pays per token because when you use subsidized inference, YOU are the product. “Have some cheap code, in exchange for helping to train the next gen of models.” You can hate on that particular behavior if you want but nobody is making you take part in that particular market dynamic.

Do I wanna see a world where model companies take some of their massive financial gains and use that to pull everybody up? Of course. I hope it happens some day.

An allegory perhaps: if a public e-bike company gave you a subscription on rides and you proceeded to go around ripping out batteries and sticking them in your own bike to ride around town, you’d get banned for that too. Especially if your bike was poorly wired and overloaded the batteries / caused them to flame up etc. Banning that behavior would deliver far better results for the people who were using the system as designed

English
52
26
443
142.9K
Seth Cronin
Seth Cronin@SethCronin·
My eternal quest for agentic email organization continues. So far Claude Code using PowerShell COM on classic Outlook with an inbox-zero skill taking notes in Obsidian is the complex/brittle stack that is keeping my inbox from being a living hell. It’s only about 70% accurate though and still requires a ton of babysitting.
English
0
0
0
108
Allie K. Miller
Allie K. Miller@alliekmiller·
My hot take is that this is only a 15% productivity gain for M365 users. M365 read connectors are a great start. As is Claude computer use for Windows. But computer use is too slow (on purpose) for actual inbox triage. And this connector only really has Read access. So yes it can access and search and read and gather and synthesize and analyze…but that’s not TASK completion in these tools. That doesn’t let me delegate any email management to my AI system. Give me the power to manage, edit, write, draft, send, and then we can talk. MSFT is clearly dipping its toes in the Anthropic waters more. Here’s to hoping they crack enterprise-secure actions beyond search, find, and read.
Claude@claudeai

Microsoft 365 connectors are now available on every Claude plan. Connect Outlook, OneDrive, and SharePoint to bring your email, docs, and files into the conversation. Get started here: claude.ai/customize/conn…

English
50
12
142
42.4K
Seth Cronin
Seth Cronin@SethCronin·
Eight European countries just weakened their IP protections. Meanwhile, the UAE and Malaysia are strengthening theirs. The US Chamber's 2026 International IP Index tells a story most executives are not tracking.

The EU's December 2025 pharmaceutical legislation expanded the Bolar exemption and cut data protection periods. Eight member states saw their IP scores drop as a result. At the same time, UAE, Ecuador, Malaysia, and Brunei posted the largest score gains in the index, reflecting deliberate IP modernization efforts.

The US retained its number one ranking at 95.15%, but scored fractionally lower than 2025. The report flagged "march-in rights" concerns as a downside risk.

Why this matters for your filing strategy: the jurisdictions where your patents are strongest today may not stay that way. IP protection is not static. It is a policy choice, and governments are making different choices than they did five years ago.

If you have a global patent portfolio, three questions worth asking:
1. Are you filing in jurisdictions that are strengthening IP protection?
2. Are your European patents in sectors affected by the new pharma legislation?
3. When did you last review your geographic filing strategy?
Seth Cronin tweet media
English
0
0
0
21
Seth Cronin
Seth Cronin@SethCronin·
Nike won a jury verdict against Lululemon for patent infringement. Then a federal judge wiped it out.

The $355,450 in damages? Gone. The Flyknit patent (US 8,266,749)? Invalid. The reason? Obviousness. Judge Subramanian ruled that the patent claims were obvious in light of prior art, overturning the jury's finding entirely. This came a full year after Nike won at trial.

Let that sink in. You can invest years in litigation, convince a jury you are right, win a damages award, and still lose everything on a post-trial motion.

This is not an edge case. Obviousness challenges succeed more often than most patent holders realize, especially when prior art was not thoroughly vetted before enforcement.

Three lessons from this case:
1. Stress-test your claims against prior art before you litigate, not after.
2. A jury win is not a final win. Post-trial motions can reverse outcomes entirely.
3. The cost of enforcing a patent that gets invalidated is worse than never enforcing at all.

When did you last stress-test your most valuable patents?
Seth Cronin tweet media
English
0
0
0
39
Andrea
Andrea@acolombiadev·
You don’t need Obsidian if an agent is indexing a private GitHub repo. The wiki is just markdown files. Any agent with repo access can read, write, and maintain it. GitHub renders .md natively. Your agent handles the rest.
Andrej Karpathy@karpathy

LLM Knowledge Bases

Something I'm finding very useful recently: using LLMs to build personal knowledge bases for various topics of research interest. In this way, a large fraction of my recent token throughput is going less into manipulating code, and more into manipulating knowledge (stored as markdown and images). The latest LLMs are quite good at it. So:

Data ingest: I index source documents (articles, papers, repos, datasets, images, etc.) into a raw/ directory, then I use an LLM to incrementally "compile" a wiki, which is just a collection of .md files in a directory structure. The wiki includes summaries of all the data in raw/, backlinks, and then it categorizes data into concepts, writes articles for them, and links them all. To convert web articles into .md files I like to use the Obsidian Web Clipper extension, and then I also use a hotkey to download all the related images to local so that my LLM can easily reference them.

IDE: I use Obsidian as the IDE "frontend" where I can view the raw data, the compiled wiki, and the derived visualizations. Important to note that the LLM writes and maintains all of the data of the wiki, I rarely touch it directly. I've played with a few Obsidian plugins to render and view data in other ways (e.g. Marp for slides).

Q&A: Where things get interesting is that once your wiki is big enough (e.g. mine on some recent research is ~100 articles and ~400K words), you can ask your LLM agent all kinds of complex questions against the wiki, and it will go off, research the answers, etc. I thought I had to reach for fancy RAG, but the LLM has been pretty good about auto-maintaining index files and brief summaries of all the documents and it reads all the important related data fairly easily at this ~small scale.

Output: Instead of getting answers in text/terminal, I like to have it render markdown files for me, or slide shows (Marp format), or matplotlib images, all of which I then view again in Obsidian. You can imagine many other visual output formats depending on the query. Often, I end up "filing" the outputs back into the wiki to enhance it for further queries. So my own explorations and queries always "add up" in the knowledge base.

Linting: I've run some LLM "health checks" over the wiki to e.g. find inconsistent data, impute missing data (with web searches), find interesting connections for new article candidates, etc., to incrementally clean up the wiki and enhance its overall data integrity. The LLMs are quite good at suggesting further questions to ask and look into.

Extra tools: I find myself developing additional tools to process the data, e.g. I vibe coded a small and naive search engine over the wiki, which I both use directly (in a web UI), but more often I want to hand it off to an LLM via CLI as a tool for larger queries.

Further explorations: As the repo grows, the natural desire is to also think about synthetic data generation + finetuning to have your LLM "know" the data in its weights instead of just context windows.

TLDR: raw data from a given number of sources is collected, then compiled by an LLM into a .md wiki, then operated on by various CLIs by the LLM to do Q&A and to incrementally enhance the wiki, and all of it viewable in Obsidian. You rarely ever write or edit the wiki manually; it's the domain of the LLM. I think there is room here for an incredible new product instead of a hacky collection of scripts.

English
23
6
219
50.8K