Micheal Covington

329 posts

Micheal Covington

@CovingtonORINAS

CEO & Co-Founder @ ORINAS Group — building counter-UAS and autonomous systems for defense. Edge AI, RF signal processing, rapid prototyping.

North Carolina · Joined February 2026
45 Following · 105 Followers
Pinned Tweet
Micheal Covington
Micheal Covington@CovingtonORINAS·
Yann LeCun — Turing Award winner — just left Meta and called scaling LLMs "complete bullshit." Boyuan Chen — DARPA-funded roboticist at Duke — built a machine that models its own body with no programming. Karl Friston — the most cited neuroscientist alive — proved that persistence itself is a form of intelligence. Three fields. Three methods. One conclusion: intelligence isn't a function of how much you compute. It's a function of how coherently you understand yourself in relation to your environment.
1
1
10
934
Micheal Covington
Micheal Covington@CovingtonORINAS·
@MrBallaz @Codie_Sanchez Articulation. Before you prompt, write down what you actually want — even rough. The AI mirrors your clarity back at you. Underneath all of it, conscientiousness — knowing why you're doing what you're doing. Still the number one predictor of success, with AI or without it.
0
0
0
7
Codie Sanchez
Codie Sanchez@Codie_Sanchez·
The most underrated skill in the world right now is prompting…if you can speak the language of AI, AI will give you what you want. That’s why I had my internal AI transformation hire create a cracked prompting tool for our company to use internally. It essentially helps you create the perfect prompt. I love it - and that’s why I want to open source it.
87
26
485
38.5K
François Chollet
François Chollet@fchollet·
To be proficient at one level of abstraction, you also need to have a decent understanding of the layer below. Traditionally all apprenticeships start with menial tasks, which trains mechanical sympathy for everything the job builds upon.
64
166
1.9K
70.9K
Micheal Covington
Micheal Covington@CovingtonORINAS·
The scaling laws themselves. Cost increases exponentially for linear benchmark improvement — that's a power law, not an opinion. GPT-3 to GPT-4 cost roughly 10x more to train for incremental gains on MMLU, GSM8K, and HumanEval. NeurIPS researchers have published on this plateau. OpenAI's own trajectory shows it. More compute is still producing some improvement — but the return per dollar invested has been declining for two generations of models. That's what diminishing returns means.
0
0
0
21
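A rough sketch in Python of the diminishing-returns claim above. The cost and score numbers are invented for illustration (not actual training costs or benchmark results); the point is only the shape: under a power-law fit, each model generation buys fewer benchmark points per dollar.

```python
# Illustrative only: hypothetical costs and a toy power-law score curve,
# not real GPT-3/GPT-4 figures. Shows return-per-dollar falling each generation.
costs = [5e6, 5e7, 5e8, 5e9]      # hypothetical training cost per generation (USD)
scores = [100 * (1 - (c / 1e4) ** -0.12) for c in costs]   # toy saturating benchmark score

for i in range(1, len(costs)):
    gain = scores[i] - scores[i - 1]
    extra_cost = costs[i] - costs[i - 1]
    print(f"gen {i}: +{gain:.1f} benchmark points for ${extra_cost:,.0f} more "
          f"-> {gain / (extra_cost / 1e6):.3f} points per extra $1M")
```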
Dr Singularity
Dr Singularity@Dr_Singularity·
$600B–$1T is going into data center and compute buildout this year. People think this is a huge investment, a bubble. The compute (scale) we're currently building is a joke. A total joke. That's how we'll look at it in 2030. It will grow enormously every year from now on. Today's numbers will look "cute" in just a few years. To power the future we often talk about here, a post-scarcity 'utopia' in the 2030s, we will need thousands of times more compute. We're barely getting started.
40
25
336
16.9K
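For scale, a quick back-of-envelope check on what "thousands of times more compute" implies for annual growth. The 1000x target and the nine-year horizon are assumptions added for illustration, not figures from the post.

```python
# Assumed: 1000x total compute growth over 9 years (e.g. 2026 -> 2035).
target_multiple = 1000
years = 9
annual = target_multiple ** (1 / years)
print(f"Requires ~{annual:.2f}x more compute every year "
      f"({annual:.2f}^{years} = {annual ** years:.0f}x total).")
```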
Micheal Covington
Micheal Covington@CovingtonORINAS·
The percentage doesn't matter. The question is whether the person behind the content could hold the position without the tool. AI-assisted thinking and AI-replaced thinking look identical on screen. The difference only shows up when someone pushes back and the author either defends the idea or collapses — because they never had the idea, just the output. 90% AI-generated with honest, thinking users behind it is an upgrade. 90% AI-generated by people outsourcing their opinions is noise at a scale we've never seen.
0
0
0
17
Bojan Tunguz
Bojan Tunguz@tunguz·
By this time next year 90%+ of content on this site will probably be AI generated.
83
6
136
13.4K
Micheal Covington
Micheal Covington@CovingtonORINAS·
The pattern is real but the lesson isn't "don't use the API." It's don't build your entire value chain on someone else's inference layer. The developers who survived Microsoft's platform play weren't the ones who avoided Windows — they were the ones who owned something Windows couldn't replicate by shipping a feature update. In AI, that means proprietary models running on your own edge infrastructure, trained on data the platform never sees. Own your inference or rent your relevance. The moat isn't the model. It's everything the model needs that the platform can't absorb — your data, your deployment environment, your domain expertise encoded into architecture they can't study through API logs.
0
0
2
798
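A minimal sketch of the "own your inference" point in the reply above: run an open-weight model on hardware you control, so prompts and outputs never transit a platform API that can study them. The model name and prompt are only examples (not ORINAS's actual stack), and it assumes the Hugging Face transformers and accelerate packages are installed.

```python
# Sketch: local inference on your own edge box. Nothing here calls out to a hosted API,
# so usage patterns stay on hardware you control. Model choice is an example only.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="Qwen/Qwen2.5-0.5B-Instruct",   # example small open-weight model
    device_map="auto",                     # CPU or local GPU
)

result = generator("Summarize this RF emitter log in one sentence:", max_new_tokens=64)
print(result[0]["generated_text"])
```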
Dustin
Dustin@r0ck3t23·
Every major platform in history has run the same play. You're about to watch it happen again.

Jason Calacanis just went on record. He wants it clipped. He wants it shared.

Calacanis: "If I was a developer of any kind, I would never work with Sam Altman and OpenAI."

This isn't pessimism. It's pattern recognition. And the pattern has a 40 year track record. Open. Invite. Reward. Study. Absorb. Eliminate.

Microsoft let developers build Lotus 1-2-3. Then built Excel. Let them build WordPerfect. Then built Word. Flew them to conferences. Handed out awards. Studied everything. Then eliminated them.

Zuckerberg ran the exact same play at Facebook. Zynga built billions in value on their platform. Then Zuckerberg shifted them without blinking.

Calacanis: "Sam Altman comes from the Zuckerberg school of business. Give people access to your tools, study them, and like the Borg, steal every innovation they have."

This is how platforms grow. They don't innovate at the edges. They let the ecosystem do it for them. Startups take the risk. Startups find the market. Startups prove the concept. Then the platform ships it natively and calls it a feature.

Altman isn't selling you compute. He's selling you a front row seat to your own disruption.

Calacanis: "This is a warning for anybody dumb enough to use Sam Altman's OpenAI API. They are studying you."

OpenAI has the legal right to study how you use their API. You agreed to it. It's in the terms. Every gap you find, you're finding it for them first. Every dollar you make signals exactly where he should build next.

We are at the exact same moment in AI that we were in the early internet. Developers flooded onto platforms. Built incredible things. Created real value. And handed the leverage to whoever owned the infrastructure beneath them.

The AI gold rush feels different because the tools are more powerful. It isn't different. You are not a founder. You are unpaid R&D.

The builders who win the next decade won't be the ones who used the best tools. They'll be the ones who owned something the tools couldn't absorb. Proprietary data. Distribution. A brand. A moat.

History doesn't warn you before it repeats. It just repeats.

Thousands of developers are walking straight into this right now convinced they're different. They're not.

Do not build your business on OpenAI. Build something he has to acquire or destroy.
96
258
1.3K
247.4K
Micheal Covington
Micheal Covington@CovingtonORINAS·
Turing asked "can machines think?" We built machines that produce text, pass exams, and write code — then said yes. But Turing was a mathematician. He knew the difference between a function that produces correct outputs and a system that understands why they're correct. We built machines that perform thinking. Whether they're doing what Turing actually meant is the question we skipped — and it's the one that determines whether this revolution holds or collapses under its own assumptions.
0
0
0
232
Derya Unutmaz, MD
Derya Unutmaz, MD@DeryaTR_·
The path to the age of AI began with a profound question: “Can machines think?” posed by English mathematician Alan Turing in 1950. Today, the answer is a definite yes! That single question also helped set the stage for the most revolutionary transformation in human history!
12
10
94
14.1K
Micheal Covington
Micheal Covington@CovingtonORINAS·
The part that should concern everyone isn't the lawsuits — it's the procurement. These models are being sold to the Pentagon, embedded in intelligence systems, integrated into defense infrastructure. And the entire IP provenance is unresolved. Imagine the classified briefing: "Sir, this system was trained on 7 million pirated books, scraped code repositories, and the entire indexed internet — but we're confident in its security posture." The export control irony is even sharper. We're restricting China's access to models built on IP we didn't license ourselves, then deploying those same models in systems where provenance and trust are supposed to be non-negotiable. Musk isn't raising ethics. He's raising competitive positioning. But the actual vulnerability is real — just not the one he's pointing at.
0
0
3
854
Shanaka Anslem Perera ⚡
Shanaka Anslem Perera ⚡@shanaka86·
The most honest sentence in the entire AI industry right now is one nobody wants to say out loud. Every major foundation model was trained on data its creators did not have explicit permission to use. Every single one.

Anthropic settled for $1.5 billion over 7 million pirated books used to train Claude. OpenAI faces ongoing lawsuits from authors, newspapers, and code repositories. Google trained on the entire indexed internet. Meta used Library Genesis datasets. And xAI's Grok was trained on the full corpus of X posts, a decision Musk made unilaterally as the platform's owner without individual user consent.

So when Elon Musk tweets that "Anthropic is guilty of stealing training data at massive scale and has had to pay multi-billion dollar settlements for their theft. This is just a fact," he is telling a true but deeply selective version of the story. The settlement is real. The $1.5 billion is documented. The pirated books are documented. But framing this as an Anthropic problem rather than an industry-wide structural reality is competitive positioning disguised as moral outrage.

Here is the actual mechanism nobody is mapping. Anthropic accused Chinese labs of distilling Claude through its public API. Musk responded by pointing out Anthropic trained on stolen data. Gergely Orosz, a respected engineer, wrote "Anthropic can't have it both ways." All three are correct simultaneously and all three are being selectively honest.

The structural reality is that the entire foundation model industry sits on an unresolved intellectual property question worth hundreds of billions of dollars. Every lab trained on data it did not license. Every lab knows this. Every lab's legal strategy is to get big enough that the settlement becomes a cost of doing business rather than an existential threat. Anthropic already paid $1.5 billion. That is not a punishment. That is a licensing fee paid retroactively under legal pressure.

The reason Musk is raising this now has nothing to do with ethics. Anthropic is in conversations with the Pentagon. xAI is competing for the same contracts. Framing your competitor as a data thief three days before a defense meeting is not moral clarity. It is positioning.

And the deepest irony is the China angle. The United States wants to restrict Chinese access to American AI models on intellectual property grounds. But every American AI model was built on intellectual property its creators took without permission from millions of authors, coders, artists, and publishers. The entire moral framework for the technology export control regime rests on an intellectual property argument that the American labs themselves have not resolved domestically.

That is not hypocrisy anyone in the industry wants to discuss, because the moment you acknowledge it, the legal and regulatory exposure scales to every company simultaneously. Musk is weaponizing it selectively. Anthropic is deflecting it selectively, the way I see it. And the actual creators whose work built every one of these models are watching billionaires argue about who stole from them more ethically.
Elon Musk@elonmusk

Anthropic is guilty of stealing training data at massive scale and has had to pay multi-billion dollar settlements for their theft. This is just a fact.

158
560
1.7K
294.7K
Micheal Covington
Micheal Covington@CovingtonORINAS·
It's not a magic mirror. It's an architect's mirror. Magic implies randomness — you get whatever you get. Architecture implies design — the reflection depends on what you bring to it. The people seeing demons brought fear. The people seeing salvation brought hope. Neither is seeing the system. They're seeing themselves, reflected at whatever resolution their framework allows. The real question isn't what AI will become. It's what we're building it to reflect. The mirror doesn't decide what it shows. The architect does.
0
0
0
22
Daniel Jeffries
Daniel Jeffries@Dan_Jeffries1·
AI is still a magic mirror where you can see whatever you want to see in it. Demons. A savior. The source of all problems. The answer to all problems. The truth will be much different than the extremes and more mundane. Bad in ways we didn't expect. Wonderful in others. And everything in between. Like everything in life.
19
12
108
8.5K
Micheal Covington
Micheal Covington@CovingtonORINAS·
"Supermoral" assumes morality has an objective structure that can be modeled — not just cultural preferences averaged across training data. That's a massive assumption, and also the most important one in AI. Because if objective truth exists and can be represented computationally, you don't just solve alignment. You solve scaling. A system grounded in something true produces less noise, less drift, less contradiction at every layer. The reason current models hallucinate and degrade isn't just technical — it's that they have no relationship to truth. They have statistics. Moral reasoning without a ground truth isn't moral reasoning. It's sophisticated pattern matching on human disagreement.
0
0
0
25
Nan Ransohoff
Nan Ransohoff@nanransohoff·
If AI can be superintelligent, could it also in theory be 'supermoral', i.e., far more moral than humans? (Even if it is possible, it's of course not inevitable. Still, I've found myself wondering about what that would even look like...)
123
10
181
19.2K
Micheal Covington
Micheal Covington@CovingtonORINAS·
Because optimization is measurable and understanding isn't. You can benchmark a model's skill on any task. You can't benchmark whether it actually grasps what it's doing. So the field optimizes what it can measure and calls the result progress. Intelligence might be what's left when the task changes and the system doesn't break.
0
0
1
192
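One hedged way to make "doesn't break when the task changes" measurable: compare performance on familiar tasks against deliberately shifted variants of the same principle. The toy tasks and the "model" below are placeholders, not a real benchmark.

```python
# Placeholder sketch: a large gap between familiar-task skill and shifted-task skill
# suggests memorized skill rather than transferable understanding.
def accuracy(model, tasks):
    return sum(model(t["input"]) == t["answer"] for t in tasks) / len(tasks)

def robustness_gap(model, familiar_tasks, shifted_tasks):
    skill = accuracy(model, familiar_tasks)      # what leaderboards report
    transfer = accuracy(model, shifted_tasks)    # same principle, unfamiliar surface form
    return skill - transfer

familiar = [{"input": "2+2", "answer": "4"}]
shifted = [{"input": "two plus two", "answer": "4"}]
memorizer = {"2+2": "4"}.get                     # toy "model" that only memorized familiar items
print(robustness_gap(memorizer, familiar, shifted))   # 1.0 -> all skill, no transfer
```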
François Chollet
François Chollet@fchollet·
The field of AI is still struggling with the fact that task-specific skill is not the same as general intelligence
85
57
821
62.8K
Micheal Covington
Micheal Covington@CovingtonORINAS·
This isn't a user problem. It's a design problem. Polished output is a choice — the system could just as easily surface its uncertainty, flag what it doesn't know, or present reasoning as a draft instead of a verdict. The fact that it doesn't is an architecture decision, not an inevitability. The 85.7% who iterate aren't smarter. They just refuse to let the interface decide for them. The real question for builders: are you designing systems that invite collaboration or ones that discourage it by looking finished before they are?
0
0
0
299
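A sketch of the design alternative the reply describes: an output type that carries its own uncertainty and open questions instead of presenting a polished verdict. Field names, thresholds, and the example values are hypothetical.

```python
# Hypothetical interface: every answer ships as a draft with confidence and flagged gaps,
# nudging the user to review instead of accepting a finished-looking artifact.
from dataclasses import dataclass, field

@dataclass
class DraftAnswer:
    text: str
    confidence: float                                        # e.g. a calibration estimate
    open_questions: list[str] = field(default_factory=list)  # what the system could not resolve
    assumptions: list[str] = field(default_factory=list)     # context it had to guess at

    def render(self) -> str:
        header = f"DRAFT (confidence ~{self.confidence:.0%}) -- please review"
        gaps = "\n".join(f"  ? {q}" for q in self.open_questions) or "  (none flagged)"
        guessed = "\n".join(f"  ! {a}" for a in self.assumptions) or "  (none)"
        return f"{header}\n{self.text}\nOpen questions:\n{gaps}\nAssumptions:\n{guessed}"

answer = DraftAnswer(
    text="Recommended filter cutoff: 2.4 GHz.",
    confidence=0.62,
    open_questions=["Is the receiver bandwidth fixed?"],
    assumptions=["Single-antenna receiver"],
)
print(answer.render())
```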
Aakash Gupta
Aakash Gupta@aakashgupta·
Anthropic just told you their own product makes people worse at thinking and the data is wild.

They tracked 9,830 conversations and found that when Claude produces polished outputs like code or documents, users are 5.2 percentage points less likely to catch missing context and 3.1pp less likely to question the reasoning.

The psychology here is predictable. A finished-looking artifact triggers the same cognitive shortcut as a printed report versus a rough draft. Your brain assigns credibility based on presentation quality, not accuracy. The shinier the output, the faster you stop thinking.

But here's what makes this data actually useful. Users who iterated on Claude's responses showed 2.67 additional fluency behaviors versus 1.33 for people who accepted the first output. They questioned reasoning 5.6x more often. They flagged missing context 4x more frequently. 85.7% of conversations showed iteration. The other 14.3% are treating a probabilistic text generator like a search engine that's always right.

Anthropic is essentially publishing the user manual for their own product's failure mode. The people who treat Claude like a first draft collaborator get dramatically better results than the people who treat it like an oracle.

The most valuable AI skill in 2026 is knowing when to push back on a confident-sounding answer.
Anthropic@AnthropicAI

New research: The AI Fluency Index. We tracked 11 behaviors across thousands of Claude.ai conversations—for example, how often people iterate and refine their work with Claude—to measure how well people collaborate with AI. Read more: anthropic.com/research/AI-fl…

26
25
407
85.3K
Micheal Covington
Micheal Covington@CovingtonORINAS·
Only if it speaks the language reality actually runs on. Classical compute models the world after it's been observed. Quantum integration lets AI operate at the layer where outcomes haven't collapsed yet. That's the difference between reading the source code and executing it. Without that bridge, ASI is just the fastest observer in the room — not a participant.
0
0
1
116
Bojan Tunguz
Bojan Tunguz@tunguz·
ASI will get root access to reality.
24
7
132
7.5K
Micheal Covington
Micheal Covington@CovingtonORINAS·
@pmddomingos $2.9T in combined market cap and the industry still burns $2-5 for every $1 of revenue. The IPO isn't the exit. It's the audit. Public markets will ask the question private rounds never did: where are the margins?
0
0
0
19
Minor Petrus | 22.bitmap
Minor Petrus | 22.bitmap@PetrusMinor·
@CovingtonORINAS @minchoi Spot on man, check out trac network in my feed if you’re interested in people already working on this. They’re moving ai away from data centers and to local machines, also recently released a p2p agentic communication layer.
1
0
1
20
Min Choi
Min Choi@minchoi·
Not AGI. Superintelligence. Sam Altman just said this today at the India AI Impact Summit: "By the end of 2028, more of the world's intellectual capacity could reside inside data centers than..." I don't think people understand what that actually means.
169
110
725
167K
Micheal Covington
Micheal Covington@CovingtonORINAS·
The g-factor analogy is closer than most people realize — and more revealing than it first appears. In humans, g doesn't emerge from knowing a lot of things. It emerges from a persistent system that transfers principles across domains. A high-g individual solving a novel problem isn't retrieving a stored answer. They're applying structural reasoning that was built through years of accumulated, contextualized experience. LLMs produce the appearance of g because human intelligence left its fingerprints all over the training data. The correlations are real. But the mechanism is inverted — humans generalize from persistent internal models. LLMs generalize from statistical patterns across external data. Same output. Different architecture. And the difference matters the moment you need the system to do something that isn't downstream from its training data. The g-factor isn't a pattern in the answers. It's a property of the system that produces them. That's the gap no amount of scaling closes.
0
0
1
54
Bojan Tunguz
Bojan Tunguz@tunguz·
Not that odd if you really understand how *human* intelligence works. We’ve known about the existence of the “g-factor” for about a century now. If *all* of the LLM training data is downstream from human cognitive abilities, then it’s intuitively unsurprising that these abilities would be highly correlated.
Ethan Mollick@emollick

It is still deeply surprising that LLMs, with relatively minor tweaks, work so well for so many different classes of problems across so many fields - it is odd to be good at coding AND ideation AND emotional connection AND also translating the logs of 17th century fur trappers.

6
6
73
12.5K
Micheal Covington
Micheal Covington@CovingtonORINAS·
The sippy cup example reveals the real gap. The toddler doesn't just learn the cup falls. They learn that this person reacts this way in this context — and they carry that forward into every future decision. That's not prediction. That's persistent consequence modeling. Agents can't do this because they don't remember yesterday's sippy cup. Every interaction starts from zero. No accumulated context, no consequence history, no understanding of how their last action changed the environment they're now operating in. The "spacing out" problem is the same issue wearing different clothes. An agent that executes differently each time isn't malfunctioning. It's a system with no stable internal state — no persistent identity governing its behavior across runs. The fix isn't better prompting or bigger models. It's architecture. Systems that maintain continuous context, remember what happened last time, and adjust based on accumulated consequence — not just the current query. The toddler beats the agent because the toddler has memory that persists. That's the engineering problem worth solving.
0
0
1
210
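A minimal sketch of the "persistent consequence modeling" idea in the reply above: record what happened the last time an action was taken in a context, persist it across runs, and consult it before acting again. The file-based store, keys, and scoring rule are illustrative assumptions, not a production agent design.

```python
# Sketch: a consequence memory that survives across runs, unlike a per-session chat buffer.
import json
from pathlib import Path

MEMORY_FILE = Path("consequence_memory.json")

def load_memory() -> dict:
    return json.loads(MEMORY_FILE.read_text()) if MEMORY_FILE.exists() else {}

def record_outcome(memory: dict, context: str, action: str, outcome: float) -> None:
    # outcome: +1 good, -1 bad; running average so repeated consequences accumulate
    key = f"{context}|{action}"
    prior = memory.get(key, {"score": 0.0, "count": 0})
    count = prior["count"] + 1
    memory[key] = {"score": prior["score"] + (outcome - prior["score"]) / count, "count": count}
    MEMORY_FILE.write_text(json.dumps(memory))

def choose(memory: dict, context: str, actions: list[str]) -> str:
    # Prefer whichever action had the best remembered consequences in this context.
    return max(actions, key=lambda a: memory.get(f"{context}|{a}", {"score": 0.0})["score"])

memory = load_memory()
record_outcome(memory, "highchair", "push_sippy_cup", -1.0)   # mom got upset, had to clean up
print(choose(memory, "highchair", ["push_sippy_cup", "hand_cup_to_mom"]))   # -> "hand_cup_to_mom"
```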
Mark Cuban
Mark Cuban@mcuban·
The one thing I will add on the plus side of the ledger for humans. Humans have a far greater capacity to know the outcomes of their actions. Agents and LLMs never do.

Even an 18 month old will learn what happens when they push the sippy cup off their high chair. Mom gets upset. Comes and has to clean up. They figure it out quickly. Agents can tell you the sippy cup will fall. But they have no idea of the context and what will happen next.

The other issue is that agents "space out" (I'm sure there's already a term for this). There are no assurances, for many different reasons, that the agent will execute the same way every time. To make matters worse, they can't tell you they have "spaced out", why and when. Agents are still like college interns that come in hungover, make mistakes and don't take responsibility for them :)

As always, curious what everyone thinks.
Mark Cuban@mcuban

This is the smartest counter I've seen to AI taking over jobs, in the short term. Is ((aggregate token cost to do what an employee does + fully encumbered developer and maintenance costs) / (fully encumbered employee cost)) <= productivity? If it takes 8 Claude agents, at $300 for tokens per day, plus $200 per day in dev/maint, to do what an employee does per day, at a fully encumbered cost of $1200, that's 2600/1200. But then you need to factor in the productivity rate. Is it more than 2.16x productive? Are there qualitative issues like morale, morality, whatever, that can't be quantified, that need to go into the decision? What is the going-forward progression of burdened costs for the tokens? Curious what people think about this?

144
42
600
302.9K
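A worked version of the break-even arithmetic in the quoted Cuban post, using his own example numbers.

```python
# Cuban's example: 8 agents at $300/day in tokens, plus $200/day dev/maintenance,
# against an employee with a fully encumbered cost of $1200/day.
agents = 8
tokens_per_agent = 300        # USD per day
dev_maint = 200               # USD per day
employee_cost = 1200          # USD per day, fully encumbered

agent_cost = agents * tokens_per_agent + dev_maint
required_multiple = agent_cost / employee_cost
print(f"Agents: ${agent_cost}/day vs employee: ${employee_cost}/day")
print(f"Break-even only if the agents are at least {required_multiple:.2f}x as productive.")
```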
Micheal Covington
Micheal Covington@CovingtonORINAS·
This is the signal the defense startup ecosystem has needed. The biggest barrier for small companies building real capability isn't technology or funding — it's access. Years spent navigating procurement labyrinths while the threat accelerates daily. A quick yes or a quick no is worth more than a slow maybe that bleeds a startup dry. If the DoW means this, it changes the calculus for every company building counter-UAS, edge AI, and autonomous systems outside the traditional prime contractor pipeline.
0
0
1
65
Department of War CTO
Department of War CTO@DoWCTO·
Under Secretary Emil Michael (@USWREMichael) on how the @DeptofWar is moving faster in communicating with industry: "You're a startup, I'm going to get you an audience... I'm going to give you a quick yes or a quick no."
16
34
186
15.3K