Kasper Saugmann

8.1K posts

Kasper Saugmann

@kaspers

Tech journalist and entrepreneur. Words in @borsendk @frihedsbrevet @weekendavisen @videnskabdk, and more. Former co-founder @denuafhaengige (acq)

Copenhagen, Denmark · Joined May 2008
1.1K Following · 1.5K Followers
Jens
Jens@Jens57348198·
@ammitzbollbille @PeterMAstrup Maybe because Danish taxpayers will also end up paying to promote her for such applications - just as they did in 2023-2024. So of course it's relevant.
Danish
1
0
2
97
Peter Astrup
Peter Astrup@PeterMAstrup·
If I were still a journalist heading to the Social Democrats' press conference at 12:15, I would ask:
- If you become prime minister again, can you guarantee that you won't leave the post for a top job abroad?
- Much suggests there were very few drones, and perhaps none at all. Why did you say we were "under attack" by a "capable actor"?
- For years you have said that Danes need to work more, because the Russians don't go home early from the assembly line and otherwise it all doesn't "add up" - but now, three days before the election, Danes should suddenly have a right to part-time work? What?
Danish
17
21
238
13.5K
@levelsio
@levelsio@levelsio·
@Suhail What is about to come? Why are you vagueposting?
English
14
0
636
93.3K
Priyanka Vergadia
Priyanka Vergadia@pvergadia·
JUST DROPPED: Anthropic's research proves AI coding tools are secretly making developers worse.
"AI use impairs conceptual understanding, code reading, and debugging without delivering significant efficiency gains." -- That's the paper's actual conclusion.
17% score drop when learning new libraries with AI. Sub-40% scores when AI wrote everything. Zero measurable speed improvement.
→ Prompting replaces thinking, not just typing
→ Comprehension gaps compound — you ship code you can't debug
→ The productivity illusion hides until something breaks in prod
Here's why this changes everything: Speed metrics look fine on a dashboard. Understanding gaps don't show up until a critical failure, and when they do, the whole team is lost.
Forcing AI adoption for "10x output" is slow-burning technical debt nobody is measuring.
Full paper: arxiv.org/abs/2601.20245
English
154
620
2.3K
274.9K
Kasper Saugmann
Kasper Saugmann@kaspers·
@drapersgulld I love Claude Code and it was actually my first choice for the task, but after it failed miserably for ten minutes in fetching simple data, I gave Lovable a shot and it worked right away.
English
0
0
0
41
Kasper Saugmann
Kasper Saugmann@kaspers·
@pvergadia Why not say both points out loud? The pitfalls and how to achieve enhanced performance?
English
0
0
0
49
Priyanka Vergadia
Priyanka Vergadia@pvergadia·
I work with enterprises and unfortunately <10% of users actually use these tools in efficient ways where they generate and comprehend. 90% use them like another ChatGPT or Claude with bad prompts! After more than 6 months of training and comprehensive change management we get maybe 30-40% to a good usage pattern. This is exactly the point that needs to be said out loud, because the reality on the ground in real enterprises is very different from what we read on social media! These are real humans trying to work in an entirely different way; building new habits takes effort and time and lots of leadership support.
Kasper Saugmann@kaspers

@pvergadia Nah, it dropped six weeks ago, and the paper's conclusion is more nuanced than that. It depends on how you use it. If you use it correctly (Generation-Then-Comprehension), it will yield even better results than going without. Didn't you read the paper?

English
5
4
12
3.2K
Kasper Saugmann
Kasper Saugmann@kaspers·
@AdamBartas I just used Lovable and Claude Code for the same task. CC scrambled for ten minutes, Lovable just worked
English
0
0
2
916
Adam Barta
Adam Barta@AdamBartas·
First sign that Lovable is dead. Pivoting to general assistant is the most "investor-pleasing" move you could make. Their app-building business is obviously going nowhere and investor money is drying up. Why should anyone use Lovable instead of the already established ecosystems?
Anton Osika – eu/acc@antonosika

Introducing Lovable for more general tasks. Lovable has always been for building apps. Today it also becomes your data scientist, your business analyst, your deck builder, and your marketing assistant. This is a big step toward what Lovable is becoming: a general-purpose co-founder that can do anything. See examples below.

English
180
25
1.2K
214.4K
Kasper Saugmann
Kasper Saugmann@kaspers·
@GaryMarcus It depends on how you use it, as the paper says. The "Generation-Then-Comprehension" outperforms all other uses, including no usage. Didn't you read the paper?
English
0
0
5
320
Sam Altman
Sam Altman@sama·
I have so much gratitude to people who wrote extremely complex software character-by-character. It already feels difficult to remember how much effort it really took. Thank you for getting us to this point.
English
4.5K
2.2K
35.8K
5.5M
Kasper Saugmann
Kasper Saugmann@kaspers·
@pelledragsted If you believe you could run a bank better than Nordea, don't you then almost owe it to society to take the gamble of starting something new and showing them how it's done?
Danish
0
0
1
132
Pelle Dragsted
Pelle Dragsted@pelledragsted·
It's absurd. The banking giant Nordea posted a profit of over 30 billion last year. Now they're cutting 1,500 jobs. This is, among others, the company Venstre wants to shower with gold in its economic plan. A tax cut of 252 million - a quarter of a billion - is what Venstre thinks Nordea deserves. I would rather support the worker, the pensioner, and the family with children than the big banks. #dkpol dr.dk/nyheder/senest…
Danish
86
20
164
46.7K
Naval
Naval@naval·
Coding an app is the new starting a podcast.
English
1.6K
2.4K
27.3K
2.8M
Gary Marcus
Gary Marcus@GaryMarcus·
Misleading summary. Should be deleted. Altman doesn’t say a (known) new architecture is coming; he says he anticipates one will come someday. PS: I also think we need something radical and new. In fact that’s what I’ve been saying for the last decade. Excess focus on exploiting LLMs has likely delayed discovery.
Rohan Paul@rohanpaul_ai

Sam Altman just said in his new interview, that a new AI architecture is coming that will be a massive upgrade, just like Transformers were over Long Short-Term Memory. And also now the current class of frontier models are powerful enough to have the brainpower needed to help us research these ideas. His advice is to use the current AI to help you find that next giant step forward. --- From 'TreeHacks' YT Channel (link in comment)

English
21
22
165
16.9K
tmuxvim
tmuxvim@tmuxvim·
has anyone else noticed that GPT-5.4 often ends its responses with, like, clickbait? it often promises to reveal "the one surprising X that will do Y" or something like that
English
612
80
7.1K
418.2K
Egan Peltan
Egan Peltan@EganPeltan·
Ok this is ridiculous. Everything here could have been done without ChatGPT.
1. The dog is on conventional immunotherapy with the mRNA vax
2. It appears the mRNA vaccine started WITH ICI, so we can’t know if the vax had ANY additional effect
3. The team can’t say what ~~AI~~ identified the neoantigens (no, not AF3). It sounds like they used existing sequence-homology workflows.
4. No evidence of an antigen-specific effect from the mRNA vax - the authors need to prove this before anyone can believe the neoantigen selection had any effect
5. The in-kind contributions here are ~$20-50k. Custom cancer vax isn’t cheap
Custom mRNA cancer vaccines have been in development for years! None have been clear, resounding successes (yet). Once we have Phase 3 PFS/OS, not N=1 anecdotes, we can start having arguments about people being denied access to effective treatments by unnecessary regulation.
Right now, the only thing people are denied is the opportunity to fork over $50k for hope and dreams. Should we really make it easier for people exploiting desperation to sell unproven remedies?
If you don’t want to think before you tweet, ask the fucking AGI whether any of these claims are remotely plausible and what evidence you’d need to believe them
vittorio@IterIntellectus

this is actually insane
> be tech guy in australia
> adopt cancer riddled rescue dog, months to live
> not_going_to_give_you_up.mp4
> pay $3,000 to sequence her tumor DNA
> feed it to ChatGPT and AlphaFold
> zero background in biology
> identify mutated proteins, match them to drug targets
> design a custom mRNA cancer vaccine from scratch
> genomics professor is “gobsmacked” that some puppy lover did this on his own
> need ethics approval to administer it
> red tape takes longer than designing the vaccine
> 3 months, finally approved
> drive 10 hours to get rosie her first injection
> tumor halves
> coat gets glossy again
> dog is alive and happy
> professor: “if we can do this for a dog, why aren’t we rolling this out to humans?”
one man with a chatbot, and $3,000 just outperformed the entire pharmaceutical discovery pipeline. we are going to cure so many diseases. I dont think people realize how good things are going to get

English
50
58
412
69.8K
Kasper Saugmann
Kasper Saugmann@kaspers·
@rohanpaul_ai Nope, what he said was: "I bet there is another new architecture to find that is gonna be as big of a gain as Transformers were over LSTMs." Every day people bet that stuff is coming that never materializes. But I hope he'll be right!
English
1
0
13
1.2K
Rohan Paul
Rohan Paul@rohanpaul_ai·
Sam Altman just said in his new interview, that a new AI architecture is coming that will be a massive upgrade, just like Transformers were over Long Short-Term Memory. And also now the current class of frontier models are powerful enough to have the brainpower needed to help us research these ideas. His advice is to use the current AI to help you find that next giant step forward. --- From 'TreeHacks' YT Channel (link in comment)
Rohan Paul@rohanpaul_ai

Morgan Stanley predicts a massive AI breakthrough driven by a huge spike in computing power across major U.S. laboratories. Increasing the amount of hardware used for training by 10x can effectively double the intelligence of these models. The recently released GPT-5.4 Thinking model already matches human experts on professional tasks with a score of 83% on the GDPVal benchmark.

The biggest hurdle for this growth is an energy crisis, with the U.S. power grid facing a shortfall of 18 gigawatts by December-28. To keep running, developers are bypassing the grid by taking over Bitcoin mining sites and using natural gas turbines for their AI factories. This shift is creating a solid investment cycle where 15-year leases on data centers generate high financial yields for every watt consumed.

Large companies are already reducing their staff numbers because these new AI tools can perform professional work for a tiny fraction of the cost. Researchers expect AI to begin recursive self-improvement by June-27, meaning the software will autonomously upgrade its own code without human help. The future economy will likely treat raw intelligence as a commodity that is manufactured by these massive computing and energy clusters.

English
119
76
761
651.8K