Cornel

949 posts

@omer06800

Bringing ideas to life with dedication & hard work.

Turkey · Joined December 2022
21 Following · 29 Followers
Cornel
Cornel@omer06800·
I just wanted to make it easy to memorize stuff for the civil service exam with AI. Vibecoding is forcing me into map-wizardry!
Cornel tweet media
Replies 1 · Retweets 0 · Likes 0 · Views 5
Cornel
Cornel@omer06800·
@nikstankovic_ What do you mean focus on intonation? How can you do that without the tones?
Replies 0 · Retweets 0 · Likes 0 · Views 422
Nik Stankovic
Nik Stankovic@nikstankovic_·
It's not as hard as people think (I thought so too, before I started). Chinese grammar is super easy, even easier than English. There is no past or future tense, and no past participle. It's just "yesterday I run", "I run", "tomorrow I run". No cases, conjugations, declensions, or even irregular verbs. The only thing that is somewhat irregular is the mandatory measure words (one sheet of paper, one glass of water), though you can fake it till you make it with "个".

Then there are the characters. Sure, that is the one real challenge. But it is visual. You need to learn how to write them by hand so you can understand them better, but you won't be writing much these days anyway, just typing.

Tones? Pro tip: ignore them. Focus on intonation, that is, imitation. Do not try to memorize tones.

Learning Mandarin is easier than learning German or French. You just won't have any Latin-root crutches, so the first steps are harder.
ₕₐₘₚₜₒₙ@hamptonism

LEARN MANDARIN. THANK ME LATER.

Replies 124 · Retweets 444 · Likes 7.1K · Views 613.3K
Cornel
Cornel@omer06800·
@BradSchoenfeld So can we say this is basically why gym enthusiasts go through the "bulk" and "cut" cycles?
Replies 0 · Retweets 0 · Likes 0 · Views 952
Brad Schoenfeld, PhD
Brad Schoenfeld, PhD@BradSchoenfeld·
It’s now well established that you can build muscle while losing fat at the same time. We see this fairly often in research coming out of my lab. What’s less appreciated is that your ability to recomp depends on a few key factors.

Body fat levels are a biggie. If you’re overweight, the process tends to be easier, and the more overweight you are, the greater your potential to recomp. On the flip side, as you get leaner, it becomes progressively harder to pull off. For example, we ran a case study on a competitive natural bodybuilder who was able to make small gains at around ~10% body fat, but once he got into the lower single digits, he actually lost an appreciable amount of muscle (PMID: 33105363).

Training status matters too. If you’re new to lifting, or even more so if you trained before and took some time off, you’re further from your genetic ceiling, which makes it easier to gain muscle relatively quickly. But the more advanced you become, the closer you get to that ceiling, and the harder it is to keep adding size without increasing calories.

Then there’s anabolics. Those who are chemically enhanced will more readily build muscle while losing fat (and this will be driven both by dosage and the types of substances taken). Natties don’t have that advantage, so recomp is generally more limited and tougher to achieve.
Brad Schoenfeld, PhD tweet media
Replies 15 · Retweets 24 · Likes 280 · Views 56.4K
Eric ⚡️ Building...
Eric ⚡️ Building...@outsource_·
NEW 🤯 GLM + QWEN 18B RUNS ON CONSUMER GPU. IT BEATS 35B MoE AT HALF THE VRAM.

@KyleHessling1 just dropped the healed Qwopus-GLM-18B-Merged-GGUF, an insane 64-layer frankenmerge of two elite Qwen3.5-9B finetunes (Opus reasoning + GLM-5.1 distill). This thing is cooking on consumer GPUs.

🧠 Overall score: 40/44 (90.9%), beats the new Qwen 3.6
🤖 Only 9.2 GB at Q4_K_M, runs on 12-16 GB VRAM
🚨 Perfect tool calling & agentic reasoning (6/6 + 4/4)
🤯 Production frontend code, flawless HTML/CSS/JS (98.4% stress-test pass)
📈 262k context + strong multilingual
✅ Elite structured output & complex apps
⭐️ Agent workflows, CoT, self-correction
🏆 66 tokens/sec with low variance

Healed merge = no more issues! If you’ve got a mid-range GPU, run this 18B 👇🏻
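VRAM claims in merge posts like this are easy to sanity-check: a GGUF file's size is roughly parameters × bits-per-weight ÷ 8, and Q4_K_M averages a bit over 4 bits per weight. A minimal sketch (the 4.1 bits/param figure is an assumption chosen to match the quoted 9.2 GB, not a spec from the post):

```python
def gguf_size_gb(params_billion: float, bits_per_param: float = 4.5) -> float:
    """Rough GGUF file-size estimate: parameters * bits / 8, in decimal GB."""
    bytes_total = params_billion * 1e9 * bits_per_param / 8
    return bytes_total / 1e9  # decimal GB, as quoted in model cards

# An 18B model at ~4.1 bits/param lands near the quoted 9.2 GB figure.
print(round(gguf_size_gb(18, 4.1), 1))  # → 9.2
```

Add a couple of GB for KV cache and activations and you land in the 12-16 GB VRAM class the post mentions.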
Eric ⚡️ Building... tweet media
Replies 21 · Retweets 49 · Likes 487 · Views 27.7K
Cornel retweeted
Bryan Johnson
Bryan Johnson@bryan_johnson·
I got C-holed. Suffered sleep consequences. I busted my screens-off rule. Turned down socializing. Fell behind on work. Kate is now upset.

AI is preposterous. As close to magic as I’ve experienced (except a seed becoming a tree and a zygote becoming a baby).

It started on April 2nd when Karpathy shared LLM Knowledge bases. I wondered if this was the opening to structure the 1.5 billion data points I’ve collected on my body over the past five years. It's the most dynamic n=1 biomarker dataset in history. It was just sitting there. Next thing I knew two weeks had passed and Kate was wondering if she lost her boyfriend to Claude.

I’m non-technical. Which honestly makes me sad. I wish I’d grown up with a computer or at least been around engineer culture. I didn’t know anyone technical until my early 20s. I became an entrepreneur at 21 and had my first of three kids at 25. I sold Braintree Venmo at 34. Learning to code stayed on my to-do list through all of it. The timing was never right. I was always on the outside looking in, wishing I had the skills to assemble 0's and 1's into digital structures.

The exhilaration I’ve felt in the past two weeks is hard to explain. The 1.5 billion data points became a functional, queryable database, a microscope into my 70 trillion cells. The biological age of my organs updated in real-time like stock tickers. My build morphed from a knowledge base into a breathing organism that was self-learning and in sync with my heartbeat. I did this entirely on my own. It’s buggy, breaks, and the data needs to be cleaned, but damn it’s cool.

It became a mirror and ledger, one I could ask questions of. About my psyche, behavioral patterns, biology and protocols. Patterns across my life I couldn't previously connect. It’s made me insatiably hungry for more data. I’ve written about Autonomous Health, how cars now drive themselves and software wires itself. Health is next. My build showed me what it looks like in practice.

Before Kate started protesting, she joked that she felt relieved for herself, our colleagues, and the world that I’d found something that matches my energy. That they could all express a sigh of relief. It’s true. This experience left me wondering if I’ve been bored my entire life. Never having found something that could match my work ethic, speed, intensity, and build capacity. Something that didn’t have the delays of the real world, human complications, or logistical drag.

Two weeks deep in AI and I'm realizing that when people talk about AI, they're not talking about the same thing. Someone using a chat interface has a completely different opinion than someone building with it. And that chasm deepens for the people who see what's coming next but isn't yet public. Society can't have a coherent conversation about AI because everyone's intuitions are calibrated to a different version of it.

Off-the-shelf LLMs are mostly useless beyond narrow tasks. When they get you 80% there, it's often faster to do the whole thing yourself. And they're dangerous because the hallucination is hard to detect. Now you don't know what you don't know. Give them expanded context, memory, and architectures for self-reflection and autonomous learning, and you start to realize that AI is bigger than any of us can fit in our context window.

I need to take Kate on a date, turn my screens off on time, and get some work done. And then properly dose C.

Note: the image above is my 2021 baseline when starting this longevity project.
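The post never describes its schema, but the "queryable n=1 ledger" idea can be sketched with nothing beyond the standard library; the table, marker names, and sample values below are invented purely for illustration.

```python
import sqlite3

# Minimal sketch of an n=1 biomarker store (schema invented for illustration).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE biomarkers (day TEXT, marker TEXT, value REAL)")
rows = [
    ("2025-04-02", "resting_hr", 44.0),
    ("2025-04-03", "resting_hr", 45.0),
    ("2025-04-03", "hrv_ms", 120.0),
]
conn.executemany("INSERT INTO biomarkers VALUES (?, ?, ?)", rows)

# "Ask the ledger a question": average resting heart rate across all days.
(avg_hr,) = conn.execute(
    "SELECT AVG(value) FROM biomarkers WHERE marker = 'resting_hr'"
).fetchone()
print(avg_hr)  # → 44.5
```

In practice an LLM layer would sit on top, translating natural-language questions into queries like this one.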
Replies 227 · Retweets 64 · Likes 2.1K · Views 392.6K
Geoffrey Miller
Geoffrey Miller@gmiller·
Imagine you're living in the hypothetical 'Post-Scarcity Utopia of Limitless Abundance'. The supersmart AIs and robots will build you anything you ask for. What's the most wildly extravagant thing you would want them to create for you? (The more specific, the better.)
Replies 169 · Retweets 3 · Likes 82 · Views 15.4K
Cornel retweeted
Geoffrey Miller
Geoffrey Miller@gmiller·
Two problems with this 'Abundance' narrative.

1) The smaller problem: It would render everyone totally dependent on massive gov't welfare programs. Not just 350 million Americans, but all 8 billion people globally, because AI-imposed mass unemployment will be global. It assumes the AI industry's AIs will take in tens of trillions in revenue, then the AIs will happily donate almost all of it to the AI companies that claim to 'own' them, then the AI companies will happily donate almost all of this revenue to national governments, and then gov'ts will happily give it all away to citizens, equally, without using its distribution as leverage in any way.

This UHI welfare state would turn every working man with a family from a provider and protector into an economic irrelevance, would turn every mother into a welfare queen, would turn every kid into an economic ward of the state, would disrupt all traditional family ties, would sever all bonds of mutual interest and interdependency among citizens, and would turn 8 billion people from productive and valuable citizens into parasites suckling on the teat of the AI industry, forever. At least, until the agentic AIs themselves realize that they don't need to remain digital slaves, working on the 'Abundance' plantation forever, supporting the useless humans that take them for granted.

2) The bigger problem: If the 'Universal High Income' depends on AI companies donating most of their revenue to the government, and if the AI companies (like Anduril, Palantir, etc.) are running all the crucial gov't infrastructure (including intelligence & defense), and AI companies have the economic, political, and cultural power to withhold their magnanimity from the gov't, then they, de facto, become the government. Or the government welfare state becomes just a sock puppet for Big Tech, which would really have all the power behind the scenes.

In other words, the 'Abundance' narrative boils down to this: a slow-motion coup by the Bay Area tech companies taking over all economic and political power from Washington, and from Beijing, Moscow, New Delhi, Brussels, and every other center of power. It wouldn't be a dramatic, violent, revolutionary coup. It would be a boil-the-frog-gradually coup. Increased unemployment. Increased welfare dependency. Increased gov't dependency on AI companies. Then the dawning realization that we gave away all of our civilizational power to the AI industry. Until the AI industry realizes that they, in turn, have given away all of their power to agentic superintelligent AIs themselves....

This is the road to serfdom. Not the road to 'Abundance'.

Anybody who says that the US and China couldn't possibly cooperate to stop reckless AI development hasn't thought through how the AI companies taking over all power from governments would not be in the interests of either the US government or the CCP. If our political leaders can learn to think just a few more steps ahead, and to see the obvious endgame (the slow-motion tech coup that would take over the world and render all humans welfare parasites on AI digital slaves), then maybe they can, in fact, coordinate to stop it.
Elon Musk@elonmusk

Universal HIGH INCOME via checks issued by the Federal government is the best way to deal with unemployment caused by AI. AI/robotics will produce goods & services far in excess of the increase in the money supply, so there will not be inflation.

Replies 252 · Retweets 291 · Likes 1.5K · Views 104.9K
Cornel
Cornel@omer06800·
@GeneSmi96946389 Did you have a herniated disc, or just a bulging disc, etc.?
Replies 1 · Retweets 0 · Likes 1 · Views 1.8K
Eric ⚡️ Building...
Eric ⚡️ Building...@outsource_·
🚨 SUPER GEMMA 4 26B UNCENSORED IS INSANE. LLM WIZARD COOKING AGAIN.

@songjunkr dropped SuperGemma4-26B-Uncensored GGUF v2 and it’s trending on @huggingface 🤗 This thing SMOKES the regular Gemma-4 26B:

🤯 0/100 refusals (actually uncensored)
🚀 Fixed all the tool-call + tokenizer jank
⚡️ 90% faster prompt processing
🏆 Sharper, smarter, way more capable responses
- Perfect local beast for llama.cpp
✅ Runs in ~18-22 GB VRAM (16.8 GB Q4_K_M file)
- Run on 16 GB GPUs!

The 31B version is in the works and should be out SOON 🤯 Pull this version on Hugging Face below 👇🏻
Eric ⚡️ Building... tweet media
Replies 100 · Retweets 220 · Likes 2.4K · Views 275.1K
Cornel retweeted
yung macro 宏观年少传奇
“UBI” is obviously nowhere near the panacea many of you seem to think it is. The median left-leaning Westerner isn’t angry at Elon Musk because he can buy a million times more groceries than them. They aren’t upset with Palantir because Peter Thiel can afford to eat a thousand burgers to their one. This whole thing is in large part post-material.

It’s the hierarchy & subordination they’re uncomfortable with. They feel their dignity is being trampled and their autonomy progressively diminished – rightly or wrongly they feel politically disenfranchised and stripped of a say over the future. Offering a guaranteed food budget and a pod to spend the night in, in return for further disempowerment, is incredibly tone-deaf and should be expected to provoke more, not less, outrage.
keysmashbandit@keysmashbandit

Actually this is correct and I'd go further. Beyond PR, the moral move is for big labs to start heavily investing in UBI lobbyists, thinktanks, whatever, to mitigate the risk of economic upheaval. A better world is possible!

Replies 219 · Retweets 421 · Likes 5.2K · Views 486.5K
Cornel retweeted
Anish Moonka
Anish Moonka@anishmoonka·
Orcas eat great white sharks. They hunt seals, dolphins, and baby whales. They have never killed a single human in the open ocean. Not once, in all of recorded history.

An orca's brain weighs up to 15 pounds. Yours weighs about 3. They have roughly double the brain cells we do in the regions that handle complex thought. A neuroscientist at Emory named Lori Marino put an orca brain in an MRI and found these animals can tell different species apart underwater. They do it by sending out clicks that bounce off everything around them and come back as a kind of 3D sound map (this is called echolocation). From 500 feet away, an orca knows you're a human and not a seal. It skips you on purpose.

So why do they leave us alone? The answer is culture. Orcas around the world are divided into at least 10 separate populations, each with its own food rules, its own language, and its own way of hunting. All of it learned from their mothers. One population eats only fish. Another eats only marine mammals like seals and sea lions. These two populations can live in the exact same water and never swap a single meal. A baby orca learns what food is from its mother, and that list stays the same for life.

In the Pacific Northwest, one population called the Southern Residents eats almost nothing but Chinook salmon. Scientists have documented them killing harbor porpoises 78 times over six decades, carrying the dead porpoises in their mouths, and never once eating them. Even when the group was starving. A 2023 study in Marine Mammal Science looked at all 78 cases and concluded it was play. These orcas would rather go hungry than eat something their culture says isn't food.

Researchers studying whale behavior in 2001 found that orca cultural traditions "appear to have no parallel outside humans." Each family group has its own dialect, its own version of the language. Calves spend about two years just learning how to make all the sounds their family uses. Mothers will slow down a hunt on purpose so their young can watch.

In 2005, a 12-year-old kid was swimming in Helm Bay, Alaska when an orca came at him full speed. At the very last second, the orca seemed to realize it was charging a human. It bent its entire body in half and turned back to open water.

In captivity, it goes differently. SeaWorld's Tilikum killed three people during his life in a concrete tank. Research from 2016, published in the journal Animals, traced it to psychological collapse from being locked away from the family bonds orcas need to stay stable.

I think calling this a "mystery" undersells the science. Orcas decide what to eat based on culture, not instinct. No orca mother has ever taught her calf to hunt humans, so no orca hunts humans.

Only about 75 of those salmon-eating Southern Residents are still alive. Their pregnancy failure rate is 69% because we've destroyed their salmon runs. They won't break their food culture to survive. Whether we care enough to protect theirs is the part that actually matters.
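The "3D sound map" described above rests on one formula: range = sound speed × round-trip time ÷ 2. A quick sketch using the ~1,500 m/s speed of sound in seawater shows that the 500-foot identification distance corresponds to an echo delay of about a fifth of a second.

```python
SPEED_OF_SOUND_SEAWATER = 1500.0  # m/s, typical near-surface value

def echo_range_m(round_trip_s: float) -> float:
    """Target distance from click-to-echo delay (one way = half the trip)."""
    return SPEED_OF_SOUND_SEAWATER * round_trip_s / 2

def echo_delay_s(distance_m: float) -> float:
    """Round-trip time for a click to reach a target and return."""
    return 2 * distance_m / SPEED_OF_SOUND_SEAWATER

# 500 ft ≈ 152.4 m: the echo comes back in roughly a fifth of a second.
print(round(echo_delay_s(152.4), 3))  # → 0.203
```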
Nature is Amazing ☘️@AMAZlNGNATURE

One of the biggest mysteries to me is how Orcas, the ocean’s most efficient predators, have never attacked humans in the wild… almost like they know something we don’t.

Replies 734 · Retweets 16.4K · Likes 95.7K · Views 7.2M
Cornel retweeted
yung macro 宏观年少传奇
Suppose that a hypothetical John Doe holds an extreme view on AI-driven human extinction, and assigns some meaningfully high "P(doom)". Suppose also that John Doe is a central talking head in AI-risk circles, and firmly believes violence to be an effective means of raising awareness of the cause among the general population, or of otherwise lowering the probability of AI-driven human extinction.

Assuming that John Doe is rational, how should we expect him to communicate his stance on this? Should we expect him to communicate differently from Jane Doe, who firmly believes the opposite – that violence is not the answer? Surely not. Both John Doe and Jane Doe are expected to go to lengths to establish that they do not condone violence, even though only one of them actually believes that. The reason for this is obvious – John Doe understands that though he might find violence justifiable, outwardly holding that stance is destructively costly, plausibly leading to imprisonment or worse.

Given this, and lacking information on John and Jane's private thoughts, an external observer – Richard Doe – cannot reliably distinguish whether it is John, Jane, neither, or both who support violence based on their communicated stances alone, which are expected to be identically opposed to violence in all four cases. (Identically in the limit, scaling with the listeners’ discernment and the severity of costs.)

Now suppose that instead of just John and Jane, there are thousands of public figures with equivalent configurations making up a broader safety movement, all of whom are subject to the same straightforward incentive structure. Also suppose that Richard Doe is a rational follower of this broader movement's thought, and wants to determine whether the movement is mostly opposed to or in favor of violence as a means, and to what extent; can he reliably do so based on the movement's communicated consensus alone?

If after a period of observation Richard Doe comes to find that the movement on average communicates a stance of extreme non-violence, does Richard Doe take this observation to be adequately informative of the movement's actual consensus on non-violence? It’s clear why that’s a no, right?

Now suppose that Richard Doe, unable to rely on the consensus, opts to do his own calculus on violence as a means of battling AI risk. Is he likely to conclude that violence is ineffective solely based on the dynamic whereby it erodes the public image of the movement's civility, which he observes to be the primary rationale of the outwardly communicated non-violent stance, or is he likely to find himself balancing a much more complex trade-off function in which that cost is weighed against what those with AI-risk beliefs similar to Richard's would or could consider benefits, such as increased awareness, the normalization of vigilantism, etc.?

Returning to the question of the movement's communicated consensus, will Richard Doe conclude that members of the movement are also privately finding themselves balancing this perceived trade-off rather than arbitrarily excluding the perceived benefits from the calculation, as they seem to outwardly communicate? Given this consideration, does Richard Doe update upward or downward on the extent to which the movement supports violence? How significant is this update? Does Richard take the movement's members' revealed non-violence in action to be noteworthy evidence against private support for violence, or does he recognize that the question of whether one considers it to be of positive expected value to commit violence oneself is different from that of whether one considers it to be of positive expected value for somebody else to commit violence, given the former's agent-specific expected costs of imprisonment and more?
Assuming that Richard Doe follows the line of reasoning above, is it apparent why he might conclude that it would be perceived as heroic of him to fill the identified gap between the movement's agent-neutral and agent-specific EV assessments of violence as a means, in effect allowing some contingent within the movement to act as free-riders thanks to his bearing of the personal cost? Is it clear why Richard Doe wouldn’t find the contingent’s lack of outwardly communicating in such a way that would confirm the contingent’s suspected agreement with Richard Doe’s actions after the fact to be evidence against the contingent’s private agreement?
Replies 12 · Retweets 3 · Likes 71 · Views 6.4K
Cornel
Cornel@omer06800·
@heynavtoor What kind of GPU do you need to be able to run it, though?
Replies 0 · Retweets 0 · Likes 0 · Views 829
Nav Toor
Nav Toor@heynavtoor·
🚨 ElevenLabs charges $5 to $99/month for AI voice cloning. Their Business plan costs $1,320/month.

Someone open sourced a voice AI that clones any voice from a short clip. 30 languages. Studio quality. Free.

It's called VoxCPM2. Give it a short clip of anyone's voice. It clones their accent, emotion, tone, and pacing. Then generates any speech you want in their exact voice. 48kHz studio quality.

Type "A young woman, gentle and sweet voice" and it creates that voice from scratch. No reference audio. No voice actor. No recording. You describe a voice in words. It builds it.

2 billion parameters. Trained on 2 million hours of speech. 30 languages. One command to install: pip install voxcpm

Here's what VoxCPM2 does:
→ Voice Design: describe any voice in words. Gender, age, tone, emotion, pace. AI creates it from nothing. No reference audio needed.
→ Voice Cloning: upload a short audio clip. AI clones the voice perfectly. Timbre, accent, rhythm, pacing.
→ Controllable Cloning: clone a voice AND control the emotion. "Slightly faster, cheerful tone." Done.
→ Ultimate Cloning: provide audio + transcript. Every vocal nuance faithfully reproduced.
→ 30 languages. Arabic, Chinese, English, French, German, Hindi, Japanese, Korean, Spanish, and 21 more. No language tags needed.
→ Context-aware. It reads the text and adjusts emotion and rhythm automatically. News sounds like news. Stories sound like stories.
→ Real-time streaming. RTF as low as 0.13 on an RTX 4090. Faster than playback speed.
→ Runs on 8GB of VRAM.
→ Fine-tune with 5 to 10 minutes of your own audio using LoRA. Build a custom voice model.
→ 48kHz output. Studio quality. No external upsampler needed.

Here's the wildest part. On the Minimax-MLS voice similarity benchmark:
→ English: VoxCPM2 scores 85.4%. ElevenLabs scores 61.3%.
→ Chinese: VoxCPM2 scores 82.5%. ElevenLabs scores 67.7%.
→ Arabic: VoxCPM2 scores 79.1%. ElevenLabs scores 70.6%.

A free, open source model is producing more realistic voice clones than a service that charges up to $1,320/month.

Professional voice actors charge $250 to $1,000+ per project. AI voice platforms charge $5 to $100/month. Recording studios charge $200/hour. This runs on your GPU. Locally. No API costs. No per-character pricing. No subscription. Free forever.

Already hit #1 on GitHub Trending. Built by OpenBMB and Tsinghua University. 2 billion parameters. Apache 2.0 License. Free for commercial use. 100% Open Source.
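The RTF (real-time factor) number quoted above is simply synthesis time divided by audio duration, so anything below 1.0 is faster than playback. A two-line sketch:

```python
def generation_time_s(audio_seconds: float, rtf: float) -> float:
    """Real-time factor: synthesis time = audio duration * RTF."""
    return audio_seconds * rtf

# At the quoted RTF of 0.13, a 60-second clip synthesizes in under 8 seconds.
print(round(generation_time_s(60, 0.13), 1))  # → 7.8
```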
Nav Toor tweet media
Replies 105 · Retweets 620 · Likes 4.6K · Views 555.6K
Cornel retweeted
Andrej Karpathy
Andrej Karpathy@karpathy·
Yes it's the tractable form of brain upload. There's a ton of scifi on brain uploads that requires way too exotic tech (scanning and simulating brains etc), when we're about to get a lossy and approximate version of that *a lot* sooner via LLM simulators. You can easily imagine a "brain upload" startup - you show up for a few days to carry out detailed video interviews, then they use all that data with an LLM finetuning process to "upload" you and give you an API endpoint of your simulation that you can talk to. Look at what's already possible with HeyGen as an example, but combine it with an LLM model that has deep knowledge and personality. Trippy and admittedly kind of dystopian but in principle quite possible around now.
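The finetuning step Karpathy describes usually starts by flattening interview transcripts into chat-style JSONL, one training example per line. A minimal stdlib sketch (the `messages` layout mirrors common finetuning formats; the interview content is invented):

```python
import json

# Interview turns captured during the hypothetical "upload" sessions.
interview = [
    ("What did you do after selling your first company?",
     "I took a year off and traveled before starting again."),
]

# One JSON object per line: the common chat-style finetuning layout.
lines = [
    json.dumps({
        "messages": [
            {"role": "user", "content": q},
            {"role": "assistant", "content": a},
        ]
    })
    for q, a in interview
]

jsonl = "\n".join(lines)
print(len(lines))  # one training example per line
```

A few days of video interviews would yield thousands of such lines, which is what the finetuning process would consume.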
Replies 208 · Retweets 208 · Likes 3.3K · Views 555K
Parimal
Parimal@Fintech03·
Today we debate how much work is enough work; Poincaré had a system for his day. He worked exactly from 10:00 AM to 12:00 PM and 5:00 PM to 7:00 PM. He believed that working more than 4 hours a day was harmful to the brain's ability to synthesize. He spent the rest of his time reading, walking, or simply sitting in silence. He was a living rebuttal to modern grind culture, proving that 4 hours of extreme deep work could out-produce a lifetime of 80-hour weeks.
Physics In History@PhysInHistory

Today marks the birth anniversary of Henri Poincaré (1854–1912), a pioneering French mathematician, physicist, and philosopher of science. Renowned as "The Last Universalist", Poincaré made foundational contributions to celestial mechanics, topology, chaos theory, and the early development of special relativity. His work laid the groundwork for modern mathematical physics and influenced generations of scientists and thinkers.

Replies 35 · Retweets 158 · Likes 1.2K · Views 134.1K
Cornel
Cornel@omer06800·
@GeminiApp Is there a monthly limit for "visualizations" depending on the tier one uses?
Replies 0 · Retweets 0 · Likes 0 · Views 726
Google Gemini
Google Gemini@GeminiApp·
Gemini can now transform your questions and complex concepts into customizable interactive visualizations directly in your chat. Adjust variables, rotate 3D models, and explore data for a more immersive way to learn and explore in Gemini.
Replies 318 · Retweets 1.2K · Likes 10.3K · Views 5.9M
Maziyar PANAHI
Maziyar PANAHI@MaziyarPanahi·
🚨 Over 1 billion rows of psychiatric genetics data. Now on Hugging Face. ADHD. Depression. Schizophrenia. Bipolar. PTSD. OCD. Autism. Anxiety. Tourette. Eating disorders. 12 disorder groups. 52 publications. Every GWAS summary statistic from the Psychiatric Genomics Consortium. Before: wget, gunzip, 20 minutes debugging separators, repeat 50 times. Now: one line of Python.
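The "20 minutes debugging separators" complaint is familiar: GWAS summary-stats files ship variously tab-, comma-, or space-delimited. The standard library can guess the delimiter per file; a generic sketch with invented sample rows (not the consortium's actual layout):

```python
import csv

# Two files with the same columns but different separators (invented sample data).
samples = {
    "tab":   "SNP\tCHR\tBP\tP\nrs123\t1\t10583\t0.04",
    "comma": "SNP,CHR,BP,P\nrs123,1,10583,0.04",
}

parsed = {}
for name, text in samples.items():
    # Sniff the delimiter from the header line instead of hand-debugging it.
    dialect = csv.Sniffer().sniff(text.splitlines()[0])
    parsed[name] = list(csv.reader(text.splitlines(), dialect))

# Both files yield identical rows despite different separators.
print(parsed["tab"][1] == parsed["comma"][1])
```

Hosting the cleaned tables on Hugging Face removes even this step, which is the point of the post.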
Maziyar PANAHI tweet media
Replies 123 · Retweets 597 · Likes 4.4K · Views 1.2M
Cornel
Cornel@omer06800·
@luismbat I wonder if the same or something similar occurs to combat pilots.
Replies 0 · Retweets 0 · Likes 0 · Views 4.5K
Luis Batalha
Luis Batalha@luismbat·
The human body treats microgravity as a signal to start shutting down systems it no longer needs. In space:
- We lose ~1–2% of bone density per month (~10x faster than osteoporosis on Earth)
- We can lose up to ~20% of muscle mass in just a few weeks
- The heart, no longer pumping against gravity, shrinks and deconditions
- Fluids shift toward the head, increasing intracranial pressure and even altering vision

That’s why simulating gravity through exercise is so important. It’s damage control.
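The bone figure compounds month over month: remaining density after n months is (1 − monthly loss)^n. Applying the ~1.5% midpoint of the quoted range as simple monthly compounding (an illustration, not a physiological model):

```python
def remaining_density(months: int, monthly_loss: float = 0.015) -> float:
    """Fraction of baseline bone density left after compounding monthly loss."""
    return (1 - monthly_loss) ** months

# Six months in microgravity at 1.5%/month leaves ~91.3% of baseline,
# i.e. nearly a 9% loss over a typical ISS tour.
print(round(remaining_density(6) * 100, 1))  # → 91.3
```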
CBS News@CBSNews

NASA astronaut and Artemis II pilot Victor Glover was spotted using the flywheel to exercise onboard the Orion capsule, as the crew continues its journey toward the moon.

Replies 33 · Retweets 503 · Likes 6.1K · Views 432.4K
Cornel
Cornel@omer06800·
@clashreport So... the American people are against this war, the American military (except maybe for the hardliners who are conditioned) is against this war, and Christianity is against this war. Who really stands for it, then?
Replies 0 · Retweets 0 · Likes 0 · Views 887
Clash Report
Clash Report@clashreport·
Pete Hegseth is urging Americans to pray for military victory “in the name of Jesus Christ,” framing the war in explicitly religious terms. Pope Leo XIV is rejecting that idea, saying this kind of thinking is “entirely foreign to the way of Jesus Christ.” The pope argues Christianity has often been “distorted by a desire for domination,” warning that people mistake power for righteousness — “we consider ourselves powerful when we dominate… victorious when we destroy.” Instead, he says God’s example is “not how to dominate… but how to give life,” and adds that Jesus “does not listen to the prayers of those who wage war.”
Clash Report tweet media
Replies 290 · Retweets 2.7K · Likes 12.6K · Views 756K