Not Sure

2.6K posts

@ethix_ru

Slowly, Harry took out the Elder Wang and crossed it with Hermione's ten-and-three-quarter-inches

Little Italy, Russia · Joined February 2012
145 Following · 14 Followers
Not Sure
Not Sure@ethix_ru·
@asper It's passing little by little, I'll be hitting forty soon. I end up in Perm once every few years, we could meet up and eat something (or drink something :) )
Russian
1
0
1
13
asper
asper@asper·
@ethix_ru yeahhh, that's what I thought! I did read the book in the end, thanks, it looked motivating and upbeat! But I googled the reviews and everyone writes that it doesn't work. How's life anyway?
Russian
1
0
0
11
asper
asper@asper·
Suddenly, a beautiful number! But it's as if the brain is still missing something; without the number 14 there's a lingering feeling of incompleteness. That damned Hitler reached us even here.
asper tweet media
Russian
2
0
3
148
Not Sure
Not Sure@ethix_ru·
@asper We met exactly once, 15-20 years ago, when I handed you that Norbekov book, hehe
Russian
1
0
0
8
asper
asper@asper·
@ethix_ru I read that it's all self-hypnosis and quackery and didn't even bother trying. Wait, do we know each other?
Russian
1
0
0
10
Not Sure
Not Sure@ethix_ru·
@davepl1968 I thought maybe it was user time/kernel time thing but turns out it's even simpler than that. Thanks!
English
0
0
0
13
Dave W Plummer
Dave W Plummer@davepl1968·
@ethix_ru If the process is gone, then what it's telling you is that the processes that are still around used 34% of the CPU, but that the CPU is fully engaged. The remainder is attributed to the processes that have exited.
English
1
0
2
84
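The accounting described above can be caricatured in a few lines. This is a toy illustration of the arithmetic, not real Task Manager code: `exited_share` and the sample percentages are assumptions made up for the example.

```python
# Toy illustration: the CPU was fully busy over the sample interval,
# but the surviving processes only account for 34% of that time. The
# remainder must belong to processes that exited during the interval.

def exited_share(total_cpu_pct, per_process_pct):
    """Return the CPU share that no surviving process accounts for."""
    return total_cpu_pct - sum(per_process_pct)

surviving = [20.0, 10.0, 4.0]          # per-process CPU %, summing to 34%
print(exited_share(100.0, surviving))  # 66.0 -> attributed to exited processes
```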
Dave W Plummer
Dave W Plummer@davepl1968·
So I'm writing a script on how Task Manager "lies" to you about CPU usage, and why, and how it actually does the math and accounting behind the scenes. One thing my original didn't have to account for was clock speed changes! So I'm trying to explain that, but would like to know if this makes sense, or what I need to add/change or where it's unclear!

"Speaking of the 90s, there’s one wrinkle on modern machines that didn’t really haunt the original Task Manager, and that’s the fact that a “CPU second” no longer implies anything like a fixed amount of work. Back in the day, the scheduler’s time accounting and the processor’s actual throughput were much more tightly coupled because the CPU clock speed was comparatively static. On a modern CPU, though, the hardware is constantly changing gears. A mostly idle core may be downclocked, parked, or dropped into a deep sleep state sipping power through a cocktail straw, and then the instant real work arrives it can jump to a much higher frequency or even turbo past its nominal clock.

Task Manager’s accounting is still fundamentally time-based, which means it is answering “how long was this process scheduled and not idle?” not “how many cycles did it actually get?” So a core that was busy for half the sample interval still shows about 50% CPU, even though the amount of work it accomplished during that half can vary wildly depending on what frequency the silicon was running at.

And that’s where users quite reasonably get annoyed, because what they experience is performance, not accounting. They see one machine sitting at 25% CPU and feeling lightning fast, while another is pinned at 100% and feels like it’s trying to run Windows through molasses. The meter isn’t exactly wrong, but it’s measuring occupancy rather than productivity.

Put differently, modern CPU usage is more like “how full was the freeway?” than “how many miles were actually traveled?” A half-full freeway with Ferraris on it can move a lot more traffic than a jammed freeway full of cement trucks. The old Task Manager was built in an era where time-used was a pretty decent proxy for work-done, but on today’s processors with dynamic frequency scaling, turbo boost, thermal throttling, and deep idle states, that connection has gotten a lot looser. So when the numbers feel a little slippery, it’s not because the tool is broken so much as the hardware stopped being simple enough for a single percentage to tell the whole story."
English
28
22
213
15.4K
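The occupancy-vs-productivity distinction in the draft above can be sketched numerically. Everything here is an illustrative assumption (the function names, the 0.8 GHz and 4.5 GHz figures), not Windows internals: same time-based 50% reading, very different amounts of work done.

```python
# A core busy for half the interval reads ~50% regardless of clock
# speed; the work accomplished in that time scales with frequency.

def cpu_percent(busy_seconds, interval_seconds):
    """Time-based accounting: how long the core was scheduled and not idle."""
    return 100.0 * busy_seconds / interval_seconds

def cycles_done(busy_seconds, freq_ghz):
    """A rough proxy for productivity: cycles actually executed while busy."""
    return busy_seconds * freq_ghz * 1e9

# Same 50% occupancy on both machines:
print(cpu_percent(0.5, 1.0))   # 50.0
# ...but wildly different productivity:
print(cycles_done(0.5, 0.8))   # downclocked core: ~4e8 cycles
print(cycles_done(0.5, 4.5))   # turbo core: ~2.25e9 cycles
```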
Not Sure retweeted
ValdikSS
ValdikSS@ValdikSS·
I'll wrap up the piracy flame war in a way that will make the Japanese completely lose it. Back in distant 2012, being a Maximum the Hormone fan, I found a funny video where the drummer plays her own song ROLLING1000tOON in an arcade on easy difficulty the way she plays it at concerts, and uploaded it to YouTube →
Russian
25
109
2.9K
226.3K
Not Sure
Not Sure@ethix_ru·
@Joshimuz I decided to have a breast augmentation.
English
0
0
2
93
Joshimuz
Joshimuz@Joshimuz·
(found)
English
4
4
74
3.3K
Joshimuz
Joshimuz@Joshimuz·
Yesterday for April Fools I started a playthrough of the Google Translate mod for GTA:SA which translation-chains and then re-voices nearly every single line in the entire game... and it's been absolutely hilarious so far. Doing the second half of the game today!
English
10
120
1.5K
116.5K
Terrible Maps
Terrible Maps@TerribleMaps·
So if I post in Japanese, I'll get reactions from more people? Am I doing this right?
Terrible Maps tweet media
Japanese
61
63
1.8K
91.1K
Not Sure retweeted
Teapo64
Teapo64@Teapo64·
@MerriamWebster I hole-hardedly agree, but allow me to play doubles advocate here for a moment. For all intensive purposes I think you are wrong. In an age where false morals are a diamond dozen, true virtues are a blessing in the skies.
English
181
234
7.5K
145.3K
Not Sure retweeted
Merriam-Webster
Merriam-Webster@MerriamWebster·
Not to be THAT dictionary, but… It’s ‘per se,’ not ‘per say.’ It’s ‘dog-eat-dog world,’ not ‘doggy-dog world.’ It’s ‘hunger pangs,’ not ‘hunger pains.’ It’s ‘one and the same,’ not ‘one in the same.’ It's 'buck naked,' not 'butt naked.'
English
3.1K
9.3K
67.4K
6.1M
Not Sure
Not Sure@ethix_ru·
@merr1k The Forbes piece is muddled: a paragraph about charging for foreign mobile traffic over 15 GB, then a paragraph about platforms restricting use of their services over VPN, then a paragraph about the first thing again, then a paragraph about the second again. And the rest of the outlets went:
GIF
Russian
0
0
0
240
Dave Frenkel
Dave Frenkel@merr1k·
Why is everyone putting "VPN traffic" in the headlines? The source talks about all international traffic. If they could tell VPN from non-VPN, they wouldn't have introduced a fee for it, they would have just fucking blocked it. The BBC has the same oddity with their source about "measured 10 GB".
Meduza@meduzaproject

Forbes: The Ministry of Digital Development asked telecom operators to introduce a fee for traffic using VPN services. This was done on Putin's orders meduza.io/news/2026/03/3…

Russian
21
14
867
72.1K
Not Sure
Not Sure@ethix_ru·
@rick_give @Kraul_en Maybe, you Location could be a Location bit Location Locations Location more gentle Location next Location Location Location Location Key Location Location time
English
0
0
0
4
Rika
Rika@rick_give·
@Kraul_en I fear that her saying "location" will never sound normal again.
Rika tweet media
English
1
0
26
647
Kraul
Kraul@Kraul_en·
"I didn't kidnap her" Evil Neuro wonders where Filian is
English
9
42
828
14.4K
Dave W Plummer
Dave W Plummer@davepl1968·
"Photography isn't art" "The telephone is just a novelty" "A record just records vibrations" "Computers are just fast calculators"

You know what's even crazier? Three years ago, some people heard that LLMs are "just predicting the next token" and have continued to write them off because of some cynical buzz they once heard at a cocktail party. And now it occupies the part of their brain that would otherwise be used for knowledge, seemingly impossible to dislodge despite ample evidence. Consider that you yourself are a large model and that you have no manifestation in the actual universe until your own large model predicts its next motor action: so reducing something to its elements doesn't constrain it, and it only partially describes it.

When you're using a commercial LLM today, you’re not interacting with a bare “next-token predictor”, you’re using a full system wrapped around it. The model still generates tokens, but it’s augmented with on-the-fly tool use (such as web search, code execution, and APIs), retrieval systems that pull in fresh or private data (RAG), and orchestration layers that decide when to call those tools and how to integrate their results. So while token prediction is of course the CORE mechanism, the real capability comes from the surrounding system that turns it into something far closer to a general-purpose problem-solving engine.

TLDR: No, LLMs don't just predict the next word from some big-assed tree-based lookup table.
maya benowitz 🕰️@cosmicfibretion

The AI psychosis is so bad that the humans are hallucinating now. The belief that next-token prediction will not only replicate but exceed all human thought is an extrapolation that borderlines religious dogma.

English
27
14
124
9.6K
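The tool-use loop described above can be caricatured in a few lines. Everything here is hypothetical (`toy_model`, `get_weather`, the `CALL_TOOL:` convention) and stands in for a real predictor, a real tool, and a real orchestration layer; it is a sketch of the pattern, not any vendor's actual API.

```python
# Minimal sketch: the deployed system wraps the token predictor in an
# orchestration loop that detects tool requests, runs the tool, and
# feeds the result back into the model's context.

def toy_model(prompt: str) -> str:
    """Stand-in for the bare next-token predictor."""
    if "[tool result:" in prompt:
        # Tool output is already in context; answer using it.
        return "ANSWER: it is " + prompt.split("[tool result: ")[1].rstrip("]")
    if "weather" in prompt:
        # The model decides it needs external data.
        return "CALL_TOOL:get_weather"
    return "ANSWER:42"

def get_weather() -> str:
    """Stand-in for a real tool: web search, code execution, an API call."""
    return "sunny"

def orchestrate(prompt: str) -> str:
    """The wrapper the user actually talks to, not the bare predictor."""
    out = toy_model(prompt)
    if out.startswith("CALL_TOOL:"):
        out = toy_model(prompt + f" [tool result: {get_weather()}]")
    return out.removeprefix("ANSWER:").strip()

print(orchestrate("what's the weather?"))  # it is sunny
print(orchestrate("hello"))                # 42
```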
Not Sure retweeted
Just Egg🥐
Just Egg🥐@Just_Egg13·
Korone & Ryan Gosling saying each other's name
English
7
388
3.5K
38.5K
Not Sure
Not Sure@ethix_ru·
@Andrea_C_White I was already so far past the point of no return I couldn't even remember what it looked like when I had passed the point of no return I passed the point of no return I passed the turn like a point of no
English
0
0
0
13
Andrea C. White
Andrea C. White@Andrea_C_White·
wrapping up a bunch of wips so here's a thing
Andrea C. White tweet media
English
43
1.4K
8.7K
130.9K
Not Sure
Not Sure@ethix_ru·
@Joshimuz I just accidentally the whole recording (:
English
0
0
1
155