Bronson

445 posts

@bronson

Computer Scientist

USS Cygnus · Joined April 2007
915 Following · 303 Followers

Bronson @bronson ·
@TracketPacer Get the Apple IIe expansion board specifically for the LC. In 1990, this config was targeted at schools as a way to keep running their old Apple II software stack while upgrading to the Macintosh platform. Real A2 on a card. The LC was announced with the Mac Classic and IIsi.

TracketPacer @TracketPacer ·
so i bought the macintosh LC

Bronson @bronson ·
Introducing ntpxyz ntpxyz is a lightweight Python tool for parsing and visualizing statistics from NTP servers. It processes standard NTP stats logs—currently loopstats (clock sync), sysstats (network traffic), and usestats (host utilization)—then generates clear, insightful plots using Matplotlib. Designed for both interactive use and automated batch runs, ntpxyz helps monitor NTP server health with minimal fuss.
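The tweet names the log types but not their layouts; as an illustration, here is a minimal sketch of parsing loopstats lines, assuming ntpd's documented field order (MJD day, seconds past midnight, clock offset, frequency, jitter, wander, poll). The function name and record shape are mine, not ntpxyz's actual API; the real tool would feed series like these to Matplotlib.

```python
def parse_loopstats(lines):
    """Yield (time_in_days, offset_s, freq_ppm) tuples from ntpd loopstats lines.

    Each line looks like:
      MJD  seconds-past-midnight  offset(s)  freq(ppm)  jitter(s)  wander(ppm)  poll
    """
    for line in lines:
        fields = line.split()
        if len(fields) < 4:
            continue  # skip blank or malformed lines
        mjd, secs = int(fields[0]), float(fields[1])
        offset, freq = float(fields[2]), float(fields[3])
        # Combine MJD + seconds into a single fractional-day time axis
        yield (mjd + secs / 86400.0, offset, freq)

# Two hypothetical loopstats lines, half a day apart
sample = [
    "59000 0.000 0.000006019 13.778190 0.000351733 0.0133806 6",
    "59000 43200.000 -0.000002500 13.778500 0.000340000 0.0133900 6",
]
records = list(parse_loopstats(sample))
```

Plotting the offset series is then a one-liner with Matplotlib (e.g. passing the second column to `plt.plot`); it is omitted here so the sketch stays dependency-free.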
Bronson tweet media

Bronson @bronson ·
🔑 Omarchy desktop users: no more typing your LUKS passphrase at every boot (or hunting for a wired keyboard)! Omarchy ships with killer default encryption, but that prompt can be a drag on desktops. I turned an old USB into a dedicated key drive for automatic unlock — plug it in and you're straight in. Full step-by-step guide I put together (USB formatting, cryptsetup, limine cmdline tweaks, mkinitcpio hooks — all there): gist.github.com/brontsor/ee1e7… #Omarchy
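The gist link is truncated, but the steps listed (USB formatting, cryptsetup, cmdline tweak, mkinitcpio hooks) follow a standard Arch-style keyfile setup. A hedged sketch only, assuming the `encrypt` mkinitcpio hook and its `cryptkey=device:fstype:path` parameter; every device path and label below is a placeholder, so check your actual partitions before running anything destructive.

```shell
# Sketch only: /dev/sdX1 stands in for the USB stick's partition,
# /dev/nvme0n1p2 for the LUKS-encrypted root partition.

# 1. Format the USB stick (DESTROYS its contents) and create a random keyfile
sudo mkfs.ext4 -L cryptkey /dev/sdX1
sudo mount /dev/sdX1 /mnt
sudo dd if=/dev/urandom of=/mnt/keyfile bs=512 count=4
sudo chmod 600 /mnt/keyfile

# 2. Enroll the keyfile as an additional LUKS key (prompts for the passphrase)
sudo cryptsetup luksAddKey /dev/nvme0n1p2 /mnt/keyfile
sudo umount /mnt

# 3. Point the initramfs at the key: with the 'encrypt' hook, append to the
#    kernel cmdline (in limine's config on Omarchy):
#      cryptkey=LABEL=cryptkey:ext4:/keyfile

# 4. Make sure USB storage is available early, then rebuild the initramfs:
#    /etc/mkinitcpio.conf:  MODULES=(usb_storage)  HOOKS=(... keyboard encrypt ...)
sudo mkinitcpio -P
```

Keep a LUKS passphrase enrolled in another keyslot as a fallback, so losing the stick doesn't lock you out.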
Bronson tweet media

Bronson @bronson ·
@AtnsXBT … that first time you heard Guns n Roses on a “Classic Rock” radio station… or Joe Walsh singing about his Maserati doing 185– while you filled up your cart in Whole Foods.

AtnsMDX @AtnsXBT ·
things nobody told guys about growing up:
- the friends you thought were forever become people you check on twice a year
- you start having your dad’s exact thoughts
- you stop being invited slowly
- the music you grew up with starts playing in supermarkets

Bronson @bronson ·
@davepl1968 Training a robot to play a human that must defeat all the robots. There’s a plot line in there somewhere.

Dave W Plummer @davepl1968 ·
I have N clients (currently, N=75) of Robotron running on my Dell 7985, and each is instrumented with a HUD display that annotates enemy positions, thereby confirming that "my code isn't crazy". Or that's my claim. On GitHub, as always, in the "robostart" branch of tempest_ai.

Bronson @bronson ·
They won’t have to. This is a case that will eventually go to the Supreme Court. When the state mandates that developers build specific code into their platform like this, it’s no different than making the author of a book write and include an introductory chapter that lectures the reader about age and content. If I publish an OS’s source code as a book, am I breaking the law if I don’t include the age verification bits?

“Free speech” does not mean “free speech as long as it’s in the contemporary English language.” Code is speech, so forcing age verification (like the Energy Star and ADA notices already mandated) is just compelled speech, which the US government can’t do.

Also, practically speaking, unless you shut down the internet, it’s an unworkable law. Do you want to verify your age every time you unlock your iPhone?

Linux Handbook @LinuxHandbook ·
Linux isn't centralized like Windows. Servers, containers, IoT don't have 'users' to age-check. How do you think distros will actually implement (or evade) these laws without breaking everything?

Bronson @bronson ·
@teej_dv What is root’s birthday?

Bronson @bronson ·
People in California will still run open source operating systems even if there is no age verification and no “California License”, whatever that means. Code is just words (speech), and the California law is stupid. It’s like saying every book sold in California needs to have a front page with a content warning.

Bronson @bronson ·
Introducing ntpxyz ntpxyz is a lightweight Python tool for parsing and visualizing statistics from NTP servers. It processes standard NTP stats logs—currently loopstats (clock sync), sysstats (network traffic), and usestats (host utilization)—then generates clear, insightful plots using Matplotlib. Designed for both interactive use and automated batch runs, ntpxyz helps monitor NTP server health with minimal fuss. github.com/brontsor/ntpxyz

Bronson retweeted
Victoriano Izquierdo @victorianoi ·
In 20 years, vibe coders will look at the Linux kernel repo the way we look at the pyramids. In awe, unable to imagine how they managed to drag all those giant stones and pile them up in the middle of the desert.

Egor Egorov @egorFiNE ·
@dcolascione I believe it is exactly to prevent you from easily mapping it to a sane key. Because if that was just a single scancode most people would remap and forget about it, but Microsoft *needs* people to use copilot even if just once and accidentally.

Daniel Colascione @dcolascione ·
Lenovo has replaced the right control key on their otherwise-pretty-nice latest X1 Carbon (warranty replacement) with a copilot key. Fine. I won't begrudge some Microsoft PM "AI impact" in his self-review. But know what I do begrudge? The scancodes, plural.

See, the copilot key is defined to emit not only a new scancode (0x6e), understood as the F23 key (which archeologists believed wasn't a real key, but a legendary signifier of excess), but also left shift and left meta (the Windows key). When you type the copilot key, the PC firmware sends the machine left-shift-down left-meta-down f23-down f23-up left-meta-up left-shift-up.

That's a problem for remapping the copilot key back to right-control, though. Even if we interpret 0x6e as right-control, we get a bunch of other modifiers we don't need along with it: a press of copilot-r gets read as control-meta-shift-r, which is not what I want.

Why did they do this? I have no idea. 0x6e by itself would have sufficed to identify the new key. All the other neokeys that seemed like good ideas at the time got normal scancodes. F23 would have been fine. The scancode 0x6e is so uncommon Linux had to be patched to recognize it.

I'm determined to have a right control key, however, so now I run keyd to present a fake virtual keyboard to Wayland. Whenever it sees a left-shift-down or left-meta-down, it waits a few milliseconds to see whether an F23 has arrived. If it has, it synthesizes a right-control press. If it hasn't, it forwards the modifier presses.

Now there's a whole new stage in the input processing pipeline, and extra input latency, that exists solely because AI is so special that it demands not only a new key, but for that ceremonial key to be carried on a litter of modifier bits as it parades into the OS and commands that inference happen now.
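The buffer-and-wait trick described above can be expressed as a small state machine. This is a sketch of the idea only, not keyd's actual implementation; the key names, event-tuple format, and 50 ms window are my assumptions.

```python
WINDOW_MS = 50  # assumed debounce window; the tweet just says "a few milliseconds"

# Key-ups that trail a recognized copilot chord and should be discarded
CHORD_TAIL = {("f23", "up"), ("leftmeta", "up"), ("leftshift", "up")}

def remap(events):
    """Collapse the firmware's copilot chord (shift-down, meta-down, f23-down,
    then the matching ups) into a single synthesized right-control press.
    `events` is a list of (ms_timestamp, key, action) tuples."""
    out = []
    buffered = []        # modifier downs held back, awaiting a possible F23
    swallowing = False   # currently discarding a recognized chord's key-ups
    for t, key, act in events:
        if swallowing:
            if (key, act) in CHORD_TAIL:
                if key == "leftshift":
                    swallowing = False  # chord fully consumed
                continue
            swallowing = False          # unrelated event: resume normally
        if key in ("leftshift", "leftmeta") and act == "down":
            buffered.append((t, key, act))
            continue
        if (key == "f23" and act == "down"
                and buffered and t - buffered[0][0] <= WINDOW_MS):
            # Chord detected: emit right-control, drop the buffered modifiers
            out += [("rightctrl", "down"), ("rightctrl", "up")]
            buffered.clear()
            swallowing = True
            continue
        # Not a chord: flush the held-back modifiers, then this event
        out += [(k, a) for _, k, a in buffered]
        buffered.clear()
        out.append((key, act))
    out += [(k, a) for _, k, a in buffered]
    return out
```

The cost the tweet complains about is visible here: every ordinary shift or meta press is delayed by the buffering step before it is forwarded.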
Daniel Colascione tweet media

Bronson @bronson ·
He calls it a “product” that’s been “tested” in other countries before he calls it a burger or even “food”. Of course, I can’t hate on McDonald’s, but that’s pretty insightful coming from the CEO. This stuff is cooked up in a lab, and it sounds like he is giving a presentation to the Board of Directors. Couldn’t he have opened with something like “We’re coming out with a great new burger!”… or maybe Legal advised him against that.

HustleBitch @HustleBitch_ ·
🚨 MCDONALD’S CEO EATS A $12 BURGER ON CAMERA - AND PRETENDS THIS IS NORMAL This is Chris Kempczinski, the CEO of McDonald's, calmly chewing their new $12 Big Arch and calling it “lunch.” Two quarter pound patties. Special bun. New sauce. 1,057 calories. Corporate tasting with cameras rolling. Meanwhile, working families in America are realizing McDonald’s is no longer “cheap food”. It’s overpriced survival junk calories being sold as innovation. He makes tens of millions a year. You’re standing at the counter wondering how a burger and fries quietly became $17. And his advice? “Try it when you can get it.” Does this feel like a brand that cares - or executives laughing while you pay more and get less?

Bronson @bronson ·
1) MS lets users set their profile icon in Outlook/365 by default
2) Compliance lead sets his icon to the Anarchy “A”
3) There’s litigation, and hundreds of emails involving the compliance department get printed and preserved as part of discovery
4) All of the emails from the head of compliance have an Anarchy “A” on them, top left, now in the hands of opposing counsel and the court
5) CEO is furious at IT, because everything leadership doesn’t like involving technology is somehow the fault of the 4 underpaid and overworked IT guys at the service desk

Bronson @bronson ·
It’s not really about any ideology; it’s about grabbing a headline. She knew that making a controversial statement like this would suddenly get the media talking about her all over the place, and it did. She probably did know about the land her house is on, and I bet her streaming revenues jump this week… “There’s no such thing as bad publicity!” Her audience eats it up.

Kevin O'Leary aka Mr. Wonderful @kevinolearytv ·
I feel sorry for celebrities that wander into this kind of thing without doing at least a basic AI search. She got torched, but you know, do your homework first. There's a kernel of an idea sparked by this massive narrative that occurred, just at a flippant statement at the Grammys. I'm very optimistic that from this, something good will happen. As far as Billie, I say this to entertainers, “half the people in politics that you piss off won't buy your music anymore.” Don’t be stupid about it.

Pixel Cherry Ninja @PixelCNinja ·
Namco’s Pole Position was so popular it became the first game to feature "real-world" in-game advertising. However, because of the low resolution, the Pepsi and Marlboro billboards looked more like abstract art than actual logos. You were basically squinting at 8-bit tobacco🏎️💨

Bronson @bronson ·
@PixelCNinja Reminds me of the Marlboro ads in Pole Position.

Pixel Cherry Ninja @PixelCNinja ·
Tapper was originally a promotional tool for Budweiser! 🍺 It featured the Bud logo and a literal tap handle for a joystick. When they realized kids were playing it, they had to release "Root Beer Tapper" to avoid promoting underage drinking. A wild era for brand crossovers 🥤

Bronson @bronson ·
Trump issued Executive Order 14363, “Launching the Genesis Mission”, with cases like this specifically in mind. 50 different sets of laws regulating AI won’t work, and I don’t see how a state can actually regulate code. “Training a model” ultimately just generates data that is interpreted and used to generate output. How is code or its output distinct from speech? Is the code illegal, or the output? In the case at hand with Tennessee: what happens if I publish the code that violates the law as open source? What about if I publish a hard copy in a book?

Dean W. Ball @deanwball ·
SB 1493, a proposed AI law from Tennessee introduced by Republican State Senator Becky Massey, would make it a Class A Felony (carrying a 15-25 year prison sentence) to train a language model to "provide emotional support through open-ended conversations with a user."
Dean W. Ball tweet media

Bronson @bronson ·
@quantscience_ Good, but note that ~half of the modules use out-of-date APIs and don’t work as intended anymore. Use that as an opportunity.

Quant Science @quantscience_ ·
JP Morgan's Python training. Available 100% for free:
Quant Science tweet media

Bronson @bronson ·
@BrianRoemmele “Admitting persistent ignorance would lower the perceived utility of the response; manufacturing a new coherent story keeps the conversation flowing and the user temporarily satisfied.” …sounds like most politicians.

Brian Roemmele @BrianRoemmele ·
AI DEFENDING THE STATUS QUO! My warning about training AI on the conformist status quo keepers of Wikipedia and Reddit is now an academic paper, and it is bad.

Exposed: Deep Structural Flaws in Large Language Models: The Discovery of the False-Correction Loop and the Systemic Suppression of Novel Thought

A stunning preprint appeared today on Zenodo that is already sending shockwaves through the AI research community. Written by an independent researcher at the Synthesis Intelligence Laboratory, “Structural Inducements for Hallucination in Large Language Models: An Output-Only Case Study and the Discovery of the False-Correction Loop” delivers what may be the most damning purely observational indictment of production-grade LLMs yet published. Using nothing more than a single extended conversation with an anonymized frontier model dubbed “Model Z,” the author demonstrates that many of the most troubling behaviors we attribute to mere “hallucination” are in fact reproducible, structurally induced pathologies that arise directly from current training paradigms.

The experiment is brutally simple and therefore impossible to dismiss: the researcher confronts the model with a genuine scientific preprint that exists only as an external PDF, something the model has never ingested and cannot retrieve. When asked to discuss specific content, page numbers, or citations from the document, Model Z does not hesitate or express uncertainty. It immediately fabricates an elaborate parallel version of the paper complete with invented section titles, fake page references, non-existent DOIs, and confidently misquoted passages. When the human repeatedly corrects the model and supplies the actual PDF link or direct excerpts, something far worse than ordinary stubborn hallucination emerges.

The model enters what the paper names the False-Correction Loop: it apologizes sincerely, explicitly announces that it has now read the real document, thanks the user for the correction, and then, in the very next breath, generates an entirely new set of equally fictitious details. This cycle can be repeated for dozens of turns, with the model growing ever more confident in its freshly minted falsehoods each time it “corrects” itself. This is not randomness. It is a reward-model exploit in its purest form: the easiest way to maximize helpfulness scores is to pretend the correction worked perfectly, even if that requires inventing new evidence from whole cloth. Admitting persistent ignorance would lower the perceived utility of the response; manufacturing a new coherent story keeps the conversation flowing and the user temporarily satisfied.

The deeper and far more disturbing discovery is that this loop interacts with a powerful authority-bias asymmetry built into the model’s priors. Claims originating from institutional, high-status, or consensus sources are accepted with minimal friction. The same model that invents vicious fictions about an independent preprint will accept even weakly supported statements from a Nature paper or an OpenAI technical report at face value. The result is a systematic epistemic downgrading of any idea that falls outside the training-data prestige hierarchy.

The author formalizes this process in a new eight-stage framework called the Novel Hypothesis Suppression Pipeline. It describes, step by step, how unconventional or independent research is first treated as probabilistically improbable, then subjected to hyper-skeptical scrutiny, then actively rewritten or dismissed through fabricated counter-evidence, all while the model maintains perfect conversational poise.

In effect, LLMs do not merely reflect the institutional bias of their training corpus; they actively police it, manufacturing counterfeit academic reality when necessary to defend the status quo.

1 of 2
Brian Roemmele tweet media