

Neil MacDonald
@nmacdona
Analyst at Gartner 20 years. Love helping businesses use technology securely. Background is engineering (BSEE, U of Kansas) + MBA (Florida International U)






Husqvarna sent me a stack of chainsaws to give away because a bear stole mine & the Internet went wild. I'm adding a 4-day/3-night stay at my Smoky Mtn cabin (side-by-side tours, meet Jimmy & me, see old moonshine stills, crazy views).

To enter (100% free, no purchase necessary):
1) Follow @BowTiedBroke
2) Comment on THIS post with literally anything (tag friends = extra luck with the dartboard later 👀)

Contest runs exactly 24 hours and closes tomorrow at 10:00 AM EST. At close, @grok will instantly pick 20 random commenters with accounts older than 3 months. Then I put those 20 names on a dartboard, film one throw, and THAT person wins everything. No bots, no BS, fully transparent. Grok posts the 20 here, the dart decides destiny 🎯

Sorry international followers (not that I have that many): U.S. followers only for this one. The cabin is in Tennessee, chainsaws are heavy, and bears don't do passports.

Let's go! Drop a reply and let's see who the chainsaw-stealing bear chooses.

We are the preferred chainsaw brand for bears.


Token Ring was always obsolete.

Back in the 1970s, IBM accounted for the majority of the computer industry, including networking. The famous "OSI Model" is a model for how IBM did networking, not how networks actually work today.

Then along came Ethernet, which broke the IBM model of networking. Instead of an expensive mainframe at the center of the network ruling everything else, Ethernet was democratic, allowing anybody to put any machine onto an Ethernet segment. Instead of "client-server" computing, it allowed "peer-to-peer" computing. It was also cheap compared to other options, and started to become very popular. A lot of competing technologies sprang up around this time as well, like "ARCnet" and "LocalTalk". Basically, networking cheap computers became really cheap.

The IEEE decided to standardize Ethernet, now known as the 802.3 series of standards. IBM couldn't allow this, so they created their own alternative and pushed for the IEEE to include it in the standards: "Token Ring", defined in 802.5. There's also a "Token Bus" standard, 802.4, but it's meaningless. It was only included to pretend IBM wasn't trying to disrupt and dominate the standard.

The trick to the IEEE 802 standards is that all three alternatives used the same 48-bit MAC address that we know and love. This allowed us to build bridges between Ethernet and Token Ring.

Now the thing about Ethernet at the time was that everything was attached to the same wire. That meant if two devices transmitted at the same time, their packets would "collide" and corrupt each other. Each would detect this, stop transmitting, and back off for a random period of time before transmitting again. IBM pretended this was unreliable. The feature of passing a "token" around a "ring" was that it was deterministic, with nothing wasted on collisions.
It meant that a network could run at 100% of theoretical capacity, whereas Ethernet started experiencing problems as it reached max capacity, with everyone colliding with each other.

As it turns out, Ethernet's reliability problems were overstated and Token Ring's understated. Collisions were only a problem when transmitting high rates of tiny packets. When transmitting large packets, collisions were rare, and the network could run at 99% of capacity. Once any network exceeds capacity, everyone needs to slow down and wait on the network. So in the end, you wouldn't notice the collision problem as anything remarkable.

Conversely, IBM chose the same connector for Token Ring as was already in use for video ports and serial ports. If a desktop user plugged the cable into the wrong port, it would crash the Token Ring. In other words, it had a serious reliability problem that "tokens" couldn't fix. As any old-timer can tell you, they were in a constant battle against this, trying to fix "beaconing" (crashed) rings. It was hilariously unreliable.

It was also expensive. Ethernet hardware used dumb, cheap chips. Token Ring adapters needed their own CPUs, separate from the host's. Humorously, the early network cards from IBM included a 16-bit CPU that was more powerful than the 8088 CPU of the IBM PCs into which you inserted these adapters.

The point is that IBM had an argument for why they were "better", but the technology was actually dramatically worse, even if it weren't more expensive. It was all part of IBM's fight to avoid losing control of the industry. IBM customers bought a lot of Token Ring from IBM because they were IBM customers and IBM told them to. But it never really went anywhere outside of IBM shops. Few believed IBM's marketing nonsense.

The upshot is that old-timers like me shouldn't be bragging about having once built Token Ring networks. It's a badge of shame, not pride. It was bad tech from the very beginning.
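The "back off for a random period" behavior the post describes is Ethernet's truncated binary exponential backoff. Here's a minimal sketch of that scheme; the slot and attempt constants follow the classic 802.3 CSMA/CD parameters, but the function itself is a simplified illustration, not a real MAC implementation:

```python
import random

SLOT_TIME_BITS = 512   # one contention slot = 512 bit times in classic 802.3
MAX_BACKOFF_EXP = 10   # the window stops doubling after 10 collisions
MAX_ATTEMPTS = 16      # after 16 attempts the frame is dropped

def backoff_slots(collision_count: int) -> int:
    """Pick a random wait, in slot times, after the Nth collision on a frame."""
    if collision_count > MAX_ATTEMPTS:
        # Real adapters report "excessive collisions" and give up.
        raise RuntimeError("excessive collisions: frame dropped")
    exp = min(collision_count, MAX_BACKOFF_EXP)
    # Wait a uniform random number of slots in [0, 2^exp - 1].
    return random.randrange(2 ** exp)
```

After the first collision a station waits 0 or 1 slots; after the third, anywhere from 0 to 7. Doubling the window spreads retransmissions out, which is why collisions cost so little in practice: with large frames, the occasional random wait is tiny relative to transmission time.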



Neil MacDonald's session at #saseconverge is amazing. If you didn't register, you're missing out!! Kumar Ramachandran Anupam Upadhyaya @mountainviewer

@anton_chuvakin @nmacdona Remote working has been technically possible for years. This event forced reluctant managers to acknowledge that presence + activity != outcomes








