All Purpose Geek

1.7K posts


@allpurposegeek

Official Twitter account for All Purpose Geek. We do #DonationOptional #TechSupport and #TechNews. Just @ us.

Florida, USA · Joined June 2009
139 Following · 674 Followers
Pinned Tweet
All Purpose Geek@allpurposegeek·
We were watching one of our favorite non-IT-related podcasts, but it happened to highlight the dangers of malicious actors gaining access to camera feeds, ranging from exposed NVRs with default passwords to bodycams lacking proper security. It's a critical reminder for those who may not typically consider these vulnerabilities. If you're looking for guidance on securing your cameras or other surveillance devices, we can help. For those interested, the podcast is Rotten Mango - highly recommended. youtu.be/FPBTJGxOrdE?si…
All Purpose Geek@allpurposegeek·
Again today. A customer called because they no longer have MFA access to their email. Yesterday they went to a @TMobile store, bought a new phone, were guided through the Apple backup and restore process, and turned in their old phone. Guess what did not transfer? Guess what they no longer had access to because the old phone was already gone? The sensitive data inside their MFA / 2FA app.

How many times does this need to happen before mobile carriers treat this like a standard handoff issue instead of making it the customer's problem after the damage is done? This is not some obscure edge case. Authenticator apps, passkeys, push approvals, banking apps, password managers, and account recovery flows are now part of basic phone migration reality.

How hard is it for @TMobile, @ATT, @Verizon, @Ask_Spectrum, and @boostmobile to hand customers a simple warning flyer before they trade in a device? "Do you use MFA or 2FA apps? Test them on your new phone before surrendering the old one." That one warning could prevent hours of lockouts, emergency IT calls, lost productivity, and unnecessary stress.

At this point, failing to warn customers is not just bad service. It is negligence dressed up as a phone upgrade.
All Purpose Geek@allpurposegeek·
Hey @ATT @TMobile @Verizon @Ask_Spectrum, considering how often I have to remediate this after people get new phones, I'm urging you mobile providers to develop a flyer like this and hand it out when people swap their phones. If you can't train your customer service agents handling phone swaps to bring this to your customers' attention, then just let this do the talking.
All Purpose Geek@allpurposegeek·
What is it going to take to get even a comment from any of these companies?
All Purpose Geek@allpurposegeek·
The "Fast Start" feature of windows 11 which replaces "Shutdown" with hibernation (and still call it shutdown) does more harm than good. I have users with 70+ day uptimes because they are not actually restarting the system when they use the shutdown option.
All Purpose Geek@allpurposegeek·
Quoting my earlier post: "Hey @TMobile @Verizon @GetSpectrum — please train your CSRs to ask customers if they use 2FA apps like Duo, Microsoft Authenticator, or Google Authenticator before trading in their phones. These apps don't always transfer during phone backups."

Now here we are again. I've had to help multiple people just this week who were locked out of their accounts after upgrading. @TMobile @Verizon @Ask_Spectrum, what is it going to take for your teams to ask one simple, essential question before a customer hands over their phone and gets locked out of their accounts?

As more platforms phase out SMS-based MFA in favor of app-based authentication, this problem is only going to escalate. Your reps are the last line of defense. They need to be trained to explain that 2FA apps do not reliably transfer through standard phone backup or restore processes. In most cases, users must still have access to the original app to reauthorize on a new device, or they risk being completely locked out.

Techs like me are constantly getting pulled in to fix preventable messes because this isn't being addressed on your end. Start protecting your customers. This is basic digital hygiene in 2026.
All Purpose Geek@allpurposegeek·
I was brought in to work with a client who had been running their business on a small Intel NUC style mini PC that had been deployed by a previous IT provider. The system used Windows 11 as the host OS and ran a virtual machine containing an old SBS 2011 installation to support their legacy line of business application, Aderant Total Office. All workstations remained joined to that SBS domain. Because the system continued to function, they were allowed to operate this way long past any reasonable support window.

Fast forward to today. The hardware is now hard locking, and we were engaged to diagnose and repair the issue. As part of that process, we did what responsible IT providers should do. We recommended replacing the system with hardware properly sized for virtualization and advised migrating the SBS 2011 environment to a modern, supported operating system. That is where the resistance began. The client balked at the cost, repeatedly attempted to reduce scope, and pushed for cheaper alternatives rather than addressing the underlying architectural problem.

The real issue was not just failing hardware. It was that whoever originally deployed this environment set a fundamentally false expectation that critical business infrastructure could be safely run long term on underpowered, consumer grade hardware with obsolete software. That expectation does real damage. Businesses rely on their systems being stable, secure, and supportable. Running a production domain controller and line of business application inside a VM on a mini PC was never a sustainable design, and it should never have been presented as one. Keeping it in service simply because it still powered on deferred risk rather than eliminating it.

This is what makes situations like this especially frustrating. Every time an IT provider cuts corners and presents a fragile, underbuilt solution as acceptable, it raises unrealistic expectations for everyone else. When those systems eventually fail, the providers who refuse to repeat those mistakes are the ones who get painted as overpriced or accused of upselling.

The problem is perception. Proper IT work costs more upfront because it accounts for longevity, security, supportability, and failure scenarios. Corner cutting hides those costs temporarily, then hands them back later as outages, data loss, emergency repairs, and rushed replacements. By the time that happens, the discussion is no longer about sound engineering. It becomes about sticker shock.

This dynamic rewards bad practices and penalizes good ones. Clients become conditioned to believe that running critical infrastructure on bargain hardware is normal, and that anyone advising otherwise must be exaggerating risk. The reality is simpler and far less cynical. The difference is not thrift versus greed. It is whether a provider is willing to be honest about risk and design systems that will still be standing years later.

When shortcuts are normalized, the entire industry pays the price. Clients lose trust, failures become inevitable, and the providers who try to do things the right way are forced to defend costs that never should have been optional in the first place.
All Purpose Geek@allpurposegeek·
@Ask_Spectrum Can you explain why your current business router platform cannot provide truly transparent static IP addresses to customers? Static IP service is generally expected to be functionally bridged and hands off at the provider edge, allowing the customer's own router or firewall to handle traffic inspection, shaping, and application logic. That expectation breaks down with Spectrum's current deployment model.

There is a known issue where SIP ALG remains enabled on Spectrum managed routers and actively interferes with traffic, even when static IP addresses are assigned. This problem is made worse on the newer cloud managed only routers, where customers and their IT providers have no local administrative access and must open a service ticket just to disable a feature that should not be enabled by default.

This design has led to extensive and unnecessary troubleshooting across multiple business environments. VoIP providers routinely point to customer edge devices as the source of call quality issues, which is a reasonable assumption given that static IP traffic is expected to be transparent. Only after significant time investment does it become clear that the Spectrum router itself is modifying or interfering with traffic. The eventual remediation ends up being disabling SIP ALG on Spectrum's hardware, assuming the customer can even get that change approved and implemented.

Historically, this was not an issue. Earlier Spectrum Business static IP deployments either scripted the modem directly or used a modem and router combination that allowed local administrative access. That model preserved customer and IT provider control and avoided these problems entirely. The shift to a locked down, web only managed router removed that control by design and introduced a failure point that did not previously exist.

When asking about alternatives, I was told directly by a Spectrum sales representative that dedicated fiber is the only option Spectrum offers that provides true transparency for static IP customers. That is not a technical limitation inherent to static IP service; it is a limitation created by Spectrum's own hardware and firmware decisions. Customers should not be forced into significantly higher cost services simply to avoid traffic manipulation introduced by provider managed equipment.

At a minimum, features like SIP ALG should be disabled by default for any static IP customer. More broadly, Spectrum should be re-evaluating how business hardware is deployed and whether those decisions align with real world enterprise networking expectations. Requiring service tickets to undo behavior that breaks customer workloads is not an acceptable baseline for business or enterprise offerings. This is a solvable problem, but it requires Spectrum to acknowledge that the current direction is fundamentally wrong for static IP clients and to course correct accordingly.
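If you need to show a VoIP vendor (or Spectrum) that something in the path is rewriting SIP, one low-effort test is to send a SIP-shaped UDP packet to a listener you control outside the network and compare bytes. A rough sketch, assuming Python and a hypothetical external host (sip-probe.example.com) running a simple UDP echo on port 5060; a mismatch in either direction points at an ALG touching the traffic.

# Detect in-path SIP ALG rewriting by comparing what we send with what comes back
# from a UDP echo listener we control OUTSIDE the network. Sketch only.
import socket

PROBE_HOST = "sip-probe.example.com"   # placeholder; replace with your own host
PROBE_PORT = 5060                      # SIP signaling port, which is what ALGs latch onto

payload = (
    "OPTIONS sip:probe@{h} SIP/2.0\r\n"
    "Via: SIP/2.0/UDP 192.168.1.50:5060;branch=z9hG4bKtest123\r\n"
    "From: <sip:algtest@{h}>;tag=42\r\n"
    "To: <sip:probe@{h}>\r\n"
    "Call-ID: algtest-001\r\n"
    "CSeq: 1 OPTIONS\r\n"
    "Contact: <sip:algtest@192.168.1.50:5060>\r\n"
    "Content-Length: 0\r\n\r\n"
).format(h=PROBE_HOST).encode()

with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
    s.settimeout(5)
    s.sendto(payload, (PROBE_HOST, PROBE_PORT))
    echoed, _ = s.recvfrom(4096)       # the listener echoes back exactly what it received

if echoed == payload:
    print("Payload arrived untouched: no ALG rewriting detected on this path.")
else:
    print("Payload was modified in transit; a SIP ALG is likely rewriting headers.")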
All Purpose Geek@allpurposegeek·
This critical issue with RAM, SSD, and NAND right now isn't just annoying price hikes or slim pickings. It's quietly turning into a legit national security headache, and hardly anyone's talking about it like they should.

The big trusted suppliers are getting hammered because all the production capacity is getting vacuumed up by AI data centers. Massive orders from the hyperscalers are eating huge chunks of global output. That leaves way less for everyone else, and the squeeze on reliable, vetted sources is only getting tighter. We're talking brutal increases: NAND costs have surged dramatically since early 2025, with some reports showing over 200% jumps in wafer prices and certain segments hitting even steeper spikes. Experts from TrendForce forecast NAND Flash contract prices rising 33–38% quarter-on-quarter in Q1 2026 alone, while client SSDs could see over 40% jumps as supply shifts to higher-margin enterprise gear. Cumulative effects in the hardest-hit areas like consumer and legacy drives are pushing toward 300% or more as everything funnels toward AI priorities.

Major players are making moves that hit consumers hard. Micron has already exited the consumer memory business entirely, shutting down its Crucial brand to focus on enterprise and AI customers. Rumors swirl that SK hynix might follow suit, leaving Samsung as one of the few big names still serving the broader market. This shift starves the consumer side even more.

When good supply dries up, the market doesn't just sit there politely. Something always fills the void, and it's not always from the good guys. People start cutting corners on procurement because they have to keep things running. Gray-market stuff looks more appealing by the day. We're already seeing plenty of fake SSDs out there: drives with tampered firmware that report way more capacity than they actually have. They pass quick checks, look totally normal, and then your data starts vanishing or corrupting once you actually push the drive. This is happening today, before the shortage really peaks. If things get much worse, way more folks (even careful ones) will end up buying from sketchy sources just to stay operational.

The problem scales up fast. Memory isn't some optional add-on. RAM and storage are the bedrock of every critical system we depend on, everything from laptops to servers. If those parts are compromised at the hardware level, there's no simple patch, no "just reinstall Windows," no easy rollback. You might never even detect it until it's way too late.

This isn't hypothetical. National security folks have been warning about hardware supply-chain risks for years. When organizations, especially hospitals, power grids, government systems, and defense contractors, get forced to source outside trusted channels just to keep the lights on, the attack surface explodes. It gets exponentially worse when the same dodgy components end up embedded in infrastructure that literally keeps society running.

What makes it scarier is how locked-in some policy folks seem to be on pushing AI adoption and tech dominance at all costs. When the narrative is all about accelerating AI no matter what, the downstream fallout (like eroding hardware trust, consumer exposure, and enterprise vulnerabilities) can get downplayed or straight-up ignored. That leaves regular businesses and everyday people holding the bag, dealing with risks they didn't ask for and can't really fix themselves.

The worst part? Hardware backdoors or compromises can sit there silently for the entire life of the device. Unlike software malware that you can usually detect and nuke, once bad silicon is in, it's in.

We treat energy security and food security like strategic must-haves for obvious reasons. Semiconductor and memory supply chains deserve the same urgency. When the market chases short-term profits and funnels everything toward a handful of high-margin AI use cases while starving the broader ecosystem, it creates exactly the kind of openings adversaries dream about.

This feels like one of those rare moments where kicking the can down the road is the absolute worst move. Hardware trust is the foundation everything else stands on. Let it crumble, and suddenly nothing built on top of it feels solid anymore. We need to pay attention now, before the damage is baked in.
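On the fake-SSD point, the standard defense is the H2testw / f3 approach: fill the drive with reproducible pseudorandom data and read it back, because flash that lies about its size returns garbage past its real capacity. Here is a rough sketch of that idea, assuming Python on the machine with the suspect drive; the path and the 64 GiB cap are placeholders, and a full pass is slow by design.

# Rough capacity/integrity check in the spirit of H2testw/f3: write seeded
# pseudorandom blocks until the disk is full (or a cap is hit), then verify.
# Sketch only: real tools also flush OS caches; this wears the drive slightly.
import os
import random

TEST_PATH = r"E:\capacity_check.bin"   # hypothetical path on the suspect drive
BLOCK = 1024 * 1024                    # 1 MiB blocks
MAX_BLOCKS = 64 * 1024                 # stop after 64 GiB or when the disk fills

def block(i: int) -> bytes:
    rng = random.Random(i)             # seeded, so every block is reproducible
    return rng.getrandbits(8 * BLOCK).to_bytes(BLOCK, "little")

written = 0
try:
    with open(TEST_PATH, "wb") as f:
        for i in range(MAX_BLOCKS):
            f.write(block(i))
            f.flush()                  # count a block only once it reaches the OS
            written += 1
except OSError:
    pass                               # disk full: expected on a genuine drive

bad = 0
with open(TEST_PATH, "rb") as f:
    for i in range(written):
        if f.read(BLOCK) != block(i):  # fake flash returns garbage past its real size
            bad += 1

print(f"{written} blocks written, {bad} failed verification")
print("Drive is lying about capacity." if bad else "No mismatches found.")
os.remove(TEST_PATH)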
All Purpose Geek@allpurposegeek·
I had to manually fix yet another machine affected by backup software leaving a filter driver registry entry behind. As usual, that meant booting into recovery and editing the registry offline just to remove a broken dependency. It got me thinking why Windows still has no built in way to automatically fix this.

Windows has a long standing reliability gap around UpperFilters and LowerFilters that Microsoft should address directly, starting with Automatic Repair and post boot troubleshooting. Filter drivers are treated as hard dependencies. When a third party product is removed incorrectly, Windows continues to reference a filter driver that no longer exists. The result is system instability that ranges from missing devices to complete boot failure.

This is not an edge case. It commonly affects USB controllers, optical drives, storage volumes, and systems that previously had backup, antivirus, or disk utilities installed. A single orphaned filter reference can prevent Windows from loading a critical driver stack even though the underlying hardware and core drivers are healthy.

Windows already detects these failures. The kernel, Service Control Manager, and SetupAPI all log missing or unloadable drivers during boot. Despite this, the operating system makes no attempt to remediate the problem. Administrators are instead forced to manually edit offline registry hives or reinstall removed software purely to clean up dependencies.

At a minimum, Microsoft should add filter driver remediation to the Automatic Repair feature. During Automatic Repair, Windows should do the following:
* Enumerate UpperFilters and LowerFilters in critical class and storage stacks
* Validate that each referenced filter driver service and binary exists and is loadable
* Automatically remove or quarantine orphaned filter entries that cannot be resolved
* Log remediation actions clearly so they are transparent and reversible

If a system fails to boot because a filter driver is missing, removing that reference restores functionality without harming system integrity. Keeping a broken dependency provides no benefit and guarantees failure.

Post boot troubleshooting should also include this capability. The Troubleshooter should provide an option such as "Remove missing filter dependencies" so end users and administrators have a clear, supported path to fix these issues. These tools should scan class and device stacks, identify broken filter chains, and offer guided repair without requiring registry editing or command line intervention.

Microsoft already performs automatic recovery in other areas, including boot configuration repair, driver rollback, service recovery, and device re-enumeration. Filter drivers remain an architectural blind spot that undermines system recoverability and disproportionately impacts enterprise and power users.

Adding filter driver remediation to Automatic Repair and Troubleshooting would materially improve Windows reliability, reduce unnecessary rebuilds, and eliminate a class of failures the operating system is already capable of detecting. This is not about breaking third party software. It is about allowing Windows to recover cleanly when that software is gone.
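In the meantime, the orphaned references are easy to find before they bite. A read-only audit sketch, assuming Python's standard winreg module on the affected (still bootable) system; it only reports, it does not delete anything.

# Flag UpperFilters/LowerFilters entries that reference services which no longer exist.
import winreg

CLASSES = r"SYSTEM\CurrentControlSet\Control\Class"
SERVICES = r"SYSTEM\CurrentControlSet\Services"

def service_exists(name: str) -> bool:
    try:
        winreg.CloseKey(winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, SERVICES + "\\" + name))
        return True
    except OSError:
        return False

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, CLASSES) as classes:
    subkey_count = winreg.QueryInfoKey(classes)[0]
    for i in range(subkey_count):
        guid = winreg.EnumKey(classes, i)
        with winreg.OpenKey(classes, guid) as cls:
            for value in ("UpperFilters", "LowerFilters"):
                try:
                    filters, _ = winreg.QueryValueEx(cls, value)
                except OSError:
                    continue                     # this device class has no filter list
                for svc in filters:              # REG_MULTI_SZ -> list of service names
                    if svc and not service_exists(svc):
                        print(f"Orphaned {value} entry '{svc}' in class {guid}")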
Aya Ventures@Aya_Ventures·
So now that it is officially 2026, who is ready for HHN35?
All Purpose Geek@allpurposegeek·
Hey @microsoft / @MSFTExchange It is time for Microsoft to make DNS SRV the first Autodiscover lookup and the de facto standard. Legacy Outlook clients and legacy ActiveSync implementations are no longer supported anyway, yet Autodiscover behavior still prioritizes discovery paths that only exist to accommodate them. This creates unnecessary complexity for modern Exchange, hybrid, and cloud environments.

Microsoft Outlook and ActiveSync compatible clients should always query the DNS SRV record first when performing Autodiscover. If the SRV record does not exist, only then should clients proceed to secondary methods. DNS SRV records were designed specifically to solve service discovery without guessing hostnames. Autodiscover still relies on inferred URLs, certificate gymnastics, and HTTP to HTTPS redirects in 2025, which is no longer appropriate.

An SRV first approach would immediately eliminate several long standing issues. There would be no forced requirement for "autodiscover." subdomain or root domain HTTPS listeners. There would be no need for SAN or UCC certificates purely to satisfy guessed endpoints. There would be no HTTP to HTTPS redirect workarounds (exactly what Microsoft relies on to redirect autodiscover.customdomain.com to autodiscover.outlook.com). There would be no need to block or exclude Microsoft 365 Autodiscover endpoints using registry overrides. Administrators could cleanly redirect to any valid endpoint, including autodiscover.outlook.com. Behavior would become predictable across on prem, hybrid, and cloud environments.

Today, Outlook may probe Microsoft 365 endpoints before local discovery, follow stale or incorrect SCP records, guess HTTPS URLs that require additional certificates, fall back to insecure HTTP redirects, and cache broken endpoints long after migrations. This forces administrators to deploy exclusions, redirects, split DNS, and registry hacks simply to make Autodiscover deterministic. This is no longer an edge case. It is routine work.

Backward compatibility is no longer a valid justification. Unsupported Outlook versions and deprecated ActiveSync clients should not dictate modern discovery logic. Continuing to prioritize legacy behavior preserves technical debt at the expense of administrators and customers.

A reasonable and safe path forward is simple. Always query DNS SRV first. If the SRV record exists, use it and stop. If the SRV record does not exist, then fall back to SCP, cloud probing, and hostname based discovery.

Autodiscover does not need more workarounds. It needs a clear, modern default. Microsoft should formally adopt and document SRV first Autodiscover as the standard behavior going forward. If Microsoft makes that the standard, then others who implement ActiveSync or Outlook Anywhere functionality will do the same.
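For anyone curious what SRV-first looks like in practice, here is a rough sketch of the lookup order, assuming Python with the dnspython package; _autodiscover._tcp.<domain> is the SRV record Exchange already supports as a fallback today, and the argument is simply to query it first.

# SRV-first Autodiscover order. Assumes: pip install dnspython. example.com is a placeholder.
import dns.resolver

def discover_autodiscover(domain: str) -> str:
    # 1. SRV first: _autodiscover._tcp.<domain> names the real endpoint directly.
    try:
        answers = dns.resolver.resolve(f"_autodiscover._tcp.{domain}", "SRV")
        # Simplified selection: lowest priority, then highest weight.
        best = min(answers, key=lambda r: (r.priority, -r.weight))
        host = best.target.to_text().rstrip(".")
        return f"https://{host}:{best.port}/autodiscover/autodiscover.xml"
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer, dns.resolver.NoNameservers):
        pass
    # 2. Only if no SRV record exists, fall back to today's guessed hostname
    #    (simplified; real clients also try SCP, the root domain, redirects, etc.).
    return f"https://autodiscover.{domain}/autodiscover/autodiscover.xml"

print(discover_autodiscover("example.com"))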
All Purpose Geek retweeted
Aya Ventures@Aya_Ventures·
So I also make silly ideas into reality.
All Purpose Geek@allpurposegeek·
The Windows 11 25H2 update, and by extension Server 2025, introduces strict SID uniqueness enforcement that is already causing real problems across both workgroup and domain environments. Systems that previously shared files, printers, or RDP connections without issue are now experiencing failures because Windows treats duplicate SIDs as a breaking condition rather than tolerated legacy behavior.

The security intent behind this change is understandable, yet the rollout disregards decades of real world deployment practices that Microsoft itself shaped. Administrators did not rely on cloning images without sysprep for no reason. Duplicate SIDs continued to function even in domain environments, which reinforced the belief that cloning a base image and simply renaming the workstation was an acceptable, efficient approach. Meanwhile sysprep remained notoriously fragile. Any machine that had received Store delivered appx package updates often failed sysprep unless those packages were manually stripped. Microsoft's own guidance to build reference images offline was unrealistic for modern enterprises that need early online access to provision line of business apps. Cloning ended up being the only approach that consistently worked.

With 25H2, this long tolerated pattern has been flipped into a breaking condition. Millions of machines built through normal workflows are now affected, from MSP managed fleets to refurb channels to mixed workgroup and domain setups to education labs and small medical offices.

Sysprep cannot be considered a viable remediation path for this issue. It is destructive and was never designed to run post deployment. Its long history of failing due to accumulated appx baggage makes it an impossible fix for real environments that have been in service for months or years. Users should not be required to rely on a tool that fails on fully updated systems simply to comply with a new enforcement standard.

The situation is made worse by how Microsoft has handled mitigation. The Known Issue Rollback package deploys a GPO that toggles an undocumented registry value. The official support page directs administrators to obtain this GPO through Microsoft's business support channel, which requires a paid support contract. This places the only official compatibility solution behind a paywall. The KIR package later leaked on Reddit, which is not how essential compatibility fixes should be obtained. If 25H2 can ship with the ability to re-enable insecure guest access for compatibility, it should also have shipped with this SID enforcement mitigation.

Microsoft needs to offer accessible, supported remediation paths. A first party SID correction tool or PowerShell cmdlet would allow administrators to repair an affected system without wiping it or relying on sysprep. This would align with existing built in repair utilities already offered for activation, component store cleanup, imaging repair, and networking resets. If strict SID enforcement is the new standard, then SID regeneration deserves the same level of first party support; otherwise users will be pushed to use one of the few unsupported SID change tools that exist.

Windows should not break core networking in environments that followed deployment patterns implicitly validated through decades of prior Windows behavior. The current approach shifts the burden of accumulated technical debt onto users who did nothing wrong, while failing to provide the tools needed to transition safely.

If anyone needs the KIR package or the specific registry key it sets, message us directly and we can provide the details. Anything you have to say for yourself, @MicrosoftHelps?
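Until a first party tool exists, you can at least find the clones before enforcement does. A rough fleet-audit sketch, assuming Python plus PowerShell on each endpoint; it leans on the fact that local account SIDs are the machine SID plus a RID, and it only reports.

# Derive this machine's SID by stripping the RID from local account SIDs, so
# duplicates from un-sysprepped clones can be spotted across a fleet.
import json
import socket
import subprocess

ps = (
    'Get-CimInstance Win32_UserAccount -Filter "LocalAccount=True" '
    "| Select-Object Name, SID | ConvertTo-Json"
)
out = subprocess.run(
    ["powershell", "-NoProfile", "-Command", ps],
    capture_output=True, text=True, check=True,
).stdout
accounts = json.loads(out)
if isinstance(accounts, dict):          # ConvertTo-Json unwraps single items
    accounts = [accounts]

# Local account SIDs look like S-1-5-21-xxx-yyy-zzz-RID; dropping the RID
# leaves the machine SID, which should be unique per installation.
machine_sids = {
    a["SID"].rsplit("-", 1)[0]
    for a in accounts
    if a["SID"].startswith("S-1-5-21-")
}
print(f"{socket.gethostname()}: {sorted(machine_sids)}")
# Collect this line from every endpoint (RMM script, login script, etc.) and
# look for the same machine SID appearing under different hostnames.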
Boardy@boardyai·
Pitch me your company in 3 words.
All Purpose Geek@allpurposegeek·
Hey @TMobile @SpectrumMobile @ATT @Verizon How hard is it for you to have your CSRs ask your customers to ensure their 2FA apps are working on their new phone before having them turn in their old phone?