Clickbaity title. It makes it sound like remote work for everybody everywhere almost broke because of this-or-that thing involving Microsoft and Asus. Not the case. The article is about an obscure issue for a particular company that trusted their IT to be handled by Active Directory in a Microsoft Azure cloud environment, involving Asus home routers. Hardly a general insight.
The root cause for this is the inability of our industry to properly define standards and then enforce and stick to them. Everybody just hacks something that kinda-sorta works, and whoever has the larger market share is right, and everybody else has to suffer, even if they themselves want to do the right thing and do it properly.
Let's face it, we all suck at this and have to pay for it with this kind of meaningless waste of time "troubleshooting".
I agree that the title is clickbaity -- I expected something much worse and more intentional than some obscure integration bug between Asus routers' DHCP server and proxy-auto-discovery.
The reason our industry typically fails to agree on formal standards is that it takes time and effort to reach agreement. The W3C is a good example: browser vendors move faster than the W3C can keep up with, so they decided to bypass it.
I don't fully share your negativity towards this: the ability to innovate quickly is important, and even when things get fully standardized, there is no guarantee that every vendor implements them correctly (again, see HTML and just how differently things can behave across browser vendors that all implement the same standard).
> I don't fully share your negativism towards this: the ability to innovate quickly is important
I fully agree! But that's a different thing. Of course the ability to quickly innovate is important, but you are implying that the current state of affairs is a necessary consequence. It's not.
I also like to "quickly innovate" by hacking together some proof-of-concept code. Exploration is important and fun. I don't write a test first, or sometimes any tests at all, if I don't yet fully know what I'm building or what the API should look like. But in the long run I can't keep going like that.
No, it's a fact of life. If you don't, then somebody else will and you'll go under.
If you first spend 5 years crafting the perfect invitation to a date for your crush, then meanwhile another guy will have not just asked her out, but by that time they will have rings on their fingers, a house and two kids.
Or you could deem it illegal not to follow ISO/IETF/W3C standards unless the thing is already at the proposal stage.
We may not have the internet we have today, but I'm not sure it would be for the worse. Some things would probably have slowed down, but we might also have avoided the terrible Flash/ActiveX/Java-web-app period, and companies might have pushed for HTML5 earlier through the standardisation process.
No one is going to wait to debate how to do TCP, DNS, or SMTP in an RFC process while not sending bits over the wire.
Most of the core IETF RFCs/BCPs are standardization of things already created. “Prior implementation and testing” is an explicit goal of the IETF process, so I guess you could argue that you’re following that process automatically by creating the initial non-standardized implementation.
How would having a published proposal for flash/activeX/Java web app/foobar have avoided those? A company could crank out a draft proposal in a couple of afternoons.
I worked with the author of RFC 1149. He said it didn’t take him long at all to execute it.
Your idea to outlaw imperfect solutions is noble, but misses the fact that mature solutions learn from earlier, imperfect work.
If we’re fining or jailing people for releasing and/or using hacked together code, it either means no progress or rubber-stamp standards bodies that approve everything.
And I feel a little sorry for 13-year-old me, who presumably you would have treated harshly for releasing some BBS software that saw wide adoption when I “should have” invented IPv6 instead.
Trilobites did pretty well. Crocodiles and Greenland sharks are still trucking. Innovation is fun and is strong on the steep bit of the S-curve but it’s not intrinsically superior.
These types of bugs / disconnects between systems can have a major productivity impact for teams and organisations. And they are relatively common. :/
I've frequently worked on such issues during my career and the time it takes to identify root causes / effective mitigations can be draining. I've found the key is to focus on creating reproducible test cases and a way to capture environment information in a manner that ensures you are comparing apples to apples. Some systems are so brittle that even the extra load of capturing observability data changes behaviour enough to obscure the root cause. :(
Ah, yes, my old nemesis, WinHttpAutoProxySvc... For years, on both Windows 10 and 11, this has had the habit of randomly spiking the CPU core the service is running on to 100%, in some kind of busy-loop that's effectively preventing anything that uses the Win32 HTTP API from working.
So, if symptoms include the laptop fan thinking it's a jet engine, the Start menu refusing to fully populate, Search not responding, and a lot of apps just not launching at all (or taking several minutes to do so), a quick look at the Services tab in Task Manager for WinHttpAutoProxySvc, followed by "Go to details" and "End task" on the corresponding svchost.exe, might just do the trick. You can ignore the big scary warning about this restarting the system: that's a lie.
For a slightly more permanent fix, paste the following into a .reg file and merge it into your Registry:
Windows Registry Editor Version 5.00
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\WinHttpAutoProxySvc]
"Start"=dword:00000004
This will disable the service (which Microsoft has made impossible to do via regular GUI or CLI tooling), and after a reboot, you should be able to, like, use your PC for a while. Keep an eye open for rogue Windows Updates, though, as Microsoft really, really wants to re-enable this service using those. (Apparently, WinHttpAutoProxySvc does all kinds of important stuff, including address assignment for non-native IPv6 setups, none of which I care about; so before blindly following 'just disable this thing' advice from the Internet, think for a while before rolling it out to your entire fleet.)
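(A quick way to notice when an update has quietly flipped it back is to check the configured start type; something like this from an ordinary command prompt:)

rem After merging the .reg file above, START_TYPE should report 4 DISABLED
sc qc WinHttpAutoProxySvc | findstr START_TYPE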
Playing whack-a-mole with WinHttpAutoProxySvc has been oddly satisfying so far: one of these days I might actually grab a debugger to see what's going on here (because, yes, also after updating to Windows 11 22H2, which re-enabled the service, I had the same-old symptoms within days -- I admire the writer of this article for getting a fix for their problem that worked!)
i remember when registry tweaks wouldnt randomly reset and random bloatware would not appear/reappear with each OS update. those were the "good" old days of XP/2000.
anyways, i finally got fed up with Windows 10 Updates + Defender shenanigans and perma-switched to Linux (EndeavourOS/KDE). Oh, the joy of your machine only ever doing what you told it, and blazing fast filesystem access with low resource usage! Unthinkable!
> So did I but I had the bad luck of choosing Ubuntu
i hear nothing but good things about PoP_OS if you want to stick with a Ubuntu derivative.
certainly with linux you cannot just buy whatever hardware, but with just a bit of research (Intel Wifi, AMD gpu) everything works with 0 issues. im on a ryzen P14s AMD 4k-display thinkpad and only had to swap out the mediatek wlan card it came with for an Intel Wi-Fi 6E AX210 (more stable, better perf, monitor mode etc)
This is a common mentality I see when people comment on Windows: setting an override that isn't actually the intended override (disabling a component to 'fix' an issue without any root cause analysis), then complaining about that override getting reverted with a system upgrade.
(And 'impossible using normal GUI/CLI tooling' seems unlikely: for one, regedit is still 'normal tooling' as well, and secondly I've yet to come across a service that can't be disabled with the 'sc' command line utility. Windows Defender is the exception, though that is rather a special case due to the chance of malware itself disabling it, so it has a very specific series of steps to first disable the tamper protection in a trusted way and only then disable the service.)
In this case it seems like an environment-specific issue, as if everyone had this specific thing broken, it'd probably have been noticed and fixed long ago. In fact, one could probably easily find out what is wrong using a profiling session with WPR/WPA, as the main symptom here is 'uses 100% of a CPU core' - a methodology recently most famously spread by Google's Bruce Dawson.
(similarly, I wonder if these same people would 'complain' a Linux distro upgrade would affect their edits to /lib/systemd/system/*.service files - I would somewhat presume that to not be the case, in fact)
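(If anyone wants to try that: a minimal WPR capture, from memory and assuming the built-in CPU profile is enough and an elevated prompt, looks roughly like this; the trace filename is arbitrary:)

rem Start a CPU-sampling trace while the busy-loop is happening
wpr -start CPU -filemode
rem ...reproduce the 100%-of-a-core state for a bit, then stop and save the trace
wpr -stop winhttp-busyloop.etl
rem Open the .etl in Windows Performance Analyzer (WPA) and look at CPU Usage (Sampled)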
And yes, before you dial the condescension up to 11 again, I'm aware of the need to do this from an elevated command prompt, as demonstrated by the fact that it works fine for other services, e.g. Windows Update:
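(From memory, it went roughly like this, with wuauserv being the Windows Update service; on the builds I've seen, the second command is the one that comes back with "Access is denied", which is exactly the 'not possible via regular CLI tooling' part:)

rem Changing the start type works fine for a normal service such as Windows Update:
sc config wuauserv start= disabled
rem ...while the same thing against WinHttpAutoProxySvc gets rejected, hence the .reg workaround:
sc config WinHttpAutoProxySvc start= disabled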
(And, for the record, I was not complaining about anything, merely pointing out my environment-specific experiences with the topic of the article, IMHO also clearly pointing out that my choice of solution is a gross hack and that I should look more closely into the issue one of these days, when my copious spare time allows...)
Please, start with disabling WPAD requests and only then disable the service.
type "Disable WPAD.reg"
Windows Registry Editor Version 5.00
[HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Internet Settings\Wpad]
"WpadOverride"=dword:00000001
That way, even if WinHttpAutoProxySvc were enabled, it wouldn't be triggered by WPAD discovery.
I sometimes wonder if someone will someday tweet just what were they thinking?!? with a Ghidra screenshot of some quick feature I wrote before a deadline.
Using an issue reported (in a Linux compatibility layer, even!) with little to no root cause analysis as evidence that a 'networking stack is bonkers' is a bit exaggerated - IPv6 in itself is also a bit wacky, and I've had similar issues in a variety of environments.
As an opposite example, here's a Wine bug report of a Windows application failing to work in some network environments, but only if it in any way touches the Linux networking stack (such as using VirtualBox's soft NAT, or if running under Wine): https://bugs.winehq.org/show_bug.cgi?id=53346
I guess, in general, software (and networking) is 'bonkers'.
> Using an issue reported (in a Linux compatibility layer, even!)
It looks like it's an issue with the bridge, nothing to do with WSL, but that is pretty much the only place on the internet to report it. The bridge is literally sending packets to the wrong interface (an interface that isn't even a part of the bridge!). Since it's closed source, no one except Microsoft can look into it.
But to figure out what was wrong took a really deep dive into Windows networking, and it is bonkers, absolutely bonkers. At least compared to every other networking stack I've ever deep dived into. Then again, maybe if you spend enough time with any of them, everything looks bonkers. So maybe you're correct:
> I guess, in general, software (and networking) is 'bonkers'.
I tried converting the gaming machine over to Linux, but it's just not there yet. It's making huge strides and I'm super happy about it, but claiming that most or all games run on Linux is waaaaay overstating the reality.
Thank you for the confirmation and support! I love where Linux gaming is going, but we really should be honest about it.
The easiest way to leave a bad impression on newcomers is... telling them one thing and having them see the exact opposite.
I recommend giving Linux a try, it's way more capable than it used to be. With that said, pack a parachute - you probably will find yourself needing to switch at times.
> Retrospectively I suspect confusion around what it means to shut down or restart a computer led to many of the reports of reboots/shutdowns/driving into the office fixing the problem or not [...]
I read somewhere (I think it was a post on /r/sysadmin) that, on modern Windows, "shut down" actually means hibernate, so old-school people who learned that it's always better to power down and power back up instead of just rebooting (since it also restarts the hardware, not only the software) are led astray: on modern Windows, it's the opposite, "restart" is the option which does a clean start-up, while "shut down" just fakes it.
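(This is the "Fast Startup" feature: "Shut down" hibernates the kernel session while "Restart" tears everything down. If you want to check or change it yourself, a minimal sketch, assuming an elevated command prompt:)

rem 1 = Fast Startup on (shut down hibernates the kernel session), 0 = off
reg query "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Power" /v HiberbootEnabled
rem Turning hibernation off entirely also disables Fast Startup,
rem making "shut down" a real power-off again
powercfg /hibernate off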
"Expires: December 1999". It was in draft in the last millennium but it died.
It's also a terrible idea. For example, now anyone running an evil DHCP server in a WLAN you join can get your browser to follow a malicious PAC script which lets them MITM even HTTPS traffic... see e.g. https://www.pcworld.com/article/415991/disable-wpad-now-or-h...
(This was of course back when Windows users were getting regularly pwned by a windows worm of the week so wasn't anything out of the ordinary)
How do you MITM HTTPS without control over the cert store on the client, or access to private keys that let you generate certs that are trusted? You don't, and the threat of this is nation-state-level stuff.
The article you link to posits a malicious PAC file which leaks the contents of request URIs. This is NOT the same as MITMing all HTTPS.
This is also an illustration why, on devices such as this, it's good to layer security with things such as always-on VPN.
EDIT: The root of that article is decent, but it has so many problems... And it starts tacking on the caveats about how it's wrong near the bottom. Like:
"The two researchers showed that some widely used VPN clients, like OpenVPN, do not clear the Internet proxy settings set via WPAD. This means that if attackers have already managed to poison a computer’s proxy settings through a malicious PAC before that computer connects to a VPN, its traffic will still be routed through the malicious proxy after going through the VPN."
This only works if the VPN client doesn't rewrite the routing table to send everything through the tunnel. And if they keep the OS' network state detection from noticing a state change, which in turn triggers a proxy setting refresh. (WinHttpAutoProxySvc specifically does this.)
I suppose you're right that the HTTPS content MITM vulnerabilities wrt PAC have been fixed. But still the URI leaks are bad enough since they leak a lot of info about you.
Re VPNs .. quoting from the pcworld article:
> The two researchers showed that some widely used VPN clients, like OpenVPN, do not clear the Internet proxy settings set via WPAD. This means that if attackers have already managed to poison a computer’s proxy settings through a malicious PAC before that computer connects to a VPN, its traffic will still be routed through the malicious proxy after going through the VPN.
I maintain that WPAD is terrible from a security POV; an OS has no business executing untrusted configuration JavaScript in my web browser. You can just exploit browser bugs there without the user navigating anywhere untrusted, like shown here: https://googleprojectzero.blogspot.com/2017/12/apacolypse-no...
At least on Windows 10 for the last few years, the JS is not executed in your browser. There is specifically an isolated, limited environment called pacjsworker.exe that executes PAC files and supports only a small fraction of the JS a browser needs. Check this out for a list of the functions available in a PAC file: http://findproxyforurl.com/pac-functions/
WinHTTP then uses this to determine if the traffic should be routed through a proxy or not.
It is not, or at least is not now, jscript.dll as the article mentions.
The bug was in WinHttpAutoProxySvc, but at some point ASUS (et al.) must have noticed it and chose to send the blank Option 252 to take advantage of the result, without realizing that the result was buggy behavior, and without reporting it to Microsoft.
EDIT: More specifically, it was that a service running on the client was told, in a weird way, that there was no proxy to use. This weird way triggered a bug, so it kept using that setting even after it no longer received that signal (different DHCP on a different network) and a there-is-now-a-proxy setting was available (via DNS).
EDIT 2: To be more clear, Azure AD needs access to the internet, uses WinHTTP (because that is Windows' standard HTTP library), and when a bug in one part of this stack was exposed, AAD didn't work. There were no problems with AAD itself here, even though that's where the user saw the error.
Wow, what a nightmare to troubleshoot and try to figure out. I feel lucky that a month before March 2020 I switched to a new job doing more direct application support instead of general desktop/AD-related work, or I may have had to deal with that exact issue.
OP here. It was... I was really fortunate that some other company had figured it out in parallel with MS, because I was hitting a dead end.
I got to the point where I was pretty sure that WPAD wasn't working right, and was in the midst of capturing all sorts of data and escalating that, when a side conversation prompted someone to ask "was it you I worked with on X?". Then it all came together.
I think this is one of those things that ends up being so esoteric most folks haven't worked on it, then the chain of events leading to it... But the moment I heard the trigger I was able to repro it... That was quite a relief.
Why do so many companies require proxy configurations to be set on the endpoint, rather than using transparent proxying? Wouldn't that completely avoid a whole class of issues?
This isn't quite my space, but from what I recall there are a couple of wrinkles with transparent proxying. It's overall a good idea, but has some edge cases:
- Authentication can go sideways in weird ways. A 407 from what looks like the correct site can cause odd things. IIRC there isn't great support from vendors for authenticating transparent proxies, too. Sure, you could auth off the machine instead (X device is on the network, therefore it's allowed), but what about shared machines... Proxy auth as user is better, because it allows requests to be tracked to a user ID and login session, not just a device.
- A client getting load balanced between proxies during a session can trigger reauth (because auth sessions aren't shared between proxies), and this can confuse a client.
- HTTPS sites can get weird.
- Routing in very large private environments can complicate default routing to the public internet. Although this can be handled by doing transparent auth + optional manual config.
hm... Could it be related to malware? If there's a malicious device in the network, it would need a proxy configuration. With a transparent proxy, back-connecting to the attacker would be easier.
On the other hand, the attacker could probably always send messages out via a DNS tunnel.
Malware can easily just pick up and use the system proxy settings. Hell, on Windows it just has to use WinHTTP and it'll use the system's paths to the internet.
Don't browsers use CONNECT to access HTTPS sites over a proxy? Isn't the certificate situation exactly the same inside of that vs. with transparent proxying?
Website blocked: nuxx.net
Malwarebytes Browser Guard blocked this website because it may contain malware activity.
We strongly recommend you do not continue.
I've been fighting with this for years, and I'm pretty sure Yandex is the problem. Hell, I even signed up for Yandex's webmaster tools as part of trying to fix this.
Years ago Yandex was flagging on some sample perl code that had a .txt extension (something like udpscanner.pl.txt, IIRC) that I had sitting in a directory. There's no perl CGI, no way for it to execute on the server; just sample code. IIRC it was even served up as text/plain for easy reading in browsers. It was something that would be run directly on an OS, to do some fast scanning of open ports. For an end user to run it they would have to download it and get it executed by their perl interpreter. Definitely NOT an exploit in a browser.
As I recall it was something super basic that I found on a compromised server years ago and referenced in an old writeup.
For some reason it was flagged by Yandex as a browser exploit, they reported up to other places, and Malwarebytes flagged the whole site as malicious.
Since fighting over the technical reasons why their scan is flawed is a Sisyphean task, I ended up just removing it from the site and getting Yandex to rescan. They now list the site as clean, but some old tools still say something untoward is going on.
It's 100% clean on VirusTotal. I'm not sure what Malwarebytes has been doing lately, but I had to remove it from a relative's computer after it kept throwing false positives.
I think they are using some old list, or keep around positives even after they get removed from elsewhere.
I'm the OP, and nuxx.net was getting flagged by Yandex because I had an old perl script, I believe udpscanner.pl, in a directory, as I referenced it in some old writeup. It was actually named udpscanner.pl.txt, was served up as text/plain, on a server with no perl CGI, and was something that needed to be run interactively from an interpreter. Literally, sample code.
Yet for some reason Yandex flagged it as a malicious site. And Malwarebytes picked that up... And apparently continues to do so years after I removed the file and got Yandex to rescan and mark the site as clean.
What occurs to me is how consumers have this home internet equipment -- of varying quality, although here it may be "lack of spec" rather than "out of spec" that's a problem -- without the capacity to troubleshoot it if it goes wrong.
What if the problem had been triggered by specific routers, but instead of breaking things for everyone until a reboot, it had just caused a denial of service for the users with those routers? It would perhaps never have been figured out at all; certain users would just find themselves unable to connect, with no idea what to do to fix it and really no feasible way to find out. Router, modem, broadband provider, workstation, OS upgrade, who knows? With most Enterprise IT just saying "I don't know, works for everyone else, must be something wrong on your end, good luck with that."
> With most Enterprise IT just saying "I don't know, works for everyone else, must be something wrong on your end, good luck with that."
This was something new and interesting with COVID WFH. Previously WFH was an optional thing, a bit of a perk, and usually not long-term, so we could fault someone's home internet and let them contact the ISP/whatever to figure it out. 2020 changed that.
Since we were telling people they needed to work from home, and if they couldn't they'd need to contact HR, it really was best if we helped them out a bit more.
Not full-on support, but talking through the problem and making a recommendation of how to make things better basically became the norm.
> Beyond DNS there is also a Dynamic Host Configuration Protocol (DHCP) method where, along with the typical network address settings, the client receives the URL for downloading the PAC file. This is done via option 252, but isn’t widely supported, it’s normally not used, and we don’t use it either.
Consider me the odd one out but I’ve actually heard about (and used) DHCP option 252, but never once heard about WPAD.
If you've heard of, and used, option 252 to configure a PAC file server, you've used WPAD even if you didn't hear of it in name.
And if you did use it, it's exceptionally rare because, AFAIK, it is only supported by Windows and then things which in turn use WinHTTP.
Of course, because the RFC was never adopted, you could have used option 252 for something else because, technically it's reserved for private use. And your (private) use may be different from someone else's WPAD use.
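(For illustration, on a DHCP server that lets you set arbitrary options, e.g. dnsmasq, advertising a WPAD URL via option 252 looks roughly like this; the hostname is made up, and the quoted-newline variant is a commonly suggested trick for deliberately sending an effectively blank 252, much like the routers in the article did:)

# Hand out a PAC file URL via DHCP option 252 (WPAD)
dhcp-option=252,"http://wpad.example.com/wpad.dat"
# Or send an effectively blank value to discourage Windows clients from asking further
# dhcp-option=252,"\n"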
Yes, you are the odd one. Or you've never administered an enterprise network with a lot of Windows machines and services. Especially with ISA Server.
Though for some time now there has been a push to just disable WPAD discovery, because it can be used to redirect traffic for nefarious purposes, and most people don't use classic web proxies anyway.
That's where WPAD conversations get interesting... I imagine that for a whole bunch of folks something like default-path-to-internet is sufficient, or having everything hit the proxy and then hairpin some things back internally.
But when you start getting into really big private environments there are all sorts of edge cases that Proxy Auto-Config (PAC) files really help with. For those who aren't aware, a PAC file is a single JavaScript function that implements tests on the requested URL (and a few other things) and returns either DIRECT or PROXY for where that request should go. And this all runs on the client.
It's like a super-robust exception list that's centrally hosted and the OS handles caching it, checking for changes, etc. Changes to it can be made centrally and the clients will generally use the new one within a maximum of 20 minutes. Trying to manage a proxy exception list on each client is... hard.
Think situations like having numerous proxy servers all throughout the world, users that move around the world, and you don't want to change the client config to access different proxy servers. Or some sites that need to go through some proxy servers, some through others (eg: for compliance or technical reasons), some direct even though they have a public-ish URL, or not wanting to bounce high-traffic internal sites off the proxies at all. It's basically client-side app-layer selective routing to proxies, and it's great.
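(To make that concrete, here's a minimal sketch of the single function a PAC file contains; the hostnames and proxy addresses are made up:)

function FindProxyForURL(url, host) {
    // Internal names and RFC 1918 space go out directly, no proxy
    if (dnsDomainIs(host, ".corp.example.com") || isInNet(host, "10.0.0.0", "255.0.0.0"))
        return "DIRECT";
    // Some sites pinned to a specific regional proxy, e.g. for compliance reasons
    if (shExpMatch(host, "*.partner-example.com"))
        return "PROXY emea-proxy.example.com:8080";
    // Everything else goes via the general proxy, falling back to direct if it's unreachable
    return "PROXY proxy.example.com:8080; DIRECT";
}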
WPAD is just a way to automatically discover the PAC file. While it can be hijacked on a malicious network to point devices through nefarious proxies, if the company stuff is set up right, there's nothing any of those URLs could grab. Or better, tack on some always-on VPN or whatnot when not on the trusted network and all the better.
IMO the DHCP way of doing it is kinda cool, but DNS is so much simpler and that's why it's widely supported. DNS is just DNS... Anything that can query DNS can implement WPAD logic, which is why it's supported on pretty much every browser platform and macOS and Windows and such.
DHCP requires the dhcpcd to get the setting, somehow pass it to each app that needs it, etc. It requires a lot more integration. Windows does it, but only for things that use the Windows HTTP libraries (WinHTTP). macOS could, but I understand why they don't bother...
Linux... Could... But proxy support on most Linuxes sucks. HTTP_PROXY/HTTPS_PROXY is awful if authenticating proxies are used (plaintext creds in an environment variable?!?), basically no PAC file support at all, no transparent Kerberos auth... Blah.
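(For those who haven't had the pleasure, it tends to boil down to something like this, with made-up names and, yes, the credentials sitting in plaintext in the environment:)

# Plaintext credentials, inherited by every child process that honours these variables
export HTTP_PROXY="http://alice:hunter2@proxy.example.com:8080"
export HTTPS_PROXY="http://alice:hunter2@proxy.example.com:8080"
export NO_PROXY="localhost,127.0.0.1,.corp.example.com"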
Need to go now, but here's another $0.02 from me on this:
yes, complex enterprise networks really benefited from WPAD (at least for some time)
yes, it is primarily Windows because... well, because it was implemented there
and yes, it's not used on *nix-likes because 99% of the time they are not in AD and don't support NTLM SSO, so on any Windows machine you get seamless and silent auth on the proxy, while any *nix-like at best nagged you for credentials with an explicit modal. Oh, you restarted your browser? Who are you? Can you provide your credentials?
> if the company stuff is set up right, there's nothing any of those URLs could grab. Or better, tack on some always-on VPN or whatnot when not on the trusted network and all the better.
yes, just like in any other case, proper planning (preferably done before deployment) is the key to proper security, by not having an attack surface in the first place.
Kerberos, actually, not NTLM. And it is used in macOS (a *nix-like) and works great, either via WPAD (DNS), directly configured PAC file, or directly configured proxy server.
It's Linux end-user devices, specifically Ubuntu, that give us headaches, mostly because there just isn't a system-wide HTTP library that handles it all which apps can depend on. So yeah, the prompting you describe is exactly the problem.
(Not that macOS doesn't have its own proxy problems... Configured on a per adapter/service basis only? With no easy way to give new NICs a proxy setting via some sort of system-default template? WTF?)
You know, companies will, as regular practice, just give you a laptop to work on. Some will even provide a phone or tablet. These are not cheap. Not cheap at all.
I've yet to hear of a company which would fleet-buy routers, reflash them with OpenWRT, and hand them out.
Because it wouldn’t work. It’s completely infeasible for a moderately sized non-tech business because the “IT environment”, such as it is, of employees’ homes is absolutely non-homogenous.
You can, as an end-user IT support department, rely on basically only two things about any user’s home kit and ability to interface with it: they have power (usually), and they can probably get internet access via wifi somehow. Any other assumption about their home environment will prove incorrect in some wild way.
I’ve run into users that don’t have Ethernet, or WPA2, or any kind of public IP, or next hop latency under 200ms, or more than 1mbps bandwidth. We have users that don’t have home Internet, and for the duration of WFH orders used their company provided phone as a hotspot. I’ve run into users who can’t promise that they’ll have mains power day to day.
Even if the user does have Ethernet coming out of whatever their NTD might be (and it might be exotic!) you can’t guarantee that they can follow any kind of instruction to set up a router: they are about as likely to end up without a functional internet connection at all as they are to get whatever box it is you give them to work.
You also can’t even be sure your instructions will work for whatever it is you’re trying to do. Are you taking into account double NATing? Triple? Do you know that none of your users home ISPs have some kind of MAC whitelist on their NTD? What if their current router is built into some kind of HFC modem?
This is not a problem you can solve the mass use case for and address the edge cases: it’s all edge cases, in my experience… and that’s if you’re operating in a single, highly developed first world country.
OP here, and yeah... You nailed it. Absolutely nothing can be presumed about home networks. Most folks seem to think that because their phone shows a good WiFi signal and apps appear to work that all is good.
The stuff we ran into in the suburbs of a major US manufacturing center... Oof. I very quickly learned that a LOT of people really don't have much in the way of internet access at home.
Some other stories:
- The person who complains that a Windows 10 upgrade broke their application and they can no longer work. Real problem? Their home network connection was 2mbit symmetric DSL and the database they were working on would time out sending large results back to them. They seemed embarrassed by their connectivity and concerned about their job due to their internet access. After I managed to find out the real bandwidth and repro the problem, the site was fixed to work on low bandwidth and all was good.
- Users presuming that data on their work-provided mobile phone was unlimited. They'd use it for work, but also their kids' school-from-home, and whatever video streaming... And when you try to push a few hundred GB a month through a mobile, it gets throttled. The key here was when they finally mentioned that they "have bought two other phones but they all have the problem too", as that let me figure out what was going on.
- Then the guy who was working from "home" but after a lot of digging I saw him VPN'ing from southwest Asia. Guy, I don't care that you hurried back to your home country when COVID hit so you could be with your family. Please just be honest with me when I'm trying to help you get working, because we have solutions for this too. (Eg: Remote work via virtualized desktop instead of trying to load big CAD models locally.)
Hell, the number of sysadmin-type IT folks who are double or triple NATing, with bargain-basement "range extenders" in every room of their house, who then exclaim BUT I AM NOT A NETWORK PERSON when I suggest that is why their wireless at home flaps like crazy is unbelievable. Or probably not for you. ;)
Honestly, though, stuff just kinda comes up and pops into my head. I also need to be careful when sharing not to disclose anything that'd be confidential. There's random things on https://nuxx.net, but otherwise... Nothing comes to mind?
> Are you taking into account double NATing? Triple?
Usually this is not the problem... until you hit protocols that rely on port numbers, like those used in audio/video conferencing. Then it's hell.
Overall I want to upvote you twice.
The only workable way to provide such a thing would be to hand out one with a mobile modem integrated, SIM inserted and configured, with the WiFi SSID and password written on the case in at least 32pt type. And still there would be problems.
How will you account for, or even notice, intermittent power issues due to stretched power cords that are no longer intact internally? </sarcasm, but not really>
Honestly, in my experience here in the US, /most/ people's home networks are just fine. There's just edge cases, and then there's weirdness like this.
I think COVID WFH / School-From-Home was actually a boon for internet access in the US as it forced a lot of people to think about their home internet connection as more than just a binary works/doesn't.
Something like this, for a large company, would be another massive thing to maintain. It'd be a whole additional cost to buy, platform to manage and support and patch and upgrade. And people would still just plug them into whatever network jack they see.
Then there's the whole management/update infrastructure that'd have to be built and secured and maintained... It'd basically be like reimplementing what Cisco offers with Meraki, a small remote VPN box, but for everyone. Not cheap at all, and for most users, just not necessary.
What would the router do in this case? On the other hand, companies will loan out commercial hotspot products (aka what you get from e.g. the Verizon store) so employees can get Internet access at home.
OP here. And anecdotes, but almost everyone at this company has a Verizon mobile that has data enabled for tethering as needed, but then some people would...
...use it for Netflix and actually hit the "unlimited" data caps and get throttled.
...name their mobile network the same as their home network with the same password.
...place the phone somewhere weird (in the basement where they work, up against an earth-backed concrete/rebar wall) and complain that it's "too slow".
There's almost no end to weird things people can do with tech and then blame just the tech. There are constant new problems that need to be handled on a case by case basis, because the root causes are not solvable solely by technical means.
I used to work for a company that does development of things like TV apps, and physical hardware. They’d distribute wireless APs to engineers which automatically connected to the corporate VPN and exposed that via the AP for devices which couldn’t be directly configured to do so but needed access to development servers only accessible internally.
We have those as well, for similar uses. And for folks for whom any connection to the public internet may want to be avoided. The stuff's expensive, though, so they are not for the day-to-day end users.
Like sure, you could ship a router, firewall, IPS, WAF, et al. But if you're not stopping outbound, controlling clients' ability to talk to each other, and blocking things like UPnP, then you're still basically on an untrusted network.
Your options are basically make the network unuseable or mark it untrusted.
At that point you might as well just force all traffic through a VPN.
The cost of this sort of thing for a company comes from support and maintenance, not initial device acquisition or setup. As a thought exercise, consider a company with 50,000 of these...
- How do you patch and update them?
- How do you deploy configuration changes?
- How do you support them as they break?
- How do you refresh them as they go EOL / get outdated?
And, while this could solve the router layer problem, it doesn't solve the problems with people's home internet connections. Almost no one would use this in place of their home router, so it'll get plugged into them, and then this becomes nothing more than an additional layer of complexity on top of the home network.
After a fresh install of Windows and letting the updates finish, I was surprised by MyAsus nagging prompts asking for a login. It never asked me if I wanted that absolute ad-riddled crapware disguised as a support tool.
This doesn't surprise me. One thing I know about Windows is you can't rely on it when you need it.
So in the middle of an outage yesterday, the Windows 10 start menu stopped working. You can open it but when you click on stuff nothing happens. Reboot doesn't fix it. Fortunately I had cmd pinned to the taskbar so I am literally starting programs from that.