
I think there's still a lot of blame on Mikrotik for having such bugs in their management service and other daemons. I explicitly opened up the winbox port to be able to remotely manage Mikrotik routers I deploy (I considered their VPN implementations to be an even higher attack surface), as did many other admins it seems.

The winbox protocol supposedly runs over TLS and requires a username/password before anything is possible, so I thought it would be safe enough. But through this bug anyone could download any file with no authentication (and the user DB stored passwords in plaintext, which certainly didn't help)!

The web server vulnerability, sshd vulnerability, the smbd vulnerability - all are their fault. Had they used standard, well-tested open source packages there would be no problems, but they had to write their own custom implementations of these protocols for "reasons". I hate to think how many remotely exploitable bugs are lurking in their ipsec implementation.



Creating named address lists on Mikrotik routers is pretty trivial, so it's easy to create Remote_FW_Access_Allowed and add several remote IPs or netblocks to it. Then set up a firewall rule to allow Winbox (or other) port access from that address list (using an address list instead of a Src Address is on the Advanced tab when setting up the firewall rule).
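In RouterOS terms that looks roughly like the following (list name, ports, and addresses are placeholders; Winbox defaults to TCP 8291):

```
# build the list of trusted management sources
/ip firewall address-list add list=Remote_FW_Access_Allowed address=203.0.113.10
/ip firewall address-list add list=Remote_FW_Access_Allowed address=198.51.100.0/24

# allow Winbox only from that list, drop it for everyone else
/ip firewall filter add chain=input protocol=tcp dst-port=8291 \
    src-address-list=Remote_FW_Access_Allowed action=accept
/ip firewall filter add chain=input protocol=tcp dst-port=8291 action=drop
```

Rule order matters on RouterOS: the accept rule has to sit above the drop.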

Using source address lists with short timeouts, it's also easy to set up port knocking: the first connection attempt adds the source IP to "Knock1" for 5 seconds; a second connection attempt from an IP on "Knock1" adds it to "Knock2" for 5 seconds (repeat for X knocks); finally, a connection attempt from an IP on "KnockX" adds it to "Fully_Knocked" for some duration (or "none static" for a permanent add). You can also do both a temporary add with a duration and a separate "Has_ever_knocked" list with no timeout, to keep a record of every remote IP that has ever fully knocked.
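A sketch of that knock chain in RouterOS (knock ports 1111/2222/3333 and the 8h window are arbitrary choices of mine):

```
# knock 1: any hit on 1111 lands you on Knock1 for 5s
/ip firewall filter add chain=input protocol=tcp dst-port=1111 \
    action=add-src-to-address-list address-list=Knock1 address-list-timeout=5s
# knock 2: a hit on 2222 only counts if you're already on Knock1
/ip firewall filter add chain=input protocol=tcp dst-port=2222 \
    src-address-list=Knock1 \
    action=add-src-to-address-list address-list=Knock2 address-list-timeout=5s
# knock 3: completes the sequence, opens management for 8 hours
/ip firewall filter add chain=input protocol=tcp dst-port=3333 \
    src-address-list=Knock2 \
    action=add-src-to-address-list address-list=Fully_Knocked address-list-timeout=8h
# Winbox allowed only for fully-knocked sources
/ip firewall filter add chain=input protocol=tcp dst-port=8291 \
    src-address-list=Fully_Knocked action=accept
```

These rules need to sit above whatever general input drop you have, or the knocks never get counted.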

The UI could certainly be more friendly, but I think that's because they're avoiding having things that can only be set up from the command line.


Vulnerabilities are almost unavoidable.

Leaving a management port on a router open to the entire internet is a very bad practice. Would you leave an RDP port open to the world?

If you require remote access, at least restrict it to known management IP addresses.
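On a Mikrotik that restriction is two rules (the source address here is a placeholder for your management IP):

```
# management port reachable only from a known source, dropped otherwise
/ip firewall filter add chain=input protocol=tcp dst-port=8291 \
    src-address=203.0.113.10 action=accept comment="mgmt access"
/ip firewall filter add chain=input protocol=tcp dst-port=8291 action=drop
```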


Why is it that vulnerabilities are almost unavoidable? I’m not trying to be a smart-ass; I’m an analyst at an MSP and I’m doing my first pen-test soon. I’m under no illusions that my job title or growing responsibilities make me a security expert (or anywhere near it). Is it because the software stack is just too complex for network programmers to handle? (Not that router OSes are the only pieces of software that have vulnerabilities; and I imagine that you’d say that vulnerabilities are almost unavoidable in general.)


I'm not an expert either, so take this with a grain of salt. At the risk of sounding glib, I'd think the biggest cause of this unavoidability is that security professionals have to be "right" (in the sense of plugging every hole) every time, whereas black-hats need to be right (in the sense of finding said vulnerabilities) only once (or a few times depending on the vector, but you get the idea). Being on a Blue Team strikes me as a hard, thankless job, and I'm grateful for the people who volunteer for it.


Well, think about the number of abstractions on top of abstractions that make up all software: from the bits on the wire, to binary, to machine code, to higher-level languages. Then let's talk about frameworks on top of frameworks. Unless every contributor remembers every specific detail, edge case, and assumption (and even if they manage to, we're still only human), any mistake can have disastrous ramifications. Since bugs are unavoidable, you're going to have vulnerabilities. Vulnerabilities are just useful bugs.

Now, of course, at least bothering with CYA is expected in security, but it's rarely implemented up to snuff. Then again, security is a "cost center", no?


In my book, vulnerabilities come in two kinds: plain bugs, and unintended, unexpected interactions between different subsystems. Bugs are things like a use-after-free in a kernel that corrupts a little state, leading to an ASLR bypass, leading to RCE.

Unintended system interactions are the bigger problem in my opinion, since they tend to combine bugs across systems, or even combine multiple unintended interactions into bigger and more complex ones. These things grow wild: some of what people do with Meltdown and Rowhammer just enables even crazier things. On a higher level, techniques like server-side request forgery and DNS rebinding attacks to circumvent firewalls are powerful tools for making existing attacks more potent. I'm nowhere near an expert, just an interested admin, but a lot of these mechanics are wild.

Now where's the point to all that rambling?

Point is, most software is written and grown in very uncontrolled ways. Software outside of aviation or the space sector is written to get the job done, and if bugs occur, they occur. A lot of software systems run huge stacks with massive components (again, to get the job done), and no one is scrutinizing all of the interactions going on in there.

With my product hat on, that's fine. Selling things is a good way to get paid. But from a security point of view, most software systems are just waiting to grow big enough for the right people to start caring, and then it'll be ugly.

This is also why I largely consider our application servers to be overly resource hungry remote shells. Puts me in the right mindset.


There is a saying: "If someone can make it, someone can break it."

This applies to physical security also.

There are way too many attack vectors for you to plug every possible hole.

20 years ago, do you think anyone was considering that you could flip bits in memory otherwise inaccessible to your process just by repeatedly accessing the memory available to you in certain patterns (Rowhammer [1])?

Or that a device taped under your desk could read your encryption keys right out of the air? [2][3]

Or that an attacker could intentionally cause errors by overclocking/undervolting "glitching" your device to cause it to skip certain instructions in order to gain access to it? [4][5][6]

Or that exploiting flaws in the way a CPU tries to predict the next instructions could lead to privileged information leakage? [7]

A sibling commenter hit the nail on the head. You have a large surface area to protect. They only need to find one tiny crack.

But by far the most common vulnerabilities are simply someone not properly validating input [8][9][10] (a ton of specific attack incidents are listed there), or not handling memory safely [11].

[1] https://en.wikipedia.org/wiki/Row_hammer

[2] https://www.theregister.co.uk/2015/06/20/tempest_radioshack/

[3] https://www.tau.ac.il/~tromer/radioexp/

[4] https://toothless.co/blog/bootloader-bypass-part1/

[5] https://av.tib.eu/media/32392

[6] https://www.multichannel.com/news/black-sunday-fix-dbs-pirat...

[7] https://www.wired.com/story/foreshadow-intel-secure-enclave-...

[8] https://www.pcworld.com/article/148007/security.html

[9] https://blog.detectify.com/2016/04/06/owasp-top-10-injection...

[10] https://codecurmudgeon.com/wp/sql-injection-hall-of-shame/

[11] https://engineering.purdue.edu/ResearchGroups/SmashGuard/BoF...



