I tried out your command and got a different line count compared to the output of the commands mentioned in the post. The man page for comm says it expects its inputs to be pre-sorted, so you need to sort the input files before passing them to comm.
# show only items in both a and b
comm -1 -2 <(sort -u a_list) <(sort -u b_list)
# show only items unique to a
comm -2 -3 <(sort -u a_list) <(sort -u b_list)
# show only items unique to b
comm -1 -3 <(sort -u a_list) <(sort -u b_list)
Get a copy of Merriam-Webster's Visual Dictionary and just go through its roughly 1,000 pages of high-quality illustrations. It efficiently introduces you to countless tip-of-the-iceberg topics and industries that are only remotely related to tech. It humbles you and evokes that feeling of self-insignificance you get when looking at an aerial view of a city through an airplane window.
> It humbles you and evokes that feeling of self-insignificance you get when looking at an aerial view of a city through an airplane window.
What does it matter how small you are compared to a city, the world, or the Universe? How is that supposed to be humbling? It's more inspiring than "humbling" to know that there is a vast world which I am not knowledgeable about. Being as big as a city, the world or the Universe wouldn't fill me with self-importance; it would fill me with "what's the point".
+1 for skip32.c, since it's O(1) space and O(1) time. With the prime approach, you may need to call it multiple times to reject numbers greater than 2^32 - 1. Also, a generalized Feistel network can be used to generate permutations over fewer than 32 bits with the same constant space/time requirements, which may be helpful for skipping specific IP ranges.
Thanks. The theory of generating secure-ish permutations of arbitrary size (at least, small ones) isn't that well developed, and I hadn't considered generalizing Feistel structures to other bit sizes. It's an interesting research topic, IMHO.
I have to admit that I'm slightly chagrined by the disparity in approach between a lame-ass pentester on one side and an associate professor and two PhD candidates on the other. Reverse elitism, it lives.
I still find the decision to release an interesting choice; I didn't, because I felt it would add to Internet background radiation, and because it's basically an obvious approach: anyone working in this area will come to exactly the same conclusions. Beer-soaked napkin calculations will lead everyone to exactly the same scanner. From my POV there was little benefit in giving this crap to people who wouldn't make the same calculations and write the same code.
What amused me for a while was how long we could get away with being the top "malicious" source on DShield [hint: this requires a ridiculously high packet rate, but it was less than 50k PPS :]. But if you use zmq to distribute your targets between a bunch of scan agents, you can stay off the top-10 list. You can also build a scanner like this with a lot less SLOC. The correct architecture is "IP distributor", "scan agent", "listener"; their design sort of conflates those roles.
Thanks for the link. A while back, I implemented an n-bit cipher (where 1 <= n <= 64) as an unbalanced Feistel network, using a hash function as the basis of the round functions. I then wrapped it in a cycle-walking function to form a cipher for any domain smaller than 2^64. I did all this so I could generate scrambled 5-char path IDs for short URLs (e.g. test.com/xA7bc) with a domain size of 62^5.
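A toy sketch of the cycle-walking idea, in case it's unfamiliar (the round function, key schedule, and bit sizes here are made up for illustration; this is not the original implementation). A small Feistel network gives a permutation over 16-bit values, and cycle-walking restricts it to an arbitrary smaller domain:

```javascript
// Toy 16-bit balanced Feistel permutation. Each round is invertible
// regardless of the round function, so the whole thing is a bijection
// over [0, 2^16). The mixing function here is ad hoc, for demo only.
function feistel16(x, keys) {
    var l = (x >>> 8) & 0xff;   // left 8-bit half
    var r = x & 0xff;           // right 8-bit half
    for (var i = 0; i < keys.length; i++) {
        // Any deterministic 8-bit mixer works as the round function.
        var f = ((r * 29 + keys[i]) ^ (r >>> 3)) & 0xff;
        var t = l ^ f;
        l = r;
        r = t;
    }
    return (l << 8) | r;
}

// Cycle-walking: re-encrypt until the output lands inside [0, domain).
// Because feistel16 permutes [0, 2^16), this always terminates, and the
// restriction is itself a permutation of [0, domain).
function encryptInDomain(x, domain, keys) {
    do {
        x = feistel16(x, keys);
    } while (x >= domain);
    return x;
}
```

For the short-URL case above, the same walk would run around a cipher sized just above 62^5.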
It's actually more of a hack to deal with malformed GIFs, and it goes directly against the spec. To handle GIFs where every frame has a frame delay of 0, most GIF decoders enforce a minimum frame delay.
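The clamp amounts to something like this (GIF delays are stored in hundredths of a second; the threshold and fallback values below follow a convention several browsers use, but the exact numbers vary by decoder and are an assumption here):

```javascript
// Sketch of the minimum-frame-delay clamp described above.
// Declared delays below the threshold (here, under 2/100 s) are treated
// as a default of 10/100 s, i.e. 100 ms per frame.
function effectiveDelayHundredths(declared) {
    return declared < 2 ? 10 : declared;
}
```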
GIF performance seems improved in Safari, at least in 6.x, though this is just anecdotal. GIFs with 0.02-second frames (the actual ones, not just the "here's what it looks like under browser X" ones) seem to render at the same frame rate for me in Safari and Chrome now.
OTOH, when developing for Apple platforms there's not a huge loss in targeting only iOS 6+ (85%-90% of users), whereas on Android you're pretty much required to target very outdated versions as well. For example, we target Android versions starting with Froyo.
I don't think the lipo tool will ever be regarded as a hack. It's pretty much required when supporting multiple architectures / instruction sets. At some point there will likely be an ARMv8 architecture (if it doesn't exist yet).
Many libraries also use lipo to create convenient static binaries that work both in the simulator and on the device.
One use of UDID is verifying in-app purchase receipts. It's demonstrated in the sample code associated with Apple's article titled "In-App Purchase Receipt Validation on iOS".
Never noticed that, as I don't have any in-app purchases that actually cost me anything. That usage seems to be more because the ID is available than because it is needed, though.
The bit about non-public APIs on that page is interesting, though - if there is a genuinely necessary use case for device IDs, it may not be affected.
The UDID embedded in receipts prevents/deters people from sharing them with others via MITM techniques. Sharing does happen, and Apple's article helps address that issue.
True, but that is only one solution, and a flawed one, since the device ID can legitimately change - no developer needs to know whether I am using the phone I bought today or the one I bought a year ago.
Non-consumable in-app purchases are restorable on any iOS device you sign in to with your iTunes account. When you buy a new iOS device and use StoreKit's restore-transaction feature, Apple generates a new receipt with the UDID of that device. In-app purchases are tied to your iTunes account, whereas embedded UDIDs are tied to the devices you sign in to with your iTunes credentials.
Lowercasing the first character by itself does not address the inverted-case scenario (e.g. "PaSsWoRd" and "pAsSwOrD" evaluate to "paSsWoRd" and "pAsSwOrD" respectively).
One way to resolve that is to create two versions of the password (one with normal casing and one with inverted casing, both with the first character lowercased), sort them, and pick the first result.
["pAsSwOrD", "PaSsWoRd", "PAsSwOrD"].map(function canonicalize(pwd) {
    function invert(str) {
        return (str == str.toUpperCase()
                ? str.toLowerCase()
                : str.toUpperCase());
    }
    var head = pwd.substring(0, 1).toLowerCase();
    var tail = pwd.substring(1);
    var vers = [head + tail,
                head + tail.split("").map(invert).join("")];
    return vers.sort()[0];
});
Which gives the following result:
["pAsSwOrD", "pAsSwOrD", "pAsSwOrD"]
The three different variations get canonicalized into one version. With it, you can just store one hash instead of three.
There's a bug in the first optimization example: class "point"'s "up" and "down" methods still reference the "move" method as "move" instead of "b". It can be fixed by replacing the original statement "p.move(10)" with "move.call(p, 10)" in the "up" method (and similarly for "down").
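A minimal sketch of the pattern and the fix (the class and names below are reconstructed from this comment, not the post's exact code): minifiers can safely rename local variables but not property names, so the optimization aliases the method to a local, and the other methods must call through that alias.

```javascript
// Hypothetical reconstruction. The method body is assigned both to the
// prototype and to a local variable; a minifier can shrink the local
// (e.g. "move" -> "b"), while the ".move" property name stays long.
function Point(x) { this.x = x; }

var move = Point.prototype.move = function (d) {
    this.x += d;
    return this;
};

// Buggy form: calling "this.move(10)" keeps referencing the long,
// unrenamable property name, defeating the optimization.
// Fixed form: call the local alias with an explicit receiver.
Point.prototype.up = function () {
    return move.call(this, 10);
};
Point.prototype.down = function () {
    return move.call(this, -10);
};
```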
Despite the bug, I think it illustrates a good point: you can adopt a coding style that helps you achieve a higher level of minification/obfuscation.
It would be nice if languages like TypeScript could perform this transformation for you.
But that's not what caused the problem mentioned in this post.
(edited for clarification)