I don't use WiFi as a matter of practice, but I'm curious: what if you could keep all the "whitelisted" MACs continually logged in to your network, or at least keep track of when they log out? The idea being that MAC spoofing is not possible if the particular MAC the attacker wants to spoof is currently logged in. This is generally true with Ethernet, correct? Is it true with WiFi as well? (Assume the traffic is encrypted.)
And in fact, it seems this guy's hack relies on someone "rejoining" the network, triggered by a deauth frame. Without that "rejoining" step, I don't think he could get very far. If his target is continually connected, and there's no way to force a "rejoin", and all the traffic is encrypted, then what can he do? The real problem, it seems to me, is that someone can send a "deauth" frame and have it be accepted, and that the Apple Mac gives no warning that the connection underwent a "rejoin".
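For reference, a deauth frame is easy to recognise at the byte level: in 802.11, the first byte of the Frame Control field encodes the protocol version (bits 0-1), frame type (bits 2-3) and subtype (bits 4-7), and a deauthentication frame is a management frame (type 0) with subtype 12. A minimal sketch in Python (the helper name is mine, just for illustration):

```python
def is_deauth(frame: bytes) -> bool:
    """Return True if a raw 802.11 frame is a deauthentication frame.

    Byte 0 of the Frame Control field encodes:
      bits 0-1: protocol version, bits 2-3: type, bits 4-7: subtype.
    Deauth = management frame (type 0) with subtype 12.
    """
    if len(frame) < 2:  # need at least the 2-byte Frame Control field
        return False
    fc = frame[0]
    ftype = (fc >> 2) & 0b11
    subtype = (fc >> 4) & 0b1111
    return ftype == 0 and subtype == 12

# A deauth frame starts with frame control byte 0xC0 (subtype 12, type 0);
# a beacon (subtype 8) does not match.
print(is_deauth(bytes([0xC0, 0x00]) + bytes(22)))  # True
print(is_deauth(bytes([0x80, 0x00]) + bytes(22)))  # False (beacon)
```

In principle an AP or client could log these and flag a "rejoin" that follows one, which is exactly the warning the Mac isn't giving.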
Indeed, execve in exec_command (job.c). That does not make it slow, since it's the very last part of the game, and one you could not really do without.
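For anyone curious what that last step looks like, the pattern in make's job.c is the classic fork-then-exec: the child replaces its own process image with the command, and the parent waits for it. A rough sketch of the same pattern using Python's os wrappers (the function name is mine, not make's actual code):

```python
import os

def exec_command(argv):
    """Fork and exec a command, returning its exit status.

    Mirrors the fork+execve pattern in make's job.c: the child
    replaces itself with the new program; the parent waits.
    """
    pid = os.fork()
    if pid == 0:
        try:
            os.execvp(argv[0], argv)  # replaces the child process image
        except OSError:
            os._exit(127)  # shell convention for "command not found"
    _, status = os.waitpid(pid, 0)
    return os.waitstatus_to_exitcode(status)

print(exec_command(["true"]))   # 0
print(exec_command(["false"]))  # 1
```

The exec itself costs almost nothing; the work is whatever the new program does afterward, which is the point: it's the last step, not a bottleneck.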
What would you have done differently? (I'm serious. I'd like to know.)
One of the tough things for anyone aiming to replicate Facebook is that Facebook used some devious methods to get up and running. Zuckerberg misappropriated hundreds of photos of his classmates and their personal information, and then sent them provocative emails that would cause most students to check what had been posted about them, or about others, i.e., to visit the site.
It's like the story of the YouTube guys posting some of their own videos to get things started. Then they eventually had to upload some copyrighted content. They took a risk.
Then there's the story of Bittorrent. I believe Bram Cohen initially seeded some porn to get things kicked off.
Or the guy from ThatHigh who recently told of how he had to create fake profiles.
It seems that it is quite difficult for user-contribution and sharing solutions to start from zero. Alas, you need to have content on offer from day one. And it needs to be compelling content, in terms of quality, quantity or both.
Zuckerberg broke the rules. He stole students' personal profiles from the university's network. And he got away with it. Luck was in his favor and he knows it. Others who would try this now might not be so lucky.
Diaspora relies on people to submit their own content, but it had no compelling content to begin with. Not only did they start with no content that would draw people in, but if I'm not mistaken they expect people to run their own web servers. This is not impossible to imagine, but why web servers? I guess because they want to replicate Facebook.
Solution: Don't replicate Facebook. Build something a little different. Stop thinking only in terms of web servers and web clients. Think peer-to-peer. Think in terms of application-agnostic _connections_, not applications. Do this and you instantly have something that is 100x more useful than Facebook. Because it does not have to operate within the constraints of web servers and web browsers.
But there's still that problem of compelling content...
What's really needed, in my opinion, is a search engine to track shell companies. The lawyer interviewed is shooting from the hip when he says it is "a shell game".
Without the use of shell corporations the patent trolling game becomes much more difficult. In programmer lingo, it makes it "non-trivial". Using a system of shell corporations and hiding behind them makes patent trolling much more feasible as a pure play and makes it possible to do at scale, as Intellectual Ventures does.
If we pick up the shell and reveal the cretins hiding underneath, it would have a real effect on patent trolling. I can say this with 100% certainty.
Maybe even a more profound effect than making prior art easier to locate.
No domain name! I love it! How often do we see this on HN? Fantastic.
As for the idea in the title, isn't this more or less what Viaweb did?
Create a DSL for generating HTML elements and structure that can itself be embedded in an HTML page. Then let users POST commands along with their data to your custom interpreter (which you built from Lisp/Scheme).
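A toy sketch of the DSL half of that idea, in Python rather than Lisp/Scheme (the nested-list node format is my own invention, chosen because it mimics s-expressions):

```python
from html import escape

def render(node):
    """Render a nested-list DSL node into an HTML string.

    A node is either a string (escaped text) or a list:
    [tag, {attrs}, child, child, ...] — the attrs dict is optional.
    """
    if isinstance(node, str):
        return escape(node)
    tag, rest = node[0], node[1:]
    attrs = {}
    if rest and isinstance(rest[0], dict):
        attrs, rest = rest[0], rest[1:]
    attr_s = "".join(
        f' {k}="{escape(str(v), quote=True)}"' for k, v in attrs.items()
    )
    children = "".join(render(child) for child in rest)
    return f"<{tag}{attr_s}>{children}</{tag}>"

html = render(["p", {"class": "intro"}, "Hello, ", ["b", "world"]])
print(html)  # <p class="intro">Hello, <b>world</b></p>
```

The interpreter half would then be a request handler that evaluates POSTed nodes like these against the user's data, which is roughly what Viaweb's page editor did server-side.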
Do you mean that community projects are by nature "dead" projects?
What does "dead" mean, exactly?
My OS is a community project. OSX/iOS are built from community projects. Lots of scripting languages are community projects. Mozilla is a community project. Wikipedia is a community project. I could go on.
I realise I may think a bit differently than many programmers, but I care less about how much a particular chunk of code is actively changing than about whether it works really well over the long term (simple, stable, reliable, secure). I like "timeless" software that quietly continues to work for many years, remaining relatively unchanged. In my experience, well-engineered software like that is often immune to so-called "bit rot", because it was designed correctly, with minimised complexity and maximised portability as top priorities, from the beginning.
From a design and implementation perspective, there are no real impediments to a decentralised social network that cannot be overcome. However, first you have to decide what you mean by "social network". Does it have to be a clone of FB or G+, save for the centralisation element? Or does your definition allow some changes to their approach? For example, what if the network was private? What if there were no ads? What if it was made up of lots of smaller networks of maybe 100-200 people (like your "friends" on FB) instead of being one massive, public image gallery/chatbox? What if it didn't require the web, as FB does? What if it was application-agnostic?
What do you demand from a "social network"?
Does it have to be a FB/G+ clone?
Anything is possible, so to speak. But not everything is necessarily ready to be received based solely on technical merit. How much marketing and PR is needed?
If it were dead it would mean few people are using the software, and especially that few new people were adopting it. It would also mean few or no people are actively developing or maintaining the software. A project that has few users but a lot of development activity is definitely still alive because it is always possible for new features to eventually turn into adoption (Firefox).
Most of that stuff is pretty small, but I think this announcement could actually be a step forward. Since February, it seems to me, there has been a shadow hanging over this project, with a promised overhaul of the federation code. I do not know if that will happen now, but there is less risk that it will happen outside the view of those who want to participate in setting the direction. Most people, of course, will just write articles and comments and mailing list posts and never submit any code (like me!). They may continue to complain that they have a limited voice in the direction of the project.
The project still has to have active committers who are a subset of the interested "stakeholders". It doesn't matter whether they get a little paycheck from D* Inc or not; it's still only going to include some people, namely those who have a record of submitting acceptable code. This is true of every single open source project I know of.
Do you mean that community projects are by nature "dead" projects?
No, but they're not necessarily live projects, either. The key question is how much commit activity is coming from the people who are walking away from the project. If they were doing all the committing, and they're leaving, it's dead. If on the other hand there's an active community of committers outside the original developers, it's alive.
So there is an assumption that number of commits means something? I'm just not quite sure what that something is.
What if the software as released is "rock solid"? That is, it's so simple, effective and reliable that it doesn't need to be changed, except for bug fixes?
What if the software is merely a "platform"? (And not only in the marketing sense of that word.) That is, the platform only "does one thing and does it well", and does not generally need to be "actively" developed (no commits except bug fixes), but... of course people can easily build things on top of it. For example, Ruby or Python programmers can do whatever they want. Total freedom. We give them the ability to create a connection to a social network of their choice, and they can send/receive over it to/from other members as they wish. We do not impose rules on that or try to manage it in any way. We only provide the platform. The platform is application-agnostic.
The "platform" basically stays the same. It does what it's supposed to do, create networks, and that's all. If we measure by number of commits, one could say the development of the "platform" is "dead".
tl;dr what if someone releases a _platform_ that developers can build on, but number of commits to the _platform_ remains near zero? Because (apart from any bugs found) "it just works."
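To make "application-agnostic" concrete, here's a toy sketch in Python. The platform's entire job is to hand two applications a byte pipe; what flows over it (chat, photos, anything) is none of the platform's business. (socketpair stands in for whatever real peer-to-peer transport such a platform would actually provide; this is an illustration of the idea, not a design.)

```python
import socket
import threading

def make_connection():
    """The 'platform': create a connection and hand both ends to
    applications. It moves bytes and imposes nothing else."""
    return socket.socketpair()

def application_a(conn):
    # One application decides what the bytes mean...
    conn.sendall(b"hello over the platform")
    conn.close()

def application_b(conn):
    # ...and the peer application decides how to consume them.
    chunks = []
    while data := conn.recv(4096):
        chunks.append(data)
    conn.close()
    return b"".join(chunks)

end_a, end_b = make_connection()
t = threading.Thread(target=application_a, args=(end_a,))
t.start()
received = application_b(end_b)
t.join()
print(received)  # b'hello over the platform'
```

If the connection layer is that dumb, then "no commits to the platform" just means the pipe still works, not that the ecosystem on top of it is dead.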
To my knowledge, Diaspora is closely intertwined with Ruby and web development. This makes it difficult to separate the "platform" from lots and lots of Ruby (or other scripting language) programming, mainly aimed at webpages, and from people changing UI stuff to their liking. And personal preferences can vary greatly. (And there's more to the internet than just webpages. FB has to be webpages because it relies on the web, specifically one person's website: Zuckerberg's. Another social network (or network of networks) might not be so limited.) Does the dynamic, highly personalised aspect of viewing webpages have to be part of the _platform_? Can we separate the personalisation from the basic functional element of the platform (spawning decentralised networks)?
A project that isn't being actively developed is dead. The idea of a "finished" program that does everything it needs to, one "so simple, effective and reliable that it doesn't need to be changed, except for bug fixes" is an attractive one but it's a myth; there has never been such a program, and I doubt there ever will be.
In fact, most of my kernel is "dead". There is code in there that hasn't been changed in over 30 years!
I'm even communicating over a "dead" protocol. When were the last changes to TCP?
I'd even guess you are using some "dead" software yourself. Low level stuff that no one has the desire nor energy to modify.
(To be clear, I am not suggesting that we should not try to improve programs, continually. I'm only pointing out that perhaps sometimes code works for what it's supposed to do, no one has come forward with something "better" and hence the code does not need to be fiddled with endlessly in the absence of serious bugs.)
Then I hope you have a plan in place for when, not if, they break.
>In fact, most of my kernel is "dead". There is code in there that hasn't been changed in over 30 years!
If the kernel has people who take responsibility for it, and make changes to it, then it's not dead.
>I'm even communicating over a "dead" protocol. When were the last changes to TCP?
The fast open draft was published in July.
>(To be clear, I am not suggesting that we should not try to improve programs, continually. I'm only pointing out that perhaps sometimes code works for what it's supposed to do, no one has come forward with something "better" and hence the code does not need to be fiddled with endlessly in the absence of serious bugs.)
Sure, but I really don't think that's true. Possibly because the lower-level layers are still evolving: code written in low-level languages more than about ten years ago (before the AMD64 architecture existed) probably won't work correctly on a modern system, and most high-level languages have had incompatible changes over the same period. (I know Java is supposed to be an exception to this; allegedly you can still run the original Java demos from 1994 on a current JVM.) The fact is, I've tried and failed to run several programs from more than five years ago, and I've yet to find one that still works without having been maintained.
Still waiting for Ethernet to "break". IP as well. UDP too. And netcat. It's been like 20 years. I'm still waiting.
I also wasn't aware that RFC drafts were the same as "commits".
Originally we were talking about "number of commits". Low number of commits means "dead", or so they say. Are you in agreement with that or not? If so, what does "dead" mean?
Now you are saying if software is maintained (fixing bugs) it's not dead. Who said it was? I certainly didn't. I even went so far as to clarify that.
Let's assume some software is maintained. There's someone to take responsibility, as you have suggested. But there are no commits, except to fix bugs.
If there are no bugs to fix (maybe one every 15 years), then there are no commits. But if _number of commits_ tells you whether a project is "live" or "dead", then how can you call this a "live" project if it has almost no commit activity?
My original comment was about the idea of "number of commits"-->"dead" as carrying some deeper meaning, e.g. about the quality of the software.
I like software that works and keeps on working. I really do not care that much if people are committing to it or not. In fact, I'd prefer they didn't because in many cases they only succeed in breaking it or in creating new weaknesses or insecurities.
The original netcat just keeps working. Last "commit" was in the 1990's.
Can I do this with a hand-written assembly program, i.e. not necessarily one that has been compiled with as and subject to GNU default optimisations or "constraints"?
The commands you want are "stepi" (single-step one instruction), "disass" (disassemble at the current point in the program), and "info registers" (show you what's in all of the registers). These work equally well for hand-written assembly and for any arbitrary compiled program.
It's been many years since I've done hand-written assembly, so I can't say for sure, but gdb should be able to do instruction stepping on arbitrary programs.
You can use "layout asm" to see the disassembly as you step through your program. "layout reg" will then split the view and show you the registers at the same time.
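Putting those commands together, a session for a hand-assembled program might look something like this (the file names are illustrative; exact output varies by platform):

```
$ as -g -o prog.o prog.s && ld -o prog prog.o
$ gdb ./prog
(gdb) break _start
(gdb) run
(gdb) layout asm          # disassembly view follows the program counter
(gdb) layout regs         # split the view: add a register pane
(gdb) stepi               # execute exactly one instruction
(gdb) info registers      # dump all registers at this point
```

Since there's no compiler involved, there are no GNU default optimisations to worry about; gdb is stepping the instructions exactly as you wrote them.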