Hey, this is very close to something I've been needing recently (and in Go, no less).
Is there any way to get a similar thing for a sliding window of a stream? For example, to be able to report (estimated) 90th percentile latencies for server requests in the last 5 minutes, hour, and day.
For all but the most extreme cases, though, it's sufficient to just keep all the values in memory until they fall out of your window. Even at 1,000 requests/second, a 5-minute window is still only 300,000 values to store.
There may be a way to implement this in perks/quantiles by adding a timestamp as another piece of metadata. There would be a space cost, but it might be acceptable, or it could be made opt-in. Maybe I'll look into this soon.
Edit:
I do agree that if you're working with datasets that fit in memory, you're probably better off keeping all the samples to find your percentile and not using this package. In fact, perks will not compress for datasets under 500 values.
This is probably obvious, but for the first time it occurs to me that motion blur is essentially the same as anti-aliasing. They are eye-tricking hacks to work around a lack of resolution in the medium -- screen resolution, in the case of pixel anti-aliasing, or "time" (framerate) resolution in the case of motion blur.
Recently I've been wondering whether, as very high-resolution displays become commonplace, anti-aliasing will become obsolete. If I could play an FPS video game on a 500dpi monitor, would anti-aliasing make any perceptible difference? At some pixel pitch, even text anti-aliasing won't matter.
The same thing seems to apply here. If we had 5000Hz screens (and could run our animations quickly enough to keep up), would applying artificial motion blur buy you anything?
Those are some extremely insightful and promising thoughts that deserve more research. If you venture further into those explorations, I'd love to read whatever thoughts you have :)
Disabling anti-aliasing is already recommended as a way to increase FPS on the Retina MacBook Pro for precisely that reason. Unfortunately, game detail still looks ugly if you do that. Non-antialiased text, on the other hand, looks quite decent: nice and sharp.
You could also do this without a server at all. You could just have a script put your data into the html/js template and open up the page with something like bcat.
Isn't the point not having to worry about presentation or messing with templates? Perhaps I just lack your command-line fu, but I don't see how to do what you're describing in a single line, or even simply.
That's pretty awesome. I hadn't seen it before. DataFart has the one advantage that you don't need to pipe the data in locally. If the source data is on your local machine, this is a much better solution.
Fortunately, I implemented a cache layer on top of GitHub's API recently, so many examples are working on http://bl.ocks.org. But only those that are lucky enough not to get a connection timeout. Hopefully the API will be back up soon!
I wish the docs included basic examples of how to use the various packages and functions. I end up searching for "golang packageX example" instead, and often I land on StackOverflow where someone is using it wrong and ten others are trying to correct them.
I completely agree, but only after a good read of Effective Go and, honestly, the language spec. The spec is well documented and completely readable; I have no doubt that's a result of the simplicity driven by the simple, fast compiler. Just seeing the examples gets you into the Go state of mind and prepares you for the standard library.
I do think that the more complex packages are aided by examples, and the standard library could take on a few more of them. Fortunately, someone is already doing that at http://gobyexample.com.
I had this problem right when I started using rbenv. Eventually with some help I realized that just starting bash was slightly slow, and that all the various bash scripts that rbenv runs were compounding this for a 2+ second startup.
In my case, everything was fixed by moving the things that only need to run in login shells out of my ~/.bashrc (they should never have been there in the first place).
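For reference, the conventional split looks something like this (the rbenv line is the standard init hook; the rest is illustrative):

```
# ~/.bash_profile -- read by login shells only
export PATH="$HOME/.rbenv/bin:$PATH"
eval "$(rbenv init -)"            # slow-ish, so run it once per login
[ -f ~/.bashrc ] && . ~/.bashrc   # then pull in the interactive config

# ~/.bashrc -- read by every interactive shell, so keep it fast
alias ll='ls -l'
```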
This is a cool idea, and I love the convenience of deploying a single-binary app when using Go for servers.
When deploying web servers, though, I'd prefer to leave the images and other static resources out of my binary, because this means I can use an rsync-based deployment with --copy-dest and --link-dest. --copy-dest means that deployments are blazingly fast (I only have to copy changed files) and --link-dest means that deployments are cheap on space (unchanged files are hardlinked to the copies). Granted, bandwidth and storage are cheap and getting cheaper, but it still adds up, particularly for large server clusters.
I believe this also includes the bitmap-marking GC changes, which will make the GC copy-on-write friendly. This is pretty important for a lot of folks running Ruby web servers.
Readline has vi mode, and it's amazing. I use vi mode in zsh, bash, ruby, python, mysql, etc. and I can't live without it any more.
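For anything linked against the real GNU readline, turning this on everywhere is one line in ~/.inputrc:

```
# ~/.inputrc -- picked up by bash, the mysql client, irb, python, etc.
set editing-mode vi
```

zsh has its own line editor rather than readline, so there it's `bindkey -v` in ~/.zshrc instead.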
I still have fit-of-rage moments when I try to use things that either (a) try to roll their own line editing or (b) use some inferior readline clone.
rlwrap is nice, but typically the shells I'm using have other functionality (tab completion) that I miss too much if I use it. For instance, I've given up on using rlwrap with the Scala REPL for this reason.
I hear that. At some point Ubuntu screwed up and linked the MySQL client with libedit instead of readline, which has caused me much wailing and gnashing of teeth. Don't get me started on all the ways Ubuntu / Debian has fucked up the packaging of MySQL.