This speaks to me, as I coincidentally decided to start automating my provisioning to standardize on the same Vagrant environment.
I'm not a sysadmin or servers guy, but these days I happen to manage most of the devops and so getting servers and machines set up is my responsibility.
Ansible docs are not great for getting started, and I pretty much just learned by skimming through some playbooks and figuring out their — often intuitive — purpose.
After a few hours I got close to having my machines set up with most of my Django stack: Nginx, Gunicorn, Redis, Supervisor, Celery, etc. The only thing that I couldn't properly set up was PostgreSQL. And here is something that most of these automation tools lack: debugging. When my postgres role was failing, I had no clue how to even start debugging it, so after an hour or so I just stepped back and set it up manually.
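For anyone hitting the same wall: running with increased verbosity (`ansible-playbook -vvv site.yml`) at least shows the raw module output when a task fails. And here's a rough sketch of the kind of role task list I was trying to write — the database name, user, and variable names are placeholders, not my actual config:

```yaml
# roles/postgresql/tasks/main.yml — sketch only, names are placeholders
- name: install PostgreSQL
  apt:
    name: postgresql
    state: present

- name: create application database
  postgresql_db:
    name: myapp_db
  become: yes
  become_user: postgres

- name: create application user
  postgresql_user:
    db: myapp_db
    name: myapp_user
    password: "{{ vault_db_password }}"
  become: yes
  become_user: postgres
```

The `become_user: postgres` part is what tripped me up for a while — the postgres modules need to run as the postgres system user unless you've set up password auth.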
I feel some of these tools need to be friendlier to devops people who don't have quite the same knowledge and experience as their target audience of sysadmins and ops people. That said, I learned a ton starting to use Ansible and automated a big chunk of the process.
Robert Cialdini makes the same argument in his book Influence. Not only does a tragedy prompt others to the same action, but he cites studies where one type of tragedy prompted seemingly unrelated events.
For example, in the two days after a suicide appears on the news, it is significantly more likely (I don't remember the exact numbers) for there to be a plane crash. He attributes this to pilots who have had suicidal thoughts being triggered by the news of the suicide.
Another interesting study looked at how mortality rates from car accidents are higher after suicide-related news. It found a surprising number of car deaths in which the driver was stepping on the gas pedal instead of the brake, which might be an indicator of suicide.
The same study also looked at whether these crashes killed only the driver or other people too, and found a clear pattern in the additional deaths: murder-suicides lead to more "accidents" like head-on crashes, while "clean" suicides with nobody else involved tend to lead to an increase in "accidents" that don't risk other people's lives.
"Influence" is a fantastic book, though it's downright chilling in how it tears apart the illusion of how much control we have over our own actions.
I've been wanting to build a quick side project for a couple weeks, so decided to give it a try today and put it up in App Engine instead of heroku.
Put simply, App Engine has a steeper learning curve. I remember using heroku for the first time a couple years ago, and it was smooth and seamless. Can't say the same about App Engine. Installation isn't a breeze, the docs are scattered around, and even a simple Flask app isn't straightforward.
I understand they might offer more features and better prices, so side projects are unlikely to be their target audience, but nevertheless, if they want developers' love, it should "just work".
Long-time App Engine developer here. I'm glad that GCE is here now, because a lot of the criticism of GAE came down to people expecting infrastructure (and infrastructure prices) instead of a platform. A massively and seamlessly scalable platform.
Which brings me to my point: yes, GAE does have a steeper learning curve, but almost all of that is due to scalability constraints. If you think you will ever need to scale past one server, it's well worth the initial pain.
Or put another way, you trade a small development learning curve at the start against a pretty scary sysadmin learning curve when you need to scale your app.
It's built from scratch. I'll be open sourcing it all very soon.
I'm using Meteor for the front and back end, and the UI was just recreated with CSS. PhoneGap provides the shell and basically hijacks the DOM from the Meteor server and re-fires events. I'm going to implement FastRender tomorrow so it'll render all the HTML and JSON with one request.
It's definitely easy to see his book as heavily influenced by Taleb, but I suppose everything is properly referenced back to Taleb — I never actually went to check the references. Regardless, the book is a great summary of thinking biases, clearly explained.
Here's where setting a work-in-progress (WIP) limit is really useful. Don't allow yourself to have more than, say, 3 tasks on your plate. It can be painful at first — you can be blocked on every task and unable to do any work at all — but it helps so much with getting shit done.
Just got my first Mac and I've been looking for this type of article — general guides for power users switching to OS X from Windows. While not a definitive guide, this one has some good stuff. I was surprised by how few articles there are on this topic.
I think it's absolutely great and how software development should be done.
They should be teaching it to everyone and it should be the default - you start with app engine on any project and only choose something else if it can't be used.
Curious about your argument here. Care to explain?
Let's say you're a college student or career changer who just learned python. You can write a script, upload it, and have the world use it for free.
You then learn sql and build complex apps.
Learn about bigtable and build a viral app that scales to anything.
Learn golang and make your python code more efficient.
And you can do this all free to start off with.
The alternative - learn linux, web servers, db admin, security. Or php/mysql and pay shady hosters $20 a month and buy domain name.
If you're experienced and know all about servers etc, you can now ditch all that knowledge and outsource it to someone else and focus on higher value things.
It also encourages you to write scalable applications - no in-request long operations, queues, eventual consistency, no filesystem access dependencies, planning for failure on every level etc. Using App Engine is a course on best practices in web development.
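To make the "no in-request long operations" point concrete, here's a minimal sketch of the pattern in plain Python — the request handler only enqueues work and returns immediately, while a background worker drains the queue. This uses the stdlib `queue`/`threading` modules for illustration, not GAE's actual task queue API:

```python
import queue
import threading

work_queue = queue.Queue()

def handle_request(payload):
    """Fast path: enqueue the work and return immediately."""
    work_queue.put(payload)
    return {"status": "accepted"}

def worker():
    """Slow path: process items off-request, in the background."""
    while True:
        item = work_queue.get()
        if item is None:  # shutdown sentinel
            break
        # ... do the expensive work here (send email, resize image, etc.) ...
        work_queue.task_done()

t = threading.Thread(target=worker, daemon=True)
t.start()

handle_request({"user": "alice"})  # returns instantly; work happens later
work_queue.put(None)               # tell the worker to shut down
```

On App Engine the same shape falls out naturally because the platform enforces a request deadline, so anything slow has to go through a task queue anyway.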
We've been following Moore's Law for decades now. What's really stopping us from a real mobile and internet-of-things revolution is a breakthrough in charge storage devices. I wonder how long until one of the many "new discovery on X material to create new batteries" becomes practically feasible and marketable.
Moore's law actually helps battery powered devices too. As the switching elements get smaller they consume less power. The problem really is that we've at the same time increased our demands on the devices to the point where the gains were undone.
5 years ago cell phones had a longer battery life than smart phones do today.
Smart phones have gotten pretty good at scaling power usage. When you're not actively using them they can hold their charge for a long time (even while still listening for incoming calls). The only time you run into short battery life is when you're actually using them for intensive stuff. In the rare case where some background app is eating your battery, you can even feel your phone get hot and know you should reboot it (or close the offending app).
If you read any green tech blogs, there seems to be a constant stream of battery breakthroughs in the lab. But why do they not materialize in commercial batteries? Li-ion has been stuck at about the same level for more than ten years.
I was trying to find some meta research on what goes wrong with the commercialization of these technologies. There are probably hundreds of breakthroughs per year, or at least that's the impression you get. You'd think at least some of them could be scaled up in a relatively straightforward way. Or maybe it's just press release hype and the research hasn't been going anywhere for fifteen years...
I've always wanted to do the exact same thing. I always thought it would make an interesting blog: someone who goes through the "breakthroughs" of five years ago, actually contacts the researchers, and writes up what went wrong.
It would require a lot more effort, as there are no glamorous press releases to just copy-paste — you'd have to contact people who might not be so willing to advertise their failures, and there might be rules preventing them from saying much.
But oh how it would be interesting! Just aggregate some blogs with a five year delay so there should have been ample time to have developed something.
Or just even write a few longer investigative pieces on why technology X failed to materialize despite immense promises.