Let's wait a few years until all the bells and whistles are bolted on, in an ad hoc, product-specific manner (until someone invents ... a formal query language ;)), in layers and tiers that take a NoSQL data store to a full-blown database that requires management.
(Didn't we go through this with the XML hype? Wasn't it the "simple" golden hammer? Have you checked out the "Simple" landscape of SOAP and W[TF]S-*?)
We're not living in the past; it's more like back to the future. (Just wait for it: "No-NoSQL" ...)
It's another case of "right tool for the job". If you're just looking for a data store, NoSQL may be right up your alley. If you need a query language, you may want SQL.
I mean, the fact is that there are a lot of servers out there even today running SQL databases simply as dumb data stores. It's not the most efficient engineering, but it happens, at least partially because MySQL is available on every shared host and their brothers (wait, what?).
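To make the "dumb data store vs. query language" point concrete, here's a minimal sketch; sqlite3 stands in for MySQL purely so it runs anywhere, and the table and key names are invented for illustration:

    import sqlite3

    # SQL engine used as a dumb key/value store: every access is a
    # get/put by primary key, and the query language goes unused.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE kv (k TEXT PRIMARY KEY, v TEXT)")
    conn.execute("INSERT INTO kv VALUES (?, ?)",
                 ("user:42", '{"name": "Ada", "city": "London"}'))
    blob = conn.execute("SELECT v FROM kv WHERE k = ?",
                        ("user:42",)).fetchone()[0]

    # The same engine answering an ad hoc question, which is what you
    # give up with a pure key/value store (short of writing app code
    # to scan everything yourself).
    conn.execute("CREATE TABLE users (name TEXT, city TEXT)")
    conn.executemany("INSERT INTO users VALUES (?, ?)",
                     [("Ada", "London"), ("Bob", "Paris")])
    londoners = [name for (name,) in
                 conn.execute("SELECT name FROM users WHERE city = ?",
                              ("London",))]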
NoSQL datastores are the new object databases, minus the compile-time shackles. (That alone makes them better than the ODBMS failures of the past: they're not nearly as hard to adapt to new programming languages or frameworks.)
I have believed for a very long time that you cannot be a good OO developer unless you really understand data modelling (which includes relational data modelling). There are often good reasons to break the "rules" of data normalisation (which is pretty much what NoSQL data stores do), but you shouldn't do so unless you know why you're doing it.
IMO, most data is more interrelated than people think it is, and at least having a good model of how the data is related will make your entire design better.
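For the sake of argument, here's roughly what "breaking normalisation" looks like in practice, with plain Python dicts standing in for tables and documents (the schema is made up for illustration):

    # Normalised, relational-style: each fact lives in exactly one place.
    users = {1: {"name": "Ada"}}
    orders = [
        {"id": 100, "user_id": 1, "total": 25.0},
        {"id": 101, "user_id": 1, "total": 40.0},
    ]

    # Denormalised, document-style (the typical NoSQL shape): the orders,
    # and often a copy of the user's name, get embedded in one document.
    # Reads get cheap; updates now have to find and fix every copy, which
    # is exactly the anomaly the normalisation "rules" exist to prevent.
    user_doc = {
        "_id": 1,
        "name": "Ada",
        "orders": [
            {"id": 100, "total": 25.0},
            {"id": 101, "total": 40.0},
        ],
    }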
Talk about misapplication; I know this is off topic, but the overuse of virtual machines everywhere is starting to trip me out. Why in the world would you buy a brand-new machine to run your production web site and then put VMware on it and run a single machine instance?? I could understand it if you bought a big, powerful machine and carved it into multiple smaller machines, each serving a separate and unique purpose; that is a common and good use case. But people aren't just doing that, they're going overboard.
Using virtualization has a lot of benefits even when you have just one virtual instance. In your example, the real overkill is VMware, not virtualization per se!
Instead, kernel-based virtualization such as BSD jails, Linux-VServer, or OpenVZ is the right approach. This type of virtualization comes with essentially no performance penalty! And by using hard links, you can avoid wasting disk space, too (see the sketch below).
The benefits are better hardware abstraction, i.e. easier movement to other machines, improved backup possibilities, etc. In fact, the BSD manual has recommended the use of jails for years, if not decades.
I'd say that this kind of virtualization is still underused, while inappropriate kinds are starting to become overused. Maybe this is due to VMware's "successful" marketing: they promote the right techniques, but the wrong technology.
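To illustrate the hard-link point: two directory entries, one copy of the data on disk. Container hosts apply the same trick to share identical files across many guest filesystems. A minimal, Unix-only Python sketch (file names are invented):

    import os, tempfile

    d = tempfile.mkdtemp()
    src = os.path.join(d, "libexample.so")   # stand-in for a shared library
    with open(src, "wb") as f:
        f.write(b"\x7fELF" + b"\x00" * 1020)

    # Second name for the same inode: no additional data blocks are used.
    dup = os.path.join(d, "guest2-libexample.so")
    os.link(src, dup)

    st = os.stat(src)
    print("link count:", st.st_nlink)                        # 2
    print("same inode:", st.st_ino == os.stat(dup).st_ino)   # True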
It allows for easy live migration for maintenance, and lets us build VMs that are happily 100% oblivious to the underlying hardware. While most of our servers are split into multiple VMs, we deploy all new hardware with VMs even when it's intended for a single function, simply for consistency and ease of maintenance.
Virtualization is good in e.g. cloud scenarios where the usage is spread across different machines throughout the day. Rather than having 10 machines averaging 5% utilization at all times, you can have 1 machine at, say, 50-60% utilization all the time (10 × 5% = 50%, plus some overhead). A win.
Sadly, pointy-haired bosses have started putting developers on VMs, which is exactly the opposite use case: everyone loads their machine at roughly the same time. (E.g. I've heard people claim that in Java a third of a developer's time is spent compiling; if that's true, more than 3 devs generate over a full machine's worth of compile load by themselves.) A fail.
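Back of the envelope, using the same illustrative numbers as above (no actual measurements here):

    # The win: consolidating mostly-idle servers onto one host.
    machines, avg_util = 10, 0.05
    print(f"consolidated load: {machines * avg_util:.0%}")  # 50%, ~60% w/ overhead

    # The fail: co-locating developers who all compile at once.
    compile_share = 1 / 3   # claimed fraction of a dev's day spent compiling
    print(f"devs that saturate one machine: {1 / compile_share:.0f}")  # 3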