valgog's comments

This is a very cool SQL-SELECT-to-bash converter for easily parsing and analysing data from your CSV files :D
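The converter itself isn't shown here, but the basic idea — running a SELECT-style filter over CSV rows — can be sketched in a few lines of Python (the file contents, column names and `select_where` helper below are made up for illustration, not part of the tool):

```python
import csv
import io

# Hypothetical data standing in for a CSV file on disk.
CSV_TEXT = """name,department,salary
alice,eng,120
bob,sales,90
carol,eng,110
"""

def select_where(text, column, predicate):
    """Rough equivalent of: SELECT * FROM file WHERE predicate(column)."""
    reader = csv.DictReader(io.StringIO(text))
    return [row for row in reader if predicate(row[column])]

rows = select_where(CSV_TEXT, "department", lambda v: v == "eng")
print([r["name"] for r in rows])  # → ['alice', 'carol']
```

A real converter would of course generate shell pipelines instead of Python, but the query shape is the same.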


Wow, wonderful work!

As additional information on already existing execution-plan visualisation tools:

Depesz wrote the classic PostgreSQL execution plan visualiser years ago:

http://explain.depesz.com/

Of course it is not as pretty as the one from Tatiyants, but I use it now and then, and it has become the standard explain-visualisation tool for many PostgreSQL users.

The table format from http://explain.depesz.com/ is very useful: one can understand a lot of detail about an execution plan without needing to visualise it as a graph.
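For readers who haven't used it: you paste in the text output of `EXPLAIN ANALYZE`, and the site builds a per-node table of timings and row counts. A rough sketch of the kind of extraction involved, run against a made-up plan fragment (the regex and `summarise` helper are mine for illustration, not depesz's code):

```python
import re

# A made-up fragment of EXPLAIN ANALYZE output, just for illustration.
PLAN = """\
Hash Join  (cost=1.09..30.79 rows=4 width=8) (actual time=0.050..0.480 rows=4 loops=1)
  ->  Seq Scan on orders  (cost=0.00..22.70 rows=1270 width=8) (actual time=0.010..0.200 rows=1270 loops=1)
  ->  Hash  (cost=1.04..1.04 rows=4 width=4) (actual time=0.020..0.020 rows=4 loops=1)
"""

NODE_RE = re.compile(
    r"^\s*(?:->\s*)?(?P<node>[A-Z][\w ]+?)\s+\(cost=.*?\)\s+"
    r"\(actual time=[\d.]+\.\.(?P<total>[\d.]+) rows=(?P<rows>\d+) loops=(?P<loops>\d+)\)"
)

def summarise(plan_text):
    """Return (node, total_ms, rows) per plan node, similar in spirit to
    the per-node table that explain.depesz.com builds."""
    out = []
    for line in plan_text.splitlines():
        m = NODE_RE.match(line)
        if m:
            out.append((m.group("node").strip(),
                        float(m.group("total")),
                        int(m.group("rows"))))
    return out

for node, ms, rows in summarise(PLAN):
    print(f"{node:<20} {ms:>8.3f} ms  {rows} rows")
```

The real site also computes exclusive (per-node) time and flags row-estimate mismatches, which is where most of its diagnostic value lies.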

Also, the default execution plan visualiser in pgAdmin looks really cool.


Thank you. Pev was definitely influenced by explain.depesz; it's a great tool.


This library makes it really easy to write REST services in Python. It saved me a lot of time when prototyping simple (and not-so-simple) services.


This library makes it really easy to bootstrap a simple RESTful service with practically no effort. Really nice for prototyping your services.


Actually, stolon is more comparable with Patroni, the core component used by Spilo.

One could call Patroni the HA machine, and Spilo an AWS-based infrastructure that uses it. So you can use Patroni to manage Postgres in your own data center.


Instead of ending the occupation of three neighbouring nations' territories and stopping support for the war machine, they are "freeing" the software.

Of course it is a window of opportunity for software integrators, but it is blood money they are getting.

(I did not want to be political, but it is a political news article.)


Yes, the 100 data model changes are schema changes (each can involve one or more table structure changes).


There are definitely many more interesting videos from PGConf US 2015, especially the one from Robert. My talk was a 'keynote' and not really a conference talk :)


Actually, if you really need to update large JSON documents efficiently, probably the only efficient technology is ToroDB: http://www.8kdata.com/torodb/


I would love to hear more - this is the first I'm hearing of Toro - it never comes up in NoSQL talks or conversations. Any decent success stories?


Actually, without benchmark results, the claim that PostgreSQL has performance problems because it copies the whole JSONB document on update is groundless.

