At the very least, Kubernetes natively supports exposing Services (whether they target an Ingress controller or anything else) by provisioning AWS ELBs when it is configured to run on AWS.
All you have to do is set the "type: LoadBalancer" field in your Service's manifest.
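A minimal sketch of such a manifest (the name, selector, and ports are placeholders, not from the original comment):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service        # placeholder name
spec:
  type: LoadBalancer      # asks Kubernetes to provision an ELB on AWS
  selector:
    app: my-app           # placeholder label matching your Pods
  ports:
    - port: 80            # load balancer listener port
      targetPort: 8080    # container port behind it
```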
Also, I understand we can configure the maximum object size (with tune.bufsize).
But can we configure the total cache size (i.e. the number of tune.bufsize-KB objects cached)?
A known (and now fixed) kernel issue affects the scheduler and cgroups subsystems, triggering crashes under Kubernetes load (fixed by 754bd598be9bbc9 and 094f469172e00d).
The fix was merged in Linux 4.7 (and backported to -stable, in 4.4.70).
So if you run an older kernel, you may be affected by this.
A few improvements I enjoyed when migrating from Swarm to Kubernetes (an older Swarm version, though):
* Namespaces, to group objects and prevent name clashes. Ideal when many devs run their own copy of the same distributed app from the same manifests, with the same object names.
* ConfigMap, to share common configs and project them as files into containers (keeping them updated inside the container, etc.), and to ship configuration separately from the app.
* Automatic (EBS, GCS, NFS...) volume attachment, e.g. to the node hosting your database, with re-attachment elsewhere if the node fails and the database is re-scheduled on another host. Automated database HA/failover is very nice.
* Ingress manifests, to version the HTTP routing with your app. Though this is still very young and incomplete.
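As a concrete illustration of the ConfigMap point above, a minimal sketch of projecting a config file into a container (all names, keys, and the image are placeholders):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config              # placeholder name
data:
  app.properties: |             # projected as a file in the container
    log.level=info
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: example/app:latest # placeholder image
      volumeMounts:
        - name: config
          mountPath: /etc/app   # app.properties appears here, kept in sync
  volumes:
    - name: config
      configMap:
        name: app-config
```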
But yes, k8s is much more complex than Compose (and Swarm), so if the latter works for you, switching is probably overkill.
No API means no automation: no Chef/Puppet/Ansible/Salt or any other kind of scripting. Everything lives in the head of the person who clicked through the web interface.
The stored data format isn't compatible between major versions.
Because of that, upgrades are either stressful and cumbersome (e.g. slony plus a switchover to a promoted and upgraded slave), or involve significant downtime (e.g. pg_upgrade).
Also, because of that (the WAL format), you can't use native replication between different major versions of PostgreSQL.
It's just a shorthand for '== 0' (since !0 is true, and !<non-zero> is false). strcmp(3) returns 0 when the two strings are identical, so "if (!strcmp(a, b))" means "if the strings a and b are the same".
He understands that, but he does not like to use this shorthand for strcmp, because it conveys a conflicting meaning.
I'm much the same, and I prefer explicitly writing "== 0" in those situations.
I'm not sure I see it as a conflicting meaning versus how C treats boolean expressions. Logically, I want to know whether the strings are not equal, and the ! uses boolean syntax to tell me that, as opposed to comparing to zero, which doesn't convey the meaning of the statement as quickly.
It's an artifact of implementation. In some zany world, strcmp could return EQUAL_ENUM. What you are ideally expressing is that you want to know that both strings are equal as determined by strcmp, not that the strcmp return value is nonzero. Now it just so happens that EQUAL_ENUM=0 in this world, so as a side effect !strcmp(...) will still work. And there are enough examples of this shorthand in the wild that it doesn't seem unnatural to a lot of devs. (Doesn't make it any less boots-on-head strange... when in Rome, do as the Romans do...)
I'm not saying it's not screwed up, and I'm sure some enums would be interesting, but they are using it in a logical context, so I feel like the ! makes a fair amount of sense.
I do wonder what would have happened if truth had been defined like it is in Forth[1]? That would make me go with the enums.
1) False is 0 (all bits set to 0), and True is -1 (all bits set to 1, giving a two's-complement -1).
See https://kubernetes.io/docs/tasks/access-application-cluster/...