Hacker News | vazamb's comments

We are in a similar situation, both working from home. It very much depends on how lazy we are during the day. If the muesli bowls and lunch plates haven't been handwashed by the time we start cooking dinner, it's a "dishwasher day". We have a half-size one, which means one day's worth of dishes fills it up at least 3/4.

I suspect it might be more water- and energy-efficient to do this instead of repeatedly heating up water, which takes 30 s or so each time and wastes a lot of cold water, plus the hot water that gets left to cool down in the pipes.


I am not a designer/UX person but still thought I would give it a go. I couldn't get through more than 30% of it because of how boring and repetitive it was.


Which might also solve the mystery of why it's only available in the Netherlands: no need to build a different keyboard.


What are the advantages of this over CDK? It's trivially easy to spin up a load-balanced service: https://docs.aws.amazon.com/cdk/api/latest/docs/@aws-cdk_aws...
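For reference, a minimal sketch of that pattern using the CDK Python bindings (construct names as in the v1-era `aws_ecs_patterns` module; the stack, service, and image names here are illustrative, not from the thread):

```python
# Sketch: a Fargate service behind an ALB with one high-level CDK construct.
from aws_cdk import core
from aws_cdk import aws_ecs as ecs
from aws_cdk import aws_ecs_patterns as ecs_patterns

app = core.App()
stack = core.Stack(app, "MyFargateStack")

cluster = ecs.Cluster(stack, "Cluster")

# This single construct provisions the Fargate service, the load balancer,
# target groups, and the wiring between them.
ecs_patterns.ApplicationLoadBalancedFargateService(
    stack, "MyService",
    cluster=cluster,
    task_image_options=ecs_patterns.ApplicationLoadBalancedTaskImageOptions(
        image=ecs.ContainerImage.from_registry("amazon/amazon-ecs-sample"),
    ),
)

app.synth()
```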


Hiya! My team is also the team that writes and maintains the CDK construct you linked to.

I think it's a different approach - that's all. Copilot is more focused on the full developer experience, including setting up CI/CD, deployment environments, and operational tooling like logs, stats, etc.


CDK is terrible on so many levels; it's possible that this is less so.


Expanding on CDK's failures, by popular demand:

- It introduces an edit-test-debug cycle from hell; it doesn't behave the way the mental model of a user-friendly CloudFormation template generator library would suggest (it doesn't statically ensure that what it emits actually works, even superficially).

- It builds upon an already difficult and leaky abstraction (CloudFormation), so you just get additional failure modes and cognitive burden.

- Glacially slow iteration: tearing down and setting up your app can easily take 10-30 minutes due to CloudFormation.

- It fails to deliver the upsides of the costly tradeoff of jumping into Turing-complete programming languages, such as third-party CDK components or iterating on expressions at a REPL.

- Don't even get me started on the npm dependency hell (the day after installing a new submodule of CDK deps you'll randomly be debugging mystic npm warnings as version mismatches appear), the other-languages-to-npm bridges, the "you can't use this without an IntelliSense-style IDE" API design, the buggy CLI tools, etc.

To summarize: it doesn't manage complexity or give you a good abstraction, probably because it tries to take on all of AWS functionality at once.

(Also, the Fargate example the GP linked is far from the functionality of the Copilot example, which enables you to actually develop and deploy the app: ECR, CodeBuild, CodePipeline, etc.)


Petrol stations in Germany make almost no money on petrol. I think it was something along the lines of 2-3 cents/liter. I am quite sure that they will be happy to have customers stay in their shop for longer.


That's still quite a decent margin. It takes about 5 minutes for me to fill my 40 L tank, which would be about 1€. Everything is automated, there is absolutely no human involved, and it works around the clock.
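Back-of-the-envelope, taking 2.5 ct/L as the midpoint of the parent's 2-3 ct figure (these numbers come from the comments, not measured data):

```python
# Rough per-pump margin math, assuming ~2.5 ct/L and a 40 L / 5 min fill.
tank_litres = 40
margin_eur_per_litre = 0.025
fill_minutes = 5

margin_per_fill = tank_litres * margin_eur_per_litre  # -> 1.0 EUR per fill
fills_per_hour = 60 / fill_minutes                    # -> 12 fills/hour
print(margin_per_fill, margin_per_fill * fills_per_hour)  # 1.0 12.0
```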


Construction costs for petrol stations are extreme. They are highly regulated due to the fire risk and other issues - you can't even drain surface water into the gutter without prior filtering.

https://www.abwasser-analysezentrum.de/branchenuebersicht/mi...


Definitely. Especially in the current climate.


Good for them. Even more ugly coffee from shabby machines at inflated prices :)


Is ugly coffee the kind which isn’t topped with latte art? ;-)


It's spending about 4 to 5 EUR for about 350 to 400 ml that doesn't taste any good in any variation I've tried (because captive audience at the moment), and afterwards angrily wondering why I did it AGAIN.


Why is this being downvoted?

You need to use a process pool in Python; thread pools are for I/O-bound tasks and, because of the GIL, cannot use more than one core.
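A minimal sketch of the point (the workload and sizes are invented for illustration):

```python
from concurrent.futures import ProcessPoolExecutor

def cpu_bound(n):
    # Pure-Python CPU work; with a thread pool the GIL would serialize this.
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    # A process pool runs workers in separate interpreters, each with its own
    # GIL, so CPU-bound work can actually use multiple cores.
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(cpu_bound, [100_000] * 4))
    print(results[0])
```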


I didn't downvote him, but his response isn't at all relevant to what was being discussed. I was discussing the API design of asyncio and how they had to add the "loop" parameter to ensure the reactor would be thread-safe by default.
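To sketch the thread-safety concern with today's asyncio API (a hedged illustration; `run_on_loop` is a made-up helper, not part of asyncio): a loop owned by another thread must be reached through the `*_threadsafe` entry points.

```python
import asyncio
import threading

def run_on_loop():
    # A loop running in a background thread, as a reactor-style setup.
    loop = asyncio.new_event_loop()
    t = threading.Thread(target=loop.run_forever, daemon=True)
    t.start()
    try:
        # From a foreign thread you must use run_coroutine_threadsafe();
        # calling loop.create_task() directly here would not be safe.
        fut = asyncio.run_coroutine_threadsafe(
            asyncio.sleep(0, result=42), loop
        )
        return fut.result(timeout=5)
    finally:
        loop.call_soon_threadsafe(loop.stop)
        t.join()
        loop.close()

if __name__ == "__main__":
    print(run_on_loop())
```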


I would love to know what the problem is. We do dozens of deployments every week with an ALB + ECS + Fargate setup. We upload a new container image, create a new task definition, and launch as many tasks as desired (so if we want 2 containers running we launch 2, for a total of 4). The ALB calls the /health endpoints on the new containers, and if they pass the health checks it drains connections to the old containers and stops their tasks. This has worked seamlessly for a long time now, without any downtime during deployments.

EDIT: I should mention that we are using AWS CDK for all of this. All it does is register a new task definition as the default for the service, and ECS/ALB does the rest.


I would like to know the same; we have moved almost everything to Fargate and ECS and have had zero issues.


I find ECS, particularly ECS on EC2, to be really painful for small deployments.

I just want a cluster that scales in and out to the amount of memory my tasks need; I'm not very concerned about CPU. Until capacity providers, it was more or less impossible to do that without having a bunch of excess capacity provisioned. Even with capacity providers, I can't seem to get a cluster to scale in to zero instances when no tasks are needed.

I have one ECS service that requires an EBS volume mount, which means that when the task definition is updated, I need the service to stop, so that the new task can mount the same volume. This deployment model is essentially impossible without implementing a custom deployment strategy.

Fargate makes things a bit easier, and now that you can use EFS volumes with it it might make more sense. But overall, everything in ECS just seems poorly designed, clunky and hacky, like many AWS products.


You can’t do EFS on Fargate in CloudFormation yet though.


True. But you can do it with a custom resource.

https://gist.github.com/guillaumesmo/4782e26500a3ac768888daa...


You also can't use capacity providers with CloudFormation yet - likely because you also can't delete capacity providers or change a cluster's default capacity provider.


The default model works for most use cases, but it's not really flexible for non-standard cases and it fails completely at larger scale. If you have a few dozen containers taking 100,000 requests per second and you want to slowly warm them with a segment of traffic, for example, it's much easier to do on Kubernetes than on ECS. It's possible on ECS too, it's just not as transparent or easy to work with.


Same here -- I perform the exact same kind of deployment you mentioned using CloudFormation. My only gripe is the poor rollback detection/control.


That's not a blue-green deployment...


I don't think they are saying it was, unless I'm misreading. They're talking about standard ECS deploys. To add my anecdata: I do lots of ECS deploys via Terraform into production and it works pretty seamlessly.


I don't have a problem with deploys via CloudFormation. But it only lets you configure a minimum healthy percentage, which is good enough if you only need to validate that an instance is in an acceptable state via a health check.


PGMs also provide the intuition behind GANs and variational autoencoders.


I think the better scalability does not come from lambda itself but because you have to design for share-nothing concurrent executions from the start.


I think it already is, through libraries. At least for Lambda in Python there are libraries (Zappa, for example) that let you write basic Flask and Django apps and then deploy them to Lambda. These apps can still be deployed the "old" way with no or only minor modifications.
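For example, a minimal `zappa_settings.json` along these lines (the region, bucket, and module path are placeholders, not from the thread) is roughly all Zappa needs to wrap an existing Flask `app` object:

```json
{
    "production": {
        "app_function": "app.app",
        "aws_region": "eu-central-1",
        "s3_bucket": "my-zappa-deployments",
        "project_name": "my-flask-app"
    }
}
```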

