There are a couple of points that resonate with me. I strongly believe there is a need to provide transparency and accountability to cloud engineers. How many cloud engineers know the exact list of resources they own across all the accounts and regions they have access to? How much money do cloud providers make from abandoned, zombie, long-forgotten resources?
What they built sounds nice and feature-rich, but also quite complex. I just finished adding a (simpler) feature to the SaaS I am building. It generates individual reports with the list of resources owned by the user, the associated cost, and a flag for those which are likely unused.
I will post a Show HN soon, but if you are interested in cloud efficiency, I would love your feedback! https://li10.com
By default, the instance is deployed to a public subnet, but no ingress traffic is allowed by the instance's security group. Only egress is needed so the instance can connect to the AWS SSM service.
The user can also deploy the instance to a private subnet, but this requires them to manually ensure connectivity to AWS SSM via a NAT gateway, a VPC endpoint, or other means.
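Not the project's actual code, but a minimal boto3 sketch of what such an egress-only security group looks like (the group name and VPC ID are placeholders):

    import boto3

    ec2 = boto3.client("ec2")

    # A new security group starts with no ingress rules and an allow-all
    # egress rule, which is all the SSM agent needs (outbound HTTPS).
    sg = ec2.create_security_group(
        GroupName="ssm-egress-only",
        Description="No ingress; egress only, for SSM connectivity",
        VpcId="vpc-0123456789abcdef0",  # placeholder
    )

    # Optionally tighten the default allow-all egress down to HTTPS only:
    ec2.revoke_security_group_egress(
        GroupId=sg["GroupId"],
        IpPermissions=[{"IpProtocol": "-1",
                        "IpRanges": [{"CidrIp": "0.0.0.0/0"}]}],
    )
    ec2.authorize_security_group_egress(
        GroupId=sg["GroupId"],
        IpPermissions=[{"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
                        "IpRanges": [{"CidrIp": "0.0.0.0/0"}]}],
    )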
I am working on eliminating waste in cloud resources.
Cloud providers made it too easy to start resources. But unless there is a stringent upfront process (which usually defeats the purpose of using the cloud), it is hard to keep track of who owns what and what is still needed. Deciphering long cloud bills quickly becomes impossible, and users do not have a clear understanding of the per-resource cost they generate.
I believe the solution is rooted in transparency and accountability for both users and cloud providers.
I am creating a tool that generates a cost and security cloud report, sent weekly to each cloud user or team.
I intend to release it as an open-source tool as well as a SaaS service as part of www.li10.com
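Not the final implementation, but the gist of the cost side in boto3: group last month's spend by an owner tag via Cost Explorer (the "owner" tag key and the dates are assumptions):

    import boto3

    ce = boto3.client("ce")  # Cost Explorer

    resp = ce.get_cost_and_usage(
        TimePeriod={"Start": "2023-05-01", "End": "2023-06-01"},
        Granularity="MONTHLY",
        Metrics=["UnblendedCost"],
        GroupBy=[{"Type": "TAG", "Key": "owner"}],
    )

    for group in resp["ResultsByTime"][0]["Groups"]:
        owner = group["Keys"][0]  # e.g. "owner$alice"
        cost = float(group["Metrics"]["UnblendedCost"]["Amount"])
        print(f"{owner}: ${cost:.2f}")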
I totally agree with you!
There are so many layers we could optimize, with various levels of difficulty and returns in terms of cost and CO2 reduction.
Legacy tech stacks are especially wasteful: HW + OS + app running 24/7 regardless of whether they are used or not.
My intuition is that AWS claims they can reduce CO2 emissions from your on-prem stack by 80% because *IF* it were rewritten as a serverless app, it would consume energy only when it runs.
A friend of mine is heavily into AWS. Over the past two years he moved their entire organisation off of EC2 instances. They were apparently running EC2 for everything, and that reflects my experience as well. Organisations mostly see AWS as a hypervisor in the cloud.
We tried to move so many customers to some sort of container, auto-scaling EC2 (i.e. turning the instances off when not in use), Aurora, Lambda, you name it, anything that doesn't necessarily run 24/7. The customers don't like it. They don't feel safe, it's not familiar to them, they don't understand how they should manage it, or they make up reasons as to why they need to run MariaDB on EC2 or why EFS or S3 won't be able to replace their EC2-backed NFS share.
When it's done correctly, an entire business running without a single VM or physical server is really an amazing thing to see.
Hi there! I'm Matt. Leader in cloud technologies, with a 17-year track record in both the US and EU. As a Senior Director of Cloud Solution Architecture at NTT DATA, I lead the development of AWS serverless, SaaS, big data, and ML applications. I also coach our cloud engineering talent pool.
Back in the Flux7 days, an AWS consulting startup I called home, we pulled off a solid 50% YoY growth, paving the way for an acquisition by NTT DATA. It was quite an adventure, demanding a blend of technical skills and business acumen.
Dedicated to fostering sustainable IT practices, I built https://www.li10.com, a SaaS helping cloud engineers lighten their cloud environmental footprint.
Location: Germany
Remote: Yes, worked remotely for US customers for 6 years
Willing to relocate: No
Technologies: AWS, serverless, CI/CD, Python/Node/Typescript, MongoDB, Docker, LLM
Resume: https://www.linkedin.com/in/mattbuchner/
Email: [email protected]
The recommended spec is 4 cores and 6GB. Running this 24/7 for a year in the US, where the carbon intensity is about 400 gCO₂eq/kWh, would produce about 150kg of CO2 per user per year.
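A back-of-the-envelope check of that figure (the ~43W average draw is my assumption, implied by the numbers rather than measured):

    HOURS_PER_YEAR = 365.25 * 24       # ~8766 h
    avg_draw_w = 42.8                  # assumed continuous average draw
    carbon_intensity = 400             # gCO2eq/kWh, rough US grid average

    kwh = avg_draw_w * HOURS_PER_YEAR / 1000    # ~375 kWh/year
    kg_co2 = kwh * carbon_intensity / 1000      # ~150 kg CO2/year
    print(f"{kwh:.0f} kWh/year -> {kg_co2:.0f} kg CO2/year")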
I appreciate the intent of this project, but it is not a sustainable approach.
If you are taking a couple of pictures a day, you only need to run this service for a couple of minutes per day, the rest is wasted. With Google Photos, as a SaaS, users share computing power and each user emits less CO2.
>If you are taking a couple of pictures a day, you only need to run this service for a couple of minutes per day, the rest is wasted.
You can configure your server to sleep (scheduled or use WoL), which will halve (or more) that carbon emission number.
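For what it's worth, the wake-up side is tiny; here is a minimal Wake-on-LAN magic packet in Python (the MAC address is a placeholder, and the server's NIC must have WoL enabled):

    import socket

    def send_wol(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
        # Magic packet: 6 bytes of 0xFF, then the MAC repeated 16 times.
        mac_hex = mac.replace(":", "").replace("-", "")
        payload = bytes.fromhex("ff" * 6 + mac_hex * 16)
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
            s.sendto(payload, (broadcast, port))

    send_wol("aa:bb:cc:dd:ee:ff")  # placeholder MAC of the sleeping server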
Speaking of which, individual action will never approach the level that corporate action could. Take a look at the practice of gas flaring (a 2022 estimate of ~357 million tons of CO2), which is about ~45kg per person on Earth. Changing flaring to capture will have a bigger impact than a minuscule fraction of the population not running a server.
But profits trump everything, so I guess we're stuck shaming individuals instead.
Turning a service on/off makes a lot of sense for this kind of application.
I sincerely didn't post my comment to shame anyone.
The need for computing is growing fast, and it is actually not negligible at all. See the link below: data centers emitted 300 MtCO2 in 2020, similar to the number you mentioned.
> See the link below: data centers emitted 300 MtCO2 in 2020, similar to the number you mentioned.
Wow, I didn't realize there were so many datacenters. My comment about corporate action still stands though - there's no reason why companies running those data centers cannot switch to renewable sources for even a portion of their usage.
I'm no fanboy, but I was impressed to see how Apple's office is 100% powered by renewables (about 75% generated onsite, the rest through an offsite farm):
According to Renewable Energy World, the campus will run entirely on renewable energy. It will generate 17 megawatts (MW) of solar on rooftops and also be supplemented by 4 megawatts of Bloom Energy fuel cells. They’re hoping that this onsite generation will cover around 75% of power requirements during working hours. The remaining energy needed will be supplied by a 130-megawatt off-site solar farm in Monterey County.
Most cloud providers already claim to be carbon neutral for their EU and US regions. Unfortunately, there is a lot of waste there too, so there goes the little bit of green energy we have.
I don't understand how or why this defeatist, nihilistic attitude is growing in popularity. Do people just lack the self awareness to realize their inaction is tantamount to malice?
Do you not understand that your power and actions are so little next to these corporations that it really doesn't matter? The other day I thought maybe my gf and I should merge our Facebook accounts, you know, to save up space on Facebook.
It's not defeatist or nihilist, it's understanding that policy and law (or an individual CEO's or politician's choice) have orders of magnitude more impact than individual actions.
> I don't understand how or why this defeatist, nihilistic attitude is growing in popularity.
The attitude of "here's a simple step that will cut down your individual carbon emissions, and also please pay attention to the primary sources of the problem"? Because that attitude doesn't seem defeatist or nihilistic at all? I don't think the "profits trump everything" comment was an endorsement of that philosophy.
If I misunderstood, and you were talking about the people hiding behind the companies and industries who are destroying the environment so that they can enrich themselves at everyone's expense then I'd guess that those folks are entirely self-aware and simply don't care that they act out of malice.
It seems like they're willing to hurt anyone and destroy anything if it might grant them a little more money. I wish people would start seeing them for the threat that they are, and spend a lot more of their time thinking about what should be done about those kinds of psychopaths instead of spinning their wheels worrying about how long their computer stays awake.
Inaction is bad, but wasted and misdirected action can be every bit as harmful. Focusing on the people who are consistently doing the most harm in the shortest amount of time is only logical, unless people have become too defeatist to believe that anything can be done to stop the abuses of industry.
So you support Bill Gates creating an uncountable amount of CO2 with his inefficient OS (not to mention ewaste from CPU limiting), but a single individual running their server for a few hours longer than you like is “tantamount to malice”? Do you even care about Earth, or do you just care about reciting capitalist talking points?
If you are running the machine anyway to do other tasks, like backups and running Jellyfin, then it might be worth it.
How did you get to 150kg of CO2 per user? I'm going to assume you mean each person has one instance rather than sharing an instance like a family would. So that's 375 kWh per year, which works out to drawing about 42W continuously. That seems rather high to me.
I'm hoping to try and run this on my Pi4, which idles at under 1W. You could run it on a MacBook M1, which consumes 42W only under heavy load, and I doubt this is going to need consistent heavy load: https://www.anandtech.com/show/17024/apple-m1-max-performanc...
My personal server is a 9900K with 64GB ram and normally around 35 various docker containers running on it. It's very rare to see it ever above 10% CPU utilization.
For example, I host between 5-10 Minecraft servers for my kids, but they sleep while the kids aren't playing on them and only spool up when a link is requested.
A properly configured Linux box would barely need a trickle of energy when nothing is demanding its resources, so I'd guess that the estimates are likely a bit on the high side.
For perspective, this is about half the CO2 that you will exhale in a year of normal breathing. For similarly useful environmental tips, try not exercising as much - wouldn't want to increase the rate of exhalation!
I am not sure this argument is correct. Animals have a carbon footprint too. More importantly, it seems like control over data is a stronger concern than such a small amount of CO2.
> I appreciate the intent of this project, but it is not a sustainable approach.
Self-hosting anything isn't sustainable. Even your modem or router isn't sustainable. But having additional backups and privacy is worth something to me. However, since I'm in Europe, electricity prices here are through the roof and I try to minimize power draw where possible. I self-host Immich on my Intel Celeron J1800 NAS, which draws 19.1W on average. I cannot run any of the ML stuff that it comes with, though. But for me the carbon intensity should be about 250 gCO₂eq/kWh [1]. So running my NAS produces about 42 kg of CO₂ per year for 2 users [2], including 10 or so additional applications besides Immich. That sounds pretty reasonable to me.
> What you are saying makes sense. If you are running other valuable apps, then it isn't a waste.
'Valuable' is pretty subjective though. Most of the services are for entertainment purposes. But I also host quite a few useful things such as Bitwarden and CouchDB for Obsidian Sync.
> I calculated 20.9kg / user / year.
You are right. I forgot the 24 hours in my calculation:
(0.0191 kW × 250 gCO₂/kWh × 365.25 days × 24 h) / 1000 ≈ 41.9 kg CO₂ per year
I also noticed that my comment was incorrectly formatted; I've since edited my original comment. If I factor in my complete homelab, I use about 40W, which corresponds to about 90kg of CO₂ yearly. That doesn't sound all that bad to me.
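As a sanity check, the same arithmetic as a small helper (the numbers are the ones from my comments above):

    def yearly_kg_co2(avg_watts: float, g_per_kwh: float) -> float:
        # average power (W) -> kWh/year -> kg CO2/year
        return avg_watts / 1000 * 365.25 * 24 * g_per_kwh / 1000

    print(yearly_kg_co2(19.1, 250))  # NAS alone: ~41.9 kg (~21 kg per user)
    print(yearly_kg_co2(40.0, 250))  # whole homelab: ~87.7 kg, roughly 90 kg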
First, do you suppose it's running at full pelt 24/7? I'd suggest the specs are that high to speed up batch operations. Most of the time it'll be idling.
And second, letting Google or Apple manage this doesn't mean it doesn't use energy. They're offsetting or generating? So am I.
Multi-user support is free, face detection is cheap enough that embedded camera chips do it, metadata indexing is cheap with any recent DB engine, and search is cheap with them as well.
The only computationally expensive thing here is face matching. PhotoPrism recommends two cores and 3GB of RAM and claims to be able to run on a NAS.
You should write it as how much it adds to electricity bill, not in kg of CO2.
Kg of CO2 is a unit of fearmongering. And if you believe it's not, then the problem is why people are allowed to buy electricity so cheaply, not a photo-sync service.
By the way, a cow produces 120 kg of methane a year.
It illustrates the dependency between GDP, population, energy, and emissions. Assuming we don't want to reduce population or GDP, we are left with optimizing energy intensity and emission intensity.
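For reference, this is essentially the Kaya identity, which factors emissions into exactly those four levers:

    CO2 = Population × (GDP / Population) × (Energy / GDP) × (CO2 / Energy)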
Here is my summary of the mistakes Hertz made:
- hired the vendor based on a cool demo & vendor size
- did not have a Hertz employee as Product Owner
- tried to go big fast instead of taking an iterative approach
- did not implement proper tests & validation
- spent $10M without building internal skills
Disclaimer: I am totally biased here, as I am a consultant and our business model is to help our customers with their digital transformation while teaching them DevOps and cloud technology.
$10M for a project of this size and importance does not seem unreasonable to me. The big mistake is that they didn't seem to have a plan to take ownership of the project and learn the skills required to carry the project forward without Accenture.
Having been involved in a very similar project I think the mistakes would be on both sides.
"hired the vendor based on a cool demo & vendor size"
It is common that companies don't choose the right people to assess vendors and review the suggested approach. Often the consultancy sends in sales teams to pitch to the boardroom with demos and "we're Big Company X and know what we're doing". However, it seems that Hertz knew enough to ask for specific functionality that didn't get delivered.
"did not have a Hertz employee as Product Owner"
Part of leading a project is making sure that your client, internal or external, is properly educated and involved in necessary decisions. Accenture shouldn't have accepted the PO role as it is a conflict of interest.
"tried to go big fast instead of taking an iterative approach"
I don't recall reading if this was Hertz's decision, it could well have been Accenture's suggestion and in the early stages of this relationship Hertz would have trusted their "expert" recommendation.
"did not implement proper tests & validation"
This would have been Accenture's responsibility.
"spent $10M without building internal skills"
It was $33M, and you're right, although we don't know what skills they built internally. They might have been better off building a team internally, but companies often prefer to hire "experts" as it's often easier from a cost centre and administrative view.
Specifically: "Your customers are idiots, that's why they hired "experts." They don't know how to judge expertise (otherwise they wouldn't be hiring you). They hire based on emotions, recommendations and epaulettes because that's the best they can do."
I agree both Hertz and Accenture share some responsibility here. I didn't mean to blame it all on Hertz.
"They might have been better off building a team internally". For a business critical projects, building the team internally and using the right partner is the way to go imho... if budget allows, which they had plenty.