Mining Bitcoin requires both hardware and electricity, and the cheapest electricity is solar. There isn't any severe scarcity of the raw materials to make solar panels, or of sunlight, so Bitcoin miners can buy as many solar panels as they want and it would only increase the economies of scale for producing them for other purposes too.
Solar has inconsistent output: there is none at night and it varies with the weather during the day, while mining hardware wants a fixed, constant amount of power. The logical thing for miners to do is to somewhat overbuild the amount of generation they need, sell any surplus to the grid during the day, and buy power back at night. The same incentives hold if the miners and the generators are two different parties, and the result is to increase the amount of generation capacity by more than the amount of consumption and have "too cheap to meter" during periods of above-average generation. (You were never going to get "too cheap to meter" during periods when generation is low and demand is high.) Even during short periods when demand significantly outstrips supply, their incentive is to stop operating for those few days out of the year, because the spot price of electricity makes mining unprofitable then. That allows the generation capacity installed for mining to be used to support the rest of the grid, and it inhibits the price of electricity from rising above the point where mining becomes unprofitable even for people who already have mining hardware. It's basically a buffer that buys electricity when it's cheap and sells when it's expensive.
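The dispatch logic being described is simple enough to sketch. A toy version in Python -- the breakeven price and the hourly data are invented for illustration:

```python
# Toy dispatch rule for a miner with on-site solar: mine while the spot
# price is below the mining breakeven, otherwise sell surplus or idle.
# The breakeven figure and hourly data are made up, not real market data.

BREAKEVEN = 0.06  # $/kWh above which mining is unprofitable

def dispatch(spot_price: float, solar_kw: float) -> str:
    if spot_price >= BREAKEVEN:
        # Power is worth more on the grid than in hashes: export any
        # solar output and shut the miners down.
        return "sell" if solar_kw > 0 else "idle"
    # Cheap power: run the miners, buying from the grid at night if needed.
    return "mine"

hourly = [(0.03, 500.0), (0.12, 500.0), (0.12, 0.0), (0.05, 0.0)]
print([dispatch(p, kw) for p, kw in hourly])  # ['mine', 'sell', 'idle', 'mine']
```

The price-cap claim falls out of the rule: above the breakeven the miner is a seller (or at least not a buyer), so mining demand can't hold prices past that line for long.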
Bitcoin has a volatile price. When the price is high, miners buy hardware and increase or pay someone else to increase generation capacity. When the price declines, the mining hardware becomes idle but the power generation capacity still generates fungible electricity that can be used for any other purpose. The result is that miners pay to install a lot of generation capacity during the boom, and have the incentive to prioritize investing in more generation rather than newer/more efficient mining hardware because it's the thing that's still worth something if the price declines, and that generation capacity then gets offloaded into the grid during the bust, with the result that grid prices go up some during the boom and down by even more during the bust. By the next boom some of the generation added last time has already been sold to non-miners or locked into long-term contracts so now they're back to adding new capacity again.
"Incentive to fund increases in generation capacity but then not use all of it" has what effect on average prices?
You're making a lot of highly idealized assumptions that don't hold true in reality.
Most significantly, that the increased demand due to mining will result in grid operators investing in proportional new capacity to offset it over a reasonable time scale, instead of just driving up prices due to basic supply and demand.
Also that miners are only consuming electricity when renewables dominate the mix. Otherwise they're responsible for more CO2 emissions to do something useless.
Plus in markets like Texas, miners also manage to capture subsidies intended to pay actually useful customers, like factories, for going offline at peak times. So ratepayers are essentially paying protection money so they won't overstress the grid by performing their useless work.
In a world where bitcoin miners had to install new solar capacity to entirely offset their peak usage and sell back to the grid any excess then sure, seems like that wouldn't be a big societal net negative like it is right now.
Or we could use all that "free solar energy" to benefit humanity through a million other more useful endeavors. Such as developing and deploying batteries.
One thing we do not lack is demand for more energy.
> Or we could use all that "free solar energy" to benefit humanity through a million other more useful endeavors.
Please tell me where I can get unlimited solar panels for free. I'll rent a truck and be there straight away.
> One thing we do not lack is demand for more energy.
Market demand is the willingness and ability to pay money for something. If the demand was actually unlimited then why isn't there either a Dyson sphere around the sun already or a 0% unemployment rate from everyone having a job building one?
> They started it because the drivers people used to use from hardware vendors would routinely blue screen windows, which made MS look like the reason windows would crash. Hardware vendors are notoriously inept at software.
But hardware vendors also want Windows licenses to include with their hardware, so it's pretty easy to say "do the hardware certification program if you want the discount", and that's exactly what they did in the early days, and it worked fine. Even the peripheral makers (peripherals being increasingly rare now anyway) still want to be able to put the Windows logo on their products.
At which point we still have the same question: Why are they harassing the WireGuard developers, who have their own reputation for not being inept at software and therefore shouldn't need a Microsoft certification program to assure their users that their code is trustworthy to install?
> Why are they harassing the WireGuard developers, who have their own reputation for not being inept at software
I would guess this is just large organizations Seeing Like a State whereby they "seek to force administrative legibility on their subjects by homogenizing them".
At which point we're back to, why is Microsoft acting like a government and treating their users like property of the crown instead of autonomous adult human beings who should be free to choose what software they want on their own PC?
Sorry, that was yesterday's HN Wordle! (That's Wordle, the quite popular wordplay game acquired by the New York Times--just joking that I created a word game of my own.)
Useless reflection to ignore below (forewarned!)
I hesitated to post; in the end, the value of the comment was so low, I expected non-wordplay-fans to scroll past and lose nothing, so I left it in the hopes at least one person would find the answer themselves and be pleased about it.
No drama, I don't mind a puzzle or oblique reference. I'm also a grandparent and spend too much time on pointing out that what one person is thinking of isn't always the same as what another is, and that there's often yet another way of looking at a statement.
I liked your comment, I guessed the word, and had fun pointing out ambiguities at play.
LLMs give you the boring (i.e. statistically probable) answer. You could probably get it to say "money" almost regardless of what the original question was because it's so generic. It might even say that for a name without all the right letters.
From the more than 300 possibilities we can then consider the context. We're talking about Microsoft here, and the problem suggests we're the sort of people who expect anagrams to have secret meaning, so we should prefer an answer implying some kind of conspiracy or kabbalistic nonsense. The obvious candidates are therefore mason and Satan. Between these, Satan would require reusing a letter the candidate set only has once, and one of the other words on the list was stone. We can form two five letter words if we're allowed to reuse letters and thereby get stone mason.
This is the most irrefutable possible proof that we're being pointed to a masonic conspiracy rather than Microsoft's usual popular association with the antichrist.
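The candidate-filtering step is mechanical, for what it's worth. A Python sketch -- the actual puzzle word isn't stated, so the pool "monstea" is invented here just to match the constraints described (one 'a', so "satan" fails):

```python
from collections import Counter

def can_form(word: str, pool: Counter) -> bool:
    """True if `word` can be spelled from `pool` without reusing letters."""
    need = Counter(word.lower())
    return all(pool[ch] >= n for ch, n in need.items())

# Hypothetical letter pool with exactly one 'a', as described above.
pool = Counter("monstea")

print(can_form("mason", pool))  # True
print(can_form("stone", pool))  # True
print(can_form("satan", pool))  # False: would need a second 'a'
```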
EFF is more like classical liberal. They generally oppose regulation of speech/tech and oppressive laws like DMCA 1201 (anti-circumvention), but promote things in the nature of antitrust, like right-to-repair. Everything is required to be crammed into a box now, so that often gets called "left" because the tech companies (also called "left") have found it more effective to pay off the incumbents in GOP-controlled states when they don't like right-to-repair laws--although Hollywood ("left" again) is traditionally the one pressuring Democrats to sustain the horrible anti-circumvention rule when they're in power.
It turns out trying to fit everything into one of two boxes is pretty unscientific.
> Maybe if you aren’t paying attention to the car industry you’ll disagree with me but the problem here is the Model S and X are positively ancient with about zero dollars spent on keeping them updated and they’ve become completely irrelevant to the market as a result.
In practice they essentially got replaced with the Model 3 and Y, which didn't exist when the models being discontinued first came out.
It's because of the decline in battery prices. When the Model S came out, an electric car with that range had to be that price. Now it's overpriced for what it is so they'd either need to design one which is significantly more premium while still selling into an inherently lower volume market segment, or lower the price to reflect the current battery costs and then have it be too close to the Model 3.
What they really need to do is continue to move down, i.e. release a subcompact with less range than the Model 3 but on the cheap.
The obvious problem with steer-by-wire is that in the traditional design, it's not uncommon to lose power assist but not the mechanical connection to the wheels, so you can still steer the car. To completely lose steering control you'd need significant mechanical damage.
If the whole thing goes through the computer then there are lots of new ways to fail. Steering wheel position sensor goes bad on the highway? Computer gets bad data. Control wires get disconnected or damaged? No data. Completely unrelated wires get shorted and fry the computer? No steering. Anything pops the wrong fuse? No power, no computer or steering motors.
Some of those can be mitigated with redundancy but you're still vulnerable to common causes. You have three position sensors and someone dumps their beverage down the steering column, are there any left and do you have any good way to determine which one(s)? The vehicle took some minor damage allowing water to get somewhere it's not intended to, any way to guarantee you're not about to lose both sides of a redundant electrical system the next time it goes through a puddle infused with conductive road salt?
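The triple-sensor case is usually handled with a median voter, which outvotes one wild sensor but, as noted, can't fix a common cause that takes out all three. A sketch -- the tolerance threshold is invented:

```python
def vote(readings, tolerance=2.0):
    """Median-of-three voter for redundant position sensors.

    Returns (value, healthy). A single faulty sensor is outvoted; if
    fewer than two sensors agree with the median (e.g. a common-cause
    failure like a beverage down the steering column), all we can do
    is flag the fault -- there's no way to pick the good value.
    """
    a, b, c = sorted(readings)
    median = b
    agree = sum(1 for r in (a, b, c) if abs(r - median) <= tolerance)
    return median, agree >= 2

print(vote((10.0, 10.1, 55.0)))  # (10.1, True): one wild sensor outvoted
print(vote((0.0, 10.0, 25.0)))   # (10.0, False): no quorum, flag the fault
```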
Of course, a counterpoint is what's been happening in aviation. Autopilot became a thing. Autoland became a thing. And, to keep improving planes (first military planes, then commercial aircraft), it was much easier to drop the mechanical connection to the wings.
Autopilot started as a help to pilots and evolved into something that is a necessity, where pilot control inputs are "suggestions" or "goals", not inputs like turning the wheel on a bike. They are to be followed in what you might call "the long term" from the perspective of controlling the aircraft, but in the short term the computer flies the plane in a way IT thinks is reasonable--an extreme example being enforcing the flight envelope. And today there exist autoland-only airports (as well as huge airports that go autoland-only when conditions are too hard for humans, like LHR).
Most of today's passenger aircraft cannot be flown if fly-by-wire is not operational. Most of today's aircraft actually used for passenger transport cannot land without fly-by-wire.
A number of military aircraft, and rocket planes and rockets, even the ones carrying humans, and more and more passenger planes cannot be flown by humans, not just because the mechanical force humans can generate cannot move the control surfaces (which "can be fixed" with hydraulics, if you don't mind serious caveats), but because the human brain is incapable of generating sufficient control inputs at a fast enough rate, or just can't keep stable flight going.
Hilariously, this also goes for hobby quadcopters. They are flown by algorithms. Humans can't do it. Not fast enough. Humans provide direction. Algorithms, even AI algorithms that aren't even guaranteed to succeed at all (in professional/military drones), actually fly the thing.
But, yes, you're entirely correct in saying "then there are lots of new ways to fail". It also works better, cheaper, faster, safer, and more comfortably ... when it doesn't fail.
And ... robotaxis are already far safer than even a good human driver. So whatever the problems ... they don't actually make things worse.
Also you should check out geohot's business. A lot of cars already are "fly-by-wire". Their solution? They now have 2 CAN buses instead of one. One for the critical stuff. Cylinder timings. Checking the oil levels. Turning the wheels. Actuating the brakes. That sort of stuff. A second CAN bus for your bluetooth music, and displays and what have you. I hear a certain new Mercedes now has like 7 buses. We are making things safer.
Planes are probably the most controlled machines we have. Everything gets checked twice or more, everything gets tracked, and there is a clear requirement to do it like this because, as you said, it's not possible for humans to control a fighter jet or a big plane.
Cars are none of that, and we have billions of them on the street.
Cars have also become a lot more expensive due to their complexity, which definitely creates problems for a lot of people who can't afford all of that. I'm really torn by this because I think it's very good that my side mirror shows me if there is a car next to me, but in our capitalistic economy we are excluding a lot of people from affordable cars. Drive-by-wire needs to be cheaper and easier to fix/repair.
Btw, Waymos are slowly learning to drive on highways, so I might agree that they drive safer than humans in certain controlled environments. For sure not in any environment.
But that is the "tradeoff" people are going for. What irritates me about Waymos is that they are not really cheaper than taxis and Uber. If we want people to become more mobile ... Waymo does not appear to be the answer.
And that was always the trade that was proposed. Sure, Waymos (and Ubers) will displace a LOT of taxi jobs, but they'll be way cheaper than taxis. Well ... they're not. And at that point, from an economic perspective, this is just taking things away for not much in return.
Once again people get a lot of possible choices and once again they choose for the more expensive one, putting more people out of business, out of a job, and as you say out of society. Now they're saying "yeah but this is good for autistic people and women, who can now travel by taxi without ever seeing anyone". How, exactly, does anyone think that's a good thing for society? Seriously?
Plus I'm a bit of the opinion, if Waymo is already breaking their own proposed social contract now ... imagine what they'll do in 10 years.
> Sure, but an attacker could still overwrite your kernel which your untouched bootloader would then happily run.
Except that it's on the encrypted partition and the attacker doesn't have the key to unlock it since that's on the removable media with the boot loader.
They could write garbage to it, but then it's just going to crash, and if all they want is to destroy the data they could just use a hammer.
> The attacker does this when the drive is already unlocked & the OS is running.
But then you're screwed regardless. They could extract the FDE key from memory, re-encrypt the unlocked drive with a new one, disable secureboot and replace the kernel with one that doesn't care about it, copy all the data to another machine of the same model with compromised firmware, etc.
> Full disk encryption protects from somebody yanking a hard drive from running server (actually happens) or stealing a laptop.
Both of these are super easy to solve without secure boot: The device uses FDE and the key is provided over the network during boot, in the laptop case after the user provides a password. Doing it this way is significantly more secure than using a TPM because the network can stop providing the key as soon as the device is stolen and then the key was never in non-volatile storage anywhere on the device and can't be extracted from a powered off device even with physical access and specialized equipment.
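The revocation property is the whole point, and it's easy to sketch. The class and method names below are invented; a real deployment would sit behind mutual TLS and the user's password, but the shape is this:

```python
# Sketch of a network key-escrow service for FDE keys: the server hands
# out a device's unlock key only while the device is not reported stolen.
# Names and structure are illustrative, not any real product's API.

import secrets

class KeyEscrow:
    def __init__(self):
        self._keys = {}       # device_id -> 32-byte FDE key
        self._revoked = set() # devices reported stolen

    def enroll(self, device_id: str) -> bytes:
        key = secrets.token_bytes(32)
        self._keys[device_id] = key
        return key

    def report_stolen(self, device_id: str) -> None:
        self._revoked.add(device_id)

    def fetch_key(self, device_id: str) -> bytes:
        if device_id in self._revoked:
            raise PermissionError("device reported stolen; key withheld")
        return self._keys[device_id]
```

Once `report_stolen` fires, the key simply stops being served -- and since it was never in non-volatile storage on the device, there's nothing to extract from the powered-off hardware.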
> The device uses FDE and the key is provided over the network during boot, in the laptop case after the user provides a password.
Sounds nice on paper, has issues in practice:
1. no internet (e.g. something like Iran)? Your device is effectively bricked.
2. heavily monitored internet (e.g. China, USA)? It's probably easy enough for the government to snoop your connection metadata and seize the physical server.
3. no security at all against hardware implants / base firmware modification. Secure Boot can cryptographically prove to the OS that your BIOS, your ACPI tables and your bootloader didn't get manipulated.
> no internet (e.g. something like Iran)? Your device is effectively bricked.
If your threat model is Iran and you want the device to boot with no internet then you memorize the long passphrase.
> heavily monitored internet (e.g. China, USA)? It's probably easy enough for the government to snoop your connection metadata and seize the physical server.
The server doesn't have to be in their jurisdiction. It can also use FDE itself and then the key for that is stored offline in an undisclosed location.
> no security at all against hardware implants / base firmware modification. Secure Boot can cryptographically prove to the OS that your BIOS, your ACPI tables and your bootloader didn't get manipulated.
If your BIOS or bootloader is compromised then so is your OS.
Well... they wouldn't be the first ones to black out the Internet either. And I'm not just talking about threats specific to oneself here because that is a much different threat model, but the effects of being collateral damage as well. Say, your country's leader says something that makes the US President cry - who's to say he doesn't order SpaceX to disable Starlink for your country? Or that Russia decides to invade yet another country and disables internet satellites [1]?
And it doesn't have to be politically related either, say that a natural disaster in your area takes out everything smarter than a toaster for days if not weeks [2].
> If your BIOS or bootloader is compromised then so is your OS.
Well, that's the point of the TPM design and Secure Boot: that is no longer true. The OS can verify everything being executed prior to its startup back to a trusted root. You'd need 0-day exploits--while these exist, including unpatchable hardware issues (iOS checkm8 [3]), they are incredibly rare and expensive.
> Say, your country's leader says something that makes the US President cry - who's to say he doesn't order SpaceX to disable Starlink for your country?
Then you tether to your phone or visit the local library or coffee shop and use the WiFi, or call into the system using an acoustic coupler on an analog phone line or find a radio or build a telegraph or stand on a tall hill and use flag semaphore in your country that has zero cell towers or libraries, because you only have to transfer a few hundred bytes of protocol overhead and 32 bytes of actual data.
At which point you could unlock your laptop, assuming it wasn't already on when you lost internet, but it still wouldn't have internet.
> The OS can verify everything being executed prior to its startup back to a trusted root.
Code that asks for the hashes and verifies them can do that, but that part of your OS was replaced with "return true;" by the attacker's compromised firmware.
That's premised on the attacker never having write access to the encrypted partition, which is the thing that storing the FDE key on a remote system or removable media does better than a TPM. If the key is in a TPM, they can extract it using a TPM vulnerability or specialized equipment. Or boot up the system and let it unlock the partition by running the original signed boot chain, giving the attacker the opportunity to compromise the now-running OS using DMA attacks, cold-boot attacks, etc. Or they can stick it in a drawer, without network access to receive updates, until someone publishes a relevant vulnerability in the version of the OS that was on it when it was stolen.
Notice that if they can modify/replace the device without you noticing then they can leave you one that displays the same unlock screen as the original but sends any credentials you enter to the attacker. Once they've had physical access to the device you can't trust it. The main advantage of FDE is that they can't read what was on a powered off device they blatantly steal, and then the last thing you want is for the FDE key to be somewhere on the device that they could potentially extract instead of on a remote system or removable media that they don't have access to.
There is no real advantage of a central signing authority. If you use Debian the packages are signed by Debian, if you use Arch they're signed by Arch, etc. And then if one of them gets compromised, the scope of compromise is correspondingly limited.
You also have the verification happening in the right place. The person who maintains the Arch curl package knows where they got it and what changes they made to it. Some central signing authority knows what--that the Arch guy sent them some code they don't have the resources to audit? But then you have two different ways to get pwned, because you get signed malicious code if a compromised maintainer sends it to the central authority to be signed, or if the central authority gets compromised and signs whatever they want.
All PKI topologies have tradeoffs. The main benefit to a centralized certification/signing authority is that you don't have to delegate the complexity of trust to peers in the system: a peer knows that a signature is valid because it can chain it back to a pre-established root of trust, rather than having to establish a new degree of trust in a previously unknown party.
The downside to a centralized authority is that they're a single point of failure. PKIs like the Web PKI mediate this by having multiple central authorities (each issuing CA) and forcing them to engage in cryptographically verifiable auditability schemes that keep them honest (certificate transparency).
It's worth noting that the kind of "small trusted keyring" topology used by Debian, Arch, etc. is a form of centralized signing. It's just an ad-hoc one.
> a peer knows that a signature is valid because it can chain it back to a pre-established root of trust, rather than having to establish a new degree of trust in a previously unknown party.
So the apt binary on your system comes with the public keys of the Debian packagers and then verifies that packages are signed by them, or by someone else whose keys you've chosen to add for a third party repository. They are the pre-established root of trust. What is obtained by further centralization? It's just useless indirection; all they can do is certify the packages the Debian maintainers submit, which is the same thing that happens when they sign them directly and include their own keys with the package management system instead of the central authority's, except that now there isn't a central authority to compromise everyone at once or otherwise introduce additional complexity and attack surface.
> PKIs like the Web PKI mediate this by having multiple central authorities (each issuing CA) and forcing them to engage in cryptographically verifiable audibility schemes that keep them honest (certificate transparency).
Web PKI is the worst-of-both-worlds omnishambles. You have multiple independent single points of failure: compromising any of them allows you to sign anything. Its only redeeming qualities are that the CAs have to compete with each other and that CAA records nominally allow you to exclude CAs you don't use from issuing certificates for your own domain--but end users can't exclude CAs they don't trust, most domain owners don't even use CAA records, and a compromised CA could ignore the CAA record and issue a certificate for any domain regardless.
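For reference, this is what a CAA record set looks like in a zone file (domain and CA names illustrative). It's advisory in the sense that a compliant CA checks it at issuance time, but nothing technically stops a compromised CA from ignoring it:

```
example.com.  IN  CAA  0 issue "letsencrypt.org"
example.com.  IN  CAA  0 issuewild ";"
example.com.  IN  CAA  0 iodef "mailto:security@example.com"
```

The `issue` tag whitelists a CA, `issuewild ";"` forbids wildcard issuance entirely, and `iodef` names where a CA should report rejected requests.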
> It's worth noting that the kind of "small trusted keyring" topology used by Debian, Arch, etc. is a form of centralized signing. It's just an ad-hoc one.
Only it isn't really centralized at all. Each package manager uses its own independent root of trust. The user can not only choose a distribution (apt signed by Debian vs. apt signed by Ubuntu), they can use different package management systems on the same distribution (apt, flatpak, snap, etc.) and can add third party repositories with their own signing keys. One user can use the amdgpu driver which is signed by their distribution and not trust the ones distributed directly by AMD, another can add the vendor's third party repository to get the bleeding edge ones.
This works extremely well. There are plenty of large trustworthy repositories like the official ones of the major distributions for grandma to feel safe in using, but no one is required to trust any specific one nor are people who know what they're doing or have a higher risk tolerance inhibited from using alternate sources or experimental software.
Nothing, I can’t think of a reason why you would want to centralize further. But that doesn’t mean it isn’t already centralized; the fact that every Debian ISO comes with the keyring baked into it demonstrates the value of centralization.
> Each package manager uses its own independent root of trust.
Yes, each is an independent PKI, each of which is independently centralized. Centralization doesn’t mean one authority; it’s just the way you distribute trust, and it’s the natural (and arguably only meaningful) way to distribute trust in a single-source packaging ecosystem like most Linux distros have.
> cen·tral·i·zation: the concentration of control of an activity or organization under a single authority.
I mean people try to motte and bailey this all the time. You have someone proposing or defending a monopoly by putting it up against the false dichotomy alternative where no party trusts any other party whatsoever and then everyone is required to do everything on their own because no delegation is possible.
There is an alternate which is neither of those things, and it's a competitive market. You have neither a single authority nor the total absence of trust. Instead there are numerous alternatives that each try to maintain a good reputation for themselves because people can choose freely among them without their choice being coerced by tying it to numerous otherwise-unrelated factors.
Notice how this is importantly different. If you have a PC, you can install Debian or Arch or Windows; if you install Debian, you can install software with apt or flatpak or snap; if you use apt, you can use the official repositories or numerous third party ones. If you have an iPhone, you get iOS and you get Apple's store and everything else is anti-competitively excluded.
My point was that Debian, etc. are conceptually distinct organizations, and so there's no point in centralizing beyond their organizational boundaries. Each already performs centralized key management, but nobody would particularly benefit from a single global keyring for all Linux distributions, because nobody (?) is transferring package formats across distribution families.
> We are truly in the Information Age now, and I suspect a similar thing will play out for the digital realm.
The analogy seems to be backwards though. It would be as if we previously had a scarcity of land and because of that divided it up into private property so markets could maximize crop yield etc. and then someone came up with a way to grow food on asteroids using robots, and that food is only at the 20th percentile of quality but it's far cheaper. Suddenly food becomes much more abundant and the people who had been selling the 20th percentile food for $5 are completely out of the market because the new thing can do that for $0.05, and the people providing the 50th percentile food for $10 are also taking a hit because the price difference between what they're providing and the 20th percentile stuff just doubled.
The existing plantation owners then want to put a stop to this somehow, or find a way to tax it, but arguments like this have a problem:
> Why would a writer put an article online if ChatGPT will slurp it up and regurgitate it back to users without anyone ever even finding the original article?
This was already the status quo as a result of the internet. Newspapers were slowly dying for 20 years before there was ever a ChatGPT, because they had been predicated on the scarcity of printing presses. If you published a story in 1975 it would take 24 hours for relevant competitors to have it in their printed publication and in the meantime it was your exclusive. The customer who wants it today gets it from you. On top of that, there weren't that many competitors covering local news, because how many local outlets are there with a printing press?
Then blogs, Facebook, Reddit and Twitter come and anyone who can set up WordPress can report the news five minutes after you do -- or five hours before, because now everyone has an internet-connected camera in their pocket so the first news of something happening now comes in seconds from whoever happened to be there at the time instead of the next morning after a media company sent a reporter there to cover it.
The biggest problem we have yet to solve from this is how to trust reports from randos. The local paper had a reputation to uphold that you now can't rely on when the first reports are expected to come from people with no previous history of reporting because it's just whoever was there. But that's the same thing AI can't do either -- it's a notorious confabulist.
And it's the media outlets shooting themselves in the foot with this one, because too many of them have gotten far too sloppy in the race to be first or pander to partisans that they're eroding the one advantage they would have been able to keep. Damn fools to erode the public's trust in their ability to get the facts right when it's the one thing people would otherwise still have to get from them in particular.
This assumes the limiting factor is content generation, not ability to read and verify.
You make the point later in your comment (the "randos"), but consider it a minor issue.
The actual limits are verification, and then attention. Verification is always more expensive than generation.
However, people are happy to consume unverified content which suits their needs. This is why you always needed to subsidize newspapers with ads or classifieds.
> This assumes the limiting factor is content generation, not ability to read and verify.
Content generation is the thing copyright applies to. If you want to create a reward system for verification, it's not going to look anything like that.
It mostly looks like things we already have, like laws against pretending you're someone else to trade on their reputation so that people can build a reputation as trustworthy and make money from subscriptions or ads by being the one people to turn to when they want trustworthy information.
> However, people are happy to consume unverified content which suits their needs. This is why you always needed to subsidize newspapers with ads or classifieds.
I suspect the real problem here is the voting thing. When people derive significant value from information they're quite willing to pay for it. Wall St. pays a lot of money for Bloomberg terminals, companies pay to do R&D or market research, individuals often pay for financial software or games and entertainment content etc.
But voting is a collective action problem. Your vote isn't very likely to change the outcome so are you personally going to spend a lot of money to make sure it's informed? For most people the answer is going to be no, so we need something that gives them access to high quality information at minimal cost if we want them to be informed.
Annoyingly, one of the common methods of mitigating collective action problems (government funding) has a huge perverse incentive here, because the primary thing we want people to be informed about is political issues and official misconduct. You can't give the incumbent politicians the purse strings, for the same reason the First Amendment bars them from governing speech.
So you need a way to fund quality reporting the public can access for free. Advertising kind of fit but it never really aligned the incentives. You can often get more views by being entertaining or inflammatory than factual.
The question is basically, who can you get to supply money to fund factual reporting for everyone, whose interest is for it to be accurate rather than biased in favor of the funder's interests? Or, if that's not a thing, whose interests are fairly aligned with those of the general public? Because with that you can use a patronage model, i.e. the content is free to everyone but patrons choose to pay money because they want the work to be done more than they want to not pay.
The obvious answer for "who" is then "the middle class" because they're not so poor they can't pay a few bucks while still consisting of a large diverse group that won't collectively refuse to fund many classes of important reporting. But then we need two things. The first is for the middle class to not get hollowed out, which we're not doing a great job with right now.
And the second is to have a cultural norm where doing this is a thing, i.e. stop teaching people illiterate false dichotomy nonsense where the only two economic camps are "Soviet Communism" in which the government is required to solve everything through central planning and "greed is good" where being altruistic makes you a doofus for not spending all your money on blackjack and cocaine. People rather need to be encouraged to notice that once their basic needs are met, wanting to live in a better world is just as valid a use for free time and disposable income as designer shoes or golf.
> what if the US would use actual physical gold coins instead of dollars?
The problem here is, what if the demand for dollars increases?
In principle the US would get more gold and mint more currency, but gold is a finite resource. "All the gold ever mined" is around 200,000 metric tons; at ~32,000 troy ounces per metric ton, that's ~6.4B troy ounces.
In 2022 (just before the recent gold rally) the price was ~$2000 per troy ounce, i.e. "all the gold" was worth ~$13T. Meanwhile the M3 money supply in the same year was ~$20T. What happens if you try to buy $20T worth of gold to mint currency when only $13T worth has ever been mined, and not all of that is even on the market? The answer is that you can't, so instead the result is deflation, which is bad.
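The arithmetic here can be sanity-checked in a few lines (all figures are the rough approximations from the comment, not precise data):

```python
# Back-of-envelope check of the gold-vs-M3 figures above (all approximate).
TONNES_MINED = 200_000        # all gold ever mined, metric tons
TROY_OZ_PER_TONNE = 32_151    # 1,000,000 g / 31.1035 g per troy ounce
PRICE_PER_OZ = 2_000          # ~2022 USD per troy ounce
M3_SUPPLY = 20e12             # ~2022 US M3 money supply, USD

total_oz = TONNES_MINED * TROY_OZ_PER_TONNE
gold_value = total_oz * PRICE_PER_OZ

print(f"total troy ounces: {total_oz / 1e9:.1f}B")        # ~6.4B
print(f"value of all gold: ${gold_value / 1e12:.1f}T")    # ~$12.9T
print(f"shortfall vs M3:   ${(M3_SUPPLY - gold_value) / 1e12:.1f}T")
```

Even if every ounce ever mined were on the market at that price, it would cover well under two-thirds of the money supply.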
Or to put it a different way, what do you think the economic effect of the recent gold rally would be for a country whose currency was still pegged to gold? It just got way cheaper to import foreign products than buy domestic ones, and way more expensive for foreign countries to buy your exports, so how's the unemployment rate looking? The amount everyone owes on their mortgage hasn't changed but the nominal value of their houses just got cut in half so now they've lost their jobs and are underwater. What happens when they start to default and foreclosures don't allow the banks to recover the principal?
The primary function of money is its trade value, to "lubricate" the real economy to let goods and services flow. When the value is unstable, people are inclined to not spend or not accept that currency, which contradicts the free flow of it and in severe cases harms the economy.
Crypto-'currencies' have the same problem. By nature they are not currencies but investments, for which instability is required. No crypto bro would hype a stable 'currency' because there would be no pumping. Arbitrage trades are considered to be for fools or insiders.
Bitcoin has the same problem. There is no inherent reason you can't have a cryptocurrency where there is no maximum number of coins to ever be mined and instead the limit is that mining them requires a fixed amount of computation.
That would give you the characteristics you want from a medium of exchange, because there is a rate limit on how much can be created (doing so requires e.g. electricity). Then the value is relatively stable, if you accept it as payment on Monday it would still be worth around the same amount on Friday, but the long-term result is a slow reduction in value on multi-year timescales as compute gets cheaper, so you don't get the speculation that results in high volatility and it doesn't strongly compete with real economic activity for investment resources.
The argument you'll get from goldbugs and whatever is that nobody would want a currency which is inherently inflationary like that, but that's clearly contrary to evidence. Most government currencies are inflationary, even on purpose, and it doesn't matter as long as the rate of inflation isn't so high that people holding it transiently for use as a medium of exchange are losing a significant amount in that short period of time. Especially when the rate of inflation is predictable (the rate at which computers get faster is reasonably consistent) so that anyone entering into a long-term contract denominated in that currency can reasonably predict its future value on the delivery date. Or people could just use it as a medium of exchange and denominate their contracts in something else.
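One way to see why that inflation would be predictable is to model the coin's value as tracking the marginal cost of the computation needed to mint it. A minimal sketch, with a hypothetical 20%/year decline in compute cost (the parameter is an assumption for illustration, not a real protocol):

```python
# Hypothetical coin whose issuance is limited only by a fixed amount of
# computation per coin. If the cost of that computation falls ~20%/year,
# the marginal cost of minting -- and hence the coin's long-run value
# anchor -- falls at roughly that rate: slow, predictable inflation.
COMPUTE_COST_DECLINE = 0.20  # assumed yearly drop in cost per unit of compute

def value_after(years, initial_value=1.0):
    """Predicted real value of one coin after `years`, if value tracks
    the marginal cost of the computation needed to mint it."""
    return initial_value * (1 - COMPUTE_COST_DECLINE) ** years

# Over a week the value barely moves, so it works as a medium of exchange:
print(round(value_after(7 / 365), 4))  # ~0.9957
# Over a decade it loses most of its value, discouraging hoarding/speculation:
print(round(value_after(10), 4))       # ~0.1074
```

The point of the sketch is the asymmetry: negligible loss over the holding period of a transaction, substantial loss over speculative timescales.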
You are right that the harm of unstable currencies is a matter of public perception, but boiling the frog slowly still does harm. Inflation is a significant, long-term, bottom-up wealth pump, and simply pointing to empirical evidence is a point against unstable currencies, not for them.
How cryptocurrencies are mined is secondary in that regard too, because their value is also purely based on perception: there is no large authority backing the currency by, e.g., demanding, spending, and regulating it.
Mining Bitcoin requires both hardware and electricity, and the cheapest electricity is solar. There isn't any severe scarcity of the raw materials to make solar panels, or of sunlight, so Bitcoin miners can buy as many solar panels as they want and it would only increase the economies of scale for producing them for other purposes too.
Solar has inconsistent output: there is none at night and it varies with the weather during the day, while mining hardware wants a fixed, constant amount of power. The logical thing for miners to do is to somewhat overbuild the amount of generation they need, sell any surplus to the grid during the day, and buy power back at night. The same incentives hold if the miners and the generators are two different parties, and the result is to increase generation capacity by more than the amount of consumption and have "too cheap to meter" during periods of above-average generation. (You were never going to get "too cheap to meter" during periods when generation is low and demand is high.) And during short periods when demand significantly outstrips supply, their incentive is to stop operating for those few days out of the year, because the spot price of electricity makes mining unprofitable then. That allows the generation capacity installed for mining to be used to support the rest of the grid, and keeps the price of electricity from rising above the point where mining becomes unprofitable even for people who already have mining hardware. It's basically a buffer that buys electricity when it's cheap and sells when it's expensive.
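The buy-low/sell-high behavior can be sketched as a simple dispatch rule (the breakeven price and the numbers below are made up for illustration):

```python
# Sketch of a miner-with-generation dispatch rule: mine when electricity
# is worth less than the mining breakeven, sell power to the grid when
# the spot price rises above it. The effect is a buffer that caps how
# much mining demand can push up grid prices during scarcity.
BREAKEVEN = 0.06  # $/kWh at which mining revenue equals power cost (assumed)

def dispatch(spot_price, generation_kw):
    """Return (kW used for mining, kW sold to grid) for one interval."""
    if spot_price < BREAKEVEN:
        return generation_kw, 0.0   # cheap power: mine
    return 0.0, generation_kw       # scarce/expensive power: support the grid

print(dispatch(0.03, 500))  # midday solar surplus: all 500 kW goes to mining
print(dispatch(0.25, 500))  # demand spike: mining stops, all 500 kW to the grid
```

Real operations would hedge with futures and account for hardware depreciation, but the one-line threshold captures the incentive described above.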
Bitcoin has a volatile price. When the price is high, miners buy hardware and increase generation capacity, or pay someone else to increase it. When the price declines, the mining hardware goes idle, but the generation capacity still produces fungible electricity that can be used for any other purpose. The result is that miners pay to install a lot of generation capacity during the boom, with an incentive to prioritize investing in more generation over newer/more efficient mining hardware, because the generation is the thing that's still worth something if the price declines. That capacity then gets offloaded onto the grid during the bust, so grid prices go up some during the boom and down by even more during the bust. By the next boom, some of the generation added last time has already been sold to non-miners or locked into long-term contracts, so the miners are back to adding new capacity again.
"Incentive to fund increases in generation capacity but then not use all of it" has what effect on average prices?