At some point you end up testing the peripheral and/or host rather than the cable. For example, cables often state that they can handle up to 240W ... but no 240W USB-PD chip has ever gone into production -- you won't even find one at the hottest USB-PD trade shows[0] in China.
It could be reasonable to let computers trigger a data throughput test: the peripheral would state "I support up to 40Gbps of receiving/sending" and then send a simple pattern that can be generated on the fly. But a lot of devices can't receive/send that 80Gbps of data for long enough to perform a decent test - the storage, RAM, buffers, etc. get depleted or act as bottlenecks.
If you know enough to accurately interpret the measurements you get from that, you know enough to write your own program that tries to send 80Gbps from one computer to another and uses DMA to process it in real time without touching storage (which a lot of peripherals likely don't have the CPU to accomplish).
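For what it's worth, the skeleton of such a program is simple even if the speeds aren't. A minimal RAM-to-RAM sketch, assuming the link under test shows up as a network interface (e.g. IP over Thunderbolt); the address, port, and buffer size are made up, and Python itself would top out far below 80Gbps, so this only illustrates the structure - a real test needs zero-copy I/O or DMA as noted above:

    import socket
    import sys
    import time

    HOST, PORT = "192.168.100.1", 5201   # hypothetical peer address and port
    CHUNK = 64 * 1024 * 1024             # one in-RAM buffer, reused forever
    DURATION = 10                        # seconds to run the test

    def sender():
        pattern = bytes(range(256)) * (CHUNK // 256)  # generated once, in RAM
        with socket.create_connection((HOST, PORT)) as s:
            sent, end = 0, time.monotonic() + DURATION
            while time.monotonic() < end:
                s.sendall(pattern)
                sent += len(pattern)
        print(f"sent at {sent * 8 / DURATION / 1e9:.2f} Gbps")

    def receiver():
        buf = bytearray(CHUNK)           # reused buffer; storage never touched
        with socket.create_server(("", PORT)) as srv:
            conn, _ = srv.accept()
            got, start = 0, time.monotonic()
            while (n := conn.recv_into(buf)):
                got += n
            elapsed = time.monotonic() - start
        print(f"received at {got * 8 / elapsed / 1e9:.2f} Gbps")

    if __name__ == "__main__":
        sender() if sys.argv[1:] == ["send"] else receiver()

The point is that the receive side only ever counts bytes into a reused buffer, so storage never becomes the bottleneck.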
If you don't know enough to write those test applications, you probably don't know enough to interpret the results of a built-in test function and the measurements would confuse and frustrate a lot of well-meaning, nerdy, but under-educated consumers who make assumptions about why they're not actually getting the rated speed.
Idk, my opinion doesn't go one way or the other here. Perhaps I myself don't quite know enough to be a good judge of that concept.
> For example, cables often state that they can handle up to 240W ... but no 240W USB-PD chip has ever gone into production -- you won't even find one at the hottest USB-PD trade shows[0] in China.
Your information is out of date. You can buy 240W chargers from Framework, which I assume are just rebranded Delta chargers.
I think you're overthinking the bottleneck side of things: RAM-to-RAM would be sufficient to determine whether the cable is capable of 40Gbps.
All an end user cares about is whether the cable is the bottleneck, assuming you have known-good devices. If I have a MacBook and a good NVMe enclosure, I want to know if my cable is fast enough, rather than have it quietly fall back to USB 3.2 or worse.
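On Linux you can at least see what actually got negotiated without running any traffic, via the per-device "speed" attribute the USB core exposes in sysfs (values are Mbit/s, e.g. 480, 5000, 10000). A small sketch; the path is standard, but treat the details as my assumption for your particular kernel:

    # Print the negotiated speed of each connected USB device (Linux only).
    from pathlib import Path

    for dev in sorted(Path("/sys/bus/usb/devices").iterdir()):
        speed = dev / "speed"
        if speed.exists():
            product = dev / "product"
            name = product.read_text().strip() if product.exists() else dev.name
            print(f"{name}: {speed.read_text().strip()} Mbit/s")

If the enclosure shows up at 5000 Mbit/s instead of 10000+, something fell back, and no throughput test is needed to prove it.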
You don't need to test at 240W. You primarily need to test that it can handle 5 amps with limited voltage drop. You can also test that it handles 48 volts, but basically any cable can handle 48 volts. The chance that either one of those very mild operating conditions compromises the other when you combine them is minimal.
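To put numbers on "limited voltage drop", here's a back-of-the-envelope pass/fail check. The 250 mV round-trip budget below is my own illustrative figure, not a number quoted from the USB Type-C spec:

    # Does a measured cable pass a 5 A voltage-drop check?
    I_TEST = 5.0        # amps - the condition we actually care about
    V_BUDGET = 0.250    # volts - ASSUMED allowable round-trip IR drop

    def cable_passes(r_vbus: float, r_gnd: float) -> bool:
        r_loop = r_vbus + r_gnd          # current goes out and comes back
        drop = I_TEST * r_loop           # V = I * R
        heat = I_TEST ** 2 * r_loop      # P = I^2 * R, dissipated in the cable
        print(f"drop = {drop * 1000:.0f} mV, dissipation = {heat:.2f} W")
        return drop <= V_BUDGET

    cable_passes(0.020, 0.020)   # 40 mOhm loop -> 200 mV, 1.0 W: passes
    cable_passes(0.040, 0.040)   # 80 mOhm loop -> 400 mV, 2.0 W: fails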
> no 240W USB-PD chip has ever gone into production
This is because the cross-sectional area of the conductor would make for an inflexible cable – and even then, the connector (even though rated) could never handle a sustained 240W in the real world.
Fires. Fires everywhere... this is why no 240W chip exists.
240W for USB-PD is only 5 amps (the USB spec only calls for 240W at 48V), which can be safely carried by a standard 16AWG conductor.
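The arithmetic, using the published ~13.2 mOhm/m figure for 16AWG copper; the 2 m cable length is my assumption:

    # 240 W at 48 V is 5 A - what does that do in a 2 m 16AWG pair?
    P, V = 240.0, 48.0
    I = P / V                       # 5.0 A
    R_PER_M = 0.0132                # ohms per meter, 16AWG copper
    LENGTH = 2.0                    # meters (assumed)
    r_loop = R_PER_M * LENGTH * 2   # out on VBUS, back on GND
    print(f"{I:.1f} A, {r_loop * 1000:.1f} mOhm loop")
    print(f"{I * r_loop * 1000:.0f} mV drop, {I ** 2 * r_loop:.2f} W of heat")
    # -> 5.0 A, 52.8 mOhm, ~264 mV drop, ~1.32 W spread along the whole cable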
USB-IF certifies plenty of USB cables as being tested safe for 240W. The reason 240W chargers don't exist is due to cost and a chicken-and-egg problem. There’s not really any demand for it.
Just so I understand: would "extra snubbing" mean the USB-C cable wiggles less when plugged in (i.e. tighter tolerances)?
If so, this would probably mean it'll break/deform more easily, too, no?
My above perspective comes from literally decades of replacing burned-out devices (both freelance residential and IBEW datacenters) which "technically" were installed correctly; I know their realworld-alities.
There is an electrical circuit that suppresses the voltage spike when you suddenly unplug the cable, to suppress arcing. This improves immediate physical safety at high power levels and reduces the wear that happens. No physical changes.
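To see why a snubber earns its keep on disconnect, here's a toy V = L*di/dt estimate; both the 1 uH loop inductance and the 100 ns break time are assumed orders of magnitude, not measured values:

    # Inductive spike on a hard 5 A disconnect: V = L * di/dt
    L_LOOP = 1e-6    # henries - ASSUMED inductance of a ~2 m cable loop
    DI = 5.0         # amps interrupted
    DT = 100e-9      # seconds - ASSUMED contact break time
    print(f"unclamped spike ~ {L_LOOP * DI / DT:.0f} V")               # ~50 V
    print(f"stored energy   ~ {0.5 * L_LOOP * DI ** 2 * 1e6:.1f} uJ")  # ~12.5 uJ
    # A snubber gives that stored energy somewhere to go other than an arc
    # across the separating contacts.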
My lay understanding is that USB-C Power Delivery isn't even initiated until comms have established the supported wattage? ...or perhaps some very low 5W, USB-A-like amount. On sudden disconnect, I presume you're talking about a debouncer (RC) circuit?
----
The concern I have is less about initial arcing (i.e. intentional [dis]connections) and more about long-term sustained power draw on a loose connection (I have seen soooooo many melted neutral terminals on 120V receptacles). Connections become loose for a variety of reasons (including but not limited to bad installation), particularly where things run hot (small wires, corrosion, thermal cycling).
Does the low-voltage world have the same 80% derating as insidewireman-land (NEC/AHJ)? i.e. does a 240W PD USB-C connection allow continuous 240W delivery (by protocol/standard/regulator), or is it neutered to 180W for "long-term loads" == 3+hr runtime (e.g. a computer display), with only ≥181W peaking allowed...?
I just cannot see how such a small connector/cable can deliver sustained 240W, in the realworld that I've lived in.
> My lay understanding is that USB-C Power Delivery isn't even initiated until comms have established the supported wattage? ...or perhaps some very low 5W, USB-A-like amount. On sudden disconnect, I presume you're talking about a debouncer (RC) circuit?
Correct that this is only a worry about disconnects.
> The concern I have is less about initial arcing (i.e. intentional [dis]connections) and more about long-term sustained power draw
I think devices usually monitor voltage to make sure there isn't too much loss, and you're probably not going to get enough loose pins at the same time to see dramatic issues.
It's a valid concern, but it's a concern you'd see on almost any type of plug, isn't it?
> Does the low-voltage world have the same 80% derating as insidewireman-land (NEC/AHJ)? i.e. does a 240W PD USB-C connection allow continuous 240W delivery (by protocol/standard/regulator), or is it neutered to 180W for "long-term loads" == 3+hr runtime (e.g. a computer display), with only ≥181W peaking allowed...?
They're not worried about heating that takes more than 3 hours, so that specific kind of derating isn't part of the spec.
The 3 or 5 amp limit is designed around continuous load.
> I just cannot see how such a small connector/cable can deliver sustained 240W, in the realworld that I've lived in.
Well for sustained current we're worried about the amps, right? You get the same resistance and heat in the plug regardless of voltage.
Before USB-C, we were putting 3 amps over a single pin each way in a Micro-USB connector. Now with USB-C we're putting 5 amps over 4 pins each way, with the new pins almost as big as the old ones.
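Per pin, that works out in USB-C's favor (pin counts from the paragraph above; the math is just division):

    # Per-pin current, Micro-B vs Type-C, using the counts above
    micro_b = 3.0 / 1    # 3 A through one VBUS pin
    type_c = 5.0 / 4     # 5 A shared across four VBUS pins
    print(f"Micro-B: {micro_b:.2f} A/pin, USB-C: {type_c:.2f} A/pin")
    # -> 3.00 vs 1.25 A/pin: per-contact stress went down, not up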
>you're probably not going to get enough loose pins at the same time to see dramatic issues ... it's a valid concern, but it's a concern you'd see on almost any type of plug, isn't it?
nVidia_12VHPWR_sweating_bullets_.gif
(if unfamiliar, the 12VHPWR is the fire hazard found on some modern GPUs)
----
In amps versus volts, there are trade-offs to be made. Yes, I agree that amperage is the primary generator of heat... but is voltage not the primary degrader of insulation/gaps (particularly on something so user-facing)? In a perfect world...
kids_phone_cord.frayed
----
Thanks for the great discussion. I'm learning/adapting. This oaf breaks.things.lots
I'm aware of the nvidia thing. But these particular pins have much less individual leeway, and they're a few mm apart in a pretty tight shell so you can't get the same kind of crooked install.
More voltage has more dangerous aspects, but 48V isn't all that high, and in a steady state it's not causing problems.
Plenty of USB-C cables are capable of charging at 5 amps continuously and do so today. How is the voltage relevant to how much power is dissipated in the cable? Voltage is the only difference between a 100W charger and a 240W charger; both top out at the same 5 amps.
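Concretely, since the 100W profile is 20V at 5A and the 240W profile is 48V at 5A, the cable heating is identical (the 100 mOhm loop resistance is an assumed round number):

    # Cable heating depends only on current: P_cable = I^2 * R
    R_LOOP = 0.100   # ohms - ASSUMED round-trip cable resistance
    for name, volts in [("100 W charger", 20.0), ("240 W charger", 48.0)]:
        amps = 5.0                              # both profiles cap at 5 A
        print(f"{name}: {volts * amps:.0f} W delivered, "
              f"{amps ** 2 * R_LOOP:.1f} W heating the cable")
    # Same 2.5 W in the cable either way; only the delivered power changes.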
I'm envisioning some future frayedAF school laptop cord, where increasing voltage correlates with a higher likelihood that those amps can more readily arc/jump (across melt, muck, and matter).
At the end of the day, an increase in either voltage or amperage calls for a sturdier design (of ports and cables).
> At the end of the day, an increase in either voltage or amperage calls for a sturdier design (of ports and cables).
The cables themselves are already plenty tolerant, from an insulation standpoint, for 48V. The voltage is low enough not to harm anyone. The ports, as already mentioned elsewhere, are designed with snubber circuits for rapid reduction in voltage during an unplug. There's a keep-alive that cuts voltage as soon as it no longer detects anything plugged in (or, perhaps, the cable gets damaged and can't communicate).
Seems to me like the sturdier design is already accounted for. I don't think "it's small therefore I don't like it" is a valid reason to distrust the standard inherently.
0: https://asiachargingexpo.com