Been using one of the Analog Devices demo boards, I think one of the ADIN1100 parts (which is 10BASE-T1L rather than T1S), to solve a problem that occurred when I let a mechanical engineer order a cable for me. No twisted pair inside. Ethernet on either end. We barely found space inside the pressure vessel for the demo board (the project was running on fumes at that point, so instead of designing a tiny board with just the few required parts I bought a pair of these boards), and connected the other one to the other end of a pair of wires in the cable.
Astonishingly, not only did it work the first time, it worked well enough to stream video. I'm hoping to see a more standardized PoDL interface crop up so that I can use two wires for data and power combined, for lower-power subsystems.
There are a lot of applications for this standard. I hope it catches on enough that I can keep buying chips for it for a long time.
About 4 years ago I tried to use 10BASE-T1 with PoDL (power over data line) in a robotics project. I gave up after about 18 months of trying to put together the first prototype. I was able to get engineering samples (before they were on the market) of the magnetics and related stuff from Pulse, but I couldn't manage to get a PHY from anyone.
The reason I wanted to use it is power consumption. I had previously planned to use 100BASE-T (normal 100 Mbps Ethernet) in a robotics project, but it turns out that it uses about 0.8 W per port (1.6 W per cable), so it was by far using more power than my various microcontrollers. Faster Ethernet uses even more power. There is a standard for low-power Ethernet (Energy-Efficient Ethernet, 802.3az), but it's not a huge improvement and you really have to go out of your way to select components that support it.
I ended up using CAN, but it will be really nice when these standards become well-supported.
It does! The collision detect signal is repurposed somewhat for PLCA but is in fact working and backwards compatible with 10BASE-T MACs if you ignore the additional functionality. The stations with PLCA will try to avoid collisions with each other but you can barge in and collide with them anyway and it will still work.
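For anyone picturing how PLCA shares the wire, here's a toy model of one cycle (the node count and queue states are made up for illustration; the real state machine lives in the 802.3cg PLCA reconciliation sublayer):

    #include <stdbool.h>
    #include <stdio.h>

    #define NODE_COUNT 4 /* assumed bus size; the standard requires support for at least 8 */

    /* Toy model of one PLCA cycle: node 0 opens with a BEACON, then each
     * node ID in turn gets a transmit opportunity (TO). A node with
     * nothing queued lets its TO expire almost immediately, so idle
     * nodes cost very little bus time. Legacy CSMA/CD stations ignore
     * all of this and may still collide, which the PHYs tolerate. */
    static void plca_cycle(const bool has_data[NODE_COUNT])
    {
        printf("node 0: BEACON\n");
        for (int id = 0; id < NODE_COUNT; id++) {
            if (has_data[id])
                printf("node %d: transmits during its TO\n", id);
            else
                printf("node %d: yields (TO timer expires)\n", id);
        }
    }

    int main(void)
    {
        bool queued[NODE_COUNT] = { false, true, false, true };
        plca_cycle(queued);
        return 0;
    }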
The main warts in the standard are dynamic PLCA node ID allocation and the fact that PoDL is undefined for non-point-to-point topologies, besides how odd it is to implement SCCP for PoDL.
I've been thinking that a way around the PoDL aspect is to ignore SCCP and to just use periodic square wave pulse trains with a certain frequency to indicate to the PSE that it should switch power on. Basically an "I know what I'm doing" indicator to prevent sniffer adapters and unterminated connections from being provided live DC voltage.
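A minimal sketch of the PSE side of that idea (the 1 kHz signature frequency, tolerance, and one-second measurement window are my own made-up numbers, not anything from the standard):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define SIGNATURE_HZ 1000u /* assumed "I know what I'm doing" frequency */
    #define TOLERANCE_HZ   50u /* assumed acceptance window */

    /* Decide whether to switch PSE power on, given the rising edges
     * counted on the unpowered line over a one-second window. A sniffer
     * adapter or an unterminated cable emits no such pulse train, so it
     * is never handed live DC voltage. */
    static bool pd_signature_present(uint32_t edges_per_second)
    {
        return edges_per_second >= SIGNATURE_HZ - TOLERANCE_HZ &&
               edges_per_second <= SIGNATURE_HZ + TOLERANCE_HZ;
    }

    int main(void)
    {
        printf("1003 edges/s -> power %s\n", pd_signature_present(1003) ? "on" : "off");
        printf("   0 edges/s -> power %s\n", pd_signature_present(0)    ? "on" : "off");
        return 0;
    }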
Does anyone know the safety implications of a multi-drop bus like this for safety-critical systems? Typically if one device fails it can bring the entire network down, which is why single-drop (point-to-point) buses have become popular for diverse networks.
You should have a plan for network instability or outages that triggers a safe stop for anything affected by that fault domain. If uptime is important then you might have two separate networks side by side. You can also split a large fault domain into a few smaller networks. There is also the possibility of being clever with how things are wired so that the PHY can be physically cut out of the bus by a watchdog, but that doesn't help with cable faults and introduces its own possibilities for failure.
It's also possible to have another set of emergency stop signal wires side by side with the digital communications in case there's an issue.
Isn't CANbus a multi-drop bus? Do car companies run safety-critical functionality off of it?
My understanding is that 10BASE-T1S is a replacement for CANbus. Whatever CANbus does, 10BASE-T1S should do it better. It happens to be Ethernet (albeit a much slower Ethernet), but Ethernet/IP/TCP is something modern programmers understand better than UARTs / CANbus.
Actually, CAN XL is just around the corner, with up to 20 Mbps vs the 10 Mbps of 10BASE-T1S. The other difference is that CAN XL has a payload of 2048 bytes and will have data-link-layer security from the get-go, while 10BASE-T1S carries just a little over 1 kB and MACsec is still very expensive.
That said, both technologies are just prototypes, and expected availability is in about 3 years.
I'm not an expert on automotive systems specifically, but in general with silicon-level stuff like this, used in small, highly integrated systems, just having a PHY isn't nearly enough. Really, you want it baked into all the MCUs and MPUs your particular vertical uses, so you don't need to waste time making your board designers wire up more shit, and you can use the vendor library for your MCU instead of writing more shit. Most automotive microcontrollers come ready to go with a full CAN interface, including all the libraries and such that you'd want, as well as dev boards that pin it out to whatever connector, so you don't need to waste a board spin cycle creating an EDU.
So you kinda caught me here, I'm pulling my own vague spin on Cunningham's Law because in my industry (satellites) there has suddenly been some interest in using CAN for potentially fault-prone critical systems and this post caught my interest. Traditionally we use single ended buses for everything but traditionally we do a lot of things ¯\_(ツ)_/¯
10BASE-T1S is also kind of interesting in the same way, especially if it's possible to use an existing Linux network stack on top of it.
In 10BASE-T1S a device can fail but it won't impact the other nodes on the bus unless there is a physical failure in the wire at the connector. All the nodes are wired together on the same cable and see the same data. The device doesn't need to do anything for the other nodes to receive the data.
First of all: this is multidrop. So if a single device fails, there is no immediate effect on the network, meaning all the other devices can still communicate with each other.
This is different from other Ethernet-based solutions, which require some sort of switch, either central or distributed within each device. When something fails there, it has a much deeper impact. About 5 years ago GM thought about this and required switches within each and every device. Signals would then be transmitted redundantly, in parallel over different links. The problem there is the proper deduplication and verification of the data, because that costs resources in terms of time, compute power, and memory.
There isn't a standard connector for 10BASE-T1S or other automotive Ethernet standards such as 100BASE-T1. OEMs (= car manufacturers) will often define their own custom connector. E.g., the Ethernet signal might just occupy two pins on a much bigger ECU connector.
If you're just experimenting, do everyone a favor and use RJ45 or screw terminals. Those special connectors are really expensive and require special crimp tools.
RJ45 needs to die, as does every other connector that was ever used for more than one incompatible thing. USB got it right: build a custom connector for your application so that nobody will ever plug the wrong thing in.
I don't care what you replace Ethernet RJ45 with, but make it a standard that is only used for Ethernet.
Weird connectors are hard to hack on. You bring up USB but I couldn't think of a worse example. USB got it wrong, it supports so many different things on a single connector that in some situations it can be impossible to determine what will happen when you connect device A to device B with cable C. Now don't get me wrong, it's great for plugging things into your laptop, but the world of cabling is so much bigger than that, and we don't really need to bring in even more BS connectors where they don't belong.
At the end of the day it's someone's job to actually run cable, terminate and test it, and attach equipment to it that people are responsible for. You have to remember that the physical cable plant is infrastructure itself whether it's connected to live equipment or not. There's already a big enough problem with people trying to do things like route display cables through buildings, for example - the workaround that I'm seeing a lot is that people are using specialized signal converters that send HDMI/DisplayPort over a pair of Cat5e/Cat6 cables, just so that structured cabling can use normal cabling/networking supplies (i.e. keystone jacks, patch panels, conduit sizes, testing/tools, etc.) without wasting so much time with planning for projectors/displays ahead of time etc.
RJ45 is for twisted pair cabling. It's not specifically for Ethernet. 802.3cg only uses 1 pair. If you use the center pair then it works for TIA-568A, TIA-568B, or USOC. It's great for experimenting since everyone already has twisted pair patch cables to play with. Barring that, just give me screws that I can put the wires into so I don't need to think about connectors at all.
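Concretely, the center pair is pins 4 and 5 of the 8P8C, which is the blue pair in both T568A and T568B (using it for the single pair is just a bench convention, not anything 802.3cg mandates):

    #include <stdio.h>

    /* 8P8C pin -> wire color for T568A and T568B. The center pair (pins
     * 4 and 5) is the blue pair in both schemes, and the first line in
     * USOC, which is why any ordinary patch cable carries the single
     * pair regardless of how it was punched down. */
    static const char *t568a[8] = { "wh/grn", "grn", "wh/org", "blu",
                                    "wh/blu", "org", "wh/brn", "brn" };
    static const char *t568b[8] = { "wh/org", "org", "wh/grn", "blu",
                                    "wh/blu", "grn", "wh/brn", "brn" };

    int main(void)
    {
        for (int pin = 4; pin <= 5; pin++)
            printf("pin %d: T568A=%s T568B=%s\n",
                   pin, t568a[pin - 1], t568b[pin - 1]);
        return 0;
    }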
I did HDMI over Ethernet at home. It works with a 19 m long Cat 6 cable plus two 3 m patch cables. Beware of bending the cable too much if it must go inside pipes in the walls.
Whatever connector you stick on the end, as long as it gains popularity, someone will use it in weird ways. The second you start selling these connectors widely and at affordable prices, they're going to get reused.
See: USB over D-SUB, serial over RJ45, the wide range of proprietary protocols that cheap IDE cables were (and probably still are) used for, and weird edge cases like reusing sturdy DMX ports to power sex toys.
8P8C is fine for ethernet, it doesn't need replacing. Nobody uses it for telephony modems anymore and very rarely will an average user encounter a compatible plug that wasn't made for ethernet.
I'd rather see fringe use cases like serial over RJ45 use a better connector than to have to cut off, strip, and wire up a new connector for every new network appliance I'll buy in the next 10 years.
To be extra pedantic, it's serial with an "8P8C modular connector", not over RJ45. Sure, the physical connector is the same thing, but the RJ- standards define the uses of the various pins, not just the physical connector. RJ45, RJ49, and RJ61 all use the same "8P8C modular connector". RJ45S is for only one data line, with a programming resistor. RJ49C is an 8P8C carrying ISDN via NT1, RJ61 is four telephone lines on an 8P8C. There's no RJ standard for RS232 or for Ethernet at all!
Everyone (including me) still calls it RJ45 because "8P8C modular connector" is far too long a name.
It's purely an analog device. Presumably they used USB because the cables are cheap, shielded, and have enough conductors for what they needed. (Probably ground plus signal wires for two linear pots and a pressure sensor.)
I hope they designed it so it doesn't fry something if you connect it to a real USB device, but I'm not inclined to try it.
You can run it on whatever you want. Ethernet pairs in a harness with dozens of other wires and MIL-DTL-38999 connectors are pretty standard in aerospace.
> The cable harness used today is one of the three heaviest subsystems in the vehicle (weighing up to 60 kg). Traditional Ethernet cables use four differential pairs for data transmission, adding weight and routing complexity, which is not optimal for automotive applications. To address this, new IEEE standards were developed to support Ethernet transmission over single twisted pair cables, which, coupled with the reduced cable harness lengths enabled by zonal architecture, can offer significant cable savings and weight reduction.
Because in cars that are at least 1000kg, a few grams definitely matter.
Actually the weight savings is irrelevant. The advantage is in the simplification of the cable assemblies! Did you know that just about every cable assembly in every car is made by hand‽ Hackaday wrote an article all about it:
By having just one wire pair to transmit data and power to components in each zone, there will be significant cost savings. It also makes various automotive engineers' jobs a lot easier.
So that makes me a bit concerned. This is a single conduit with collision detection. All fine and good.
So does that mean every PCB and subassembly/controller in a car will be daisy-chained? Would one single broken wire disable the entire car or just the downstream modules? Do you create a "backbone" down the length of the car and fan the signal out like a nervous system?
It won't be that everything in the car is connected on one bus. What is more likely is that you'll have a zonal ECU that controls a group of a few sensors and actuators. That zonal ECU will likely be connected to other parts of the vehicle through more traditional Ethernet. The 10BASE-T1S standard requires support for a minimum of 8 nodes on the bus (implementations can support more), but you aren't going to be hooking absolutely everything together on it.
>Would one single broken wire disable the entire car
Not in this case but try shorting the two CAN bus wires together in an older vehicle.
There is a TON of work in designing, routing, manufacturing, and installing harnesses, which I think is more important than the mass savings. Harnesses really complicate design and assembly, because you have harnesses that pass through multiple parts and subassemblies managed by different teams.
The multidrop feature of this standard means that in a car, you could have a main harness that rarely needs any updates, and as features are added to the next model of car, most updates could be handled with a local wire harness update.
Harness mistakes cost Airbus billions on the A380 program. [1]
Save a few dollars, times tens of thousands of vehicles in higher-production models.
I've posted this before in more detail, but the short version is that when I was working at Ford Motor Co in the late 90's I remember seeing some internal documents championing how they saved ~$200 off a production Taurus (at the time a ~$20,000 vehicle) via a bunch of $10 and $20 individual cost savings. It was a big deal, added up to real dollars.
There is more cost savings than you might think in a simplified wiring harness.
Some 3rd party organization builds the harness, then it ships to the manufacturer, where it is usually installed by humans.
Besides the material savings of less actual wire, you most likely have labor savings on the harness build, and possibly on the installation if the new harness is easier to install based on the reduced overall weight and complexity.
The networking methodology side would likely not be overly complex. We already have CANbus device networks, and the associated software stacks. Changing to an ethernet based approach is a well-understood transition that would not require major changes, at least not beyond the incremental updates and other things that the engineers are already likely to be working on.
There are benefits in weight, cost, system complexity, network capabilities, etc. There are certain things you just cannot do with CAN bus that manufacturers want to do. Using traditional Ethernet is possible (and I've done it), but the second you bring it up to penny pinchers they get a headache and the conversation is over. Having something like this lets you make the transition while not just increasing capabilities, but actually decreasing costs.
Dollars? You know automotive calculates in tenths of a cent. Every cable needs an appropriate connector. Every wire needs its dedicated pin in the connector. Did you know that the connector housing is often molded directly into the case because it's cheaper?
Ford alone at one point sold over 6.6 million vehicles a year.
So, you're telling me, the manufacturers would've switched to this new amazing way of doing networking in the car, but it would've cost them an extra dollar and added a few hundred grams to the weight of the car, so they just had to wait for this standard to come along?
10/100BASE-T1 and its relatives are a long time coming. Car manufacturers are actively making these standards, they aren't "waiting for it to come around".
And if it was only in one place, yea, maybe not worthwhile. But what happens if you save $5 each on 300 different systems in a production run of a million cars?
But you don't save 5 dollars on 300 systems, you save less, per car. The price and weight of the cables has very little to do with the decision to use this vs other ethernet standards.
Automotive manufacturers will spend 6 figures in NRE costs to save $0.003 in unit costs. If they only redesign every few model years, use the same hardware across several models, and they sell millions of cars per year, the math works out pretty well.
I've said it in another reply, but the automotive industry would've switched to this amazing new standard, except it would've added a couple of dollars to the cost and a few hundred grams to the weight of the car, so they just had to wait for this?
10BASE-T1S is more of an alternative/replacement/upgrade for CAN and CAN FD, which also use a single wire pair as a bus. And automotive 100 Mbit/s and 1 Gbit/s Ethernet are also single-pair. (But they are point-to-point and therefore more expensive, as the article explains.)
Some low-power, low-speed, low-cost Ethernet would also be very interesting for home automation. Thread/Zigbee offers just 250 kb/s, and that is already enough. Cables offer more reliability and the possibility of a low-voltage DC power supply.
Most HNers are probably too young to remember alternatives to Ethernet like Token Ring, but oddly enough that's what this sounds a lot like.
The only other thing I don't understand is that if each node has the opportunity _not_ to transmit, that seems like it would cause jitter in latency if a bunch of nodes sometimes transmit and sometimes don't. Not sure what I'm missing.
What a horrible cookie consent screen. Using Firefox on Android I get just one option, "accept all", with no option to close or minimize the blocking popup, and no option to accept anything less than all cookies.
The standard should be written to let the devices decide what speed they want to speak. If the sender and receiver of a packet both want to speak 1 Gbit over the link with clever QAM and adaptive channel coding, then let them do that... and if another device only supports 10 kbit/s with a single MOSFET and a pull-up resistor, then that's fine too...
The purpose of the standard should be to define who gets to speak when on the link, and how those speeds should be negotiated, and how devices can fairly share the capacity and be functionally compatible.
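A toy version of the negotiation half of that idea, in the spirit of Ethernet autonegotiation's priority resolution but with no upper bound baked in (the bitmask encoding and the rate table are made up for illustration):

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical capability bitmask: bit n set = "I can run rate n".
     * Rates are ordered slowest to fastest; new, faster rates just
     * claim higher bits without breaking older devices. */
    static const char *rate_names[] = {
        "10 kb/s (single MOSFET + pull-up)", "10 Mb/s", "100 Mb/s",
        "1 Gb/s (clever QAM + adaptive coding)",
    };

    /* Each end advertises its mask; both sides independently pick the
     * highest common bit, so they always agree without a referee. */
    static int negotiate(uint32_t mine, uint32_t theirs)
    {
        uint32_t common = mine & theirs;
        if (common == 0)
            return -1;                   /* no shared rate at all */
        int best = 31;
        while (!(common & (1u << best)))
            best--;
        return best;
    }

    int main(void)
    {
        uint32_t fancy = 0xF; /* supports all four rates */
        uint32_t tiny  = 0x1; /* the single-MOSFET device */
        int r = negotiate(fancy, tiny);
        printf("agreed on: %s\n", r < 0 ? "nothing" : rate_names[r]);
        return 0;
    }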
The cynic in me says that if standards were written this way, with no upper bound on data rate, symbol rate, bandwidth, frequency use, etc, then standards writers would soon be out of a job.
Just like RS-232 serial links originally ran at speeds like 300 bps, but now people use functionally the same protocol at 9600 bps, 57,600 bps, 1,000,000 bps, 25,000,000 bps, etc. The original RS-232 standards group is long dead, yet the standard keeps going because people have been able to use the same spec (with a few tweaks along the way) faster and faster for six decades!
Not really. There is no ring, neither physically, nor logically. It's based on CSMA/CD, like the "good old" coaxial cable Ethernet. (Optionally it can use additional methods to avoid collisions, though.)
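For reference, the retransmission rule that "good old" CSMA/CD used, truncated binary exponential backoff (the 16-attempt and 2^10 limits are from 802.3; rand() here just stands in for a real entropy source):

    #include <stdio.h>
    #include <stdlib.h>

    #define MAX_ATTEMPTS  16 /* give up after 16 collisions (802.3) */
    #define BACKOFF_LIMIT 10 /* exponent stops growing at 10 */

    /* After the nth collision, wait a random number of slot times drawn
     * uniformly from [0, 2^min(n,10) - 1] before trying again. */
    static int backoff_slots(int collisions)
    {
        int e = collisions < BACKOFF_LIMIT ? collisions : BACKOFF_LIMIT;
        return rand() % (1 << e);
    }

    int main(void)
    {
        for (int n = 1; n <= MAX_ATTEMPTS; n++)
            printf("collision %2d: wait %4d slot times\n", n, backoff_slots(n));
        return 0;
    }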
One effect of standards like these is that they are a higher barrier to entry. At a previous job doing embedded development, we had some requests from automotive manufacturers but didn't apply for the tenders, because that would have required redesigning our product. Some parts were bought in, and redesigning those wouldn't have been possible; for the others, the upfront investment was too much considering the risk.