I'm a firmware engineer, and the languages I use most for professional work are C and Rust. C is less "safe" than Rust in the sense that a larger share of C code is susceptible to subtle footguns. Sure, if you poke at edge cases or fill your code with `unsafe` blocks, Rust will do bad things too. To me, that's the reasonable way to think about safety: it's all relative, and it's all context-dependent.
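A minimal sketch of what I mean, not tied to any real codebase: the classic C off-by-one, reproduced in Rust by opting out of bounds checks with `unsafe`. It compiles cleanly in both languages; the difference is that in Rust you had to ask for it.

```rust
fn main() {
    let buf = [1u8, 2, 3, 4];
    let i = buf.len(); // one past the end: the classic C off-by-one

    // No bounds check inside `unsafe`: this read is the same undefined
    // behavior that a stray buf[4] would be in C.
    let b = unsafe { *buf.get_unchecked(i) };
    println!("{b}");
}
```

The compiler alone won't catch this (a tool like Miri will), which is the point: safety is a matter of how much of the code can go wrong this way, not a binary property of the language.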
The conversation about language "safety" always leads to a bunch of goalpost-moving. Is a language with a safe type system still safe if its runtime is buggy? What if the OS/kernel has a bug? What if the CPU has a bug? How about anomalies in I/O hardware? Keep going down the rabbit hole, and you'll eventually reach the conclusion that all software is extremely fragile and could explode into a million pieces if a single cosmic ray goes the wrong way. Thinking about safety in an absolute sense isn't productive.
As an aside, your CS buddies told you a bunch of nonsense.