
The problem with 240V is that there are a few NEMA standards used in the USA, although they look bulky and are kind of unsafe around kids.

Which is not a problem for dedicated circuits (you could wire a receptacle just for this specific dryer / hot tub / RV / whatever, plug it in, and never remove it), but it would be a bigger problem if 240V is shared between appliances.

I think NEMA 6-15 / NEMA 6-20 would be the best (they look very similar to standard 120V outlets; they have tamper resistance; 15A or even 20A should be plenty), but nobody wires them.

Also, there are other minor differences like:

1. 50Hz in Europe vs 60Hz in the USA.

2. The voltage between ground and the "power" lines is different. In Europe it would be 240V between line and ground and 0V between neutral and ground; in the USA it would be 120V between ground and both power connectors, due to how 240V is typically delivered to single-family homes in the USA (split phase). This should not make much difference (you are not supposed to have any current to ground anyway, and any short circuit should trigger the circuit breaker), but maybe it affects safety somehow?

3. It seems like in some cases in the USA you can get 208V instead of 240V (and Europe is nominally 230V).


(everything below assumes US and NEC / local electrical codes)

I don't think code prohibits it, but in my (not very educated) opinion, you might get into some gray territory if you try to install them everywhere.

There are some safety-related provisions which are mandatory for regular 120V circuits (GFCI and/or AFCI), but I don't think they are required for 240V circuits. Building inspectors might have questions if you install these outlets everywhere. Although, again, I don't remember any limitation on where you can have them.

Another issue is the type of the receptacle. Apparently, Leviton makes a receptacle that might be allowed in the US and is a combination of a regular 120V outlet plus a European-style 240V one. However, it is limited to 2.5A, which is very little (I think this receptacle is primarily designed for hotels / shared spaces where you only want to charge your devices). Also, I don't think it has ground for 240V.

Probably the best would be to use US 15A/20A receptacles (NEMA 6-15 / NEMA 6-20), which look very similar to the regular 120V ones (the difference is that the blades are horizontal).

However, I've never seen any actual plug using them (even though I did install these 240V receptacles in my garage, expecting some 240V equipment). But you can rewire the plugs on your "imported" equipment / appliances.

There are also some interesting differences in the supply: in the US it is typically split phase, 120V+120V=240V, but sometimes (according to the internet) it could be 3-phase 120V with 208V between phases; in some parts of the world it would be 3-phase 230V with 400V between the phases. This would probably cause some differences in how grounding works, which might affect safety.

But yeah, generally, 240V should not be a problem. Power-hungry equipment (water heater, range, electric dryer, EV chargers) -- they all typically use 240V already.


No, there is a proper US code-compliant solution for outdoor 240V receptacles - it’s that 4-prong 50-amp one … you see them coming out of the ground at ski resorts, for instance, and they are also used on those outdoor PDUs you see at concerts and parades …

Edit: I think it’s a 14-50R.

Edit2: You almost certainly don't need these. A plain old 20-amp 110 receptacle, on a dedicated circuit, with GFCI protection, is a perfectly reasonable and code-compliant receptacle to put all over the outside of your house, and it will power anything you might need.

I can believe that a 15-amp circuit (pressure washer, weeder, etc.) might seem underpowered, but I am skeptical that a 20-amp one would be ...


Right, there are code-compliant solutions. I was just musing over the idea of having 240V "everywhere" (including inside the house in every room); having 240V for special needs is, of course, a solved problem.

14-50R is not what I would use in that case: they are bulky, unsightly, and don't offer tamper resistance.

6-20R / 6-15R have variants that are tamper resistant, and they look like regular outlets.

"20A should be enough for everything".

We used to have 2.5-3kW kettles before we moved to the US, which at 120V would require 20-25A (3,000W / 120V = 25A; although I don't think the math is that simple -- circuit breakers don't trip at "exactly 20A").

Also, the issue with 20A receptacles is that, again, I have never seen 20A plugs (NEMA 5-20P) on appliances (the neutral blade is horizontal rather than vertical). Which is understandable: why make them if nobody has 20A receptacles anyway? There is a requirement to have two 20A branch circuits in a kitchen, but commonly they are wired to 15A receptacles.


Your anecdote about electric kettles is a good one - it bears repeating that some of the most intensive household loads are old-school coffee percolators and the like, and many school and church kitchens have circuits just for them.

As for 20amp, it may interest you to learn that I have a commercial microwave with two magnetrons that is actually 20amp (and has the horizontal pin, etc.).


Yeah, it shows that I have zero knowledge about commercial appliances. I guess, I wouldn't be surprised if they routinely use 20A plugs / receptacles. Would make sense. Maybe, hospital equipment, too?


You would routinely see IEC sockets in use in Europe in commercial kitchens, factories, anything outdoors like a music festival etc. Partly it's for higher power appliances (including 3 phase power), but also the waterproofing.

They're also the socket found in datacentres (in Europe) to connect a UPS or PDU.

Home users see them at campsites, marinas, and for charging electric cars without a special car charger.

https://en.wikipedia.org/wiki/IEC_60309


I actually have a set of four 20A receptacles within about 3' of me. I had to have an electrician wire a dedicated line, and since the main cost was the labor, we figured we might as well use 4-wire cable wired for 240V. I got four 120V outlets out of it, and he used the 20A type that takes either the usual household plug or the 20A version.


Wiring 240V is (typically) not an issue, as it is standard (in the US) to get 240V from the transformer to the house. The devil is in the details: what current you would wire it for, what receptacle you are going to use, and what you are going to plug in there.


ICE-powered pressure washers go up to 4,000 psi. I have never seen an electric pressure washer above about 3,000 psi, and they're usually no more than about 2,000 psi.


Your high-power examples are all either dedicated wires (no outlet) or single-use outlets (like the dryer). I'm saying there is no general-use high-power outlet permitted.



Interesting.

When I was doing my research on ULPower engines (which can burn 100LL, but they would prefer non-leaded), there were very few airports that offered unleaded fuel (UL94). I think San Carlos / KSQL was the only one I found?

I wonder what the KRHV closure would mean for unleaded fuel? Would everybody just switch back to 100LL? How is that supposed to help?

Wouldn't it be more practical (from the point of view of lead pollution) to enforce non-leaded fuels at those small airports instead? My cursory research shows that a lot (?) of these light planes (and, perhaps, the majority of trainer / weekend-hobby aircraft) would happily burn non-leaded fuel (UL94, for instance) with the corresponding STC.


> Wouldn't it be more practical (from the point of view of lead pollution) to enforce non-leaded fuels at those small airports instead

Definitely. It solves one problem.

It doesn't solve the "this airport takes space that could be used for parking lots" problem.


I have some extensive (self-assessment :) ) experience building this kind of application, and my answer would be "no, but maybe" (I was one of the first engineers & the architect on our zero-to-~500k-LOC codebase).

Some of the arbitrary, random things I've learned:

1. Given how nice and powerful the language is, Rust works relatively well with less experienced engineers. Potentially. You can build very nice APIs which are straightforward to use.

If you can stay in this territory, everything is great.

However, there are some hard walls in Rust which are really difficult to jump over. And once you hit them, you really need somebody who understands Rust really really well and is capable of working around those issues.

2. It was quite hard to find the right balance between engineering time, compilation/linking time, safety, ease of use, performance, etc. It might be my personal biases, but I had to make some questionable decisions to keep compilation / turnaround times at bay (like an unsound plugin system, a custom serialization framework, or a test framework using nightly Rust features).

3. The type system is really, really nice. This alone compensates for a lot of things.

4. Ecosystem? Simply amazing. High quality libraries, documentation and everything.

5. Ecosystem again? Lots of things are still missing.

6. Performance? I'm 90% trolling here, but my experience was that Rust is not "blazing fast" by default. Not for "enterprise" software. You have to do some legwork sometimes. I built a simple tool to do a certain transformation between JSON and XML, and out of the gate it was ~2x slower than the Java equivalent (yes, with a release build). It turned out strings are not that cheap to clone if all you have is a bunch of strings. I did make it like 5x faster than Java in the end, but it required some weird tricks (like forcing the hash map to look up a "derived" key rather than the one I give it).

There were some other cases where performance was reduced by simple things (like having a "heavy" Result vs. having the error variant boxed).
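As a minimal illustration (hypothetical types): Result is as large as its largest variant, so a fat error type is carried through every call even on the happy path, while boxing the error keeps it pointer-sized:

  use std::mem::size_of;

  #[allow(dead_code)]
  struct BigError {
    context: [u64; 16], // imagine source chains, codes, backtraces...
  }

  // Boxing the error variant keeps the Ok path small.
  type SlimResult<T> = Result<T, Box<BigError>>;

  fn main() {
    // The "heavy" Result pays for BigError on every return.
    println!("heavy: {} bytes", size_of::<Result<u32, BigError>>());
    println!("boxed: {} bytes", size_of::<SlimResult<u32>>());
    assert!(size_of::<Result<u32, BigError>>() > size_of::<SlimResult<u32>>());
  }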

I think this is "easily" counteracted by adopted practices and libraries (optimized error types, for instance), some standard patterns (like: don't be afraid of Arc, it is better than moving huge data chunks around), and maybe good profilers as well.

This is probably also my biases talking here; frankly, I don't think it mattered at all (performance was killed by the database, as is very common with enterprise systems). Also, in the end, it was fast (outside of the database woes).

7. Overall, I find Rust a very exciting language to work with, which was a big driver (but that doesn't necessarily scale -- you'll have to have some answer prepared for when less experienced engineers ask you "but why can't I do it like I did in TypeScript in those trivial 500 lines of code").

Would I do it again? Probably, but with the understanding that whoever pays for it might be paying for my "fun" on top of the product they are getting. Which is not necessarily a bad thing -- "fun" is also a factor in attracting and retaining engineers.

I'm also not going to lean into the "dark side". Like, if all you care about is getting some half-broken whatever out as quickly as possible, Rust might not be the right choice. It makes you think about "right or wrong" a lot, imo.


> Turned out, strings are not that cheap to clone if all you have is a bunch of strings.

Yeah, if you're doing a lot of cloning, you'll probably run into performance issues at some point. A common way to solve that problem is to use references instead of cloning.

Of course, writing your code that way takes more work/thought/planning.
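As a trivial sketch of the difference (hypothetical function):

  // Borrowing reads the data in place; no copy is made.
  fn shout(name: &str) -> String {
    format!("{}!", name)
  }

  fn main() {
    let name = String::from("ferris");
    let a = shout(&name); // borrow...
    let b = shout(&name); // ...and borrow again; `name` is never cloned
    println!("{} {}", a, b);
  }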


>A common way to solve that problem is to use references instead of cloning.

Right. I think we ended up having everything from this list:

1. String

2. &str

3. Cow<str>

4. Arc<str>, for interned strings (a thin Arc would be even better & there is probably a crate for that)

5. Something like owning_ref::ArcRef<Owner, str>

6. One-off tricks where you actually need to construct a new string, but don't want to really construct it (for example, for a hash lookup; see the sketch below).
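A minimal sketch of that last trick, assuming the hashbrown crate's Equivalent-based lookup (the stored key is the joined "prefix:suffix" String, the query key is two parts; all names here are hypothetical). The Hash impl must mirror exactly how the owned String hashes:

  use std::collections::hash_map::RandomState;
  use std::hash::{Hash, Hasher};
  use hashbrown::{Equivalent, HashMap};

  // Query key made of two parts; lets us look up a map keyed by the
  // joined "prefix:suffix" String without allocating that String.
  struct Parts<'a>(&'a str, &'a str);

  impl Hash for Parts<'_> {
    fn hash<H: Hasher>(&self, state: &mut H) {
      // Feed the hasher the exact byte stream the owned String would:
      // SipHash (RandomState) is a streaming hasher, so split writes
      // hash the same as one concatenated write; str also appends a
      // 0xff terminator byte, which we must mirror.
      state.write(self.0.as_bytes());
      state.write(b":");
      state.write(self.1.as_bytes());
      state.write_u8(0xff);
    }
  }

  impl Equivalent<String> for Parts<'_> {
    fn equivalent(&self, key: &String) -> bool {
      let (a, b) = (self.0, self.1);
      key.len() == a.len() + 1 + b.len()
        && key.starts_with(a)
        && key.as_bytes()[a.len()] == b':'
        && key.ends_with(b)
    }
  }

  fn main() {
    let mut map: HashMap<String, u32, RandomState> = HashMap::default();
    map.insert("user:alice".to_string(), 1);
    // No format!("{}:{}", ...) allocation on the lookup path.
    assert_eq!(map.get(&Parts("user", "alice")), Some(&1));
  }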

#5, I think, is actually undervalued; it's amazing for "enterprise" kind of stuff where you have large trees of data you need to pass around & you don't want to use straight borrowing (like &'a Whatever) because lifetimes are too infectious. And you don't want to use Arc at every corner (like, say, Java effectively would -- not quite, but close in semantics).

My problem, though, was to explain all the nuances given that they usually have nothing to do with the "business" part of the problem somebody was solving.


> My problem, though, was to explain all the nuances given that they usually have nothing to do with the "business" part of the problem somebody was solving.

Yeah, I completely agree. A GC provides a lot of benefits in terms of clarifying the intention of business logic.

Unless, of course, performance/memory usage is an important part of your business logic, in which case Rust is exactly what you want.


Maybe I'm weird, but I find the explicit ownership, mutability, and lifetime information encoded in Rust definitions very helpful for understanding a new code base. It's something that otherwise needs to be documented in comments / external documentation in Java, but most often it is missing, and then recovering such information from the code is about as hard as recovering the types in a dynamic language.

Lack of GC makes it harder to write but easier to read.


It gives you a lot more information, but a lot of that information is more about lower-level mechanics than "business logic". I think it really depends on what you're writing and what's important to it.


It can actually be tied pretty well to business logic. You can explicitly model some business rules, e.g. a subscription cannot outlive the user account, etc. It's similar to how you can use a static type system to prohibit invalid states. Here Rust gives you more tools of this kind than other languages.

Another one I really love is the ability to destroy objects on their final operation. E.g. you close something and it can't be used any more. Most other languages can protect against using such a closed object only with runtime exceptions.
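A minimal sketch of that pattern (hypothetical Session type): close() takes self by value, so any later use is a compile-time error rather than a runtime exception:

  struct Session {
    id: u32,
  }

  impl Session {
    fn send(&mut self, msg: &str) {
      println!("session {}: {}", self.id, msg);
    }

    // Consuming `self` destroys the session; any use after close() is
    // a compile-time "use of moved value" error, not a runtime check.
    fn close(self) {
      println!("session {} closed", self.id);
    }
  }

  fn main() {
    let mut s = Session { id: 1 };
    s.send("hello");
    s.close();
    // s.send("again"); // error[E0382]: use of moved value: `s`
  }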

Some languages like Java don't even make a distinction between "object A is composed of B and C" vs. "object A uses B and C" (in both cases they'd be references).


Or, to get the computational equivalent of what Java is doing (immutable, interned strings), use a Rust string-interning library like the one Servo uses [1], or just Arc<str>.

[1]: https://docs.rs/string_cache/
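As a minimal illustration of the Arc<str> half of that suggestion (string_cache has its own API; this is just the standard library):

  use std::sync::Arc;

  fn main() {
    let s: Arc<str> = Arc::from("shared immutable string");
    // Cloning bumps a reference count instead of copying the bytes,
    // similar in spirit to Java's shared immutable strings.
    let t = Arc::clone(&s);
    assert!(Arc::ptr_eq(&s, &t));
  }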


At my previous job I built a tool that was capable of doing that (we were merging XMLs with form definitions). The main idea was an interactive mode.

Initially, the tool would merge based on a series of heuristics, and then the user would manually adjust the "matching" nodes (the user could say "actually, this A on the left and this B on the right are the same; it's just that it was heavily modified").


It seems like if the editor produced hints this would work better, but your target audience also shrinks.


Well, I have this anecdote: we switched from serde to our own serialization / deserialization scheme (it still uses serde, but only for the JSON part), which is heavily based on dynamic dispatch, and we actually got it faster.

It wasn't an apples-to-apples comparison, but it was several times faster at the time (my memory doesn't serve me well, but something around 3x to 5x). Also, compilation time went down (well, at the time :) ). It was mostly due to how some of the features work in serde (flatten and tagged enums), though.

I made a separate, cleaner experiment (https://github.com/idubrov/dynser), which does not show that dramatic an improvement (again, it wasn't apples to apples; there were other factors which I don't remember), but it shows some.
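To give the general shape of the idea (a hypothetical sketch, not the dynser API): one object-safe trait dispatched through a vtable, instead of monomorphizing generic serialization code for every type, trades an indirect call for much less generated code:

  // One object-safe trait instead of generic, monomorphized machinery.
  trait DynSerialize {
    fn serialize(&self, out: &mut String);
  }

  struct User {
    name: String,
    age: u32,
  }

  impl DynSerialize for User {
    fn serialize(&self, out: &mut String) {
      out.push_str("{\"name\":\"");
      out.push_str(&self.name);
      out.push_str("\",\"age\":");
      out.push_str(&self.age.to_string());
      out.push('}');
    }
  }

  fn main() {
    // Heterogeneous values are trivial to handle with trait objects.
    let values: Vec<Box<dyn DynSerialize>> =
      vec![Box::new(User { name: "alice".into(), age: 30 })];
    let mut out = String::new();
    for v in &values {
      v.serialize(&mut out);
    }
    println!("{}", out);
  }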


I have a small shop. 8x12 lathe, X2 mini-mill, some amount of woodworking equipment (table saw, track saw, etc).

Haven't done anything in the last year or two, though. Turns out, a long commute and a job at a start-up suck your soul really hard :(

Though, equipping my machines with power feed taught me Rust and got me into that startup in the first place :D


FWIW, it is possible to make it a bit more ergonomic:

  // Assumed definitions (not part of the original snippet):
  pub struct Widget {
    pub name: String,
    pub children: Vec<usize>, // indices into Container::widgets
  }
  pub struct Container {
    pub widgets: Vec<Widget>,
  }

  pub struct SmartRef<'a> {
    container: &'a Container,
    widget: usize,
  }
  impl <'a> std::ops::Deref for SmartRef<'a> {
    type Target = Widget; // Widget provides non-traversing functionality
    fn deref(&self) -> &Widget {
      &self.container.widgets[self.widget]
    }
  }
  impl <'a> SmartRef<'a> {
    fn children(&'a self) -> impl Iterator<Item = SmartRef<'a>> {
      self.container
        .widgets[self.widget]
        .children.iter().map(move |w| SmartRef { container: self.container, widget: *w })
    }
  }

  fn boo(container: &Container) {
    let root = SmartRef { container, widget: 0 };
    println!("name: {}", root.name);
    for child in root.children() {
      println!("child name: {}", child.name);
    }
  }
(removed GAT remark -- it does not apply here; I was thinking about generalizing this with traits)


Hmm… that would work, but at the cost of requiring the data to be immutable or use interior mutability. It also removes the size advantage of storing indices over pointers, unless you only make SmartRefs temporarily rather than storing them in your data structures.


Yes, the idea is that you only create them temporarily.

Mutability is also possible to some extent with these "smart" pointers. It gets a bit trickier and less ergonomic, though. See https://play.rust-lang.org/?gist=fbf1c24397e7020c95774bf0906...

Another option would be to store something like "Rc<RefCell<Container>>" instead of "&'a mut Container", in which case you will be able to achieve something that behaves like multiple mutable references (with all the concurrency issues thereof).
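A minimal sketch of that variant (hypothetical names, building on the SmartRef shape above): the handle has no lifetime parameter because it owns an Rc, and exclusive access is checked at runtime by RefCell:

  use std::cell::RefCell;
  use std::rc::Rc;

  struct Container {
    widgets: Vec<String>,
  }

  // No lifetime parameter: clones share the container, and mutation is
  // enforced at runtime by RefCell instead of at compile time.
  #[derive(Clone)]
  struct Handle {
    container: Rc<RefCell<Container>>,
    widget: usize,
  }

  impl Handle {
    fn rename(&self, name: &str) {
      // Panics if another borrow is active (the "concurrency issues").
      self.container.borrow_mut().widgets[self.widget] = name.to_string();
    }
  }

  fn main() {
    let c = Rc::new(RefCell::new(Container { widgets: vec!["a".into()] }));
    let h = Handle { container: Rc::clone(&c), widget: 0 };
    h.rename("b");
    assert_eq!(c.borrow().widgets[0], "b");
  }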


#4 highly depends on the project size. On big projects, some of these decisions might become effectively one-way doors.

I've been struggling a bit with these issues on a project of ~70k lines. I cannot even imagine what the refactoring would look like if we had, let's say, 1 million LOC.

To be fair, though, we use Rust in a way it wasn't specifically designed for (large "enterprise" software; think Java-like enterprise).

I think, potentially, Rust could offer a much better story for this kind of software (assuming we are not mad and the issues we are facing are not because we are doing something completely wrong :) ). In my opinion, the key thing would be to allow building "bridges" between pieces of the system which are "ownership-incompatible", so your decisions around ownership are no longer "one-way doors" (at the cost of a translation / adapter layer).

Some random things which I think would be helpful:

1. Better self-referential structs, to allow going from "owned A + borrowed B" to "fully owned A+B" (the rental crate helps here, though). Basically, hiding lifetimes in scenarios where you cannot easily change the original data structure to "own".

2. GATs. Honestly, this one is my speculation, but it seems like certain patterns which are hard to express now (an abstraction over a "mutable reference", for example) would be possible with GATs. In our case, this would allow bridging the gap between the "trait object" world and the "parametric over trait" world. The issue I was having is that it is hard to express "mutably borrow from self" with traits (this is similar to the issue the "streaming iterator" crates solve; see the sketch below). I was able to hack something together using arbitrary self types, but it's quite... hacky.

3. A stable(r) ABI for trait objects. Again, purely my speculation, but it would allow going back from the "trait reference" world into the "trait object" world. I won't go into details here, but trait objects want to "borrow" from something and that is not always easily possible (think of that favorite vector+indices data structure) -- being able to "fake" those borrows would be nice (maybe).

Issues #2 and #3 specifically come up around deciding on a data structure: regular structs have one set of tradeoffs, vectors with indexes -- another. In big enterprise software, I would like to have the option to use whatever works in a particular spot and still have it API-compatible with the rest of the system.
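For reference, a minimal sketch of the "mutably borrow from self" shape that GATs would enable (a lending iterator; all names hypothetical). The item type carries a lifetime tied to each next() call, which the plain Iterator trait cannot express:

  trait LendingIterator {
    type Item<'a> where Self: 'a;
    fn next(&mut self) -> Option<Self::Item<'_>>;
  }

  // Yields overlapping mutable windows -- impossible with std's Iterator,
  // because each item mutably borrows from the iterator itself.
  struct WindowsMut<'s> {
    buf: &'s mut [u32],
    pos: usize,
  }

  impl<'s> LendingIterator for WindowsMut<'s> {
    type Item<'a> = &'a mut [u32] where Self: 'a;

    fn next(&mut self) -> Option<&mut [u32]> {
      if self.pos + 2 > self.buf.len() {
        return None;
      }
      self.pos += 1;
      Some(&mut self.buf[self.pos - 1..self.pos + 1])
    }
  }

  fn main() {
    let mut data = [1, 2, 3, 4];
    let mut it = WindowsMut { buf: &mut data, pos: 0 };
    while let Some(w) = it.next() {
      w[0] += 10; // mutate through the lent window
    }
    assert_eq!(data, [11, 12, 13, 4]);
  }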


> #4 highly depends on the project size. On big projects, some of these decisions might become effectively one-way doors.

The transformations between T, Box<T>, Rc<T>, Arc<T>, etc. are mechanical, so I expect someone will write a refactoring tool that makes a giant PR for you automatically. (Subject to certain limits: if you're actually cloning the ref-counted pointer, it's indeed harder to go back.) Would that satisfy your need?

> To be fair, though, we use Rust the way it wasn't specifically designed for (large "enterprise" software, think Java-like enterprise).

IMHO, this is a valid use case for Rust. I'm not saying everyone should stop using Java (in some cases I think it's significantly faster to write), but Rust has some strong performance advantages and no data races in safe code.


>Would that satisfy your need?

It's not always possible to change the data structure -- different ways of modeling data have different trade-offs. So, for me it is more about not having to make a choice than about tools that will help me change my mind.

Also, it could be something like a structure coming from a 3rd-party crate that uses borrowing, and you want to stick it into an "Arc" of some sort. Or put it (with the thing it borrows from) into a lifetime-less struct, so you don't have to care about those lifetimes.

>IMHO, this is a valid use case for Rust.

I very much hope so :)


> It's not always possible to change the data structure -- different ways of modeling data have different trade-offs.

I agree with "different ways of modeling data have different trade-offs", but I don't understand how that leads to "it's not always possible to change the data structure". I revisit trade-offs all the time.

Could you explain? I might need a concrete example.

> Also, it could be something like structure coming from a 3rd-party crate using borrowing and you want to stick it into "Arc" of some sort. Or put it (with the thing it borrows from) into a lifetime-less struct, so you don't have to care about these lifetimes.

Yeah, certainly the refactoring becomes harder (maybe implausible to do automatically) when you can't change both sides in one PR, and when you have to convince someone else to change their interface / bump the major version. It still can be done (partially?) by hand at least; it's just a matter of cost/benefit.


>Could you explain? I might need a concrete example.

You might want different trade-offs in different places.

Like, in our case, the conflict is between three different representations:

1. Typed Rust structs

2. Vectors with indexes

3. Untyped structs (a HashMap of strings to values, essentially)

None of them covers 100% of the use cases we have (though we are also not sure these are exactly the use cases we will have a year from now, or three years from now), and some parts of the system need to work with all of them.

>Yeah, certainly the refactoring becomes harder (maybe implausible to do automatically) when you can't change both sides in one PR, and when you have to convince someone else to change their interface / bump the major version. It still can be done (partially?) by hand at least; it's just a matter of cost/benefit.

One case was Transaction from the postgres crate, which uses a lifetime. But I wanted to stuff it in an Arc. It would be possible if Transaction itself used Arc instead of borrowing, but there are about zero reasons for them to change the API that way.


I think the typical way to move from one lifetime to another, especially for a borrowed object, is copying?


Cloning is not always possible (performance reasons, non-cloneable data, etc.) and would not necessarily remove the lifetime (for example, it could be a struct, defined somewhere else, with a lifetime parameter).

