> I'm not sure what the difference is with regards to pointers.
In C#, most objects are 'class' objects, which are implicitly passed by pointer, although some are 'struct' objects passed by value. In Go, rather than making the decision once at the type level, the decision to pass by value or by pointer is made explicitly every time an object is used.
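To make the C# side concrete, here's a minimal sketch with made-up names (RefPoint, ValPoint, Mutate), assuming nothing beyond plain C#: a 'class' parameter lets the callee mutate the caller's object, while a 'struct' parameter is a copy.

    using System;

    class RefPoint { public int X; }    // reference type: variables hold a pointer to the object
    struct ValPoint { public int X; }   // value type: variables hold the data itself

    class Demo
    {
        static void Mutate(RefPoint r, ValPoint v)
        {
            r.X = 42;  // mutates the caller's object, shared via the implicit pointer
            v.X = 42;  // mutates a local copy; the caller never sees this write
        }

        static void Main()
        {
            var r = new RefPoint { X = 1 };
            var v = new ValPoint { X = 1 };
            Mutate(r, v);
            Console.WriteLine($"{r.X} {v.X}");  // prints "42 1"
        }
    }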
> I'm not sure that Go really had anything to figure out or invent
If you look at the discussions for Go (which started as early as 2009 [1]) [2] [3] [4] [5], generics seems to have been as big a project as, or bigger than, any other improvement made to the language over the last decade. My impression is that there is a broad spectrum of potential trade-offs across compile speed, execution speed, convenience, etc.
The totality of the prior art here is above my pay grade, but I can quote the Haskell people they roped in [4]:
> We believe we are the first to formalise computation of instance sets and determination of whether they are finite. The bookkeeping required to formalise monomorphisation of instances and methods is not trivial ... While the method for monomorphisation described here is specialised to Go, we expect it to be of wider interest, since similar issues arise for other languages and compilers such as C++, .Net, MLton, or Rust
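For context, "monomorphisation" means the compiler stamps out a specialised copy of generic code per type argument. A rough C# illustration (hypothetical code; as I understand it the CLR specialises per value-type instantiation at JIT time while sharing one body across all reference types):

    using System;

    class MonoDemo
    {
        // One generic definition...
        static T First<T>(T[] xs) => xs[0];

        static void Main()
        {
            // ...but the JIT emits a separate specialised body for each
            // value-type instantiation (First<int>, First<double>), while
            // reference types (First<string>, First<object>) share one body.
            Console.WriteLine(First(new[] { 1, 2 }));
            Console.WriteLine(First(new[] { "a", "b" }));
        }
    }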
I think C# occupies a fairly nice design spot as well (it takes advantage of its class/struct system to get fast compiles, but also performance when needed). But it's not like the C# design couldn't be improved on. As far as I know you still can't write math code that works across 32- and 64-bit floats. And array covariance is implemented in a way that isn't type-safe and relies on run-time checks and exceptions. [6]
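On the covariance point, a minimal sketch of how the run-time check surfaces (hypothetical class name, but the exception type is the real System.ArrayTypeMismatchException Skeet discusses [6]):

    using System;

    class CovarianceDemo
    {
        static void Main()
        {
            string[] strings = new string[1];
            object[] objects = strings;   // legal: C# arrays are covariant
            try
            {
                objects[0] = 42;          // type-checks statically, but the runtime guard fires
            }
            catch (ArrayTypeMismatchException e)
            {
                Console.WriteLine(e.Message);
            }
        }
    }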
[1] https://research.swtch.com/generic
[2] https://docs.google.com/document/d/1vrAy9gMpMoS3uaVphB32uVXX...
[3] https://go.googlesource.com/proposal/+/master/design/go2draf...
[4] https://arxiv.org/pdf/2005.11710.pdf
[5] https://go.googlesource.com/proposal/+/refs/heads/master/des...
[6] https://codeblog.jonskeet.uk/2013/06/22/array-covariance-not...