The TL;DR is this: not having a default type makes you consider which size to choose. This is generally good. But there are two good cases in which having a default helps a lot: examples and tests. Both of these are small, and don't really care about the details.
In real code, inference means you can often get away with not annotating numbers, because the type is inferred from context. But in these two cases, there often isn't anything to drive the inference.
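In current Rust syntax (the old `5i` suffix is gone), the contrast described above looks roughly like this; the names `v`, `n`, `m`, and `k` are invented for illustration:

```rust
fn main() {
    // In real code, surrounding context usually pins the type down:
    let mut v: Vec<i64> = Vec::new();
    let n = 5; // inferred as i64 because it is pushed into v below
    v.push(n);

    // In a small example or test there is often no such context,
    // so without a default the literal needs an explicit type:
    let m = 5i32;  // suffix form
    let k: u8 = 5; // annotation form
    println!("{} {} {}", v[0], m, k);
}
```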
Long ago there was fallback, then it was removed, and now we're considering putting it back. This is a great example of Rust's practicality and of an empirical, almost scientific mindset toward building the language.
Thanks! I support having the most convenient defaults, not only for integers but for other things too. Is there a default now for 3.33? Is there a default for "string in quotes"? If there is, integers should have a default too.
I believe anybody can invent as much red tape as they want; the hard thing is making the language concise and effective for everyday use, not for the compiler writers.
Digressing, were there any discussions about using var like in Swift (instead of let mut)? What is the rationale for not making that more minimalistic?
> Digressing, were there any discussions about using var like in Swift (instead of let mut)? What is the rationale for not making that more minimalistic?
There were (I think it was somewhere on GitHub). One of the things `let mut` facilitates is:
> Digressing, were there any discussions about using var like in Swift (instead of let mut)? What is the rationale for not making that more minimalistic?
I was interested in the answer to this too. I found these which helped me understand:
The thing is, for floats there's a default that makes sense. For strings, there's a default that makes sense.
Integers, on the other hand, are harder. You _probably_ want i32, even on 64 bit machines, but then an i32 can't index an array, so for integers used that way, you want a machine-sized integer.
Unless you need the extra precision, using a float makes more sense than a double.
But with integers, it's different. It's not so much that an i32 _can't_ index an array, but that it _might not be able to_. On a 64 bit machine, you could have a very large array that's bigger than an i32 in size.
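For what it's worth, current Rust sidesteps this particular problem by making the indexing operator take `usize`, the pointer-sized integer; a minimal sketch (the array and indices are made up):

```rust
fn main() {
    let a = [10, 20, 30];

    // Slice/array indexing takes usize, the pointer-sized integer,
    // so an index can address any array the machine can hold.
    let i: usize = 2;
    assert_eq!(a[i], 30);

    // An i32 index must be cast first; on a 64-bit target a very
    // large array could have valid indices beyond i32::MAX.
    let j: i32 = 1;
    assert_eq!(a[j as usize], 20);
}
```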
> Unless you need the extra precision, using a float makes more sense than a double.
No. Floats are OK for bulk storage, but for a lot of uses you want to do all the intermediate calculations in doubles and only convert back to floats when storing the results. See

(at the moment top of the front page) where people have a problem with Intel using a 66-bit mantissa internally for input that has a 53-bit mantissa (a double!). You don't want to keep partial results as floats unless you're doing graphics or simple calculations that need far fewer bits. You'd never want to calculate a bridge construction with floats, even if the finished parts would, at the end, be specified with only 4 digits.
That's what I intended as part of 'if you need the precision.' You're absolutely right that even if the beginning and end are floats, you may want to do the intermediate calculations as a double.
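A quick sketch of that intermediate-precision point in Rust (the data and amounts are invented for illustration): summing a million `f32` values in an `f32` accumulator drifts noticeably, while accumulating in `f64` and converting back only at the end stays close to the true sum.

```rust
fn main() {
    let data = vec![0.1f32; 1_000_000];

    // Naive f32 accumulation: once the running sum is large,
    // each added 0.1 loses its low-order bits.
    let sum_f32: f32 = data.iter().sum();

    // Accumulate in f64, convert to f32 only when storing the result.
    let sum_f64: f32 = data.iter().map(|&x| x as f64).sum::<f64>() as f32;

    println!("f32 accumulator: {}", sum_f32);
    println!("f64 accumulator: {}", sum_f64);
}
```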
Therefore it's often important to have fp constants with as many bits as possible. But there are exceptions again, e.g. when you use them to initialize float arrays and the like.
Regarding indexes for arrays, there are typical uses that should be recognized: if I write a[ 1 ] I'd like to be able to address element 1. If I write a[ 0xffffffffffff ] it's also clear the index needs more than 32 bits.
Right. There are tons of reasonable choices here, which is why Rust currently has no defaults, and why you need to write 5i today.
This whole topic is very much still under discussion. While many want defaults, what they default to is still a question. And there's a sizable group that doesn't want any default at all.
There is little performance advantage to using a 32-bit float. Their precision is low enough that you can't do anything useful without risking the correctness of your program.
They are, like shorts, an artefact of the past. Unlike shorts, their limitations are not within the intuition of the average programmer, which leads to widespread abuse and bugs.
Floats (and shorts) have a significant performance advantage when they're appropriate. For example, you can fit 2x as many floats into cache compared to doubles, and good cache behaviour is really important these days; it's one of the reasons languages like C/C++/Rust, with good control over memory layout, can be very fast. Also, you can operate on 2x as many floats with vectorised instructions; again, SIMD is important these days, especially for things like games (where the precision of a float is fine).
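The 2x figure follows directly from the type sizes; a trivial sketch, assuming a 64-byte cache line (common on x86-64, but not guaranteed):

```rust
use std::mem::size_of;

fn main() {
    // f32 is half the size of f64, so twice as many fit in the
    // same cache line or SIMD register.
    assert_eq!(size_of::<f32>(), 4);
    assert_eq!(size_of::<f64>(), 8);

    const CACHE_LINE: usize = 64; // assumed size, not guaranteed by Rust
    println!(
        "per cache line: {} f32s vs {} f64s",
        CACHE_LINE / size_of::<f32>(),
        CACHE_LINE / size_of::<f64>()
    );
}
```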
This is too brutal, you have to consider the application domain. For physics simulations I would agree with you. For signal processing or graphics applications float is usually perfectly fine (and even overkill for many DSP applications).
People mentioned the cache-efficiency gain with floats, and that is already relevant to Rust. If Rust could be used for GPGPU in the future, the difference between float and double often becomes even more dramatic there. You definitely want to keep float support for this.
Rust doesn't have a default for float types, it forces you to be explicit. `let x = 3.33;` without any other way to infer it will cause an error, forcing you to either write `3.33f32` or `3.33f64`.
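The two explicit forms look like this (note: Rust later adopted an `f64` fallback for unconstrained float literals, so a bare `let x = 3.33;` does compile in current Rust; the explicit forms below work in any version):

```rust
fn main() {
    let a = 3.33f32;   // suffix pins the type to f32
    let b = 3.33f64;   // suffix pins the type to f64
    let c: f32 = 3.33; // annotation drives the inference instead
    println!("{} {} {}", a, b, c);
}
```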
The whole point of 1.0 is that they won't experiment like this anymore. They're using the freedom to break things now with the promise to lock it down later. I'm sure there will still be some experimentation, but the core language should be stable.
One thing rust has going for it is a very clear vision (safety, control, speed), which is based on fundamental concepts (e.g. lifetimes). Consequently, it has less of a need for a BDFL to declare what is or is not rust-ific.
The default seems to be "inferred". That is, 1 + 1i will work, but 1 + 1 is ambiguous. I've just noticed this in use and don't know what it's officially called. I don't think this will actually be a real annoyance in practice.
It's just type inference. Rust doesn't do automatic integer coercion on arithmetic operations, so once it knows the type of any of the numbers it can deduce the types of all the rest.
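In current syntax (the `1i` suffix no longer exists), the deduction works like this; the names are invented for illustration:

```rust
fn main() {
    let x = 1;         // type not yet known at this point
    let y: u8 = x + 1; // the u8 annotation fixes y, x, and both literals
    assert_eq!(y, 2);

    // With no constraint at all, `1 + 1` was ambiguous in the Rust
    // of this thread; an annotation anywhere resolves it.
    let z: i64 = 1 + 1;
    assert_eq!(z, 2);
}
```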
http://doc.rust-lang.org/guide.html
"The first thing we'll learn about are 'variable bindings.' They look like this:
And so on: "let (x, y) = (1i, 2i);", "let x = 5i;", "x = 10i;"
I had to double-check whether the default (when the i is not written) is something other than int. It is confusing.