For my second attempt at this post (after a BSOD), here (on time yet!) is day 22 of the Blogging A-to-Z challenge.
Today's topic: the `var` keyword, which has sparked more religious wars since it emerged in 2007 than almost any other language improvement in the C# universe.
Before C# 3.0, the language required you to declare every variable explicitly, like so:
```csharp
using System;
using InnerDrive.Framework.Financial;

Int32 x = 123; // same as int x = 123
Money m = 123;
```
Starting with C# 3.0, you could do this instead:
```csharp
var i = 123;
var m = new Money(123);
```
As long as you give the compiler enough information to infer the variable type, it will let you stop caring about the type. (The reason the `Money` line works in the first example is that the `Money` struct can convert from other numeric types, so it infers what you want from the assignment. In the second example, you still have to declare a new `Money`, but the compiler can take it from there.)
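The real `Money` struct lives in InnerDrive.Framework.Financial and I won't reproduce it here, but a minimal sketch of the one piece that matters for the example above, an implicit conversion operator, looks something like this:

```csharp
public struct Money
{
    private readonly decimal _amount;

    public Money(decimal amount)
    {
        _amount = amount;
    }

    public decimal Amount { get { return _amount; } }

    // This implicit operator is what makes "Money m = 123;" compile:
    // the int literal converts to decimal, and then this user-defined
    // conversion turns the decimal into a Money.
    public static implicit operator Money(decimal amount)
    {
        return new Money(amount);
    }
}
```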
Some people really can't stand not knowing what types their variables are. Others can't figure it out and make basic errors. Both groups of people need to relax and think it through.
Variables should convey meaning, not technology. I really don't care whether `m` is an integer, a decimal, or a `Money`, as long as I can use it to make the calculations I need. Where `var` gets people into trouble is when they forget that the compiler can't infer type from the contents of your skull, only from the code you write, which is why this is one of my favorite interview problems:
```csharp
var x = 1;
var y = 3;
var z = x / y; // What is the value of z?
```
The compiler infers that `x` and `y` are integers, so when it divides them it comes up with... zero. Because 1/3 is less than 1, and .NET truncates fractions when doing integer math.
In this case you need to do one of four things:
- Explicitly declare `x` to be a floating-point type
- Explicitly declare `y` to be a floating-point type
- Explicitly declare the value on line 1 to be a floating-point value
- Explicitly declare the value on line 2 to be a floating-point value
```csharp
// Solution 1:
double x = 1;
int y = 3;
var z = x / y;  // z = 0.333...

// Solution 3:
var x = 1f;
var y = 3;
var z = x / y;  // z == 0.333333343
```
(I'll leave it as an exercise for the reader to work out why that last result looks wrong. Hint: .NET has three floating-point types, and they all do math differently.)
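For completeness, here's roughly what the other two fixes from the list look like; both make `y` a double, so the division happens in double-precision math:

```csharp
// Solution 2:
var x = 1;
double y = 3;
var z = x / y;  // z == 0.333... (double)

// Solution 4:
var x = 1;
var y = 3.0;    // a literal with a decimal point is a double
var z = x / y;  // z == 0.333... (double)
```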
Declaring `z` to be a floating-point type won't help. Trust me on this.
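Or don't trust me; here's a quick sketch of why. The right-hand side is evaluated first, as integer division, and only the finished result gets converted to the declared type:

```csharp
var x = 1;
var y = 3;
double z = x / y;  // x / y runs as integer division and produces 0;
                   // only then is that 0 converted to double, so z == 0.0
```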
The other common reason for using an explicit declaration is when you want to specify which interface to use on a class. This comes up less often, but it's still useful. For example, `System.String` implements both `IEnumerable` and `IEnumerable<char>`, which behave differently. Imagine an API that accepts both versions and you want to specify the older, non-generic version:
```csharp
var s = "The lazy fox jumped over the quick dog.";
System.Collections.IEnumerable e = s;
SomeOldMethod(e);
```
Again, that's an unusual situation and not the best code snippet, but you can see why this might be a thing. The compiler won't infer that you want to use `String`'s older, non-generic `IEnumerable` implementation under most circumstances. This forces the issue. (So does using the `as` keyword.)
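For reference, the `as` version of the same trick, still using the hypothetical `SomeOldMethod` from the snippet above, would look something like this:

```csharp
var s = "The lazy fox jumped over the quick dog.";
// The "as" expression is statically typed as the non-generic
// System.Collections.IEnumerable, so the compiler binds to that overload.
SomeOldMethod(s as System.Collections.IEnumerable);
```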
In future posts I may come back to this, especially if I find a good example of when to use an explicit declaration in C# 7.