When should I use:
Int32 myint = 0;//Framework data type
vs
int myint = 0;//C# language data type
What are the similarities/differences? I seem to recall being told that there are performance differences? Is this true?
In C#, the language-defined keywords for data types are just aliases for the corresponding BCL types. So int is exactly the same as System.Int32, long is the same as System.Int64, and so on.
In general, though, I'd stick with the language-defined type names. There are definitely no performance differences, since the aliases are resolved by the compiler at compile time.
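A quick sketch to confirm the aliasing (the class and variable names here are just for illustration):

```csharp
using System;

class AliasDemo
{
    static void Main()
    {
        int a = 0;    // C# language keyword
        Int32 b = 0;  // BCL type name

        // Both keywords resolve to the exact same type, System.Int32,
        // so the comparison below prints True.
        Console.WriteLine(typeof(int) == typeof(Int32));
        Console.WriteLine(a.GetType()); // System.Int32
        Console.WriteLine(b.GetType()); // System.Int32
    }
}
```

Because the alias is resolved during compilation, the emitted IL is identical either way; there is nothing left at runtime to differ in performance.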
int is an alias for System.Int32, and string is an alias for System.String. There are no differences.
That said, it's better practice to use int and string, not their System equivalent, mostly for readability. If you use a source style checker such as StyleCop, it will enforce replacing all System.String and System.Int32 declarations with their shorter equivalent.
I used to advocate using the C# language aliases over the CLR type names, but I changed my preference after learning that Jeffrey Richter advocates the CLR types in his book CLR via C#. My first-hand experience with developers new to the CLR, the reminder that the CLR, not C#, is the center of attention, and editor color cues distinguishing reference and value types all help in visually parsing code quickly.
Interesting: the compiler doesn't allow you to declare an enum's underlying type using "Int32", but it does when you use "int".
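A minimal sketch of that quirk (the enum names are made up for illustration); the enum declaration grammar only accepts the integral-type keywords as the underlying type, not the BCL type names:

```csharp
using System;

// Compiles: the underlying type is given as the keyword "int".
enum Color : int { Red, Green }

// Does not compile if uncommented -- error CS1008:
// "Type byte, sbyte, short, ushort, int, uint, long, or ulong expected".
// enum Shade : Int32 { Light, Dark }
```

This is one of the few places where the keyword and the BCL name are not interchangeable; everywhere else in ordinary code the two forms compile identically.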