4 Aug 2008, 12:39 p.m.
On 04-08-2008 11:41:26 +0200, Sjoerd Mullender wrote:
I've seen this kind of thing quite often, but I have to ask, why? The standard says about the sizeof operator: "When applied to an operand that has type char, unsigned char, or signed char, (or a qualified version thereof) the result is 1." So why bother multiplying with sizeof(char)?
Because I learnt it this way, and I like it for the case (if it ever happens) where the size of a char differs from 1 byte. It just makes explicit that I think I'm allocating a string here, and not something else.
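For illustration, here is a minimal sketch of the kind of allocation being discussed; the function name and the use of strlen/memcpy are hypothetical, not taken from the thread:

    #include <stdlib.h>
    #include <string.h>

    /* Duplicate a string. sizeof(char) is guaranteed to be 1 by the
     * standard, so the multiplication changes nothing at runtime; it
     * only documents that a string, not some other object, is meant. */
    char *copy_string(const char *src)
    {
        size_t len = strlen(src) + 1;            /* +1 for the '\0' */
        char *dst = malloc(len * sizeof(char));  /* same as malloc(len) */
        if (dst == NULL)
            return NULL;
        memcpy(dst, src, len);
        return dst;
    }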