chipKIT® Development Platform

Inspired by Arduino™

Type Casting

Created Wed, 07 Aug 2013 03:13:06 +0000 by jmlynesjr


jmlynesjr

Wed, 07 Aug 2013 03:13:06 +0000

I just saw this in some code:

 for(int i=0; i<=150e3; i++);                     // used as a delay loop

What is this really doing?

In my mind 150e3 is scientific notation usually associated with floating point numbers. It's also 150000 and ints are +/- 32767. Or are they?

The code is also doing things like:

 long xxxx = 16.03e6;
 long yyyy = 5.06e6;
 long zzzz = 49.99975e6;

Aren't longs also supposed to be ints? How's chipKIT handling this?

Thanks, James


majenko

Wed, 07 Aug 2013 08:14:53 +0000

On a small MCU (8 or 16 bit) an int is 16 bits - i.e., -32768 to +32767

The PIC32 is 32-bit though. An int on a 32-bit device is 32 bits. That is -2147483648 to 2147483647.

For specifically 16 bits of data there is the "short int" type (or just "short" for short [pun intended]).


avenue33

Wed, 07 Aug 2013 08:52:24 +0000

Why not use the [u]int{8|16|32|64}_t types instead of the misleading short, word, int, and long ones?

An int16_t stays the same size whatever the platform, while int means different things on an 8-, 16-, or 32-bit MCU.


majenko

Wed, 07 Aug 2013 08:59:53 +0000

Why not use the [u]int{8|16|32|64}_t types instead of the misleading short, word, int, and long ones? An int16_t stays the same size whatever the platform, while int means different things on an 8-, 16-, or 32-bit MCU.

Because the intXX_t family of types is implementation-specific - i.e., they are defined in header files, and as such aren't portable. Yes, the majority of systems implement them, but not all, and not all name them the same. I have seen uint_32, uint32_t, uint32...

char is 8 bits regardless.
A short int is always 16 bits.
A long int is always 32 bits.
A long long int is always 64 bits, if implemented.

These types (except the long long int, which is optional) will always be implemented, and will always be the same size.

An int by itself is implementation-specific and usually selects one of "short int" or "long int" depending on the target architecture.


avenue33

Wed, 07 Aug 2013 09:42:47 +0000

The [u]int{8|16|32|64}_t family of types is defined in stdint.h, a standard and ubiquitous C99 header, and is implemented in Arduino, chipKIT/MPIDE, LaunchPad/Energia, Digispark, Teensy/Teensyduino and Wiring.

The only notable exception is Maple/MapleIDE.

The main benefit is that you know how many bits you have just by reading the name of the type.

Anyway, what's important is to have access to the [u]int{8|16|32|64}_t types.


jmlynesjr

Wed, 07 Aug 2013 14:05:08 +0000

Ok. Thanks. Great information on the type definitions. So a PIC32 int is 4 bytes.

150000 is well below 2147483647, so that one is not an issue from a magnitude standpoint.

What about the assignment part of the question? Assigning a number that includes a decimal point to an int type? Assigning a number in scientific notation to an int type?

Like: long xxxx = 49.99975e6, what does xxxx end up equal to internally, and why? 49000000 or 49999750?

The code I am reviewing works, but I'd like to make sure it's not working by accident to save future issues.

Thanks, all,

James


majenko

Wed, 07 Aug 2013 14:53:11 +0000

The compiler takes a chisel, places it on the decimal point, and smacks it with a hammer.

Anything after the . is discarded.

139847.430739 is truncated to 139847.
0.00000004 is truncated to 0.
49.99975e6 is 49.99975 * 10^6, which is 49999750, so it remains as it is.
The xxxey format is just a human representation of the number; the compiler uses the actual number in its raw form.


jmlynesjr

Wed, 07 Aug 2013 17:49:22 +0000

Ok, that's why it works.

I was expecting truncation in all cases where a decimal point was used.

I'm surprised the xxxey notation works as stated. I guess my mind is still stuck somewhere in the '70s and FORTRAN IV, where xxxey would only have related to floats.

I don't like the style, but I'll accept it. :D

Thanks again, James