Consider that I have a hex value in a uint8_t array. How can I convert it to a decimal value?

Objective-C
uint8_t one[] = {0x00, 0x00, 0xea, 0x60};
int a = *one;
NSLog(@"%d", a);

The output is: 0

I have tried something like this:
Objective-C
uint32_t one = 0x0000EA60;
int a = one;
NSLog(@"%d", a);

Now I get the output 60000.

Why does this happen?
In my project I have uint8_t variables (for other reasons), so please guide me on how to convert uint8_t to uint32_t values.

## Solution 1

You must build the 32-bit value by OR-ing the byte elements together, each shifted to its correct position:
C++
uint32_t a = (one[3] << 24) | (one[2] << 16) | (one[1] << 8) | one[0];

[EDIT]
It must of course be:
C++
uint32_t a = (one[0] << 24) | (one[1] << 16) | (one[2] << 8) | one[3];

[/EDIT]

You may also use C casting:
C++
uint32_t a = *((uint32_t*)one);

But casting requires that the endianness (byte order) of your system matches the byte order used in your one array.

SViki 20-Mar-15 4:47am
But the output is 1625948160, which is not 60000 in this case.
Jochen Arndt 20-Mar-15 5:13am
You are right. I used the wrong byte order (I had noted that this is important but did not follow my own advice). Use:
(one[0] << 24) | (one[1] << 16) | (one[2] << 8) | one[3];

## Solution 2

C++
int a = *one;

is the same as
C++
int a = one[0];

hence the zero result.

SViki 20-Mar-15 5:06am
Yeah, you are right, CPAllini. How can I overcome this?

## Solution 3

Objective-C
unsigned char bytes[] = {0x00, 0x00, 0xea, 0x60};
NSData *data = [NSData dataWithBytes:bytes length:4];
NSData *timeInt = [data subdataWithRange:NSMakeRange(0, 4)]; // Ex: 0000ea60
uint32_t interval = CFSwapInt32BigToHost(*(uint32_t *)[timeInt bytes]);
NSLog(@"%u", interval);

The output is: 60000 :-)