Jan Engelhardt wrote:
> Bit representation is left to the CPU, so 1 << 1 will always be 2,
> regardless of whether the byte, when sent out to the network,
> is 01000000 or 00000010. Endianness becomes important as soon
> as the packet is on the network, of course.
Well yes, that's why I'm asking. I'm not concerned with how the data looks
from just the CPU's perspective.
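(As a quick illustration of that point, here's a minimal userspace program,
nothing kernel-specific: the shifted value is fixed by the language, while
the byte you see at the lowest address depends on the CPU's byte order:)

#include <stdio.h>
#include <string.h>

int main(void)
{
	unsigned int word = 1 << 1;	/* always 2, on any CPU */
	unsigned char first_byte;

	/* the lowest-addressed byte reveals the byte order */
	memcpy(&first_byte, &word, 1);
	printf("value = %u, first byte in memory = %u\n", word, first_byte);
	/* prints "2, 2" on little-endian, "2, 0" on big-endian */
	return 0;
}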
I'm writing a driver that talks to hardware that has a shift register. The
register can be shifted either left or right, so the bits obviously have to
be in order, but that order can go either way.
What I want to do is have the driver detect when byte-endianness doesn't
match bit-endianness when it writes the word to a memory-mapped device. I
think I can do that like this:
#include <asm/byteorder.h>

#if (defined(__LITTLE_ENDIAN) && defined(__BIG_ENDIAN_BITFIELD)) || \
    (defined(__BIG_ENDIAN) && defined(__LITTLE_ENDIAN_BITFIELD))
#error "This CPU architecture is not supported"
#endif
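(If the driver ever needed to cope with the mismatch instead of refusing to
build, one option would be to reverse the bit order before the MMIO write.
A minimal sketch, assuming a 32-bit register: shiftreg_write() and the
SHIFTREG_REVERSE_BITS condition are hypothetical placeholders for this
driver, but bitrev32() is the stock kernel helper from <linux/bitrev.h>:)

#include <linux/bitrev.h>
#include <linux/io.h>

/*
 * Hypothetical helper: reverse the bit order of a word before
 * writing it to the memory-mapped shift register. bitrev32() is
 * the standard kernel routine; SHIFTREG_REVERSE_BITS stands in
 * for whatever condition the real driver would use.
 */
static void shiftreg_write(void __iomem *reg, u32 val)
{
#ifdef SHIFTREG_REVERSE_BITS
	val = bitrev32(val);
#endif
	iowrite32(val, reg);
}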
--
Timur Tabi
Linux Kernel Developer @ Freescale