|
Not necessarily. For example, if you're writing cross-platform code that reads a particular binary file format, and you know the files are always written in Intel order, you can write ReadShort and ReadInt functions that convert those data types on the fly to the order appropriate for the machine running the code.
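A minimal sketch of such a pair, assuming the file data is in Intel (little-endian) order; the names ReadShort/ReadInt come from the post above, everything else is illustrative. Building the value byte by byte makes the result correct on any host, regardless of its own byte order:

```cpp
#include <cstdint>
#include <cstdio>

// Read a 16-bit value stored in the file in little-endian order.
static uint16_t ReadShort(FILE* f)
{
    unsigned char b[2] = {0, 0};
    if (fread(b, 1, 2, f) != 2) return 0;
    return static_cast<uint16_t>(b[0] | (b[1] << 8));
}

// Read a 32-bit value stored in the file in little-endian order.
static uint32_t ReadInt(FILE* f)
{
    unsigned char b[4] = {0, 0, 0, 0};
    if (fread(b, 1, 4, f) != 4) return 0;
    return static_cast<uint32_t>(b[0])
         | (static_cast<uint32_t>(b[1]) << 8)
         | (static_cast<uint32_t>(b[2]) << 16)
         | (static_cast<uint32_t>(b[3]) << 24);
}
```

Because the code never reinterprets a multi-byte value in memory, it needs no `#ifdef` and no knowledge of the host's endianness.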
Cleek | Image Toolkits | Thumbnail maker
|
|
|
|
|
Since endianness is fixed for a particular platform (I don't know of any platform/CPU architecture with variable endianness), you can make these decisions at compile time rather than at runtime. A good side effect of handling endianness at compile time is that you always get the best possible performance in scenarios where the endianness of the host and the data match.
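A sketch of what compile-time selection might look like; MY_BIG_ENDIAN is a hypothetical macro that a build system would define for big-endian targets. When the host order already matches the data order, the conversion compiles down to nothing:

```cpp
#include <cstdint>

// Unconditional 32-bit byte swap.
static inline uint32_t SwapBytes32(uint32_t v)
{
    return (v >> 24)
         | ((v >> 8) & 0x0000FF00u)
         | ((v << 8) & 0x00FF0000u)
         | (v << 24);
}

#ifdef MY_BIG_ENDIAN
// Big-endian host: little-endian data must be swapped.
static inline uint32_t LittleToHost32(uint32_t v) { return SwapBytes32(v); }
#else
// Little-endian host (e.g. Intel): no conversion needed, zero cost.
static inline uint32_t LittleToHost32(uint32_t v) { return v; }
#endif
```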
--
100% natural. No superstitious additives.
|
|
|
|
|
Jörgen Sigvardsson wrote: A good side effect of handling endianness at compile time is that you always get the best possible performance in scenarios where the endianness of the host and the data match.
of course.
I was just pointing out that it's possible (if not necessarily ideal) to handle it at run time, too.
Cleek | Image Toolkits | Thumbnail maker
|
|
|
|
|
We use MIPS cores, and our reference designs have jumpers that let you select the endianness.
There can be a test at startup to make sure the jumper setting matches the code.
Elaine
The tigress is here
|
|
|
|
|
Endian jumpers! Who would've thought!
--
100% natural. No superstitious additives.
|
|
|
|
|
There are/were some file formats that specified the endianness of the file in the header.
This allowed the file to be saved in whichever byte order was optimal for the type of machine that would be reading/writing the data the most.
There are also formats that use both types:
http://ccrma.stanford.edu/courses/422/projects/WaveFormat/
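TIFF is a concrete example of a format that tags its own byte order: the first two header bytes are "II" (Intel, little-endian) or "MM" (Motorola, big-endian). A small sketch of a detector for such a marker; the enum and function names here are made up for illustration:

```cpp
// Byte order declared by a file's own header.
enum FileOrder { ORDER_LITTLE, ORDER_BIG, ORDER_UNKNOWN };

// Inspect a two-byte order marker, TIFF-style:
// "II" = Intel (little-endian), "MM" = Motorola (big-endian).
static FileOrder DetectOrder(const unsigned char header[2])
{
    if (header[0] == 'I' && header[1] == 'I') return ORDER_LITTLE;
    if (header[0] == 'M' && header[1] == 'M') return ORDER_BIG;
    return ORDER_UNKNOWN;
}
```

A reader branches once on the marker and then picks the matching set of conversion routines for the rest of the file.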
...cmk
Save the whales - collect the whole set
|
|
|
|
|
Yeah, but then you wouldn't be testing the system itself, just the input from the file header.
--
100% natural. No superstitious additives.
|
|
|
|
|
True, you wouldn't need to test your system directly - if you hard-coded its endianness into your code.
If you want portable code, then you will need functions such as:
SysToBig( long& ), BigToSys( long& ), SysToLittle(), ...
Those functions need to know whether they are a no-op (e.g. SysToLittle() on a PC) or whether they actually perform a swap.
I generally treat endianness like socket code: always call the ntohl()/htonl() family of functions.
On a PC they do swaps, but on big-endian systems they are no-ops.
_I_ don't need to know the system endianness, but the ntohl() functions do.
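A minimal sketch of such a SysToBig/BigToSys pair. The names come from the post; the implementation is illustrative, and it detects the host order at run time rather than hard-coding it, so the calling code never needs to know the system endianness:

```cpp
#include <cstdint>

// Detect host byte order once: write a 1 into a 16-bit value and
// look at which byte it lands in.
static bool HostIsLittleEndian()
{
    const uint16_t probe = 1;
    return *reinterpret_cast<const unsigned char*>(&probe) == 1;
}

// Convert a host-order value to big-endian (network) order in place.
// No-op on a big-endian host; a byte swap on a little-endian one.
static void SysToBig(uint32_t& v)
{
    if (HostIsLittleEndian())
        v = (v >> 24)
          | ((v >> 8) & 0x0000FF00u)
          | ((v << 8) & 0x00FF0000u)
          | (v << 24);
}

// The swap is its own inverse, so the reverse direction is identical.
static void BigToSys(uint32_t& v)
{
    SysToBig(v);
}
```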
...cmk
Save the whales - collect the whole set
|
|
|
|
|
I too use the ntohl() functions, mainly for the reasons you mention. It's foolproof.
--
100% natural. No superstitious additives.
|
|
|
|
|
|
Joshua Quick wrote: That means endianness is not determined at compile time.
Yes it is. Neither PowerPC nor Intel has variable endianness. Nor is it possible to run Intel code directly on a PowerPC or vice versa, so it's not as though the same code is shared between the platforms. The Universal Binary contains a section for each platform (I think Apple calls them bundles or forks or something like that) - copies of each other. The Intel section is executed on an Intel machine, and the PPC section on a PowerPC.
The data, however, may be of varying endianness - I don't know. The guidelines you linked to only speak of data having different byte orders, and say you have to guard against that - which may well be a real problem, as the OS is released on platforms with differing endianness.
|
|
|
|
|
Jörgen Sigvardsson wrote: Yes it is. Neither PowerPC nor Intel has variable endianness.
I was referring to the Apple "development" platform.
You are correct though. A universal binary does contain both a PowerPC and Intel build of the executable.
|
|
|
|
|
union
{
    int iValue;
    BYTE bytes[sizeof(int)];
};  // local anonymous union: iValue and bytes alias the same storage

iValue = 1;
const bool littleEndian = ( bytes[0] == 1 );  // true on Intel, false on PPC
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Peter Weyzen
Staff Engineer
SoonR Inc. - http://www.soonr.com
|
|
|
|
|
It seems to be a bug in VC++ 6 that RFX_Text, even in Unicode mode, converts the input Unicode text into ANSI. That's no problem if every character is an ANSI character, but for Asian characters, e.g. Chinese, the input text to RFX_Text gets cut in half, so the text coming back from the database is half of the original input. I think this is caused by RFX_Text's ANSI conversion under Unicode mode. Does anyone have any idea how to solve this problem? Thanks in advance.
|
|
|
|
|
Well, if your database backend is using single-byte characters, then you're not going to get far, so that's item #1 to check.
Other than that, VC6 comes with source (including RFX_Text), so you could write your own version that doesn't convert down to ANSI.
Steve S
Developer for hire
|
|
|
|
|
Hello!
How can I store data easily and for free? I tried mysql++, but it conflicts with managed classes. I would like the advantages of basic SQL commands, but without having to buy a server...
Thank you in advance,
Alin Stoian
|
|
|
|
|
|
Sorry, maybe I didn't make myself very clear.
I want to use it in a C++ program. Are there any libraries for Access?
Alin Stoian
|
|
|
|
|
Yes, please see my post again.
whitesky
|
|
|
|
|
Yup. If you're using VC6, there's the CDao* classes.
Personally, I'd use the ATL Database classes. Of course, these aren't available in the Express edition of 2005. You can use ADO classes, or ODBC as well.
There is no 'Access' library per se, since Access uses either a desktop engine (MSDE) which is API compatible to SQL Server, or it can use JET, which uses DAO or ISAM internally, depending on the data source.
Steve S
Developer for hire
|
|
|
|
|
|
hi all
I have to read (and write) a large amount of data from (into) a file.
I'm trying to decide whether to load all the data into a buffer at once (using the CFile::Read() method), modify it, and write it back (using the CFile::Write() method),
or to loop with fread and fwrite, reading 5 bytes (for example) at a time, modifying each 5-byte chunk, writing it back, and so on.
Which do you think will be the faster solution?
Thank you.
|
|
|
|
|
big_denny_200 wrote: large amount of data
So a good approach is probably to limit the number of disk accesses; that's a reason to read and write the data in a single access, using Read/Write and one big buffer.
If that would require too much memory, there is the useful class CArchive (very simple to use), where you can set the buffer length to a large value (like 10000 or more) and still reduce the number of disk accesses while fitting the amount of memory available.
|
|
|
|
|
If you are going to be modifying almost all of the data you read in, then it *might* be faster to load, modify, and write all at once; however, that would also lock your program up. If you need to keep things responsive, then doing the work in a second thread, using a loop, might make more sense - and use a much larger chunk than 5 bytes at a time, maybe 1024 or 4096 bytes.
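A portable sketch of the chunked approach. The thread discusses MFC's CFile, but plain stdio and a hypothetical TransformFile function are used here so the example stands alone; the XOR is just a placeholder for whatever modification you actually need:

```cpp
#include <cstdio>
#include <vector>

// Read, modify, and write a file in 4096-byte chunks instead of
// 5-byte ones, drastically cutting the number of I/O calls.
static void TransformFile(FILE* in, FILE* out)
{
    std::vector<unsigned char> buf(4096);
    size_t n;
    while ((n = fread(buf.data(), 1, buf.size(), in)) > 0)
    {
        for (size_t i = 0; i < n; ++i)
            buf[i] ^= 0xFF;  // placeholder modification: invert each byte
        fwrite(buf.data(), 1, n, out);
    }
}
```

The same structure works with CFile::Read/CFile::Write; only the I/O calls change, and the chunk size remains the knob that trades memory for fewer disk accesses.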
¡El diablo está en mis pantalones! ¡Mire, mire!
Real Mentats use only 100% pure, unfooled around with Sapho Juice(tm)!
SELECT * FROM User WHERE Clue > 0
0 rows returned
Save an Orange - Use the VCF!
|
|
|
|
|
I have an owner-drawn list box.
When I call GetScrollPos while scrolling, it always returns 0.
How can I solve this?
-Sarath
|
|
|
|