|
One other thing I can think of - what's the value of settings.getAccessesToSimulate() ? We've shown that your file reading code is correct - but are you reading the right number of bytes?
|
|
|
|
|
Sorry for the late reply.
That was the first thing I checked, and I did get it fixed (I wasn't making a constant reference, of all things!). The number I'm currently using is 1000, though it could range from 0 to 1,000,000.
This is on Fedora, if it makes any difference, though it shouldn't.
|
|
|
|
|
I've just tried your code on OS X - worked fine there as well.
One last suggestion - you could try using the C file handling functions (open, read, close), like so:
#include <fcntl.h>   // open, O_RDONLY
#include <unistd.h>  // read, close

int fd = open(fname, O_RDONLY);
const int nBytesRead = read(fd, buffer, 1000);
close(fd);
That at least tells you how many bytes it's read?
[edit]In Visual C++, this probably ought to be:
#include <io.h>      // _open, _read, _close
#include <fcntl.h>   // _O_RDONLY, _O_BINARY

int fd = _open(fname, _O_RDONLY|_O_BINARY);
const int nBytesRead = _read(fd, buffer, 1000);
_close(fd);
VC++ uses the "ISO C++ conformant" names rather than the POSIX API names...ho-hum.
[/edit]
On the same note - std::ifstream has a method called gcount that tells you how much the last read actually read:
binFile.read(buffer, 1000);
std::cout << binFile.gcount() << std::endl;
Just tried both of those on OS X - both give the expected results with files that contain embedded NULLs.
modified on Thursday, December 11, 2008 7:00 PM
|
|
|
|
|
This is insane.
After restarting the computer, things worked perfectly, the way they should have in the first place. Why would something like this happen?
Thanks much Stuart, you and the rest helped, and I got to (re)learn a few more things.
Cheers
|
|
|
|
|
Mustafa Ismail Mustafa wrote: After restarting the computer, things worked perfectly
Don't you hate computers sometimes - as recalcitrant as a small child
|
|
|
|
|
Stuart Dootson wrote: Don't you hate computers sometimes
With extreme prejudice. I have a computer that I'd like to introduce to a baseball bat at 90mph. One day I will, instead of relegating it to our family "museum".
I have a slightly different question now. Bear with me, it's been a few years since I've worked this "heavily" with C++.
A different binary file is being read, and this time I need to clump every 4 bytes into one structure, which would be a member of an array of its type. Suggestions?
|
|
|
|
|
Mustafa Ismail Mustafa wrote: A different binary file is being read, and this time I need to clump every 4 bytes into one structure, which would be a member of an array of its type. Suggestions?
Pffff - kind of depends on a lot of things, like:
- Does the file have the same endian-ness as the computer you're working on?
- Do you know how many elements the array has?
- What format is the 'structure' you talk about?
Possible solutions include:
- Interpreting the file byte-by-byte, to read into the structure
- Reading the file a structure (four bytes) at a time, assuming that the structure can be represented as four contiguous bytes in memory. I'd use #pragma pack to do that, with a static assert[^] to ensure that the structure really takes up four bytes in memory. If endian-ness is an issue, then you can reverse bytes at this point.
- Reading the whole array in, if you know how many members it has. To ensure that padding doesn't get in the way, you could read in the correct number of bytes (sizeof structure * element_count), then use a byte pointer to step through the bytes, casting the pointer to a structure pointer when accessing the bytes.
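A minimal sketch of the pack-and-read approach (the four-byte record layout here is my own invention, and it assumes the file and the machine are both little-endian):

```cpp
#include <cstdint>
#include <fstream>
#include <vector>

#pragma pack(push, 1)                // no padding between members
struct Record {                      // hypothetical 4-byte record layout
    std::uint16_t id;
    std::uint16_t value;
};
#pragma pack(pop)

// C++11 static_assert; older compilers need a macro-based static assert
static_assert(sizeof(Record) == 4, "Record must be exactly 4 bytes");

std::vector<Record> readRecords(const char* fname)
{
    std::ifstream in(fname, std::ios::binary);
    std::vector<Record> records;
    Record r;
    // read() stops at end-of-file; gcount() confirms a full record arrived
    while (in.read(reinterpret_cast<char*>(&r), sizeof r) &&
           in.gcount() == sizeof r)
        records.push_back(r);
    return records;
}
```

If the file's endian-ness differs from the machine's, this is the point where you'd byte-swap each member after reading.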
Yes, I've done this stuff a fair bit. From reading and interpreting executables (not just PC ones - that's where the different endian-ness has come in) through decoding data received over a serial link to reading and interpreting gigabyte-sized data files logged when testing embedded systems.
I generally use memory-mapping so I don't need to bother with reading the files myself. In the case of gigabyte size files, this means mapping different bits of the file at different times, 'cause there's not enough RAM/address space to map the whole thing at once.
I find that using custom iterators (with the help of the Boost Iterator library[^]) helps, as you can abstract the conversions/file access/pointer incrementing into the iterator, allowing you to concentrate on the structures you're actually reading from the file. This especially helped with the big memory-mapped file thing, as I could hide the mapping/unmapping of the file, and (in effect) get a random access iterator over the file that meant I never had to deal with the file *as a file*. That was sweet - maybe I ought to write an article about it.
In general, I'd advise using layering in your application's architecture, so you deal with the details of the binary file *in a single, self-contained place*, allowing you to deal with higher-level concepts without worrying about the lower-level ones.
|
|
|
|
|
I solved it by reading a 4-byte chunk at a time, throwing that into the struct of the array at that point, and updating the file position using seekg().
Thanks for everything Stu. If you ever come Jordan ways, look me up, I'll stand you a couple of pints
|
|
|
|
|
Hi,
your code looks fine to me, at least the part shown.
I think all the data is there, and the code that follows is doing something wrong.
IIRC the read function returns a byte count, you should look at it to make sure.
|
|
|
|
|
I'll do that to double-check. Currently it's hacked so that it reads character by character into a vector (this works 100%), which I then return. I just don't want to read it character by character.
As a side note, after I've read the values into "buffer" in the code, it has 4 characters. sizeof gives me 4. read, get, and getline all break at the \n, which is their default delimiter; I'm looking for a way to ignore any delimiters and read the raw byte values.
|
|
|
|
|
When the data is binary, there are no delimiters by default; however, you should not use anything that is string-oriented, since string methods will react to special characters such as NULL and sometimes NEWLINE.
AFAIK writing a char array to the console in C++ is considered a string operation, so it will stop at the first NULL, interpret every NEWLINE, TAB, etc.
Not sure what sizeof is supposed to do on a char*.
getline for sure is a string operation, not applicable to binary data.
etc. etc.
One piece of advice: fix your reading code until you are completely satisfied, and add error-detection code;
do this before you start using your data, since right now you are building on quicksand, and that will keep you doubting everything and hence slow you down.
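A binary-safe read with that kind of error detection might look like this (the function name is mine, not from the original code):

```cpp
#include <cstddef>
#include <fstream>
#include <vector>

// Read up to maxBytes raw bytes; returns how many actually arrived.
// std::ios::binary stops newline translation, and read() itself
// never treats '\n' or '\0' as a delimiter.
std::size_t readChunk(const char* fname, std::vector<char>& buffer,
                      std::size_t maxBytes)
{
    std::ifstream binFile(fname, std::ios::binary);
    if (!binFile)
        return 0;                        // error detection: open failed
    buffer.resize(maxBytes);
    binFile.read(&buffer[0], maxBytes);
    buffer.resize(binFile.gcount());     // keep only what was read
    return buffer.size();
}
```

Checking gcount() after every read is exactly the sort of error detection that stops you building on quicksand.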
|
|
|
|
|
Luc Pattyn wrote: when the data is binary, there are no delimiters by default
That's what I thought.
Luc Pattyn wrote: AFAIK writing a char array to the console in C++ is considered a string operation, so it will stop at the first NULL, interpret every NEWLINE, TAB, etc.
That is exactly what I was expecting, so when I was printing it out, I was using cout << hex << buffer[i] << endl; in the loop, but obviously it didn't work. Anyways, I started printing out the data that way to see what was wrong. There is something wrong with the reading and that's why I've hacked it at the moment. I'll be back to it when I have more time.
Luc Pattyn wrote: Not sure what sizeof is supposed to do on a char*.
Gives you the number of bytes and since a char is one byte, it effectively gives you the number of elements in it
|
|
|
|
|
Mustafa Ismail Mustafa wrote: cout << hex << buffer[i] << endl
as buffer is a char*, buffer[i] is a char and gets output as such.
If you want to see its numeric value, you must do something to it, as a minimum cast it to an int.
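For instance, a small helper along these lines (the function is my own, the key point is the double cast):

```cpp
#include <sstream>
#include <string>

// Format one raw byte as hex; cast via unsigned char first so
// negative chars don't sign-extend into 0xffffff80-style values.
std::string byteToHex(char c)
{
    std::ostringstream os;
    os << std::hex << static_cast<int>(static_cast<unsigned char>(c));
    return os.str();
}
```

Then the loop becomes std::cout << byteToHex(buffer[i]) << std::endl; and you see numbers instead of raw characters.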
|
|
|
|
|
Luc Pattyn wrote: If you want to see its numeric value, you must do something to it, as a minimum cast it to an int.
Eh? I will try that as soon as I finish the procedure I'm writing.
|
|
|
|
|
You are reading a binary file, why do you need a delimiter? Just read whatever number of bytes you need to read and be done with it.
|
|
|
|
|
That's what I was used to, but for some reason I cannot understand, every time I hit the \n delimiter, it breaks the read; in my particular case, instead of reading 1000 bytes it's reading 4: 3 characters + the delimiter.
|
|
|
|
|
Mustafa Ismail Mustafa wrote: When reading a binary file, and you have no clue what the delimiter...
Taken at face value, these two seem to be a bit contradictory.
Mustafa Ismail Mustafa wrote: Is there a single function that can do that...
Do what? Read the entire file into a buffer?
"Love people and use things, not love things and use people." - Unknown
"The brick walls are there for a reason...to stop the people who don't want it badly enough." - Randy Pausch
|
|
|
|
|
DavidCrow wrote: Taken at face value, these two seem to be a bit contradictory.
Believe me I know. I'm a bit out of practice with C++, been a couple of years at least and I hit this snag.
DavidCrow wrote: Do what? Read the entire file into a buffer?
Read n number of bytes without being stopped by the delimiter. Any delimiter. And this is the dilemma, since binary files should not have a delimiter!
|
|
|
|
|
I've used this function without problem for binary files (MPEG2 TS) before.
|
|
|
|
|
Using C++: from two sets like (1,2) and (+,-), how do I generate a list like
[(+1,+2), (-1,+2), (+1,-2), (-1,-2), (+2,+1), (+2,-1), (-2,-1), (-2,+1)], but not (-1,-1), etc.?
Thanks!
|
|
|
|
|
Have you tried looking at the ADT Set?
|
|
|
|
|
Where can I find more about the ADT set?
And is there any book you'd recommend?
Thanks!
|
|
|
|
|
For books, I'd recommend "The C++ Standard Library: A Tutorial and Reference" by Nicolai M. Josuttis.
However, you can find an excellent source about all the Abstract Data Types here: Clickety[^] and here: Clickety[^]
|
|
|
|
|
|
No worries mate
|
|
|
|
|