Why am I asking that? Because even though _open returns 3, the subsequent read of the boot sector has failed, saying that my boot device is not NTFS:
if (!ntfs_boot_sector_is_ntfs(bs))
	errno = EINVAL;
BOOL ntfs_boot_sector_is_ntfs(NTFS_BOOT_SECTOR *b)
{
	BOOL ret = FALSE;

	ntfs_log_debug("Beginning bootsector check.\n");

	ntfs_log_debug("Checking OEMid, NTFS signature.\n");
	if (b->oem_id != const_cpu_to_le64(0x202020205346544eULL)) { /* "NTFS    " */
		ntfs_log_error("NTFS signature is missing.\n"); // <--- my code runs through here
		...
Of course, this debugging session was run in admin mode.
I haven't gotten around to checking this out myself yet, but I am studying the "Windows Internals" book by Mark Russinovich (the guy who created the Sysinternals suite). There I found the object name \Device\HarddiskX\DRX (with 'X' being replaced by a digit from 0 upwards; you can find it using the Sysinternals WinObj utility).
It is not clear to me when to use this name and when to use the \Global??\PhysicalDriveX name. Russinovich writes that "The Windows application layer converts the name to \Global??\PhysicalDriveX before handing the name to the Windows object manager" - it seems the PhysicalDriveX format is some old legacy format. It is far from clear to me!
So you may try a \Global??\ prefix, or you might try \Device\HarddiskX\DRX (apparently with X replaced by 2 in your case). When you find out what works, let us know, and I will use it when I get that far myself!
I was not pointing out the name to be used -- I was pointing out the access that must be used. Since your disk name is correct (assuming that drive 2 exists), the access mode seems like a good subject for investigation. A quick Google search indicates that the access mode is OS-dependent when using the open function.
Be wary of strong drink. It can make you shoot at tax collectors - and miss. Lazarus Long, "Time Enough For Love" by Robert A. Heinlein
It's a pointer back to the texture (surface) data.
(I found people do this:
I don't understand how void works though, Richard. Isn't a variable a sequence of bytes? A long is 8 bytes, so is the data split in steps of 8? That doesn't make sense, since a pixel is made of 3, maybe 4, bytes.
It's just a way of allowing the compiler to generate the correct code for a pointer, without needing to know what it actually points to. It is often used when the ultimate data may be more than one type. In order to access the actual content you need to use a cast, like you have shown above.
As to the structure of the real data, you will need to look at the documentation.
Your struct contains one int and one pointer. If the second member said int* rather than void*, it would be exactly the same at run time - but at compile time, you would be warned if you tried to set .pBits to point to anything but an int. A void* can be set to point to anything.
Remember that in C, a pointer is nothing but a runtime address - no type info, no size info. An array name is a pointer to the start of a memory area; the index is an offset from this start address (the index value must be multiplied by the element size to get the offset in bytes). The runtime knows only the start address, and nothing of index limits or element type. In memory, a 1-element DWORD array is identical to a single DWORD variable. It only looks like an array because you write code that addresses memory locations at some offset.
If you ask the compiler to address something pointed to by .pBits, the compiler doesn't know what to find there. Do you want to fetch a single byte from memory? Or a DWORD? Maybe .pBits really is a pointer to a struct, and you want to address a struct member at a given offset (i.e. member name). You have to tell the compiler how to interpret the pointer. This is because you haven't declared it as e.g. a DWORD* but as a void*.
In your example, the programmer tells the compiler: "Treat .pBits as a DWORD*, a pointer to a double word!" Here, the pointer itself is copied to another pointer, which can be used to access the DWORD pointed to - that is just a convenient shorthand. Whether the following code uses the typed DWORD* 'pbits' or '(DWORD*)lockedrect.pBits' makes no difference (at least not until you want to change one of the two pointers without changing the other one).
I can't tell why the void* was cast to a DWORD*. My guess, from the name D3DLOCKED_RECT, is that .pBits points to a 3D coordinate, like a 3-element array (or maybe even an array of 3D points). The code you quote wants to manipulate the 3 values as a single unit (e.g. for more efficient moving/copying). Why is a void* used instead of a typed pointer? Probably because the struct can be used with different resolutions (maybe that is what the .pitch member indicates?): in some applications the coordinates are represented by three 16-bit values, in others by three 32-bit values. In low-resolution applications it could even be three 8-bit values. You cast it to whatever coordinate size you use.
The quoted code makes it look as if the coordinates could be three 64-bit values. I very much doubt that any graphics system would use 64-bit coordinates (unless you are making a 3D model of the known universe...). So the use of DWORD is probably just for efficiency, doing the moving/copying in as few operations as possible. You could probably find that out by reading the rest of the code where you found this line.
What else would it be? In C, a pointer is an address, nothing more.
In other languages, such as C#, a reference is the memory address of an object, comparable to a struct, containing not only the values of the members but also a pointer to another struct, the class object, with pointers to the various member functions. Also in C#, an array reference is a pointer to a "struct" providing the index limit of the array. If you go back to good old Pascal, any string was headed by its length, and any array by fields indicating its upper and lower index limits.
Not so with C. For reasons of efficiency, no space could be wasted on such extras.
If you are programming in C++, a pointer to a class instance is similar: it points to a struct augmented by a reference to a class object (which may in turn have a pointer to a superclass object, with a pointer to an even more super object, and so on up to the very top object class with the attributes common to all objects). When you call a method on some bottom-layer object, a search through this hierarchy is made to find the method pointer. For virtual methods defined at a high or intermediate level, a pointer may be found at a lower level than where the virtual function is defined, and different (sub)class objects may provide different pointers to their respective implementations of the virtual functions. Static members at various levels of subclassing may be located in the class objects, common to all subclasses.
If you implement an array in C++ as a class with an array member, you may of course store the maximum index as another class member and route all accesses through a member function verifying that no access violates the index limit. Roughly speaking, you could say that that's what is happening in C# (or good old Pascal). But both for "performance" reasons (don't make any hard tests! You'd be disappointed!) and for backwards compatibility with classical C, an array name is nothing but a pointer (where you don't have to write the *), and a pointer, whether an array name or an explicit one (requiring a * for dereferencing), is nothing but a memory address.
Oldtimers remember the BASIC functions PEEK(address) and POKE(address) for reading/writing any value at 'address'. I am not sure that C compilers of today allow you to read/write the "array element" at 0[address]. In my student days they did. I certainly hope that they do not today... But then again, there is nothing in the C syntax rules prohibiting it, so why shouldn't you use it?
Additional comment: In the 1980s, there was an intense discussion about whether a data item should provide information about its type or not. If the data item knows whether it is 8, 16, 32 or 64 bits, float or integer, or whatever, it is represented once, not in every single instruction operating on it. The instruction code would not need different formats (/additional bits) to distinguish between e.g. integer add and floating add. So instructions - of which there are usually a lot more than values - would be a lot more compact, and the risk of interpreting a value in the wrong format significantly reduced.
An essential element of this approach was to store index limits once, with the array. If every array access makes its own check against the array limits, the code/data for this check can grow significantly - way beyond what you are aware of when reading the code for a single access. It looks so tempting: 'I do the check only when there is a risk of exceeding the array limits, and save it when there is not.' I am 100% sure that the compiler (or even the runtime system) is a lot more clever than you at making that decision. In languages with runtime array (/object) descriptors, it is possible. In C, or C++ in the classical C usage pattern, it is quite difficult.
The proprietary language in which I worked for many years (designed in the late 1970s) had descriptors built in. The compiler knew the size of each array element, and a descriptor was a pointer to the beginning of the array (as in C++) and its size (number of elements). The compiler then generated code to perform bounds checking on an index at run-time.
If the size of the array was known at compile time, a descriptor wasn't needed; the compiler just did bounds checking against the fixed length. Descriptors were used for dynamically allocated arrays or to reference a subset ("slice") of a larger array.
Much more recently, C++ has added std::span to the STL, which does the same thing.
That is a historic fact; nowadays, with C99, C11 and beyond, you might more correctly use a uintptr_t.
All you really want is the address, but how big is that address? It could be 16 bits on a small microcontroller, 32 bits on a large CPU, or 64 bits on a 64-bit CPU. A void* was usually big enough to ensure it had enough bits to point to any valid address on the CPU, so the size of a void* is completely compiler-dependent. Back in the day there was also another feature: a void* could be cast to and from any other pointer type without warning. The reason is obvious: you want to be able to copy the address to a typed pointer and use normal C pointer operations.
Now move forward and look at uintptr_t. This is the C99 definition in stdint.h; for portability it is required under XSI-conformant systems:
"an unsigned integer type with the property that any valid pointer to void can be converted to this type, then converted back to pointer to void, and the result will compare equal to the original pointer".
It acts like a void* with one piece of safety added: the conversion to the integer type and back again is guaranteed to compare equal, which it never was with a void* and which occasionally cropped up as a problem, and you can safely do it with any pointer type.
It also means that when you look at it in a debugger, it shows as an unsigned integer rather than a pointer, which is more in keeping with what it really is ... an address somewhere in the CPU's memory space.