Hi
I used the code to calculate the intersection, if any, between two lines, but how do I do it between lines and arcs, and between arcs and arcs?
The last time I did math was already 20 years ago ....
Johan
These primitives can all be represented in parametric form, so their intersections are solutions to the parametric equations. Dave Eberly's Geometric Tools site has code and documentation explaining the concepts for each pair combination.
Geometric Tools[^]
A side note: for regular arc-to-arc, arc-to-line, or arc-to-line-segment intersections, you can perform the intersection on the arc's underlying circle (making it circle-circle, circle-line, or circle-line-segment) and then determine whether the computed intersection point(s) actually lie on the arc itself. The following has some examples:
Wykobi Article[^]
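To make the circle-based approach concrete, here is a rough Python sketch (my own illustrative code, not taken from either site): intersect the line with the arc's underlying circle, then keep only the points whose angle falls within the arc's sweep.

```python
import math

def line_circle_intersections(p0, p1, center, r):
    """Intersect the infinite line through p0, p1 with a circle.
    Returns 0, 1, or 2 intersection points."""
    dx, dy = p1[0] - p0[0], p1[1] - p0[1]
    fx, fy = p0[0] - center[0], p0[1] - center[1]
    # Substitute the line's parametric form into the circle equation:
    # |p0 + t*d - c|^2 = r^2  ->  a*t^2 + b*t + c0 = 0
    a = dx * dx + dy * dy
    b = 2 * (fx * dx + fy * dy)
    c0 = fx * fx + fy * fy - r * r
    disc = b * b - 4 * a * c0
    if disc < 0:
        return []                      # line misses the circle
    s = math.sqrt(disc)
    ts = {(-b - s) / (2 * a), (-b + s) / (2 * a)}   # set collapses the tangent case
    return [(p0[0] + t * dx, p0[1] + t * dy) for t in sorted(ts)]

def on_arc(pt, center, start_angle, end_angle):
    """True if pt (assumed on the circle) lies on the CCW arc start..end (radians)."""
    ang = math.atan2(pt[1] - center[1], pt[0] - center[0]) % (2 * math.pi)
    start = start_angle % (2 * math.pi)
    sweep = (end_angle - start_angle) % (2 * math.pi)
    return (ang - start) % (2 * math.pi) <= sweep + 1e-9

def line_arc_intersections(p0, p1, center, r, a0, a1):
    """Line-arc intersection = line-circle intersection filtered by the arc test."""
    return [q for q in line_circle_intersections(p0, p1, center, r)
            if on_arc(q, center, a0, a1)]
```

Arc-arc works the same way: intersect the two basis circles, then require the points to pass `on_arc` for both arcs.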
Hi
Thanks, looks good. I will try to translate the ones I need to C# (C++ is not my thing).
Johan
Hello, I am transferring a file from one machine to another in my code. I'm using sockets and the TCP protocol. Is there a way, on the receiver's end or the sender's, to calculate the approximate time the file transfer is going to take? Any tutorials or ideas will be helpful. Thank you
Regards,
Christian Pace MCAST Student
Hi,
assuming you know the total amount A of data (the file size is easy to get at the sender; you could inform the target about it), you can measure the time T it has taken so far to transmit/receive an amount X of the data, and then use T*(A/X - 1) as a rough estimate of the remaining time, with some caveats:
- don't do this while T is very small and/or X is less than A/10; the estimate would be completely wrong (and cause a divide-by-zero when X is zero)
- it assumes linear behavior; if you have reason to expect something non-linear, you could and should compensate for that (say the file is larger than your filesystem's cache; then the first part might transfer much faster than the remainder).
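As a sketch of that formula (Python for illustration; the function name and guard threshold are my own):

```python
def estimate_remaining(elapsed_seconds, transferred, total):
    """Rough remaining-time estimate T*(A/X - 1).
    Returns None when it is too early to guess sensibly."""
    if transferred <= 0 or transferred < total / 10:
        return None  # avoid divide-by-zero and wildly wrong early estimates
    return elapsed_seconds * (total / transferred - 1)

# 25 of 100 units done in 10 s -> 30 s remaining
print(estimate_remaining(10, 25, 100))   # 30.0
# only 5% done -> refuse to estimate
print(estimate_remaining(1, 5, 100))     # None
```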
Luc Pattyn [Forum Guidelines] [My Articles]
- before you ask a question here, search CodeProject, then Google
- the quality and detail of your question reflects on the effectiveness of the help you are likely to get
- use the code block button (PRE tags) to preserve formatting when showing multi-line code snippets
Hello m8, I pasted the receiver's code below. Do you think I'm implementing what you said correctly?
// Retrieve the stream from the socket that is connected to the client machine
NetworkStream incomingNetworkStream = new NetworkStream(clientsocket);
FileStream fsout = new FileStream(parser.TemporaryFilesPath + @"\database.zip", FileMode.OpenOrCreate, FileAccess.Write);
long size = Convert.ToInt64(incomigFileSize);
long rdby = 0;
int len = 0;
DateTime starttime = DateTime.Now;
bool isfirst = false;
if (incomingNetworkStream.DataAvailable)
{
    while (rdby < size)
    {
        byte[] buffer = new byte[65535];
        len = incomingNetworkStream.Read(buffer, 0, buffer.Length);
        fsout.Write(buffer, 0, len);
        rdby = rdby + len;
        if (downloadpercentage > 10)
        {
            if (!isfirst)
            {
                EstimatedTransfertime = DateTime.Now.Subtract(starttime);
                isfirst = true;
            }
            Minutes = EstimatedTransfertime.Minutes * (100 / size - 1);
            Seconds = EstimatedTransfertime.Seconds * (100 / size - 1);
        }
        downloadpercentage = (int)(((double)rdby / (double)size) * 100.00);
    }
    fsout.Flush();
    fsout.Close();
    isReading = false;
}
else
{
}
Thank you
Hi,
yeah, that's almost it, although I have several comments:
1.
I would update downloadPercentage before the > 10 test, so the test uses the latest information.
That way you don't need isFirst at all.
2.
I explained using percentages for clarity, but you could as well use downloadFraction which would grow from zero to one, all it takes is to replace all the numbers 100 by 1 (and test for > 0.1).
3.
it does not make sense to handle minutes and seconds separately; the way you have it you are multiplying both minutes and seconds by a factor which initially is large (e.g. if you did 20% in 3 minutes 20 seconds, then the factor would be 4 and the estimate 12 minutes and 80 seconds). Choose one unit of time, probably seconds, and get the DateTime.Now.Subtract(starttime).TotalSeconds; after the multiplication you can have TimeSpan convert it back to minutes and seconds.
4.
new byte[65535] looks silly to me; in file I/O the underlying code favors powers of 2 (disk sectors and clusters are powers of 2 for a reason), so try and stick to those, use 65536 or 32768 or something similar (and write it as 0x00010000 or 0x8000 to make that clear). Chances are the stream does not care and will provide arbitrary amounts of data anyway, but when it happens to provide multiples of 512 bytes, the file I/O would benefit.
5.
there is no need to get a new byte[] for each iteration of the while loop, you could create it once
and reuse it.
6.
the sequence flush - close does not make much sense; close always implies flush; an explicit flush really only is useful when you want to make sure some data gets actually written to the file/stream
which you don't want to close yet, but might get lost if the app suddenly would crash.
7.
I am not sure what if (incomingNetworkStream.DataAvailable) is meant to do; I would hope your communication is event driven, or you use a separate thread and a blocking read...
8.
I suggest you use a big try-catch block, after all networking and file I/O sooner or later will fail for whatever peculiar reason a system can come up with.
That's all.
Luc Pattyn [Forum Guidelines] [My Articles]
I think there's no "magic". Send the total size of the file first, so the receiver knows the size needed to calculate the remaining time, just as the previous poster said.
I just have one thing to add: if you are transmitting large files over a network with varying throughput (like the Internet), you might estimate the remaining time based only on the speed of the last X bytes transmitted. Suppose you have a 100 MB file to transmit. You calculate the average speed over the last 1 MB received, and use that value to calculate the remaining time (RemainingMB / Speed). That way the estimate adjusts to the speed variation.
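A sketch of that idea in Python (the class and parameter names are my own invention): keep only the most recent progress samples and compute the speed across that window, rather than since the start of the transfer.

```python
from collections import deque

class SlidingRateEstimator:
    """Estimate remaining time from the speed over recent samples only."""
    def __init__(self, total_bytes, window=10):
        self.total = total_bytes
        self.samples = deque(maxlen=window)   # (timestamp, bytes_so_far) pairs
        self.received = 0

    def update(self, nbytes, now):
        """Record another chunk; `now` is a timestamp in seconds."""
        self.received += nbytes
        self.samples.append((now, self.received))

    def remaining_seconds(self):
        if len(self.samples) < 2:
            return None                       # not enough data yet
        (t0, b0), (t1, b1) = self.samples[0], self.samples[-1]
        if t1 <= t0 or b1 <= b0:
            return None
        speed = (b1 - b0) / (t1 - t0)         # bytes/s over the recent window
        return (self.total - self.received) / speed
```

Because old samples fall out of the deque, a slowdown or speedup shows up in the estimate quickly instead of being averaged away over the whole transfer.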
Regards,
Leonardo Muzzi
Can someone please help with the thinning algorithm used to identify a character? I am not able to code the algorithm in C#, though I have the whole logic and code written in C. Can someone please help me?
The C code is as follows ... how do I code the same in C#? Please help:
http://pages.cpsc.ucalgary.ca/~parker/thin.c[^]
After a quick look through the code, I think it translates straight into C#.
The only tricky bit is the b parameter to t1a, which must be passed using ref.
I suggest you pick up an introductory C# book. If you understand the C, you should have no problems translating it to C#.
As a side issue, it's generally not a good idea to provide links to code; people will be wary of following them. Why not just copy and paste it, if it's not too long?
Regards
David R
Hello,
I did not run the C version of the algorithm you pointed to, but there are some ready-made implementations of thinning in C#, found in the AForge.NET Framework[^]. One option is to use the math morphology filters for this: [^]. Another is the simple skeletonization filter[^].
Hi. I'm currently using AES Rijndael cryptography throughout my system. I was running some tests and found an error. The thing is, when I try to encrypt/decrypt some specific texts, I lose some data at the end of the resulting string. For instance:
15ª Instância
13º lugar ªªAB
The above text, when encrypted/decrypted, loses the last two chars "AB". I ran some tests and figured out that any 4 characters after the "ªª" are lost. If you put 5 characters ("ABCDE"), then a new data block is created by the algorithm to hold the last letter and nothing is lost.
Apparently it has something to do with the "ª" and "º" characters. If the last block of data (a block has 16 chars, 128 bits, in Rijndael) contains any of these special chars, then the last chars of the block are lost in the round trip. If "ª" or "º" appears in a block that is not the last one, everything runs fine.
I'm using the code provided on the MSDN site:
http://msdn.microsoft.com/en-us/library/system.security.cryptography.rijndaelmanaged.aspx[^]
and padding the string block myself (using PaddingMode.None on the Rijndael object). Does anyone know why these special chars cause this error?
Hi,
my guess would be that you are mixing up character counts and byte counts somewhere. In UTF-8, your special characters take 2 bytes each.
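A quick Python check illustrates the mismatch, using the poster's own sample string:

```python
text = "15ª Instância"
# Character count and UTF-8 byte count disagree because 'ª' (U+00AA)
# and 'â' (U+00E2) each encode to two bytes in UTF-8.
print(len(text))                    # 13 characters
print(len(text.encode("utf-8")))    # 15 bytes
```

If block boundaries are computed from the character count but the cipher operates on the bytes, the last few bytes of a block fall off exactly as described.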
Luc Pattyn [Forum Guidelines] [My Articles]
Thanks for the advice; that was the problem. In fact, the real problem is that PaddingMode.PKCS7 doesn't work properly for me: using that mode, on the round trip I always get the error "PKCS7 padding is invalid and cannot be removed." That's why I tried to do the padding myself, but it cannot be done on the string, because of the special characters that take more than one byte.
So the solution was: I implemented PKCS7 myself, on the byte array, right before the encryption, using PaddingMode.None for the RijndaelManaged. That worked fine. However, for decryption it was not necessary to implement it; PaddingMode.PKCS7 worked correctly there.
Thanks to this article, "Notes On Padding" section:
http://www.codeproject.com/KB/security/Cryptor.aspx[^]
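For reference, PKCS7 padding on the byte array (rather than on the string) can be sketched like this, in Python for illustration; the actual C# code is in the linked article:

```python
def pkcs7_pad(data: bytes, block: int = 16) -> bytes:
    """Pad to a multiple of `block`; each pad byte holds the pad length.
    A full extra block is added when the data is already aligned."""
    n = block - len(data) % block
    return data + bytes([n]) * n

def pkcs7_unpad(data: bytes) -> bytes:
    """Strip and verify PKCS7 padding."""
    n = data[-1]
    if n < 1 or n > len(data) or data[-n:] != bytes([n]) * n:
        raise ValueError("invalid PKCS7 padding")
    return data[:-n]
```

The key point is that padding operates on the UTF-8 bytes, so multi-byte characters like "ª" cause no miscounting.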
Hello there,
I'm starting up a little personal project that I'm not too sure what the best way to tackle is. I'm hoping someone could give me a few pointers and/or some ideas on questions I should answer before I dig in.
The problem I'm trying to solve is determining file and project dependencies in a big, sprawling codebase with tens of thousands of files in it. This codebase has evolved over time and can be quite unwieldy. There are many sub-projects inside the build tree and lots of interdependencies between these projects. I want to be able to map these dependencies out.
I found a library that will allow me to monitor file activity, for example file opens and creates. So my thought is that if I run a full clean build and record all of the file activity during it, I can generate a dependency graph of the entire project. This will also let me determine what I should build, and in which order, when I want to build a tiny piece of the tree. I'm sure there will be other interesting things I can do with this information. I'd also like to try to visualize the entire project and maybe create a file-change heat map from it.
My quandary is how best to record the file activity so that I can build a dependency tree. Ideally I'd like to do this in a multi-threaded way, since our build system can utilize multi-processor machines and build multiple files at once, and be able to tell which files need to be built before others, which files are grouped in a project, and so on.
My current proposed approach is to record all file activity to a logfile and then post-process it to generate the dependency graphs. I'm still a little hazy about all the data I need to record; I'm currently thinking I'll figure that out as I go, when I find I'm missing some important information.
Any pointers or thoughts would be greatly appreciated.
Thanxx,
Adam
Wouldn't static code analysis be a better choice, simply parsing all of the includes, assuming it's a C or C++ program?
Another option is to have your project output the result of the preprocessor; in Visual Studio it will tell you which files it included.
a programmer traped in a thugs body
Hmm, I'm not sure how hard it would be to create something that would properly process it statically. There is some thinking to be done around tracking genealogy, but once I figure that out I would think this shouldn't be too hard. Then again, I always get blindsided by small details I didn't see when I was thinking about a problem at a higher level.
Also, I don't think all of the files that need to be processed are C/C++ files; there are resource files as well as other types of files.
And these projects aren't built in VS; they're built in our own build system based off of nmake.
Adam
Since you are using nmake, I'm going to assume you are using gcc. While this won't cover your resource files, it will let you see your source dependencies. As for the resource files: how are you using them? If they are linked into some type of executable, shouldn't you be able to see what you're linking in from your makefile?
This is from the gcc online manual:
-M
Instead of outputting the result of preprocessing, output a rule suitable for make describing the dependencies of the main source file. The preprocessor outputs one make rule containing the object file name for that source file, a colon, and the names of all the included files, including those coming from -include or -imacros command line options.
Unless specified explicitly (with -MT or -MQ), the object file name consists of the basename of the source file with any suffix replaced with object file suffix. If there are many included files then the rule is split into several lines using \-newline. The rule has no commands.
This option does not suppress the preprocessor's debug output, such as -dM. To avoid mixing such debug output with the dependency rules you should explicitly specify the dependency output file with -MF, or use an environment variable like DEPENDENCIES_OUTPUT (see Environment Variables). Debug output will still be sent to the regular output stream as normal.
Passing -M to the driver implies -E, and suppresses warnings with an implicit -w.
a programmer traped in a thugs body
Actually we build using the VC compiler, just not inside VS or a VS project; that system doesn't scale well for our needs. While this is an interesting approach, I still favor monitoring the file activity externally to the compiler, for several reasons:
1. We have C/C++ and C# projects in our tree, and possibly other languages as well that I haven't had to deal with yet.
2. We deal with a number of different build tools that may not have output options like the more mature C/C++ compilers do.
3. Since I have a number of different build tools to track, dealing with each one separately and maintaining the output-processing code for each tool sounds like a big and fragile task.
Taking a more build-tool-agnostic approach to gathering this information will, I feel, make the tool more reliable and reduce maintenance work. I won't have to track down all of the different build tools we use and the command-line options needed to gather the information, and then figure out how to process each tool's output.
Adam
I need some error correction for a 32-bit integer.
I've found several solutions, like Reed-Solomon, FEC, and parity bits, but all of those are aimed at much larger amounts of data.
I just need error correction for 32 bits, and the error-correction data should be at most 24 bits.
Does any of you have an idea of how to solve this problem and which of the error-correction schemes I should use (or should I even create one myself)?
Hello,
http://en.wikipedia.org/wiki/Hamming_code - that one is pretty easy and scalable (you decide the length of the data chunk; the longer the chunk, the fewer bits are taken for correction purposes).
Why do you say parity bits are for lots of bits? You can use 1 parity bit for 2 bits of data, for 20 bits, for 200... (of course it becomes less effective).
Btw - Hamming codes can correct error bits, not only detect that the data is corrupted.
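For a concrete feel, here is a small Python sketch of the classic Hamming(7,4) code (my own illustrative code, not from the Wikipedia article): 4 data bits gain 3 parity bits, and any single flipped bit in the 7-bit codeword can be located and corrected.

```python
def hamming74_encode(nibble):
    """Encode 4 data bits (list of 0/1) into a 7-bit codeword.
    Parity bits sit at positions 1, 2, and 4 (1-based)."""
    d1, d2, d3, d4 = nibble
    p1 = d1 ^ d2 ^ d4          # covers positions 1,3,5,7
    p2 = d1 ^ d3 ^ d4          # covers positions 2,3,6,7
    p3 = d2 ^ d3 ^ d4          # covers positions 4,5,6,7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(code):
    """Correct up to one flipped bit, then return the 4 data bits."""
    c = list(code)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3   # 1-based position of the bad bit, 0 if none
    if syndrome:
        c[syndrome - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]
```

For a 32-bit payload you would apply this per nibble (24 parity bits total, one correctable bit per 7-bit group), or use a longer Hamming code such as (38,32) for lower overhead but only one correctable bit overall.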
I see, a Hamming code would be great indeed (thank you), but it only offers single-error correction. Ideally I would like to be able to correct every bit, but that's impossible, so I'm after the highest efficiency I can get.
I'm now busy trying parity: horizontal + vertical + diagonal, and then checking whether the parities are OK.
But is there another way besides Hamming? Because I think this one is not really efficient.
Hello,
Actually, I think Hamming is very efficient for what it offers (especially for longer streams; roughly one extra parity bit covers a doubling of the data length). If you need more bits corrected, use a shorter version, etc.
If I understand you correctly, and by
Deresen wrote: trying parity, horizontal + vertical + diagonal
you mean putting the input into a matrix and adding parity bits to every row/column, then you won't be able to fix any bits when you have 2 errors (this is sometimes true, sometimes not, but still, you can't guarantee correcting 2 bits), and the cost of the parity bits is much higher than with Hamming.
You could also google for convolutional codes, but I don't know much about them, so I can't guarantee that's what you've been looking for.
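To illustrate the two-error ambiguity, a tiny Python sketch (the function and variable names are mine): two different pairs of flipped bits can produce identical row/column parity syndromes, so the decoder cannot tell which pair actually occurred.

```python
def parity_2d(bits):
    """Row and column parities of a matrix of 0/1 values."""
    rows = [sum(r) % 2 for r in bits]
    cols = [sum(c) % 2 for c in zip(*bits)]
    return rows, cols

# Start from an all-zero 4x4 block and flip two bits, two different ways.
base = [[0] * 4 for _ in range(4)]
a = [row[:] for row in base]
a[0][0] ^= 1; a[1][1] ^= 1          # errors at (0,0) and (1,1)
b = [row[:] for row in base]
b[0][1] ^= 1; b[1][0] ^= 1          # errors at (0,1) and (1,0)
print(parity_2d(a) == parity_2d(b))  # True: indistinguishable syndromes
```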
What's wrong with Reed-Solomon?
If I understand your requirements, you need an RS(4,1) code over GF(2^8) or an RS(8,2) code over GF(2^4), and because the codeword size is small you can use Euclid's algorithm to efficiently solve the key equation rather than (modified) Berlekamp-Massey.
check this link out:
http://en.wikipedia.org/wiki/Reed%E2%80%93Solomon_error_correction[^]
To be honest, I did not really understand Reed-Solomon error correction.
I've also read that it gives '2 byte errors per 32-byte block', which I took to mean 2 bits per 32 bits, and that's too little for me.
The big problem is that I also have to verify the error-correction data itself, so I have to correct that stream too. Is that possible with Reed-Solomon? And could you please give a small example of how Reed-Solomon works, for instance with a byte?