|
When you say "persist", do you mean...
a) After setting up string_list the next time you look the data isn't there?
b) The thing won't write to a file automatically?
c) Something else!
Cheers,
Ash
|
|
|
|
|
If the elements in your STL list are more than just a simple type, you can wrap them up into an object and just have an STL list of objects.
I used it in a few places in the following article: ProSysLib: Dissecting the Process[^]
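For example, a minimal sketch of the idea (the Person type and its fields are purely illustrative, not from the original question):

#include <iostream>
#include <list>
#include <string>

// Wrap the related fields in one small class...
class Person
{
public:
    Person(const std::string& name, int age) : m_name(name), m_age(age) {}
    const std::string& Name() const { return m_name; }
    int Age() const { return m_age; }
private:
    std::string m_name;
    int m_age;
};

int main()
{
    // ...and keep a single STL list of those objects instead of parallel lists of simple types.
    std::list<Person> people;
    people.push_back(Person("Alice", 30));
    people.push_back(Person("Bob", 42));

    for (std::list<Person>::const_iterator it = people.begin(); it != people.end(); ++it)
        std::cout << it->Name() << " is " << it->Age() << '\n';

    return 0;
}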
|
|
|
|
|
I need to generate the .H file (contains the interfaces, attributes...) from the DLL COM c++.
|
|
|
|
|
In Visual Studio, use #import.
It generates two files, with the extensions .tlh and .tli (short for 'type library header' and 'type library include'), and automatically includes them in your project.
You'll find more in the documentation for #import.
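A minimal sketch of the usual pattern follows; the DLL name, interface and CLSID below are placeholders, not taken from your project:

#include <windows.h>

// Generates MyComServer.tlh / MyComServer.tli at compile time and pulls in
// smart-pointer typedefs (IMyInterfacePtr) and, with named_guids, CLSID_/IID_ constants.
#import "MyComServer.dll" no_namespace named_guids

int main()
{
    CoInitialize(NULL);
    {
        IMyInterfacePtr spObj;                               // _com_ptr_t typedef from the .tlh
        HRESULT hr = spObj.CreateInstance(CLSID_MyCoClass);  // CLSID_MyCoClass comes from named_guids
        if (SUCCEEDED(hr))
            spObj->DoSomething();                            // a method declared in the .tlh
    }   // smart pointer released before CoUninitialize()
    CoUninitialize();
    return 0;
}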
Hope this helps,
Pablo.
"Accident: An inevitable occurrence due to the action of immutable natural laws." (Ambrose Bierce, circa 1899).
|
|
|
|
|
Yes, I have the .tlh and .tli files,
but I need exactly the .h and .c files.
|
|
|
|
|
Copy/paste from the .tlh/.tli files?
Pablo.
"Accident: An inevitable occurrence due to the action of immutable natural laws." (Ambrose Bierce, circa 1899).
|
|
|
|
|
I think there is a way to directly get a .h file containing the interfaces...
|
|
|
|
|
Use the OLE/COM Viewer. This allows you to save the definition out to IDL, .c or .h files.
|
|
|
|
|
Yes, it seems to work for me, but when I try to export to a .h file
I get this error message:
---------------------------
OLEViewer 2.0 Interface Viewers
---------------------------
Error running MIDL.exe: 2
Do you have any idea about this?
|
|
|
|
|
Try the steps outlined here[^].
|
|
|
|
|
Hi,
I have an application which is not using MFC, but I now have a requirement to create and display a dialog box. The existing application is a regular DLL.
Thanks & Regards,
Rajeev
|
|
|
|
|
Rajeev.Goutham wrote: i need to create and display a dialog box.
Add a dialog resource to your project and use DialogBox() [^] to run it.
Rajeev.Goutham wrote: Existing application is a Regular DLL.
A DLL is not an application.
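A minimal sketch of the DialogBox() approach (IDD_MYDIALOG, ShowMyDialog and g_hDllInstance are illustrative names, not from your project):

#include <windows.h>
#include "resource.h"                       // defines IDD_MYDIALOG (assumed)

extern HINSTANCE g_hDllInstance;            // saved in DllMain(DLL_PROCESS_ATTACH)

// Dialog procedure: handles initialisation and the OK/Cancel buttons.
INT_PTR CALLBACK MyDlgProc(HWND hDlg, UINT msg, WPARAM wParam, LPARAM lParam)
{
    switch (msg)
    {
    case WM_INITDIALOG:
        return TRUE;                        // let the system set the default focus
    case WM_COMMAND:
        if (LOWORD(wParam) == IDOK || LOWORD(wParam) == IDCANCEL)
        {
            EndDialog(hDlg, LOWORD(wParam));
            return TRUE;
        }
        break;
    }
    return FALSE;                           // message not handled here
}

// Exported function the host application can call to show the dialog modally.
extern "C" __declspec(dllexport) void ShowMyDialog(HWND hwndParent)
{
    DialogBox(g_hDllInstance, MAKEINTRESOURCE(IDD_MYDIALOG), hwndParent, MyDlgProc);
}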
Binding 100,000 items to a list box can be just silly regardless of what pattern you are following. Jeremy Likness
|
|
|
|
|
To create a dialog using the Windows API, see this.
|
|
|
|
|
Hello,
Please have a look at the code; I can't get the function to work. Thank you very much!
----------------------------------------------
BOOL CMultiProgressDlg::OnInitDialog()
{
.......................
hDownLoaderDlgWnd = this->m_hWnd;
...........................
}
void ShowProgress(LPVOID lpProgress)
{
CProgressCtrl *bpm_COMM= (CProgressCtrl*) GetDlgItem(hFlashDownLoaderDlgWnd,* (int*)lpProgress);
for(int i = 0; i< 100; i++)
{
bpm_COMM->SetPos(i);
Sleep(100);
}
}
void CMultiProgressDlg::OnOK()
{
// TODO: Add extra validation here
HANDLE hThrds[3];
DWORD dwThreadId[3] ;
UINT ProgressnID[3] = { IDC_PROGRESS1,IDC_PROGRESS2,IDC_PROGRESS3 } ;
for(int i = 0; i < 3; i++)
{
hThrds[i] =CreateThread(
NULL,//LPSECURITY_ATTRIBUTES lpThreadAttributes, // pointer to security attributes
0, //DWORD dwStackSize, // initial thread stack size
(LPTHREAD_START_ROUTINE)ShowProgress,// LPTHREAD_START_ROUTINE lpStartAddress,
&ProgressnID[i],
0, //DWORD dwCreationFlags,
&dwThreadId[i]);//LPDWORD lpThreadId
}
}
|
|
|
|
|
It would help if you i) formatted your code properly so it is clear and easy to read, and ii) explained exactly what your problem is.
Binding 100,000 items to a list box can be just silly regardless of what pattern you are following. Jeremy Likness
|
|
|
|
|
You want to show 3 simultaneous progress bars? What's the process that will be updating them? ...You should not access any dialog items directly from an independent thread like that; instead, send a message to the dialog via PostMessage() telling it which progress bar to update. Within your dialog class, have a message handler that updates the progress bar accordingly.
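A minimal sketch of that pattern, reusing names from your post where possible (WM_UPDATE_PROGRESS, ProgressThread and the AfxBeginThread call are illustrative assumptions, not your existing code):

// MFC project: CProgressCtrl needs <afxcmn.h>, normally pulled in via the project's stdafx.h.
#define WM_UPDATE_PROGRESS (WM_APP + 1)     // custom message: wParam = control ID, lParam = new position

// Worker thread: never touches the controls, it only posts messages to the dialog.
UINT __cdecl ProgressThread(LPVOID pParam)
{
    HWND hDlg = reinterpret_cast<HWND>(pParam);     // dialog HWND passed to AfxBeginThread()
    for (int i = 0; i <= 100; ++i)
    {
        ::PostMessage(hDlg, WM_UPDATE_PROGRESS, IDC_PROGRESS1, i);
        ::Sleep(100);                               // simulate work
    }
    return 0;
}

// In CMultiProgressDlg's message map (handler also declared in the class as afx_msg LRESULT):
//   ON_MESSAGE(WM_UPDATE_PROGRESS, &CMultiProgressDlg::OnUpdateProgress)
LRESULT CMultiProgressDlg::OnUpdateProgress(WPARAM wParam, LPARAM lParam)
{
    CProgressCtrl* pBar = static_cast<CProgressCtrl*>(GetDlgItem(static_cast<int>(wParam)));
    if (pBar != NULL)
        pBar->SetPos(static_cast<int>(lParam));     // safe: this runs on the UI thread
    return 0;
}

// Starting one worker per progress bar, e.g. in OnOK():
//   AfxBeginThread(ProgressThread, reinterpret_cast<LPVOID>(m_hWnd));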
|
|
|
|
|
Hi,
I need to retrieve a list from an IP server.
Do you know how to do it with asynchronous I/O?
Thanks!
|
|
|
|
|
I think you would need to do it by using a socket connection or HTTP feed.
Binding 100,000 items to a list box can be just silly regardless of what pattern you are following. Jeremy Likness
|
|
|
|
|
You can use Boost.Asio.
And as you are looking to retrieve a collection object, I would also suggest Boost.Serialization; it makes this kind of requirement easier to code and design.
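A minimal synchronous Boost.Asio sketch, just to show the shape of the socket code (the host name, port and request text are placeholders; the async_* overloads follow the same pattern with completion handlers):

#include <boost/asio.hpp>
#include <iostream>
#include <string>

int main()
{
    boost::asio::io_context io;                     // io_service in older Boost versions
    boost::asio::ip::tcp::resolver resolver(io);
    boost::asio::ip::tcp::socket socket(io);

    // Resolve and connect to the server that provides the list (placeholder host/port).
    boost::asio::connect(socket, resolver.resolve("example.com", "8080"));

    // Send whatever request your server protocol expects (placeholder text).
    std::string request = "GET LIST\r\n";
    boost::asio::write(socket, boost::asio::buffer(request));

    // Read the reply until the server closes the connection.
    std::string reply;
    for (;;)
    {
        char buf[512];
        boost::system::error_code ec;
        std::size_t n = socket.read_some(boost::asio::buffer(buf), ec);
        if (n > 0)
            reply.append(buf, n);
        if (ec)                                     // boost::asio::error::eof when the server closes
            break;
    }
    std::cout << reply << std::endl;
    return 0;
}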
|
|
|
|
|
Hi,
I have the following MPI code:
#include "mpi.h"
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#define ARRAYSIZE 2000
#define MASTER 0
int data[ARRAYSIZE];
int main(int argc, char* argv[])
{
    int numtasks, taskid, rc, dest, offset, i, j, tag1, tag2, source, chunksize, namelen;
    int mysum;
    long sum;
    int update(int myoffset, int chunk, int myid);
    char myname[MPI_MAX_PROCESSOR_NAME];
    MPI_Status status;
    double start = 0.0, stop = 0.0, time = 0.0;
    double totaltime;
    FILE *fp;
    char line[128];
    char element;
    int n;
    int k = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &numtasks);
    MPI_Comm_rank(MPI_COMM_WORLD, &taskid);
    MPI_Get_processor_name(myname, &namelen);
    printf("MPI task %d has started on host %s...\n", taskid, myname);

    chunksize = (ARRAYSIZE / numtasks);
    tag2 = 1;
    tag1 = 2;

    if (taskid == MASTER) {
        fp = fopen("integers.txt", "r");
        if (fp != NULL) {
            sum = 0;
            while (fgets(line, sizeof line, fp) != NULL) {
                fscanf(fp, "%d", &data[k]);
                sum = sum + data[k];
                k++;
            }
        }
        printf("Initialized array sum %d", sum);

        offset = chunksize;
        for (dest = 1; dest < numtasks; dest++) {
            MPI_Send(&offset, 1, MPI_INT, dest, tag1, MPI_COMM_WORLD);
            MPI_Send(&data[offset], chunksize, MPI_INT, dest, tag2, MPI_COMM_WORLD);
            printf("Sent %d elements to task %d offset= %d\n", chunksize, dest, offset);
            offset = offset + chunksize;
        }

        offset = 0;
        mysum = run_kernel(&data[offset], chunksize);
        printf("Kernel returns sum %d", mysum);

        for (i = 1; i < numtasks; i++) {
            source = i;
            MPI_Recv(&offset, 1, MPI_INT, source, tag1, MPI_COMM_WORLD, &status);
            MPI_Recv(&data[offset], chunksize, MPI_INT, source, tag2, MPI_COMM_WORLD, &status);
        }

        MPI_Reduce(&mysum, &sum, 1, MPI_INT, MPI_SUM, MASTER, MPI_COMM_WORLD);
        printf("\n*** Final sum= %d ***\n", sum);
    }

    if (taskid > MASTER) {
        start = MPI_Wtime();
        source = MASTER;
        MPI_Recv(&offset, 1, MPI_INT, source, tag1, MPI_COMM_WORLD, &status);
        MPI_Recv(&data[offset], chunksize, MPI_INT, source, tag2, MPI_COMM_WORLD, &status);

        mysum = run_kernel(&data[offset], chunksize);
        printf("\nKernel returns sum %d ", mysum);

        stop = MPI_Wtime();
        time = stop - start;
        printf("time taken by process %d to recieve elements and caluclate own sum is = %lf seconds \n", taskid, time);

        dest = MASTER;
        MPI_Send(&offset, 1, MPI_INT, dest, tag1, MPI_COMM_WORLD);
        MPI_Send(&data[offset], chunksize, MPI_INT, MASTER, tag2, MPI_COMM_WORLD);
        MPI_Reduce(&mysum, &sum, 1, MPI_INT, MPI_SUM, MASTER, MPI_COMM_WORLD);
    }

    MPI_Finalize();
}

int update(int myoffset, int chunk, int myid)
{
    int i, j;
    int mysum = 0;
    for (i = myoffset; i < myoffset + chunk; i++) {
        mysum = mysum + data[i];
    }
    printf("Task %d has sum = %d\n", myid, mysum);
    return (mysum);
}
and I have the following CUDA code:
#include <stdio.h>

__global__ void add(int *devarray, int *devsum)
{
    int index = blockIdx.x * blockDim.x + threadIdx.x;
    devsum = devsum + devarray[index];
}

extern "C"
int * run_kernel(int array[], int nelements)
{
    int *devarray, *sum, *devsum;

    printf("\nrun_kernel called..............");
    cudaMalloc((void**) &devarray, sizeof(int)*nelements);
    cudaMalloc((void**) &devsum, sizeof(int));
    cudaMemcpy(devarray, array, sizeof(int)*nelements, cudaMemcpyHostToDevice);

    add<<<2, 3>>>(devarray, devsum);

    cudaMemcpy(sum, devsum, sizeof(int), cudaMemcpyDeviceToHost);
    printf(" \nthe sum is %d", sum);
    cudaFree(devarray);
    return sum;
}
Here is the output when I run the above code:
MPI task 0 has started on host
MPI task 1 has started on host
MPI task 2 has started on host
MPI task 3 has started on host
Initialized array sum 9061Sent 500 elements to task 1 offset= 500
Sent 500 elements to task 2 offset= 1000
Sent 500 elements to task 3 offset= 1500
[node4] *** Process received signal ***
run_kernel called..............
[node4:04786] Signal: Segmentation fault (11)
[node4:04786] Signal code: Invalid permissions (2)
[node4:04786] Failing at address: 0x8049828
[node4:04786] [ 0] [0xaf440c]
[node4:04786] [ 1] /usr/lib/libcuda.so(+0x13a0f6) [0xfa10f6]
[node4:04786] [ 2] /usr/lib/libcuda.so(+0x146912) [0xfad912]
[node4:04786] [ 3] /usr/lib/libcuda.so(+0x148094) [0xfaf094]
[node4:04786] [ 4] /usr/lib/libcuda.so(+0x13ca50) [0xfa3a50]
[node4:04786] [ 5] /usr/lib/libcuda.so(+0x11863c) [0xf7f63c]
[node4:04786] [ 6] /usr/lib/libcuda.so(+0x11d167) [0xf84167]
[node4:04786] [ 7] /usr/lib/libcuda.so(cuMemcpyDtoH_v2+0x64) [0xf74014]
[node4:04786] [ 8] /usr/local/cuda/lib/libcudart.so.4(+0x2037b) [0xcbe37b]
[node4:04786] [ 9] /usr/local/cuda/lib/libcudart.so.4(cudaMemcpy+0x230) [0xcf1360]
[node4:04786] [10] mpi_array(run_kernel+0x135) [0x8049559]
[node4:04786] [11] mpi_array(main+0x2f2) [0x8049046]
[node4:04786] [12] /lib/libc.so.6(__libc_start_main+0xe6) [0x2fece6]
[node4:04786] [13] mpi_array() [0x8048cc1]
[node4:04786] *** End of error message ***
Kernel returns sum 134530992 time taken by process 1 to recieve elements and caluclate own sum is = 0.276339 seconds
run_kernel called..............
devsum is 3211264
the sum is 134532992
Kernel returns sum 134532992 time taken by process 2 to recieve elements and caluclate own sum is = 0.280452 seconds
run_kernel called..............
devsum is 3211264
the sum is 134534992
Kernel returns sum 134534992 time taken by process 3 to recieve elements and caluclate own sum is = 0.285010 seconds
--------------------------------------------------------------------------
mpirun noticed that process rank 0 with PID 4786 on node ecm-c-l-207-004.uniwa.uwa.edu.au exited on signal 11 (Segmentation fault).
The sum also does not look correct, and I am not sure what is causing the segmentation fault. Can anyone help?
Thanks
|
|
|
|
|
Hi,
I am trying to run the following code, which uses both OpenMPI and CUDA.
#include <stdio.h>
#include <stdlib.h>
#include <cuda.h>
#include <cuda_runtime.h>
#include <sys/time.h>
#include <mpi.h>
#define NREPEAT 10
#define NBYTES 10.e6
int main(int argc, char *argv[])
{
    int rank, size, n, len, numbytes;
    void *a_h, *a_d;
    struct timeval time[2];
    double bandwidth;
    char name[MPI_MAX_PROCESSOR_NAME];
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Get_processor_name(name, &len);

    printf("Process %d is on %s\n", rank, name);
    printf("Using regular memory \n");

    a_h = malloc(NBYTES);
    cudaMalloc((void **) &a_d, NBYTES);

    MPI_Barrier(MPI_COMM_WORLD);
    gettimeofday(&time[0], NULL);
    for (n = 0; n < NREPEAT; n++)
    {
        cudaMemcpy(a_d, a_h, NBYTES, cudaMemcpyHostToDevice);
    }
    gettimeofday(&time[1], NULL);

    bandwidth  = time[1].tv_sec - time[0].tv_sec;
    bandwidth += 1.e-6*(time[1].tv_usec - time[0].tv_usec);
    bandwidth  = NBYTES*NREPEAT/1.e6/bandwidth;
    printf("Host->device bandwidth for process %d: %f MB/sec\n", rank, bandwidth);

    MPI_Barrier(MPI_COMM_WORLD);
    gettimeofday(&time[0], NULL);
    for (n = 0; n < NREPEAT; n++)
    {
        if (rank == 0)
            MPI_Send(a_h, NBYTES/sizeof(int), MPI_INT, 1, 0, MPI_COMM_WORLD);
        else
            MPI_Recv(a_h, NBYTES/sizeof(int), MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
    }
    gettimeofday(&time[1], NULL);

    bandwidth  = time[1].tv_sec - time[0].tv_sec;
    bandwidth += 1.e-6*(time[1].tv_usec - time[0].tv_usec);
    bandwidth  = NBYTES*NREPEAT/1.e6/bandwidth;
    if (rank == 0)
        printf("MPI send/recv bandwidth: %f MB/sec\n", bandwidth);

    cudaFree(a_d);
    free(a_h);

    MPI_Finalize();
    return 0;
}
To compile I am using:
mpicc mpibandwidth.c -o mpibandwidth -I /usr/local/cuda/include -L /usr/local/cuda/lib -lcudart
To execute I am using:
/usr/local/bin/mpirun --mca btl tcp,self --mca btl_tcp_if_include eth0 --hostfile slaves -np 5 mpibandwidth
I am getting the following error when executing:
error while loading shared libraries: libcudart.so.4: cannot open shared object file: No such file or directory
My PATH and LD_LIBRARY_PATH variables are:
PATH = /usr/lib/qt-3.3/bin:/usr/local/ns-allinone/bin:/usr/local/ns-allinone/tcl8.4.18/unix:/usr/local/ns-allinone/tk8.4.18/unix:/usr/local/cuda/cuda/bin:/usr/local/bin:/usr/bin:/bin:/usr/local/sbin:/usr/sbin:/sbin:/usr/local/lib/:/usr/local/lib/openmpi:/usr/local/cuda/bin
LD_LIBRARY_PATH = :/usr/local/lib:/usr/local/lib/openmpi/:/usr/local/cuda/lib
libcudart.so.4 is present in /usr/local/cuda/lib and that directory is in the LD path.
Any idea what is missing?
Can someone help please?
Thanks
|
|
|
|
|
If you are certain that the file exists in the directory pointed to, then the issue must rest with the mpirun command. Are you sure that it uses LD_LIBRARY_PATH to locate its libraries?
Binding 100,000 items to a list box can be just silly regardless of what pattern you are following. Jeremy Likness
|
|
|
|
|
Yes, I am pretty sure that mpirun uses the correct libraries. I have several other OpenMPI programs which work fine on the cluster. Only this one, a mixture of MPI and CUDA, doesn't work.
All the libraries do exist in the directories pointed to, and the LD paths mentioned above are correct.
I have tried running the program using only the master node and it works fine. But when I include the slave nodes it gives the error mentioned in the original post.
All the slaves have the same installation with the correct LD paths.
Not sure why it's not working.
|
|
|
|
|
Ron1202 wrote: Not sure why it's not working
Sorry, nor me. I can only suggest you look inside the program and see if you can add any debug code that may help to diagnose it.
Binding 100,000 items to a list box can be just silly regardless of what pattern you are following. Jeremy Likness
|
|
|
|
|
I've implemented an IE browser in my application based on CHtmlView. The "back" and "forward" functions are always available, but I want the "back" button disabled when the current page is at the first position of the history list, and the "forward" button disabled when the current page is at the last position of the history list.
I can't find a way to access the history list, so I can't implement this function. Can anyone tell me how to do it? Thanks very much!
|
|
|
|
|