|
Maybe GDI isn't the best option. Have a look at DirectX or OpenGL.
If the Lord God Almighty had consulted me before embarking upon the Creation, I would have recommended something simpler.
-- Alfonso the Wise, 13th Century King of Castile.
|
|
|
|
|
Hmmm - the following code using GDI+ in a window uses 3%-5% cpu on a dual Xeon 3.2GHz machine:
Gdiplus::Bitmap SrcBitmap(L"C:\\test768768.bmp", FALSE);
Graphics DstGraphics(*this);
REAL angle = 0.0f;
for (int i = 0; i < 125; ++i)
{
    DstGraphics.ResetTransform();
    DstGraphics.RotateTransform(angle);
    DstGraphics.TranslateTransform(450.0f, 450.0f, MatrixOrderAppend);
    DstGraphics.DrawImage(&SrcBitmap, -384, -384, SrcBitmap.GetWidth(), SrcBitmap.GetHeight());
    angle += 10.0f;
    if (angle >= 360.0f)
        angle -= 360.0f;
    ::Sleep(70);
}
"Great job, team. Head back to base for debriefing and cocktails."
(Spottswoode "Team America")
|
|
|
|
|
Mark Salsbery wrote: dual Xeon 3.2GHz machine
Good hardware!
If the Lord God Almighty had consulted me before embarking upon the Creation, I would have recommended something simpler.
-- Alfonso the Wise, 13th Century King of Castile.
|
|
|
|
|
I like it a lot, although the SATA bus sits unused
"Great job, team. Head back to base for debriefing and cocktails."
(Spottswoode "Team America")
|
|
|
|
|
Thanks for posting, Mark.
Yes, GDI+ would be a better alternative. My problem is that I have to use GDI, not GDI+. I am using a third-party API library to load special image formats, and that library is GDI-based. I thought about mixing GDI and GDI+, but have hesitated to do so thus far.
Best,
Jun
|
|
|
|
|
Once the image is loaded, what format is it in?
Those CPU usage numbers seem a tad high for 12.5fps, even using straight GDI.
"Great job, team. Head back to base for debriefing and cocktails."
(Spottswoode "Team America")
|
|
|
|
|
As an example, even this totally un-optimized GDI code uses about 18% CPU (GDI+ is only used to load the bitmap - it's converted to an HBITMAP) ...
Gdiplus::Color clr(0xF0, 0xF0, 0xF0);
HBITMAP bmp;
if (Gdiplus::Ok == SrcBitmap.GetHBITMAP(clr, &bmp))
{
    HDC hDestDC = ::GetDC(*this);
    HDC hMemDC = ::CreateCompatibleDC(hDestDC);
    ::SetGraphicsMode(hDestDC, GM_ADVANCED);
    HGDIOBJ hOldBitmap = ::SelectObject(hMemDC, (HGDIOBJ)bmp);
    REAL angle = 0.0f;
    for (int i = 0; i < 125; ++i)
    {
        static REAL pi = 3.1415926535f;
        REAL angleradians = (angle * pi) / 180.0f;
        REAL anglesin = (REAL)::sin(angleradians);
        REAL anglecos = (REAL)::cos(angleradians);
        XFORM XForm;
        XForm.eM11 = anglecos;
        XForm.eM12 = anglesin;
        XForm.eM21 = -anglesin;
        XForm.eM22 = anglecos;
        XForm.eDx = 450.0f;
        XForm.eDy = 450.0f;
        ::SetWorldTransform(hDestDC, &XForm);
        ::BitBlt(hDestDC, -384, -384, SrcBitmap.GetWidth(), SrcBitmap.GetHeight(), hMemDC, 0, 0, SRCCOPY);
        angle += 5.0f;
        if (angle >= 360.0f)
            angle -= 360.0f;
        ::Sleep(20);
    }
    ::SelectObject(hMemDC, hOldBitmap);
    ::DeleteDC(hMemDC);
    ::ReleaseDC(*this, hDestDC);
    ::DeleteObject((HGDIOBJ)bmp);
}
-- modified at 11:03 Thursday 8th March, 2007
"Great job, team. Head back to base for debriefing and cocktails."
(Spottswoode "Team America")
|
|
|
|
|
Actually my code is very similar to yours here, except that I call SetWorldTransform() twice: once with the rotation matrix and once with the unit matrix. Maybe that is where I can improve. I will definitely try it out tomorrow (it's 10:30 PM now and I don't have the code at home). I'll post the outcome... Thanks again.
Best,
Jun
|
|
|
|
|
I have tested the same thing, but it still eats up ~35% of dual 3.2GHz CPUs. My code is listed below, and it basically does the same thing:
void CRotateDlg::OnBnClickedButton1()
{
    UINT nClassStyle = CS_OWNDC;
    CString m_ClassName = AfxRegisterWndClass(nClassStyle,
                                              NULL,
                                              (HBRUSH)GetStockObject(BLACK_BRUSH),
                                              LoadIcon(NULL, IDI_APPLICATION));
    LONG winExtStyle = WS_EX_TOPMOST;
    LONG winStyle = WS_POPUP | WS_VISIBLE;  // CS_OWNDC is a class style, not a window style
    HWND hWnd = ::CreateWindowEx(winExtStyle,
                                 m_ClassName,
                                 "",
                                 winStyle,
                                 0, 0, 768, 768,
                                 NULL, NULL, NULL, NULL);
    CDC *m_pDC = new CDC();
    m_pDC->Attach(::GetDC(hWnd));
    CDC *m_pBufferDC = 0;
    CBitmap m_bitmap;
    if (!m_pBufferDC)
    {
        m_pBufferDC = new CDC();
    }
    m_pBufferDC->CreateCompatibleDC(m_pDC);
    m_bitmap.LoadBitmap(IDB_BITMAP1);
    m_pBufferDC->SelectObject(m_bitmap);
    double angle = 0.0f;
    for (int i = 0; i < 100; i++)
    {
        angle += 0.1f;
        float cosine = (float)cos(DEG2RAD(-angle));
        float sine = (float)sin(DEG2RAD(-angle));  // sqrt(1 - cos^2) would lose the sign of the sine
        m_pDC->SetGraphicsMode(GM_ADVANCED);
        const int xr = 384;
        const int yr = 384;
        XFORM xform = { cosine, sine,
                        -sine, cosine,
                        xr - xr*cosine + yr*sine,
                        yr - yr*cosine - xr*sine };
        m_pDC->SetWorldTransform(&xform);
        m_pDC->BitBlt(0, 0, 768, 768,
                      m_pBufferDC,
                      0, 0,
                      SRCCOPY);
        m_pDC->ModifyWorldTransform(&xform, MWT_IDENTITY);
        Sleep(20);
    }
    ::ReleaseDC(hWnd, m_pDC->Detach());
    delete m_pBufferDC;  // "delete m_pBufferDC, m_pDC;" would only delete the first pointer
    delete m_pDC;
}
Best,
Jun
|
|
|
|
|
There's a lot of math going on in the loop that could be optimized.
The angle could be kept in radians so it doesn't have to be converted from degrees every iteration.
The translation values (eDx, eDy in the XFORM struct) may not need to be calculated every iteration.
The m_pDC->ModifyWorldTransform(&xform, MWT_IDENTITY); isn't necessary in the loop.
I know you're using a third-party bitmap format, but if it's drawable with GDI then using GDI+ should be trivial and can cut the CPU usage a lot.
The video card and driver can influence the performance as well. The machine I'm testing on has the same CPUs/speed with an ATI FireGL V3100 128MB video adapter.
I'm going to try your code on my machine - I'll report the findings
Mark
"Great job, team. Head back to base for debriefing and cocktails."
(Spottswoode "Team America")
|
|
|
|
|
Mark,
I have implemented the mixed GDI/GDI+ approach in my project and it works well. The CPU usage is reduced to ~10%. I didn't expect GDI+ to give such a boost. Thanks for the help.
Best,
Jun
|
|
|
|
|
Jun Du wrote: I have implemented the mixed GDI/GDI+ approach in my project and it works well. The CPU usage is reduced to ~10%.
Excellent! I didn't realize GDI+ was significantly faster either
Cheers,
Mark
"Great job, team. Head back to base for debriefing and cocktails."
(Spottswoode "Team America")
|
|
|
|
|
My project has a list control and each item (column and row) has data inside.
I want to handle the message sent when the user double-clicks an item, and have the program show the data for the item that was double-clicked.
Which message is sent when the user double-clicks an item, and how do I get the data from that item?
|
|
|
|
|
Do you need NM_DBLCLK and NM_CLICK?
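For a CListCtrl, one common approach is handling the NM_DBLCLK notification, whose NMHDR can be cast to NMITEMACTIVATE to get the row and column that was double-clicked. A rough sketch (the dialog class, member variable m_list, and control ID IDC_LIST1 are hypothetical names for illustration):

```cpp
// In the message map (assuming IDC_LIST1 is the list control's ID):
// ON_NOTIFY(NM_DBLCLK, IDC_LIST1, &CMyDlg::OnNMDblclkList1)

void CMyDlg::OnNMDblclkList1(NMHDR* pNMHDR, LRESULT* pResult)
{
    LPNMITEMACTIVATE pItem = reinterpret_cast<LPNMITEMACTIVATE>(pNMHDR);
    if (pItem->iItem != -1)  // -1 means the double-click missed all items
    {
        // Row is iItem, column is iSubItem; fetch that cell's text.
        CString text = m_list.GetItemText(pItem->iItem, pItem->iSubItem);
        AfxMessageBox(text);
    }
    *pResult = 0;
}
```

If the item carries non-text data, GetItemData(pItem->iItem) can return the DWORD_PTR you stored with SetItemData.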
|
|
|
|
|
Does anyone PLEASE know how to perform the following (example is in Java, using Apache library) in C++:
import java.io.*;
import javax.xml.parsers.*;
import java.security.*;

import org.w3c.dom.*;

import org.apache.xml.security.signature.*;
import org.apache.xml.security.transforms.*;
import org.apache.xml.security.Init;

import org.bouncycastle.util.encoders.Base64;

public class IRMark {

    public static void main(String args[]) throws Exception {

        Init.init();

        if (args.length != 1) {
            System.out.println("Use: IRmark <file> ");
            return;
        }

        FileInputStream fis = null;
        try {
            fis = new FileInputStream(args[0]);
        } catch (FileNotFoundException e) {
            System.out.println("The file " + args[0] + " could not be opened.");
            return;
        }

        byte[] data = null;
        try {
            int bytes = fis.available();
            data = new byte[bytes];
            fis.read(data);
        } catch (IOException e) {
            System.out.println("Error reading file.");
            e.printStackTrace();
        }

        String transformStr =
            "<?xml version='1.0'?>\n"
            + "<dsig:Transforms xmlns:dsig='http://www.w3.org/2000/09/xmldsig#'"
            + "xmlns:gt='http://www.govtalk.gov.uk/CM/envelope'"
            + "xmlns:ir='http://www.govtalk.gov.uk/taxation/SA'>\n"
            + "<dsig:Transform Algorithm='http://www.w3.org/TR/1999/REC-xpath-19991116'>\n"
            + "<dsig:XPath>\n"
            + "count(ancestor-or-self::node()|/gt:GovTalkMessage/gt:Body)=count(ancestor-or-self::node())\n"
            + " and count(self::ir:IRmark)=0 \n"
            + " and count(../self::ir:IRmark)=0 \n"
            + "</dsig:XPath>\n"
            + "</dsig:Transform>\n"
            + "<dsig:Transform Algorithm='http://www.w3.org/TR/2001/REC-xml-c14n-20010315#'/>\n"
            + "</dsig:Transforms>\n"
            ;

        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        dbf.setNamespaceAware(true);
        DocumentBuilder db = dbf.newDocumentBuilder();
        Document doc = db.parse(new ByteArrayInputStream(transformStr.getBytes()));

        Transforms transforms = new Transforms(doc.getDocumentElement(), null);

        XMLSignatureInput input = new XMLSignatureInput(data);
        XMLSignatureInput result = transforms.performTransforms(input);

        MessageDigest md = MessageDigest.getInstance("SHA");
        md.update(result.getBytes());
        byte[] digest = md.digest();

        System.out.println("IRmark: " + new String(Base64.encode(digest)));
    }
}
|
|
|
|
|
Honestly, I don't think a lot of people will spend half an hour trying to understand what this piece of code is supposed to do. If you have a question, ask directly what you are trying to achieve, and tell us what you have already done and where you are stuck.
You'll get many more answers that way.
|
|
|
|
|
Can't you read? It's supposed to "...generate an IRmark value for an input document...".
whatever that's supposed to be.
|
|
|
|
|
Oww, yes, that's true. And I didn't spot this either:
The value has to be placed inside documents to be signed by the XPE when used in a EDS/IR deployment.
Now, it makes sense
|
|
|
|
|
As long as you use the Bouncy Castle library you will be ok.
|
|
|
|
|
Ok, I am sorry.
What it is basically doing is:
(1) taking an input XML file and extracting just the <Body></Body> node structure; we'll call this the 'nodeBody'.
(2) With the 'nodeBody', it then checks whether there is an <IRMark></IRMark> node within it; if there is, it deletes the <IRMark></IRMark> node from the 'nodeBody'.
(3) With the resultant 'nodeBody' node it does a C14N transform, the result of which we'll call the 'result'.
(4) The 'result' is then passed through a SHA1 hash algorithm, the result of which we'll call the 'digest' (a byte[] of size == 20 bytes).
(5) The 'digest' is then converted into a Base64 string, the result of which we'll call the 'IRMark'.
I am ok with (1), (2), (4) and (5) above, BUT I would appreciate PLEASE PLEASE knowing how on earth to do a C14N transform in MFC C++ (unmanaged).
In C# I can do this as follows, but I am not allowed to use .NET in the project I am working on:
/**************************************
 * Get the <body></body> part of the xml:
 **************************************/
XmlDocument doc = new XmlDocument();
doc.PreserveWhitespace = false;
doc.LoadXml(xml);
XmlNamespaceManager nsmgr = new XmlNamespaceManager(doc.NameTable);
nsmgr.AddNamespace("env", "http://www.govtalk.gov.uk/CM/envelope");
nsmgr.AddNamespace("ctf", "http://www.govtalk.gov.uk/taxation/CTF");
XmlNode node = doc.SelectSingleNode("//env:GovTalkMessage/env:Body", nsmgr);
doc = new XmlDocument();
doc.PreserveWhitespace = true;
doc.LoadXml(node.OuterXml);
nsmgr = new XmlNamespaceManager(doc.NameTable);
nsmgr.AddNamespace("env", "http://www.govtalk.gov.uk/CM/envelope");
nsmgr.AddNamespace("ctf", "http://www.govtalk.gov.uk/taxation/CTF");
node = doc.SelectSingleNode("env:Body/ctf:IRenvelope/ctf:IRheader/ctf:IRmark", nsmgr);
/**************************************
 * Remove <IRMark> node
 **************************************/
if (node != null)
{
    XmlNode nodeIRMark = node.ParentNode;
    nodeIRMark.RemoveChild(node);
}
/**************************************
 * Transform the body:
 **************************************/
XmlDsigC14NTransform c14n = new XmlDsigC14NTransform(true);
c14n.LoadInput(doc);
c14n.Algorithm = "http://www.w3.org/TR/2001/REC-xml-c14n-20010315";
Stream stream = (Stream)c14n.GetOutput(typeof(Stream));
|
|
|
|
|
If you are looking for an XML parsing library, you can take a look at tinyXML[^]
|
|
|
|
|
Thanks. I'll investigate that tomorrow.
|
|
|
|
|
How do I get local time in seconds?
I am using MS Visual Studio 2005.
I could get UTC time in seconds with the code below.
Is it correct?
Thanks!
#include <time.h>
#include <sys/timeb.h>

double Clock::getUtcSeconds() const  // returns UTC time in seconds
{
    time_t UtcTimeValue;
    return time(&UtcTimeValue);
}

string Clock::getUtcHMSTime() const
{
    char buffer[10];
    string strUtcTime;
    time_t UtcTimeValue;
    time(&UtcTimeValue);
    struct tm* UtcTime = gmtime(&UtcTimeValue);       // convert the calendar time to UTC time
    strUtcTime += itoa(UtcTime->tm_hour, buffer, 10); // convert to a decimal string
    strUtcTime += " : ";
    strUtcTime += itoa(UtcTime->tm_min, buffer, 10);
    strUtcTime += " : ";
    strUtcTime += itoa(UtcTime->tm_sec, buffer, 10);
    return strUtcTime;
}
Yonggoo
|
|
|
|
|
|
We want to keep our system cross-platform.
Any ANSI standard way to get local time in seconds?
Yonggoo
|
|
|
|
|