Hi
I am working on a big industrial application that we have been using for years to control a machine. The application is used for several types of these machines, since the interface is always the same; only the speed of the machine differs. With the newest and fastest of these machines, we are running into big, unexpected problems.
There are some synchronisation tasks we have to do. For example, we get an image from a camera and a PE signal from the machine, and we have to match them together. This is implemented with WaitForSingleObject on one side and WaitForMultipleObjects on the other. Both wait calls have timeouts to make sure we don't match the wrong pair together.
On the newest machine we have a time frame of only 30 ms. If I get the image more than 30 ms after the PE signal, I have to throw it away, because I can no longer be sure it was the right image; it could already be the next one.
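To illustrate, here is a minimal sketch of that matching pattern (hypothetical names, not our production code, and simplified to WaitForSingleObject on both waits), assuming auto-reset events signalled by the camera and machine threads:

```cpp
#include <windows.h>
#include <stdio.h>

static HANDLE g_peEvent;     // set by the machine I/O thread on a PE signal
static HANDLE g_imageEvent;  // set by the camera thread when a frame arrives

static DWORD WINAPI MatcherThread(LPVOID)
{
    for (;;)
    {
        // Wait for the PE signal; the timeout guards against stalls.
        if (WaitForSingleObject(g_peEvent, 1000) != WAIT_OBJECT_0)
            continue;

        // The matching image must arrive within the 30 ms window;
        // after that it could already be the next frame, so drop it.
        if (WaitForSingleObject(g_imageEvent, 30) == WAIT_OBJECT_0)
            printf("image matched to PE signal\n");
        else
            printf("timeout: image discarded\n");
    }
}

int main()
{
    g_peEvent    = CreateEvent(NULL, FALSE, FALSE, NULL); // auto-reset
    g_imageEvent = CreateEvent(NULL, FALSE, FALSE, NULL);
    HANDLE h = CreateThread(NULL, 0, MatcherThread, NULL, 0, NULL);
    // ... the camera and machine threads call SetEvent() here ...
    WaitForSingleObject(h, INFINITE);
    return 0;
}
```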
And here is my problem: the wait functions aren't accurate enough. I developed a small test application to check this behaviour. It has two threads that alternately wait on and signal each other. In the normal case, one turnaround takes no more than 5 microseconds (depending on the system where I run it). But from time to time it takes up to 10 ms, and that is on my computer with no load on it. If I put some load on the system, I get times up to 100 ms!
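The test is essentially a ping-pong like this (a simplified sketch of the idea, not my exact code), timed with QueryPerformanceCounter:

```cpp
#include <windows.h>
#include <stdio.h>

static HANDLE g_evA, g_evB;  // auto-reset events forming the ping-pong pair

static DWORD WINAPI EchoThread(LPVOID)
{
    for (;;)
    {
        WaitForSingleObject(g_evA, INFINITE);  // wait for the ping
        SetEvent(g_evB);                       // answer with a pong
    }
}

int main()
{
    LARGE_INTEGER f, t0, t1;
    QueryPerformanceFrequency(&f);
    double usPerTick = 1e6 / (double)f.QuadPart;

    g_evA = CreateEvent(NULL, FALSE, FALSE, NULL);
    g_evB = CreateEvent(NULL, FALSE, FALSE, NULL);
    CreateThread(NULL, 0, EchoThread, NULL, 0, NULL);

    double maxUs = 0.0;
    for (int i = 0; i < 100000; ++i)
    {
        QueryPerformanceCounter(&t0);
        SetEvent(g_evA);                       // ping
        WaitForSingleObject(g_evB, INFINITE);  // wait for the pong
        QueryPerformanceCounter(&t1);

        double us = (double)(t1.QuadPart - t0.QuadPart) * usPerTick;
        if (us > maxUs) { maxUs = us; printf("new max: %.1f us\n", us); }

        Sleep(1);  // free the CPU between turnarounds, as in my trace
    }
    printf("max turnaround: %.1f us\n", maxUs);
    return 0;
}
```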
I already tried the following to improve it (see the sketch after this list):
- Polling on a Boolean flag instead of the wait function -> normal turnaround < 0.05 microseconds, but the maximum turnaround still goes up to 100 ms.
- Setting the process and thread priority to the highest possible levels -> absolutely no effect.
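Roughly, the two attempts looked like this (again just a sketch; the flag handling is simplified):

```cpp
#include <windows.h>

static volatile LONG g_flag = 0;  // the Boolean used instead of the wait call

static void RaisePriorities(void)
{
    // Highest possible priorities for the process and the calling thread.
    SetPriorityClass(GetCurrentProcess(), REALTIME_PRIORITY_CLASS);
    SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_TIME_CRITICAL);
}

static void SignalByPolling(void)
{
    InterlockedExchange(&g_flag, 1);
}

static void WaitByPolling(void)
{
    // Spin until the other thread sets the flag, then consume it.
    while (InterlockedCompareExchange(&g_flag, 0, 1) != 1)
        Sleep(0);  // give up the rest of the time slice while spinning
}
```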
I made the time measurements with QueryPerformanceCounter, so they should be accurate. I also analysed the situation with the new Concurrency Visualizer:
You can see in the trace that the thread sometimes waits a very long time for the signal. There is no other thread of my application interfering, so it must be a Windows process or something else.
As you can see in the trace, I also inserted a Sleep in every loop iteration to keep the CPU from being used 100% of the time and to leave it free for other processes.
Is there any way to improve this? I don't need the "speed" it has right now (5 microseconds); a constant 1 ms would be fine for me, but I need much less jitter! I could also live with a less responsive user interface (which is not used very often) or other disadvantages. What we can't do is move to another OS: the application is huge, it uses a lot of the Windows API, and we have devices that only support Windows.
Currently we are using Windows XP on the system; I also ran the tests on Windows 7 and 8. Could it perhaps be better on a Windows Server version?
Thank you for your help!