Timeout on dedicated card?

Jul 29, 2013 at 4:10 PM
Hello all,

Still working on the same project, but now with larger problem sizes, which cause launch timeouts on the GPU. Since I am far enough along to be confident this is a viable solution for me, I went ahead and bought a second GPU (GTX 650) to run my displays so that the more powerful card (GTX 650 Ti) can run the program - but for some reason it is still timing out.

Any ideas on this?
I am running on Windows 7; the displays are attached to the primary GPU (GTX 650), and I have made sure the code is running on the secondary GPU (GTX 650 Ti). I am testing this by simply running an infinite loop on the card (while(true)).
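For reference, here is a minimal sketch of my test, assuming the plain CUDA runtime API (the real project is more involved) and assuming the GTX 650 Ti is device 1 - verify with cudaGetDeviceProperties:

    #include <cstdio>
    #include <cuda_runtime.h>

    // Spins forever: *flag is never set, so the kernel never returns and
    // the driver watchdog should fire. The volatile read keeps the
    // compiler from optimizing the empty loop away.
    __global__ void spin(volatile int *flag)
    {
        while (*flag == 0) { }
    }

    int main()
    {
        cudaSetDevice(1);                 // assumed index of the GTX 650 Ti

        int *d_flag;
        cudaMalloc(&d_flag, sizeof(int));
        cudaMemset(d_flag, 0, sizeof(int));

        spin<<<1, 1>>>(d_flag);

        // Blocks until the watchdog kills the kernel, then reports the error.
        cudaError_t err = cudaDeviceSynchronize();
        printf("kernel result: %s\n", cudaGetErrorString(err));

        cudaFree(d_flag);
        return 0;
    }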

All help is appreciated.
Jul 29, 2013 at 8:40 PM
So, temporarily, I changed the timeout period to 30 seconds and all is well... I just need to figure out how to make this second GPU run without that "hack", as that is iffy at best for a final solution.
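For anyone finding this later: the timeout in question is the Windows TDR (Timeout Detection and Recovery) watchdog, and the change is a registry value (documented by Microsoft; a reboot is needed for it to take effect):

    HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\GraphicsDrivers
        TdrDelay    REG_DWORD    30    (watchdog delay in seconds)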
Coordinator
Aug 10, 2013 at 6:45 AM
I do not see why you consider changing the timeout a hack. On Windows the watchdog applies to every device running a WDDM driver, not just the one driving the display, so moving the work to a second GeForce card does not avoid it. If you really must use long or infinite kernels then you have no choice. The timeout is a safety mechanism.
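If you can restructure the computation, the usual alternative is to split one long-running kernel into many short launches so each finishes well inside the watchdog period. A rough sketch of the pattern (the kernel body, sizes, and step count are placeholders for your real work):

    #include <cuda_runtime.h>

    // One short slice of the overall job; the body is a placeholder.
    __global__ void step(float *data, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) data[i] += 1.0f;
    }

    int main()
    {
        const int n = 1 << 20, total_steps = 1000;
        float *d_data;
        cudaMalloc(&d_data, n * sizeof(float));
        cudaMemset(d_data, 0, n * sizeof(float));

        int threads = 256, blocks = (n + threads - 1) / threads;
        for (int s = 0; s < total_steps; ++s)
        {
            // Each launch returns quickly, so the watchdog never fires;
            // the sync keeps the driver from batching launches together.
            step<<<blocks, threads>>>(d_data, n);
            cudaDeviceSynchronize();
        }

        cudaFree(d_data);
        return 0;
    }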