Alea.cuBase

May 16, 2013 at 8:47 PM
Edited May 17, 2013 at 6:34 AM
Is there some benefit to using Alea.cuBase over CUDAfy? I read that there is no need for a translation step.
May 17, 2013 at 1:40 PM
Probably best not to ask such a question here! The fact that CUDAfy uses a translation stage is rather irrelevant: it is transparent to the user (unless they choose otherwise), and the end result is GPU code in the form of PTX. Translation to CUDA C can also be an advantage for debugging. CUDAfy is also open source and dual-licensed (LGPL and commercial); cuBase is commercial only.
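For anyone who hasn't seen it, this is roughly what the end-to-end flow looks like (a minimal sketch based on the standard CUDAfy examples; the kernel and variable names here are just placeholders):

    using Cudafy;
    using Cudafy.Host;
    using Cudafy.Translator;

    public class VectorAdd
    {
        [Cudafy]
        public static void AddKernel(GThread thread, int[] a, int[] b, int[] c)
        {
            int tid = thread.blockIdx.x;
            if (tid < c.Length)
                c[tid] = a[tid] + b[tid];
        }

        public static void Run()
        {
            // Translation from .NET IL to CUDA C and compilation to PTX happens
            // here, transparently - there is no separate build step for the user.
            CudafyModule km = CudafyTranslator.Cudafy();

            GPGPU gpu = CudafyHost.GetDevice(eGPUType.Cuda, 0);
            gpu.LoadModule(km);

            int[] a = { 1, 2, 3 }, b = { 4, 5, 6 }, c = new int[3];
            int[] devA = gpu.CopyToDevice(a);
            int[] devB = gpu.CopyToDevice(b);
            int[] devC = gpu.Allocate<int>(c);

            gpu.Launch(c.Length, 1).AddKernel(devA, devB, devC);

            gpu.CopyFromDevice(devC, c);
            gpu.FreeAll();
        }
    }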
You really need to try and make your questions more specific. There will be no magic wand solution for "out of core" without you needing to think through your algorithm and work with other CUDAfy users. Out of core will be there in a year with NVIDIA Maxwell and AMD's hUMA.
May 17, 2013 at 3:53 PM
If you're not writing pure CUDA, then at some point your source has to be translated. As Nick said, the step is transparent to both developers and end users (especially since the compiled version can be included in your .NET assembly as a resource for deployment). Also, having the generated CUDA source available lets you spot problems in your source and use NVIDIA's Nsight debugger, which is invaluable.
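The usual pattern from the CUDAfy examples (sketched below; as far as I can tell the serialized *.cdfy module carries both the generated CUDA C and the compiled PTX) is to translate once, serialize the module, and only re-translate when the .NET code actually changes:

    // Reuse a previously translated/compiled module if one exists and its
    // checksums still match the current .NET code; otherwise translate again.
    CudafyModule km = CudafyModule.TryDeserialize();
    if (km == null || !km.TryVerifyChecksums())
    {
        km = CudafyTranslator.Cudafy();
        km.Serialize();   // writes the *.cdfy file you can ship (or embed as a resource)
    }

    GPGPU gpu = CudafyHost.GetDevice(eGPUType.Cuda, 0);
    gpu.LoadModule(km);

That way the CUDA toolkit is only needed on the development machine, and the generated source is still there when you want to step through it in Nsight.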

I haven't used Alea.cuBase, but I've been delving into CUDAfy quite a bit recently and have nothing but good things to say about it. The fact that the dev team appears to be very active here is also a huge plus.
May 17, 2013 at 8:10 PM
Of course, Nick. That's all nice, but they support Dynamic Parallelism and Hyper-Q via the runtime API; CUDAfy, on the other hand, can't do that.
May 17, 2013 at 10:17 PM
Hyper-Q should just work if the device supports it - I think you'll need at least a Titan. Dynamic parallelism will be tougher, maybe impossible with the driver API, at least for now. The question is, what on earth do you actually want to do? Then you can begin discussing what is better.
May 18, 2013 at 5:30 AM
More info. These are two crucial technologies. http://www.nvidia.com/object/nvidia-kepler.html
May 18, 2013 at 7:14 AM
Yes, we know all about them; we even attended GTC 2012 and heard the words directly from the mouth of NVIDIA's CEO (via a massive PA system, of course). As I said, Hyper-Q will just work with CUDAfy, but dynamic parallelism may need to wait until it is possible via the driver API.
What exactly are you trying to do?
May 18, 2013 at 12:38 PM
Edited May 18, 2013 at 12:48 PM
And do they have support for any AMD device, as CUDAfy does?
May 20, 2013 at 7:49 PM
I guess there is also a fundamental question of approach here. It looks like Alea.cuBase uses LLVM rather than CUDA C as an intermediate step (how they do that I am not sure: does it use a variant of Mono LLVM, perhaps?). I do think that is a neat approach, but if I wanted to choose a technology for a big system, I think I would feel a lot safer with the CUDAfy approach. It is open source, and the code generation allows for a certain amount of hedging your bets. Whatever technology may come to dominate the market (even Intel MIC), it is always possible to target it with a code-generation approach (OK, and the driver API). It might be quite some work, but still eminently doable. I am not sure the same is true if the interface is via LLVM, in which case I think you are more at the mercy of the hardware vendor (does anyone have any more information on this point, other than https://github.com/ispc/ispc/issues/367 say?).

I am interested in this future-proofing aspect - what do people think?
May 20, 2013 at 10:52 PM
I have no clue as to how cuBase handles the GPU binaries, and I'd really love to know; it's an interesting problem. Since they don't require the CUDA toolkit to be installed, it would seem they are producing PTX or assembly code directly.
As for future-proofing, I agree that's the main requirement for any commercial project or large in-house server solution. Here the open-source dual license is the big winner, IMO: it's open source, but it has a commercial venture behind it, willing to maintain and evolve it. And cuBase's product is too expensive for poor third-world bastards such as me.
Also, let us not forget Linux/Mac compatibility and language independence (personally I only use C#, but as long as you use .NET, you can use CUDAfy).
May 21, 2013 at 2:05 PM
Actually, having the code translated is an advantage for CUDAfy: you can analyze the kernel with the AMD Kernel Analyzer.

You can't really do that with Alea.cuBase.
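For what it's worth, here is a rough sketch of how you would point CUDAfy at an OpenCL/AMD target and dump the generated source so an external tool like the AMD Kernel Analyzer can chew on it. The CudaSourceCode property is what I have seen used in the samples for grabbing the generated code, so treat that as an assumption and check it against your version:

    using Cudafy;
    using Cudafy.Host;
    using Cudafy.Translator;
    using System.IO;

    // Translate to OpenCL instead of CUDA C so the kernels can target AMD hardware.
    CudafyTranslator.Language = eLanguage.OpenCL;
    CudafyModule km = CudafyTranslator.Cudafy();

    // Write the generated source out for external analysis
    // (property name assumed from the CUDAfy samples).
    File.WriteAllText("kernels.cl", km.CudaSourceCode);

    GPGPU gpu = CudafyHost.GetDevice(eGPUType.OpenCL, 0);
    gpu.LoadModule(km);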