As someone who has designed and taught those courses, my experience (admittedly only one person's) is that you pick what will work with the least hassle -- because you'll have plenty of hassle elsewhere and probably no real time to deal with any of it without making more.
This is actually one of my favorite comments of all time, because it's how software wins. The software that students use is the software the industry uses about five years later.
> The software that students use is the software the industry uses about five years later.
Which is why it's anti-competitive for a company to sponsor university courses (such as by providing educational versions for free). It should be disallowed, unless the course is _specifically_ teaching that software, rather than being a general course.
Sort of -- but basically no course is going to teach both X and Y if they're functionally equivalent ways to learn about Z. Almost no course is specifically about X or Y; it's about Z, and learning both X and Y isn't germane to learning Z -- learning just one is enough.
As long as the companies behind X and Y both have a fair shot at sponsorship, this isn't really anti-competitive. It's literally a competition in which the companies compete for student and faculty attention.
Anti-competitive would be a company saying "you must teach X and not Y in your class about Z because you use Xco's mail services" or some other such abuse of one contractual relationship for an unrelated gain.
>Nvidia also spends a metric shit ton of money to make sure professors use and teach on their platform.
Do you have a source for this claim? Or do you simply mean that since they spend money making it better, professors end up using it of their own accord?
I hold an NVidia instructor's cert from when I worked in academia. They even give you access to hardware while you're running courses on it. It's super easy and totally free.
I'm sure plenty of professors use CUDA in their courses because it's what they actually use. At the same time, in 2013 when I was in college I took a course on "parallel computing" as a CS elective. The professor told us on day 1 that NVidia was sponsoring the course and had donated a bunch of GPUs to the clusters we could remotely connect into for the sake of the class. Naturally we used CUDA exclusively.
I know for a fact that this happened at a lot of schools. I don't know if it's still happening since I'm not in that world anymore, but don't see why it would have stopped.
CUDA is extremely simple; the classes might as well be on rails. OpenCL is nearly impossible to pick up without prior graphics, CUDA, distributed-computing, or operating-systems experience.
I'm not sure I really agree -- the level of abstraction used for each is extremely similar. There's not really any "graphics pipeline specifics" pollution in OpenCL or CUDA.
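To make the comparison concrete, here's a minimal sketch of the "kernel + launch" model both APIs share, written in CUDA (all names here are illustrative, not taken from any particular course). The kernel body would look almost identical in OpenCL; the practical difference is that OpenCL's host side also requires explicit platform, device, context, command-queue, and program-compilation steps, whereas CUDA gets by with a one-line launch:

```cuda
#include <cstdio>

// Minimal vector add: each thread computes one output element.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n) c[i] = a[i] + b[i];                  // guard the tail block
}

int main() {
    const int n = 1024;
    float *a, *b, *c;
    // Unified memory keeps host-side bookkeeping to a minimum.
    cudaMallocManaged(&a, n * sizeof(float));
    cudaMallocManaged(&b, n * sizeof(float));
    cudaMallocManaged(&c, n * sizeof(float));
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    // One-line launch: 256 threads per block, enough blocks to cover n.
    vecAdd<<<(n + 255) / 256, 256>>>(a, b, c, n);
    cudaDeviceSynchronize();

    printf("c[0] = %f\n", c[0]);
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

The same logic in OpenCL would carry a few dozen extra lines of host setup, which is arguably the "experience" barrier being debated above rather than anything about the kernel abstraction itself.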
I don't remember any alternatives in uni. Maybe OpenCL, but it was only lightly mentioned.