with the rise of gpgpu and newer videocards running sometimes hundreds of streams in parallel, it almost seems a waste to leave a dedicated videocard rendering to screen every cycle when you're not actively playing the latest billion-polygon 120fps stereoscopic doom clone. why not instead look at each of these readily available pieces of drop-in hardware as a relatively inexpensive way to add processors to a workstation for any number of tasks? looking at developments like opencl and amd's heterogeneous system architecture, that at least seems to be the direction hardware vendors are pushing us, and given the processing, memory and bandwidth advantages it affords, i'd say it's not a bad push. after all, at the moment it's about the only way to fit a multiprocessor computer into a mini-itx case, which in turn is the best way to run a multiprocessor machine on the lowest wattage possible.

now, i know there is nothing stopping anyone, at the application level, from taking advantage of these developments, but what about at the system level? what would that even look like? could it possibly be less work than writing hardware video drivers?
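to make the application-level side concrete, here's a minimal sketch of what "seeing" those extra processors looks like from userspace: a few lines of opencl host code that enumerate every compute device on the system, cpu and gpu alike, and report how many compute units each one brings to the table. nothing here assumes anything about your particular setup beyond having the opencl headers and an icd loader installed (link with -lOpenCL).

```c
/* a minimal sketch: enumerate every opencl compute device on the box.
 * assumes opencl headers and an icd loader are installed. */
#include <stdio.h>
#include <CL/cl.h>

int main(void) {
    cl_platform_id platforms[8];
    cl_uint nplatforms = 0;
    clGetPlatformIDs(8, platforms, &nplatforms);

    for (cl_uint p = 0; p < nplatforms; p++) {
        cl_device_id devices[8];
        cl_uint ndevices = 0;
        /* CL_DEVICE_TYPE_ALL picks up cpus and gpus alike --
         * the whole point of the heterogeneous model */
        clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_ALL, 8,
                       devices, &ndevices);

        for (cl_uint d = 0; d < ndevices; d++) {
            char name[256];
            cl_uint units = 0;
            clGetDeviceInfo(devices[d], CL_DEVICE_NAME,
                            sizeof(name), name, NULL);
            clGetDeviceInfo(devices[d], CL_DEVICE_MAX_COMPUTE_UNITS,
                            sizeof(units), &units, NULL);
            printf("%s: %u compute units\n", name, units);
        }
    }
    return 0;
}
```

on a machine with a discrete videocard this will typically list the cpu alongside the gpu as peer compute devices, which is exactly the "extra processors in a mini-itx case" picture. the system-level question is the interesting one, though, because all of this machinery currently lives in userspace.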
just found a thing: weibin sun and robert ricci at the university of utah have done some work to this end with kgpu, a project that lets linux kernel code offload work to the gpu (https://code.google.com/p/kgpu/, http://www.cs.utah.edu/~wbsun/kgpu.pdf). pretty neat.