Hacker News

Considering the vast majority of our time behind the computer is spent with software that uses 2D graphics, it is kind of ridiculous that we still don't properly exploit our graphics hardware for vector graphics.

Luckily the hardware vendors are already working on fixing this. Here's an example of a particularly problematic scene rendered using NVIDIA's GPU-Accelerated Path Rendering, compared to Skia, Cairo, Qt and Direct2D:

https://www.youtube.com/watch?feature=player_detailpage&v=zI...

NVIDIA: ~220 fps

Skia: 6 fps

Cairo: 29 fps

Qt: 0.3 fps

Direct2D: 48 fps

(On a side note, I guess this is why IE10 is often faster at rendering SVGs than the other browsers: it uses Direct2D, whereas Firefox and Chrome use Skia.)

Using graphics hardware should be more energy-efficient, so maybe we will finally see a change because of mobile devices with HD screens.



Just to throw this in: my iOS project Ejecta[1] also does all the vector drawing on the GPU using OpenGL ES 2. It's not as advanced as NVIDIA's implementation (e.g. Ejecta computes all Bézier points on the CPU and just draws on the GPU), but it's still much faster than what Apple is doing in CoreGraphics.

[1] http://impactjs.com/ejecta
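For reference, the CPU side of that split (flatten the curve on the CPU, hand the vertex list to the GPU) can be sketched roughly like this. This is an illustrative version, not Ejecta's actual code:

```python
def cubic_bezier_points(p0, p1, p2, p3, segments=16):
    """Flatten a cubic Bezier into a polyline by uniform sampling.

    Each point is evaluated in Bernstein form; the resulting vertex
    list would then be uploaded to the GPU and drawn as a strip.
    """
    pts = []
    for i in range(segments + 1):
        t = i / segments
        mt = 1.0 - t
        x = (mt**3 * p0[0] + 3 * mt**2 * t * p1[0]
             + 3 * mt * t**2 * p2[0] + t**3 * p3[0])
        y = (mt**3 * p0[1] + 3 * mt**2 * t * p1[1]
             + 3 * mt * t**2 * p2[1] + t**3 * p3[1])
        pts.append((x, y))
    return pts
```

A real renderer would replace the fixed `segments` count with adaptive subdivision based on curve flatness, so nearly straight curves cost only a few vertices.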


Have you looked into using Loop-Blinn[1] for drawing Beziers? I'd love to be able to use the GPU for vector graphics and Loop-Blinn seems to solve the problem of drawing curves, but I'm still looking for a way to get coverage-based antialiasing when drawing straight lines with the GPU.

[1] http://http.developer.nvidia.com/GPUGems3/gpugems3_ch25.html - https://github.com/hansent/lbfont is a good example
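For the quadratic case, the Loop-Blinn trick boils down to one sign test per fragment: the three control points get "texture" coordinates (0,0), (1/2,0), (1,1), the GPU interpolates them across the triangle, and a fragment is on the filled side of the curve when u² − v ≤ 0. A CPU sketch of that test (names hypothetical; in practice this runs in a fragment shader):

```python
def loop_blinn_inside(bary, sign=1.0):
    """Loop-Blinn implicit test for one quadratic Bezier triangle.

    `bary` are the fragment's barycentric coordinates over the three
    control points; `sign` flips which side counts as inside
    (convex vs. concave segments).
    """
    uv = ((0.0, 0.0), (0.5, 0.0), (1.0, 1.0))
    u = sum(b * p[0] for b, p in zip(bary, uv))
    v = sum(b * p[1] for b, p in zip(bary, uv))
    return sign * (u * u - v) <= 0.0
```

Because the test is evaluated per fragment after interpolation, the curve stays exact at any zoom level, with no re-tessellation.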


Anti-aliasing seems to be why we still prefer pixel art over vector art. I thought high resolutions would eliminate this problem (more pixels, less worry about artifacts), but apparently not yet.

The Loop/Blinn work is really cool, and of course we now have hull shaders to really exploit it. It's a problem many people are actively working on, e.g. with Direct2D, but doing everything on the GPU is currently infeasible given round-trip times between the CPU and GPU. Great if you can render everything in one pass, not so great if you can't.


If you set the drawsAsynchronously or acceleratesDrawing property on a CALayer (the former is public and the latter is private), then the CGContext passed to you in -[CALayer drawInContext:] or -[UIView drawRect:] will be GPU-accelerated in the same way. WebKit on iOS uses this to accelerate 2D rendering.

Having said this, don't go turning this on everywhere, like on every tiny little UILabel. That GPU-accelerated CGContext is rendered in an offscreen pass using render-to-texture, so it is often faster to use the CPU. This property was designed more for, say, a full-screen drawing app on a Retina iPad that uses a single screen-sized CGContext for its drawing canvas, or for the Safari case where there are a number of same-sized CGContext tiles to draw into.


That is a really cool project, thanks for sharing!

> (e.g. Ejecta computes all bezier points on the CPU and just draws on the GPU)

I guess that's where the biggest gain comes from with NVIDIA: it's the only pure-GPU option right now, which probably eliminates quite a few bottlenecks. Freeing up the CPU is also never a bad thing.


Is Safari/native WebKit hooking straight into Core Animation on OS X and iOS? I know that when the Windows version was still around, its WebKit was using Direct2D, even before IE did. I wonder what that would look like in the comparison.


> (On a sidenote, I guess this is why IE10 is often faster at rendering SVGs than the other browsers: it uses Direct2D, whereas Firefox and Chrome use Skia).

I think Firefox on Windows uses Direct2D, at least for canvas; not sure about img tags.


The hard bit isn't the drawing of the vectors; it's the conversion from vectors to triangles (triangulation). It's non-trivial and can be slow for complex (and complexly filled) shapes.
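To make that concrete, the naive version of the triangulation step is ear clipping, which is O(n²) and handles only simple polygons. Real renderers also have to deal with self-intersections, fill rules, and holes, which is where it gets slow and fiddly. An illustrative sketch:

```python
def triangulate(poly):
    """Ear-clipping triangulation of a simple CCW polygon.

    Repeatedly finds a convex vertex whose triangle contains no other
    vertex ("an ear"), emits that triangle, and clips the vertex off.
    """
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

    def point_in_tri(p, a, b, c):
        return (cross(a, b, p) >= 0 and cross(b, c, p) >= 0
                and cross(c, a, p) >= 0)

    idx = list(range(len(poly)))
    tris = []
    while len(idx) > 3:
        for k in range(len(idx)):
            i, j, l = idx[k-1], idx[k], idx[(k+1) % len(idx)]
            a, b, c = poly[i], poly[j], poly[l]
            if cross(a, b, c) <= 0:      # reflex vertex, not an ear
                continue
            if any(point_in_tri(poly[m], a, b, c)
                   for m in idx if m not in (i, j, l)):
                continue                  # another vertex lies inside
            tris.append((i, j, l))
            idx.pop(k)
            break
        else:
            break                         # degenerate input; bail out
    tris.append(tuple(idx))
    return tris
```

Even this simple version re-runs the containment test for every remaining vertex on each clip, which is exactly the cost the NVIDIA approach sidesteps by never producing triangles that conform to the curve in the first place.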


The NVIDIA solution is to not have to do that:

http://www.youtube.com/watch?v=0IDyZof2pRI

(For those interested in the technical details of the NVIDIA approach, this is a lecture explaining it in detail.)

EDIT: He explains in more detail in the first two minutes of part two:

http://www.youtube.com/watch?v=PitV33ex5U4


Reading this, they talk about "baking" into a resolution-independent form. That sounds very much like triangulation to me.

If not, what is it?

http://www.slideshare.net/Mark_Kilgard/nvpathrendering-frequ...

(Q. 37)


http://de.slideshare.net/Mark_Kilgard/gpuaccelerated-path-re...

This looks more sophisticated than the kind of tessellation I'm used to, but they still have triangle fans!


Fascinating.

I didn't know about this. I've been out of this game for a while. That looks great. I wish we'd had it back when I was working on a GPU XAML renderer.

Thanks!



