I'm going to need to dive further into this. I do JS canvas and WebGL rendering to videos with Chrome/Puppeteer via 'puppeteer-screen-recorder'.
But it does cause some issues at times. The latest 3D renders have been hitting memory issues that I think would be solved with bigger boxes, but I haven't needed to investigate for a little while.
I've thought about just outputting frames and then having ffmpeg stitch them into a video afterwards, but haven't gotten around to really testing it.
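Roughly, that pipeline could look like this (an untested sketch in Node.js; it assumes each frame is already a PNG buffer, e.g. from canvas.toBuffer('image/png'), and that ffmpeg is on PATH):

    // Rough sketch: dump numbered PNG frames, then stitch them with ffmpeg.
    const fs = require('fs');
    const path = require('path');
    const { execFileSync } = require('child_process');

    function writeFrames(frames, dir) {
      fs.mkdirSync(dir, { recursive: true });
      frames.forEach((png, i) => {
        // Zero-padded names so ffmpeg's image-sequence input reads them in order.
        fs.writeFileSync(path.join(dir, `frame-${String(i).padStart(5, '0')}.png`), png);
      });
    }

    function stitch(dir, fps, out) {
      execFileSync('ffmpeg', [
        '-y',
        '-framerate', String(fps),
        '-i', path.join(dir, 'frame-%05d.png'),
        '-c:v', 'libx264',
        '-pix_fmt', 'yuv420p', // broad player compatibility
        out,
      ]);
    }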
I'm guessing this is limited to 2D canvas, but excited to check it out. Thanks!
WebGL/3D works fine, just with some additional dependencies (e.g. Mesa drivers) and a little more setup in Node.js to create the context and copy the framebuffer to node-canvas for the image encoding.
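The framebuffer copy looks roughly like this (a minimal sketch assuming the headless-gl 'gl' package and node-canvas; the exact wiring depends on your setup):

    // Minimal sketch: render with headless WebGL, read back the framebuffer,
    // and encode it as a PNG via node-canvas.
    const createGL = require('gl');                          // headless-gl
    const { createCanvas, ImageData } = require('canvas');   // node-canvas

    const width = 640, height = 360;
    const gl = createGL(width, height, { preserveDrawingBuffer: true });

    // ... normal WebGL calls to draw the frame go here ...

    // Read back the framebuffer (RGBA, bottom-up in WebGL coordinates).
    const pixels = new Uint8Array(width * height * 4);
    gl.readPixels(0, 0, width, height, gl.RGBA, gl.UNSIGNED_BYTE, pixels);

    // Flip the rows so the image is top-down, then let node-canvas encode it.
    const flipped = new Uint8ClampedArray(pixels.length);
    const rowBytes = width * 4;
    for (let y = 0; y < height; y++) {
      flipped.set(pixels.subarray(y * rowBytes, (y + 1) * rowBytes),
                  (height - 1 - y) * rowBytes);
    }

    const canvas = createCanvas(width, height);
    canvas.getContext('2d').putImageData(new ImageData(flipped, width, height), 0, 0);
    const png = canvas.toBuffer('image/png'); // one frame, ready to write to disk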
Here's a little 3D animation I've rendered using a similar technique (plus WebGL) in a docker container:
The main thing to watch out for is whether you need specific WebGL extensions that might not be supported. Array instancing is the main one I use, which is supported.
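If you want to check up front, the usual feature-detection pattern works in a headless context too (short sketch; ANGLE_instanced_arrays is the WebGL 1 instancing extension):

    // Feature-detect the instancing extension before relying on it.
    const ext = gl.getExtension('ANGLE_instanced_arrays');
    if (!ext) {
      throw new Error('ANGLE_instanced_arrays is not supported in this context');
    }
    // ext.vertexAttribDivisorANGLE(...) and ext.drawArraysInstancedANGLE(...)
    // are then available for instanced draws.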