webhostingfoki.blogg.se

Vega gpu and houdini sidefx

Houdini is excellent technology, and I am making it better.

ndickson: "In a nutshell, getting *every*thing on the GPU would take an effort comparable to re-writing all of VEX …" I completely agree, which is why we must take an asymmetric approach, connecting Mathematica Kernels to Houdini to greatly extend Houdini's functionality toward full automation and much higher performance and efficiency. I wouldn't predict that it's never going to happen, but it'd almost certainly have to be a very strict subset of current VEX functionality, which may be perfectly acceptable for some things.

In a nutshell, getting *every*thing on the GPU would take an effort comparable to re-writing all of VEX and all of the code that it relies on, and the overhead due to sending all of the data needed may mean that it'd be worse off before it even began any real computation on the GPU. VEX has functions that queue up geometry modification operations to be done after the VEX part is done, so there'd be no possible benefit for those operations.
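The transfer-overhead argument can be sketched with a simple cost model. The bandwidth and throughput figures below are assumptions for illustration, not measurements of Houdini or any real card: offloading only pays off once the compute saved on the GPU outweighs the time spent copying data over PCIe.

```python
# Rough cost model for GPU offload. All rates are hypothetical,
# chosen only to illustrate the transfer-overhead trade-off.
PCIE_BANDWIDTH = 12e9      # bytes/s, assumed effective PCIe transfer rate
CPU_RATE = 1e8             # points/s processed on the CPU (assumed)
GPU_RATE = 2e9             # points/s processed on the GPU (assumed)
BYTES_PER_POINT = 64       # position plus a few attributes (assumed)

def cpu_time(n_points, ops_per_point):
    """Seconds to process everything in place on the CPU."""
    return n_points * ops_per_point / CPU_RATE

def gpu_time(n_points, ops_per_point):
    """Seconds to upload, process on the GPU, and download results."""
    transfer = 2 * n_points * BYTES_PER_POINT / PCIE_BANDWIDTH  # up + down
    compute = n_points * ops_per_point / GPU_RATE
    return transfer + compute

light = (1_000_000, 1)    # very little work per point
heavy = (1_000_000, 500)  # a ton of computation per point

print(cpu_time(*light) < gpu_time(*light))   # CPU wins on light work
print(cpu_time(*heavy) > gpu_time(*heavy))   # GPU wins on heavy work
```

With light per-point work, the round-trip copy alone exceeds the CPU's total time, which is exactly the "worse off before it even began any real computation" scenario described above.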

VEX has functions that can find the closest point on a detail or send a ray into a detail, supporting most of Houdini's primitives, including NURBS surfaces, curves, metaballs, etc., meaning that all of that would need to be ported to the GPU. VEX has regular-expression parsing functions, so you'd either need to ban them or port all of those to the GPU as well. As an example of a complication, in VEX you can request parameter values from any node, or topology data from any SOP's geometry; if you want VEX on the GPU, you either need to ban those operations or send all of that data to the GPU, which could make it significantly slower than doing it on the CPU unless you're doing a ton of computation. Not to burst too many bubbles, but this seems a bit like "Toronto is a city, and Las Vegas is a city, so why not have a CN Tower in Las Vegas?" In concept it may be possible, but it'd be a gigantic amount of work, probably way more work than the original, it'd have to be done from scratch, and it probably wouldn't be worth it to go to Las Vegas just to see a copy of the CN Tower there; you'd need to be able to do other stuff while you're there, or else it'd be a huge disappointment.

So any advice on the most optimal way to build a multi-GPU rig would be greatly appreciated. My original thinking was that each card would be used totally independently.
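To make concrete what "porting a spatial query" means, here is a minimal brute-force sketch of the core of a closest-point lookup. This is a toy stand-in, not Houdini's implementation: VEX's closest-point functions additionally handle NURBS, curves, and metaballs and use acceleration structures, and that extra machinery is the part that would actually have to be rewritten for a GPU backend.

```python
import math

def closest_point(points, q):
    """Brute-force nearest point to query q among a list of (x, y, z)
    tuples. Returns (index, distance). Real geometry queries also
    handle curved primitives and spatial acceleration structures."""
    best_i, best_d2 = -1, math.inf
    for i, (x, y, z) in enumerate(points):
        d2 = (x - q[0])**2 + (y - q[1])**2 + (z - q[2])**2
        if d2 < best_d2:
            best_i, best_d2 = i, d2
    return best_i, math.sqrt(best_d2)

pts = [(0, 0, 0), (1, 0, 0), (0, 2, 0)]
print(closest_point(pts, (0.9, 0.1, 0.0)))  # index 1, distance ≈ 0.141
```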

And in general, a rig with multiple GPUs will always get throttled down to the slowest GPU.
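The throttling claim is simple arithmetic: if work is split evenly across cards, every card waits on the slowest one to finish its slice. A toy model (the throughput numbers are made up for illustration):

```python
# Hypothetical throughputs in work-units/s for three mismatched cards.
rates = {"660ti": 1.0, "1070ti": 4.0, "titan_rtx": 8.0}
work = 120.0  # total work units

# Naive even split: finish time is set by the slowest card's slice.
even_finish = max((work / len(rates)) / r for r in rates.values())

# Ideal proportional split: each card gets work in ratio to its
# throughput, so all cards finish at the same moment.
balanced_finish = work / sum(rates.values())

print(even_finish, balanced_finish)  # 40.0 vs ≈ 9.23 seconds
```

Even perfect load balancing requires the workload to be divisible that way; lockstep simulation steps usually aren't, which is why mismatched cards tend to run at the slowest card's pace.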

I'm building a new rig and am trying to understand the most optimal way to build in multiple GPUs for sim and rendering. I understand only one GPU can be used for OpenCL, and one is used for the viewport. Ideally I'd like to use my two old cards, and then eventually add something beefier as well:

660 Ti 3 GB - viewport card
1070 Ti 8 GB - for sim and rendering
Titan RTX 24 GB - TBD on this last card, but it will be something big for sim and rendering

I'm being told on the Tom's Hardware forum that using these cards will basically result in bottlenecking everything because the 660 Ti is so old. So I'd like some advice on whether it's possible to mix and match different cards this way, or if they need to be relatively similar or the same card.
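For reference, Houdini chooses its single OpenCL device via environment variables, so a houdini.env fragment along these lines can pin sims to one specific card. The variable names follow SideFX's environment-variable documentation, but the device index below is an assumption: actual numbering depends on the driver and platform and needs checking on the machine in question.

```
# houdini.env — hypothetical fragment pinning OpenCL sims to one card.
# Device numbering is driver/platform dependent; 1 is an assumed index
# for the 1070 Ti in the three-card layout described above.
HOUDINI_OCL_DEVICETYPE = GPU
HOUDINI_OCL_DEVICENUMBER = 1
```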