
Posted

Why not let BIF support the GPU? That would save a lot of CPU resources, and it would be faster too.

Posted (edited)

This has been discussed here and in other places, with the same conclusion each time.

 

Edited by Neminem
Posted
1 hour ago, Luke said:

@seanbuffor @rbjtech didn't you ask this before? Do you recall where?

@Neminem Thanks.

In summary, because the images being processed are very small, even with hardware tone mapping applied it takes longer to move the data to the GPU, process it, and move it back than to simply process it directly on the CPU. If you want things done more quickly, running multiple instances of the BIF generator with Emby on different parts of the library is the way to go.
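The advice above (parallelize across library subsets instead of offloading tiny images to the GPU) can be sketched roughly as follows. This is a hypothetical illustration, not Emby's actual implementation: the function names (`chunk_library`, `generate_bifs`, `run_parallel`) and the placeholder per-file work are assumptions.

```python
# Hypothetical sketch: split the library into chunks and run one
# CPU-bound BIF-generation worker per chunk, in parallel.
from concurrent.futures import ProcessPoolExecutor


def chunk_library(paths, workers):
    """Split the list of video paths into `workers` roughly equal
    chunks using strided slicing, one chunk per worker process."""
    return [paths[i::workers] for i in range(workers)]


def generate_bifs(paths):
    # Placeholder for the real per-file work (e.g. extracting periodic
    # frames with a tool like ffmpeg and packing them into a .bif
    # container). Returns the number of files handled.
    return len(paths)


def run_parallel(paths, workers=4):
    """Process each chunk in a separate process, so the work scales
    across CPU cores without any GPU transfer overhead."""
    chunks = chunk_library(paths, workers)
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(generate_bifs, chunks))
```

Processes (rather than threads) are used because frame extraction and image encoding are CPU-bound; each worker instance then grinds through its own slice of the library independently.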

Posted

My I/O isn't the bottleneck; the biggest problem is that the CPU is almost completely maxed out, making the machine unusable. I've seen that BIF has been deprecated and a newer technology should be used. Would Emby consider adopting it?

Posted
15 hours ago, Ansell said:

My I/O isn't the bottleneck; the biggest problem is that the CPU is almost completely maxed out, making the machine unusable. I've seen that BIF has been deprecated and a newer technology should be used. Would Emby consider adopting it?

Hi, what makes you think BIF is being deprecated?
