
Need Server Recommendations


troyhough


Guest asrequested

Wait, we're just talking about SDR to SDR? Ahh, that's boring.

 

hehe....I was just responding to a comment that 2160 reduced to 1080 would be worse than 1080. I actually think it's better.


lightsout

hehe....I was just responding to a comment that 2160 reduced to 1080 would be worse than 1080. I actually think it's better.

Ah, I see. I got my hopes up for a second. Back to madVR.

Guest asrequested

Ah, I see. I got my hopes up for a second. Back to madVR.

 

MadVR? Huh? That's an entirely different topic....


lightsout

MadVR? Huh? That's an entirely different topic....

It absolutely is, but I thought you were showing off some early form of tone mapping with mpv. Not a lot of options right now for that.

Guest asrequested

I think the server side tone mapping will commence in the next round of betas, after 4.4 stable is released. I'm looking forward to messing with that.


Guest asrequested

For GPU accelerated decoding, I've been reading what the mpv guys say. They do not like it, and recommend to always use software whenever possible. For encoding, I'm less informed. But I've read/watched a lot of reviews on using the latest Ryzen CPUs, and one consistent thing that comes up is that video editors are choosing to use them. Why exactly, I'm not sure, but apparently they make light work of their tasks. FWIW.


For GPU accelerated decoding, I've been reading what the mpv guys say. They do not like it, and recommend to always use software whenever possible. 

 

Why are you posting such incredible nonsense?

 

No details, no fact, no arguments, no justification, not naming a specific codec and first of all neglecting the fact that there are many different hardware accelerated decoder implementations (Emby supports 6) and each of them is a kind of its own.
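For a sense of how many distinct implementations there are, a rough sketch like the following (not necessarily the exact six Emby supports) asks a local ffmpeg build which hwaccel methods it was compiled with:

import subprocess

# List the hardware acceleration methods this ffmpeg build exposes.
# Typical names are cuda, vaapi, qsv, dxva2, d3d11va and videotoolbox -
# each is a separate implementation with its own behaviour and quirks.
result = subprocess.run(
    ["ffmpeg", "-hide_banner", "-hwaccels"],
    capture_output=True, text=True, check=True
)
methods = [line.strip() for line in result.stdout.splitlines()[1:] if line.strip()]
print("hwaccels reported by this build:", methods)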


SHSPVR

For GPU accelerated decoding, I've been reading what the mpv guys say. They do not like it, and recommend to always use software whenever possible. For encoding, I'm less informed. But I've read/watched a lot of reviews on using the latest Ryzen CPUs, and one consistent thing that comes up is that video editors are choosing to use them. Why exactly, I'm not sure, but apparently they make light work of their tasks. FWIW.

 

That's easy to answer: video editors are choosing AMD because its multi-core and multi-threaded performance is better. Watch this: 4K Video Editing PC on a BUDGET.


Guest asrequested

Why are you posting such incredible nonsense?

 

No details, no fact, no arguments, no justification, not naming a specific codec and first of all neglecting the fact that there are many different hardware accelerated decoder implementations (Emby supports 6) and each of them is a kind of its own.

 

Take it easy. They document it pretty well.

 

 

https://mpv.io/manual/master/#options-hwdec
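If anyone wants to see what that option actually changes on their own machine, a rough sketch like this (the clip name is a placeholder) plays the same file with hardware decoding forced off and then with mpv's automatic selection:

import subprocess

CLIP = "sample_2160p.mkv"  # placeholder file

# --hwdec=no forces pure software decoding; --hwdec=auto lets mpv pick a
# hardware decoder if one is available. Comparing the two on the same clip
# shows the CPU-load/quality trade-off being argued about here.
for mode in ("no", "auto"):
    subprocess.run(["mpv", f"--hwdec={mode}", CLIP], check=True)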


mickle026

Lol, love this discussion.

 

Hardware or software, it doesn't matter. Unless you copy every frame and every pixel from the original and save it in a format that renders true to that on screen, you are going to change it from the original in some way, either noticeably or not.

 

The whole point of changing the format is to reduce the file size or allow better packetized transmission for flawless playback.

 

Everyone's perception of the same content may be different, even when it's very good, so here you are delving into that, and it seems to me you're expecting what each of you sees to be the same.

 

Clearly it's not.

 

Anyway, the argument seems to have descended into hardware vs software: which is best?

My opinion is it doesn't matter; each writes what it's told to. Hardware is actually software performed by physical electronic parts acting on instructions, but running at the speed of the hardware.

Software is pretty much the same; instead of the GPU's electronics it's the CPU's electronics mimicking hardware, so if the end result is the same, what's the difference?

 

The difference is always the algorithm used, and that may vary from vendor to vendor and with the settings used, including tone mapping.

 

The end result you all strive for is to copy the original as closely as possible.

 

Pixel peep away folks ;)
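If you'd rather pixel peep with numbers than with eyeballs, ffmpeg's ssim filter will score an encode against its source (a rough sketch; the file names are placeholders):

import subprocess

ORIGINAL = "original.mkv"  # placeholder reference
ENCODED = "encoded.mkv"    # placeholder re-encode to judge

# The ssim filter compares the first input (distorted) against the second
# (reference) and prints per-frame and average scores to stderr.
subprocess.run(
    ["ffmpeg", "-hide_banner",
     "-i", ENCODED, "-i", ORIGINAL,
     "-lavfi", "[0:v][1:v]ssim",
     "-f", "null", "-"],
    check=True,
)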

 

Lol


Take it easy. They document it pretty well.

 

 

https://mpv.io/manual/master/#options-hwdec

 

All I can see is that they are talking about problems they're having when interfacing with certain hw acceleration APIs, but I can't see any evidence, or actually not even a statement, that a certain hw decoder would output inferior quality when decoding (except the mentioned bugs).

Interestingly, we haven't encountered any of those problems (different ones of course, though).

The extreme variety of different transcoding situations is a huge challenge and requires a large amount of work for fine tuning, testing, bug-tracking and bug-fixing. You can either take the challenge and work on it until it works perfectly, or you can state in your documentation that users should avoid using this feature because it doesn't work well....


SHSPVR

If you read it all, you eventually get to this:

 

[Screenshot of the mpv hwdec documentation]

 

They're referring to older video devices which lack any kind of good GPU-accelerated decoding, and to other devices that have been around for a long time. Bear in mind that doesn't necessarily mean they don't like GPU-accelerated decoding.


SHSPVR

Anyway, the argument seems to have descended into hardware vs software: which is best?

My opinion is it doesn't matter; each writes what it's told to. Hardware is actually software performed by physical electronic parts acting on instructions, but running at the speed of the hardware.

Software is pretty much the same; instead of the GPU's electronics it's the CPU's electronics mimicking hardware, so if the end result is the same, what's the difference?

 

Yes, you're right. Actually, software = firmware (decoder) acting on a set of instructions to tell the hardware what to do.

Do keep in mind that there are 3 types of decoder: CPU software, GPU (commonly referred to as HW acceleration), and then the real McCoy, a dedicated hardware decoder. They're not all the same, as a dedicated hardware decoder is far superior to the other two.
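A quick way to feel the difference between the first two on your own box (a rough sketch; the clip name is a placeholder, and cuda can be swapped for whatever hwaccel your GPU offers) is to decode the same file to a null sink both ways and compare the speed ffmpeg reports:

import subprocess

CLIP = "sample_hevc_2160p.mkv"  # placeholder file

# Pure CPU software decode.
software = ["ffmpeg", "-hide_banner", "-i", CLIP, "-f", "null", "-"]

# GPU-assisted decode via the cuda hwaccel (use vaapi/qsv/d3d11va as appropriate).
hwaccel = ["ffmpeg", "-hide_banner", "-hwaccel", "cuda", "-i", CLIP, "-f", "null", "-"]

for label, cmd in (("software", software), ("hwaccel", hwaccel)):
    print(f"--- {label} decode ---")
    subprocess.run(cmd, check=True)  # fps/speed are printed on stderr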


Guest asrequested

All I can see is that they are talking about problems they're having when interfacing with certain hw acceleration APIs, but I can't see any evidence, or actually not even a statement, that a certain hw decoder would output inferior quality when decoding (except the mentioned bugs).

Interestingly, we haven't encountered any of those problems (different ones of course, though).

The extreme variety of different transcoding situations is a huge challenge and requires a large amount of work for fine tuning, testing, bug-tracking and bug-fixing. You can either take the challenge and work on it until it works perfectly, or you can state in your documentation that users should avoid using this feature because it doesn't work well....

 

 

Ok, I see the distinction you are making, accelerator vs decoder. My apologies, I hadn't considered that.

 

And of course striving to improve is always a very good thing, and sometimes necessary.


Ok, I see the distinction you are making, accelerator vs decoder. My apologies, I hadn't considered that.

 

No need to apologize; driving a discussion to one edge or another can also be enlightening sometimes...

 

Well - a "distinction,  accelerator vs decoder" - I don't think that I said anything like that.

 

So, let's summarize once again:

You said that hardware decoding would result in worse quality than software decoding, but the documentation you referenced didn't confirm that statement. Instead, they are talking about problems they are having when using certain hw acceleration components, and saying that in certain cases those problems can cause inferior decoding quality. They also say that using hw acceleration in general can cause a lot of problems, and that's why they strongly discourage the use of hw accelerated decoding.

 

Now, we're getting to the core misconception:

  • they are discouraging the use of hw decoding in THEIR application because of the problems that are occurring in THEIR application 


  • But we don't have those problems, so in what way is this related to Emby at all?

Guest asrequested

 

No need to apologize; driving a discussion to one edge or another can also be enlightening sometimes...

 

Well - a "distinction,  accelerator vs decoder" - I don't think that I said anything like that.

 

So, let's summarize once again:

You said that hardware decoding would result in worse quality than software decoding, but the documentation you referenced didn't confirm that statement. Instead, they are talking about problems they are having when using certain hw acceleration components, and saying that in certain cases those problems can cause inferior decoding quality. They also say that using hw acceleration in general can cause a lot of problems, and that's why they strongly discourage the use of hw accelerated decoding.

 

Now, we're getting to the core misconception:

  • they are discouraging the use of hw decoding in THEIR application because of the problems that are occurring in THEIR application 


  • But we don't have those problems, so in what way is this related to Emby at all?

 

 

Well, they do distinguish between acceleration and decoding.

 

[Screenshot of the mpv hwdec documentation]

 

My train of thought was stuck on the wrong track. I was thinking of acceleration and not direct decoding. Acceleration would appear to have some rough edges in some cases, whereas a direct decode would appear to have fewer issues.


Well, they do distinguish between acceleration and decoding.

 

[Screenshot of the mpv hwdec documentation]

 

My train of thought was stuck on the wrong track. I was thinking of acceleration and not direct decoding. Acceleration would appear to have some rough edges in some cases, whereas a direct decode would appear to have fewer issues.

 

I'm afraid that new track is just another dead end...


Guest asrequested

I'm afraid that new track is just another dead end...

 

Ha, yes. I was admitting I was wrong :)


Forget about that; most of the difference is inside the ffmpeg code, and you only need to provide parameters in a slightly different way.

 

The one significant difference is that format parsing is done by ffmpeg in one case and by the hw implementation in the other.


Consider these variants as the same thing.

It doesn't make a difference to the resulting quality. 


Guest asrequested

Forget about that; most of the difference is inside the ffmpeg code, and you only need to provide parameters in a slightly different way.

 

The one significant difference is that format parsing is done by ffmpeg in one case and by the hw implementation in the other.

 

Like the difference between nvenc/nvdec and cuda?


Like the difference between nvenc/nvdec and cuda?

 

Nvidia decoding is the only case where both variants are implemented in ffmpeg:

 

CUVID - e.g. h264_cuvid  => Standalone hw decoder

NVDEC - (plugs into h264) => Hwaccel decoder

 

The decoding implementation is exactly the same, but h264_cuvid is fed the video stream directly, while in the other case the ffmpeg software decoder parses the video stream and then feeds the parsed frame data to the hardware.
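In ffmpeg command-line terms the two variants look roughly like this (a sketch; the input name is a placeholder):

import subprocess

CLIP = "sample_h264.mkv"  # placeholder file

# Variant 1: the standalone CUVID decoder - the compressed stream is handed
# straight to the hardware decoder, bypassing ffmpeg's own h264 parsing.
cuvid = ["ffmpeg", "-hide_banner", "-c:v", "h264_cuvid", "-i", CLIP, "-f", "null", "-"]

# Variant 2: NVDEC as an hwaccel - ffmpeg's h264 decoder parses the stream,
# then hands the parsed data to the same NVIDIA hardware for decoding.
nvdec = ["ffmpeg", "-hide_banner", "-hwaccel", "nvdec", "-i", CLIP, "-f", "null", "-"]

for cmd in (cuvid, nvdec):
    subprocess.run(cmd, check=True)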


Guest asrequested

Nvidia decoding is the only case where both variants are implemented in ffmpeg:

 

CUVID - e.g. h264_cuvid  => Standalone hw decoder

NVDEC - (plugs into h264) => Hwaccel decoder

 

The decoding implementation is exactly the same, but h264_cuvid is fed the video stream directly, while in the other case the ffmpeg software decoder parses the video stream and then feeds the parsed frame data to the hardware.

 

That reminded me of something your buddy Philip Langdale told me a while back. It's good information :)

 

[Screenshot of a comment from Philip Langdale]


lightsout

But I've read/watched a lot of reviews on using the latest Ryzen CPUs, and one consistent thing that comes up is that video editors are choosing to use them. Why exactly, I'm not sure, but apparently they make light work of their tasks. FWIW.

Seems fairly obvious. Ryzen has all the cores, even on the mainstream platform. Video encoding loves cores.
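If anyone wants to sanity-check the "encoding loves cores" claim, a rough sketch like this (the clip name is a placeholder) runs the same libx264 encode with different thread caps and lets you compare the speed ffmpeg reports:

import subprocess

CLIP = "sample_1080p.mkv"  # placeholder file

# Capping -threads simulates fewer cores; watch the reported speed scale up.
for threads in (2, 4, 8, 16):
    print(f"--- libx264 with {threads} threads ---")
    subprocess.run(
        ["ffmpeg", "-hide_banner", "-i", CLIP,
         "-c:v", "libx264", "-preset", "medium", "-threads", str(threads),
         "-f", "null", "-"],
        check=True,
    )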
