
Improve Performance


Bert


Malterstorp
On 12/17/2021 at 9:26 AM, Malterstorp said:

Hi, I have a similar problem! The performance on the FireTV Cube is terrible when it comes to displaying images. The Cube is connected via 5 GHz Wi-Fi and reaches about 300-400 Mbit/s. However, the images in the gallery keep loading endlessly, which is very annoying when viewing. Even when browsing images that have already loaded, the first 3-4 display without problems and then a placeholder image appears. You have to go back and click on the image again before it is displayed.

Would an eSATA SSD be a possible solution here? I use a DS918+ with IronWolf HDDs.

 

IMG_8854.JPG

An update to a newer app version for the FireTV was able to solve the problem. Thanks


13 hours ago, Malterstorp said:

An update to a newer app version for the FireTV was able to solve the problem. Thanks

That's great news. Were you a couple of updates behind, or just one release back?


On 12/23/2021 at 3:01 AM, Malterstorp said:

An update to a newer app version for the FireTV was able to solve the problem. Thanks

To be honest, I just got Fire Sticks for each device and quit using the built-in apps (Samsung, LG) on the TVs. Much happier with the overall setup. I did add an SSD as a cache through USB; no memory upgrades, but I did add two 1 TB NVMe drives. None of that made much difference IMO. In fact, I think the USB SSD for transcoding was a negative, but I haven't had much time to play.


Paddaclus
On 12/17/2021 at 2:06 PM, FrostByte said:

You can also change the cache and vacuum settings from system.ini; they have been hidden options for a long time. Sorry, I thought the new tab would have been in stable by now.

  <VacuumDatabaseOnStartup>false</VacuumDatabaseOnStartup>
  <DatabaseCacheSizeMB>1024</DatabaseCacheSizeMB>
  <OptimizeDatabaseOnShutdown>true</OptimizeDatabaseOnShutdown>
  <DatabaseAnalysisLimit>1000</DatabaseAnalysisLimit>
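If you'd rather script the change than edit by hand, a sed one-liner works. This is a sketch on a scratch copy of the file; the real config file's name and location on the Synology are assumptions, so back the file up and stop Emby Server before touching it:

```shell
# Demo on a scratch copy; the real file's name/path on a Synology is an
# assumption (somewhere under the EmbyServer package directory).
cat > system.xml <<'EOF'
<ServerConfiguration>
  <VacuumDatabaseOnStartup>false</VacuumDatabaseOnStartup>
  <DatabaseCacheSizeMB>512</DatabaseCacheSizeMB>
</ServerConfiguration>
EOF

# Raise the database cache from 512 MB to 1024 MB in place
sed -i 's|<DatabaseCacheSizeMB>[0-9]*</DatabaseCacheSizeMB>|<DatabaseCacheSizeMB>1024</DatabaseCacheSizeMB>|' system.xml
grep DatabaseCacheSizeMB system.xml
```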

 

Thank you for this mate.  It has only been a day but I think this has solved my "slow loading home screen" problems.  Much appreciated.


22 hours ago, Paddaclus said:

Thank you for this mate.  It has only been a day but I think this has solved my "slow loading home screen" problems.  Much appreciated.

Thanks for the feedback !


On 12/24/2021 at 10:08 AM, Bert said:

To be honest, I just got Fire Sticks for each device and quit using the built-in apps (Samsung, LG) on the TVs. Much happier with the overall setup. I did add an SSD as a cache through USB; no memory upgrades, but I did add two 1 TB NVMe drives. None of that made much difference IMO. In fact, I think the USB SSD for transcoding was a negative, but I haven't had much time to play.

Hope you had a wonderful Christmas. If you don't mind me asking, how did you set up these two NVMe sticks via USB? Any reason you didn't put them in the NAS's NVMe slots and set them up as a read/write cache in Storage Manager? Or did I misunderstand, and you added a third SSD via USB?


1 minute ago, cayars said:

Hope you had a wonderful Christmas. If you don't mind me asking, how did you set up these two NVMe sticks via USB? Any reason you didn't put them in the NAS's NVMe slots and set them up as a read/write cache in Storage Manager? Or did I misunderstand, and you added a third SSD via USB?

The last part. I added the two NVMe drives internally and a third SSD via a 2.5" disk-to-USB 3.0 case I had impulse-bought.


I've got that setup in place right now on my 920+ and happen to be doing some benchmark testing to see whether the USB SSD is beneficial for TV/DVR recording or not. If it were an internal drive I'm sure it would be helpful, but since the external SSD is limited by the USB3 interface, it's hard to guess.

I know my NAS (920+) only has USB 3.2 Gen 1 ports, not Gen 2 ports, so it's limited to 5 Gbit/s max throughput vs the 10 Gbit/s that Gen 2 offers. Of course there is overhead going from USB3 to SATA, so realistically it's more like 3.5 Gbit/s of actual usable bandwidth. Later in the week I'll be able to check this as well, since I'm moving the data off my Synology to another server, so I'll be able to benchmark the difference in available bandwidth between internal mounting, external USB, and eSATA.
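For rough numbers, the raw USB signalling rate overstates what you actually get. USB 3.2 Gen 1 signals at 5 Gbit/s with 8b/10b encoding, so the payload ceiling is around 500 MB/s before any USB protocol or SATA-bridge overhead; a quick back-of-the-envelope:

```shell
# USB 3.2 Gen 1: 5 Gbit/s raw signalling; 8b/10b encoding leaves 80% for data
raw_mbit=5000
data_mbit=$(( raw_mbit * 8 / 10 ))   # 4000 Mbit/s after encoding
max_mb_s=$(( data_mbit / 8 ))        # ~500 MB/s theoretical payload ceiling
echo "theoretical ceiling: ${max_mb_s} MB/s"
```

Real-world figures for a USB3-to-SATA SSD tend to land well under that once protocol and bridge overhead are included.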


  • 3 weeks later...
Dusan78

Hi there,

Was wondering if there is a final consensus as to the best way to set up a DS920+ for the fastest browsing.

Currently I have 2x 16 TB EXOS 7,200 RPM drives, plus a 500 GB M.2 NVMe read cache, and I upgraded to 8 GB of RAM.

 

I absolutely hate having to wait for the movie/show thumbnails to show up as I scroll.    

I believe the NVMe M.2 read cache helps once files are in it, but it looks like that starts over after changes. So I always have to slowly scroll through all the movies and shows for the thumbnails to load, and then it's faster until some time passes or a restart.

Should I be using a faster drive just for the cache folder? What is the best solution for keeping all these small thumbnails in memory for quick access?

Thank you!

 


I'm back doing some testing of this later tonight.

What you just mentioned gives me another idea to test, one I think will have the biggest impact on overall use (besides DVR/transcoding): having a fast NVMe drive where the cache and metadata are located. That's a lot of small files that could be loaded very quickly instead of waiting on spinning rust or the read/write cache. Some items like this are best kept on an SSD.

That's one of the reasons I believe running on Windows or Linux seems faster than a NAS, even with the same CPU. Typically you'd be using an SSD of some type for your OS, and this would typically be the same drive Emby is installed on, along with its cache, metadata, logs, transcoding, etc.

I'm flushing the cache on my Synology 920+ at present so I can try to set the NVMe drives up as mount points for Emby to use directly.


Dusan78

That's what I was wondering: whether there is a way to permanently mount these cache files directly on the NVMe cache drive. But since you don't have direct access to that drive from File Manager, you can't just use it as a custom cache folder.

Looking forward to getting this fixed up. My 500 GB M.2 NVMe drive never has more than 30-50 GB in use, and it would be a waste to use an HDD slot for this purpose.

 

Cheers!

Dusan


image.thumb.png.c1e3dddb454a4065c0014fd1bd0e5a6a.png

On round two I think I have a winner. I partitioned and formatted the drives, created a RAID 0 array, and then put an EXT4 file system on it. I already had volumes up to 6 (4 drives, 2 NVMe), so I made this one volume 7.
After rebooting the NAS I went into Storage Manager, and it said it thinks I've imported a drive from another system and asked whether I'd like it to disassemble/re-assemble the volume to make it usable.

Before too long I'll know whether I have a drive that can be used from the GUI vs having to do everything via SSH. Looking good so far.


Dusan78

I was just about to do this as well, per an older Reddit post I saw where you mount and format the cache drive via SSH, and then it shows up as a basic disk in Storage Manager.

One concern: I saw someone mention that the files disappeared at some point, or after a reboot. Since you're already almost done with this, I wanted to ask you a favour before I go out and buy another NVMe. Could you try putting in another M.2 stick and creating another basic disk, then see if Storage Manager will let you create a mirror RAID using these two slots?

If this works then we could easily use these slots as regular M.2 NVMe disks and install Emby or whatever package we want there. That would be pretty great.

Can't wait to hear your results. If this works I'll go buy an extra NVMe tomorrow. You've also run the read/write cache setup, so you can tell me whether there's even a point in doing all this.

Cheers!

Dusan


I played a bit before I wrote the last message. I have two 1 TB Samsung NVMe drives in the box and have already tried both a mirror and a 2-disk stripe (no parity). I couldn't get Storage Manager to do anything like that for me; I had to do it all from the command line. I've done a cold power-down plus another reboot and the configuration persists, but you have to do the creation at the command line.

I did this a while back to try transcoding on it, but never tried the metadata and cache as well, which is probably what's needed for the quick-loading screens. I've got two SSH sessions open right now, each doing a copy: one for metadata and one for cache. I'm not moving the folders, just copying them, as I'll be pulling these sticks out as soon as I'm done with these tests, probably tomorrow.

I wasn't looking forward to writing out the commands to tell people how to partition/format these "drives". Would you mind sending me a PM with the site you saw this on? If the instructions are clear, it might be best if we just point everyone to those instructions for the "destructive" part.


image.thumb.png.a7610c739d48774d1f95c1361c2c5876.png

Once I made it that far I created a new folder on Volume 7 called Emby and set up a few things:

I copied my metadata to the new location. The setting lives under Library -> Advanced tab -> Metadata path; once the copy is finished I'll change the current "/var/packages/EmbyServer/var/metadata" to "/volume7/Emby/metadata":

mkdir /volume7/Emby/metadata
cp -rv /var/packages/EmbyServer/var/metadata /volume7/Emby/

Next up I copied the cache from its default location to my new location, then filled in the new location under Settings -> Cache path (which was blank): /volume7/Emby/cache

mkdir /volume7/Emby/cache
cp -rv /var/packages/EmbyServer/var/cache /volume7/Emby/
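Before repointing Emby at the new paths, it's worth confirming the copy is complete; `diff -r` exits non-zero if the trees differ. A scratch-directory sketch of the idea (the real paths from the post would go in place of `src`/`dst`):

```shell
# Scratch demo: copy a directory tree, then verify source and copy match
mkdir -p src/metadata dst
echo "poster data" > src/metadata/movie1.jpg

cp -rv src/metadata dst/

# Exits 0 (and prints nothing) only when the two trees are identical
diff -r src/metadata dst/metadata && echo "copy verified"
```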

Let me start by saying this makes a world of difference. Synology should allow this setup via the GUI, as it is far better than using the two NVMe drives for read/write caching, at least as far as Emby is concerned. The write caching may help when moving files to the box and other things like that, but for Emby there's no comparison. The next two pics tell the story. I ran the standard benchmark built into Storage Manager on a WD 14 TB drive, one of the ever-popular external drives I shucked. (BTW, don't do that; there are much better drives for the money.)

image.png.73ce5b2d2cfd373a7341eca5ce44f6c3.png

Here is one NVMe benchmarked. This is not a mirror or striped RAID, just a single stick. Notice the IOPS figures have a "K" (as in thousands) after them, and the latency is in us rather than ms, so microseconds vs milliseconds. Of course this is a fast NVMe I'm using, but still. It won't even do the throughput test. LOL

 

image.png.b793371321eaeddefafa0b45272f0792.png

I'm not going to do anything with the other stick for now. Next up for testing: putting the transcoding folder on this disk and doing a bit of testing, followed by 3 DVR recordings of MPEG-2 channels while starting to watch each channel and using FF/pause as is typical during sports.


image.png.065a1f91cb2a5e34e226a39806a0734f.png

image.png.87dcef921a3e9040a3bc0c06427ea4b9.png

image.png.a7c18cf240d6124466dea34ce98d0723.png

 

I wanted to record only MPEG-2 and not AVC channels, so after looking around at different channels, this was the highest bitrate I could find at this time of the morning.

So at this point I have the cache & metadata folders running from the NVMe, and I have set the transcoder and DVR paths to the NVMe as well.
image.png.57397333273f0caf9f2a987a8c449689.png

What is interesting is that my HDDs are still clicking away, making a racket as usual, and DSM shows something going on that I don't feel like tracking down at the moment. Could be an index or a computer backup, but it looks like this:

For the graphs coming up, BLUE is always the HDD pool and RED is the NVMe. The graphing is dumb in that it shows the total of all drives even when I've selected just the NVMe drive, so keep in mind you only care about the red after this first one.

image.png.55f9e3c29ca0a6fc1d6412fdc9dddab5.png

While the 3 recordings going to the NVMe look like this (no playback):
image.png.3fecdc968369d575fd3c3f142cfbcde3.png

3 recordings and 1 playback

image.png.6c75d8e7350c93af859d15ddf7e75a20.png

3 recordings and 2 playbacks
image.png.e8c24992d1dced52201ae1b74b7902bc.png

3 recordings and 3 playbacks
image.png.3fa8677db6b7cfc6a8504cd49679c8ab.png

Those graphs make it look like utilization is pretty high, but let's switch the view to IOPS.
I thought I had a screenshot, but it was 132 reads and 198 writes per second. Remember the benchmarks from earlier?
Here's what happens if I start to hit the drive while Emby is using it. All the red you see in the graphs above, just about touching 100%, is very misleading: you can now see how low those numbers really were once I actually saturate the disk by starting the benchmark again. You can barely even see them, as they are down near the baseline.


image.png.c37b4e4e79a3b632d83b2ef83f58ae69.png

Anyway, from what I just experienced I can say this is a much better way to make use of NVMe sticks in a Synology, specifically for Emby use! Best of all, only one NVMe is needed, not two, unless you want to stripe them and really go ZOOM.

It's really a shame this can't be set up using the GUI, without having to SSH into the box and use tools like fdisk.



Dusan78

I hope you got my PM. In regards to recording and transcoding, I believe having those on NVMe wouldn't really offer much benefit, as you're writing sequential streams of less than 24 Mbit/s to disc, which is only 3 MB/s, and that wouldn't be a problem for any HDD, just like reading sequentially from the movie library.

NVMe's greatest benefit is being able to perform 10K+ IOPS and access 20K small thumbnail files almost instantly, making the interface work without any delay or loading. But I don't really use the recording features, so maybe real-world experience is different.

 


Except that's not true, because of the nature of IOPS. What becomes a huge factor is wait time, i.e. the amount of time the CPU sits idle waiting for the IO subsystem to return the results it needs to continue, because the IOPS are low.

Think of storage performance as bandwidth * IOPS.

Bandwidth is how much data can be moved per transaction. IOPS is the number of transactions that can happen per second.

Just to give some perspective, let's say you have a super-fast HDD that spins at 15K RPM. 15,000 / 60 = 250, which means that for a new disc you can access any area of the platter (random read/write) at most 250 times per second. That's the theoretical limit.
15,000 RPM = 250 IOPS
10,000 RPM = 166 IOPS
7,200 RPM = 120 IOPS
5,400 RPM = 90 IOPS
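Those figures are just RPM divided by 60, i.e. one random access per revolution at best; a quick loop reproduces the table:

```shell
# Theoretical max random IOPS for a spinning disk: one access per revolution
for rpm in 15000 10000 7200 5400; do
  echo "${rpm} RPM -> $(( rpm / 60 )) IOPS"
done
```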

If the stars line up correctly you can write a full sector or two of data for each revolution of a spinning disk, but you can never do more IOPS than the rotation of the disk allows. If you have 512-byte sectors and write a 256-byte operation, you get 256 bytes per IOP. If you have a 1K file and can write two sectors' worth of data in one rotation, you get 1K of bandwidth per IOP.

With 4K sectors the above works the same, but you may be able to write 4 or 5 sectors of data if continuous blocks are available before the disc completes its rotation, which could be 16K to 20K of bandwidth per IOP.

You can see how strongly a drive's RPM caps the number of transactions that can be performed per second. If you build a RAID from 5 or 10 drives, the random IOPS are the same as a single disc, so nothing is gained there: bandwidth goes up as you add discs, but the number of transactions does not change. You could stripe 10 HDDs together with no parity at all and do a write operation where each disc writes 1/10th of the data, but the actual write each disc performs is still limited by that disc's IOPS.
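To put illustrative numbers on that (the per-disc figures here are hypothetical, typical of a 7,200 RPM drive): striping multiplies sequential bandwidth by the disc count, while random IOPS stay pinned at roughly a single disc's figure:

```shell
# Hypothetical per-disc figures for a 7200 RPM HDD
disks=10
per_disk_mb_s=200    # sequential bandwidth per disc
per_disk_iops=120    # random IOPS per disc

# Striping scales bandwidth with disc count, but not random IOPS
echo "10-disc stripe: $(( disks * per_disk_mb_s )) MB/s sequential, ~${per_disk_iops} random IOPS"
```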

Unfortunately the Synology benchmarking tool doesn't show true bandwidth and IOPS per second, but some other "reference" figures it has come up with that it calls bandwidth and IOPS, which is not correct. But you can at least compare apples to apples and oranges to oranges this way, since they are calculated the same way.

The NVMe has orders-of-magnitude better IOPS and far lower latency. If you were to look at real benchmarks that calculate IOPS correctly, you would see that any type of HDD-based RAID with 5,400 RPM drives is going to turn in real-world numbers, in ideal situations, of around 75. The NVMe will easily do 1,000+.

That 75 number covers anything and everything being written to disk. If you write a log entry, if Synology writes a "file touched" entry, if some other service is using the disc, etc., the number available to Emby could be 50 or lower, while the NVMe is still at 1K+.

This is why I always recommend putting the DVR and transcode folders on some type of SSD rather than a spinning-rust device: it completely removes any IOPS-related delay. As you point out, the bandwidth required isn't high. That's why an SSD attached via USB3 is still very effective even if its bandwidth number is much lower; the USB3 SSD still has very high IOPS, which is the key.

Earlier I mentioned wait times, i.e. the CPU having to wait for the previous IO to complete. When this happens you have lost at minimum 1 IOP, since you have nothing ready to give the storage system. That wait could just as easily cost you 5 to 10 IOPS' worth of time, since you likely have 3 or 4 threads competing for these IOPS. On Windows and standard Linux/Unix you can easily look at this wait time to gauge your CPU-to-storage optimization, but on every NAS I've looked at, the commands to view it aren't present. As an example, on Windows you want a wait time of 0.5 or lower; at a value of 5, performance is about half of what it should be, and you're wasting at least half your CPU, as the storage system is a bottleneck.
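Where a standard Linux shell is available, a crude proxy for this wait time is the iowait counter in /proc/stat (the sixth field of the aggregate cpu line); tools like iostat and top read these same counters:

```shell
# Cumulative iowait ticks since boot (Linux). If this number grows quickly
# under load, the CPU is stalling while it waits on the storage subsystem.
awk '/^cpu /{print "iowait ticks:", $6}' /proc/stat
```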

Hopefully that explanation helps you understand how much performance can change when switching to some type of SSD, even when the amount of data involved doesn't seem high.


Dusan78

That actually makes total sense. Thanks for the detailed explanation.

By the way, I did a benchmark with my Samsung 970 Evo NVMe. My numbers were way better than your 980 drive. Why would that be?

 

Speedtest.png


bjjones

And I followed up and benchmarked my 250 GB drive just to see how it falls speed-wise vs the newer Samsungs; it's closer to the 970 above.

The latency seems high on @cayars' 980. I'm surprised the 1 TB size makes that much of a difference.

Screenshot 2022-01-13 153605.jpg


It's the 980 I was using, but not the 980 Pro, so it's more of a commodity NVMe than the highly coveted Pro version.
I wouldn't waste a 980 Pro in the Synology box, as it doesn't have the PCIe lanes to take advantage of it.

Keep in mind I mentioned my system was already under load when I was doing these tests, which kills benchmark results on something like an NVMe. I was almost going to wait and benchmark the NVMe when the system wasn't getting hit, but then thought it's better to show how much this helps Emby even with lots of other things going on. It hit me this morning that I was running Syncthing in the background, transferring all the files on my Synology over to a new ZFS storage pool, which had both 1G NICs fully saturated.

So real world experience should be far better!

 


usnscpo

Could I trouble someone for a detailed how-to? Although I'm semi-comfortable with SSH and the command line, I don't know much about Linux or the commands to suss this out. Many thanks!


Dusan78
38 minutes ago, usnscpo said:

Could I trouble someone for a detailed how-to? Although I'm semi-comfortable with SSH and the command line, I don't know much about Linux or the commands to suss this out. Many thanks!

There should be a guide on here after some more testing, but if you are impatient and would like to experiment yourself, you can just do a Google search for:

"Use NVME SSD as storage volume instead of cache in DS918", and it should be the first Reddit result. Just make sure, if you follow it and have DSM 7, that you click "Online Assemble" in Storage Manager after the reboot. Since the NAS wasn't designed to be used like this, I'm also not sure what will happen to that volume after the DSM 7.0.1 Update 2.

Always be prepared to lose the data on that disk and have a backup in place.

 


1 hour ago, usnscpo said:

Could I trouble someone for a detailed how-to? Although I'm semi-comfortable with SSH and the command line, I don't know much about Linux or the commands to suss this out. Many thanks!

I need to write this up, which I'll try to do this weekend. I'll make sure to have you covered if you can answer:

How many NVMe drives do you currently have?
Are they presently being used as cache?
How many discs do you have, and what array format are you using?
Just in case something goes wrong, do you have a proper backup of your system?

