
Change owner of server, start new server on new system




Hi Guys,

I have a question regarding Emby Server accounts and moving servers.

I'm currently running Emby Server on my Synology 1019+ and all runs smoothly.

The Synology is on the latest DSM 7, and Emby is on the latest version, so all good.

Soon I will have my new NAS ready, which will run on TrueNAS.

So my plan now is that a friend takes over my Synology as is, meaning with the Emby server and all media (more or less plug and play).

I will then install Emby again on my new TrueNAS server and start from scratch. I will copy the needed media files over to the new server and start all over.

The question now is: how do I do that?

My thinking is like this :

1. Copy all necessary files from the Synology to the new TrueNAS server.

2. My friend buys an Emby Premiere in his own name and email address.

3. Somehow change the account so that my friend is now the owner of my existing Emby server, with all existing media and setup, on the Synology.

4. I install Emby Server on my new TrueNAS system and log in with my normal credentials, where I have my Emby Premiere.

I would like to hear from you experts whether this is possible and how exactly it can be done.


Hi, take this as just my personal opinion, and a bit biased. I've been doing a lot of testing of Emby Server on different OSes, from Windows to Android to Linux to FreeBSD/TrueNAS CORE, as well as on different NAS boxes.

While TrueNAS CORE has strong storage capabilities, it would be my last choice, with FreeBSD (which it's built on) a close runner-up. I don't want to offend anyone currently running it, but in today's world there are better choices in general. It's kind of a mess with its plugin/jail subsystem, which gets little support these days as far as updates or needed fixes. For months there have been SSL/TLS trust errors for containers using Mono, and it's very hard to fix because the tools they ship use mismatched versions. You need to downgrade some tools to use them to update certs, then replace them afterwards, etc. That's just bad and hacky. The TrueNAS plugin/jail feature, while hot when it came out, is nothing compared to what you can do with Docker and the tools that build on it. The hypervisor used in TrueNAS CORE isn't up to par compared to others available today. Basically, TrueNAS CORE is showing its age and hasn't kept up technology-wise with other platforms (storage aside).

If you're new to FreeBSD, coming from Windows or a Linux distro, you will likely spend a great deal of time Googling when the "most simple" command lines don't work or you hit quirks in FreeBSD. Part of the problem is the similarity of utility names with Linux (since they are both Unix-style OSes), so a search, even with "freebsd" in it, will still probably turn up 50-to-1 results specific to Linux vs what you want. It can get frustrating. Here's an example. I'm no slouch with networking or different protocols and services. Testing Emby Server yesterday on a few Linux OSes, I mounted some libraries for testing using both CIFS/Samba and NFS. Five to ten seconds later I had a mount to use and could set up some libraries for testing. The same thing on FreeBSD turned into a five-hour exercise of searching, downloading things, and adjusting permissions, files, and OS config files. If I changed OS files and things didn't work, I reverted the changes via snapshots so I didn't go down a rabbit hole. But five hours later I finally gave up and plugged in a 4 TB external USB drive with files to test Emby Server. You can see where some of my bias comes from. :)
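For reference, on a typical Linux distro those mounts really are one-liners. A rough sketch; the host name, share path, and credentials below are made-up placeholders:

```shell
# Mount a NAS media share on Linux. The server name, share paths
# and credentials here are hypothetical; substitute your own.
sudo mkdir -p /mnt/media

# CIFS/Samba (requires the cifs-utils package):
sudo mount -t cifs //nas.local/media /mnt/media \
    -o username=emby,password=secret,ro

# Or NFS (requires nfs-common / nfs-utils):
sudo mount -t nfs nas.local:/volume1/media /mnt/media
```

Either way the share is then browsable at /mnt/media and can be added as an Emby library folder.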

Emby Server is probably the least of your worries. Here are some key questions I would ask before selecting an OS to run your NAS. Is this primarily for Emby and media only? Or do you want to run some type of plugin/Docker environment? If yes, do you plan on only a couple of containers, or can you see yourself with 20 to 30 apps? Perhaps you want to start off running some on the same machine but would like to move them to another later. Does this machine need to run any VMs at all, for any purpose?

For example, if you want to run a couple of VMs you would be best off NOT installing a NAS OS directly on the bare metal, but instead installing a hypervisor like Proxmox or KVM. You could then run your NAS in a VM with direct access to the hardware. This is a great solution, as you can partition the server by function: the basic NAS in one machine, Docker in another, a test VM for trying stuff, a VM to run Nextcloud all by itself so you can tune it for performance if you want something like that, maybe even a dedicated VM strictly for Emby vs installing it directly in the NAS container. The reason I mention Proxmox is that it's pretty easy to learn, and from one UI you can manage a dozen computers running different things, moving VMs easily from one machine to another. A setup like this is also great if you are the more technical type and want to run a software-based router/firewall/WiFi manager with full access control, intrusion detection, and DNS ad/spam busting, and want additional functionality such as an OpenVPN or WireGuard VPN.

Proxmox isn't for a Celeron or i3, but if you are planning on building a new i7/i9 with QuickSync built in, and maybe even an Nvidia GPU, this is the way to go, as it will give you so much flexibility. Done correctly (and it's not that expensive) it can do the job of 5 or 6 traditional computers without the headaches. Even if you never get past installing the one VM running the NAS, that won't be a bad thing.

Now it's pretty obvious where I stand on TrueNAS CORE, so let's talk about its new sibling, TrueNAS SCALE.

This is a completely different OS, with really the only thing in common being the name "TrueNAS". :) It's built on a Linux foundation with many high-end enterprise and cloud features built in. With it there is no need for Proxmox either. It's technically still in beta, but that mostly concerns large clusters. If you're using it on a single computer, or three in the house, it should be fine.

The beauty of TrueNAS SCALE is that it allows you to combine both storage and compute power from multiple machines working as a cluster. It has the typical TrueNAS GUI and functionality, but with a different underlying OS and feature set. It's a game changer. With a few of these machines you could essentially have your own little AWS in-house if you wanted, or just a "simple NAS", as it's that flexible.

For most home use, one of the first things I would use as criteria is the storage subsystem and its requirements. Do they meet your own requirements? For example, do you need ZFS storage, or will Btrfs or exFAT work? This has to do with how you set up your storage and how flexible it's going to be. Do you need LVM or iSCSI support? Then there is the big question for home use: what are the requirements for adding storage? This is a killer many people overlook, but it's so important!

Let's break that down into another way of asking the question. Do you want to start with one big 16 TB disk and add additional disks as needed, or are you OK starting with four 16 TB drives? When you use that storage pool up, are you going to be OK with adding another four drives at one time? It's one thing to bulk-buy drives at the start, but quite another when you always have to do that.

How about mismatched disk sizes? Do you already have drives lying around that are different sizes? E.g. you have some 12 TB and some 16 TB drives you want to use. Some OSes will let you do this and others won't. Some OSes allow you to pull a single disk and replace it with a bigger one, so you can upgrade a drive at a time if you like. Synology/QNAP, for example, let you add drives and swap them out if you built an SHR RAID. On my Synology 920+ I have a single 12 TB drive and three 14 TB drives.

It also depends on whether you want parity striping across your drives for protection, or just a big JBOD, which has no protection but gives you the total space of the drives without a reduction from the file system (Btrfs/ZFS) and, of course, from the parity itself.

So these are things you need to think about for your storage requirements, which will dictate which OSes you can or can't easily use (or likely shouldn't). It's actually what I would consider a critical question, whose answer will point to the NAS platforms likely to work best for you. Unraid is quite popular because you can add a drive as you go. It doesn't have stellar drive performance, because your files are stored on a single disk without the striping that can double to quadruple access speeds. But for a pure media server with only a few concurrent users it might be exactly what you're looking for in a NAS.

There is yet another NAS software package you should consider that I haven't mentioned, which sort of sits in the middle of all of this and could be just what you're looking for: OpenMediaVault.

Of course, only you can decide what's best for you, your environment, and your needs, but I would answer a few of those storage questions, as well as what you want/need to run, and then see which software fits your use case and budget (especially storage upgrades) best.

Set up a few virtual machines, install the software, and play with them. You can create four or so small VM disks to practice storage pool management as well.
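Creating those practice disks is cheap. A possible sketch (file names and sizes are arbitrary):

```shell
# Create four small sparse disk images for practicing storage-pool
# setup inside a VM. Sparse files take almost no real space until
# data is actually written to them.
for i in 1 2 3 4; do
    truncate -s 2G "disk$i.img"
done

# Apparent size vs actual on-disk usage:
ls -lh disk*.img
du -h disk1.img

# With QEMU/KVM you could instead use, e.g.:
#   qemu-img create -f qcow2 disk1.qcow2 2G
```

Attach the images to your test VM as extra virtual disks and you can build, break, and expand pools without risking real hardware.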

Hope I didn't bore or scare you, but I'd hate for you to have regrets 2 or 3 months down the road, when converting software would be much harder.



Hi Cayars,

Thanks a lot for your well explained comments, I really like that.

So far I haven't chosen a software package yet. I was also looking at OpenMediaVault, and it looks maybe a little simpler.

Since I'm still new to all this, the simpler the better.

The main target here is a media server, with Emby and Plex installed to serve the household with media. This means that performance and "some safety" are the priority.

Later I will add two more SSDs and mirror them for backing up some work-related CAD/CAM files, where safety is the main priority.

The hardware I have so far is:

Case: Fractal Design Define 7

MB: Asus WS C246 PRO/SE

CPU: Intel Xeon E-2288G

RAM: Samsung ECC UDIMM 64GB

PSU: Corsair HX750

System drive: Samsung 970 Evo Plus 250GB

Storage drives: 3x WD Red Plus 14TB

Since transcoding will be very seldom or never, I have no plan to add a GPU at the moment; the CPU is still powerful enough, with onboard graphics to handle it in case it's needed a few times.

I will be the only user; maybe later a neighbor will be added, but that's it.

So maybe I should give OpenMediaVault a try, or what do you think?




You have some nice hardware there to build out with.
That CPU has a built-in GPU, the Intel UHD Graphics P630, which will do QuickSync transcoding in hardware.

You mentioned "some safety", so with that in mind I'm going to recommend picking up one additional WD Red Plus 14 TB drive, for a total of four drives starting out. With this setup you will be able to do some form of RAID, which basically helps protect your data in case of a single drive failure. At a high level you get roughly the storage of three drives when starting with four drives this way: 42 TB raw, or roughly 38 TB of usable storage after partitioning, formatting, and parity.

With that hardware I would look at TrueNAS SCALE or OpenMediaVault.

Try setting them up in VMs to get a feel for the way they each work. You could set up four small virtual disks to practice setting up the storage system, as well as explore the UI and feature sets of both. Then try to expand the storage, to get a real-world, practical example of how you would need to do that.

You really can't go wrong with either of those two OSes, and you will be very happy when you have this build complete.



Hi Cayars,

Thanks again for you inputs.

My situation is that a friend will take over my Synology, all media included, so I have to be able to copy all files from the Synology to the new NAS, which means I need around 25 TB of space from the beginning.

So theoretically I could, like you said, add one more WD Red Plus 14 TB and then either set up RAID with protection against a single drive failure, or even go full safety and run a mirror.

Actually, I think I could also do that with the existing three WD drives: run with protection against a single drive failure, which would then leave me 28 TB of space.

The question would then be: is it possible to extend later on if more space is needed?

Since TrueNAS SCALE is still in beta, I think I will go for OpenMediaVault.








TrueNAS SCALE is only in beta because of the clustering features. Short of trying to connect six or more nodes together, it's solid. Even two and three nodes are working really well. On a single computer you won't really hit the "beta" parts.

So I wouldn't let that scare you.

I don't remember OMV's storage requirements exactly off the top of my head, but I'm not sure you can expand the storage pool with a single disk on OMV; it's pretty basic as far as storage technology goes, as it uses mdadm RAID. I myself have lots of storage in different formats (on purpose), but for a NAS-type device I don't think you can do better than Btrfs for general use. I have quite a bit of storage in this format. No one type of storage system is best for all uses, so you build to fit the environment.
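For context, the mdadm-style RAID that OMV builds on looks roughly like this under the hood (the device names are hypothetical, and these commands destroy any data on the listed disks):

```shell
# Create a 4-disk RAID5 array with mdadm, then put a filesystem on it.
# /dev/sd[b-e] are hypothetical device names; all data on them is lost.
sudo mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sd[b-e]
sudo mkfs.ext4 /dev/md0

# Growing an mdadm array later means adding a disk and reshaping,
# which is a long rebuild rather than Btrfs-style instant expansion:
sudo mdadm --add /dev/md0 /dev/sdf
sudo mdadm --grow /dev/md0 --raid-devices=5
```

That reshape step is the practical difference: it works, but it rewrites the whole array, whereas Btrfs just balances new devices in.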

But for general media storage, Btrfs is what I use. Have a read of https://www.electronicdesign.com/industrial-automation/article/21804944/whats-the-difference-between-linux-ext-xfs-and-btrfs-filesystems; it's a pretty decent read.

I do remember how much data you said needed to be copied over, which is in part why I mentioned adding the 4th drive. With only three drives total and one for parity you would have nearly filled it, and then have to start thinking about expanding. With the additional drive I know you'll be safe for a while regardless of which OS you go with, so it was a safe recommendation. :)

Storage is the heart and soul of the system, so I'd spend a bit of time making sure you understand, at least at a high level, what's different about the options and how that affects expansion.

I personally would not think twice about building a NAS on top of a standard Linux distribution like Fedora (one of the more state-of-the-art, storage-wise) and adding your own GUI. There are a lot of good management programs, monitoring solutions, and graphical utilities that run from a web browser that could easily be used. Plus you could still have a traditional GUI desktop and all its utilities when needed for those special jobs. Just because a Linux distro has a GUI doesn't mean you have to keep it active all the time.

So that's something else to think about.



TrueNAS SCALE looks interesting, I agree, but since I'm new to all this it looks a little scary at the same time.

Furthermore, I think there is not much learning material out there so far, so maybe it's more for experienced users.

On the other hand, if I'm only going to use this server as a media server and for some backups, then it can't be that difficult to set up, I would guess.

It would, as far as I can read, give me more possibilities regarding the storage itself, safety, expanding, and more control of all processes and activities, but at the same time it can also give me a lot of headaches, I'm afraid 😁

I'm coming from different Synology and QNAP boxes, where there isn't much you can do wrong, so it's hard for me to imagine this step up.

I think that maybe your suggestion of adding a 4th disk to my system and using OMV is the simpler way for a user like me, and at the same time enough for my needs, I would say.

I'm still waiting on some parts for my new server, so I still have time to think it over and make a final decision 😁

I'm on Windows 10 Pro here; should I just use that to run VMs for testing these OSes, or would you suggest a better app for that?


If this is mainly just to run Emby and have good, reliable storage, it doesn't really sound like you need a NAS per se. If you've had a Synology, have added, removed, or upgraded disks, and know how relatively painless that is, and you want that plus the Btrfs I mentioned earlier, take a look at Garuda Linux https://garudalinux.org/index.html which is going to love your hardware.

Garuda uses BTRFS as the default filesystem with zstd compression.  They sum it up nicely with this: "BTRFS is a modern, Copy-on-Write (CoW) filesystem for Linux, aimed at implementing advanced features while also focusing on fault tolerance, repair and easy administration. We use automatic snapshots out of the box."

It's built on the Linux-zen kernel and optimized for multimedia and gaming. What you might find appealing as well is that they have several GUI desktops, including KDE, Xfce, GNOME, LXQt-kwin, Wayfire, Qtile, BSPWM, i3wm, and Sway. KDE Dr460nized is very Mac-like, while Xfce is lightweight and still nice-looking without the flash. You can see some of them here: https://garudalinux.org/downloads.html

Two things stand out about this Linux besides what's already mentioned. They have a nice package center with pre-compiled apps, so you don't have to build them like on many typical Linux setups. That saves aggravation and time, and you end up with fewer stray files on the file system. It's kind of like using Synology Package Center to install things. They also have good GUIs for system utilities, and especially for the storage setup parts. Because they use Btrfs you have flexible expansion as well, so you can add drives at will, and they can be mismatched brands and sizes. Also because of Btrfs they use snapshot technology and take automatic snapshots before updates/upgrades, so if something ever goes wrong you can roll back from the boot menu!
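To give an idea of that flexibility, adding a mismatched drive to an existing Btrfs volume is roughly this (the device name and mount point are hypothetical):

```shell
# Add a new drive of any size to an existing Btrfs filesystem
# mounted at /srv/media, then rebalance data across all devices.
# /dev/sdx is a hypothetical device; adjust to your system.
sudo btrfs device add /dev/sdx /srv/media
sudo btrfs balance start /srv/media

# Check how space is now spread across the devices:
sudo btrfs filesystem usage /srv/media
```

No array rebuild, no matching drive sizes; the balance just spreads existing data over the new device in the background.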

It's really hard not to like that, and Emby Server will run fast on it as well. Like I mentioned before, with Docker installed and Portainer (a nice web interface) you can easily pick other software to install as well. Typically Docker apps are geared to the web vs the desktop. A few selected apps installed this way and you have all the goodness of a NAS, with a true GUI underneath as well. If you can install Windows on a PC, you can install Garuda too.

This would actually be a much stronger solution than OMV, at least as far as your data/media is concerned. It also allows you to run standard Linux apps vs trying to track down special versions that you can install on a typical NAS with a crippled/limited Linux kernel.




Hi Cayars,

That looks really great. I took a short look at the homepage; I never heard about it before.

But now you are also confusing me, I must say 🤣🤣 (meant in a good way, of course).

I have a few questions regarding that.

Garuda is an operating system like Windows, as I understand it, so would this be the same as a second computer then?

1. Will this operate as a stand-alone computer?

2. Will this still need its own monitor, keyboard, and mouse?

3. Will it be possible to have the storage visible on my main Windows machine, to transfer files between the two machines?

4. Will it be possible to use a web interface from my main Windows machine, like with other NAS operating systems?

5. In general, can this server operate similarly to a Synology/QNAP in daily use?

If it is possible, then it could be perfect, because the first look at Garuda is amazing.

I will need a little more of all this explained if possible; it seems you are the specialist in this field too, and I really appreciate that.




So here's the deal.
A NAS is typically a Linux platform with many stock Linux commands removed and usually replaced with custom web pages to do the same things (but their way). A NAS like Synology, QNAP, WD, etc. isn't going to have a "desktop GUI" but instead uses a web-based GUI. Most of these web-based GUIs are modified versions of open-source/public-domain software.

Linux and Windows are mainly considered desktop operating systems when they have a GUI, but both can also run headless, requiring management tools or command-line operation.

Underneath the "GUI" is a computer at heart, regardless of what the box or NAS looks like. It may not have keyboard, mouse, or monitor (VGA/HDMI) ports, but it's still a computer.

What you're building is a true computer in every sense, which can optionally have a mouse, keyboard, and monitor(s) hooked up to it. It can run Windows, Linux, or a NAS OS (a stripped-down Linux with a web UI on top). Ready for more confusion? :)

It could also not run an OS directly at all, but instead run what's known as a hypervisor. Hyper-V, VMware, and VirtualBox are all examples of these, typically with a desktop GUI. Then there are hypervisors made specifically with web GUIs, such as Proxmox Virtual Environment https://www.proxmox.com/en/proxmox-ve. PS: look at the other products they have on the home page as well, like backup.

So you could (I would) install PVE directly on that beast of a computer you are building. You then use it to create virtual environments to run something like Garuda in. You simply pass through the actual HDDs used for the bulk of your storage, so Garuda has direct (and the only) access to those drives. You still have a desktop GUI if needed, which could be used with an actual keyboard, mouse, and monitor, or virtually through Proxmox or any other remote-control program you like. Besides the desktop GUI running in the Garuda VM, you could install numerous different web-based admin packages, like Cockpit, which is basically a NAS GUI. :) Red Hat uses this for their "NAS" OS: https://www.redhat.com/sysadmin/intro-cockpit
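As a rough sketch, that disk passthrough can be done from the Proxmox host shell with `qm set` (the VM ID 100 and the disk IDs below are made up; use the real ones from your own /dev/disk/by-id):

```shell
# Pass two physical drives through to VM 100 so the guest
# (e.g. Garuda) owns them directly. Use stable /dev/disk/by-id
# paths rather than /dev/sdX names, which can change between boots.
qm set 100 -scsi1 /dev/disk/by-id/ata-WDC_WD140EFGX-68B0GN0_AAAA1111
qm set 100 -scsi2 /dev/disk/by-id/ata-WDC_WD140EFGX-68B0GN0_BBBB2222
```

The guest then sees the drives as its own SCSI disks and can format or pool them however it likes.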

You can see and play with it in the virtual machines they use for demos: https://lab.redhat.com/installing-software-yum or https://lab.redhat.com/imagebuilder. They get hammered and sometimes the demos are slow, so don't let that fool you.

You could run everything using just this, but I would install it in a virtual machine under Proxmox. This way you can use Garuda with Cockpit installed almost strictly for its media and backup features, without installing other things in that VM. Keep it clean. The only exception I would probably make is also installing Emby Server here, so Emby has direct access to the media.

Now, via Proxmox, you could spin up a couple of other VMs. One could run a super lightweight Linux installation with Docker installed, as well as Portainer (a kick-ass web-based Docker UI). Then you can install tons of different Docker apps with great control over them.
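For illustration, standing up Portainer CE on that Docker VM is typically a couple of commands (sketched from Portainer's standard standalone install; adjust ports and volume names to taste):

```shell
# Create a volume for Portainer's data, then run Portainer CE,
# binding the local Docker socket so it can manage containers.
docker volume create portainer_data
docker run -d --name portainer --restart=always \
    -p 9443:9443 \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v portainer_data:/data \
    portainer/portainer-ce:latest
```

After that, the web UI is reachable at https://<vm-ip>:9443 and every further app is a few clicks.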

You could spin up another VM to run Nextcloud directly, if having a virtual web office and an MS Office-compatible environment is something you would like. A combination of Nextcloud and Collabora Online or ONLYOFFICE goes hand in hand for this VM. https://nextcloud.com/athome/ https://nextcloud.com/collaboraonline/ https://nextcloud.com/onlyoffice/


Not enough?

How about another VM via Proxmox running Mistborn https://gitlab.com/cyber5k/mistborn? You really have to check this one out to understand it, but it's a package of many other open-source tools geared at being super secure. Even in your own house you can't access it without using WireGuard. WireGuard is a state-of-the-art private VPN that is fast! So right off the bat you have a home VPN solution to access anything when remote. Mistborn also allows you to install modules that many people use to build network security, which is like having a security team in-house. The package really allows you to tailor it to your needs. It can run Nextcloud as well, but I'd set that up in its own VM.

In the Docker VM already mentioned, there are so many choices, but it would be an ideal place to run a reverse proxy, network monitoring (Checkmk, Zabbix, Netdata), etc.

Every piece of software I've mentioned is open source, with some companies selling the products bundled with support.
Hopefully you get the idea: this can be a simple setup like OMV, or taken to a whole new level that no typical NAS can touch, like running your own cloud service.

It's all up to you what you want it to be. I hope I don't give you option anxiety, but the computer you're building gives you so many options it would be a sin not to at least think about them. :)



Hi Cayars,

Really great stuff you provide me here.

Of course I get even more confused by reading all this 🤣, but by reading more and more about these things on the different homepages and around the internet, I start to understand more of what you are trying to tell me.

For sure I can see the flexibility in creating a lot of VMs like you would do using Proxmox, but I don't see this as something I would use that much.

I'm more into things like the Garuda Linux you told me about, even though I've never used any kind of Linux before. I'm getting a little hooked on it the more I read.

Since my new server will mainly be a media server, it looks like a good choice. Furthermore, I'm also getting a little hooked on the new Fedora 35; this also looks like a winner in my case. What do you think about that? Do you have any experience with that OS?

Still keeping in mind that the main focus is media delivery for the household with high performance, using Emby and Plex. Furthermore, there will be a need for a kind of "File Station", similar to the one we know from Synology's DSM, where I can access my files from outside, send friends a download link, and so on.

Everything else that is possible with these OSes is something I would like to play around with, to check it out and gain experience.


Looks like my objective of getting you interested in exploring options is working. :)

Fedora is a great and leading Linux OS that many others are derived from. This is overly simplified, but generally speaking they are all the same. Different distributions of Linux choose different core packages to make up their version. They may pre-tune the OS for SQL database use, for gaming, or for storage and high IO throughput. These are all things that can be tuned on any distro. It's mostly the same for the desktop; you see Garuda offers 4 or 5 of them to choose from. You should be able to load them on other distros as well.

Each distro typically has its own package manager for point-and-click installation of apps. Some require compiling, some have pre-built executables. That makes it easy to quickly install things and keeps you from having to drop to a command line and run a "sudo apt install XXX" command.

Different flavors of distros typically handle updates differently as well.  Some put out daily updates, some weekly, monthly or quarterly.

The beauty of Linux is that you can add/remove things to make it your own. You could start with a "thin" Linux and install things as needed (no bloat), or install a thick Linux that has many things already installed and set up as well as they can be. Because Linux is so popular, people have far different needs. I, for example, use both: I have a robust desktop, but tend to use a lot more of the thin/stripped Linux images in Docker or containers, adding only what's needed. This way my Docker images need fewer resources, which adds up when you have 20 or more containers running.

As an example of what I was talking about, being able to put your own Linux together:

Garuda Linux is a pretty new distro tuned for great performance (so it's not a good choice on older hardware or for VM use), based on Arch, using the Btrfs file system as its default. It uses the Calamares installer for desktop selection and has many custom desktops. For example, many people really like the heavily customized KDE Plasma with a dark, neon look. It's a very modern Linux with an attractive feature set that uses rolling updates. Its Ultimate edition with the Dr460nized desktop is very lavish and will give any other OS a run for its money, GUI-wise.

If you are interested in what some of the differences are in general between distros, here's a decent read comparing Arch to others.

You asked if I like Fedora. I happen to be using Red Hat right now, which is a Fedora-based distro. But don't read anything into that, as I have a bunch of different Linux distros set up as well. I have and use a lot of different technologies, and typically pick the best tool for the job. :)



Regarding Garuda Linux:

You said one could start with a "thin" Linux and install things as needed (no bloat).

Which ones are you referring to?



It would likely be different for desktop vs server, or GUI vs no GUI, but for a list of thin desktop clients this does a good job:

Typically a thin or lightweight distro is used for Docker, containers, or virtual machines where it might run only one or two services. This way you only need to install the dependencies the app needs, and you keep it minimal so it uses fewer resources. But you can certainly pick a distro you really like and configure it the way you want, just like the guys behind Garuda did.

PS: Garuda would be a thick release for sure, but for a starting OS that's a good thing, because you will have a GUI with lots of configuration options. As an example, if you wanted to do something simple, like setting the network card to use a static IP vs DHCP and keeping those settings on the next boot, in Garuda it's as simple as (if not easier than) doing it on Windows. Try doing that on a headless server and the procedure is going to be wildly different depending on the distro and whether it's a systemd-type system or not. You'll need to run a couple of commands as well as modify system files.
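As an illustration of the headless case, on a distro using systemd-networkd a static IP is a small unit file; a sketch (the interface name and addresses are hypothetical):

```shell
# Configure a static IP on a headless systemd-networkd system.
# "enp3s0" and the 192.168.1.x addresses are example values.
cat <<'EOF' | sudo tee /etc/systemd/network/20-wired.network
[Match]
Name=enp3s0

[Network]
Address=192.168.1.50/24
Gateway=192.168.1.1
DNS=192.168.1.1
EOF

sudo systemctl restart systemd-networkd
```

On a distro using NetworkManager, netplan, or old-style ifupdown the procedure is entirely different, which is exactly the point being made above.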

To get a feel for the different distros and their package managers, you really just need to install and play. If your computer has the ability to run virtual machines, you can do it right there. You probably won't want to run an advanced file system like ZFS or Btrfs in a virtual machine, however, as it will hurt performance; it's still fine for playing, but might be a touch sluggish.

If you do that, you will likely gravitate to the one or two you like the most.



Hi Cayars,

Since I'm still waiting on some parts for my server, I've spent some more time reading about all your different suggestions.

I have decided to go for Garuda Linux on my new server. After this decision I have been reading all that I can find around the internet about it, and since I'm new to Linux and to the Btrfs file system, this of course brings up a lot of new questions, especially about Btrfs and its setup.

So I would like to ask you, as the specialist, some of my questions and share some of my own thinking.

I'm a little confused about how to start the Btrfs setup, so my questions relate to that and to how I would actually like the server to act when finished.

Should Btrfs be installed on the system disk only at first setup, or should it include all available disks in the server from the start, with the different RAIDs set up later?

What I have in mind, and what hard drives I have available, is the following:

1 X Samsung 970 EVO Plus M.2 NVME 250GB  (System disc for operating system)

3 X WD Red Plus 14TB  (Storage for all Media)

 - These 3 drives will be the main tank/pool and will hold all media for Emby and Plex, could run in RAID 5, and must be visible and available on my Windows machine.

1 X WD Red 4TB (Temporary Working Disc).

 - This drive will act as a temporary location for files that I and others can download from. We are not talking about creating additional users, but something File Station-like.

   That means if a friend asks for a movie or a music album, I will put it here and send them a download link or give permission to let others download these files.

   This drive doesn't need any safety or backup.

3 X WD Blue SSDs 1TB (Backup and Metadata location).

 - Disc 1 could be for full system backup and snapshots (if that is possible and gives any advantage).

 - Discs 2 & 3 could hold all metadata from Emby and Plex and run in RAID 1 (if that is possible and gives any advantage); must be visible and available on my Windows machine.

All this is just my thinking; I don't know if and how it would be possible, and I also don't know if it's a good way to go.

I would like to hear what you think about it all.



I've been trying to think of the best way to answer this, as it doesn't need to be this complex.
Let me try to answer it over the weekend.

But for now I just wanted to say something so you didn't think I forgot about you. :)



Hi Cayars,

Don't spend too much more effort on this setup.

I will try out some different OS and see where I will end up, then I think more questions will show up 😁😁

I'm still reading a lot about the different suggestions, but doing so just makes me more sure one day and less sure the next,

so like you already suggested, try it out.

After reading a lot about Garuda Linux I still went back to OpenMediaVault to try to compare the two.

I looked in the forums to see what kind of issues people have with each, then compared those issues between the two systems,

so right now I'm a little more on the OMV side again, since it seems to fit my needs and knowledge better, but you know how fast that can change 🤣🤣

My plan now is to try them out directly on my NAS, not in a VM, and then see how it works for me.

I still have my Synology running, so it's actually not a big deal to copy over some stuff and try out the different OSes.


You want your main storage set up correctly from the start, regardless of whether it's a typical RAID, a Btrfs or a ZFS storage pool.  This will be the bulk of the storage on the machine and what you want to design around.  The more drives you have for this starting out, the better.  For example, if you only have 3 drives and one of them is parity of some kind, you lose 1/3 of your storage to overhead.  With just one more drive it's only 1/4 overhead. Most of the time, 4 or 5 drives is an optimal starting size, but it really depends on the OS and the format you go with.  For example, with ZFS you might want to stop at 4 drives, because you'll likely have to grow in increments of 4 drives going forward.  If the file system allows adding one drive at a time this matters far less, but you will still want to think long and hard about whether you want something similar to RAID 5 or RAID 6; that will be your protection level. Often you simply want to test it both ways: actually set up, format and allocate the space, run some benchmarks, then tear it down, rebuild it the other way and test again.  You will know if one design or layout has a performance edge or is really bad at random reads or writes.  That's not a bad thing to do at all, as it gives you a couple of practice runs at setting up storage volumes and pools.
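To put numbers on that overhead point, here is a trivial sketch, independent of any particular RAID implementation:

```python
def parity_overhead(total_drives: int, parity_drives: int) -> float:
    """Fraction of raw capacity lost to parity."""
    return parity_drives / total_drives

# 3 drives with 1 parity drive: a third of the raw space is overhead
print(round(parity_overhead(3, 1), 2))   # 0.33
# one more drive and it drops to a quarter
print(round(parity_overhead(4, 1), 2))   # 0.25
```

The same function shows why larger arrays amortize parity better: at 5 drives with single parity, only 20% of the raw space is overhead.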

It's also a great way to test adding a single drive and see how the OS reacts when you either want to replace a drive or add one.  You will also want to pull a drive live and put the system in a "panic" mode, as if a drive had failed.  Then you can at least practice the process of replacing a drive.

It's way better to see how this works when you cause it yourself, with no real data at risk!

For the most part I'd say do a dry run of the OS install, planning on using the SSD as the only drive in the system during the install.  Look at what/how it partitioned the SSD and whether the remaining partitions will be big enough as working partitions.  If the OS doesn't warn you or tell you it needs another drive or a larger one, you probably have your answer right there.

I'm thinking you only need the SSD to install with. Then you can put in the 4 to 6 drives you already determined was optimal and build it.  You can certainly hold back a drive or two in the beginning. If you're sitting on 2 spare drives you're in great shape: you have a replacement ready if a drive fails, and worst case you already have half of your next storage expansion.

I wouldn't use a single drive for you and your friend to share stuff.  Just make a folder, or even a small pool from main storage, for this.

You will want to get a good backup of the system after it's set up, before you start adding all the media.  When I say backup I mean a true backup with a good backup client that uses compression and deduplication. That will allow you to back up everything but your main volume often, with very little actual backup space used. How you back up your main storage is up to you. I know lots of guys who keep backups of the system but not the media, figuring it's OK if they lose it since they would just start over with new media but could have a working system back quickly.  Other people, like myself, keep a copy of all media.  I don't use a backup client for this, just the archive bit of the files to know if they've been modified.
I then "rsync" (essentially copy) what's changed to another storage location, which happens to be out of the house for an offsite backup.
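The changed-files-then-copy approach described above can be sketched in Python with an mtime-and-size comparison. This is a toy stand-in for rsync (real rsync also handles delta transfer, deletions and remote targets), and the function name is made up for illustration:

```python
import shutil
from pathlib import Path

def sync_changed(src: Path, dst: Path) -> list:
    """Copy files from src to dst when missing or modified (rsync-like,
    judged by modification time and size rather than an archive bit)."""
    copied = []
    for path in src.rglob("*"):
        if not path.is_file():
            continue
        rel = path.relative_to(src)
        target = dst / rel
        if (not target.exists()
                or target.stat().st_mtime < path.stat().st_mtime
                or target.stat().st_size != path.stat().st_size):
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(path, target)   # copy2 preserves the mtime
            copied.append(str(rel))
    return copied
```

Running it twice in a row copies nothing the second time, which is the whole point of an incremental sync.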

Nothing wrong with OpenMediaVault, which is one of the OSes I mentioned that is more like a web-based NAS setup for media servers.  It's kind of a plain-jane UI but has a nice orderly menu system.

I don't know if you've ever seen it live or had a chance to explore the UI to really get a feel for it, but if you like, one evening I can give you a 10-15 minute guided tour of the UI, as well as show you my personal NAS software, if you want to call it that; it's more like a Personal Cloud Service where OMV is a small part of it. Then after showing it to you I can spin up a VM and let you play with it for a couple of days. I could create 4 or 5 virtual disks if you wanted to play around with the storage subsystem to see how it works.


  • 2 weeks later...

Hi Cayars,

Sorry for my late feedback; I was trying out some of your suggestions in between, but I was also very busy at work here at year's end.

Thanks again for all your good information, help and great offers, I really appreciate that.

I was still waiting on some parts so I couldn't really move on, but now I have the parts and I've also finished the build.

First thing I did was run the server without any OS, just to get the BIOS updated, check that fans and cooling run as expected, see that all hard drives are detected, etc.

All good, so next was to install Open Media Vault. I got this installed on a Kingston 480GB SSD, by the classic "choosing the wrong drive" mistake 😁😁, but all good so far.

I had this new Kingston SSD just lying around, so I installed it into the server to check out all the SATA connectors and the M.2 slot on the motherboard at the same time.

By the way, I followed your advice to add 1 more WD Red 14TB for the main storage, so now I have 4 of these drives installed.

OMV sees all my hard drives, so that seems great so far.

1 x Samsung 970 EVO M.2 NVME 250GB

1 x Kingston 480GB SSD

3 x WD Blue 1TB SSD

4 x WD Red Plus 14TB

My plan is to re-install OMV to the Samsung 970 EVO M.2 NVME 250GB later, and then maybe use the Kingston for system backup, if that is a good idea.

Or would it make sense to leave the installation as is, and then use the Samsung NVME for maybe Emby, Plex, etc...???

So far I've tried to play around with OMV, still without creating any RAID or file system; I just went through all the menus to see what's there. I ran some updates, got

the OMV-Extras installed, got Docker and Portainer installed, etc., just to get an idea of how it all functions. I have no experience using either of those two, but at least I know how to install them now 😂😂

Next step will be to re-install OMV to the correct drive, run updates again, install OMV-Extras, Docker and Portainer again, then try to create the RAID and file system for the main media storage and shared folders.

First question: what RAID would you suggest for the main storage? I would like to use EXT4 as the file system, and I have a full backup of all my media files on a different computer, so with that in mind, what would be the best RAID choice? RAID 5 for capacity? RAID 6 for disc safety? RAID 10 for performance and some safety?

Or, like I read somewhere, "the best RAID is no RAID"?

As a media server, how much difference in performance would actually be visible between these 3 types of RAID?

From my point of view RAID 5 would make the most sense, due to the capacity and the existing backup, but again, this is just my own thinking.

Second question: what do you actually mean by "a true backup with a good backup client that uses compression and deduplication"?

I can see that OMV-Extras has some different system backup plugins; are these what you're referring to?





18 minutes ago, bacardi8 said:

 I got this installed on a Kingston 480GB SSD, by the classic "choosing the wrong drive" mistake 😁😁, but all good so far.

Happens to the best of us. Play around enough and it will happen <cough> like it did to me 2 days ago. :)

19 minutes ago, bacardi8 said:

By the way, I followed your advice to add 1 more WD Red 14TB for the main storage, so now I have 4 of these drives installed.

Nice, glad you did that, as you'll end up with more of your disks usable this way.  Only 25% vs 33% goes to parity.

20 minutes ago, bacardi8 said:

OMV is seeing all my hard drives, so that seems to be great so far.

I don't know if I mentioned this before, but you will want to check your motherboard manual to see if all SATA connectors run at the full 6 Gb/s or if any are shared, either with other SATA connectors or with PCI lanes.

23 minutes ago, bacardi8 said:

My plan is to re-install OMV to the Samsung 970 EVO M.2 NVME 250GB later, and then use the Kingston maybe for system backup, if that is a good idea.

Or would it make sense to let the installation as is, and then use the Samsung NVME for maybe Emby, Plex, etc...???

No, don't waste an NVME, regardless of size, on backup. You can clone the Samsung to an image file or do a complete disk backup to another location. The Kingston could be used as a transcode drive for Emby to speed up file IO on that.

27 minutes ago, bacardi8 said:

So far I tried to play around with the OMV, still without creating any Raid or file system, but just went through all the menu to see what it is. I run some updates, got

the OMV-Extras installed, got Docker and Portainer installed, etc, just to have an idea of how that is functioning. I have no experience using any of these two, but at least I know how to install them now 😂😂

Next step will be to re-install OMV to the correct drive, run updates again, install OMV-Extras again, install Docker and Portainer again, then try to create Raid and file system for the main media storage and shared folders.

At this stage, here is what I would recommend: start playing with RAID, maybe even using smaller disks.  What you want to do is get a 4-disk RAID set up, copy some files over, shut down, remove a drive (fault it) and start back up.  You should get warnings and errors, but since you've done this on purpose you can see what it's like now, so you are prepared later.

Shut down, install a "new drive" (5th drive) and try doing a restore/rebuild.  This is the perfect time to learn how to rebuild a faulty array, so when/if it happens with good data on it you are prepared and know the routine. If the restore seems complicated, start over and do it again and again until you have it down comfortably.

While you still have the 4 drives installed, try to add a 5th drive and expand the pool. Try a 6th drive. Try shutting the power off during a rebuild to simulate the "oh sh*t" experience you would have in real life, and go through the motions of a repair at that point.

If you do those things you will remove the mystery and the "what if" you'd otherwise always have later, especially as your storage grows. It's the perfect time to play a bit and see how mixed-size drives work, etc.  Better to learn up front without risk to your media.

40 minutes ago, bacardi8 said:

First question would be, what Raid would you suggest to me for the main storage..?? I would like to use EXT4 as file system, and I have a full backup of all my media files on a different computer, so with that in mind, what would be the best Raid choice..??, Raid 5 due to capacity..??, Raid 6 due to disc safety..??, Raid 10 due to performance and some safety...??, 

or like I read somewhere, "the best Raid is NO Raid"..??

As a Media server, how much difference in performance would actually be visible between these 3 types of Raids...??

From my point of view a Raid 5 would make most sense, due to the capacity and due to existing backup, but again, this is just my own thinking.

Second question would be, what do you actually mean when you say "true backup with a good backup client that uses compression and deduplication" ..??

I can see that OMV-Extras has some different system backup plugins, is it like these you referring to...??




Tough question, as it depends on what you're going to run on the machine, whether you're going to use VMs, and the level of protection you want.  EXT4 is going to be the fastest for general use as well as give you the most storage, but it has no protection built in and really no special features. Both Btrfs and ZFS are resilient file systems that can offer a lot of neat things like snapshots; I'd never build another storage server without them. You do take a small performance hit in general use, and give up a bit of storage for the file system to keep track of these special things.

Let's say you want to run a VM or two on this machine.  While the VM is running you can take a snapshot of the whole machine in a second or two and have a copy of the machine at that point in time.  This is great for testing things.  You might have just installed the OS, then installed 6 other programs, and now have a tricky one, so you take a snapshot, continue installing, and hit a snag: revert back and start over again.

Snapshots work on Btrfs and ZFS by keeping track of only the changed data.  So if you had a 1 TB VM and took 50 snapshots, the amount of disk usage will hardly be more than the original 1 TB, because the file system tracks things by blocks and understands that multiple files can share blocks to save space. This also allows you, for example, to install a copy of the latest greatest Ubuntu to play with, snapshot right after the install, then go on your way doing whatever you want, like testing OMV on it. Now you can go back, clone the earlier snapshot into a new VM and try installing Cockpit on it, then clone the original snapshot again and install Webmin on that one.  You now have 3 different VMs, each possibly with multiple snapshots, but the total storage will only be the amount of space for what's unique between them all.

That is powerful.  But a snapshot isn't just for VMs.  Here's the beauty of it: snapshot the whole system!  It's just a point-in-time "capture" of the state of the machine. The file system keeps things as they are, then uses new storage only for what changes.  Make a boo-boo, revert. But what if you took a snapshot tonight, got home from work tomorrow and found all your media with funny names and encrypted with a bitcoin ransom? If you're using EXT4 you're toast, as the only hope you have is your backups.  With Btrfs or ZFS, you guessed it: revert to the previous snapshot.  Even if you can't execute anything because all the executables are mangled, you can boot from a thumb drive and then revert the snapshot.  That single reason alone is worth it to me.
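To make the block-sharing idea concrete, here's a toy Python model of copy-on-write snapshots. It's a big simplification of what Btrfs/ZFS actually do on disk, but the accounting works the same way: a snapshot copies references to blocks, not the blocks themselves.

```python
class CowVolume:
    """Toy copy-on-write volume: snapshots share unchanged blocks."""

    def __init__(self, blocks):
        self.store = {}       # block_id -> data (the only real storage)
        self.live = {}        # logical block number -> block_id
        self.snapshots = {}   # snapshot name -> {block number -> block_id}
        self._next = 0
        for i, data in enumerate(blocks):
            self.write(i, data)

    def write(self, n, data):
        """Writing allocates a new block; old blocks stay for any snapshot."""
        bid = self._next
        self._next += 1
        self.store[bid] = data
        self.live[n] = bid

    def snapshot(self, name):
        """A snapshot is just a copy of the block references, not the data."""
        self.snapshots[name] = dict(self.live)

    def blocks_stored(self):
        """Count distinct blocks referenced by the live view plus all snapshots."""
        used = set(self.live.values())
        for snap in self.snapshots.values():
            used.update(snap.values())
        return len(used)

vol = CowVolume(["A", "B", "C", "D"])
vol.snapshot("before")
vol.write(0, "A2")          # only block 0 diverges after the snapshot
print(vol.blocks_stored())  # 5 blocks stored, not 8
```

Fifty snapshots of a mostly unchanged volume cost little more than the live data, because unchanged blocks are referenced, not copied.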

They both offer additional things as well, such as compression and deduplication to save space, but you'll find this doesn't do much for media files, especially if they are encoded in H.265.  But if you store other stuff on the box besides media, then perhaps a small partition for that: a 5 TB partition set up with compression and deduplication can go a long, long way for things like PDFs, office documents and the like. I've had about 35 TB of this type of data in 4 TB of space on a business file server.

What you can do is play a bit with file systems after you get the RAID rebuilds down. Try setting up dual volumes on the pool configured differently, try some timed copy jobs, and maybe run a benchmark or two to see what the difference and overhead is on your computer with your setup.  Try playing with the snapshot feature to see if that alone is worth it or not.  Only you can decide what's best.  One thing is for sure: once you've actually played with a couple of different file systems you'll be that much more in the know personally and won't need to ask me or anyone else.

I just went back and looked at your hardware list and see you have a 2288G Xeon with 630 integrated graphics, 8 cores, 16 threads and a Passmark score of 17K. It would be a shame to let those threads go to waste without making use of containers and VMs on this box, so you know which file system I'd likely go with. :) PS: nothing wrong with using just EXT4 for the boot drive!

So the deal with RAID is pretty simple when you break it down to its simplest form, regardless of file system (a RAID 6 is roughly RAIDZ2 in ZFS terms).
RAID 5 is one parity disk (whether pure parity or striped across all disks), i.e. one extra disk per array.
RAID 6 is two parity disks, i.e. 2 drives from the pool go to parity.
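For the 4 x 14 TB drives in question, the usable space per level works out roughly like this (back-of-the-envelope only; real pools lose a bit more to file system overhead):

```python
def usable_capacity_tb(n_drives: int, drive_tb: float, raid: str) -> float:
    """Rough usable capacity for common RAID levels."""
    if raid == "raid5":   # one drive's worth of parity
        return (n_drives - 1) * drive_tb
    if raid == "raid6":   # two drives' worth of parity (RAIDZ2 in ZFS terms)
        return (n_drives - 2) * drive_tb
    if raid == "raid10":  # mirrored pairs: half the raw space
        return n_drives * drive_tb / 2
    raise ValueError(f"unknown RAID level: {raid}")

for level in ("raid5", "raid6", "raid10"):
    print(level, usable_capacity_tb(4, 14, level), "TB")
# raid5 gives 42 TB usable; raid6 and raid10 both give 28 TB
```

With only 4 drives, RAID 6 and RAID 10 cost the same capacity, so the choice between them comes down to rebuild behavior and performance rather than space.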

You will see tons of posts all over the internet saying RAID 5 should never be used. There is a historical reason for this, but the problem was solved a long time ago, so the reason many quote is just gibberish and shows these people do not understand modern file systems; they're stuck in 20-year-old thinking. But there is a very valid reason why RAID 5 isn't the best choice.  Back when we had "monster" drives of 500GB, you had 4 drives in a RAID and one crashed. You replaced the drive and had to rebuild 1.5 TB of data.

Now take my little 920+ with 4 18TB drives, which after the file system is around 40TB of data.  These modern drives are likely a little faster than those older drives (unless those were early SAS drives, which were FAST).  So rebuilding a RAID 5 drive now takes roughly 26 times longer. If a rebuild used to take 4 hours but now takes a bit more than 100 hours, or 4 days, that's a lot more risk to the array without a spare. Unfortunately, drives tend to fail in similar fashion, so if one drive dies and you then put the remaining drives under the heaviest of loads doing a rebuild while still being operational, the odds are a lot higher that another drive could die during the rebuild.

Obviously a RAID 6 array still gives you good parity during a rebuild. But many people don't even like those odds, because 100 hours is a long time with one drive down. Ten hours in you could lose a 2nd drive, and then it's time to start sweating even if you have a backup, because drives tend to fail at the worst of times.
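A quick best-case estimate of how long rebuilds take. The throughput figures below are illustrative assumptions; a rebuild on a live, busy array can take several times longer than this:

```python
def rebuild_hours(drive_tb: float, mb_per_s: float) -> float:
    """Best-case hours to rebuild one failed drive at a sustained write rate."""
    return drive_tb * 1_000_000 / mb_per_s / 3600

# a 500 GB drive of old at an assumed ~60 MB/s,
# vs a modern 18 TB drive at an assumed ~200 MB/s
print(round(rebuild_hours(0.5, 60), 1))   # about 2.3 hours
print(round(rebuild_hours(18, 200), 1))   # about 25 hours, best case
```

The capacity grew 36x while sequential speed only grew a few times, which is exactly why rebuild windows, and with them the risk of a second failure mid-rebuild, have ballooned.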

So that's the opinion of some "experts", but it isn't without merit.

BTW, I just went and looked, and there are BIOS, firmware, management and Linux driver updates on the website for your motherboard.
I also took a look to see if there are any issues with SATA ports or other PCI lanes to watch out for, and you're good.


Started a new post.

The type of RAID used does affect your read/write performance, but unfortunately a lot of the material you find on the internet is just flat-out wrong.  Again, people living in the IT world of 20 years ago.  You'll see graphs like this with "expert" analysis:


or https://blog.storagecraft.com/raid-performance/


Parity RAID adds a somewhat complicated need to verify and re-write parity with every write that goes to disk. This means that a RAID 5 array will have to read the data, read the parity, write the data, and finally write the parity. Four operations for each effective one. This gives us a write penalty on RAID 5 of four. So the formula for RAID 5 write performance is NX/4.

So following the eight spindle example where the write IOPS of an individual spindle is 125 we would get the following calculation: (8 * 125)/4 or 2X Write IOPS which comes to 250 WIOPS. In a 50/50 blend this would result in 625 Blended IOPS.
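For reference, here is the quoted article's read-modify-write model in code. This reproduces the numbers being quoted, not an endorsement of the model:

```python
def raid5_iops_classic(n_spindles: int, iops_per_spindle: float,
                       read_fraction: float) -> dict:
    """Classic RAID 5 model: write penalty of 4 (read data, read parity,
    write data, write parity), i.e. the article's NX/4."""
    read_iops = n_spindles * iops_per_spindle
    write_iops = read_iops / 4
    blended = read_fraction * read_iops + (1 - read_fraction) * write_iops
    return {"read": read_iops, "write": write_iops, "blended": blended}

print(raid5_iops_classic(8, 125, 0.5))
# {'read': 1000, 'write': 250.0, 'blended': 625.0}
```

The formula itself is simple arithmetic; the dispute is over whether the penalty of 4 still applies when a modern implementation computes full stripes in memory and writes each one once.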

Problem is, the author is living under a rock and doesn't know what he's talking about, which you see when you read on about how the calculations are done.  This leads back to what I mentioned about the historic problem with RAID 5: no one does these multiple reads and writes anymore.  The data is broken up in memory (or hardware), the striping is calculated, and it is written once to disk. So instead of RAID 5 & 6 being slow for both reads and writes compared to 1 disk, as is claimed, they are much faster. There is a penalty on writes compared to reads, because all 4 disks (in a 4-disk array) must be written to on a write operation.  But if you can write 3 times the amount of data (plus parity) in the time one drive writes its data, all going full out, then the 4-drive RAID 5 array still has the potential to be 3X the speed, assuming the computer is fast enough to calculate the parity (a non-issue these days).

This is a much better article that gets it correct

Speed aside, RAID 5 & 6 do still have the problem of the amount of time it takes to rebuild an array. For media servers it's a trade-off: it's not mission critical, and you won't go out of business if all is lost.  You'll be inconvenienced and won't have anything to watch for a day until you start loading new media. :)

If you actually have real backups much of this is mitigated anyway.

I don't know how interesting this type of thing is to you, but I've got a server build I'll be finishing up as soon as 2 more parts arrive, hopefully before Christmas (probably not).  Dual Xeon with 18 to 20 internal drive bays. It's joining my two other dual Xeon servers and hopefully becoming a 3-way cluster with a distributed file system. I've been wanting to do that for almost 2 years but have been missing the 3rd server. I'm definitely planning on testing different disk layouts and file system performance. I especially want to see how XFS, Btrfs and ZFS compare, especially the last two with 15-20 drives. I know the more drives used, the more memory hungry ZFS will be, so it will be interesting. Problem is, you spend 15-20 minutes configuring the file system, apply it, then 1 to 1.5 days later it's ready. It could take a month just doing 15 to 20 tests, but that's OK as I love this kind of info and won't be able to do it once things are loaded.



Great Stuff, Great Stuff..👍👍

All this just confirms what I had actually found out already by reading a lot about your suggestions,

which means that OMV is not the best or most suitable OS if we're talking about Btrfs (snapshots, mixing drives, performance, etc.).

To be honest, I only tried to install OMV in the first place because it looks to have a more "Synology DSM"-like user interface, so it feels more familiar to me.

On the other hand Garuda Linux looks like a winner based on these things, and also based on my hardware, but at the same time it also scares me, since I don't have any experience with it at all.

But okay, I will try it out, then I can learn some more....and get more confused...🤣🤣

This time I'd better be more awake regarding the different drives...😎😎







I would definitely give TrueNAS SCALE an install and a look over. You want SCALE, not CORE; SCALE is based on Linux and CORE on FreeBSD.
They are doing some really interesting things and taking a "NAS" to the next level.
The file system of choice they use is ZFS, which is tried and true and probably the most trusted file system in existence.

It's marked as beta as they are still working on some things, but nothing you will need right now.
The main parts still being tweaked are node related.

This platform will do pretty much everything you could want: a killer file system, single to multi-node installations with a distributed file system, balancing of jobs, a VM manager, container support (far better than plain Docker). The nice thing with this OS is the scale-out architecture, especially for storage. It runs on Linux, but they tune the kernel for server-type duties on the host side, which is ideal.

If I'd had that 5 to 10 years ago I wouldn't have storage running on 2 Windows servers, 2 Linux servers, 4 WD NAS, 2 Synology NAS, a few 8-bay USB/eSATA boxes, as well as several external USB3 drives.

With the architecture they are building, if you ran out of drive bays, instead of doing what I had to do you could add another node (computer). That could be a powerful box to share compute tasks, or a $40 tower with a Celeron or i3 you bought off eBay or got from a neighbor because it has 4 or 5 drive bays. This new node could be used just for storage, and to the "consumer" it's just another path in the file system.

It should be easy to manage with its web interface, and with the ability to set up VMs you could run any version of Linux in a VM to learn or play with it.

Of all the things we talked about, TrueNAS SCALE is probably the one item that would be very hard, if not impossible, to outgrow.
It's worth a 2nd look, and then if you don't like it a 3rd look, and then a 4th look before you dismiss it. LOL

Have a read about what ZFS is like done the TrueNAS way. BTW, they are refining ZFS on Linux, as they have a ton of experience with it from FreeBSD, which was always the OS used by TrueNAS CORE.  But FreeBSD is losing out to Linux in many ways, so a couple of years ago they started working on the Linux version of TrueNAS to be able to take advantage of everything Linux has to offer. A million installations of their previous NAS product says a lot. TrueNAS can be found everywhere from home labs to large corporations who bank on their file system.  It's that good.

The only downside to ZFS is the memory it uses. It will claim about 1 GB of memory per TB of managed storage, so 16 TB will require 16 GB of memory to keep it humming along nicely. It's a rule of thumb; if performance isn't bad you can go lower than the guide, but that's what is suggested to their customers, which are often enterprise customers. Btrfs will most likely need around half that amount, but isn't as refined as ZFS and doesn't have all its features.
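The rule of thumb in numbers (just a guideline; as noted, without deduplication you can often run happily on much less):

```python
def zfs_suggested_ram_gb(managed_tb: float, gb_per_tb: float = 1.0) -> float:
    """Rule-of-thumb ARC sizing: roughly 1 GB of RAM per TB of managed storage."""
    return managed_tb * gb_per_tb

print(zfs_suggested_ram_gb(16))      # 16.0 GB for a 16 TB pool
print(zfs_suggested_ram_gb(4 * 14))  # 56.0 GB by the strict rule for 4 x 14 TB raw
```

For the 4 x 14 TB build discussed here the strict rule suggests a lot of RAM, which is exactly why many home users treat it as an upper guideline rather than a requirement.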

There's good conversation in this thread, where many people say you just don't need to keep adding memory as recommended if you're not using deduplication, which is the main memory consumer.


