Out of Space - Need a New Server and Looking for Build Ideas



WhiteGuyTranslating

So, 6 years later, I need a new server. I am officially out of space. I've lost 2 HDDs, almost corrupted an array, and reformatted my OS twice (mainly due to jumping on the SSD bandwagon a little too early and, I suspect, a little too cheaply). My family's collection of 161 Blu-rays, 830+ DVDs, and 44 TV series (including long runners like Cheers, Frasier, and The Simpsons) has pushed me to the 13TB mark and the max of my current setup. I have been with Emby as one very loyal member, but I'm as inept as they come where technology is concerned. Like a good drug, though, I can't stay away. I admit it - I thoroughly enjoyed having a better-than-Netflix experience before there was Netflix. The jealousy of my dinner guests was a delicious aperitif to an evening of fine dining as we watched our favorite flicks.

 

I am now at a crossroads. I see the bevy of technology in front of me and I salivate over the prospects. What server should I use, and what hardware should accompany it? A back story of my hardware:

 

I have more parts in the machine, obviously, but I feel these are the main players and give anyone who has built a rig themselves a good idea of what I have to work with. The thing is, I am really not interested in keeping any of it. Here's my thought: I've gotten a lot of use out of these parts, which, except for the hard drives, were all bought 6 years ago. I've never kept a computer fully intact for more than 3 years in my life, and the system still performs admirably. I am torn between going to a NAS setup or staying with my tried-and-true Windows setup. The linchpin of my Windows setup is the LSI RAID card. It COULD save me if my motherboard dies, as I would just have to transfer the card to another server, with disks attached, and I'd be up and running again. That said, I wonder if there are better options out there.

 

See, when I started with Emby, transcoding wasn't really a thing. Now that streaming is built into Emby, I am noticing FFmpeg chewing up my resources like there's no tomorrow. My poor 6-year-old processor is struggling and, for the first time ever, I actually get stuttering in movies when I play two at the same time. I figure I have two options: just migrate to a beefier, modern processor with more than two cores, or try to track down the issue. Again, I'm not good with computers, but I can open a wallet like a damn savant. Since I am also running out of storage, this seems to be where I am stuck.

 

Getting back to the meat of this (long) thread, I am looking for build ideas from those of you in the community. I am hoping that one or two of you have similar usage patterns to mine and I might be able to copy your system if it works great for you. For instance, my server resides in my office with my work/play computer. I can open the well-ventilated cabinet, pop out the disc tray, load a movie, and remote desktop into it (it's running Win7 Ultimate). I can watch from the upstairs or downstairs TV via Windows 7 Media Center, with Emby for WMC installed. I also enjoy live TV via an HDHomeRun Prime, for cable. I have a couple of wireless devices, but everything else is hard-wired on a commercial-grade network (it was my big splurge when we moved into our first home a few years ago). Also, I picked up a Roku 2, and I am pleasantly surprised with its performance for such a little and inexpensive thing (by comparison to the two computers running Win 7 MC). I've also used an Xbox 360 in the past, but it sounds like a weed eater being tossed into a wood chipper during a wild weekend at Sturgis - all the while on fire.

 

So, what are your thoughts? I am sitting on 13TB of data, and while money is not really a concern, I don't necessarily want to throw it away. Let's face it, Netflix and other streaming services are out there, so I can't get too ridiculous.


PenkethBoy

Ah the joy of new hardware and what toys to buy :)

 

Essentially you need more storage, a modern, up-to-date processor (future proofing), and a better RAID card for SATA III disks or SAS.

 

I will assume your network is 1 gig

 

My current setup

 

Emby on an Intel Skull Canyon NUC with 64GB RAM and two M.2 Samsung 950 Pro 500GB drives - at the moment overkill

Data store - QNAP NAS via 1 Gbit network

Other storage - a combination of three NASes - one QNAP and two Xpenology (Synology clones)

The NUC is connected to my 4K TV, does any transcoding necessary, and streams content to the rest of the house via the 1 gig network

 

So if it was me doing this?

 

Option a) minimal configuration - off the shelf boxes

1) A powerful mini PC with the option to increase RAM to, say, 32GB or even 64GB (Emby does not use a lot at present but might need more in the future with new features). CPU - you could go i3/i5 to start with and swap to an i7 when you need it, or go the whole hog now and get the best i7 available. You also get Intel Quick Sync to help with transcoding, which Emby supports.

2) Graphics card? Well, the built-in graphics on Intel chips these days are fine for day-to-day computing, and transcoding on NVIDIA cards is just starting to become a thing but isn't mature yet - so I would keep the option open to put in a card later when things are more sorted.

3) Shift your "data" to a dedicated storage solution - this could be a NAS - as essentially you want as much storage as you need available to the mini PC via the network.

4) You could use a DAS (direct attached storage) solution - either simply a USB 3/3.1 enclosure, or more high-end with a SATA/SAS enclosure (if you have room for a good SAS controller in your mini PC), or a Thunderbolt 2/3 enclosure - the NUC I have supports these, but they are poor value. Remember you will ultimately be limited by your network speed, so DAS is nice but not going to have any effect on streaming to the back bedroom.

 

Option b) Use a NUC or equivalent for the server

1) This means you don't have the option of NVIDIA or a SAS controller

2) You have a small PC to hide on the back of the TV or your monitor

3) Storage as a NAS

 

Option c) Build a dedicated server

1) Find a suitable large case that will take lots of HDDs/SSDs - I use the HAF X from Cooler Master - I've managed to get 14 drives in with minimal modification, and you can add lots of cooling to keep your HDDs happy and long-lived. You could reuse an existing case as well

2) Motherboard of choice, but modern and with a good CPU - e.g. an i7 6700K - and memory as you need/want - 2 LAN ports, or pop in an extra LAN card

3) Make sure the MB has a good selection of PCIe slots

4) Get 1 or 2 good SATA/SAS RAID controllers - I use Highpoint, but it's up to you

5) A good power supply, as you are going to want to run multiple disks and need enough connectors without getting spaghetti wiring, and you may want a powerful GPU

6) An optional graphics card - some higher-end MBs disable the Intel graphics on the chip, so you need at least a basic card to boot - but you also have the option of NVIDIA/AMD cards for transcoding if you want

7) If you want even more storage in the future, you could build another simple box and connect it via SAS as a DAS

8) Any other options you can think of :)

 

As for HDDs for the data - for longevity I would go with NAS-rated drives, as you get a better warranty and they are rated to run 24/7. Yes, they are more expensive, but you get what you pay for. You don't need to go for enterprise drives - the standard WD Reds are fine and run cooler than their 7200 rpm cousins, the Red Pros. Having said that, I use HGST NAS 7200 rpm drives in my NASes and WD in my backup Xpenology servers.

As for RAID, I would go with RAID6 - double redundancy should you get disk problems. Any RAID these days, with a powerful CPU and a good interface card, will give you very fast data access - well in excess of any 1 gig network's requirements. Some claim that RAID5/6 are slower than, say, RAID10 - these days that's outdated thinking, as CPUs are so much faster that the difference is minimal (although measurable) and in real-world use not going to be noticeable to you or Emby (e.g. I get 1000MB/s+ out of my 12-disk RAID6, which would challenge even a 10 gig network :) but for now that's unusable).

The future is 10 gig networking, but it's still too expensive for the average consumer and will be for another 2-3 years - that can be sorted with a few add-on cards when it happens. Hence the comment about PCIe slots: you don't want to be limited by a lack of them if avoidable and have to change the MB, and possibly the CPU, just to get 10 gig networking or some other option :)
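
To put some very rough numbers on that - the drive count, drive size, and Blu-ray bitrate below are just example figures I picked for illustration, not a recommendation:

```python
# Rough numbers behind the RAID6 / network comments above.
# Drive count, drive size, and bitrate are example figures, not a recommendation.

drives = 12              # disks in the array (example)
drive_tb = 4             # TB per disk (example)

usable_tb = (drives - 2) * drive_tb        # RAID6 reserves two disks' worth of parity
print(f"RAID6 usable space: {usable_tb} TB out of {drives * drive_tb} TB raw")

array_mb_s = 1000        # sequential throughput quoted above for a 12-disk RAID6
network_mb_s = 1000 / 8  # a 1 Gbit/s network tops out around 125 MB/s before overhead
print(f"Array: ~{array_mb_s} MB/s vs 1 gig network ceiling: ~{network_mb_s:.0f} MB/s")

bluray_mbit = 40         # a heavy Blu-ray remux bitrate, roughly (assumption)
print(f"Streams a 1 gig link could carry at that bitrate: {int(1000 / bluray_mbit)}")
```

So the array is nowhere near the bottleneck - the network is, which is why RAID5/6 being "slower" really doesn't matter for streaming.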

 

You could have an SSD as a boot drive - but once booted you will struggle to notice the difference. I have an M.2 in my main PC as the boot drive, and a 10-15 second boot from cold is a nice feature :)

 

My vote, in order of preference, is Option c, then b, then a :) but I like playing with new hardware toys :)

 

Anyway, some options for you - others will have different opinions, I'm sure - just take your time and separate the wheat from the chaff :)


legallink

Another opinion:

 

Just pre-convert all of your media to a streaming-friendly format (MP4/AAC if you are using Apple and/or Roku). You won't be CPU limited at all, and it should be a pretty seamless experience.
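
If you go that route, a small script along these lines could batch the job overnight - just a sketch, assuming ffmpeg is installed and on the PATH, with placeholder folders and encoder settings you'd want to adjust:

```python
# Batch pre-convert sketch: MKV in, MP4 (H.264 + AAC) out.
# Assumes ffmpeg is installed and on the PATH; folders and quality settings
# are placeholders - adjust for your library and devices.
import subprocess
from pathlib import Path

SRC = Path(r"D:\Media\Movies")       # placeholder source folder
DST = Path(r"D:\Media\Converted")    # placeholder output folder
DST.mkdir(parents=True, exist_ok=True)

for mkv in sorted(SRC.rglob("*.mkv")):
    out = DST / (mkv.stem + ".mp4")
    if out.exists():
        continue                     # skip anything already converted
    subprocess.run([
        "ffmpeg", "-i", str(mkv),
        "-c:v", "libx264", "-crf", "18", "-preset", "slow",  # near-transparent H.264
        "-c:a", "aac", "-b:a", "192k",                       # AAC audio
        "-movflags", "+faststart",                           # moov atom up front for streaming
        str(out),
    ], check=True)
```

It skips anything already converted, so you can just re-run it as you rip new discs.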

 

I would also subscribe to an off-site backup service like CrashPlan or Backblaze (I use Backblaze because upload speeds were about 10x those of CrashPlan).

 

In this setup, the power of your CPU isn't all that important, except for transcoding live TV... so on that point, how many live TV streams do you anticipate needing to serve up?

 

I also think RAID is overkill for most setups. A good backup strategy and decent monitoring of your hard drives should be enough. If you want to do some sort of drive pooling or similar, I think it is better to use DriveBender or something like it than a RAID card, because it is ideal to reduce the possible points of failure unless you love monitoring systems (which it doesn't sound like you do). If you use a RAID card, you are always tied to that card, and there is more wear on your drives.

 

I personally love the HGST drives, but the more recent Seagate drives are getting better (see Backblaze's blog posts on hard drive reliability).

 

More important than fancy, expensive systems/setups is redundancy/backups (offsite and onsite if you can do it). RAID is meant to handle hard drive failure, but unless you are using ECC/server-grade components, RAID isn't designed to handle file corruption.

 

In general, I find that a good SSD for at least your OS/boot drive is important. You may want to get a second SSD to handle all the cache/transcoding/metadata. I found significant speed boosts when all of that was moved to an SSD.

 

Most everything else, unless you are doing some elaborate things or serving up a whole bunch of users, is a bit overkill.

 

As for my setup, I have an i5 (2500K) with an SSD as the OS drive, three 8TB drives (HGST), and three 4TB drives (a mixture of Seagate and HGST). I use two HDHomeRun devices for Live TV and Schedules Direct for EPG.


Harblar

I just went through the process this winter.

 

Things can be done fairly cheaply (not counting the storage drives) and be 1000% better in terms of performance.

 

I'm running an unRAID server and couldn't be happier with the results. It's a Linux-based software RAID that allows you to mix and match up to 24 different drives: 23 can be actual storage, with 1 being a parity drive (which allows for the rebuilding of a failed drive). The latest betas even allow for dual parity, meaning the ability to recover from a dual drive failure. Data isn't striped across multiple drives, so if you did suffer an unrecoverable failure, only the data on the failed drive would be lost (and that would require 2 drives failing simultaneously, or 3 in the case of a dual parity setup).
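
To illustrate the parity idea - this is just the XOR concept behind single parity, a toy example and not unRAID's actual implementation:

```python
# Toy illustration of single parity (XOR), not unRAID's actual code.
# Three tiny "data drives" and one parity "drive": lose any one data drive
# and XOR-ing the survivors with the parity gives its contents back.
from functools import reduce

def xor_blocks(blocks):
    """Byte-wise XOR across equal-length byte strings."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

data_drives = [b"\x10\x22\x33", b"\x04\x55\x66", b"\x78\x08\x99"]
parity = xor_blocks(data_drives)           # parity drive = XOR of all data drives

lost = 1                                   # pretend drive 1 just died
survivors = [d for i, d in enumerate(data_drives) if i != lost]
rebuilt = xor_blocks(survivors + [parity]) # rebuild from the survivors plus parity

assert rebuilt == data_drives[lost]
print("rebuilt contents of the lost drive:", rebuilt.hex())
```

That's also why every other drive has to be read in full to rebuild the dead one.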

 

Everything is managed through a web GUI, much like Emby itself, so it's pretty easy to use, and it can be browsed via Windows File Explorer.

 

It also has the ability to host virtual machines and run Docker apps. I'm currently running Emby and Plex on mine and they work great!!! It easily feeds/streams to all my devices.

 

Here's a link to the unRAID forum thread I started when I decided to upgrade my server back in February/March. I'm currently running 19TB spread over 8 data drives and a single parity disk. (If I were to upgrade my drives, from the various mix of 1TB-4TB drives that I currently have, to new 8TB drives, my capacity would expand to 64TB. Hopefully someone will crack UHD AACS 2.0 soon so I can start filling my Emby library with some 4K goodness!) ;-)

 

All of my rips are 1:1 MKV rips. I have yet to have a problem with Emby transcoding and streaming them reliably, and any device capable of Direct Play (Emby Theater in my home theater) just plays them as-is... Total sweetness! :-)

 

Good luck on your build.  I couldn't believe I waited as long as I did once I saw how much better everything worked on the new hardware!


mhaubrich

This clued me in to my solution: http://www.techspot.com/review/1155-affordable-dual-xeon-pc/

 

I decided I really didn't need a dual-proc setup, although now I'm kind of sorry I didn't just do it. It was tough finding a regular ATX power supply that I could be sure had two EPS-12V outputs. Anyway, I got an E5-2670 from eBay for $60 and spent $300 on an ASRock Rack mobo (EPC602D8A). I'm working on a professional degree, so I was able to get Windows Server 2012 R2 Essentials at no cost, and I use DrivePool because I've had bad luck with RAID and I prefer to mix and match disks, especially when I find them on sale. I can keep the same pool forever and just keep rotating new disks in when I need/want to. Everything is in a NORCO RPC-4308 case in my rack (I needed a shallow case, so the NORCO was almost perfect, though I did have to get creative to figure out how to get my SSD in there). Not counting the drives, but with 32GB RAM and an SSD, the whole shebang was under a kilobuck, and I've got a pretty kick-butt server I can run VMs and do other fun stuff on. I love the IPMI on the mobo, so I can run the server totally headless.

 

I'm definitely an advocate for drive pooling vs. RAID. I used to use DriveBender before that product stopped being developed. I also like DrivePool pretty well now and would recommend it.

 

If you like this idea but have a bit more to spend and want a slightly more "current" proc for whatever reason, TechSpot had a follow-up article recently: http://www.techspot.com/review/1218-affordable-40-thread-xeon-monster-pc/

 

Good luck!  Have fun!


  • 4 weeks later...
WhiteGuyTranslating

So, believe it or not, I have been really doing my homework on this and I think I have narrowed down my decisions. First of all, I must thank users mhaubrich, Harblar, PenkethBoy, and legallink. I've gone through all their setups and suggestions to figure out which would work best for my specific situation. Many of the keywords, phrases, and technologies they mentioned I did not understand or was not familiar with. However, after searching (which led down many rabbit holes), I feel like I have a much better grasp of modern options for media storage. I have come to the following conclusions:

 

For my system, the core storage would need to be RAID5 or 6. While pool-based solutions (DriveBender, DrivePool, and the similar Windows Server 2012 feature) are very enticing, if I understand right they are essentially a software RAID1. I like the idea of being able to transfer my entire collection to another server with relative ease if something happens to the motherboard, and skipping a RAID controller lowers cost, but the idea of having double the number of drives running to ensure redundancy is not something I find palatable. unRAID also sounds like a great idea, but the limiting factor there for me is the Linux element. I don't want my first foray into Linux to be my movie server - I guess I'm just not as daring as I once was.

 

I like the idea of running Xeon processors for power - a lot. As was mentioned before, though, I think a lot would have to line up to make that cost effective. There is also the practicality aspect of a dual Xeon build. I mean, it would be awesome to rip 10 Blu-rays at the same time (4 cores x 10 discs), but you'd have to have 10 disc drives, 10 instances of MakeMKV running, etc. Also, I'm not sure, but I don't think the fine devs at Emby support more than 8 threads in their programs. They could, but I highly doubt it. Future proof? Most likely, but let's face it, by the time we'd need 40 threads, I'd be long dead. Still, for $1500 it's tempting. Then again, pulling 300 watts in processor power alone does not make this a cheap system in the long run.

 

Now, the idea of extreme performance has me wondering what direction the devs are going with Emby. More importantly, when they build a new server, what will they build it with, and what vision do they have? I do not know how the software, which as MediaBrowser just acted as an interface or better skin for Media Center, now works as a centralized server. However, it seems that this centralization is a core feature of the Emby movement. Since I am pretty locked into using Emby for the foreseeable future, it makes sense to align my server build with where they see Emby going over the next half decade or so. Will it revert to being more of a data center (low power, low RAM, high storage) or become a transcoding/distributed computing system (high power, high RAM, high compression/medium storage)? I mean, transcoding did not exist in the first iterations of MediaBrowser and now it is front and center. The ability to connect to a live TV signal only existed on the head units via WMC, and now it can all be done on the server in Emby (I have not tried it, as I found it a little daunting and I still have WMC7). Just those two major changes of direction can, and have, seriously affected how my build will essentially have to go. Sure, I could convert all my media to MP4/AAC, which would stop the transcoding, but we'd be talking about weeks of conversion, and the conversion itself would eat processor power, so I'm basically back in the same boat of being limited by my old processor.

 

So, after reviewing all the options, I think I may continue researching this a bit further. I read on LSI's website (which is now Avago, which is now Broadcom) that the card I have can be extended to support up to 32 SATA HDDs. In theory, if I can find an expander card that is compatible with my OS and LSI card, I could just add more 2TB drives to the array for a while longer. I still need to upgrade the processor, but if I can hold off on doing a total upgrade, I might be able to put more money into a better class of MB and CPU combo.

 

That said, I run into this issue, and maybe someone with RAID experience on here can help.

 

Question: The two mini-SAS ports on my controller are full. They are multi-lane cabled out to 8 hard drives, and those hard drives are in a nearly maxed-out array (13TB, nearly full of content). If I remove a multi-lane cable, attach a direct SAS-to-SAS cable to an expander card, then plug those HDDs into the expander card's mini-SAS ports (keeping the same order), will I be able to migrate/expand the array, or will I lose it? In other words, to do what I want (basically add an expander card into a maxed-out system INSTEAD of adding the expander card BEFORE it was maxed out), is it possible to do it after the fact, or should I have used more forethought and not maxed out my two mini-SAS ports with HDDs?


PenkethBoy

So, if I understand correctly, you have a "hardware" RAID setup of 8 disks managed by the LSI card?

 

I say "hardware" because it's still a software RAID tied to the LSI hardware, so it's not portable to another controller type - similar to MB BIOS RAIDs - unless it's a high-end card with a dedicated RAID chip and memory cache, which is enterprise territory.

 

You want to connect the LSI to a SAS expander and be able to add more drives to the array.

 

So essentially you will have:

 

LSI port 0 > expander upstream port > HDDs connected to downstream ports 0, 1, 2, ...

So you get:

Exp Port 0 > 4 disks
Exp Port 1 > 4 disks
Exp Port 2 > x new disks

 

Some expanders have two upstream ports, but IIRC that's for redundancy rather than "splitting" the available disks.

Some expanders have two power options - Molex and PCIe connectors - usually you only need to use one, not both.

 

In theory your disks should be recognised and the RAID maintained - keeping the disks in the same order will help with this - plus the new disks should be visible.

 

Be careful which expander you get, as some of the cheaper ones have poor performance. Also make sure your LSI card is getting the full PCIe bandwidth it needs: the more disks you attach, the more that bandwidth is split between them, so the larger the number of disks, the less each one gets.
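
For a rough feel of how that split works - the figures below assume a PCIe 3.0 x8 card at roughly 985 MB/s per lane and ~200 MB/s per spinning disk, both assumptions, and an older card may only be PCIe 2.0 at about half that per lane:

```python
# Rough feel for how controller bandwidth divides across attached disks.
# Assumes a PCIe 3.0 x8 card (~985 MB/s usable per lane) and ~200 MB/s per
# 7200 rpm disk - both assumptions; an older card may be PCIe 2.0 (~500 MB/s per lane).

lane_mb_s = 985          # usable PCIe 3.0 bandwidth per lane
lanes = 8                # x8 slot
card_mb_s = lane_mb_s * lanes

disk_mb_s = 200          # optimistic sequential speed of one spinning disk

for disks in (8, 16, 24):                  # drives hung off one card via an expander
    share = card_mb_s / disks              # link bandwidth available per disk
    usable = min(share, disk_mb_s)         # a disk can't use more than it can deliver
    print(f"{disks:2d} disks: ~{share:,.0f} MB/s of link each, "
          f"so each disk can still run at ~{usable:.0f} MB/s")
```

On a full-speed x8 link the spinning disks stay the bottleneck even at 24 drives - the trouble starts when the card sits in a starved slot or the expander itself is the weak point.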

 

To give you a different perspective: I am today (hopefully) finishing my new build. I will be using 3 HBAs (the equivalent of your LSI) to connect 24 drives - no expanders. I looked into it and, ignoring the cheap entry-level expanders as their performance is limited, decided that 3 HBAs were cheaper and more flexible than one controller plus an expander. Also, my motherboard has 5 PCIe x8 slots, so each HBA has full bandwidth available with 8 disks attached, which is more than enough, with headroom for each disk to deliver its full performance.

 

I am not going to RAID them on the cards - actually I will flash the BIOS back to "basic" so the disks appear as "single" - so in essence each card becomes an 8-port "SATA" controller.

 

The OS is Win 10 Pro, with DrivePool + Scanner. Windows will see all the disks, and then I can configure them into different pool(s). DrivePool is a lot more sophisticated than a RAID1 (which does not have parity).

 

I have had various RAID arrays in the past and have RAID5 and RAID6 running at the moment (QNAP and Xpenology (Synology) respectively). The issue you can face is that when a drive shows errors - yes, you can pull it and replace it (then wait many hours for the rebuild) - you are stressing all the disks in the array during that rebuild, and if it's a large array a second disk can fail as well. Depending on your RAID type this is bad to very bad - the probability of data loss becomes high. I spend a fair amount of time on the QNAP forums helping people out, and double-disk issues are common - although the sample is very biased, as people generally only post when they have a problem :)
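
To put a very rough number on that rebuild risk, here is the classic unrecoverable-read-error back-of-envelope. The 1-in-10^14-bits URE figure is a typical consumer-drive datasheet spec and an assumption here, not a measurement of any particular array - NAS/enterprise drives are usually rated ten times better, and RAID6 can ride out a URE during a single-disk rebuild where RAID5 cannot:

```python
# Very rough sketch of the classic "hit a read error during rebuild" estimate.
# ure_per_bit = 1e-14 is a typical consumer-drive datasheet spec (an assumption);
# NAS/enterprise drives are usually rated 1e-15.
import math

def p_ure_during_rebuild(surviving_disks, disk_tb, ure_per_bit=1e-14):
    bits_read = surviving_disks * disk_tb * 1e12 * 8   # every surviving disk is read in full
    # P(at least one URE) = 1 - (1 - ure)^bits, computed stably with log1p/expm1
    return -math.expm1(bits_read * math.log1p(-ure_per_bit))

for disks, tb in ((8, 2), (12, 4)):
    p = p_ure_during_rebuild(disks - 1, tb)
    print(f"{disks} x {tb}TB array: ~{p:.0%} chance of at least one URE during a one-disk rebuild")
```

It's a crude model (it assumes the datasheet spec and independent errors), but it shows why big single-parity arrays make people nervous.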

 

So why am I moving away, to some degree, from RAID to DrivePool? If a disk in your pool shows issues, the other disks (and your data) are not affected - even if the disk completely dies with no possibility of recovery. Depending on your duplication settings, you just replace the disk and the lost data is re-duplicated onto the new drive if needed, or it remains duplicated on the other drives (which happens as soon as the disk dies - the pool "simply" self-heals). There's more to it than that, but you should get the gist.

 

A couple of other things I like are the ability to still access the individual disks to get files directly if necessary, and that should a controller die I can replace it with anything Windows can recognise. Also, I can have as many disks as I want in any pool - some RAID solutions limit the number per array, so watch for that as well - that can be an expensive lesson.

 

I also spent a long time researching this - choosing a motherboard with lots of high-speed PCIe slots (difficult to find, actually), working out how I could reuse as much of my existing kit as possible, and ending up with a flexible system I can reconfigure (you never get it right the first time) without waiting hours/days for RAID syncing/rebuilding, only to have a new disk fail on you in the first couple of hours - been there, done that!

 

As you can probably tell, I have spent a lot of time on this - and on this post! - lol

 

Have fun, and I hope you find the above useful.


colejack

If you're not wanting to mess with hardware much, then I would just recommend getting an 8-bay Synology (Link) with some 4-6TB drives for storage and an i5/i7 NUC (Link) to run Emby.

 

This would be the easiest way to have a good setup without having to build anything or deal with large boxes (i.e. rack-mount cases, large desktop cases).


MSattler

Build two servers. For storage, build an unRAID server as mentioned before. This doesn't need a ton of memory or CPU power.

 

Next, get a newest-gen Intel chipset, i5 or i7, so that you can offload transcoding to the GPU via Quick Sync.

 

While you could run an Emby VM on the unRAID server, as far as I know offloading transcoding to the GPU will not work.


  • 8 months later...

My current setup

 

Emby on an Intel Skull Canyon NUC with 64GB RAM and two M.2 Samsung 950 Pro 500GB drives - at the moment overkill

 

Awesome! Please tell me how you got 64GB of RAM into your Skull Canyon!

I assume you used two 32GB DDR4 SO-DIMMs - where did you find them? What make/model worked?

 

Thx!


  • 2 weeks later...
WhiteGuyTranslating

So, a year later, I pulled the trigger.

 

I noodled around a lot with the various setups that were suggested and decided that, if something failed, I'd rather it be something I know and could blame myself for making a bad decision on. Going 100% with another party's suggestion would leave me stewing about how "I should never have listened to xxxxxxx on the forum." Wanting to blame something or someone else for my own poor choices is not a very admirable trait, but at least I know I'm inclined to do it!

 

So, my final setup is not really my final setup. I basically went with what will work in my current home and what I plan to have in my next home. So, without further ado:

 

With all this together, and after doing the initialization (which I forgot to do at the RAID-card level and did at the Windows level instead, so I had to do it all over again - oops), this rig just flies! It's extremely responsive on my clients, and transcoding is finally working the way I imagine the devs expect it to work. My old system barely worked when I had a transcoding session going alongside anything else (like accessing the web interface). It was an old AMD dual core from 2011. This i5 system just screams through anything I throw at it.

 

I know that a lot of the guys on here who suggested storage options were not fans of RAID5. I am not really either, as I had a RAID5 array fail, but that was my own fault. I got overzealous, added 4 new drives to the array, and hit rebuild to expand it. I think I did something else to add yet another issue to the mix, and it crashed. I have also had a bad drive once, but I was able to fix that as intended by removing the drive and adding a good one. But yes, there are more optimized ways to make a large pool of disks for less cost and stress. I did a lot of reading on those various types and came to the conclusion that it really comes down to operator experience and comfort. I know RAID5, so I'm sticking with it. That said, my next plan (in another house, where I can have a dedicated server room) is to get a proper 4U server case and house a serious collection of HDDs. My LSI (Avago now) card can have 64 drives attached to it, so my plan is to save up and prepare to spend about $2000 USD on a nice case with a host bus adapter built into the backplane, so it's just click and plug and I'm up to massive storage amounts. With that move, I am also looking to go to RAID6 for added security, with a hot spare.

 

This current iteration of home server took about 7 years to get to; I am hoping my grand plan will take much less time.

 

Now, as I said, the system has surpassed my expectations. It actually makes my normal rig feel a bit sluggish, which is surprising considering the effort and $$$ dropped into that one every year or so. I think the key is the M.2 SSDs. What I really like (which was a fantastic suggestion - though I can't remember who specifically said it, sorry) is having a scratch drive for the transcoding and metadata. With the bandwidth split so nicely over the PCIe lanes, transcoding a movie from 1080p in an MKV container to a Windows Phone is almost instantaneous - something I never would have attempted on my old system!

 

I have noticed two things about this new setup.

 

First, heat! The controller card runs really, really hot. I have put a 120mm fan on it (excessive?) to cool it off, and now I can touch it again without getting that shock and surprise of pain. I could not believe, when I first had the case open, how hot that LSI card got! Then, after cooling that part down, the BPX drives attached to the PCIe M.2 slots were reading 60C at idle, so I needed to look into that. I found some copper heatsinks that you can attach with thermal glue/tape to the chips, and that has brought the temperature down to 42C at idle. I added a 100mm gooseneck fan just to make sure some air current makes it from the front of the case. The rest of the case, even the CPU, does not get over 39C, so I'm not too worried about the overall cooling - it's just some spot cooling. I'm going to keep an eye on it from time to time, during transcoding mainly.

 

The second thing, which admittedly has me totally confused, is the optical drive. The LG drive will not show up in my two ripping programs (MakeMKV and MagicDVDRipper). I checked the BIOS, thinking the drive was set up as AHCI instead of SATA, but there is no option to switch it to SATA. The Gigabyte board has another option (Intel Rest-something, I think), but that option killed the ability to boot from the M.2 drives, which seems odd as they are supposed to be on a separate channel from the SATA ports. Oh well. After attempting changes, and basically resetting back to defaults, I (for whatever reason) tried to see if the programs recognized the movie in the drive. Sure enough, the Blu-ray showed up, could be read, and could be ripped. Hooray! Even a blind pig finds a truffle from time to time (I know, pigs find truffles by smell, so I don't get that saying either)! However, that was when I had the server directly connected to a monitor. When I disconnected the server and put him back in his normal spot, where I work on him via Remote Desktop, the drive stopped working with the programs. It's very odd. I can't check whether something changed in the BIOS, as I'd have to move the server back to the monitor and hook it up.

 

I am wondering, could this be an issue with HDCP? The old drive, in Windows 7, had no trouble ripping through Remote Desktop. Could this new optical drive or OS (Windows 10) be the problem? Does it have to be connected via an HDMI cord to a monitor to activate? I can see and explore the files via Remote Desktop, as before, but neither program mentioned above can recognize the drive. Very confusing.

 

Oh well, if it worked 100% of the time, there would be nothing to tinker with and therefore no hobby! If anyone has an idea of what might be going on, I am all ears. Otherwise, thanks again to the community - your help and knowledge were indispensable!

 


WhiteGuyTranslating

Fixed it! It took writing it down and thinking about it before it clicked in my head. I forgot that (since this is my first Win10 setup) I had kept the admin rights at the second-to-last tier - not wide open, wild west like I do on my Win7 machines, but the level just before it. Right-clicking the program and choosing "Run as administrator" fixed the problem I was having with it not reading the drive correctly.

 

Once again, this forum and support community work wonders!

