
Leaderboard

Popular Content

Showing content with the highest reputation on 05/27/23 in Blog Comments

  1. Unfortunately, my original advisory text was softened because it would have scared people. I said it MUST scare people, because this is a serious and scary situation that needs to be understood in its full extent: if Emby Server has been shut down, the system is COMPROMISED! What matters most in this situation is understanding what that means, what the consequences are, and which steps are required. Getting Emby Server to start again is the very last and least of the problems in that case. Unfortunately, it now appears as if we just wanted to bother people by making them go through a complicated procedure to start Emby again, while the really important steps look like an optional bonus task that can be skipped and skimmed over like the typical security boilerplate in the instruction manual for a kitchen gadget. So once again, clearly: when your system is COMPROMISED, your task is not getting Emby to start again. You have a lot of big things to do and consider. Restarting Emby is just a tiny bit at the very end, and only if you decide not to re-install the system.
    6 points
  2. GENERAL NOTE - PLEASE READ Guys, I know that there are a lot of things still unexplained. This has been an incredibly tough effort for all of us; I had no sleep for days until Thursday and I'm still pretty worn down. I'll try to answer all questions over the next few days, but the responses need to be prioritized so that we are helping affected users get going again first. There are answers to all the questions you have, though, and some might be surprising and might make things that some are criticizing appear in a different light. Thanks for your understanding.
    6 points
  3. While I do agree that the specifics of the attack chain could have been made clearer, I don't agree with you that this is typical of the industry in vulnerability disclosures. As the level of infiltration in this attack required a multi-stage chain, first exploiting Emby and then leveraging the scripting plugin to escalate privileges and potentially carry out more damaging attacks, the level of disclosure here, and the steps taken to actually shut down the offending service, are some of the most high-handed tactics I've ever seen. The fact that the actual vulnerability was reported and then ignored for over three years is both embarrassing and quite upsetting; but the response since the team realized it was being exploited in the wild has actually been okay.

     Emby is never going to know the extent of the damage caused by this particular attack. But I have been trying to get across in my previous responses that this is the worst type of vulnerability that can exist. I haven't looked up the CVE they reported for this, but it should be scored a 9.9 or 10.0. This is the WORST thing that can happen: a full RCE that can be scanned for, and likely is being scanned for, on the internet. The entire attack chain can be automated at this point, meaning someone can run a program, without interacting with it, that detects a vulnerable Emby instance and then runs commands in succession to gain control of your system and automatically add it to an ever-growing botnet of systems fully under the control of one user.

     Again, EVERYTHING is at risk here. Emby will never have the full scope of impacted assets, and even if they do, they aren't going to share it, because it doesn't serve any purpose. If your server was impacted, reset every password that the system had access to and rebuild the system where it was installed. That is the only way you can be mostly confident that you have addressed this. (A minimal sketch for checking your own server's version follows this comment.)
    2 points
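     The comment above argues that vulnerable instances can be found by automated scanning. As a defensive counterpart, here is a minimal sketch for checking your own server: it queries Emby's unauthenticated /System/Info/Public endpoint and compares the reported version against a patched release. The host/port placeholder and the choice of 4.7.12 as the first fixed version are assumptions; verify the actual fixed version against the official advisory.

         # Self-check sketch (Python), assuming 4.7.12 is the first patched release.
         # EMBY_URL is a placeholder for your own server; don't scan hosts you don't own.
         import json
         import urllib.request

         EMBY_URL = "http://127.0.0.1:8096"  # placeholder: your own server
         PATCHED = (4, 7, 12)                # assumed first fixed release

         def version_tuple(s):
             # "4.7.11.0" -> (4, 7, 11, 0) so versions compare numerically
             return tuple(int(p) for p in s.split(".") if p.isdigit())

         # /System/Info/Public requires no authentication and returns JSON
         # that includes a "Version" field.
         with urllib.request.urlopen(EMBY_URL + "/System/Info/Public") as resp:
             info = json.load(resp)

         version = info.get("Version", "0")
         if version_tuple(version) < PATCHED:
             print("WARNING: Emby", version, "predates the assumed fix - update and audit the system.")
         else:
             print("Emby", version, "reports a version at or above the assumed fix.")

     Note that a version at or above the fix only means the hole is closed going forward; it says nothing about whether the server was already compromised, which is exactly the comment's point.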
  4. Ok, so let's get this straight, because the more I think about it the more it irks me:

     1) The vulnerability was KNOWN, REPORTED AND ACKNOWLEDGED by the Emby team for 3 years. It was out in the open in this very forum for 3 years, and they did not patch it until someone just decided to mess things up. Jellyfin, their free open-source competitor, patched this a long, long, long time ago.

     2) They had to choose to force an update onto clients that would break their setups just to do damage control, which in and of itself is a big issue, because tons of users (and paid customers, might I add) have no idea how to fix this themselves, as we can see in the posts here. Don't get me wrong, I get the lesser-evil argument, but I honestly don't like the idea of the devs being able to decide to shut down my system.

     3) As of today, no e-mail was sent, no communication other than the security update (which you can easily miss if you are not checking), the forum post (again, I don't visit the website every day) and the broken setup (which you can easily miss if you are not using Emby daily, if you are on holidays, etc.). There might be people out there with compromised systems and credentials who have not found out yet because they are not regular users, or systems that are currently turned off but will come back up running an outdated and still-insecure version of Emby.

     4) Again, and this is crazy, they frame this as a good thing: "We saved you from the evil botnets!" No! You had a huge vulnerability, exposed out in the open on your forums, go unpatched for 3 YEARS. I have not seen anyone from the dev team either apologize or explain this. A post explaining how this happened and what steps will be taken so that security vulnerabilities are taken seriously AND patched is definitely in order.

     The issue is not that there was a security vulnerability, that's just the cost of developing software, but that after it was disclosed it was not fixed (again, 3 years, I can't state this enough times), it was mishandled, and it was not properly disclosed to ALL users; instead, they were more worried about putting their own spin on the news than about actually alerting everyone that their systems might be compromised. PLEASE, send a damn e-mail letting people know what happened. NOT EVERY EMBY USER MIGHT KNOW, AND THERE ARE PROBABLY VULNERABLE, UNPATCHED SYSTEMS OUT THERE WITH PEOPLE'S DATA.
    2 points
  5. That's pretty much on point. The theory about passwords being used elsewhere is nonsense. Like @ember1205 said: when there's no e-mail address associated, there's no point in trying those credentials elsewhere. The way they are forwarding those credentials tells us more: it includes all the other login details like device id, device name, internal and external IP addresses, etc. That's exactly the information needed to gain access to the same server again at a later time, for example after the vulnerability has been fixed and the original way in no longer works, or after security has been tightened, passwords changed, etc. It's their re-entry ticket to the server. (Though maybe they are also testing whether these credentials are identical to OS credentials; that's realistic.)
    1 point
  6. And which sites would those be? Or is the expectation that the hackers will just randomly try my user/password combo on every site on the Internet? I don't have an email address tied to my accounts, so even requesting a password reset to at least find out whether I have an account on a given site simply isn't an option.
    1 point
  7. Howdy everyone, I'm late to post here, but I have been keeping an eye on this since the 11th. We had someone attempt to gain access on the 11th as well. Most users already had a local password set, so we were okay, but our logs did show someone attempting to access the server by spoofing the local IP 127.0.0.1. One user (with minimum access) was accessed due to having no local password (a new account that was being set up the day prior, so it slipped past my watch), so we deleted that user profile as a safety measure and they had to use a new user name/password. I also no longer list names locally.

     It doesn't look like they came through our domain, so I got a new IP from my ISP as an extra precaution, and I'm rerouting the IP associated with our domain name so users won't even notice the change. After the attempt on the 11th, I also set up fail2ban to work with Emby; it watches local IPs as well and will ban them for invalid attempts, same as global IPs. I would highly recommend folks set up fail2ban (a minimal sketch of such a setup follows this comment). I followed a few of the posts I found in this forum to set mine up and did a few DuckDuckGo searches to get the Cloudflare synchronization working with it as well (in case they ever do come through the domain name in the future). I wish I could post the steps I took, but I had help setting it up, since I only recently jumped ship from Windows a few months ago and I'm still learning.

     Emby team, thanks so much for your quick response & patch. I just updated my server today when I woke up.

     edit: P.S. After a very close, careful examination of our log files, no plugins were added and those .dll files were not installed, thankfully. Zander
    1 point
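     Since the steps weren't posted above, here is a minimal sketch of a fail2ban jail for Emby on Linux. The log path and, especially, the failregex are assumptions based on the typical shape of Emby's authentication-failure log lines (8096/8920 are Emby's standard HTTP/HTTPS ports); test the pattern against your own embyserver.txt with fail2ban-regex before enabling it.

         # /etc/fail2ban/filter.d/emby.conf -- assumed log line shape:
         # "... AUTH-ERROR: ... <ip> ..."; verify against your actual log first.
         [Definition]
         failregex = AUTH-ERROR.*<HOST>
         ignoreregex =

         # /etc/fail2ban/jail.d/emby.local -- jail definition (paths/ports assumed)
         [emby]
         enabled  = true
         filter   = emby
         port     = 8096,8920
         logpath  = /var/lib/emby/logs/embyserver.txt
         maxretry = 3
         findtime = 600
         bantime  = 3600

     After placing both files, fail2ban-client reload followed by fail2ban-client status emby should show the jail active. Banning spoofed local addresses, as described in the comment, additionally requires leaving 127.0.0.1 out of fail2ban's default ignoreip whitelist; the retry and ban times above are arbitrary starting points, not recommendations.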
  8. The "insecure configuration" is a huge convenience. From what I understand, that is the feature that allows to bypass the password on your local network. I, and probably many others do this because who wants to type in a password through an on-screen keyboard every time they select themselves to login? Passwords aren't supposed to be easy or reused either. I certainly don't want to type in a 12 digit complex password just to switch between me and the wife's account. The steps they took seemed reasonable to me. Shutting down your server (in most cases they shutdown someone's docker container) doesn't stop you from turning it back on. It was done to get your attention, and it did. We know there are people who don't update anything for months at a time, and if they didn't do that (and let it spread further), the forum would have been full of people saying they should have done more after they found all their media deleted one day... Like I said, I'm not saying it isn't frustrating, but I think they did pretty good at shutting this down. Let's not forget that many much larger (and better funded) companies have had hacks, so they can't "just plug holes" to fix everything. It doesn't work that way in the connected world. Bugs upstream, convenience vs. security, bad user practices... It all plays a role. They could make the servers "bulletproof", but you wouldn't be able to remotely access your media either. I can remember when Kronos, who does payroll for half the world, had their entire infrastructure encrypted with ransomware making people go back to punch cards for six months. I'm just reminding people that this isn't a huge corporation, but they handled it better than many with more resources do in similar situations. We should all update, review our server security, and sit back and watch a movie...
    1 point
  9. I would add that with all the "cyber-issues" out there in the world, the Emby team did a great job getting the word out on this and fixing it quickly. It's easy to get "upset" about this kind of stuff, but don't forget that software is complicated, and all we can do is react to new threats as they happen. They did that. Just practice good security measures, and keep on truckin'... I woke up to a notification that an update was available, and done... Thanks guys!
    1 point
  10. I for one appreciate how quickly you acted on this hack. I run Emby in Docker (on Unraid); nothing appears to have been compromised and I didn't see any unusual activity in Emby, but as your blog suggested, I changed everyone's passwords as a precaution. Thank you for being diligent.
    1 point
  11. My apologies. I've been working more than two days non-stop on this matter, and I guess it's time for a rest.
    1 point