How to Protect Yourself From Cryptovirus Attacks
We are nearing the two-year mark since the introduction of the incredibly successful CryptoLocker virus. In the fourth quarter of 2013 alone, over 41,000 bitcoins (about 27 million USD) passed through CryptoLocker accounts, and the cost of the damage done to those who refused to pay the ransom is anyone's guess.

Although the CryptoLocker botnet and its operators were finally shut down in June 2014, and a public decryption service now exists for victims to recover their files, newer viruses have only improved on CryptoLocker's techniques and, as of late 2014, continue to succeed both at encrypting data on personal and business computers and at extorting ransom payments.
 
 

Once an attack is underway, a backup is your only solution

With CryptoDefense, and with the newer and more successful CryptoWall, what makes these viruses so dangerous is not just how quickly they traverse file systems and encrypt important data, but how well their authors understand end users. They send an email with an attachment that is just believable enough for the user to open it. Then they delete any shadow copies and system restore points they find on the system, because they know that is the first place you would go looking for unharmed copies of your files.

After that, the virus begins encrypting your files, and since the attackers are counting on you paying the ransom, it leaves behind text files and web links containing instructions on how to get the data decrypted. It leaves these files in every folder it touches, so a simple file system search is all it takes to survey the damage caused by a cryptovirus crawling your file system.

Compared to how long it takes for an end user to properly react to the infection and get the necessary personnel involved, the malicious encryption process is both fast and quiet. Unless your users spend a lot of time browsing the file system, they aren't likely to catch a cryptovirus in-flight. By the time your phone is ringing, you can safely assume your customer's system has been thoroughly encrypted. Most Nordic Backup customers who reach out to us for a recovery are shocked to learn that the virus had been encrypting data for several days or weeks without them noticing.

For many users, the infection persists longer than the retention period configured for their backups, which means that, just like the live file system, every version held in the backup is already encrypted with a key they would have to buy in order to get their files restored.

Many of you may be familiar with an image like this one; you might even get the same kicked-in-the-stomach feeling just by looking at it:

This is a screenshot of a search on a file system that was not completely encrypted. Still, you can see that each folder gets the decrypt instructions in three formats: TXT, HTML, and URL. At 3 instruction files per folder, 1719 items means 1719 / 3 = 573 folders were encrypted before the infected computer was kicked off the server.



That ended up being a ~600 GB recovery, and it took over 3 hours to restore from a local backup.
[Screenshot]
You can see the decrypt instructions are dated June 10, 2014. That is how far back in their retention (the recovery took place September 16) we had to go to get some of the data! Do you know how your file retention is configured right now? Go ahead and double-check it if you don't know.
 
 

How to Set Up Storage Policies to Protect Your Server

For the first part of this guide, take it as a given that client computers are inevitably going to get infected, and that additional layers of protection need to be configured on the server, independent of whatever security strategy you have in place on the clients. Malicious e-mail attachments are CryptoWall's most successful attack vector, but not its only one.

Infected advertisements on news and social media sites, drive-by downloads, and browser exploits are among the strategies repeatedly used by cryptoviruses to get executed on client computers. You can't babysit all your users 24/7, and you can't single-handedly harden all the software they use against attacks. You have to accept that infections will happen from time to time, and the next step is to come up with a game plan for responding to that inevitability quickly and effectively.

So what do we actually do about this? We have a virus that comes in a convincing email (and through easier vectors than that), it is easy to execute, it quickly encrypts files, and it can conceal itself from typical end users. Unless your users are seasoned crypto-victims, they simply will not have the suspicious and vigilant frame of mind they need to catch a cryptovirus on their own, and for many reasons, you can't even depend on end users to notify you of the infection fast enough.

For this reason, and with special consideration for a possible new wave of crypto attacks next year, we strongly suggest that all of our partners review the retention period configured for all their customers' file backups and increase it to 60 days or more if they have not already done so.

The long retention period serves as a buffer for slow-reacting office staff who may allow the virus to persist for extended periods. Beyond that, we also need a sure-fire way for the system to alert us immediately to an infection.

Fortunately, Windows Server comes with sufficient tools built into its management console for us to configure our own file-level triggers and alerts, and we will provide some examples for configuring that. If you are using Server 2003 R2 or newer, you can easily protect files on your server from CryptoWall and CryptoDefense using Windows Server auditing tools.

If you are using Windows XP, 7, or 8 as a basic file server in a workgroup environment, you should consider moving to a server OS so that you can react and respond to modern threats against which conventional antivirus and anti-malware scanners are ineffective. A successful attacker has thoroughly tested their work against anti-malware tools before the attack.

Once again, you will need a Windows Server, 2003 R2 or newer, with the File Services role installed.
 
 

Step 1: Windows File Server Resource Manager

In Server Manager, you can expand the Roles tree to navigate to File Server Resource Manager, or you can just open fsrm.msc from the Run dialog or command prompt and that will bring up the snap-in for File Server Resource Manager.

With FSRM open, we will look under File Screening Management, and then File Groups.

You will see some sample groups already provided by Microsoft. These samples would work if you wanted to use the auditing features to record what file types your users are storing in whatever locations you wish to audit.

In the case of cryptoviruses, we aren't screening for specific file extensions, just specific file names.
[Screenshot]


Step 2: Create a File Group

Create a new File Group from the Actions bar, or by right-clicking the white space, and fill in the properties as shown:




We will call this group cryptovirus and the files we are looking for are called DECRYPT_INSTRUCTION and HOW_DECRYPT. Currently those are the two instruction filename conventions used by CryptoWall and CryptoDefense.

Click OK and the cryptovirus file group is ready.
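If you prefer to script this step, FSRM also ships with a command-line tool, filescrn.exe, that can create the same group. The line below is only a sketch: the wildcard patterns are our assumption about how the group should match, so verify the exact switches with filescrn filegroup add /? on your server.

filescrn filegroup add /filegroup:cryptovirus /members:"DECRYPT_INSTRUCTION*|HOW_DECRYPT*"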

[Screenshot]

  Note: When a new cryptovirus comes out in the future, it's almost certain that it will not behave identically to CryptoWall, and you will want to review this guide and add additional filenames to the cryptovirus file group. We might even need to screen for different behavior altogether. This guide is not definitive. If this auditing technique becomes widespread, attackers are going to carry out their attacks differently.

As long as they have the advantage of striking first, we don't necessarily know how they will approach us, but we can assume they will keep trying new things and there will be future attacks. This method is not a guaranteed cure for cryptovirus infections. In fact, we believe it's very likely that you WILL get some kind of infection in the future, and a backup solution with long retention is your only way out of that scenario. This is simply a guide to demonstrate a Windows Server feature you can use to prevent getting the same infection twice, or to prevent getting the infections outlined in our examples. This guide is not a cure, it's a template to make your efforts in an ongoing struggle more effective.



Step 3: Create a File Screen

From the navigation bar, hit File Screens.

Create a new file screen and you'll see a box like this.

First, we will give it a path, and the file screen will audit everything under that path.

You can choose an entire drive letter like E:, or any location on the server where your users have mount points and are able to write from their workstations. CryptoWall encrypts local disks and all mapped drive letters, so you want to be sure that all the storage on the server that is accessible by a user is included in the file screen.







After this, select the radio button to "Define custom file screen properties," and click on the Custom Properties button:
[Screenshot]


In this section, we change the radio button from Active Screening to Passive Screening, and we enable the check box that corresponds to the cryptovirus file group.

Some of you may be wondering, "Why not use Active Screening? It says right there it will not allow users to save unauthorized files." However, the active screen will only block the creation of the instruction files; it will not stop the virus from encrypting 2014_Financials.xlsx or some other file that already exists on the server.

Neither Active nor Passive screening can protect your files from encryption, but Passive screening will at least permit the creation of the decrypt instruction files, which provides the evidence needed to survey the outbreak.

With Passive screening selected along with the cryptovirus file group, all the necessary triggers are configured for the server to respond to.

On the File Screen Properties box, the four other tabs (E-Mail Message, Event Log, Command, and Report) provide options to configure the default action taken when the trigger condition is met (i.e., the creation of a file matching the names listed in the cryptovirus file group).

[Screenshot]


Step 4: Create and Test E-mail Notifications

Two of these tabs are essential for providing effective notifications and for taking the action needed to suspend and contain any malicious encryption activity.

You can set it up to write to the event log if you want, but we also need something more in-your-face. We'll start with the E-mail Message tab:

This is the simplest example. The domain administrator will receive an email that says where the infection occurred and who the responsible user is. You can customize the email however you like with a number of options for variables to insert.
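For illustration, a message along these lines works well. The bracketed placeholders follow FSRM's built-in notification variables; double-check the exact names against the variable list in the E-mail Message tab before you rely on them.

Subject: Cryptovirus activity detected on [Server]
Body: User [Source Io Owner] created [Source File Path] under the protected path [File Screen Path]. This filename matches the cryptovirus file group, so assume files on this server and on the user's workstation are actively being encrypted.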

You will want to consider notifying only the administrator and select personnel, not the responsible user. Users do crazy things when they know they are being monitored, and they might attempt to hide the infection or escape responsibility, which would only give the virus more time to work on their local system (more on that later).



[Screenshot]



Once you are happy with how your e-mail notifications are configured, we need to double-check and test the e-mail SMTP settings for the FSRM service.

Click the very top item of the navigation bar, which should say File Server Resource Manager (local). With that selected, hit Configure Options... from the Actions bar on the right. This brings up the basic options screen for all FSRM services, and it opens to the Email Notifications tab, which is the configuration we need to check.

First, fill in the correct information for your SMTP server and the accounts you want to use. Once you click Send Test E-mail, you will probably get an error.

This is because the FSRM service (SrmSvc) runs as a local system account, and the file server's computer account does not have permission to send mail as FSRM. To fix this, you can either run SrmSvc as a real user and use that user's mailbox, or you can give the server's computer account an Active Directory extended right called "Send-As" on the FSRM mailbox by running the following command in your Exchange management shell:
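As an illustration, the command takes roughly the following form. The mailbox alias (fsrm) and the computer account (DOMAIN\FILESERVER$) are placeholders you will need to replace with the values from your own environment:

Add-ADPermission -Identity "fsrm" -User "DOMAIN\FILESERVER$" -ExtendedRights "Send-As"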
[Screenshot]




Just make sure that you are using the right DOMAIN\ServerName when you submit your command, and keep using the Send Test E-Mail feature until you are happy with how your notifications work. If you have trouble setting up FSRM e-mails, each failed test writes a more detailed error to the event log for you to investigate.

If your file server is a standalone server and you are not using Exchange, you can set up a virtual SMTP service on the file server and rig it up to Gmail or some other service for the purpose of sending out the e-mail notifications. Click here to check out the virtual SMTP server guide.

If you go for this solution, you'll need to install the IIS role, and the SMTP server used for FSRM notifications would of course be configured as 127.0.0.1.

With the e-mail configuration done, we can move on to the Command tab, which is the simplest part of the guide.
 
 

Step 5: Write an Action Script

We already have our file screen set up, and it is already configured to notify the administrator. So now what? Well, it depends on your server, your users, and your environment, but you will absolutely want some kind of default action to be taken by the server in order to shut the infection down immediately.

The command tab allows you to run a single command or a script, with certain levels of permissions. It's a shame they do not provide a script editor here with variables like the email message editor, because then we could do some truly surgical stuff.

However, a simple shell script will still allow us to protect the storage on the server once we get the trigger from the file screen.

In the simplest case, a conventional file server with users on workstations mapping network drives to the server, our action script at C:\cryptovirus_response.cmd is very short.
Let's take a look at what that script does:
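A minimal sketch of such a script, matching the description that follows, would look like this:

@echo off
REM cryptovirus_response.cmd -- sketch for a conventional file server

REM Stop the "Server" service (LanmanServer) so every file and printer share
REM on this machine goes offline immediately; /y also stops dependent services.
net stop LanmanServer /y

REM Disable the service so file sharing does not come back after a reboot.
sc config LanmanServer start= disabled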
 
Yes, that's all it does. We stop LanmanServer, which is the "Server" service, so all file and printer sharing on the server shuts down. After that, we disable the service, so that when a user attempts to reboot the server to escape a costly tech service call (and this will happen), they find that their mapped storage is still unavailable because file sharing did not restart with the system.

At this point they have to get hold of the administrator, who should have received the email notification that an infection occurred, and wait for the admin to come in and explain that file sharing stays disabled until it has been established that it is safe for users to reconnect.

With LanmanServer shut down, the only way to get the virus onto the server at this point would be to log on to the server itself and repeat the behavior that led to the infection, and no one could really do that, accidentally or deliberately.

No users should have any access to a console session on the server, so they should not be able to get that far. Just in case, though, we can remove the drive letter from E: with the mountvol command, so anyone who does get onto the server won't see the drive and isn't likely to work out what happened to it. Of course, use discretion if you choose to add that last line to the script. Use discretion in all your scripts. We all know of office managers who like to exercise their admin rights, even if to their own chagrin.
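For reference, and assuming the shared storage lives on E: as in the earlier example, that optional last line would be something like:

REM Remove the E: drive letter; the volume and its data stay intact and can be
REM remounted later from Disk Management or with mountvol and the volume GUID.
mountvol E:\ /D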

But what if you have a terminal server and no file server?

That's a risk factor too because there could be critical data on the terminal server without any mapped storage. Some of it might be on the C drive, and dismounting C won't do you much good.

In that case, you would want your file screen to contain ALL local and remote drives, and your cryptovirus_response.cmd script to contain these lines instead:
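A sketch of that variant, reconstructed from the description that follows rather than copied from any particular environment, might look like this:

@echo off
REM cryptovirus_response.cmd -- terminal server variant (sketch)

REM Log off every session currently listed as Active, including the infected user's.
for /f "tokens=3" %%s in ('query session ^| findstr /i /c:" Active"') do logoff %%s

REM Stop Remote Desktop Services so no one can log back in...
net stop TermService /y

REM ...and disable it so it stays down if someone reboots the server.
sc config TermService start= disabled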



As long as we can assume the virus was executed by a user, this script logs everybody out and then shuts down Remote Desktop Services, so no users can log back in. With TermService shut down, only an Administrator can get on the server, and only through a physical console session (or LogMeIn or something like that). Alternatively, you can restart the service from a remote console; as long as you are acting as a domain admin, you can run
sc \\TermServer01 start TermService

In either case, once an admin has isolated the culprit user, terminal, or workstation, assessed the damage (if any) done by the infection, and determined that it is safe for everyone to reconnect, you can re-enable access to the server and resume normal operations.

Feel free to create a file named DECRYPT_INSTRUCTION.TXT and drop it somewhere under your file screen path to see how the server reacts to the introduction of a file that matches the cryptovirus file group. The notification email and script commands should all be carried out instantly.
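For example, from a workstation with a drive mapped to the protected path (X: and the folder name here are just placeholders), you can run:

echo test > X:\Shared\DECRYPT_INSTRUCTION.TXT

Just be prepared to re-enable and restart LanmanServer (or TermService) afterwards, because your own action script will fire.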
 
 
 

How to Set Up Group Policies for Handling EXE Files to Protect Your Clients

With FSRM set up using the steps above, we can be sure that it's keeping a close eye on our centralized storage and it's watching out for infections that match the behavior we provided to the cryptovirus file group. Now we can briefly go over some strategies that can stop the virus from ever executing on your clients in the first place.

We will look at Software Restriction Policies first. They can be set up per computer (Windows Pro, Enterprise, and Ultimate editions only) using the Local Security Policy editor (run secpol.msc to bring up the policy editor on the client), or, if your clients are all joined to your domain, you can configure the same policy for all computers through a Group Policy Object. This guide will cover the latter scenario.

Step 1: Create a GPO

Get on your domain controller and open Group Policy Management. Expand the navigation tree until you find the branch for your domain or OU. Right-click on it, hit Create a GPO in this domain, and Link it here..., and give the GPO a name like cryptovirus SRP.

Now there will be a GPO listed under your domain with the name you provided. Right-click on your new GPO and hit Edit.... This will bring up the policy editor for that GPO.


[Screenshot]


With the policy editor open, expand the navigation tree as shown until you have Software Restriction Policies selected.

If you are following along on a Windows client OS, Software Restriction Policies will be at the outermost level, under Security Settings (like this).

There will be a message about no policies being defined. Right-click on Software Restriction Policies and hit New Software Restriction Policies, and you will see some sample rules created under this folder.
[Screenshot]


Step 2: Create Path Rules

Now you can expand Software Restriction Policies, go into the folder called Additional Rules, right-click on the white space and select New Path Rule....

[Screenshot]



 
We will create our first path rule as shown:

Path: %AppData%\*.exe
Security Level: Disallowed
Description: Don't allow ANY executables to run from %AppData%.

This will be the first of many path rules that do not allow exe files to run from the path that is given. Some admins may consider it too drastic or restrictive to forbid any exe files to run under AppData because of the strict implications this has for other programs.

Safe applications would not be able to run from there, some applications may not be able to auto-update themselves, and compressed installers that extract their content to AppData will not be able to run the install from there either.

In those cases, the user or admin would have to extract the installation files to a folder created outside of AppData in order to run them.

The decision to trade convenience for security is left with each admin. For many clients, we believe it is not safe to let exe files run from AppData for any reason. See below for creating exceptions.

 
[Screenshot]


Create more path rules just like the first one, each with the following settings:

Path: %UserProfile%\Local Settings\*.exe
Security Level: Disallowed
Description: Don't allow ANY executables to run from AppData (Win2k/WinXP/2003).

Path: %LocalAppData%\*.exe
Security Level: Disallowed
Description: Don't allow ANY executables to run from %LocalAppData%.

Path: %AppData%\*\*.exe
Security Level: Disallowed
Description: Don't allow ANY executables to run from a subfolder of %AppData%.

Path: %UserProfile%\Local Settings\*\*.exe
Security Level: Disallowed
Description: Don't allow ANY executables to run from a subfolder of Local AppData (Win2k/WinXP/2003).

Path: %LocalAppData%\*\*.exe
Security Level: Disallowed
Description: Don't allow ANY executables to run from a subfolder of %LocalAppData%.

Path: %UserProfile%\Local Settings\Temp\*.zip\*.exe
Security Level: Disallowed
Description: Don't allow ANY executables to run from archives opened by the Windows Zip handler. Extract these files manually in order to run (Win2k/WinXP/2003).

Path: %LocalAppData%\Temp\*.zip\*.exe
Security Level: Disallowed
Description: Don't allow ANY executables to run from archives opened by the Windows Zip handler. Extract these files manually in order to run.

Path: %UserProfile%\Local Settings\Temp\Rar*\*.exe
Security Level: Disallowed
Description: Don't allow ANY executables to run from archives auto-extracted by WinRAR. Extract these files manually in order to run (Win2k/WinXP/2003).

Path: %LocalAppData%\Temp\Rar*\*.exe
Security Level: Disallowed
Description: Don't allow ANY executables to run from archives auto-extracted by WinRAR. Extract these files manually in order to run.

Path: %UserProfile%\Local Settings\Temp\7z*\*.exe
Security Level: Disallowed
Description: Don't allow ANY executables to run from archives auto-extracted by 7-Zip. Extract these files manually in order to run (Win2k/WinXP/2003).

Path: %LocalAppData%\Temp\7z*\*.exe
Security Level: Disallowed
Description: Don't allow ANY executables to run from archives auto-extracted by 7-Zip. Extract these files manually in order to run.

Path: %UserProfile%\Local Settings\Temp\wz*\*.exe
Security Level: Disallowed
Description: Don't allow ANY executables to run from archives auto-extracted by WinZip. Extract these files manually in order to run (Win2k/WinXP/2003).

Path: %LocalAppData%\Temp\wz*\*.exe
Security Level: Disallowed
Description: Don't allow ANY executables to run from archives auto-extracted by WinZip. Extract these files manually in order to run.

See the pattern to this? We are already assuming that your mail client does not accept .exe attachments. However, your mail client is not likely to do anything about an infected .exe buried in an archive. It is more likely that you would be permitted to open the archive, which may then be auto-extracted and run by your default archive handler.

With these path rules, no exe files can run under AppData, no matter how they came to be created there. Feel free to add more paths to these standard rules if you have users downloading files or opening attachments in unconventional locations.

With all our path rules created, your Additional Rules screen should look like this:

[Screenshot]


Now that exe files aren't allowed to run from these locations, some of your applications may stop working, or at least stop updating.

In that case, you could "whitelist" programs with even more path rules; this time you would provide the path to the .exe file you want to whitelist and set the security level to Unrestricted.
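For example, a hypothetical rule for a program that insists on updating itself from local AppData might look like this (the vendor folder and file name are placeholders):

Path: %LocalAppData%\SomeVendor\Updater.exe
Security Level: Unrestricted
Description: Allow SomeVendor's updater to run from %LocalAppData%.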

Additionally, you can exclude certain user and computer accounts from the policies in this GPO by creating specific security settings for those accounts in the Delegation tab of the GPO settings.

Wrapping Up

Between File Screens and Software Restriction Policies, Windows Server provides flexible, adequate tools to monitor, moderate, and restrict the kind of activity that characterizes certain threats, at least the behavior we have seen so far from recent cryptoviruses.

Of course, by publishing this guide, this part of our strategy is now exposed to the public. Future attacks will almost certainly use a different approach, perhaps even rendering this guide completely obsolete.

For the inevitable scenario in which an attack is eventually successful, restoring data from a good backup is your only solution.

No part of this guide is offered as a solution to replace your backup, but we do acknowledge that restoring backups in order to recover from these attacks can be a time-consuming and costly ordeal, and we want to provide the necessary education for you to protect your users from repeat attacks.

If you do not have a backup solution, you should not expect this guide to afford you any protection at all. New, never-before-seen attacks are right around the corner, and we can't begin to speculate about how they will work or what they will do.

Any readers of this guide who do not have a backup solution may consider this their written invitation to call Nordic Backup for a free consultation or browse our products here. We have affordable, competitive backup plans for everyone, no matter what kind of data or environment you may work with.

The era in which we live is one where threats are constantly lurking and prodding our professional and personal lives. There is no longer any reason or excuse to not have a backup for your important data. Our collective responsibility as educators is to make all of this clear to our friends, colleagues, and customers before they become victims, not after.

Staying on top of these threats is a commitment as well. Keep yourself informed of new virus behavior by subscribing to popular security newsletters. Profile the behavior of new viruses as you read about them, and create signatures that you can add to your file screens before your network is set upon by a new virus.

Please experiment and become familiar with combinations of File Screens and SRPs in your own environment and see what works best for you. Between these two techniques, the auditing possibilities are extensive.

For instance, the supplied SRPs would work great for users on a terminal server, and file screens would work great on a Small Business Server using folder or profile redirection.

Thoughtful consideration of your storage topology and permissions can also provide effective means to contain outbreaks and minimize the damage they can cause. Don't create global writeable shares for all users; adopt a "need to use" philosophy so that only the individuals who need write access to certain shares are granted that access. Segment your shares by department, data type, or staff member. Never "net use" administrative "dollar sign" shares on workstations. Don't mount shares on client machines unless they need to be mounted, and don't grant write permissions to users or computers that don't require them.

These have been well-established best practices for decades, yet in the real world we continue to see even the most basic security techniques take a back seat to convenience, or the customer fail to see the value in paying for additional service hours to implement proven security policies properly. Everyone involved must weigh the inconvenience of a careful setup against the inconvenience of downtime, lost business, opportunity cost, and, of course, the time and money spent reinstalling and reconfiguring a server that was successfully compromised.

We hope you have enjoyed this article and found it to be helpful. Please e-mail us with any questions or feedback.

If you like our guide, please share the link https://nordic-backup.com/guides/how-to-protect-yourself-from-cryptovirus-attacks/. The guide is copyrighted, so please share the link rather than copying it. Please help us get the message out to everyone. Together we may have a chance of cutting off the funding of the criminals behind cryptovirus attacks.
