Building a secure operating environment

In a recent report, the Australian Department of Defence states that 85% of targeted breaches could have been prevented by implementing just four controls (http://www.asd.gov.au/infosec/mitigationstrategies.htm).

Quoting the report:

At least 85% of the targeted cyber intrusions that the Australian Signals Directorate (ASD) responds to could be prevented by following the Top 4 mitigation strategies listed in our Strategies to Mitigate Targeted Cyber Intrusions:

  • use application whitelisting to help prevent malicious software and unapproved programs from running
  • patch applications such as Java, PDF viewers, Flash, web browsers, and Microsoft Office
  • patch operating system vulnerabilities
  • restrict administrative privileges to operating systems and applications based on user duties.

And that is precisely what this book focuses on, with some additional measures that should help you mitigate – I will be bold and say – up to 98% of the attacks you might encounter.

It is essential to properly harden the nodes in your network – servers and endpoints alike. The stricter the policies on what can be used, how, and where, the more difficult it becomes to introduce a malicious element and disrupt the security of your operations.

Establish some ground rules, the first and foremost of which is performing everyday tasks on any OS as a limited account. This applies to all users, but especially to IT administrators.

Policies should be set to prevent the execution and installation of unknown executables/software when running with a limited account. And when running as an admin, the administrative user should be barred from accessing the Internet at the proxy level.
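The proxy-level block is the important part: it lives on the network, where an administrator cannot simply override it on their own machine. As a minimal sketch – assuming a Squid proxy with proxy authentication already configured, and with illustrative account names – the rule can be as simple as:

```
# squid.conf sketch -- account names are illustrative; with an LDAP/AD
# group helper, the same ACL can match a group instead of listed users
acl admin_users proxy_auth jsmith-adm dbrown-adm
http_access deny admin_users
```

Any group-aware proxy or web filter can express the same rule; the mechanism matters less than the placement.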

There should be NO EXCEPTIONS to the above rule, no matter how many riots and complaints you receive. I can confirm from experience that IT admins can get used to working this way – after a while, they not only accept it, but if you do your job and explain the reasons adequately, they will enforce it upon their non-compliant and rebellious peers.

The reason for this is counterintuitive but honest. Users with admin power are considered knowledgeable and experienced – that is why they got administrative rights in the first place, right? Wrong. Administrators are often overconfident and browse the Internet as admins with no clue about the risks. Then some download applications ‘to make their lives easier,’ introducing malicious software into the company. Hence the need to restrict Internet access for administrative accounts, and the need for administrators to work as limited users. If they need to download and execute something, they can download it as a limited account and run it as an administrator.

Rule number two: no external devices allowed whatsoever, except company-issued ones, encrypted and allowed by Device ID, mapped to a user ID. Connecting a smartphone or tablet to a corporate laptop for file transfer should be impossible.
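Granular allow-by-Device-ID control requires endpoint device-control software or Group Policy, but the default-deny half can be sketched with nothing more than the registry. Disabling the Windows USB mass-storage driver (the service name and value below are the standard ones) prevents any storage device from mounting:

```
Windows Registry Editor Version 5.00

; Sketch: set the USB mass-storage driver to "disabled" (Start = 4)
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\USBSTOR]
"Start"=dword:00000004
```

A device-control product then re-enables only the company-issued, encrypted devices, matched by their hardware IDs and mapped to user IDs.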

Code execution from external devices should be forbidden by policy. Copying executable files from an external device to the local drive – if allowed at all – should be immediately detected, an alert sent to the IT administrative team, and a security incident created. Disabling AutoPlay/AutoRun should be a no-brainer and should have been implemented long ago.
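Disabling AutoPlay/AutoRun for every drive type comes down to a single well-known registry value (normally pushed via Group Policy rather than edited by hand):

```
Windows Registry Editor Version 5.00

; Sketch: 0xFF disables AutoRun on all drive types
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\Explorer]
"NoDriveTypeAutoRun"=dword:000000ff
```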

Rule number three: every user’s “My Documents” folder should be on a network share. This enables centralized backup (unless you have other solutions in place) and prevents a “Sony disaster” if your organization is hit by destructive/encrypting malware. If that happens, you will have a backup from which to quickly restore the encrypted files to their original versions.
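In a domain, this is normally done with the Folder Redirection Group Policy, which under the hood repoints the Documents shell folder. As an illustration only – server, share, and user names are hypothetical, and GPO handles the per-user path for you – the per-user registry value looks like this:

```
Windows Registry Editor Version 5.00

; Sketch: repoint "My Documents" to a network share (names are illustrative)
[HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\User Shell Folders]
"Personal"="\\\\fileserver\\home\\jsmith\\Documents"
```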

Rule number four: back up *everything* you can, as often as possible.
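There are plenty of enterprise backup suites, but even a minimal script illustrates the principle: take a full, timestamped copy and never overwrite older ones, so that encrypting malware cannot trivially destroy yesterday’s copy along with today’s files. A sketch in Python (the paths in the usage comment are illustrative):

```python
import shutil
from datetime import datetime
from pathlib import Path

def backup(source: str, dest_root: str) -> Path:
    """Copy the source tree into a new, timestamped folder under dest_root."""
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    target = Path(dest_root) / stamp
    # copytree refuses to overwrite an existing target, so old copies survive
    shutil.copytree(source, target)
    return target

# e.g., scheduled nightly:
# backup(r"C:\Users\jsmith\Documents", r"\\backupserver\backups\jsmith")
```

The timestamped-folder approach is deliberately naive (no deduplication, no incrementals); the point is that restore points accumulate instead of being replaced.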

Rule number five: this one is more of a recommendation – but if you can, build a clone of a standard desktop machine, ready to be logged on to, with a mail client and other applications that configure themselves based on the logged-on user account. Deploy it on Amazon or another cloud vendor, ready to be cloned into as many copies as you need in the event of a disaster. Do the same for critical servers. If a disaster hits your organization, you can quickly power everything on and run from the cloud. Keeping powered-down copies of virtual machines in the cloud, ready to be cloned and powered on, is exceptionally cheap compared to maintaining a cold, warm, or hot site with physical devices.
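With the major cloud vendors this amounts to keeping an up-to-date image and launching from it on demand. As an illustration – assuming AWS, with placeholder image ID and instance type – spinning up a clone is a single call:

```
# Sketch: launch a clone from a prepared image (IDs are placeholders)
aws ec2 run-instances --image-id ami-0123456789abcdef0 \
    --instance-type t3.medium --count 1
```

Until the day you actually need it, you pay only for the stored image, not for compute.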
