The importance of developing secure software is (hopefully) understood, but what about our working practices?
Many, if not most, of us will be familiar with navigating IT restrictions: firewalls, limits on what you can install, or anything that isn’t digitally signed by an approved source being automatically deleted. All these impediments to us getting work done.
Perhaps, like me, you’ve disabled some security measure in the past as a quick way to get a short test running. Or you run things as admin rather than setting up nuanced permissions. Perhaps you’ve used your personal device to read a work file.
Let me introduce CD Projekt Red. On the back of a rather stormy launch of Cyberpunk 2077, they were hacked. From what I gather, their source code was stolen, personal employee data was taken, and machines were encrypted with ransomware.
But this can’t happen to you, right? Well, maybe it could.
A couple of years ago I was working from home using a mixture of my own computers and CCTV cameras, along with work kit on loan. One of my personal devices was compromised and at the time I panicked a little, re-imaged it and moved on… until I realised that the shared drives had been encrypted as well. By being slack about securing my personal devices, I’d potentially exposed a work machine (thankfully the shared files were just installers for tools like Wireshark). A more motivated attacker could have jumped between machines, leveraged my VPN and got into my work network. In other words, it could have been much worse.
One of the popular terms I’ve learnt since becoming a Cyber Champion is “Zero Trust”, as in building a “Zero Trust Architecture”. This means designing solutions on the assumption that your outer layers of security will be compromised, so you secure all communications within your system as well. I could ramble on about this, but I want to stress that it applies not just to what we build, but to how we work.
If an attacker managed to get into one of my work machines, they could steal our source code. That would have IP implications, but it would also let an attacker understand our solutions and hunt for vulnerabilities. Even simply encrypting all of our machines to stop them from working could be huge. Imagine every engineer locked out of their work, unable to push changes to the repo. How much does it cost to have developers sit in the kitchen drinking coffee for a week whilst you try to restore everything?
These types of attacks are very common in some sectors, particularly government organisations (from “city hall” to police to health), but as software developers we’re viable targets as well.
So hopefully I’ve scared you a little. It is quite possible that you could expose your company and cause them massive damage.
However, there are things we can do to protect ourselves.
My workplace uses security solutions that, as engineers, we usually deride for blocking us from working, and sometimes look to work around. But they are important. Once you understand why they are there (see above), it becomes much easier to work alongside them rather than against them.
Firewalls are an important start. All too often, when we’re having communication issues with devices or services on our VMs, we’ll ask “have you tried turning the firewall off?”. If you do this, only do it for a minute to prove whether the firewall rules are the issue, then turn it back on. Machines on your network should only be able to use the protocols and ports that you actually need them to use.
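A safer way to prove the point is to test the one port you care about, rather than dropping the whole shield. Here’s a minimal Python sketch of that kind of check; the host and port are made up for the example, so substitute whatever service you’re actually debugging:

```python
# Minimal sketch: check whether a specific host/port answers before blaming
# (or disabling) the firewall. The host and port below are hypothetical.
import socket

def port_is_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    host, port = "192.168.56.10", 8443   # hypothetical test VM and service port
    if port_is_reachable(host, port):
        print(f"{host}:{port} is reachable - the firewall rules aren't the problem")
    else:
        print(f"{host}:{port} is blocked, or nothing is listening there")
```

If that check fails, you know exactly which rule to go and look at, and the firewall never has to come down.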
As tempting as it can be to download a tool to help with a job (for example, I once downloaded a tool to help me inspect the memory of an application for work), we need to consider the security implications. Could it be doing something malicious? Could an attacker use it to perform a malicious act, either through a vulnerability in the tool itself, or simply because it is a wonderful little utility for an attacker to find lying around? Look at using software that has been approved by your organisation, and uninstall anything non-essential once it has served its purpose.
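If you genuinely do need to fetch something yourself, at the very least check the download against the checksum the vendor publishes before you run it. A minimal sketch; the file name and expected hash below are placeholders, not real values:

```python
# Minimal sketch: verify a downloaded installer against its published SHA-256
# checksum before running it. File name and expected hash are placeholders.
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    installer = Path("handy-memory-tool-setup.exe")   # hypothetical download
    published = "<sha256 value from the vendor's site>"
    actual = sha256_of(installer)
    if actual == published:
        print("Checksum matches the published value")
    else:
        print(f"Checksum mismatch (got {actual}) - do not run this installer")
```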
The other big area where so many of us fall down is passwords. It is well known that a lot of people use things like Admin/Admin1234 or TestUser/Test1234 in test environments. Similarly, when there is a default login like admin/password, many people out there never change it.
I still remember being on a remote support session and, without thinking, entering the default credentials for an application and successfully logging in. Afterwards I was politely told to always ask the customer to enter the credentials themselves, and it was fed back to the customer that they should change their password.
P.S. don’t ship default credentials in your application, or at least force them to be changed after the first login.
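One way to enforce that change is to flag the factory account so it can’t be used for anything until the password is replaced. A rough sketch below, assuming a hypothetical in-memory user record; a real application would use a proper user store and a vetted password-hashing library:

```python
# Rough sketch: force the default credential to be changed on first login.
# The User record and login flow are hypothetical, purely for illustration.
import hashlib
import os
from dataclasses import dataclass

def hash_password(password: str, salt: bytes) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)

@dataclass
class User:
    username: str
    salt: bytes
    password_hash: bytes
    must_change_password: bool = True   # set on the factory-created account

def login(user: User, password: str, new_password: str = "") -> bool:
    if hash_password(password, user.salt) != user.password_hash:
        return False
    if user.must_change_password:
        # Refuse to proceed until the initial credential has been replaced.
        if not new_password or new_password == password:
            raise PermissionError("The initial password must be changed before use")
        user.salt = os.urandom(16)
        user.password_hash = hash_password(new_password, user.salt)
        user.must_change_password = False
    return True
```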
It is important to make sure that every account we create, especially admin accounts, has a strong, unique password. Don’t go replacing Admin1234 with My!W0rkN@m3 for everything on the network. Yes, it is more secure, but if someone got hold of or guessed that one password, they could have untold access to your work’s network.
So how do I remember them all? I do use wikis for some shared resources, but it is better to use a shared password manager that itself has access permissions. I also have my own system for creating passwords, which for obvious reasons I won’t share, but it means I don’t need to remember what my passwords are, only the logic I used to come up with them.
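If you’d rather not invent a system at all, generate a unique random password for each account and let the password manager do the remembering. A minimal sketch using Python’s standard secrets module; the alphabet, length and account names are just illustrative defaults, not a policy:

```python
# Minimal sketch: one unique, random password per account, stored in a
# password manager rather than memorised. Length/alphabet are illustrative.
import secrets
import string

def generate_password(length: int = 20) -> str:
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*-_"
    return "".join(secrets.choice(alphabet) for _ in range(length))

if __name__ == "__main__":
    for account in ("test-vm-admin", "build-agent", "ntp-service"):   # hypothetical accounts
        print(account, generate_password())
```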
The best solution, however, is to use domain accounts. These let us restrict access to machines and also use good, secure passwords. Obviously, being part of a big corporation, we don’t have permission to add short-lived VMs to the company domain and make ourselves admins whenever we want, so what we’ve done is set up our own domain server that has no trust relationship with the main network.
There is another thing we need to consider: access permissions. I doubt I’m alone in running most of my services under a local admin or system account, or reaching for “Run as Administrator” or “sudo” whenever I want stuff to just work. A common example is when your service needs to write to Program Files. As a standard user it will fail, but run it as admin and it works, right? This can be dangerous because of things like “Remote OS Command Injection”, where an attacker leverages a vulnerability to execute a command with whatever privileges your process has, such as formatting a disk or disabling security.
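To make “command injection” concrete, here’s a deliberately simplified sketch of the pattern attackers look for: user-supplied input handed straight to a shell. Whatever gets injected runs with the privileges of your process, so an admin service turns a small bug into a disaster. The ping example and input are made up for illustration:

```python
# Deliberately simplified sketch of OS command injection. Anything injected
# runs with the privileges of this process - admin if you ran it as admin.
import subprocess

def ping_host_unsafe(hostname: str) -> str:
    # BAD: the hostname is pasted into a shell command string, so input like
    # "8.8.8.8 && del /q C:\\important" becomes an extra command.
    return subprocess.run(f"ping -n 1 {hostname}", shell=True,
                          capture_output=True, text=True).stdout

def ping_host_safer(hostname: str) -> str:
    # Better: no shell, arguments passed as a list, so the input can only
    # ever be treated as an argument to ping. (-n is the Windows flag;
    # use -c on Linux.)
    return subprocess.run(["ping", "-n", "1", hostname],
                          capture_output=True, text=True).stdout
```

Least privilege won’t remove the bug, but it does shrink what the injected command is allowed to do.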
To prevent this, it is best to have dedicated accounts for things that need to run with elevated privileges. For example, let’s say you’ve downloaded an NTP service to keep your machines in sync. Rather than running it as admin, the installer may help you set up an account with just the permissions it needs to manage NTP, or you could set up your own account with a bit of Googling.
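In code you can follow the same least-privilege idea: do the one thing that genuinely needs elevation, then drop down to the dedicated account for everything else. A Linux-flavoured sketch, assuming it is started as root and that a low-privilege “ntpsvc” user already exists (the account name is hypothetical):

```python
# Linux-flavoured sketch: bind the privileged port as root, then drop to a
# dedicated low-privilege account before doing any real work.
# Assumes it is launched as root and that the "ntpsvc" user exists.
import os
import pwd
import socket

SERVICE_USER = "ntpsvc"   # hypothetical dedicated service account

def drop_privileges(username: str) -> None:
    entry = pwd.getpwnam(username)
    os.setgid(entry.pw_gid)        # drop group first, while we still can
    os.setuid(entry.pw_uid)        # after this there is no way back to root

def main() -> None:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", 123))    # NTP port: binding below 1024 needs root
    drop_privileges(SERVICE_USER)  # everything from here runs unprivileged
    print(f"Serving as uid={os.getuid()}, still holding the already-bound socket")
    # ... handle requests here ...

if __name__ == "__main__":
    main()
```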
This is an area where mobile does seem better than desktop. For example, if I download an app that wants to access my calls, I get a specific prompt asking for that permission. On Windows or Linux I’ll probably just get an error when it tries and fails; after re-running as admin, it works, but the application is now exposed to far more than just my calls.
And finally: if any of this seems like too much effort to maintain, there is an alternative approach (depending on your setup). Create an isolated network for your testing that has no internet access and that you have to physically connect to.
It may seem like a pain, but honestly, it is important that we consider the security implications of how we work just as much as the security of our products. After all, you don’t want to be the one who brings your company to a grinding halt.
Disclaimer: I have no idea what caused the CD Projekt Red hack. It may have been something I’ve discussed here; it may not. I don’t intend to speculate or criticise. I picked them as the example because I loved Cyberpunk 2077 (completed it 6 or 7 times). Please don’t sue me, guys!