Things related to computer security in some way or another.
With my being very critical of Microsoft, one might make the (usually accurate) assumption that I’m about to blast Microsoft. Whether anyone is expecting me to or not, I don’t know, but I will make something very clear: I fully take responsibility for my actions, I fully accept my mistakes, and I then make the best of the situation. I would like to hope everyone else does too, but I know that is unrealistic at best; indeed, too many people are too afraid to admit when they don’t know something (and therefore irresponsibly make up pathetic and incorrect answers) or when they make a mistake. The fact of the matter is this: humans aren’t perfect. Learn from your mistakes and better yourself in the process.
No, I am not going to blast Microsoft. Microsoft _was_ responsible: they announced the end of life _SEVEN YEARS AGO_! I am instead blasting those who are complaining (and complaining is putting it VERY nicely; it is more like the whining of a spoiled brat who throws a temper tantrum at not getting their own way, despite having been told well in advance that this would happen) about how they now have to quickly upgrade or stop getting updates, security updates included. Let’s take two different groups. First, Rosemayre Barry, manager of the London-based business The Pet Chip Company, who stated the following (to the BBC, or at least it was reported by the BBC):
“XP has been excellent,” she says. “I’m very put out. When you purchase a product you don’t expect it to be discontinued, especially when it’s one of [Microsoft's] most used products.”
Sorry to burst your bubble, Rosemayre, but ALL software is eventually discontinued (just as smoke detectors, carbon monoxide detectors and the like have to be replaced and/or are improved over time, and that is not even considering maintenance like battery replacement). You can complain all you want, but discontinuing XP is not only the correct thing technically; it is economically unfeasible to continue supporting a product as old as Windows XP. I don’t care how widely used it is or isn’t (I hardly expect it to be Microsoft’s most used product, however; I would argue their office suite is more used, as it works across versions of Windows and corporations rely on it heavily). I also know for a fact that corporations tend to have contracts with computer manufacturers under which they LEASE computers for a few years at a time, and when the next lease comes around they get more recent software, operating system included. Why would they do that? Again, because it is economically better for the company. And here’s some food for thought: Windows XP was released in 2001, which according to my trusty calculator (i.e., my brain) makes it almost a 13-year-old product (it was released in August and we’re only in April). Now consider this: Community ENTerprise OS (CentOS), a distribution of Linux largely used for servers, has a product lifetime, as far as I remember, of only 10 years. And you know something else? CentOS is very stable precisely because it doesn’t have many updates; in other words, it is not on the bleeding edge. When a security flaw is fixed, the fixes are backported into the older versions of the affected libraries and/or programs.
Indeed, the current GCC version is 4.8.2, while CentOS’s current version (unless you count my backport of 4.7.2, which you can find more info about at The Xexyl RPM Repository; possibly others exist elsewhere, but for the time being I have not updated the packages I maintain to the 4.8.x tree) is 4.4.7, which was released on 2012-03-13, in other words the 13th of March, 2012. Yes, that is over _two years ago_. It means you don’t get the newer standards (and even though the most recent C and C++ standards were ratified in 2011, that is not to say that anything released past the ratification date magically has it all; in fact, some features are still incomplete in the most recent compiler versions), but it also means your system remains stable, and that is what a server needs to be: what good is a server if the service it offers is unstable (and I’m not referring to Internet connection stability, which is another issue entirely and nothing to do with the operating system) and hard to use? Very little indeed. Realistically, 10 years is very reasonable, if not more than reasonable. Over the span of 10 years a lot changes, including a lot of core changes (and let’s not forget changing standards), which means maintaining a release for 10 years is quite significant, and I can give nothing but my highest praise to the team at CentOS, an open source and FREE operating system. To be fair to this manager, they at least DID upgrade to a more recent Windows, but the complaint itself is beyond pathetic: irresponsible, naive and ignorant at best, and stupid at worst.
It is also unrealistic and unfair to Microsoft (and this from someone who is quite critical of Microsoft in general and has railed on them more than once, in security and otherwise, in quotes about their capabilities and in articles alike, and quite harshly too; examples, one of which even includes a satirical image I made directed at Windows in general: Microsoft’s Irresponsible Failures and Why Microsoft Fails at the Global Spam Issue).
Next, let’s look at what the UK government has done: they are paying Microsoft £5.5m to extend updates for Windows XP, Office 2003 and Exchange 2003 for ONE year. That is absolutely astonishing and, I would think, atrocious to UK taxpayers. What the hell are they thinking? If SEVEN years of warning was not enough time, what makes ONE extra year worth THAT MUCH? Furthermore, and most importantly, if they could not UPGRADE in SEVEN YEARS, what makes any rational being expect them to UPGRADE WITHIN A YEAR? They claim they’re saving money. Yeah, right. Not only are they paying to get updates for another year; they will STILL have to upgrade in due time if they are to keep getting updates. Think of it this way. When a major part of your car dies, you might consider fixing it; it will likely be pricey. If, however, shortly thereafter (say within a year or two) another major part dies, and the car has been driven for quite some years and is certainly out of warranty, what is the most logical and financially sound choice? Is it to assume that this will be the last part to die (surely nothing else can go wrong!), pay for it, and then wait until a third part dies (which it almost certainly will; it is mechanical, and mechanical things die)? Or is it better to cut your losses and get a new car? I think we all know the answer. ‘We’, of course, does not include the UK government.
The bottom line here is quite simple: Microsoft is not being irresponsible, and they are not being unreasonable either; they gave SEVEN YEARS of notice. The only irresponsible and unreasonable people (and companies and/or governments) are those who STILL use Windows XP, especially those now forced to upgrade who are at the same time whining worse than a spoiled brat who is used to getting his way and throws a tantrum the one time he doesn’t. Lastly, I want to point out the very dangerous fallacy these people are aligning themselves with. Those of us who remember when the TELNET and RSH protocols were prevalent also remember that there came a time when enough was enough and standards had to change (e.g., to secure shell, aka ssh). Those who had any amount of logic in them UPGRADED. The old protocols had glaring problems that too many ignored for far too long, among them the following (note that these are on Unix systems, and yes, that means NO system is immune to security problems, be it Windows, Mac, Unix or anything else; incidentally, Unix systems are what typically run servers, which means customer data in the databases on those servers, especially then, as Windows NT was still in its infancy by the time most, though probably not all, changed over):
- A common configuration would allow “you” to remotely log in to a machine as a user from ANY HOST WITH NO PASSWORD. And of course it was PERFECTLY SAFE because, after all, no other host would have a user with the same name, right? Well, did it ever occur to you that an attacker could CREATE a user with that name? Ever hear of grabbing the password file remotely to find user names? Or an unscrupulous employee who could do the same? An employee who was fired and wants revenge (and happens to have user names, or maybe even stole data before being completely locked out after the firing; maybe they even left a backdoor in)? For those who are slow, that is sarcasm; it was NEVER safe and it was ALWAYS naive at best. This is the trust-relationship problem, one of the biggest problems in security: too much trust is given far too easily. And indeed, Unix, just like the predecessor to the Internet, was NEVER designed with security in mind. That is why new standards are a good thing: they address problems and extend, deprecate or obsolete old standards (like, I don’t know, IPv6 as opposed to IPv4, anyone?).
- No encryption meant sniffing could reveal the user name and password (as well as other information in the traffic) to the sniffing party. Assuming there is no one around to sniff your traffic is security through obscurity at best, and that is arguably worse than no security: it is a false sense of security, and when taken to the extreme some will refuse to believe it is a problem and are therefore blind to the fact that they already are, or could at any moment be, compromised.
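To make the first problem concrete: that trust relationship came from rhosts-style configuration. A single worst-case line like the following in a user’s ~/.rhosts (or, system-wide, in /etc/hosts.equiv) told rlogin/rsh to accept any user from any host with no password at all:

```
+ +
```

The first field is the trusted host and the second the trusted user; ‘+’ is the wildcard for each. With that one line in place, anyone anywhere could log in as that account, which is exactly the kind of trust given far too easily described above; the fix of the era was deleting such files and, eventually, replacing the r-commands with ssh.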
Consider those two examples for a moment. Then take the logic of “most people use it” or “it is so convenient and we shouldn’t have to upgrade” and where do you end up? Exactly where those not upgrading from Windows XP are, or those throwing tantrums about having to upgrade to something more recent (despite the seven years of notice and it being in the news more and more as the deadline approached) or else not receive updates. In other words, you are staying behind the times AND risking your data, your customers’ data and your system (and that means your network, if you have one). And you know something? You had it coming, so enjoy the problems YOU allowed, and let’s hope that only you or your company is affected and not your customers (because it would be YOUR fault).
This is something I thought of quite some time ago, but for whatever reason I never got around to writing about it. However, since it seems England is now doing exactly what I would advise against, and since I’m not actually programming (for once), I will take the time to write it now. It’s probably for the best that I’m not programming today, given how tired I am; I suppose the clarity of the article, or the lack thereof, will show whether that was wise. Either way, here goes:
So firstly, what is this all about? To put it simply: in September, all primary and secondary state schools in England will require students to learn programming (“coding”, as many word it). To make it worse, it seems they (and I hope this is just, for example, the BBC wording it this way) do not know that there is a real difference between programming and coding.
Although I have discussed that very topic before, let me make it clear again (since even if those involved won’t see this, it is pretty important). You write code, yes, but much like a building contractor NEEDS plans and a layout of the building BEFORE construction, you REALLY NEED a plan BEFORE you start to write code. If you don’t, what are you really going to accomplish? If you don’t even know what you’re TRYING to write, how WILL you write it? You might as well rewrite it SEVERAL times, and guess what? That is EXACTLY what you will be doing. How do I know? Because I’ve worked on real programming projects as well as on stupid, no-real-use programs, that’s how. If you don’t have a purpose (what will it do, how will it behave on invalid input, how will output look, etc.), you are not going to learn, because all you’re doing is writing code with no meaning. Besides not learning properly, you’re more likely to pick up bad programming practices (because, after all, you’re not really working on anything, so “surely it is OK if I just use a hack or skip proper memory management!”). The real danger is that the fact it APPEARS to work further strengthens your reasons to use those bad practices in REAL projects. Just because a program does not crash immediately, in the first hour of run time, or even all the way to finishing (for those programs that are meant to finish, anyway) does NOT mean it is functioning properly; sorry, but it is NOT that simple. There are many quotes about debugging, and there is a saying (I cannot recall the ratio, but I want to say 80:20 or 90:10) that some large percentage of the time on a programming project is spent debugging; it is not exactly a low number, either.
The problem is this: computer programming requires a certain aptitude, and not only will some students resent being made to do it (and even one resentful student is a problem with this type of thing) just as they resent other requirements; some might enjoy it without ever learning to do it properly, which is a risk to others (see the end of this post). Also, you cannot teach security, and if you cannot teach security you sure as hell cannot teach secure programming (and it’s true: schools don’t, which is why there are organisations that guide programmers in secure programming: OWASP for web security alone, and others for system and application programming). As for resentment, take me, back in high school. I didn’t want to take a foreign language: I had no need for it, I was very, very ill at the time (much more than I am now), and I have problems hearing certain sounds, which ultimately would (and indeed DID) make it VERY difficult to learn another language. Of course, the school’s naive “hearing tests” told them otherwise, even though I explained time and again that, yes, I hear the beeps, but that doesn’t mean much in the way of letters, words and communication when it comes to learning. The irony is that I had hearing tubes put in when I was three (perhaps the school needed them?), so you would think they could figure this out, but they were like all schools: complete failures. Still, I was required to take a foreign language. So what did I do? I took the simplest of the offered languages (simplest according to those ‘in the know’), for the least number of years required (two), and I learned only what I absolutely needed to pass; in other words, I barely got the lowest passing mark, itself below average, and forgot it all in no time after the course.
The fact that programmers in the industry just increase statically sized arrays to account for users inputting too many characters, instead of properly allocating the right amount of memory (and remembering to deallocate when finished) or using a dynamically sized container (or string) like C++’s vector (or string class), says it all. To make it more amusing (albeit in a bad way), there is a very relevant report, noted on the BBC in February of 2013. I quote parts of it below and give the full link at the end.
Children as young as 11 years old are writing malicious computer code to hack accounts on gaming sites and social networks, experts have said.
“As more schools are educating people for programming in this early stage, before they are adults and understand the impact of what they’re doing, this will continue to grow.” said Yuval Ben-Itzhak, chief technology officer at AVG.
Too bad adults still do these things then, isn’t it? But yes, this definitely will continue, for sure. More below.
Most were written using basic coding languages such as Visual Basic and C#, and were written in a way that contained quite literal schoolboy errors that professional hackers were unlikely to make – many exposing the original source of the code.
My point exactly: you’ll teach mistakes (see below also), and in programming there is no room for mistakes. Thankfully here, at least, it was not about stealing credit card numbers, stealing identities or anything of that degree of seriousness. Sadly, malware these days has no real art to it and takes little skill to write (anyone remember some of the graphical and sound effects in the payloads of the old malware? At least back then any harm, bad as it could be, was done to the individual user rather than on a global scale for mass theft, fraud and the like. Plus, the fact that most viruses in the old days were written in assembly, more often than not, shows how much has changed skill-wise, and for the worse).
The program, Runescape Gold Hack, promised to give the gamer free virtual currency to use in the game – but it in fact was being used to steal log-in details from unsuspecting users.
“When the researchers looked at the source code we found interesting information,” explained Mr Ben-Itzhak to the BBC.
“We found that the malware was trying to steal the data from people and send it to a specific email address.
“The malware author included in that code the exact email address and password and additional information – more experienced hackers would never put these type of details in malware.”
That email address belonged, Mr Ben-Itzhak said, to an 11-year-old boy in Canada.
Enough information was discoverable, thanks to the malware’s source code, that researchers were even able to find out which town the boy lived in – and that his parents had recently treated him to a new iPhone.
Purely classic, isn’t it? Sad, though, that his parents gave him an iPhone while he was doing this (rather than teaching him right from wrong). But who am I to judge parenting? I’m not a parent…
Linda Sandvik is the co-founder of Code Club, an initiative that teaches children aged nine and up how to code.
She told the BBC that the benefits from teaching children to code far outweighed any of the risks that were outlined in the AVG report.
“We teach English, maths and science to all students because they are fundamental to understanding society,” she said.
“The same is true of digital technology. When we gain literacy, we not only learn to read, but also to write. It is not enough to just use computer programs.”
No, it isn’t. You’re just very naive, or an idiot. I try to avoid direct insults, but it is the truth and the truth cannot be ignored. It IS enough to use computer programs, and most people don’t even want to know how computers work: THEY JUST WANT [IT] TO WORK AND THAT IS IT. There are few (arguably no) so-called benefits. Why? Because those with the right mindset (hence aptitude) will either get into it or not. When they do get into it, at least it’s more likely to be done properly; if they don’t, it wasn’t meant for them. Programming is peculiar in that it is one of the few black-and-whites in the world: you either have it in you or you don’t. Perhaps instead of defending the kids by suggesting that the gains outweigh the risks (which ultimately puts the blame on them, and even I, someone who doesn’t like being around kids, can see that that is not entirely fair – shameful!), you should be defending yourself. That is to say, you should be working on teaching ethical programming (and if you cannot do that, because, say, it’s up to the parents, then don’t teach it at all) rather than taking a “here it is, do as you wish” (i.e., lazy way out) attitude. Either way, those who are into programming will learn far more on their own, and much quicker too (maybe with a reference manual, but still: they don’t need a teacher to tell them how to do this and that; you learn by KNOWING combined with DOING and EVALUATING the outcome, then STARTING ALL OVER). Full article here: http://www.bbc.co.uk/news/technology-21371609
To give a quick summary of everything, there is a well known quote that goes like this:
“90% of the code is written by 10% of the programmers.” –Robert C. Martin
Unfortunately, though, while that may be true (referring to programming productivity), there is a lot of badly written code out there, and it puts EVERYONE at risk. Even if my system is not vulnerable to a certain flaw abused by criminals, I can be caught in the crossfire, if only in bandwidth and log file consumption on my end; worse is when a big company has a vulnerable system in use, which ultimately risks customers’ credit card information, home addresses and any other confidential information. This, folks, is why I filed it under security and not programming.
Updated on 2014/02/17 to fix a typo and to add two caveats to this method that I thought of (caveat #5, if it affects you, can be a rather bad problem, but if it does not it won’t matter at all; caveat #6 is something that should always be kept in mind when using smrsh).
One of the projects I work on involves a VirtualBox install of Fedora Core. The reason Fedora Core is used is twofold:
- It is on the bleeding edge. This is important because it has the latest and greatest standards; for this specific project that means access to the most recent C and C++ standards. The server the VirtualBox runs on runs a more stable distribution that backports security fixes but otherwise maintains stability: there are fewer updates, which means less chance of things going wrong.
- As for binary distributions (which are beneficial to productivity; I used to love Linux From Scratch and Gentoo, but the truth is compiling all the time takes a lot of time and therefore leaves less time for the things I actually want to work on), I am mostly biased towards Red Hat based distributions. This, together with reason 1 above, makes Fedora Core the perfect distribution for the virtual machine.
However, from within the VM I need to access a CVS repository on my server (the server hosting the virtual machine is not the same one). Now, while I could use ssh with passwords, or pass phrases on the SSH keys, I don’t allow password logins, and as for the ssh keys I don’t (in the case of these two users) require pass phrases. This of course leaves a security problem: anyone who can get into the virtual machine as those users can reach the repository (which is not likely, because there are no passwords and only sudo allows it, restricted to the users in question; but not likely does not mean impossible).
And while no one who has access to that virtual machine (the iptables policy is to reject, selectively allowing certain IPs access to SSH; which is how firewalls should be built, mind you: if something is not explicitly allowed, deny it flat out) is someone I distrust, the truth of the matter is I would be naive and foolish not to consider this problem!
So there are a few ways of going about this. The first is one I am not particularly fond of. While it has its uses, it is the ForceCommand option (in a Match block) in /etc/ssh/sshd_config. Why I don’t like it is simple: ForceCommand implies that I know the single command that is necessary, arguments included. I might be misinterpreting that, and I honestly cannot be completely sure (I tested this some time ago, so memory is indeed a problem), but I seem to remember this being the case. That means I cannot use cvs, because I cannot do ‘cvs update’, ‘cvs commit’, ‘cvs diff’, etc. So what can be done? There is, I’m sure, more than one way (as is often the case in UNIX and its derivatives), but the way I went about it is this (I am not showing how to create the account or anything other than limiting the set of commands):
- Install sendmail (mostly you are after the binary smrsh, the sendmail restricted shell). Note that if you use, for example, postfix (or any other MTA), then depending on your distribution (and how you install sendmail) you may need to adjust which MTA is used as the default (see man alternatives)!
- Assuming you now have /usr/bin/smrsh, you are ready to restrict the commands available to the user(s) in question. First, set their shell to /usr/bin/smrsh, either by updating the /etc/passwd file or, better, by running (as root) /usr/bin/chsh -s /usr/bin/smrsh [user], where obviously [user] is the user in question. It goes without saying, but DO NOT EVEN THINK ABOUT SETTING root TO USE THIS SHELL!
- Now you need to decide which commands they are allowed to use. For the purposes of this example it will be cvs (but do see the caveats!). So what you do is something like this (again, as root, as you’re writing to a system directory):
ln -s /usr/bin/cvs /etc/smrsh/cvs
which creates a symbolic link in /etc/smrsh called cvs pointing to /usr/bin/cvs, which is the command the user may use. If you wanted them to be able to use another command, you would create a link for it just as with cvs.
- Assuming you already have the user’s ssh key installed, you’ll want to edit the file ~user/.ssh/authorized_keys (if you have not installed their key, do that first). Which keys do you edit, and what do you add? You should restrict whichever keys the user can log in with (probably by allowing only one key, or by restricting all of their keys; otherwise you leave the hole of them copying a key from their ‘free’ machine to the ‘restricted’ machine). To the beginning of each such line, add the following options: no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty
After the edit, the line will be in the form of (assuming you use ssh-rsa):
no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty ssh-rsa [key] [user@host]
Obviously [key] is the authorized key and [user@host] is the user and host they connect from. Although I cannot be 100% sure (as in, I cannot remember, but I am fairly sure this is not a hole in my memory), I believe the user@host part is in fact just a helpful comment, a reminder that this key is for this user and this host. Everything else needs to be there, however.
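Going back to the symlink step for a moment: since writing to /etc/smrsh requires root, here is a small sketch of the same allow-list idea that can be run unprivileged (the scratch directory and the /bin/true target are stand-ins of my choosing, not part of the real setup). The point is simply that smrsh only runs commands whose names resolve to entries in its directory:

```shell
#!/bin/sh
# A scratch directory stands in for /etc/smrsh: smrsh executes only
# commands that are linked (or copied) into its directory.
dir=$(mktemp -d)

# The real setup (as root) would be: ln -s /usr/bin/cvs /etc/smrsh/cvs
ln -s /bin/true "$dir/true"

ls -l "$dir"    # the listing shows the allow-listed entry: true -> /bin/true

rm -rf "$dir"
```

Anything not linked into the directory simply fails with the “not available for sendmail programs” error shown later in this post.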
A few caveats need to be considered.
- Most importantly: if you don’t trust the user at all, then they SHOULD NOT HAVE ACCESS AT ALL! This setup is for when someone needs access to the server, but, as a safety catch, you don’t allow most commands, because they don’t need them and/or there is a risk someone else could abuse their access (for example, if they step away from their desk for just a moment but leave their session unlocked).
- Those with physical access are irrelevant to this setup. Physical access = root access, period. Not debatable. If they want it they can have it.
- If they have more than one key installed, then you either need to make sure all of their keys are set up the same way (preventing pty allocation, etc.) or be certain they cannot change the authorized_keys file (and to be brutally honest, being sure of the latter guarantees one thing: a false sense of security).
- This involves ssh keys. If you allow logins without ssh keys (e.g., with a password alone), then this is not really for you. ssh keys are ideal anyway, and you can have ssh keys with pass phrases and even a user password on top. Think of granting the user sudo access: now there is an ssh key, a required passphrase for that key and, for sudo, their password (though, just as with su, being too lax about who is allowed can be an issue). That means a user must both hold an authorized key and know the passphrase for it; it is like having a ‘magical key’ that only fits the lock when you can also input a passphrase into a device that opens the keyhole. This is the safest approach, especially if there is any chance someone could steal the ssh key and you don’t happen to have ssh blocked at the firewall by default (which, depending on who needs access and from where, is a very good thing to consider doing).
- Something I neglected at first (not intentionally; I just did not think to write it until a few days ago, today being 2014/02/17) is that if you have more than one login using this shell, this may not be the most appropriate method for the problem. The reason should be obvious, but if it isn’t: the users might not have the same tasks (or, put another way, the commands they should be restricted to might be very different). That is definitely one downside of this method and something to keep in mind.
- Even though caveat #1 somewhat tackles this (it all comes down to trust, which is given out too easily far too often), I would feel irresponsible not discussing it. Depending on what you allow, you can run into trouble (you should always be careful when adding a command to /etc/smrsh, or indeed whenever you change the way the system deals with things, especially where users are involved). Also consider that smrsh does allow && and ||. While I only allow certain IPs and only connections with an ssh key (one that is in the authorized_keys file), I only now thought to test this, which shows how easy it is to forget, or even be unaware of, something that is actually a problem. The result? Because I disallow PTY allocation in this setup, which means it is not possible to log in interactively (unless, of course, through another user, which is another can of worms entirely), it should be OK. But do understand that breaking out of restricted shells, just like breaking out of chroots, is not something to be dismissed with “there is only one way and I know I prevent it” (a dangerous and incorrect assumption). This entire point (caveat #6) would be more of a problem if I did not disable PTY allocation, but it is still something to consider strongly (again, if anyone can log on as this user through another user, they aren’t exactly as restricted). On that note: you should never add shell scripts, or any program that can be told to run other programs (like procmail via its procmailrc file), to /etc/smrsh, and that includes perl (and similar) and shells (that should be quite obvious, but I’m stating it anyway). sendmail’s restricted shell also allows certain built-ins like ‘exec’, so keep that in mind too.
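Since smrsh honours && and ||, it is worth being clear on their semantics. This is ordinary shell behaviour, nothing smrsh-specific, but it is why a chain can still reach a second command whenever the first allowed command produces the right exit status:

```shell
#!/bin/sh
# '&&' runs the right-hand command only if the left one succeeded (exit 0);
# '||' runs it only if the left one failed. So any allowed command that can be
# made to succeed or fail on demand can gate a second command in the chain.
true  && echo "runs after a success"   # prints: runs after a success
false && echo "never printed"          # '&&' short-circuits on failure
false || echo "runs after a failure"   # prints: runs after a failure
```

This is why every command linked into /etc/smrsh must be safe on its own: being unreachable “in practice” is not a guarantee when operators can chain it to something else.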
Lastly, on the note of allowed programs, be mindful of the fact that this method DOES allow specifying command options and arguments, so even more care should be given when adding programs to /etc/smrsh (imagine you allow a program that can run another program, passed to it by name, and imagine the user uses it to run a program you don’t want to allow; will it work? I’ve not tested it, but it would be valuable to try before deploying, to catch any possible loophole). If you do know the exact command in advance, you should take the safer (more restrictive) approach built into sshd; as always, you should take the safest approach possible. I could go on about other things to consider, but I won’t even try: besides the fact that I cannot possibly list everything (or even think of everything; I am human and not even remotely close to perfect), the bottom line is this: there is always more to consider, and it isn’t going to be a simple yes/no decision, so consider all your options and their implications _before_ making any decision (in my case, with only one user on smrsh and only one command, which only runs scripts if so configured on the server side, I would rather prevent normal logon and limit the commands than not).
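For contrast, if the exact command were known in advance, the sshd-side approach mentioned earlier would look something like the following fragment of /etc/ssh/sshd_config (the user name cvsuser is a placeholder; Match and ForceCommand are real sshd_config keywords, but do check sshd_config(5) for your OpenSSH version):

```
Match User cvsuser
    ForceCommand /usr/bin/cvs server
    X11Forwarding no
    AllowTcpForwarding no
```

As it happens, with :ext: (ssh) access the cvs client invokes ‘cvs server’ on the remote side for every operation, so this particular route may be workable for CVS specifically; the general objection stands, though: ForceCommand pins every session for that user to one fixed command line.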
With all that done, you should now have a login that can run cvs commands (but see caveat #6) and nothing else (sort of; see below). If they try something simple, even the following, they will get an error, as shown (and again, see caveat #6!):
$ ssh luser@server "ls -al"
smrsh: "ls" not available for sendmail programs (stat failed)
$ ssh luser@server
PTY allocation request failed on channel 0
Usage: -smrsh -c command
Connection to server closed.
On caveat #6, observe the following:
$ ssh luser@server "exec echo test && exec source ~/.bashrc"
smrsh: "source" not available for sendmail programs (stat failed)
Yes, that means that ‘echo test’ was in fact executed successfully (the exec before ‘echo test’ was not strictly necessary for this, in case you wondered) – see below – but source was not (which is good; if you don’t know what source does, do look it up, as it is relevant). Note the following output:
$ ssh luser@server "exec ls || echo test"
smrsh: "ls" not available for sendmail programs (stat failed)
$ ssh luser@server "echo test"
And when looking at the above, specifically note the way the commands are invoked (or, in the first case, the way I tried to invoke them).
Note that this is somewhat of a quick write-up (as in, it could be better organized). I have a lot going on but, as you’ll see, this has been delayed longer – much longer – than I originally anticipated. I think (hope) the ideas come across fine and it’s of use or interest to someone. But if not, it might at least provide a bit of history about computer security (specifically related to malware).
Originally this was intended to be about the spreading of fear and misinformation about ‘cyber war’. And although some claims may be legitimate, a lot are unfounded and nothing new. Just because an agency never knew about something does not mean it is something majorly impressive or new. Sure, it could be. But that doesn’t have to be the case and, sadly, often it is not. Of course, many are drawn to fear and the unknown, and this makes them easy targets for propaganda and the like. I’m not going to cite examples, but they are out there. And although the concept of FUD was originally used in computer hardware sales, it can have a variety of uses, and computer security is one of them. This isn’t all about FUD, but it does involve the spreading of fear, uncertainty and doubt, for gain or otherwise, whether those spreading it realize it or not.
However, I didn’t really end up writing about that; I saw something else instead. Yes, it’s about technology and indeed security, but this time specifically about malware. It seems – according to some – that ingenious ideas are coming out all the time with respect to defeating some system. The fact is, while it does happen, it isn’t nearly as widespread as some would make it seem. For example, I read a lot recently about how malware (e.g., Stuxnet and now Flame [the latter is what this is about]) can spread via a USB drive. While this may seem brilliant, it actually isn’t. Yes, it should be noted, but it’s nothing special either. It would suit these people well to either look up, or try to remember, a bit further back in history. For instance, a USB drive can in a lot of systems be used as a boot drive. That implies it has a boot loading mechanism. Well, what about the very old (we’re talking mid 1980s! Old for computers) Stoned and Brain viruses? Do you know what they attached to? Yes, boot sectors. Imagine that. Nothing too ingenious about a USB drive infection then, is there? And what if it attaches to a file on the drive (after all, a drive has a file system if it’s going to be used to store files)? Well, let’s see. That’s nothing new either: file infectors have been around for decades. And then there’s the idea of infecting files and boot sectors (along with the master boot record) at the same time; that is known as a multipartite virus. Then, if it spreads via a network, that’s nothing new either. Remember the Morris Worm? How do you think that spread? Exactly: via networks. Of course, there are also combinations of all the above examples, and more.
Now, the malware Flame was recently uncovered. Some make it out to be a huge thing. And while it may have a lot of features in one program, that doesn’t mean it’s special, especially not compared to other major developments in the malware scene. Every time something new comes out does not mean it’s impressive or a significant change. I can name many, many bugs that seemed ingenious at the beginning and [they] turned out to be nothing THAT original or significant. The problem is, people tend to be fearful and also do not learn from history. That includes governments, and unfortunately that is also related to security (or lack thereof). All these things we hear about, most of the time, aren’t new but old. I’ll elaborate.
In the late 1980s, a certain virus was found ITW (in the wild). Unfortunately for its victims, it was very quiet, all the while slowly causing damage. You had backups, right? Well, in this case it might prove a problem even if you do back up regularly: the fact that its payload acted slowly and quietly means the damage would not be noticed until the backups held only the damaged copies. For instance, if you start out with a full backup, have nightly incremental backups, do another full backup every fortnight and rotate out older copies as time goes on, what might be the case by the time the damage is visible? Essentially, the backups might only have the corrupted versions. And although I never liked (and still do not like) destructive code, I must give props to Dark Avenger’s virus (known by the same name as his handle), as that was quite clever. And if that wasn’t bad enough, there was another interesting feature, so interesting that it is actually a known concept in the antivirus and provirus scenes. The concept is piggybacking. What is that? Well, here’s the idea: first, know that this virus infected files, but it added a twist: it infected files as they were opened. How, if the virus is not running?
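The rotation problem is easy to model. A toy sketch (the retention length and the day numbers are invented purely for illustration):

```shell
# Silent corruption begins on day 20; the damage is only noticed on day
# 60, by which point rotation has discarded every clean backup.
retention=14        # keep only the last 14 daily backups
ok=1                # 1 = data still clean, 0 = corrupted
backups=""
day=1
while [ "$day" -le 60 ]; do
  [ "$day" -eq 20 ] && ok=0
  backups="$backups $ok"
  # rotate: drop anything older than the retention window
  backups=$(printf '%s\n' $backups | tail -n "$retention" | tr '\n' ' ')
  day=$((day + 1))
done
case " $backups" in
  *" 1"*) survivors=yes ;;
  *)      survivors=no  ;;
esac
echo "$survivors"   # no clean copy left
```

The same holds for any rotation scheme whose window is shorter than the time the payload stays unnoticed.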
First, a bit of background about a programming concept. TSR stands for ‘terminate and stay resident’: the program stays in memory after doing its work, which includes installing handlers for certain interrupts. An interrupt is basically an event. At the low level, that is to say close to the hardware (e.g., your CPU), when a program requests to write something to the disk, an interrupt is called. The same applies to opening a file, and to a lot of other things in the computer. So then, how did Dark Avenger exploit this? It went TSR and hooked interrupt 21h, which is invoked when a file is opened (among other actions). To this end, it would piggyback on the antivirus. Indeed, this means that as the antivirus scanned files for viruses, if it didn’t have a clue about Dark Avenger and the virus was resident, then every single infectable file the antivirus opened to scan would now be infected.
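As a rough illustration of the piggybacking idea (not Dark Avenger’s actual code – just a file-based stand-in for the hooked open, with made-up file names):

```shell
# Create a handful of "clean" files; the hooked open appends an
# infection marker whenever a file is read, so the "scanner" itself
# spreads the infection simply by opening every file.
dir=$(mktemp -d)
for i in 1 2 3 4 5; do echo clean > "$dir/prog$i.com"; done

hooked_open() {
  # Stand-in for the resident INT 21h handler: infect on open.
  grep -q INFECTED "$1" || echo INFECTED >> "$1"
  cat "$1"
}

# The scanner opens each file to look for signatures...
for f in "$dir"/*.com; do hooked_open "$f" > /dev/null; done

# ...and now every file carries the marker.
infected=$(grep -l INFECTED "$dir"/*.com | wc -l | tr -d ' ')
echo "$infected"    # 5
rm -rf "$dir"
```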
Here’s one of my favourite examples, and this is indeed (whether intentional or not) spreading FUD. Anyone remember that supposedly very destructive, widespread virus called Michelangelo from the early 1990s? It seemed that everyone who knew about computers back then was terrified of this oh-so-dangerous virus. Well, of course, since computer malware wasn’t new by any means, antivirus companies made use of this scare (and, admittedly, others did too). The media had a lot of articles about it as well. Sales soared. But not much really happened, and it wasn’t really widespread, was it? No. And guess what happened after that? Yes, of course: some companies went out of business. That’s one example of abusing fearful people, if I do say so myself.
Back to Flame, though. I’d just like to say that although it has a rich set of features, it isn’t really that ingenious. Think about it. They say it has a keylogger, sniffs the network, captures other things and spreads through various means (as I touched upon earlier, nothing new). Okay, some might argue: “It’s all in one program, though!” True, but the only real difference is that it is automated. There exist penetration testing operating systems full of feature-rich tools, from port scanners to sniffers and all sorts of other goodies. The interesting thing is, Flame is about 20MB. That is rather large. Okay, it isn’t large by today’s disks, but for what it does (and what is known about it), it is still fairly large. It certainly wouldn’t fit in a boot sector.
Also interesting is what Alan Woodward (yes, again) wrote about Flame. I’ll quote and remark.
This is an extremely advanced attack. It is more like a toolkit for compiling different code based weapons than a single tool. It can steal everything from the keys you are pressing to what is on your screen to what is being said near the machine.
It also has some very unusual data stealing features including reaching out to any Bluetooth enabled device nearby to see what it can steal.
Just like Stuxnet, this malware can spread by USB stick, i.e. it doesn’t need to be connected to a network, although it has that capability as well.
This wasn’t written by some spotty teenager in his/her bedroom. It is large, complicated and dedicated to stealing data whilst remaining hidden for a long time.
Some features may be less common or unusual, but I really don’t see it as any more advanced than other things. In fact, dare I say, in this day and age it’s rather the opposite. Pretty much all the features are versions of things done a long time ago, so why did we not see something like it sooner? I won’t comment on the programming of it, as I’ve not seen the source, and the person or people involved obviously put a lot of effort into it (which is to be commended); it hid itself for some while too, which is also interesting. On the last line of the quote, though, another person in the article wrote this (and they’re related):
Currently there are three known classes of players who develop malware and spyware: hacktivists, cybercriminals and nation states.
Nonsense. Complete and utter nonsense. To me, a cybercriminal is someone who, for instance, steals money or someone’s identity via technology. Many years ago, I actually knew quite a few virus writers. I didn’t approve of some things (e.g., tricking unsuspecting victims into activating the CIH virus [which had some interesting properties, too]) and was never fond of malicious code, but they were an interesting bunch. And a lot of their viruses (more often than not, actually) didn’t have destructive payloads. As for those involved, some were actually quite talented assembly (and other language) programmers (which is what drew me to them). They all had different backgrounds, different goals, and came from different countries. But one thing is certain: they weren’t nation states (they were often fearful of such), they certainly didn’t steal money or similar, and they were not hacktivists, nor anything related to the common usage of the word ‘hacker’. Further, some were teenagers, and some were in it not for harming others or for gain, but for learning. Yes, believe it or not, you can learn a lot by studying assembly language, or source code in general.
The whole point though: not everything that is new and unique is the most sophisticated thing. If you say that every time, it is basically a version of the ‘boy who cried wolf’. That is a huge problem for security. It is also sensationalism, and besides the media and government, who likes that? It doesn’t help anyone with their computers, not one bit. If a security company’s employees always say this stuff, and don’t even help people (besides telling them to buy their software), then what are the true intentions and who is really being helped? That’s the problem with spreading fear, uncertainty and doubt.
Quick addendum (12:00:54 PST): This is no joke. I was very serious when I said I read this on the BBC (sadly). For those curious, you can find the original article here: http://www.bbc.co.uk/news/technology-17032274
Indeed, this is going to be somewhat of a satire. Let me explain. The other day (a week or two ago, actually), I saw a rather amusing article by a security expert (posted on the BBC). Now, I’m not going to say he is or isn’t an expert; we all have our strengths and weaknesses, and that’s how humans are. But I will say that his thoughts as presented in the article are completely ridiculous and outright incorrect, in many ways. The title of the article is ‘The Internet is Broken – we need to start over’. That’s why I reworded it.
The headline is actually a good way to start the article – it has a very good hook. So, what better way to start this, than by quoting it?
Last year, the level and ferocity of cyber-attacks on the internet reached such a horrendous level that some are now thinking the unthinkable: to let the internet wither on the vine and start up a new more robust one instead.
Maybe I’m a blind administrator, but I didn’t notice this. Oh, sure, I noticed probes on Xexyl (and my other sites), and I noticed many companies being attacked successfully. What I did not notice, however, is this being anything new. Please, Professor Alan Woodward, could you tell me how this is any different from other years? Just because attacks aren’t reported does not mean they didn’t happen. A lot of organizations, companies and whatever else do not admit to attacks. Others may have been attacked but the attack did not succeed, either at all or not enough. Further, many don’t even know it! (I have made several web hosting companies, and even a technical school, aware of the fact that they had a compromised machine or network. It wasn’t hard to figure out when they started probing me, and these are legit organizations. But they didn’t notice it. Wonder why that might not be reported, then?)
Let’s go back in time, as I think it’s really important. Let’s go back to, say, November 2, 1988. Why? The Robert T. Morris worm would be pretty significant for that time, and it is a prime example of a simple truth: if you can make it, then you can break it. That is, anything made by a human can be broken. Not only did the Morris worm exploit many different services and machines, it impacted them in a rather large way: it brought them to their knees. And what services did it exploit on those machines? Some come to mind immediately, but I’ll get the list from a source so you get the full idea:
sendmail, finger, rsh/rexec and – this one is important – the weakest link in the chain: passwords, in particular weak ones (and the reason it is the weakest link is that it’s a HUMAN creation). Further, due to the way the worm worked, it acted as a fork bomb, hence bringing the machines to their knees. Only someone completely oblivious to the Morris worm would not consider it a significant part of Internet history. And when was that again? 1988. When was the BBC article I mentioned written? 2012! And that is only ONE example. What about the CIH virus? That was even spread by accident on software from major retailers. And since it trashed the flash BIOS, it prevented machines from booting at all until the board, or at least the chip, was replaced. What about when the 13-year-old from Canada known as Mafiaboy took down eBay, Amazon, Yahoo and several other websites by way of DDoS (distributed denial of service) attacks years ago? I might add that some of the attacks last year and in more recent years were DDoS attacks (certainly more than that, but DDoS was still one type among them). The point is the same, however: attacks on human creations – be it vandalism (graffiti, physically damaging someone’s car, whatever else), a computer network, or even a network of networks, the Internet – happen, and they always will!
Now, there is something more important to realize that is wrong with, or missing from, this BBC article.
However, recently the evidence suggests that our efforts to secure the internet are becoming less and less effective, and so the idea of a radical alternative suddenly starts to look less laughable.
The only thing that is laughable is that suggestion. Are you really considering throwing out 40-some years of many, many people’s creations and work? Seriously? That is frankly disgraceful and shows how destructive humans are! You claim security breaches cause lost revenue. That’s true. So does shoplifting. So does a global economic crisis. So do many other things. The fact is, the means of protection are not the problem. The problem is that people have always been short-sighted in the planning stage of things and, furthermore, lazy about or ignorant of what is necessary: implementing a policy, having the proper skill set, and so on. Where do you think the saying ‘hindsight is perfect vision’ (and its variants) comes from?
No matter what, people’s creations are bound to be flawed in some way, at some time. That’s how this world is. To throw out 40+ years of development because some like to cause trouble would also cause a lot of monetary loss. Firstly, some companies are online-only businesses, so taking the ‘Internet’ offline would put them out of business (and even if they moved to a local store, it means they’d have to either build additional stores – more money spent – or have a far smaller customer base). And surely you understand supply and demand, so when these companies go out of business, prices will potentially rise for those that do survive.
The fact of the matter is this: when developing something, you are bound to make a mess. That’s expected. It also doesn’t matter. What does matter is how you react to and address the issues that come up. And that’s exactly what has been done. For example, I mentioned rsh/rexec. I remember when these were more common, and the flaws they had were hard to believe. But they were still there! You know what is far more common now? SSH (which allows for a remote shell and also running commands remotely). You also have scp (secure shell’s remote copy). You don’t see telnet services as much either, do you? That’s because telnet is less secure than, say, ssh.
No matter what, nothing is perfect in this world. Nothing. Trying to be perfect is an easy way to go absolutely bonkers. If you ask anyone who has it badly enough (including myself, I admit) how perfectionism impacts their life, you’ll at least hear or see what I mean.
To summarize, a good friend from Holland once told me about a saying there. Translated to English, it means this:
Where there is lumber work, there’s wood chips.
I’ve since passed that on to other people when, for example, they made a mistake that had them down. That saying is golden. It’s absolutely true, and it’s something everyone can think of at some point (or points) in their life. After all, no one is perfect….
(As an aside: all the above considered, the internet is pointless if it doesn’t exist, and it would take a long time to build back up. What Alan Woodward is suggesting reminds me of security through obscurity. A perfect Internet in a non-perfect world is impossible. That’s the bottom line.)