Things related to computer security in some way or another.
One of the projects I work on involves a VirtualBox install of Fedora Core. The reason Fedora Core is used is twofold:
- It is on the bleeding edge. This is important because it has the latest and greatest standards; for this specific project it means I have access to the most recent C and C++ standards. The server the VirtualBox runs on uses a more stable distribution that backports security fixes but otherwise favours stability: there are fewer updates, which means fewer chances for things to go wrong.
- As for binary distributions (which are beneficial in production; I used to love Linux From Scratch and Gentoo, but the truth is that compiling all the time takes a lot of time, which leaves less time for the things I actually want to be working on), I am mostly biased towards Red Hat based distributions. This, combined with the first reason above, means Fedora Core is the perfect distribution for the virtual machine.
However, from inside the VM I need to access a CVS repository on my server (the server hosting the virtual machine is not the same one). While I could use SSH with passwords, or passphrases on the SSH keys, I don't allow password logins, and for these two users I don't require passphrases on their keys. This of course leaves a security problem: anyone who can access the virtual machine can act as those users. That is not likely, since there are no passwords and only sudo allows it (and that is restricted to the users in question), but not likely does not mean impossible.
And while everyone who has access to that virtual machine is someone I trust (the iptables policy is to reject, selectively allowing certain IPs access to SSH, which is how firewalls should be built, mind you: if it is not explicitly stated to be allowed, then deny it flat out), the truth of the matter is that I would be naive and foolish not to consider this problem!
So there are a few ways of going about this. The first is one I am not particularly fond of. While it has its uses, I mean specifically the ForceCommand option in /etc/ssh/sshd_config (inside a Match block). Why I don't like it is simple: ForceCommand implies that I know the single command that is necessary, arguments included. I might be misinterpreting that, and I honestly cannot be completely sure (I tested this some time ago, so memory is indeed a problem), but I seem to remember this is the case. That means I cannot use cvs, because I cannot do 'cvs update', 'cvs commit', 'cvs diff', and so on. So what can be done? There is, I'm sure, more than one way (as is often the case in UNIX and its derivatives), but the way I went about it is this (I am not showing how to create the account or do anything but limit the set of commands):
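For reference, and so it's clear what I'm not fond of, a Match block with ForceCommand looks roughly like this in /etc/ssh/sshd_config (the user name is hypothetical, and this is an untested sketch from memory, not a recommended config):

```
# Hypothetical restricted user; every session this user opens runs only the forced command.
Match User cvsuser
    ForceCommand /usr/bin/cvs server
    AllowTcpForwarding no
    X11Forwarding no
```

(As it happens, the cvs client invokes 'cvs server' on the remote end, so this one case might actually work with ForceCommand; but for an arbitrary set of commands, the objection stands.)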
- Install sendmail (mostly you are after the binary smrsh, the sendmail restricted shell). Note that if you use another MTA (for example, postfix), then depending on your distribution (and how you install sendmail) you may need to adjust which MTA is used as the default (see man alternatives)!
- Assuming you now have /usr/bin/smrsh, you are ready to restrict the commands available to the user(s) in question. First, set their shell to /usr/bin/smrsh, either by updating the /etc/passwd file or, the better way, by running (as root) /usr/bin/chsh -s /usr/bin/smrsh [user], where obviously [user] is the user in question. It goes without saying, but DO NOT EVEN THINK ABOUT SETTING root TO USE THIS SHELL!
- Now you need to decide which commands they are allowed to use. For the purpose of this example it will be cvs. So you do something like this (again, as root, as you're writing to a system directory):
ln -s /usr/bin/cvs /etc/smrsh/cvs
which creates a symbolic link in /etc/smrsh called cvs, pointing to /usr/bin/cvs, the command the user is allowed to run. If you wanted them to be able to use another command, you would create a link for it just like with cvs.
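If several commands are needed, a small loop saves some typing. A minimal sketch (using a temporary directory in place of /etc/smrsh so it can be tried unprivileged; the allowed command names are just examples):

```shell
#!/bin/sh
# Stand-in for /etc/smrsh; in production you would use the real directory (as root).
SMRSH_DIR=$(mktemp -d)

# Each allowed command gets a symlink named after it.
for cmd in cvs diff; do
    ln -s "/usr/bin/$cmd" "$SMRSH_DIR/$cmd"
done

# smrsh will only run what is linked here.
allowed=$(ls "$SMRSH_DIR")
echo "$allowed"
rm -rf "$SMRSH_DIR"
```

Swap the temporary directory for /etc/smrsh (and run as root) to do it for real.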
- Assuming you already have the user's ssh key installed, you'll want to edit the ~user/.ssh/authorized_keys file. If you have not installed their key, do that first. Now, what do you need to add to the file, and where? For whichever key you want to restrict (probably either restrict only one key, or do it for all keys, unless you want to leave open the hole of them copying the key from their 'free' machine to the 'restricted' machine), you need to add the following to the beginning of the line:
no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty
That means that after you edit it, the line will be in the form of (assuming you use ssh-rsa):
no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty ssh-rsa [key] [user@host]
Obviously [key] is the authorized key and [user@host] is the user@host they connect from. Although I cannot be 100% sure (as in, I cannot remember, though I am pretty sure this is not a hole in my memory), I believe the user@host part is in fact just a helpful comment, a reminder that this key is for this user and this host. Everything else needs to be there, however.
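To avoid hand-editing (and typos in those option names), the restrictions can be prepended with sed. A sketch, using a throwaway file in place of the real authorized_keys (the key material shown is obviously fake):

```shell
#!/bin/sh
# Throwaway stand-in for ~user/.ssh/authorized_keys.
AK=$(mktemp)
echo 'ssh-rsa AAAAB3FakeKeyData user@host' > "$AK"

# Prepend the restrictions to every ssh-rsa line.
sed -i 's/^ssh-rsa/no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty ssh-rsa/' "$AK"

result=$(cat "$AK")
echo "$result"
rm -f "$AK"
```

Point it at the real file (as root, or as the user) once you're satisfied it does what you expect.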
A few caveats need to be considered.
- Most importantly: if you don't trust the user at all, then they SHOULD NOT HAVE ACCESS AT ALL! This setup is for when you have someone who needs access to the server, but as a safety catch you don't allow most commands, because they don't need them and/or there is a risk that someone else could abuse their access (for example, if they work in an office and step away for just a moment while leaving their session unlocked).
- Those with physical access are irrelevant to this setup. Physical access = root access, period. Not debatable. If they want it they can have it.
- If they have more than one key installed, then you need to either make sure all keys carry these restrictions, or be certain they cannot change the authorized_keys file (and to be brutally honest, being sure of the latter guarantees one thing: a false sense of security).
- This involves ssh keys. If you allow logins without ssh keys (e.g., with a password alone), then this is not really for you. ssh keys are ideal anyway, and you can protect them with passphrases too, which means a user not only needs an authorized key, they also need to know the passphrase for that key. It is like having a 'magical key' that only fits the lock when you also know the phrase that opens the keyhole. This is the safest approach, especially if there is any chance someone could steal the ssh key and you don't happen to have ssh blocked at the firewall level by default (which, depending on who needs access and from where, is a very good thing to consider doing).
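Generating a key pair protected by a passphrase is a one-liner with ssh-keygen; a sketch (the file path and passphrase are made up; in practice you would be prompted for, or choose, your own):

```shell
#!/bin/sh
# Write the pair to a throwaway path; -N sets the passphrase non-interactively.
KEY=$(mktemp -u)
ssh-keygen -q -t ed25519 -N 'long passphrase here' -f "$KEY"

# Two files now exist: the (passphrase-protected) private key and the public key.
created=$(ls "$KEY" "$KEY.pub" | wc -l)
echo "$created"
rm -f "$KEY" "$KEY.pub"
```

The private key is then unusable without the passphrase, which is the point of the 'magical key' analogy above.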
With all that done, you should now have a login that can run cvs commands but nothing else. If they try anything else, even something as simple as the following, they will get an error, as shown:
$ ssh luser@server "ls -al"
smrsh: “ls” not available for sendmail programs (stat failed)
$ ssh luser@server
PTY allocation request failed on channel 0
Usage: -smrsh -c command
Connection to server closed.
Note that this is somewhat of a quick write-up (as in, it could be better organized). I have a lot going on, and as you'll see, this has been delayed longer, much longer, than I originally anticipated. I think (hope) the ideas come across fine and that it's of use or interest to someone. If not, it might at least provide a bit of history about computer security (specifically related to malware).
Originally this was intended to be about the spreading of fear and misinformation about 'cyber war'. And although some claims may be legitimate, a lot are unfounded and nothing new. Just because an agency never knew about something does not mean it is majorly impressive or new. Sure, it could be. But that doesn't have to be the case, and sadly, often it is not. Of course, many are drawn to fear and the unknown, and this makes them easy targets for propaganda and the like. I'm not going to cite examples, but they are out there. And although the concept of FUD originally came from computer hardware sales, it has a variety of uses, and computer security is one of them. This isn't all about FUD, but it does involve the spreading of fear, uncertainty and doubt, for gain or otherwise, whether the spreaders realize it or not.
However, I didn't really write about that; I saw something else instead. Yes, it's about technology and indeed security, but this time specifically about malware. According to some, it seems like there are ingenious ideas coming out all the time with respect to defeating some system. The fact is, while it does happen, it isn't nearly as widespread as some would make it seem. For example, I read a lot recently about how malware (e.g., Stuxnet and now Flame, the latter being what this is about) can spread via a USB drive. While this may seem brilliant, it actually isn't. Yes, it should be noted, but it's nothing special either. It would suit these people well to look up, or try to remember, a bit further back in history. For instance, a USB drive can on a lot of systems be used as a boot drive. That implies it has a boot loading mechanism. Well, what about the very old (we're talking mid 1980s, which is old for computers) Stoned and Brain viruses? Do you know what they attached to? Yes, boot sectors. Imagine that. Nothing too ingenious about a USB drive infection then, is there? And what if it attaches to a file on the drive (after all, a drive has a file system if it's going to store files)? That's nothing new either: file infectors have been around for decades. Then there's the idea of infecting files and boot sectors (along with the master boot record) at the same time; that is known as a multipartite virus. And if it spreads via the network, that's nothing new either. Remember the Morris Worm? How do you think that spread? Exactly: via networks. Of course, there are also combinations of all the above examples, and more.
Now, the malware Flame was recently uncovered. Some make it out to be a huge thing. And while it may pack a lot of features into one program, that doesn't mean it's special, especially not compared to other major developments in the malware scene. Just because something new comes out does not mean it's impressive or a significant change. I can name many, many bugs that seemed ingenious at first and turned out to be nothing THAT original or significant. The problem is, people tend to be fearful, and they do not learn from history. That includes governments, and unfortunately that also relates to security (or the lack thereof). Most of the things we hear about aren't new but old. I'll elaborate.
In the late 1980s, a certain virus was found ITW (in the wild). Unfortunately for its victims, it was very quiet, all the while slowly causing damage. You had backups, right? Well, in this case regular backups might even prove a problem: because its payload ran slowly and quietly, the damage would not be noticed until the backups held only damaged copies. For instance, if you start with a full backup, take nightly incremental backups, do another full backup every fortnight, and rotate out older copies as time goes on, what happens by the time the damage is visible? Essentially, the backups may contain only the corrupted versions. And although I never liked (and still do not like) destructive code, I must give props to Dark Avenger's virus (known by the same name as his handle), as that was quite clever. And if that wasn't bad enough, there was another interesting feature, so interesting that it is a known concept in the antivirus and pro-virus scenes: piggybacking. What is that? Well, here's the idea. First, know that this virus infected files, but with a twist: it infected files as they were opened. How, if the virus is not running?
First, a bit of background about a programming concept. TSR stands for 'terminate and stay resident'. What that means is that a program traps an interrupt (or interrupts), which is basically an event. At the low level, that is to say close to the hardware (e.g., your CPU), when a program requests to write something to the disk, an interrupt is called. The same applies to opening a file, and to a lot of other things in the computer. So how did Dark Avenger manipulate this? It went TSR, i.e., it stayed in memory after doing its work, which included installing handlers for certain interrupts. So what did it hook? Interrupt 21h, which was used when a file is opened (and for other actions, too). To this end, it would piggyback on the antivirus. Indeed, this means that as the antivirus scanned files for viruses, if it didn't have a clue about Dark Avenger and the virus was resident, then every single infectable file the antivirus opened to scan would now be infected.
Here's one of my favourite examples, and this is indeed (whether intentional or not) spreading FUD. Anyone remember that supposedly very destructive, widespread virus called Michelangelo from the early 1990s? It seemed that everyone who knew about computers back then was terrified of this oh-so-dangerous virus. Since computer malware wasn't new by any means, antivirus vendors made use of this scare (and, admittedly, of others). The media ran a lot of articles about it, too. Sales soared. But not much really happened, and it wasn't really widespread, was it? No. And guess what happened after that? Yes, of course: some companies went out of business. That's one example of abusing fearful people, if I do say so myself.
Back to Flame, though. I'd just like to say that although it has a rich set of features, it isn't really that ingenious. Think about it. They say it has a keylogger, sniffs the network, captures other things, and spreads in various ways (which, as I touched upon earlier, is nothing new). Okay, some might argue: "It's all in one program though!" True, but the only real difference is that it is automated. There exist penetration testing operating systems full of feature-rich tools, from port scanners to sniffers and all sorts of other goodies. The interesting thing is, Flame is about 20MB. That is rather large. Okay, it isn't large by today's disks, but for what it does (and what is known), it is still fairly large. It certainly wouldn't fit in a boot sector.
The interesting thing is what Alan Woodward (yes, again) wrote about Flame. I'll quote and remark.
This is an extremely advanced attack. It is more like a toolkit for compiling different code based weapons than a single tool. It can steal everything from the keys you are pressing to what is on your screen to what is being said near the machine.
It also has some very unusual data stealing features including reaching out to any Bluetooth enabled device nearby to see what it can steal.
Just like Stuxnet, this malware can spread by USB stick, i.e. it doesn’t need to be connected to a network, although it has that capability as well.
This wasn’t written by some spotty teenager in his/her bedroom. It is large, complicated and dedicated to stealing data whilst remaining hidden for a long time.
Some features may be less common or unusual, but I really don't see it as any more advanced than other things. In fact, dare I say, in this day and age it's rather the opposite. Pretty much all the features are versions of things done a long time ago, so why was it not seen sooner? I won't comment on the programming of it, as I've not seen the source, and the person or people involved obviously put a lot of effort into it (which is to be commended); it did hide itself for quite a while, too, which is also interesting. On the last line of the quote, though: another person in the article wrote this (and they're related):
Currently there are three known classes of players who develop malware and spyware: hacktivists, cybercriminals and nation states.
Nonsense. Complete and utter nonsense. To me, a cybercriminal is someone who, for instance, steals money or someone's identity via technology. Many years ago, I actually knew quite a few virus writers. I didn't approve of some things (e.g., tricking unsuspecting victims into activating the CIH virus [which had some interesting properties, too]), and I was never fond of malicious code, but they were an interesting bunch. And a lot of their creations (more often than not, actually) didn't have destructive payloads. As for those involved, some were actually quite talented assembly (and other language) programmers (which is what drew me to them). They all had different backgrounds, different goals, and came from different countries. But one thing is certain: they weren't nation states (they were often fearful of such), they certainly didn't steal money or the like, and they were not hacktivists, nor anything related to the common usage of the word 'hacker'. Further, some were teenagers, and some were in it not for harming others or for gain, but for learning. Yes, believe it or not, you can learn a lot by studying assembly language, or source code in general.
The whole point, though: not everything new and unique is the most sophisticated thing ever. If you say that every time, it is basically a version of 'the boy who cried wolf', and that is a huge problem for security. It is also sensationalism, and besides the media and governments, who likes that? It doesn't help anyone with their computers, not one bit. If a security company's employees always say this stuff and don't even help people (besides telling them to buy their software), then what are the true intentions, and who is really being helped? That's the problem with spreading fear, uncertainty and doubt.
Quick addendum (12:00:54 PST): This is no joke. I’m very serious when I said I read this on the BBC (sadly). For those curious, you can find the original article here: http://www.bbc.co.uk/news/technology-17032274
Indeed, this is going to be somewhat of a satire. Let me explain. The other day (a week or two ago, actually), I saw a rather amusing article by a security expert (posted on the BBC). Now, I'm not going to say he is or isn't an expert. We all have our strengths and weaknesses; that's how humans are. But I will say that the thoughts mentioned in the article are completely ridiculous and outright incorrect, in many ways. The title of the article is 'The Internet is Broken – we need to start over'. That's why I reworded it.
The headline is actually a good way to start the article: it has a very good hook. So, what better way to start this than by quoting it?
Last year, the level and ferocity of cyber-attacks on the internet reached such a horrendous level that some are now thinking the unthinkable: to let the internet wither on the vine and start up a new more robust one instead.
Maybe I'm a blind administrator, but I didn't notice this. Oh, sure, I noticed probes on Xexyl (and my other sites), and I noticed many companies being attacked successfully. What I did not notice, however, was any of this being new. Please, Professor Alan Woodward, could you tell me how this is any different from other years? Just because attacks aren't reported does not mean they didn't happen. A lot of organizations, companies, and whatever else do not admit to attacks. Others may have been attacked but the attack did not succeed, either at all or not enough. Further, many don't even know it! (I have made several web hosting companies and even a technical school aware of the fact that they had a compromised machine or network. It wasn't hard to figure out when they started probing me, and they are legitimate organizations. But they didn't notice it. Wonder why that might not be reported, then?)
Let's go back in time, as I think it's really important. Let's go back to, say, November 2, 1988. Why? The Robert T. Morris worm would be pretty significant for that time, and it is a prime example of a simple truth: if you can make it, then you can break it. That is, anything made by a human can be broken. Not only did the Morris worm exploit many different services and machines, it impacted them in a rather large way; they were brought to their knees. And what services did it target on those machines? Some come to mind immediately, but I'll get the list from a source so you get the full idea:
sendmail, finger, rsh/rexec and, importantly, the weakest link in the chain: passwords, in particular weak ones (the reason it's the weakest link is that it is a HUMAN creation). Further, due to the way the worm worked, it acted as a fork bomb, hence bringing the machines to their knees. Only someone completely oblivious to the Morris worm would not consider it a significant part of Internet history. And when was that again? 1988. When was the BBC article I mentioned written? 2012! And that is only ONE example. What about the CIH virus? That was even spread by accident on software from major retailers. And since it trashed the flash BIOS, it prevented machines from booting at all until the board, or at least the chip, was replaced. What about when the 13 year old from Canada known as Mafiaboy took down eBay, Amazon, Yahoo and several other websites by way of DDoS (distributed denial of service) attacks years ago? I might add that some of the attacks last year and in more recent years were DDoS attacks (there was certainly more than that, but DDoS was still one of the types). The point is the same, however: attacks on human creations, be it vandalism (graffiti, physically damaging someone's car, whatever else) or a computer network, even a network of networks, the Internet. They happen and they always will!
Now, there is something more important to realize that is wrong with, or missing from, this BBC article.
However, recently the evidence suggests that our efforts to secure the internet are becoming less and less effective, and so the idea of a radical alternative suddenly starts to look less laughable.
The only thing that is laughable is that suggestion. Are you really considering throwing out forty-some years of many, many people's creations and work? Seriously? That is frankly disgraceful and shows how destructive humans are! You claim security breaches cause lost revenue. That's true. So does shoplifting. So does a global economic crisis. So do many other things. The fact is, the means of protection are not the problem. The problem is that people have always been short-sighted in the planning stage of things and, furthermore, lazy about or ignorant of what is necessary: implementing a policy, having the proper skill set, and so on. Where do you think the saying 'hindsight is perfect vision' (and its variants) comes from?
No matter what, people's creations are bound to be flawed in some way, at some time. That's how this world is. To throw out 40+ years of development because some like to cause trouble would also cause a lot of monetary loss. Firstly, some companies are online-only businesses, so taking the 'Internet' offline would put them out of business (and even if they moved to a local store, they'd have to either build additional stores, meaning more money spent, or accept a far smaller customer base). And surely you understand supply and demand: when these companies go out of business, prices will potentially rise for those who do survive.
The fact of the matter is this: when developing something, you are bound to make a mess. That's expected. It also doesn't matter. What does matter is how you react to and address the issues that come up. And that's exactly what has been done. For example, I mentioned rsh/rexec. I remember when these were more common, and the flaws they had were hard to believe. But they were still there! You know what is far more common now? SSH (which allows for a remote shell and also running commands remotely). You also have scp (secure shell's remote copy). You don't see telnet services as much either, do you? That's because telnet is less secure than, say, ssh.
No matter what, nothing is perfect in this world. Nothing. Trying to be perfect is an easy way to go absolutely bonkers. Ask anyone who suffers badly enough from perfectionism (including myself, I admit) how it impacts their life, and you'll at least hear or see what I mean.
To summarize: a good friend from Holland once told me about a saying they have there. Translated to English, it means this:
Where there is lumber work, there’s wood chips.
I've since passed that on to other people when, for example, they made a mistake that had them down. That saying is golden. It's absolutely true, and it's something everyone can relate to at some point or other in their life. After all, no one is perfect....
(As an aside: all the above considered, the internet is pointless if it doesn't exist, and it would take a long time to build back up. What Alan Woodward is suggesting reminds me of security through obscurity. A perfect Internet in a non-perfect world is impossible. That's the bottom line.)
The other day I saw a reference to something abbreviated 'BYOD'. They claim it stands for 'Bring Your Own Device' (e.g., to work). I say it's more like bringing a demon to your office, and I mean demon in the most commonly thought-of sense: pure hell.
I especially love this part of the article mentioning it:
Out with the old: You may find yourself using your own device – laptop, tablet and/or smartphone – for work whether you like it or not.
The only thing that's 'out' is the author's (from the BBC) mind, and I mean that in the sense of out of touch with reality (of security, of business perspective, of liabilities, etc.). This idea is about as dumb as it gets with current standards. Allow me to explain this in a different way, for the non system and network administrators....
As the admin of a very small network (we're talking fewer than 10 connected devices), I would dread this idea. I cannot imagine it for a COMPANY, even a small or medium-sized one, let alone a large company. What a bloody nightmare that would be! Have you heard of the Bastard Operator From Hell? Well, if not, let's just say you would hear of something like him if this came to any administrator with half an ounce of sense (and I'm sorry, but that's about as nicely as I can put it).
The very fact of the matter is, this is a nice way to get your network breached. I guess they're forgetting the old type of virus called a master boot record / boot sector infector? All it took was someone unknowingly leaving an infected floppy in the computer and the computer then being rebooted, or being turned on later (e.g., a workstation). Monkey brain, anyone? Yes, there's word play involved: on one hand, I am describing the administrator (the one who wants to allow employees to bring their own devices), and on the other it's a reference to the Monkey and Brain viruses, both of which were MBR/BS viruses.
And if that isn't bad enough, maybe you need to brush up on your security terms (yes, just knowing OF these things, along with some BASIC logic, would be enough to concern any intelligent administrator). I'll mention some of them for you.
- Worm – a program similar to a computer virus, only with the added ability to spread over a network (through a flaw or hole in a service or an operating system, or even something as simple as emailing itself with a convincing message that running the program is a good idea [or exploiting email clients that run executables or scripts]). You think a virus is bad? What about a worm? What if it exported data? Oh, and see the next term.
- Backdoor – a program which ‘opens a door’ (e.g., some port on the victim computer) that allows access to the backdoor and potentially the entire system. Cleverly named, isn’t it?
- Denial of Service, Distributed Denial of Service, etc. I don’t think I have to discuss this one much.
- Spam botnet anyone?
The list could go on for quite a while. And to make it worse, additional ideas come out all the time. I remember, years ago, someone asked on a forum I was part of whether it was possible for an image to contain executable code. I replied with a resounding 'yes', under one condition: an image viewer being stupid enough to actually interpret/execute code found IN the image! What happened some months later? I seem to remember it actually happened as such. I know the first part is 100% valid, and I'm pretty sure a certain software company in Redmond, Washington managed it. Great!
I'm sorry to say this, but allowing employees to bring their own devices onto a company's network and expecting it to work is about as stupid as giving an unexploded grenade (or something similar, say a loaded firearm) to a 2 year old to play with. It's really that simple. It's opening a huge can of ugly worms (possibly literally), and you may as well expose the ENTIRE CORPORATE NETWORK to the outside. Forget all the layers of security, the DMZ, the firewall(s). Just forget all that stuff is in place! In short, let's all bring our own demons to work! If it isn't broken, we'll break it so they can fix it (and we'll also whine when they can't fix it as quickly as we'd like).
I'd hope that most companies see the stupidity in allowing such a thing. I know some major corporations do. That's why they prevent employees from installing software, and have other protections and even company policies ("you're fired if X, Y, Z") that specifically do NOT allow this type of thing.
I admit I wouldn't mind having my own choice of OS, BUT for EVERY employee to bring in their own device (e.g., because corporate policy expects you to), especially without any kind of audit, is stupid. The fact is you cannot expect things to go OK even when this isn't the case (as in, when you're not allowed to bring your own device), so how in the world they think they can keep control when you are allowed to is well beyond me (mainly because it would be a bloody nightmare, and simply impossible with how things work). I'd also note that bringing a company laptop is different.
But in short, you'll have issues come up. And if you allow BYOD, then you're going to have even more come up, guaranteed (you don't know where that device has been!).
I admit it: I am very critical of Microsoft, especially when it comes to security. But that's partly because of all the things I've seen over the years; their attitude towards security is not just shocking and scary, it's downright irresponsible. We ALL have to make the best of security issues. We will never be perfect; no one is. Not me, not even Dennis Ritchie was perfect. No one is, and that's fine. But to IGNORE issues when they affect so many people is just wrong. How many compromised systems are Windows based versus other systems? A lot more, I'd say. And you know, that's not because flaws don't exist in other operating systems. It's not because there aren't other systems. It's not because there aren't exploits for other systems. I assure you, there are a lot of flaws in many other operating systems. Observe the following facts.
- The first worm (or one of the first; it is certainly considered the first and the most notorious one of those times) was designed to affect Unix based operating systems! Known as the Morris worm, it targeted rsh/rexec (which have a long history of security problems and should not have been used for years now), sendmail, finger and weak passwords. Those are Unix programs and services (minus the last one, which is a human failure).
- A remote hole or flaw in a Unix/Linux system can often lead to root access (that is, the administrator account, for the Windows folks out there). Yes, this can happen in Windows too, but there it often resulted instead in a system crash or other outcomes (which could happen in Unix/Linux as well, but there the former is more likely).
- I remember Unix/Linux malware from YEARS ago (well over a decade ago).
- Macintosh operating systems have had pretty significant malware too!
So WHAT is the REAL problem? The problem, I would say, is twofold:
- When something is “dumbed down” (no offence to any Windows users – it’s not all of them), it becomes easier to use. Clearly that is the point. However, when something is that easy to use, you don’t have to learn much, which means you’re more unaware of things, and that means you’re more likely to become a victim of a flaw or attack on the system. It’s kind of like, for a rather realistic example: what do you do if the brakes in your car aren’t working, or if your gas pedal gets stuck? Do you turn the key, or what? You SHOULD learn this, but that does not mean you will; after all, the problem may be a newer phenomenon. But if you’re experienced, say with a computer system, you can troubleshoot and figure things out much more easily.
- Secondly, it’s not all the users’ fault (despite a quote you may see from time to time on this site): Microsoft has truly shown ignorance, outright stupidity, and, simply put, they do not always even care enough to fix a problem (which I’ve written about before).
So what am I getting at with this article? Windows 8, which is not even out yet. What IS out is something they are at least considering: picture passwords. WHAT?! Check this…
Once you have selected an image, we divide the image into a grid. The longest dimension of the image is divided into 100 segments. The shorter dimension is then divided on that scale to create the grid upon which you draw gestures. To set up your picture password, you then place your gestures on the field we create. Individual points are defined by their coordinate (x,y) position on the grid. For the line, we record the starting and ending coordinates, as well as the order in which they occur. We use the ordering information to determine the direction the line was drawn in. For the circle, we record a center point coordinate, the radius of the circle, and its directionality. For the tap, we record the coordinate of the touch point.
There’s more to be found on their blog, which you can find at http://blogs.msdn.com/b/b8/archive/2011/12/16/signing-in-with-a-picture-password.aspx
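To make the scheme they describe a bit more concrete, here’s a rough sketch of it in Python. To be clear: every name and number here is my own modelling of their description, not Microsoft’s code – it just shows the grid division and the three recorded gesture types.

```python
# Rough model of the picture-password grid described in Microsoft's post.
# All function names and structures here are my own, for illustration only.

def make_grid(width, height, segments=100):
    """The longest dimension is divided into `segments` cells; the
    shorter dimension uses the same cell size. Returns the cell size."""
    return max(width, height) / segments

def to_grid(point, cell):
    """Map a pixel coordinate to its (x, y) position on the grid."""
    x, y = point
    return (int(x // cell), int(y // cell))

def record_tap(point, cell):
    # A tap is just the coordinate of the touch point.
    return ("tap", to_grid(point, cell))

def record_line(start, end, cell):
    # Start and end coordinates, in order, so direction is preserved.
    return ("line", to_grid(start, cell), to_grid(end, cell))

def record_circle(center, radius, clockwise, cell):
    # Center coordinate, radius (in grid cells), and directionality.
    return ("circle", to_grid(center, cell), int(radius // cell), clockwise)

cell = make_grid(1920, 1080)   # longest side -> 100 segments of 19.2 px
password = [
    record_tap((960, 540), cell),
    record_line((100, 100), (1800, 100), cell),
    record_circle((960, 800), 200, True, cell),
]
print(password)
```

Note how little information each gesture actually carries once it is snapped to the grid – which matters for the combination counting below.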
The funny part is they actually do the mathematics for password combinations versus the above, and what does it say? Look here:
The analysis of the number of unique PINs is trivial. A 4-digit PIN (4 digits with 10 independent possibilities each) means there are 10^4 = 10,000 unique combinations.
When looking at alphanumeric passwords, the analysis can be simplified by assuming passwords are a sequence of characters comprised of lower case letters (26), upper case letters (26), digits (10), and symbols (10). In the most basic case, when a password is comprised strictly of n lower case letters, there are 26^n permutations. When the password can be any length from 1 to n letters, then there are 26^1 + 26^2 + … + 26^n permutations.
For instance, an 8-character password has 208 billion possible combinations, which to most people would seem amazingly secure.
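Their arithmetic is easy enough to check for yourself. A quick sanity check in Python (my own snippet, just verifying the quoted figures; the sum-over-lengths form is the geometric series of 26^k):

```python
# Sanity-check the password-space numbers quoted above.

pin_space = 10 ** 4          # a 4-digit PIN: 10 possibilities per digit
print(pin_space)             # 10000

lower_8 = 26 ** 8            # exactly 8 lowercase letters
print(lower_8)               # 208827064576 -- the "208 billion" figure

def lowercase_space(n):
    """All lowercase-only passwords of any length from 1 to n."""
    return sum(26 ** k for k in range(1, n + 1))

print(lowercase_space(8))
```

So the “208 billion” figure is simply 26^8 – lowercase letters only, fixed length 8.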
Unfortunately, the way most users pick passwords is far from random. Left to their own devices, people use common words and phrases, names of family members, and so on.
So, because most users choose insecure passwords, you’re thinking this is going to be any better? Wow, stupidity sure is potent. Unless you require a certain number of gestures, it’s not going to be better, despite what your math comes up with. Add to that the fact that any modern system can allow MORE THAN 8 character passwords! But even then, well, even Graham Cluley got it right. Note: I sometimes don’t even agree with him, and have in the past criticised him about some things. Who is he? A top security researcher at Sophos (that’s an antivirus and internet security software company). And what did he say? Let’s see…
“With normal password entry, what you’re doing is asterisked on the screen,” said Mr Cluley. “With this gesture input, folks may find it easier to see the movements you are making.”
And there’s something else of importance here: it just might be better if an operating system encouraged stronger passwords in the first place – and that includes checking them against a dictionary file (Unix has done this for years!). Look:
[cody@triangle src]$ passwd
Changing password for user cody.
Changing password for cody.
(current) UNIX password:
BAD PASSWORD: it is based on a dictionary word
BAD PASSWORD: it is based on a dictionary word
BAD PASSWORD: it is too short
passwd: Have exhausted maximum number of retries for service
Now, firstly, note that on Unix/Linux systems, echo is turned off completely for password entry. So the first line I typed is ‘passwd’, to change the current user’s password. Then I typed in my current password; this is checked to decrease the possibility that I left the console unattended and some donkey is trying to change my password. Then I tried passwords based on dictionary words. What happened? Ah yes – based on a dictionary word and therefore NOT ALLOWED! The last attempt? ‘test’ (w/o quotes). Yes, simply ‘test’. And look, it’s too short! Imagine how hard that is?
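For anyone curious what such a check looks like inside, here’s a minimal sketch of the kind of test that pam_cracklib-style tools perform. Everything in it is made up for illustration – the tiny inline word list, the minimum length and the function names – a real system reads a dictionary file such as /usr/share/dict/words and applies far more thorough heuristics:

```python
# Minimal sketch of a dictionary-based password check, in the spirit of
# pam_cracklib. Word list, minimum length and messages are illustrative.

WORDS = {"dictionary", "password", "letmein", "dragon"}  # real lists are huge
MIN_LEN = 8

def check_password(candidate):
    low = candidate.lower()
    if len(candidate) < MIN_LEN:
        return "BAD PASSWORD: it is too short"
    # Also catch trivial variants like "password123" by stripping digits.
    if low in WORDS or low.rstrip("0123456789") in WORDS:
        return "BAD PASSWORD: it is based on a dictionary word"
    return "OK"

print(check_password("dictionary"))   # rejected: dictionary word
print(check_password("test"))         # rejected: too short
print(check_password("vT9#kq2!LmX"))  # accepted
```

That’s the whole idea: a few lines of checking at password-set time, and the weakest choices never make it onto the system at all.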
So, basically, what we have here is Microsoft once again being irresponsible security-wise. Why? Are they stupid? Blind? Ignorant? All of the above? I don’t really care. The only thing I care about is that they are being irresponsible, and once again they seem to believe they can just ignore simple steps that help with security. Whether someone is actually breached through this or not is irrelevant: the fact is IT IS INSECURE. At the very least, don’t SHOW on the screen what’s being done. Either way, when you ignore security, or miss something obvious, you increase the risk of someone being compromised. And what happens then? OTHERS, who they don’t even know, SUFFER. You have spam botnets, you have DoS and DDoS attacks, you have scanners going across routers to servers… worms eating bandwidth… and so on.
Lastly, I wanted to say one thing before some wise person tries to claim something about this concept of gestures compared to keyboards. They might mention keyloggers and then claim the gestures are immune to them. Sorry to burst your bubble, but no, they’re not immune at all. You can determine where the cursor is, you can change it, and you can always observe at the low level what the user is triggering at the high level. This is why a program can trigger a ctrl-alt-del event just as the user can (and the operating system acts upon it either way). Whatever the user does, the system has to interpret and act on – which means it can be captured.
No matter which way you look at it, this is a foolish move (pardon the pun). It’s also amusing (the possible security implications aside) – they try to justify it because users use weak passwords. Fact is, users will use weak gestures too. Humans are the weakest link in the chain. And if they argue that it has to be X–Y number of gestures, then why the heck can you not require the same of passwords?! Exactly my point. I get the feeling they only care about sales, and with an “innovative” thing they are possibly going to get more interest (not that they don’t have enough already).