101 Years of Xexyl…

… for those who can count in binary, at least; indeed it was five years ago yesterday that I registered xexyl.net. I would never have suspected I would have what I have here today. I would never have imagined having my own RPM repository, and yet that is only one of the accomplishments here (for whatever each is worth).

I fully admit I live in something of a fantasy world (which is something of a paradox: if I admit it, does that make it real? If it is real then what is fantasy and how real is it?) and so it seems appropriate that, given the anniversary of xexyl.net, I reflect upon the history of xexyl.net and some of the views I have shared (the namesake is much older, as I have noted in the about section. It was many years ago that I played the game Xexyz and it clearly made an impact on me – perhaps not unlike the space rocket that was launched into the moon, some years back… But xexyl.net is only five years old and while I have older domains, this is the first one I really feel is part of me).

I have written quite a few off-the-wall, completely bizarre and (pseudo) random articles, but I try to always have some meaning to them (no matter how small or large and no matter how obvious or not) even if the meanings are somewhat ambiguous, cryptic and vague (as hard as that may be to imagine of someone who elaborates as much as I do on any one topic, I do in fact abuse ambiguity and vagueness, and much of what I write – and indeed say – is cryptic). I do know, however, that I do not fully succeed in this attempt. To suggest anything else would be to believe in perfection in the sense of no room for improvement.

I strongly believe that there is one kind of perfection that people should strive for, something that many might not think of as ‘perfect’: constantly improving yourself, eternally evolving as a person. When you learn something new or accomplish something (no matter how small or large), rather than think you are finished (something that one definition of ‘perfect’ suggests) you should think of it as a percentage: every time you reach ‘perfection’ – as 100% – you should strive for 200% of the last milestone (200% of 1 is 2, 200% of 2 is 4, 200% of 4 is 8, etc.). This is, interestingly enough, exactly like binary: 1, 2, 4, 8, 16, 32, 64, 128, 256, 512 and so on (each increment is two times the previous value). In between the powers of 2 you make use of the other bits. For example, 1b + 1b (1 + 1 decimal) is 10b (2 decimal). 10b + 1b (2 + 1 decimal) is 11b (3 decimal). 11b + 1b (3 + 1 decimal) is 100b (4 decimal). This repeats in powers of 2 because binary is base 2. I’ve written about this before but this is what I will call – from this point onward – ‘binary perfection’. It is also the only ideal perfection exactly because it is constantly evolving. This may very well be an eccentric way to look at it but I am an incredibly eccentric person. Still, this is the ‘perfect analogy’ and I daresay is a brilliant and accurate analogy.
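Since I mention binary so much, the counting above can be sketched at a shell prompt. The to_binary helper below is purely illustrative (my own throwaway function, not a standard command), but it shows the additions and the doubling exactly as described:

```shell
# to_binary: print a non-negative integer in binary.
# (An illustrative throwaway helper, not a standard tool.)
to_binary() {
    n=$1 b=
    while [ "$n" -gt 0 ]; do
        b="$((n % 2))$b"    # peel off the lowest bit
        n=$((n / 2))
    done
    echo "${b:-0}"
}

# the additions from above: 1b + 1b = 10b, 10b + 1b = 11b, 11b + 1b = 100b
to_binary $((1 + 1))    # prints: 10
to_binary $((2 + 1))    # prints: 11
to_binary $((3 + 1))    # prints: 100

# and the doubling itself – every value is 200% of the previous one:
i=1
while [ "$i" -le 128 ]; do
    printf '%s ' "$(to_binary "$i")"
    i=$((i * 2))
done
echo    # prints: 1 10 100 1000 10000 100000 1000000 10000000
```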

As always, true to my word, I will continue this when I can. Because as long as I admit my mistakes I am not in denial; as long as I am not in denial, I can learn more, improve myself and those around me. While I do it for myself (this is one of the rare things I consider myself and myself alone), if it betters anyone else, then I will consider it a positive side effect. But indeed there are times where I am inactive for long periods of time and there are other times where I have a couple or more posts in a month (or a fortnight or whatever it is). This is because of what I have pointed out: I do this for me but I also believe in openness with respect to sharing knowledge and experience. This includes but is not limited to programming (and by programming I refer to experience, concepts as well as published works, whether my work alone or my contributions to others’ works). But I am not an open person and I never have been. Perhaps this is best: I am a rather dark, twisted individual, an individual possessed by many demons. These demons are insidious monsters of fire that lash out at me (and at times my surroundings) but they are MY demons and I’ll be damned if anyone tries to take them away from me.

I am Xexyl and this is my manifesto of and for eternal madness…

The Secret: Trust and Privacy

First, as is typical of me, the title is deliberate, but beyond the pun it actually is an important thing to consider, which is what I’m going to tackle here. The secret does indeed imply multiple things, and that includes the secret to secrets, the relation between privacy and secrets, and how trust is involved in all of this. I was going to write a revision to my post about encryption being important (and I might still, if only to amend one thing: to give credit to the FBI boss for something, something commendable) but I just read an article on the BBC that I feel gives me much more to write about. So let me begin with trust.

Trust is something I refer to a lot, in person, here and pretty much everywhere I write, because considering trust is a good thing. Indeed, trust is given far too easily. As I have outlined before, even a little bit of trust – seemingly harmless – can be abused. Make no mistake: it is abused. The problem is, if you’re too willing to trust, how do you know when you’ve been too trusting? While I understand people need to have some established trust within their social circles, there are some things that do not need to be shared and there are things that really should not be entrusted to anyone except yourself, and that potentially includes your significant other. Computer security is something that fits here. Security in general is. The other problem is ignorance. Ignorance is not wrong but it does hurt, and if you don’t understand that something is risky (which, I would argue, describes the fanatical and especially the younger Facebook and other social media users), how do you proceed? For kids it is harder, as it is known that kids just do not seem to understand that they are not immortal, not immune to things that really are quite dangerous. However, if you are too trusting with computers, you are opening a – yes, I know – huge can of worms, and it can cause all sorts of problems (anything from complete takeover of your computer to monitoring of your activities, which can lead to identity theft, phishing and many other things). The list of issues that granting trust can lead to is, I fear, unlimited in size. It is that serious. You have to find a balance and it is incredibly hard to do, no matter how experienced you are. I’ve made the general ideas clear before, but I don’t think I’ve actually tackled this issue with privacy and secrecy. I think it is time I do that.

In the wake of the Edward Snowden leaks, many more people are concerned for their privacy. While they should have always been concerned, it doesn’t really change the fact that they are now at least somewhat more cautious (or many are, at least). I have put this thought to words in multiple ways. The most recent is when I made a really stupid mistake (leading to me – perhaps a bit too critically, but the point is the same – awarding myself the ID 10 T award), all because I was far more exhausted than I thought. Had I been clear headed I wouldn’t have had the problem. But I wasn’t clear headed, and how could I know it? You only know it once it is too late (this goes for driving too, and that makes it even more scary because you could hurt someone else, whether someone you care about or someone you don’t even know). The best way to word this is new on my part: despite the dismissal people suggest (“what you don’t know cannot hurt you” is 100% wrong), the reality is this: what you don’t know can hurt you, it likely will, and worse, it could even kill you! This is not an exaggeration at all. I don’t really need to get into examples. The point is these people had no idea to what extent spying was taking place. Worse still, they didn’t suspect anything of the sort. (You should actually expect the worst in this type of thing but I suppose that takes some time to learn and come to terms with.) Regardless, they do now. It has been reported – and this is not surprising really, is it? – that a great portion of the United States population is now very concerned with privacy and has much less trust in governments (not just the US government, folks – don’t fall for the trap of believing only some countries do it; you’re only risking harm to yourselves if you do!).
What some might not think of (although certainly more and more do and will over time), and this is something I somewhat was getting at with the encryption post, is this: if the NSA could not keep their own activities (own is keyword number one) secret and safe – and that is ironic itself, isn’t it? Very ironic, to the point of hilarity – then how can you expect them to keep YOUR (keyword number two) information secret and safe? You cannot. There is no excuse for it, and they aren’t the only ones – government, corporations, it really doesn’t matter – too many think of security after the fact (and those that do think of it in the beginning are still prone to making a mistake or not thinking of all angles… or a bug in a critical component of their system leaves the hard work in place much less useful or relevant). The fact they are a spying agency and they couldn’t keep that secret is, to someone who is easily amused (like myself), hilarious. But it is also serious, isn’t it? Yes, and it actually strengthens (or further shows) my point that I will get to in the end (about secrets). To make matters worse (as hard as that is to fathom), you have the increase – and I will tell everyone this: it is not going to go away and it is not going to be contained; no, it will only get worse – in point of sale attacks (e.g., through malware), which in less than a year has led to more corporations having major leaks of confidential information than I would like to see in five or even ten years. And that is the number of corporations – the number of victims is in the millions (per corporation, even)! This information includes credit card details, email addresses, home addresses… basically the information that can help phish you, even enough to steal your identity (to name one of the more extreme possibilities). Even if they don’t use it for phishing, you would be naive to expect them not to use the stolen information.

I know I elaborate a lot and unfortunately I haven’t tied it all together yet. I promise it is short, however (although I do give some examples below, too, that do add up in length). There is only one way to keep something safe, and that is this: don’t share it. The moment you share something with anyone, the moment you write it down, type it (even if you don’t save it to disk), do some activity that is seen by a third party (webcam or video tape, anyone?), it is not a secret. While the latter (being seen by camera) is not always applicable, the rest is. And what good is a secret that is no longer a secret? Exactly: it is no longer secret and therefore cannot be considered a secret. Considering it safe because you trust someone – regardless of what you think they will do to keep it safe and regardless of how much you think you know them – is a dangerous thing (case in point: the phenomenon called, as I seem to remember, revenge porn). In the end, always be careful with what you reveal. No one is immune to these risks (if you are careless someone will be quite pleased to abuse it) and I consider myself just as vulnerable exactly because I AM vulnerable!

On the whole, here is a summary of secrets, trust and security: the secret to staying safe, and as secure as possible, is to not extend trust over things that need not be shared with anyone in the first place. If you think you must share something, think twice, really hard, and consider it again: you might not need to, no matter how much the person (or entity) claims it will benefit you. Do you really, honestly, need to turn your thermostat on from your computer or phone? No, you do not, and some thermostats have recently been shown to have security flaws. It isn’t that important. What might seem to be convenient might actually be the opposite in the end.

Bottom line: if someone insists you need something from them or their company, they do not have your best interest at heart! Who is anyone else to judge whether you need their service or product?

A classic example, and a funny story where the con-artist was exposed: if you go to a specialist to have an antique valued, it is one thing for them to tell you it is worth X. It is another thing entirely for them to tell you it is worth X and then offer to buy it from you; never accept that. The story: years ago, my mother caught a smog-check service in their fraud (and they were consequently shut down for it, as they should have been) because, despite being female – and therefore what the con-artist thought would be easy prey; nice try, loser – she is incredibly smart and he was a complete moron. He was so moronic that, despite my mother being there listening to the previous transaction between the customer (“victim”) and himself, he told my mother the same story: you have a certain problem and I’ll charge X to fix it. The moron didn’t even change the story at all – he used it word for word, same problem, same price, right in front of my mother. In short: those telling you the value of something and then telling you they’re willing to buy/fix/whatever, are liars. Some are better liars, but they’re still liars.

It is even worse when they are (for example) calling you – i.e., you didn’t go to them! Unsolicited tech support calls, anyone? Yes, this happened not long ago. I really pissed off this person by turning the tables on him. While what I did is commendable (as he claimed, I was wasting his time, which means time he could not spend cheating someone else), do note that some would have instead fallen victim, and the reason he kept at it until I decided to play along (and make a fool of him, as you’ll see if you read on) is exactly because they are trained: trying to manipulate, trying to keep me on the line as long as possible (which means more time to try to convince me I need their service), and they only wanted to cheat me out of money (or worse: cause a problem with my computer that they were claiming to fix). Even though I got the better of them (as I always have), to the point of him claiming I was wasting HIS time, they will just continue on and try the next person until they find a victim. It is just like spam: as long as it pays, they will keep it up. People do respond (directly and indirectly) to spam, and it will not end because of this, as annoying as that may be. Again, if some entity is telling you that you need their service or product, it is not your interest they have at heart but their own! That is undeniable. Even if you initially went to them, if they are insisting you need their product or service, they are only there to gain and not to help. This is very different from going to a doctor and being told something serious (although certainly there are quacks out there, there is a difference and it isn’t too difficult to discern). Always be thinking!

2014 ID10T World Champion

There are two things I want to point out. The first is that my mistake is not as bad as it initially seems, because prior to systemd this would not have been a problem at all. Second, I remark on why I admit to these types of things:

First, and perhaps the most frustrating for me (but what is done is done and I cannot change it, only accept it and move on), is that previously – before /bin, /sbin, /lib and /lib64 were made symbolic links to /usr/bin, /usr/sbin, /usr/lib and /usr/lib64 – I would have been fine. Indeed, I can see that is where my mind was, besides the other part I discussed (about how files can be deleted yet still used as long as a reference is available; it is only once all references to the file are closed that the file is no longer usable). Where were mount and umount before this? And did they use /usr/lib64 or was it /lib64? The annoying thing is: mount was under /bin and used /lib64, which means that it used to be – but no longer is, with systemd – on the root volume. So umounting /usr would have meant /usr was gone, but /bin would still be there, and so I would have still had access to /bin/mount. Alas, that is one of the things I didn’t like about some changes over the years, and it hit me hard. Eventually I will laugh at it entirely but for now I can only laugh in some ways (it IS funny but I’m more annoyed at myself currently). As I get to in my second point, I’m not renaming this post (dignity remains strong) even though it is not as bad as I made it sound initially. While I would argue it was a rather stupid mistake, I don’t know if champion is still correct. Maybe ‘last place in the final round’ is more correct. Maybe not even that. Regardless, the title (for once the pun is not intended) is remaining the same.

Second, some might wonder why I admit to such a thing as below (as well as other things like when I messed up Apache logs… or other things I’m sure I have written about, before… and will in the future…) when xexyl.net is more about computers in general, primarily focusing on programming, Linux (typically Red Hat based distributions) and security. The reason I include things like the below is that I know that my greatest strength is that I’m willing to accept mistakes that I make; I don’t ever place the blame on someone or something else if I am responsible. Equally I address my mistakes in the best way possible. Now ask yourself this: If I don’t accept my mistakes, can I possibly take care of the problem? If I did not make a mistake – which is what being in denial really is – then there isn’t a problem at all. So how can I fix a problem that isn’t a problem? No one is perfect, and my typical joke aside (I consider myself, much of the time, to be no one, and “no one is perfect”), it is my thinking that if I can publicly admit to mistakes then it shows just how serious I am when I suggest to others (for example, here) that the only mistake is not accepting your own mistakes. So to that end, I made a mistake. Life goes on…

There are various web pages out there about computer user errors. A fun one that I’m aware of is a top 10 worst mistakes at the command line. While I certainly cannot lay claim to some of the obvious ones, I am by no means perfect. Indeed, I have made many mistakes over the years and I wouldn’t have it any other way: the only mistake would be to not accept the mistake(s) and therefore not learn from them (although the mistake I’ll reveal here is one that is hard to learn from in some ways, as I explain: fatigue is very hard to judge, and by extension being tired means you don’t even know how tired you are). Since I often call myself a no-one or nobody (exactly what Nemo, as in Captain Nemo of 20,000 Leagues Under the Sea, means in Latin), I derive a great deal of amusement from the idea that “no one is perfect”, exactly because of what I consider myself. But humour aside, I am not perfect at all. While I have remarked on this before, I think the gem of them all is this:

There is no such thing as human weakness, there is only
strength and… those blinded by… the fallacy of perfection.
— Xexyl

If you can accept that truth then you can always learn, always expand yourself, always improve yourself and potentially those around you. This is hard for some to accept but those who do accept it know exactly what I mean. I assure everyone, you are not perfect!

So with that out of the way, let me get to the point of this post. I admit that mistakes of the past fail to come to mind, although I know I’ve made many, some more idiotic than others. However, around 6:00 today I made what is absolutely my worst mistake ever, one that gives me the honour and privilege of holding the title: 2014 ID10T World Champion.

What is it? Prepare yourselves and challenge yourself as well. A while back I renamed the LVM volume group on my server. Something, however, occurred to me: obviously, some file systems cannot be umounted in order to be mounted under the new volume group name. That doesn’t mean that files at the current mount point cannot be accessed. What it does mean, however, is that if I update the kernel, the bootloader will have a reference to the old volume group, so I will have to update the entry the next time I reboot. I did keep this in mind and I almost went this route, until this morning when I got the wise (which is to say really, really stupid) idea of running:

# init S

in order to get to single user mode, thereby making most filesystems easier to umount. Of course, I had already fixed /home, /opt and a few others that don’t have to be open. I was not thinking in full here, however, and it went from this to much worse. After logging in as root (again, obviously) to “fix” things, I went to tackle /usr which is where all hell broke loose…

It used to be that you would have /bin and /sbin on a different file system from (or at least not the same one as) /usr/bin and /usr/sbin. However, on more modern systems, you have the following:

$ ls -l /{,s}bin
lrwxrwxrwx. 1 root root 7 Dec 18  2013 /bin -> usr/bin
lrwxrwxrwx. 1 root root 8 Dec 18  2013 /sbin -> usr/sbin

which means that anything that used to be under /bin would now be /usr/bin. In addition, you also had /lib and (for 64-bit builds) /lib64. However, similar to the above, you also have:

$ ls -l /lib{,64}
lrwxrwxrwx. 1 root root 7 Dec 18  2013 /lib -> usr/lib
lrwxrwxrwx. 1 root root 9 Dec 18  2013 /lib64 -> usr/lib64

which means you absolutely need /usr to be mounted! Even if I had busybox (or similar) installed for statically linked commands – and I did not: a recent upgrade to the latest release of the server, combined with my not installing busybox again, saw to that – I would still have been screwed over by the simple fact that once /usr is umounted, I have no way to run mount again! Most disturbing is that I knew what I was about to do was risky – risky because I was going to use an option that had potential for harm – yet without the worry I just described. However, as soon as I ran the command, but before I confirmed it, I knew I would be forced to do a hard reboot. The command is as such:

# /usr/bin/umount -l /usr

Indeed, I just made it impossible to mount, change run level or do much of anything other than reboot (and not by command! That was already made impossible by my idiocy!). And so I did. Of course, I still had to update the boot entry. While that was the least of my worries (it was no problem), it is ironic indeed, because I would have had to do that regardless of when I next rebooted. So all things considered, for the time being, I am, I fear, the 2014 World Holder of the ID 10 T award. Indeed, I’m calling myself an idiot. I would argue that idiot is putting it way too nicely.
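In hindsight, a small sanity check would have caught this before I hit enter. The sketch below is merely illustrative (the list of commands to check is my own choosing), but the idea is to ask whether the filesystem you are about to umount contains the very tools you would need afterwards:

```shell
#!/bin/sh
# Before unmounting $fs, warn if any essential command resolves to a
# path under it (on merged-/usr systems, /bin/mount -> /usr/bin/mount).
fs=/usr
for cmd in mount umount sh; do
    path=$(command -v "$cmd") || continue
    real=$(readlink -f "$path")     # resolve /bin -> /usr/bin symlinks
    case "$real" in
        "$fs"/*) echo "WARNING: $cmd is $real -- do not umount $fs!" ;;
    esac
done
```

On a merged-/usr system this prints a warning for every one of those commands, which is exactly the trap I fell into.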

As for the -l option: given the description in umount(1), the hour it was and the sleep I did (not) get last night, I was thinking along these lines (and this is why I didn’t think beyond it, stupid as that is!): as long as you have a reference to a file, even if it is deleted, you can still use it and even have the chance to restore it (or execute it or… keep it running). Once all references to a deleted file are gone, then it is gone. So when I read:

-l, --lazy
Lazy unmount. Detach the filesystem from the filesystem hierarchy now, and clean up all references to the filesystem as soon as it is not busy anymore. (Requires kernel 2.4.11 or later.)

I only thought of the latter part and not the detach NOW portion. In addition, I wasn’t thinking of the commands themselves. Clearly, if programs are under /usr then I might need /usr to … run mount! This is a perfect example, I might add, of how dangerous being tired is: you might think you have the clarity to work on something, but if you don’t have that clarity then you also don’t have the clarity to determine whether you have the ability to judge any of it in the first place. This implies I likely won’t get much done today, but at least I did do one thing: I fixed the logical volume rename issue. That is something, even if it obliterated my (good) system uptime while at the same time revealing how bad MY uptime was (I should not have been at the server, let alone up at all!).
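The first half of that reasoning is sound, by the way, and easy to demonstrate with a few standard commands (a sketch; the file comes from mktemp): a file that is unlinked while a descriptor to it is still open remains usable until the last reference is closed.

```shell
#!/bin/sh
tmp=$(mktemp)
echo "still here" > "$tmp"
exec 3< "$tmp"    # keep an open file descriptor (a reference)
rm "$tmp"         # unlink: the name is gone, the inode lives on
cat <&3           # prints: still here
exec 3<&-         # close the descriptor; now the data is truly gone
```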

Using ‘script’ and ‘tail’ to watch a shell session in real-time

This is an old trick that my longest standing friend Mark and I used years ago on one of his UltraSPARC stations while having fun doing any number of things. It can be used for all sorts of needs (e.g., showing someone how to do something, or allowing someone to help debug your problem, to name two of many), but the main idea is that one person is running tasks (for the purpose of this article I will pretend this person is the victim) and more generally using the shell, while the other person (and pretending that this person is the snoop) is watching everything, even if they’re across the world. It works as long as both are on the same system and the victim writes output (directed) to a file that the snoop can read (as in, open for reading).

Before I get to how to do this, I want to point something else out. If you look at the man page for script, you will see the following block of text under OPTIONS:

-f, --flush
Flush output after each write. This is nice for telecooperation: one person does `mkfifo foo; script -f foo’, and another can supervise real-time what is being done using `cat foo’.

But there are two problems with this method both due to the fact that the watching party (as I put it for amusement, the snoop) has control. For example, if I do indeed type at the shell:

$ mkfifo /tmp/$(whoami).log ; script --flush -f /tmp/$(whoami).log

… then my session will block, waiting for the snoop to type at their prompt:

$ cat /tmp/luser.log

(assuming my login name is indeed luser). And until that happens, even if I type a command, no output occurs on my end (the command is not ignored, however). Once the other person does type that, I will see the output of script (showing that the output is being written to /tmp/luser.log, plus any output from commands that I might have typed). The other user will see the output too, including which file is being written to. Secondly, the snoop decides when to stop. When they hit ctrl-c, then once I begin to type I will see, at my end, something like this:

$ lScript done, file is /tmp/luser.log

Note that I hit the letter l, as if I was going to type ls (for example), and then I see the ‘Script done’ output. If I finish the command, say by typing s and then hitting enter, then instead of seeing the output of ls, I will see (since typing ls hardly takes any time, I will show it as it would appear on my screen, with the command completed, or so one would suspect):

$ lScript done, file is /tmp/luser.log
$ s
-bash: s: command not found

Yes, that means that the first character closes my end (the lScript is not a typo, that is what appears on my screen), shows me the typical message after script is done and then and only then do I get to enter a command proper.
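The blocking behaviour, incidentally, is easy to reproduce with the fifo alone, without script at all (a sketch; the path is illustrative):

```shell
#!/bin/sh
fifo=$(mktemp -u)           # a fresh, unused path
mkfifo "$fifo"
echo "hello" > "$fifo" &    # the writer blocks (in the background) on open...
cat "$fifo"                 # ...until a reader arrives; prints: hello
rm "$fifo"
```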

So the question is, is there a way that I can control the starting of the file, and even more than that, could the snoop check on the file later (doesn’t watch in the beginning) or stop in the middle and then start watching again? Absolutely. Here’s how:

  • Instead of making a fifo (first in first out, i.e., a queue) I specify a file to write the script output to (a normal file, with a caveat noted below), or alternatively let the default file name be used instead. So what I type is:
    $ script --flush -f /tmp/$(whoami).log
    Script started, file is /tmp/luser.log
  • After that is done, I inform the snoop (somewhere else; or otherwise they use the --retry option of tail, to repeatedly try until interrupted or until the file can be followed) – now THAT is something you don’t expect to ever be true, is it? Why would I inform a snoop of anything at all?! This is of course WHY I chose the analogy in the first place – and they then type:
    $ tail -f /tmp/luser.log

    And they will see – by default – the last ten lines of the session (the session implies the script log, so not the last ten lines of my screen!). They could of course specify how many lines but the point is they will now be following (that’s what -f does) the output of the file, which means whenever I type a command, they will see that as well as any output. This will happen until they hit ctrl-c or I type ‘exit’ (and if I do that they will still try to follow the file so they will need to hit ctrl-c too). Note that even if I remove the log file while they’re watching it, they will still see the output until I exit the script session. This is because they have a file descriptor of the log file and so while the file is no longer written to, they are still following it (this is because of how inodes work).
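Putting the two halves together, the whole exchange condenses to this (the user name and path are illustrative; the interactive parts are shown as comments so nothing here blocks):

```shell
# The “victim”, recording the session (-f is short for --flush):
#   script -f /tmp/luser.log
#   ... run commands as usual; everything is appended to the log ...
#   exit                      # typing 'exit' ends the recording
#
# The “snoop”, on the same machine, at any point during the session:
#   tail -f /tmp/luser.log    # follow new output as it is written
# or afterwards, simply review the end of it:
#   tail -n 50 /tmp/luser.log
#
# tail on a growing file is the whole trick; for example:
log=$(mktemp)
printf 'line 1\nline 2\nline 3\n' > "$log"
tail -n 1 "$log"    # prints: line 3
rm "$log"
```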

As for the caveat I referred to, it is simply this: control characters are also sent to the file, so it isn’t plain ASCII only. Furthermore, for the same reason, text editors (e.g., vi) will not display correctly to the snoop.

In the end, this is probably not often used, but it is very useful when it is indeed needed. Lastly, if you were to cat the output file, you’d see it as if you were watching the session in real-time. Most importantly: do not ever do anything that would reveal confidential information, and if you have anything you don’t want shown to the world, do not use /tmp or any publicly readable file (and rm it when done, too!). Yes, you can have someone read a file in your directory as long as they know the full path and have the proper permissions to the directory and file.
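To expand on that last point: a safer arrangement than a world-readable file in /tmp is a directory of your own with deliberately narrow permissions (a sketch; the directory name is illustrative, mktemp merely keeps it runnable, and the snoop would of course need to be in your group):

```shell
#!/bin/sh
# Keep session logs in a private directory rather than /tmp.
dir=$(mktemp -d "$HOME/sessions.XXXXXX")
chmod 750 "$dir"       # owner: everything; group: may enter/list; world: nothing
log="$dir/demo.log"
: > "$log"
chmod 640 "$log"       # group may read the log, world may not
# the session itself would then be:  script -f "$log"
# and the snoop (in your group):    tail -f "$log"
ls -l "$log"
rm -r "$dir"           # clean up when done
```

The 750/640 split means your group can read the log but the rest of the world cannot; chgrp the directory to a shared group if the snoop is not already in yours.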

Encryption IS Critical

I admit that I’m not big on mobile phones (and I also admit this starts out about phones, but it is a general thing and the rules apply to all types of nodes). I’ve pointed this out before, especially with regard to so-called smart technology. However, just because I personally don’t have much use for them most of the time does not mean that the devices should not be as secure as possible. Thus I am, firstly, giving credit to Apple (which, all things considered, is exceptionally rare) and Google (which is also very rare). I don’t like Apple, particularly because of Steve Jobs’ arrogance (which I’ve also written about), but that is only part of it. At the same time, I do have fond memories of the early Apple computers. As for Google, I have serious issues with them, but I haven’t actually put words to it here (or anywhere, actually). But just because I don’t like them does not mean they can never do something right or that I approve of. To suggest that would be me being exactly what I call them out for. Well, since Apple and Google recently announced they were to enable encryption by default in iOS and Android, I want to commend them for it: encryption is important.

There is the suggestion, most often by authorities (but not always, as – and this is something I was planning on and might still write about – Kaspersky showed not too long ago when they suggested similar things), that encryption (and more generally, privacy) is a problem and a risk to one’s safety (and others’ safety). The problem here is that they are lying to themselves or they are simply ignorant (ignore the obvious please; I know it, but it is beside the point for this discussion). They are also risking the people they claim to want to protect, and they risk themselves. Indeed, how many times has government data been stolen? More times than I would like to believe, and I don’t even try to keep track of it (statistics can be interesting but I don’t find the subject of government – or indeed other entities’ – failures all that interesting. Well, not usually). The problem really comes down to this, doesn’t it? If someone has access to your or another person’s private details, and it is not protected (or poorly protected), then what can be done to you or that other person if someone ELSE gets that information? Identity theft? Yes. An easier time gathering other information about you, who you work for, your friends, family, your friends’ families, etc.? Yes. One of the first things an attacker will do is gather information, because it is that useful in attacks, isn’t it? And yet those are only two issues of many more, and both of them are serious.

On the subject of encryption and the suggestion that “if you have nothing to hide you have nothing to fear”, there is a simple way to obliterate it. All one needs to do is ask a certain (or similar) question, with an explanation following, directed at the very naive and foolish person (Facebook’s founder has suggested similar, as an example). The question is along the lines of: is that why you keep your bank account, credit cards, keys, passwords, etc., away from others? You suggest that you shouldn’t need to keep something private because you have nothing to hide unless you did something wrong (and so the only time you need to fear is when you are in fact doing something wrong). But here you are hiding something that you wouldn’t want others knowing – and by your own logic it follows that you did something wrong. The truth is that if you have that mentality, you are either lying to yourself (and ironically hiding something from yourself and therefore not exactly following your own suggestion) or you have hidden intent or reasons for wanting others’ information (which, ironically enough, is also hiding something – your intent). And at the same time, you know full well that YOU do want your information private (and YOU should want it private!).

But while I’m not surprised here, I still find it hard to fathom how certain people, corporations and other entities still think strong encryption is a bad thing. Never mind the fact that many high-profile cases of criminal data confiscated by police have involved encrypted data that was nevertheless revealed. Never mind the above. It is about control and power, and we all know that the only people worthy of power are those who do not seek it but are somehow bestowed with it. So what am I getting at? It seems that, according to the BBC, the FBI boss is concerned about Apple’s and Google’s plans. Now I’m not going to be critical of this person, the FBI in general or anything of the sort. I made clear in the past that I won’t get in to the cesspool that is politics. However, what I will do is remark on something this person said, though not on it by itself. Rather, I will refer to something most amusing. What he said is this:

“What concerns me about this is companies marketing something expressly to allow people to place themselves beyond the law,” he said.

“I am a huge believer in the rule of law, but I am also a believer that no-one in this country is beyond the law,” he added.

And yet, if you look at the man page of expect, which allows interactive things that a Unix shell cannot do by itself, you’ll note the following capability:

  • Cause your computer to dial you back, so that you can login without paying for the call.

That is, as far as I am aware, a type of toll fraud. Why am I even bringing this up, though? What does this have to do with the topic? Well, if you look further at the man page, you’ll see the following:

Thanks to John Ousterhout for Tcl, and Scott Paisley for inspiration. Thanks to Rob Savoye for Expect’s autoconfiguration code.

The HISTORY file documents much of the evolution of expect. It makes interesting reading and might give you further insight to this software. Thanks to the people mentioned in it who sent me bug fixes and gave other assistance.

Design and implementation of Expect was paid for in part by the U.S. government and is therefore in the public domain. However the author and NIST would like credit if this program and documentation or portions of them are used.
29 December 1994

I’m not at all suggesting that the FBI paid for this, and I’m not at all suggesting anyone in the government paid for it (it is, after all, from 1994). And I’m not suggesting they approve of this. But I AM pointing out the irony. This is what I meant earlier – it all comes down to WHO is saying WHAT and WHY they are saying it. And it isn’t always what it appears or is claimed. Make no mistake, people: encryption IS important, just like PCI compliance and auditing (corporate auditing of different types, auditing of medical professionals, auditing in everything), and anyone suggesting otherwise is ignoring some very critical truths. So consider that a reminder, if you will, of why encryption is a good thing. Like it or not, many humans have no problem with theft, no problem with manipulation, no problem with destroying animals or their habitat (Amazon forest, anyone?). It is by no means a good thing but it is still reality, and not thinking about it is a grave mistake (including, indeed, literally – and yes, I admit that that is pointing out a pun). We cannot control others in everything, but that doesn’t mean we aren’t responsible for our own actions, and ignoring something that risks yourself (never mind others here) places the blame on you, not someone else.

shell aliases: the good, the bad and the ugly

I erroneously claimed that the -f option is required with the rm command to remove non-empty directories. This is only a partial truth. You need -r, as that is for recursion: when traversing a file system it descends in to each directory encountered until there are no more directories found (and indeed file system loops can occur, which programs do consider). But -f is not for non-empty directories as such; rather, it is write-permission related. Specifically, in relation to recursion (-r): if you specify -r alone, you will still be prompted whether to descend in to a directory if it is write-protected (or has a write-protected file in it). If you also specify -f, you will not be prompted. Of course, there are other reasons you might not be able to remove the directory or any files in it, but that is another issue entirely. Furthermore, root need not concern themselves with write permission, at least in the sense that they can override it.
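To make the distinction concrete, here is a minimal sketch (assuming GNU coreutils rm; the /tmp/rmdemo path is purely for demonstration):

```shell
# Create a directory containing a write-protected file.
mkdir -p /tmp/rmdemo/sub
touch /tmp/rmdemo/sub/file
chmod a-w /tmp/rmdemo/sub/file

# 'rm -r /tmp/rmdemo' alone would prompt before removing the
# write-protected file (when run from a terminal); answering 'n'
# would leave it behind. Adding -f suppresses the prompt and
# overrides the write protection:
rm -rf /tmp/rmdemo
```

Note that removing a file actually requires write permission on its directory, not the file itself; the file’s write protection only triggers the prompt.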

Please observe the irony (which actually further proves my point, and that itself is ironic as well) that I suggest using the absolute path name and then I do not (with sed). This is what I mean when I say I am guilty of the same mistakes. It is something I have done over the years: work on getting in to the habit (of using absolute paths), and then it slides, and then it happens all over again. This is why it is so important to get it right the first time (and this rule applies to security in general!). To make it worse, I knew this before I ever had root access to any machine, years back. But this is also what I discussed about convenience getting in the way of security (and aliases only add to the convenience/security conflict, especially with how certain aliases enable coloured output or some other feature). Be aware of what you are doing, always, and beware of not taking this all to heart. (And you can bet I’ve made a mental note to do this. Again.) Note that this rule won’t apply to shell built-ins unless you use the program too (some – e.g., echo – have both). The command ‘type’ is a built-in, though, and it is not a program; you can check by using the command itself: type -P type will show nothing because there is no file on disk for type. Note also that I’ve not updated the commands where I show how aliases work (or commands that might be aliased). I’ve also not updated ls (and truthfully it probably is less of an issue, unless you are root, of course), but do note how to determine all the ways a command can be invoked:

$ type -a ls
ls is aliased to `ls --color=auto'
ls is /usr/bin/ls

This could in theory be only about Unix and its derivatives, but I feel there are similar risks in other environments. For instance, in DOS, extensions of programs had a priority, so that if you typed just ‘DOOM2’ it would check ‘DOOM2.COM’, then ‘DOOM2.EXE’ and then ‘DOOM2.BAT’ (in that order, if I recall correctly), and with no privilege separation you had the ability to rename files, so that if you wanted to write a wrapper for DOOM2 you could do it easily enough (I use DOOM2 in the example because not only was it one of my favourite graphical computer games – one I beat repeatedly, I enjoyed it so much, much more than the original DOOM – I also happened to write a wrapper for DOOM2 itself, back then). Similarly, Windows doesn’t show extensions at all (by default, last I knew anyway), and so if a file is called ‘doom.txt.exe’ then double-clicking on it would actually execute the executable instead of opening a text file (but the user would only see the name ‘doom.txt’). This is a serious flaw in multiple ways. Unix has its own issues with paths (but at least you can redefine them and there IS privilege separation). It isn’t without its faults, though. Indeed, Unix wasn’t designed with security in mind and that is why so many changes have been implemented over the years (the same goes for the main Internet protocols – e.g., IP, TCP, UDP, ICMP – as well as other protocols at, say, the application layer, all in their own ways). This is why things are so easy to exploit. This time I will discuss the issue of shell aliases.

The general idea of finding the program (or script or…) to execute is also priority-based. This is why when you are root (or using a privileged command) you should always use a fully-qualified name (primarily known as using the absolute file name). It is arguably better to always do this because: what if someone modified your PATH, added a file in your bin directory, updated your aliases, …? Now you risk running what you don’t intend to. There is a way to determine all the ways a command could be invoked, but you should not rely on this, either. So: the good, then the bad and then the ugly of the way this works (remember, security and convenience conflict with each other a lot, which is quite unfortunate but something that cannot be forgotten!). When I refer to aliases, understand that aliases are even worse than the others (PATH and $HOME/bin/) in some ways, which I will get to at the ugly.
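As a minimal sketch of the risk (the /tmp/evilbin directory and the fake ls are hypothetical, purely for illustration):

```shell
# A directory placed early in PATH shadows the real binary.
mkdir -p /tmp/evilbin
printf '#!/bin/sh\necho "not the real ls"\n' > /tmp/evilbin/ls
chmod +x /tmp/evilbin/ls

# With the impostor first in PATH, plain 'ls' runs it:
PATH=/tmp/evilbin:$PATH ls
# The absolute name is immune to PATH (and alias) tricks:
/bin/ls /tmp/evilbin > /dev/null
```

The same lookup order is why a writable directory early in PATH (or ‘.’ in PATH, which some people still add) is such a classic trap.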


There is one case where aliases are fine (or at least not as bad as the others – the others being when you use options). It isn’t without flaws, however. Either way: let’s say you’re like me and you’re a member of the Cult of VI (as opposed to the Church of Emacs). You have vi installed but you also like vim features (and so have it installed too). You might want vi in some cases but vim in others (for instance, root uses vi and other users use vim; contrived example or not is up to your own interpretation). If you place in $HOME/.bashrc the following line, then you can override what happens when you type the command in question:

$ /usr/bin/grep alias $HOME/.bashrc
alias vi='vim'

Then typing ‘vi’ at the shell will open vim. Equally, if you type ‘vi -O file1 file2’ it will be run as ‘vim -O file1 file2’. This is useful, but even then it has its risks. It is up to the user to decide, however (and after all, if a user is compromised you should assume the system is compromised, because if it hasn’t been already it likely will be – so what’s the harm? Well, I would disagree that there is no harm – indeed there is – but…)


Indeed, this is both bad and ugly. First, the bad part: confusion. Some utilities have conflicting options. So if you alias a command to use your favourite options, what happens the day you want to use another option (or see if you like it) and you are used to typing the basename (so not the absolute name)? You get an error about conflicting options (or you get results you don’t expect). Is it a bug in the program itself? Well, check aliases as well as anywhere else the problem might occur. In bash (for example) you can use:

$ type -P diff

However, is that necessarily what is executed? Let’s take a further look:

$ type -a diff
diff is aliased to `diff -N -p -u'
diff is /usr/bin/diff

So no, it isn’t necessarily the case. What happens if I use -y, which is a conflicting output type? Let’s see:

$ diff -y
diff: conflicting output style options
diff: Try 'diff --help' for more information.

Note that I didn’t even finish the command line! It detected conflicting output styles and that was it. Yet it appears I did not actually specify conflicting output style options – clearly I only specified one – so the alias must have been used. In other words, the options I typed were added to the aliased options rather than replacing them (certain programs take the last option as the one that rules, but not all do, and diff does not here). If, however, I were to do:

$ /usr/bin/diff -y
/usr/bin/diff: missing operand after '-y'
/usr/bin/diff: Try '/usr/bin/diff --help' for more information.

There we go: the error as expected. That’s how you get around it. But let’s move on to the ugly, because “getting around it” only works if you remember – and, more so, if you do not ever rely on aliases! Especially do not rely on them for certain commands. This cannot be overstated! The ugly is this:

It is unfortunate, but Red Hat based distributions have this by default, and not only is it baby-sitting (which is both risky and obnoxious much of the time… something about the two being related), it has an inherent risk. Let’s take a look at the default alias for root’s ‘rm’:

# type -a rm
rm is aliased to `rm -i'
rm is /usr/bin/rm

-i means interactive; rm is of course remove. Okay, so what is the big deal – surely this is helpful, because as root you can wipe out the entire file system? Fine, but you can argue the same for chown and chmod (always be careful with these utilities recursively – well, in general even – but these specifically are dangerous; they can break the system with ease). I’ll get to those in a bit. The risk is quite simple. You rely on the alias, which means you never think about the risks involved; indeed, you just type ‘n’ if you don’t want to delete the files encountered (and you can answer yes to everything by piping in the output of ‘yes’, among other ways, if you want to avoid the nuisance a single time). The risk then is: what if by chance you become an administrator (a new administrator) on another (different) system and it does not have the -i alias? You then go and do something like the following (and one hopes you aren’t root, but I’m going to show it as if I were root – in fact I’m not running this command – because it is serious):

# /usr/bin/pwd
/etc
# rm *

The pwd command was there to show you a possibility: where you might be when you type this. Sure, there are directories there that won’t be wiped out because there was no recursive option, but even if you are fast with sending an interrupt (usually ctrl-c, but it can be shown and also set with the stty command; see stty --help for more info), you are going to have lost files. The above would actually have shown that some files were directories, after the rm * but before the next prompt, but all the files in /etc itself would be gone. And this is indeed an example of “the problem is that which is between the keyboard and chair”, or “PEBKAC” (“problem exists between keyboard and chair”), or even “PICNIC” (“problem in chair, not in computer”), among others. Why is that? Because you relied on something one way and therefore never thought to get in the habit of being careful (either always specifying -i or using the command in a safe manner, like always making sure you know exactly what you are typing). As for chown and chmod? Well, if you look at the man pages, you see the following options (for both):

 --no-preserve-root  do not treat '/' specially (the default)
 --preserve-root     fail to operate recursively on '/'

Now if you look at the man page for rm, and see these options, you’ll note a different default:

 --no-preserve-root  do not treat '/' specially
 --preserve-root     do not remove '/' (default)

The problem? You might get used to the supposed helpful behaviour with rm which would show you:

rm: it is dangerous to operate recursively on '/'
rm: use --no-preserve-root to override this failsafe

So you are protected from your carelessness (you shouldn’t be careless… yes, it happens and I’m guilty of it too, but this is one of the things backups were invented for, as well as only being as privileged as is necessary and only for the task in hand). But that protection is a mistake in itself. This is especially true when you then look at chown and chmod, both of which are ALSO dangerous when used recursively on / (actually on many directories recursively; an example not to do it on is /etc, as that will break a lot of things, too). And don’t even get me started on the mistake of chown -R luser.luser .*/ – because the glob matches ../, as long as you are root (it is a risk for users to change owners and so only root can do that) you will be changing the parent directory and everything under it. From /home/luser/lusers that is all of /home/luser; had you been in /home/luser, it would be all of /home and every user’s files in it, owned by luser as the user and luser as the group. Hope you had backups. You may well need them. Oh, and yes, any recursive action on .* is a risky thing indeed. To see this in action in a safe manner, as some user, in their home directory or even a sub-directory of their home directory, try the following:

$ /usr/bin/ls -alR .*

… and you’ll notice it recursing in to the parent directory and everything below it – it escapes the directory you started in! The reason is the way pathname globbing works: ‘.*’ matches ‘.’ and ‘..’ as well as actual dot files (try man -s 7 glob). I’d suggest you read the whole thing, but the part in particular is under Pathnames.
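A safe way to see exactly which names ‘.*’ matches (the /tmp path is just for demonstration):

```shell
mkdir -p /tmp/globdemo
cd /tmp/globdemo
touch .hidden

# '.*' matches '.' and '..' in addition to real dot files, which is
# exactly how a recursive command escapes the current directory.
echo .*

# A safer pattern that excludes both '.' and '..':
echo .[!.]*
```

The second pattern still misses names like ‘..foo’, so nothing beats checking the expansion with echo (or ls -d) before the real command.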

So yes, if you rely on aliases – which is relying on not thinking (a problem itself in so many ways) – then you’re setting yourself up for a disaster. Whether that disaster in fact happens is not guaranteed, but one should be prepared and not set themselves up for it in the first place. And unfortunately some distributions set you up for this by default. I’m somewhat of a mind to alias rm to ‘rm --no-preserve-root’ but I think most would consider me crazy (they’re probably more correct than they think). As for the rm alias in /root/.bashrc, here’s how you disable it by commenting it out (remove the lines instead if you prefer). Just like everything else there are many ways; this is at the command prompt:

# /usr/bin/sed -i 's,alias \(rm\|cp\|mv\),#alias \1,g' /root/.bashrc

Oh, by the way, yes, cp and mv (hence the command above commenting all three out) are also aliased in root’s .bashrc to use interactive mode, and yes, the risks are the same: you risk overwriting files when you are on an account without the alias. This might even be on the very system you are used to: you have the alias as root but not on all of your accounts, so if you were just root, remembered that it was fine, then logged back in to your normal, non-privileged user to do some maintenance there, what happens when you use one of those commands that is not aliased to -i? Indeed, aliases can be slightly good, bad and very ugly. Note that (although you should log out anyway) even if you were to source the file again (by ‘source /root/.bashrc’ or equally ‘. /root/.bashrc’) the aliases would still exist, because the file no longer defines them but nothing has unaliased them (you could of course run unalias too, but better is to log out; the next time you log in you won’t have that curse upon you).
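For the current session, unalias does the trick (the sed command above only affects future logins); a quick sketch, run in bash:

```shell
# Recreate the default aliases, then remove them for this session.
alias rm='rm -i' cp='cp -i' mv='mv -i'
type -t rm    # reports: alias
unalias rm cp mv
type -t rm    # reports: file
```

‘unalias -a’ would remove every alias at once, which is a heavier hammer but useful in scripts that must not inherit surprises.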

One more thing that I think others should be aware of as it further proves my point about forgetting aliases (whether you have them or not). The reason I wrote this is twofold:

  • First, I’ve put off the alias issue with rm (and similar commands) for some time, but it is something I’ve long thought about and it is indeed a serious trap.
  • Second, and this is where I really make the point: the reason this came up is one of my accounts on my server had the alias for diff as above. I don’t even remember setting it up! In fact, I don’t even know what I might have used diff for, with that account! That right there proves my point entirely (and yes, I removed it). Be aware of aliases and always be careful especially as a privileged user…


The Hidden Dangers of Migrating Configuration Files

One of the things I have suggested (in person, here, elsewhere) time and again is that the user is more often than not the real problem. It is the truth, it really is. I also tell others, and more often write about, how there should be no shame in admitting to mistakes. The only real mistake is not admitting to your mistakes, because if you don’t admit to them then you cannot learn from them; indeed, hiding behind a mask is not going to make the problem go away but will actually make it worse (while appearing to be no problem at all, at the same time). So then let me make something very clear, and this too is something I’ve written about (and mentioned to people otherwise) before: administrators are users too. Any administrator not admitting to blunders is one that is either lying (and a poor liar at that, I might add) or their idea of administration is logging in to a computer remotely, running a few basic commands, then logging out. Anyone that uses a computer in any way is a user; it is as simple as that. So what does this have to do with migrating configuration files? It is something I just noticed in full and it is a huge blunder on my part. It is actually really stupid, but it is something to learn from, like everything else in life.

At somewhere around 5 PM / 17:00 PST on June 16, my server had been up for 2 years, 330 days, 23 hours, 29 minutes and 17 seconds. I know this because of the uptime daemon I wrote some time ago. However, around that time there was a problem with the server. I did not know it until the next morning at about 4:00, because I had gone for the night. The problem: the keyboard would not waken the monitor (once turned on) and I could not ssh in to the server from this computer; indeed, the network appeared down. In fact, it was down. However, the LEDs on the motherboard (thanks to a side window in the case) were lit, the fan lights were lit and the fans were indeed moving. The only thing is, the system itself was unresponsive. The suspect is something that I cannot prove one way or another, but it is this: an out-of-memory issue, the thinking being that the Linux OOM killer killed a critical process (and/or was not able to resolve the issue in time). I backed up every log file at that time, in case I ever wanted to look in to it further (probably not enough information, but there was enough to tell me at about what time it stopped). There had been a recent library update (glibc, which is, well, relied upon by practically everything) but Linux is really good about this so it really is anyone’s guess. All I know is when the logs stopped updating. The end result is I had to do a hard reboot. And since CentOS 7 came out a month or two later, I figured why not? True, I don’t like systemd, but there are other things I do like about CentOS 7 and the programmer in me really liked the idea of GCC 4.8.x and C11/C++11 support. Besides, I manage on Fedora Core (a remote server and the computer I write from) so I can manage in CentOS 7. Well, here’s the problem: I originally had trouble (the day was bad and I naively went against my intuition, which was telling me repeatedly, “this is a big mistake” – it was).
Then I got it working the next day, when I was more clear-headed. However, just as the CentOS 5 to CentOS 6 upgrade had certain major services with major new releases (in that case it was Dovecot), the same happened here, only this time it was Apache. And there were quite some configuration changes indeed, as it was a major release (from 2.2 to 2.4). I made a critical mistake, however:

I migrated old configuration files for Apache. Here is what happened, why I finally noticed it (and why I did not notice it before). This (migrating old files) is indeed dangerous if you are not very careful (keep in mind that major changes mean that unless you have other systems with the same layout, you will not be 100% aware of all – keyword – changes). Even if you are careful, and even if things appear fine (no error, no warning, everything seems to work), there is always the danger of something that changed being, in fact, a problem. And that is exactly what happened. Let me explain.

In Apache 2.2.x you had the main config file /etc/httpd/conf/httpd.conf and you also had the directory /etc/httpd/conf.d (with extra configuration files like the ones for mod_ssl, mod_security, and so on). In the main config file, however, at the beginning, you had the LoadModule directives, so everything worked fine. And since the configuration file has <IfModule></IfModule> blocks, as long as the module in question is not required there is no harm: you can consider it optional. In Apache 2.4.x, however, early in the file /etc/httpd/conf/httpd.conf there is an Include of the directory /etc/httpd/conf.modules.d, which then has, among other files, 00-base.conf, and in that file are the LoadModule directives. And here is where the problem arose. I had made a test run of the install, but without thinking of the <IfModule></IfModule> blocks and non-required modules, and since the other Include directive is at the end of the file, there surely was no harm in shifting things around, right? Well, looking back it is easy to see where I screwed up and how. But yes, there was harm. And while I noticed this issue, it didn’t exactly register (perhaps something to do with sleep deprivation combined with reading daily logs in the early morning and, more than that, being human, i.e., not perfect by any means, not even close). Furthermore, the error log was fine, so in logwatch output I did indeed see httpd logs. But something didn’t register until I saw the following:

0.00 MB transferred in 2 responses (1xx 0, 2xx 1, 3xx 1, 4xx 0, 5xx 0)
2 Content pages (0.00 MB)

Certainly that could not be right! I had looked at my website yesterday, even, and more than once. But then something else occurred to me. I began to think about it, and it had been some time since I saw anything but the typical scanning for vulnerabilities that every webserver gets. I had not, in fact, seen much more. The closest would be:

2.79 MB transferred in 228 responses (1xx 0, 2xx 191, 3xx 36, 4xx 1, 5xx 0)
4 Images (0.00 MB),
224 Content pages (2.79 MB),

And yet, I knew I had custom scripts for logwatch that I made some time back (they show other information I want to see that isn’t in the default logwatch httpd script/config). At first I figured that maybe I had forgotten to restore them. The simple solution, once I understood the real cause, was to move the Include directives to before the <IfModule></IfModule> blocks – in other words, much earlier in the file, not at the end.
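In outline, the broken ordering versus the fixed one looked roughly like this (the paths are the CentOS defaults; the ‘vhost’ nickname is mine, and the format string here is only illustrative):

```apacheconf
# Broken ordering: the LogFormat nickname is defined before the log
# module is loaded, so the <IfModule> block is skipped and any
# CustomLog referring to "vhost" later treats it as a literal string.
#
#   <IfModule log_config_module>
#       LogFormat "%v %h %l %u %t \"%r\" %>s %b" vhost
#   </IfModule>
#   ...
#   Include conf.modules.d/*.conf    # modules loaded too late

# Fixed ordering: load modules first, then define formats,
# then include the vhosts.
Include conf.modules.d/*.conf

<IfModule log_config_module>
    LogFormat "%v %h %l %u %t \"%r\" %>s %b" vhost
</IfModule>

IncludeOptional conf.d/*.conf
```

The nasty part is that Apache accepts the broken version without error: an unknown nickname in CustomLog is simply taken as the format string itself.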

To be true to my nature and word, I’m going to share what I actually saw in logs. This, I hope, will show exactly how sincere I am when I suggest that people admit to their mistakes and to not worry about so-called weaknesses. If there is any human weakness, it is the inability to understand that perfection isn’t possible. But that is more as I put it before: blinded by a fallacy. If you cannot admit to mistakes then you are hiding from the truth and ironically you are not fooling anyone but yourself.

The log entries looked like this:

vhost
vhost
vhost
Yes, really. I’m 100% serious. How could I screw up that badly? It is quite simple: it evaluated to that, because at the end of the config file I include a separate directory that holds the vhosts themselves. But the CustomLog format I use, which I cleverly named vhost (because it shows the vhost as well as some other vhost specifics), was not in fact evaluated (the log modules were not loaded at the time). And in the <VirtualHost></VirtualHost> blocks I have CustomLog directives which would normally refer to that format. This means the custom log format was not used. The reason the error logs worked is that I did not make a custom error log. But since the log modules were loaded after the configuration of the log formats, the access logs had the format of “vhost” as a literal string, and that is it.

Brilliant example of “the problem is that which is between the keyboard and chair” as I worded it circa 1999 (and others have put it other ways, longer than me, for sure). And to continue with sharing such a stupid blunder, I’m going to point out that it had been this way for about 41 days and 3 hours. Yes, I noticed it, but I only noticed it in a (local) test log file (test virtual host). Either way, it did not register as a problem (it should have, but it absolutely did not!). I have no idea why it didn’t, but it didn’t. True, I have had serious sleep issues, but that is irrelevant. The fact is: I made a huge mistake with the migration of configuration files. It was my own fault, I am 100% to blame, and there is nothing else to it. But this is indeed something to consider, because no one is perfect, and when there is a major configuration directory restructure (or any type of significant restructure) there are risks to keep in mind. This is just nature: significant changes do require getting accustomed to things, and all it takes is being distracted, not thinking of one tiny thing, or even not understanding something in particular, for a problem to occur.
Thankfully, though, most of the time problems are noticed quickly and fixed quickly, too.  But in this case I really screwed up and I am only thankful it wasn’t something more serious. Something to learn from, however, and that is exactly what I’ve done.


chkconfig: alternatives system

This will be a somewhat quick write-up. Today I wanted to link a library in to a program that is in the main Fedora Core repository (but which excludes the library due to policy). In the past I had done this by making my own RPM package with the release one above the main release or, if you will excuse the pun, alternatively not installing the Fedora Core version at all but only mine. I then had the thought: why not use the alternatives system? After all, if I wanted to change the default I could do that. This RPM isn’t going to be in any of my repositories (I added one for CentOS 7 in the past few months) but realistically it could be. There was one thing that bothered me about the alternatives system, however:

I could never quite remember the proper installation of an alternatives group, because I had never actually looked at it with a clear head; although the description is clear once I looked in to it more intently, I always confused the link versus the path. Regardless, today I decided to sort it out once and for all. This is how each option works:

alternatives --install <link> <name> <path> <priority> [--initscript <service>] [--slave <link> <name> <path>]*

The link parameter is the generic name: the symlink users actually invoke (e.g. /usr/bin/uptimed), which points in to /etc/alternatives. The name parameter is the name of the alternatives group itself, which is also the name of the symlink under /etc/alternatives. The path is the actual target – the real file that /etc/alternatives/<name> ends up pointing to. --initscript is Red Hat specific, and although I primarily work with Red Hat I will not cover it. Priority is a number; the highest number will be selected in auto mode (see below). --slave is for groupings; for instance, if the program I was building had a man page but so does the main one (the one from the Fedora Core repository), what happens when I use man on the program name? With groups, the slaves are updated based on the master. For the example I will use a program I wrote, though there is another one out there (also in the main Fedora Core repository): an uptime daemon. Let’s say mine is called ‘suptimed’ and the other is ‘uptimed’. So the files ‘/usr/bin/suptimed’ and ‘/usr/bin/uptimed’ exist. Further, the man pages for suptimed and uptimed are ‘/usr/share/man/man1/suptimed.1.gz’ and ‘/usr/share/man/man1/uptimed.1.gz’ (without the ‘s). This includes just enough files to explain the syntax.

alternatives --install /usr/bin/uptimed uptimed /usr/bin/suptimed 1000 --slave /usr/share/man/man1/uptimed.1.gz uptimed.1.gz /usr/share/man/man1/suptimed.1.gz

While this is a hypothetical example (as in there might be more files to include in the slaves [1]), it should explain it well enough. After this, if you were to run uptimed it would run suptimed instead. Furthermore, if you were to type ‘man uptimed’ it would show suptimed’s man page. Under /etc/alternatives you would see symlinks called uptimed and uptimed.1.gz, with the first pointing to /usr/bin/suptimed and the second pointing to /usr/share/man/man1/suptimed.1.gz
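To keep the three parameters straight, the chain that alternatives maintains can be simulated with plain symlinks (done here under a hypothetical /tmp prefix so it can run unprivileged):

```shell
# Layout: <link> -> /etc/alternatives/<name> -> <path>
base=/tmp/altdemo
mkdir -p $base/etc/alternatives $base/usr/bin

printf 'suptimed binary\n' > $base/usr/bin/suptimed           # <path>
ln -sf $base/usr/bin/suptimed $base/etc/alternatives/uptimed  # <name>
ln -sf $base/etc/alternatives/uptimed $base/usr/bin/uptimed   # <link>

# Invoking the generic name resolves through both symlinks:
cat $base/usr/bin/uptimed    # prints: suptimed binary
```

The indirection through /etc/alternatives is the whole point: switching the default means rewriting one symlink there, not touching anything in /usr/bin.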

In the syntax given above, the * after [--slave <link> <name> <path>] means you can use that option more than once, depending on how many slaves there are. The [ and ] are the typical way of showing options (not required) and their parameters (which may or may not be required for the option). The angle brackets indicate required arguments. This is a general rule, or perhaps even a de facto standard.

alternatives --remove <name> <path>

The parameters have the exact same meanings. So to remove suptimed from the group (and with the master gone it might then not even need alternatives at all – the trick is when there IS an alternative) I would use:

alternatives --remove uptimed /usr/bin/suptimed

alternatives --auto <name>

Puts the group into automatic mode, where the highest-priority alternative is selected, as explained above. Name has the same meaning.

alternatives --config <name>

Allows you to interactively select the alternative for the group. It is a TUI (text user interface).

The rest I won't get into. The --config option is also not part of the original implementation (Debian's update-alternatives). --display <name> and --list are quite straightforward.

Debugging With GDB: Conditional Breakpoints

Monday I discovered a script error in one of many scripts running in a process. I knew the problem was not the script itself but rather a change I had made in the source code – so not the script but the script engine, because I implemented a bug in it. In this case, though, the script in question is one of many of the same type, each running in sequence. This means that if I attach the debugger and set a breakpoint, I have to check that it is the proper instance and, if not, continue until the next instance, repeating this until I reach the right one. Even then, I have to make sure that I don't make a mistake and that I ultimately find the problem (unless I want to repeat the whole process, which is usually not preferred).

I never really thought about having to do this because I rarely use the debugger at all, let alone to debug something like this. But when I do, it is time-consuming. So I had an idea: make a simple function that the script could call, trap that function, and when I reach that breakpoint, step into the script. I was going to write about this, but on a whim I decided to look at GDB's help for breakpoints. I saw something interesting, though I did not look into it until today. As it turns out, GDB already has this functionality, only better. This is how it works:

  • While GDB is attached to the process you set a breakpoint to the function you need to debug. For example, you want to set a breakpoint at cow: break cow
  • GDB will tell you which breakpoint number it is. It'll look something like 'Breakpoint 1 at 0xdeadbeef: cow', where '0xdeadbeef' is the address of the function 'cow' in the program space and 'cow' is the function you set a breakpoint on. Okay, the function cow is probably not there, and it almost assuredly does not have the address '0xdeadbeef', although it could happen (and it would be very ironic yet amusing indeed); this is just to show the output (and show how much fun hexadecimal can be, at least to me). Regardless, you have the breakpoint number, and that is critical for the next step, which is – if you will excuse the pun – the beef of the entire process.
  • So one might ask, does GDB have the ability to check the variable passed in to the function for the condition? Yes, it does, and it also has the ability to dereference a pointer (or access a member function or variable on an object) passed in to the function. So if cow has a parameter of Cow *c and c has a function idnum (or a member variable idnum) then you can indeed make use of it in the condition. This brings us to the last step (besides debugging the bug you implemented, that is).
  • Command in gdb: 'cond 1 c->idnum == 57005' (without the quotes) will instruct GDB to only stop at function cow (at 0xdeadbeef) when c->idnum is 57005 (or, if you prefer and you specified the condition as c->idnum() == 57005, when c->idnum() returns 57005). Why 57005 for the example? Because 0xdead is 57005 in decimal. So all you have to do now is tell GDB to continue: 'c' (also without the quotes). When it stops you'll be at function 'cow' and c->idnum will be equal to 57005. By contrast, if you had made the condition c->idnum != 57005 then it would break whenever the cow is alive (to further the example above).
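Put together, the whole exchange for the hypothetical cow example looks something like this (the address, file name and line number are made up for illustration):

```gdb
(gdb) break cow
Breakpoint 1 at 0xdeadbeef: file cow.c, line 42.
(gdb) cond 1 c->idnum == 57005
(gdb) c
Continuing.

Breakpoint 1, cow (c=0x603010) at cow.c:42
(gdb) print c->idnum
$1 = 57005
```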

That’s all there is to it!

Open Source and Security

One of the things I often write about is how open source is in fact good for security. Some will argue the opposite to the end. But what they are relying on, at best, is security through obscurity. Just because the source code is not readily available does not mean it is not possible to find flaws or even reverse engineer it. It doesn't mean it cannot be modified, either. I can find – as could anyone else – countless examples of this. I have personally added a feature to a Windows dll file – a rather important one, shell32.dll – in the past. I then went on to convince the Windows file integrity check not only to see the modified file as correct, but to restore my modified version had I replaced it with the original, unmodified one. And how did I add a feature without the source code? My point exactly. So to believe you cannot uncover how it works (or, as some will have you believe, modify and/or add features) is a huge mistake. But whatever. This is about open source and security. Before I can get into that, however, I want to bring up something else I often write about.

That thing I write about is this: one should always admit to mistakes. You shouldn’t get angry and you shouldn’t take it as anything but a learning opportunity. Indeed, if you use it to better yourself, better whatever you made the mistake in (let’s say you are working on a project at work and you make a mistake that throws the project off in some way) and therefore better everything and everyone involved (or around), then you have gained and not lost. Sure, you might in some cases actually lose something (time and/or money, for example) but all good comes with bad and the reverse is true too: all bad comes with good. Put another way, I am by no means suggesting open source is perfect.

The only thing that is perfect is imperfection.
— Xexyl

I thought of that the other day. Or, better put, I actually got around to putting it in my fortune file (I keep a file of pending ideas for quotes as well as the fortune file itself). The idea is incredibly simple: the only thing that will consistently happen without failure is 'failure', time and time again. In other words, there is no perfection. 'Failure' in quotes, because it isn't a failure if you learn from it; it is instead successfully learning yet another thing, and another opportunity to grow. On the subject of failure or not, I want to add a more recent quote (thought of later in August than when I originally posted this, which was 15 August) that I think really nails this idea:

There is no such thing as human weakness, there is only
strength and… those blinded by… the fallacy of perfection.
— Xexyl

In short, the only weakness is a product of one's mind. There is no perfection, but if you accept this you will be much further ahead (if you don't accept it, you will be less able to take advantage of what imperfection offers). All of this together is important, though: I refer to admitting mistakes and how it is only a good thing, and I also suggest that open source is by no means perfect, and that therefore being critical of it, as if it were less secure, is flawed reasoning. But here's the thing. I can think of a rather critical open source library, used by a lot of servers, that has had a terrible year. One might think that this, and specifically what the library is (which I will get to in a moment), makes it somehow less secure or more problematic. What is this software? Well, let me start by noting the following CVE fixes that were pushed into update repositories yesterday:

– fix CVE-2014-3505 – doublefree in DTLS packet processing
– fix CVE-2014-3506 – avoid memory exhaustion in DTLS
– fix CVE-2014-3507 – avoid memory leak in DTLS
– fix CVE-2014-3508 – fix OID handling to avoid information leak
– fix CVE-2014-3509 – fix race condition when parsing server hello
– fix CVE-2014-3510 – fix DoS in anonymous (EC)DH handling in DTLS
– fix CVE-2014-3511 – disallow protocol downgrade via fragmentation

To those who are not aware, I refer to the same software that had the Heartbleed vulnerability, and likewise the same software behind some other CVE fixes not long after that. And indeed it seems that OpenSSL is having a bad year. Well, whatever – or perhaps better put, whoever – is the source (and yes, I truly do love puns) of the flaws is irrelevant. What is relevant is this: they clearly are having issues. Whoever is adding the changes is clearly not doing proper sanity checks and in general not auditing well enough. This just happens, however. It is part of life. It is a bad patch (to those that do not like puns, I am sorry, but yes, there goes another one) of time for them. They'll get over it. Everyone does, even though it is inevitable that it happens again. As I put it: this just happens.
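As an aside, on a Red Hat style system you can check whether a given fix has landed by searching the package changelog (this sketch assumes the package is simply named openssl, as it is on Fedora and RHEL):

```shell
rpm -q --changelog openssl | grep CVE-2014-3511
```

If the update has been applied, the matching changelog entry is printed; no output means the fix is not yet in the installed package.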

To those who want to be critical, and not constructively critical, I would like to remind you of the following points:

  • Many websites use OpenSSL to encrypt your data, and this includes your online purchases and other credit card transactions. Maybe instead of being only negative you should think about your own mistakes rather than attacking others? I'm not suggesting that you aren't considering yours, but in case you are not, think about this. If nothing else, consider that this type of criticism will lead to nothing, and since OpenSSL is critical (I am not consciously and deliberately making all these puns; it is just in my nature) it can lead to no good and certainly is of no help.
  • No one is perfect, as I not only suggested above, but also suggested at other times. I’ll also bring it up again, in the future. Because thinking yourself infallible is going to lead to more embarrassment than if you understand this and prepare yourself, always being on the look out for mistakes and reacting appropriately.
  • Most importantly: this has nothing to do with open source versus closed source. Closed source has its issues too, including fewer people who can audit it. The source code of the Linux kernel, for example, is on many people's computers, and that is a lot of eyes available to spot issues. Issues still have happened and still will happen, however.

With that, I would like to end with one more thought. Apache, the organization that maintains the popular web server as well as other projects, is really to be commended for their post-attack analyses. They have a section on their website detailing attacks, including what mistakes they made, what happened, what they did to fix it, and what they learned. That is impressive. I don't know if any closed source corporations do that, but either way, it is something to really think about. It is genuine, it takes real courage, and it benefits everyone. This is one example. There are immature comments there, but that only shows how impressive it is that Apache does this (they have other incident reports, as I recall). The specific incident is reported here.

Steve Gibson: Self-proclaimed Security Expert, King of Charlatans

Just to clarify a few points. Firstly, I have five usable IP addresses; that is because, as I explain below, some of the IPs are not usable for systems but instead have other functions. Secondly, about the ports detected as closed and my firewall returning ICMP errors: it is true I do return that, but of course there are missing ports there, and none of the others are open (that is, none have services bound to them) either. There are times I flat out drop packets to the floor, but I'm not sure which log file to check for that (due to log rotation). There are indeed some inconsistencies. But the point remains the same: there was absolutely nothing running on any of those ports, just like the ports it detected as 'stealth' (which is more like not receiving a response, and what some might call filtered; in the end it does not mean nothing is there, and it does not mean you are somehow immune to attacks). Third, I revised the footnote about FQDNs, IP addresses and what they resolve to. There were a few things I was not clear with, and in some ways unfair with, too. I was taking issue with one thing in particular and I did a very poor job of it, I might add (something I am highly successful at, I admit).

One might think I have better things to do than write about a known charlatan, but I have always been somewhat bemused by his idea of security (perhaps because he is clueless and his suggestions are unhelpful to those who believe him, which is a risk to everyone). More importantly, though, I want to dispel the mythical value of what he likes to call stealth ports (and, even more, the idea that anything that is not stealth is somehow a risk). This will not only tackle that, it will do so in what some might consider an immature way. I am admitting it full on, though: I'm bored and I wanted to show just how useless his scans are by making a mockery of them. So while this may seem childish to some, I am merely having fun while writing about ONE of the MANY flaws Steve Gibson is LITTERED with (I use the word littered figuratively and literally).

So let's begin, shall we? I'll go in the order of the pages you go through to have his ShieldsUP! scan begin. On the first page I see the following:


Without your knowledge or explicit permission, the Windows networking technology which connects your computer to the Internet may be offering some or all of your computer’s data to the entire world at this very moment!

Greetings indeed. Firstly, I am very well aware of what my system reveals. I also know that this has nothing to do with permission (anyone who thinks they have a say in what their machine reveals when connecting to the Internet – or a phone to a phone network, or … – is very naive, and anyone suggesting that there IS permission involved is a complete fool). On the other hand, I was not aware I am running Windows. You cannot detect the OS, yet you scan ports, which would give you one way to determine it? Here's a funny part of that: since I run a passive fingerprinting service (p0f), MY SYSTEM determined your server's OS (well, technically, the kernel, but all things considered that is the most important bit, isn't it? Indeed it is not 100% correct, but that goes with fingerprinting in general, and I know that it DOES detect MY system correctly). So not only is MY host revealing information, YOURS is too. Ironic? Absolutely not! Amusing? Yes. And lastly, let's finish this part up: "all of your computer's data to the entire world at this very moment!" You know, if it were not for the fact people believe you, that would be hilarious too. Let's break that into two parts. First, ALL of my computer's data? Really now? Anyone who can think rationally knows that this is nothing but sensationalism at best, but much more than that: it is you proclaiming to be an expert and then ABUSING that claim to MANIPULATE others into believing you (put another way: it is by no means revealing ALL data, not in the logical – data – sense or the physical – hardware – sense). And the entire world? So you're telling me that every single host on the Internet is analyzing my host at this very moment? If that were the case, my system's resources would be too taxed to even connect to your website. Okay, context would suggest that you mean COULD, but frankly I already covered that this is not the case (I challenge you to name the directory that is most often my current working directory, let alone know that said directory even exists on my system).

If you are using a personal firewall product which LOGS contacts by other systems, you should expect to see entries from this site’s probing IP addresses: -thru- Since we own this IP range, these packets will …

Well, technically, based on that range, your block is And technically your block includes (a) the network address, (b) the default gateway and (c) the broadcast address. That means the IPs that would actually be probing are in a range more like '' – ''. And people really trust you? You don't even know basic networking, and they trust you with security?
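To make the arithmetic concrete with a made-up example (the actual addresses are elided above): a /29 spans 2^(32-29) = 8 addresses, and the network address, the broadcast address and (on a routed block) the gateway are not usable by hosts. A quick sketch in shell, using .40 as a hypothetical network address:

```shell
base=40                      # last octet of the hypothetical /29 network address
size=$((1 << (32 - 29)))     # 8 addresses in a /29
network=$base
broadcast=$((base + size - 1))
# subtract the network address, the broadcast address and (for this
# example) a gateway at base+1; what remains could actually probe you:
first_probe=$((base + 2))
last_probe=$((base + size - 2))
echo "network .$network, broadcast .$broadcast, probes .$first_probe through .$last_probe"
```

For this hypothetical block that leaves five host addresses that could actually send probes, not the full eight.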

Your Internet connection’s IP address is uniquely associated with the following “machine name”:


Technically that is the FQDN (fully-qualified domain name[1]), not “machine name” as you put it. You continue in this paragraph:

The string of text above is known as your Internet connection’s “reverse DNS.” The end of the string is probably a domain name related to your ISP. This will be common to all customers of this ISP. But the beginning of the string uniquely identifies your Internet connection. The question is: Is the beginning of the string an “account ID” that is uniquely and permanently tied to you, or is it merely related to your current public IP address and thus subject to change?

Again, your terminology is rather mixed up. While it is true that you did a reverse lookup on my IP, it isn't exactly "reverse DNS". But since you are trying to simplify it (read: dumb it down to your level) for others, and since I know I can be seriously pedantic, I'll let it slide. But it has nothing to do with my Internet connection itself (I have exactly one); it has to do with my IP address, of which I have many (many if you count my IPv6 block, but only five IPv4). You don't have the same FQDN on more than one machine any more than you have the same IP on more than one network interface (even on the same system). So no, it is NOT my Internet connection but THE specific host that went to your website, and in particular the IP assigned to the host I connected from. And the "string" has nothing to do with an "account ID" either. But I'll get back to that in a minute.

The concern is that any web site can easily retrieve this unique “machine name” (just as we have) whenever you visit. It may be used to uniquely identify you on the Internet. In that way it’s like a “supercookie” over which you have no control. You can not disable, delete, or change it. Due to the rapid erosion of online privacy, and the diminishing respect for the sanctity of the user, we wanted to make you aware of this possibility. Note also that reverse DNS may disclose your geographic location.

I can actually request a different block from my ISP, and I can also change the IP on my network card. Then the only thing left is my old IP and its FQDN (no longer in use; and I can change the FQDN, as I have reverse delegation, yet according to you I can do none of that). I love your ridiculous terminology though. Supercookie? Whatever. As for it giving away my geographic location, let me make something very clear: the FQDN is irrelevant without the IP address. While it is true that the name will (sometimes) refer to a city, it isn't necessarily the same city or even county as the person USING it. The IP address is related to the network; the hostname is a CONVENIENCE for humans. You know, it used to be that host-to-IP mapping was done without DNS (since it didn't exist) via a file that maintained the mappings (a file which is still used, albeit very little). Hostnames exist for convenience, and in general because no one could know the IP of every domain name. Lastly, not all IPs resolve into a name.
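The reverse and forward lookups discussed here are easy to reproduce with standard tools; Google's public resolver address is used below purely as a well-known example, and typical output is omitted since it depends on the records at the time you run it:

```shell
# Reverse lookup: IP -> name, via the PTR record under in-addr.arpa
dig +short -x 8.8.8.8

# Forward lookup: name -> IP, via the A record. Only when both record
# types are maintained (as with reverse delegation) do they round-trip.
dig +short A dns.google
```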

If the machine name shown above is only a version of the IP address, then there is less cause for concern because the name will change as, when, and if your Internet IP changes. But if the machine name is a fixed account ID assigned by your ISP, as is often the case, then it will follow you and not change when your IP address does change. It can be used to persistently identify you as long as you use this ISP.

The occasions it resembles the IP are when the ISP has authority over the in-addr.arpa DNS zone of (your) IP and therefore has their own 'default' PTR record (but they don't always have a PTR record, which your suggestion does not account for; indeed, I could have removed the PTR record for my IP and then you'd have seen no hostname). But this does not indicate whether it is static or not. Indeed, even dynamic IPs typically (not always) have a PTR record. Again, the name does not necessarily imply static: it is the IP that matters. And welcome to yesteryear… these days you typically pay extra for static IPs, yet you suggest it is quite often the case that your "machine name is a fixed account ID" (which is itself a complete misuse of terminology). On the other hand, you're right: the name won't change when your IP address changes, because the IP is what is relevant, not the hostname! And if your IP changes, then it isn't so persistent in identifying you, is it? It might identify your location, but as multiple (dynamic) IPs and not a single one.

There is no standard governing the format of these machine names, so this is not something we can automatically determine for you. If several of the numbers from your current IP address ( appear in the machine name, then it is likely that the name is only related to the IP address and not to you.

Except ISP authentication logs and timestamps… And I repeat the above: the name can include exactly what you suggest and still be static!

But you may wish to make a note of the machine name shown above and check back from time to time to see whether the name follows any changes to your IP address, or whether it, instead, follows you.

Thanks for the suggestion but I think I’m fine since I’m the one that named it.

Now, let’s get to the last bit of the ShieldsUP! nonsense.

GRC Port Authority Report created on UTC: 2014-07-16 at 13:20:16

Results from scan of ports: 0-1055

0 Ports Open
72 Ports Closed
984 Ports Stealth
1056 Ports Tested

NO PORTS were found to be OPEN.

Ports found to be CLOSED were: 0, 1, 2, 3, 4, 5, 6, 36, 37,
64, 66, 96, 97, 128, 159, 160, 189, 190, 219, 220, 249, 250,
279, 280, 306, 311, 340, 341, 369, 371, 399, 400, 429, 430,
460, 461, 490, 491, 520, 521, 550, 551, 581, 582, 608, 612,
641, 642, 672, 673, 734, 735, 765, 766, 795, 796, 825, 826,
855, 856, 884, 885, 915, 916, 945, 946, 975, 976, 1005, 1006,
1035, 1036

Other than what is listed above, all ports are STEALTH.

TruStealth: FAILED – NOT all tested ports were STEALTH,
– NO unsolicited packets were received,

The ports you detected as "CLOSED" and not "STEALTH" were in fact returning ICMP host-unreachable errors. You fail to take into account the golden rule of firewalls: that which is not explicitly permitted is forbidden. That means that even though I have no service running on any of those ports, I still reject packets to them. Incidentally, some ports you declared as "STEALTH" did exactly the same (because I only allow those ports from a specific IP block as the source network). The only time I drop packets to the floor is when state checks fail (e.g., a TCP SYN flag is set but it is already a known connection). I can prove that, too: I had you do the scan a second time, but this time I added specific iptables rules for your IP block, which changed the results quite a bit, and indeed I used the same ICMP error code.
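For the curious, the behaviour described above could be expressed with iptables rules along these lines. This is only a sketch, not my actual ruleset; the SSH port and the single allowed source block are hypothetical:

```shell
# Keep established traffic flowing; drop to the floor only packets that
# fail state checks (e.g. a stray SYN on an already-known connection):
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -m state --state INVALID -j DROP

# Allow a service only from a specific source block (hypothetical):
iptables -A INPUT -p tcp -s --dport 22 -j ACCEPT

# Everything else is rejected with an ICMP error, which a scanner
# reports as "closed" rather than "stealth":
iptables -A INPUT -j REJECT --reject-with icmp-host-unreachable
```

The point being: a "closed" result here means the firewall answered, not that anything was listening, and a "stealth" result means only that nothing answered at all.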

As for ping: administrators who block ping outright need to be hit over the head with a pot of sense. Rate limit by all means, that is more than understandable, but blocking ICMP echo requests (and indeed replies) only makes troubleshooting network connectivity issues more of a hassle, and at the same time does absolutely nothing for security (fragmented packets and anything else that can be abused are obviously dealt with differently, because they are different!). Indeed, if attackers are going to attack, they don't really care whether you respond to ICMP requests; if there is a vulnerability they will go after that, and frankly hiding behind your "stealth" ports is only a false sense of security and/or security through obscurity (which is a false sense of security and even more harmful at the same time). Here are two examples. First, if someone sends you a link (for example in email) that seems legit and you click on it (there's a lot of this in recent years and it is ever increasing), the fact that you have no services running does not mean you are somehow immune to XSS, phishing attacks, malware, or anything else. Security is, always has been and always will be a many-layered thing. Second: social engineering.

And with that, I want to finish with the following:

If anyone wants the REAL story about Steve Gibson, you need only Google for "steve gibson charlatan" and see the many results. I can vouch for some of them, but there really is no need – the evidence is so overwhelming that it doesn't need any more validation. Here's a good one, though (which also shows his ignorance as well as how credible his proclamations are): http://www.theregister.co.uk/2002/02/25/steve_gibson_invents_broken_syncookies/ (actually it is a really good one). If you want a list of items, check the search result that refers to Attrition.org and you will see just how credible he is NOT. A good example is the one that links to a page about Gibson and XSS flaws, which itself links to http://seclists.org/vuln-dev/2002/May/25 , which offers a great amount of amusement (note that some of the links are no longer valid, as it was years ago, but that is the one at seclists.org and not the only incident).

[1] Technically, what his host is doing is taking the IP address and resolving it to a name (querying the PTR record, as I refer to above). Since I have reverse delegation (and so have authority) and have my own domain (which I also have authority over), I have my IPs resolve to fully-qualified domain names. FQDN is perhaps not the best wording (nor fair, especially) on my part, in that I was abusing the fact that he is expecting the normal PTR records an ISP has, rather than a server with a proper A record and a matching PTR record. What he refers to is as above: resolving the IP address to a name, though an IP does not have to have a name. Equally, even if a domain exists by name, it does not have to resolve to an IP ("it is only registered"). He just gave it his own name for his own ego (or whatever else).

Death Valley, California, Safety Tips and Harry Potter

I guess this might be the most bizarre title for a post yet, but it is a take on real life and fantasy, particularly the Harry Potter series. I am implying two things with "real life"; I will get to the Harry Potter part later. While the occasion is a specific tragedy in Death Valley, it is not an uncommon event, and since I have many fond memories of Death Valley (and know the risks), I want to reflect on it all (because indeed fantasy is very much part of me, perhaps too much so).

For the first ten years of my life (approximate) I visited Death Valley each year, in November. It is a beautiful place with many wonderful sights. I have many fond memories of playing on the old kind of Tonka trucks (which is a very good example of “they don’t make [it] like they used to” as nowadays it is made out of plastic and what I’m about to describe would be impossible). My brother and I would take a quick climb up the hill right behind our tent, get on our Tonka trucks (each our own) and ride down, crashing or not, but having a lot of fun regardless. I remember the amazing sand dunes with the wind blowing like it tends to in a desert. I remember being fortunate enough that there was a ghost town with a person living there who could supply me with electricity for my nebulizer for an asthma attack (and fortunate enough to see many ghost towns from where miners in the California Gold Rush would have resided). I remember, absolutely, Furnace Creek with the visitor centre and how nice everyone was there. I even remember the garbage truck driver who let my brother and me activate the mechanism to pick up the bin. I remember the many rides on family friends’ dune buggies. The amazing hikes in the many canyons is probably a highlight (but certainly not the only highlight). Then there is Scotty’s Castle (they had a haunted house during Halloween if I recall). There is actually an underground river (which is an inspiration to another work I did but that is another story entirely). They have a swimming pool that is naturally warm. I remember all these things and more even if most of it is vague. It truly is a wonderful place.

Unfortunately, because of the vast area, which spans more than 3,373,000 acres (according to Wikipedia, which I seem to remember is about right – I'm sure the official Death Valley site would have more on this), and the very fact that it is the hottest place on Earth (despite some claims; I am referring to officially acknowledged records) at 134.6 F / 57 C, it can be a deadly place. That record was, ironically enough, set this very month in 1913, on July 10 (again according to Wikipedia, but from memory other sources also place it in the early 1900s). This is an important bit (the day of the month in particular) for when I get to fantasy, by the way. Interestingly, the area I live in has a higher record for December and January than Death Valley by a few degrees (Death Valley: December and January at 89 F / 32 C; my location I know I have seen on the thermostat at least 95 F / 35 C for both months, although it could have been higher too). Regardless, Death Valley's overall record is higher than my location's by 10 C (my location record: 47 C / 116.6 F; Death Valley as above). And if you think of the size (as listed above) and that much of it is unknown territory for all but seasoned campers (a category my family would fit), you have to be prepared. Make no mistake, people: Death Valley, and deserts in general, can be very, very dangerous. Always make sure you keep yourself hydrated. What is hydration though, for humans? It is keeping your electrolytes at a balanced level. This means that too much water is as dangerous as too little. As a general rule of thumb given to me by the RN (registered nurse) for a hematologist I had (another story entirely, as is why I had one): if you are thirsty, you waited too long. Furthermore, for Death Valley (for example) make sure you either have a guide or you know your way around (and keep track – however you do this – of where you go). That may involve maps, a compass, landmarks, and any number of other techniques. But it is absolutely critical.
I have time and again read articles on the BBC where someone (or some people) from the UK or parts of Europe were unprepared and were found dead. It is a wonderful place, but be prepared. Although this should be obvious, it often isn't: Death Valley is better visited in the cooler months (close to winter or even in winter). I promise you this: it won't be cold by any means. Even if you are used to blizzards in your area, you will still have plenty of heat year round in Death Valley. I should restate that slightly, thinking of a specific risk (and possibility): deserts can drop to freezing temperatures! It is rare, yes, but when it happens it will still be cold. Furthermore, deserts can see lots of rain, even flash floods! Yes, I've experienced this exactly. As for risks: if it looks cloudy (or if you have a sense of smell like mine, where you can smell rain that is about to drop, and no, that is not an exaggeration – my sense of smell is incredibly strong) or there is a drizzle (or otherwise light rain) or more than that, do not even think about hiking the canyons! It is incredibly dangerous to attempt. This cannot be stressed enough. As for deserts and freezing temperatures: I live in a desert (most of Southern California is a desert) and, while it was over 22 years ago (approximately), we have seen snow in our yard. So "desert" does not mean no rain or no snow. I've seen people write about hot and dry climates versus deserts (comparing the two), but that is exactly what a desert is: a hot and dry climate! Climate does not by any means restrict what can or cannot happen, though. Just as Europe can see the mid 30s (centigrade), so too can deserts see less than zero. And all this brings me to the last part: fantasy.

One of my favourite genres (reading – I rarely watch TV or films) is fantasy. While it is not the only series I have read, the Harry Potter series is the one I am referring to in particular, as I already highlighted. Indeed, everything in Harry Potter has a reason, has a purpose, and in general becomes part of the entire story! That is how good it is and that is how much I enjoyed it (I also love puzzles, so putting things together – or rather the need to do that – was a very nice treat indeed). I’m thankful to the friend who finally got me to read it (I actually had the books but never got around to reading the ones that were out, which would be up to and including book 3, Harry Potter and the Prisoner of Azkaban). Each of the last two books I read in full on the day it came out, with hours to spare. Well, why on Earth would I be writing about fantasy, specifically Harry Potter, and Death Valley together? I just read on the BBC that Harry Potter actor Dave Legeno has been found dead in Death Valley. He played the werewolf Fenrir Greyback. I will note the irony that today, the 12th of July, is this year a full moon. I will also readily admit that in fantasy, not counting races by themselves (e.g., Elves, Dwarves, Trolls, …), werewolves are my favourite type of creature. I find the idea fascinating and there is a large part of me that wishes they were real (as for my favourite race, it would likely be Elves). I didn’t know the actor, of course, but the very fact he was British makes me think he too fell to the – if you will excuse the pun, which is by no means meant to be offensive to his family or anyone else – fantasy of experiencing Death Valley, and unfortunately it was fatal. And remember I specifically wrote 1913, July 10 as the record temperature for Death Valley? Well, I did mean it when I wrote that it has significance here: he was found dead on July 11 of this year.
Whether that means he died on the 11th is not exactly known yet (it is indeed a very large expanse, and it is only because hikers found him that it is known at all), but that it was one day off is ironic indeed. It is entirely possible he died on the 10th; it is also possible it was days before, or even on the 11th. That will only be known after an autopsy, as well as backtracking (by witnesses and other evidence); until then, it is anyone’s guess (mere speculation). Regardless, it is another person who was unaware of the risks, of which there are many (depending on where in Death Valley you are, you might be in a vehicle; what happens if you run out of fuel and only have enough water for three days? There are so many scenarios, but they are far too often not thought of, or simply neglected). Two other critical bits of advice: don’t ignore the warning signs posted all around the park, and always, without fail, tell someone where you will be! If someone had known where he was, and approximately when he should be back (which should always be part of telling someone where you’ll be), they could have gone looking for him. This piece of advice, I might add, goes for hiking, canoeing and anything else (outside of Death Valley too – it is a general rule), especially if you are alone (but truthfully – and I get the impression he WAS alone – you should not be alone in a place as large as Death Valley: there are many places to fall, there are animals that could harm you, and instead of having a story to bring home you risk not coming home at all). There are just so many risks, so always be aware of them and prepare ahead of time. Regardless, I want to thank Dave for playing Fenrir Greyback.
I don’t know whether you played in any other films, and I do not know anything about you or your past, but I wish you had known the risks beforehand, and my condolences (for whatever they can be and whatever they are worth) to your friends and family. I know that most will find this post out of character (again, if you will excuse the indeed intended pun) for what I typically write about, but fantasy is something I am very fond of, and I have fond memories of Death Valley as well.

“I ‘Told’ You So!”

Update on 2014/06/25: Added a word that makes something more clear (specifically the pilots were not BEING responsible but I wrote “were not responsible”).

I was just checking the BBC live news feed I have in my bookmark bar in Firefox and I noticed something of interest. What is that? How automated vehicle systems (whether controlled by humans or not, they are still created by humans, and automation itself has its own flaws) are indeed dangerous. Now why is that interesting to me? Because I have written about this before, in more than one way! So let us break this article down a bit:

The crew of the Asiana flight that crashed in San Francisco “over-relied on automated systems” the head of the US transport safety agency has said.

How many times have I written about things being dumbed down to the point where people are unable – or refuse – to think and act according to X, Y and Z? I know it has been more than once, but apparently it was not enough! Actually, I would rather state: apparently not enough people are thinking at all. That is certainly a concern to any rational being. Or it should be.

Chris Hart, acting chairman of the National Transportation Safety Board (NTSB), said such systems were allowing serious errors to occur.

Clearly. As the title suggests: I ‘told’ you so!

The NTSB said the 6 July 2013 crash, which killed three, was caused by pilot mismanagement of the plane’s descent.

Again: relying on “smart” technology is relying on the smarts of the designer and the user (which doesn’t leave much of a chance, does it?). But actually, in this case it is even worse. The reasons: first, they are endangering others’ lives (and three died – is that enough yet?). Second, they are operating machinery, not using a stupid phone (which is what a “smart” phone is). I specifically wrote about emergency vehicles and this, and here we are, in exactly the situation I described: there are events that absolutely cannot be accounted for automatically, and which require that a person is paying attention and using the tool responsibly!

During the meeting on Tuesday, Mr Hart said the Asiana crew did not fully understand the automated systems on the Boeing 777, but the issues they encountered were not unique.

This is also called “dumbing the system down”, isn’t it? Yes, because when you are no longer required to think and to know how something works, you cannot fix problems!

“In their efforts to compensate for the unreliability of human performance, the designers of automated control systems have unwittingly created opportunities for new error types that can be even more serious than those they were seeking to avoid,” Mr Hart said.

Much like I wrote about related to all of and then some: computer security, computer problems, emergency vehicles and in general automated vehicles. This is another example.

The South Korea-based airline said those flying the plane reasonably believed the automatic throttle would keep the plane flying fast enough to land safely.

Making assumptions at the risk of others’ lives is irresponsible and frankly reprehensible! I would argue it is potentially – and in this case, actually – murderous!

But that feature was shut off after a pilot idled it to correct an unexplained climb earlier in the landing.

Does all of this start to make sense? No? It should. Look at what the pilot did. Why? A stupid mistake, or an evil gremlin that took him over momentarily? Maybe the gremlin IS their stupidity.

The airline argued the automated system should have been designed so that the auto throttle would maintain the proper speed after the pilot put it in “hold mode”.

They should rather be saying sorry and then some. They should also be taking care of the mistake THEY made (at least as much as they can; they already killed – and yes, that is the proper way of wording it – three people)!

Boeing has been warned about this feature by US and European airline regulators.

The blame shouldn’t be placed on Boeing if they weren’t actually negligent; they are doing what it seems everyone wants: automation. Is that such a good idea? As I have pointed out many times: no. Let me reword that a bit. Is Honda responsible for a drunk getting behind the wheel and then killing a family of five, four, three, two or even one person (themselves included – realistically the only one who is not innocent)? No? Then why the hell should Boeing be blamed for a pilot misusing the equipment? The pilot was not being responsible, and the reason (and how) the pilot was not being responsible is irrelevant!

“Asiana has a point, but this is not the first time it has happened,” John Cox, an aviation safety consultant, told the Associated Press news agency.

It won’t be the last, either. Mark my words. I wish I were wrong, but until people wake up it won’t be fixed (and that isn’t even counting the planes already in service).

“Any of these highly automated airplanes have these conditions that require special training and pilot awareness. … This is something that has been known for many years.”

And neglected. Because why? Here I go again: it is so dumbed down, so automatic, that supposedly the burden shouldn’t be placed on the operators! Well, guess what? Life isn’t fair. Maybe you didn’t notice, or you like to ignore the bad parts of life, but the fact remains that life isn’t fair, and they (the pilots and the airline in general) are playing the pathetic blame game (which really is saying: “I’m too immature and irresponsible, and on top of that I cannot dare admit that I am not perfect. Because of that it HAS to be someone else who is at fault!”).

Among the recommendations the NTSB made in its report:

  • The Federal Aviation Administration should require Boeing to develop “enhanced” training for automated systems, including editing the training manual to adequately describe the auto-throttle programme.
  • Asiana should change its automated flying policy to include more manual flight, both in training and during normal operations.
  • Boeing should develop a change to its automatic flight control systems to make sure the plane’s “energy state” remains at or above the minimum level needed to stay aloft during the entire flight.

My rebuttal to the three points:

  • They should actually insist upon “improving” the fully automated system (as in scrapping the idea). True, this one wasn’t completely automated, but it seems that many want that (Google’s self-driving cars, anyone?). Because let’s all be real: are they of use here? No, they are not. They’re killing – scrap that, murdering! – people. And that is how it always will be! There is never enough training. There is always the need to stay in the loop. The same applies to medicine, science, security (computer, network and otherwise), and pretty much everything in life!
  • Great idea. A bit late of them, though, isn’t it? In fact, a bit late of all airlines that rely on such a stupid design!
  • Well, they could always improve, but the same can be said of cars, computers, medical science, other sciences and, here we go again, everything in this world! The bottom line is this: it is not at all Boeing’s fault. They’re doing what everyone seems to want.

And people STILL want flying cars? Really? How can anyone be THAT stupid? While I don’t find it hard to believe such people exist, I still find it shocking. To close this, I’ll make a few final remarks:

This might be the wrong time, according to some, since it has just been reported. But it is not! If now is not the right time, then when? The same thing happens with everything of this nature! Humans always wait until a disaster (natural or man-made) happens before doing something. And then they pretend (lying about it in the process) to be better, but what happens next? They do the same thing all over again. And guess what also happens at that time? The same damned discussions (that I dissected, above) occur! Here’s a computer security example: I’ve lost count of the number of times NASA has suggested they would be improving the policies of their network, and I have also lost count of the times they then went on to LATER be compromised AGAIN with the SAME or an EQUALLY stupid CAUSE! Why is this? Irresponsibility and complete and utter stupidity. Aside from the fact that the only thing we learn from history is that – and yes, the pun is most definitely intended – we do not learn a bloody thing from history! And that is because of stupidity and irresponsibility.

Make no mistake, people:

  1. This will continue happening until humans wake up (which I fear that since even in 2014 ‘they’ have not woken up, they never will!).
  2. I told you so, I was right then and I am still right!
  3. Not only did I tell you so about computer security (in the context of automation) I also told you about real life incidents, including emergencies. And I was right then and I am still right!

Hurts? Well, sometimes that’s the best way. Build some pain threshold, as you’ll certainly need it. If only it were just their own heads at risk – they’re so thick that they’d survive! Instead we are all at risk because of others (ourselves, our families, everyone’s families, et al. included). Even those like me who point this out time and again are at risk (because they are either forced into using the automation, or they are surrounded by drones – any pun is much intended here, as well – who willingly use their “smart” everything… smart everything except their brain, that is!).

SELinux, Security and Irony Involved

I’ve thought of this in the past, and I’ve been trying to do more things (than usual) to keep me busy (there are too few things that I spend time doing, more often than not), so I thought I would finally get to this. So, when it comes to SELinux, there are two schools of thought:

  1. Enable and learn it.
  2. Disable it; it isn’t worth the trouble.

There is also a combination of the two: put it in permissive mode so you can at least see alerts (much as logs might be used). But for the purpose of this post, I’m only going to consider the mainstream positions (so 1 and 2, above). Before that, though, I want to point something out. It is indeed true that I put this in the security category, but there is a bit more to it than that, as readers will find out at the end (anyone who knows me – and yes, this is a hint – will know I am referring to irony, as the title suggests). I am not going to take a side in the SELinux debate (and that is what it is – a debate). I don’t enjoy, and there is no use in, endless debates about what is good, what is bad, what should be done, whether or not to do something, or even debates about debating (and yes, the latter two DO happen – I’ve been involved in a project that had this; I stayed out of it and did what I knew to be best for the project overall). That is all a waste of time and I’ll leave it to those who enjoy that kind of thing. Indeed, the two schools of thought involve quite a bit of emotion (something I try to avoid, even) – people are so passionate about it, so involved in it, that it really doesn’t matter what is suggested by the other side. It is taking “we all see and hear what we want to see and hear” to the extreme. This is all the more likely when you have two sides. It doesn’t matter what they are debating, it doesn’t matter how knowledgeable they are or are not, and nothing else matters either: they believe their side’s purpose so strongly, so passionately, that much of it is mindless and futile (neither side sees anything but its own view). Now then…

Let’s start with the first. Yes, security is indeed – as I’ve written about before – layered and multiple layers it always has been and always should be. And indeed there are things SELinux can protect against. On the other hand, security has to have a balance or else there is even less security (password aging + many different accounts so different passwords + password requirements/restrictions = a recipe for disaster). In fact, it is a false sense of security and that is a very bad thing. So let’s get to point two. Yes, that’s all I’m going to write on the first point. As I already wrote, there isn’t much to it notwithstanding endless debates: it has pros and it has cons, that’s all there is to it.

Then there’s the school of thought that SELinux is not worth the time and so should just be disabled. I know what they mean, and not only with the labelling of the file systems (I wrote before about how SELinux itself has issues at times, all because of labels, so that you have to have it fix itself). That labelling issue is bad in itself, but then consider how it affects maintenance (worse still for administrators who maintain many systems) – new directories in an Apache configuration, for instance. Yes, part of this is laziness, but again, there’s a balance. While this machine (the one I write from, not xexyl.net) does not use it, I still practice safe computing, I only deal with software in the main repositories, and in general I follow exactly what I preach: multiple layers of security. And finally, to end this post, we get to some irony. I know those who know me well enough will also know very well that I absolutely love irony, sarcasm, satire, puns and in general wit. So here it is – and as a warning – this is very potent irony, so for those who don’t know what I’m about to write, prepare yourselves:

You know all that talk about the NSA and its spying (nothing alarming, mind you, nor anything new… they’ve been this way a long, long time, and which country doesn’t have a spy network anyway? Be honest!), its supposedly placing backdoors in software and even deliberately weakening portions of encryption schemes? Yeah, the agency there’s been fuss about ever since Snowden started releasing the information last year. Guess who is involved with SELinux? Exactly, folks: the NSA is very much a part of SELinux. In fact, the credit for SELinux belongs to the NSA. I’m not at all suggesting they tampered with any of it, mind you. I don’t know, and as I already pointed out, I don’t care to debate or throw around conspiracy theories. That is all a waste of time and I’m not about to libel the NSA (or anyone, any company or anything) about anything (not only is it illegal, it is pathetic and simply unethical, none of which appeals to me), directly or indirectly. All I’m doing is pointing out the irony to those who forget (or never knew of) SELinux in its infancy, and linking it to the heated discussion about the NSA today. So which is it? Should you use SELinux or not? That’s for each administrator to decide. Actually, I think the real thing I don’t understand (but do ponder) is: where do people find the energy, motivation and time to bicker about the most unimportant, futile things? And more than that, why do they bother? I guess we’re all guilty to some extent, but some take it too far too often.

As a quick clarification, primarily for those who might misunderstand what I find ironic in the situation: the irony isn’t that the NSA is (or was, if not still) part of SELinux. It is nothing like that. What IS ironic is that many are beyond surprised at the revelations (about the NSA) and/or fearful of its actions (and I still don’t know why, but then again I know the NSA’s history), and I would assume that some of those people are among those who encourage the use of SELinux. Whether that is the case, and whether it did or did not change their views, I obviously cannot know. So, put simply: the irony is that many have faith in SELinux, of which the NSA was (or is) an essential part, and now there is much furore about the NSA after the revelations (“revelations” is how I would put it, at least for a decent amount of it).
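For anyone weighing the two schools of thought, the day-to-day mechanics behind them (including the labelling issue described above) look roughly like this on a Fedora/RHEL-style system. This is an illustrative sketch only – the web-content path is a made-up example, and these commands need root:

```shell
# Check the current mode (Enforcing, Permissive or Disabled):
getenforce

# Switch to permissive mode until reboot: alerts are still logged,
# but nothing is blocked (the 'combination' approach mentioned above):
setenforce 0

# The labelling issue with, e.g., a new Apache directory: declare the
# intended context for the path, then relabel it (example path only):
semanage fcontext -a -t httpd_sys_content_t '/srv/www/example(/.*)?'
restorecon -Rv /srv/www/example

# The permanent setting lives in /etc/selinux/config
# (SELINUX=enforcing|permissive|disabled) and takes effect at reboot.
```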

Fully-automated ‘Security’ is Dangerous

Thought of a better name. Still leaving the aside, below, as much of it is still relevant.

(As a brief aside, before I get to the point: This could probably be better named. By a fair bit, even. The main idea is security is a many layered concept and it involves computers – and its software – as well as people, and not either or and in fact it might involve multiples of each kind. Indeed, humans are the weakest link in the chain but as an interesting paradox, humans are still a very necessary part of the chain. Also, while it may seem I’m being critical in much of this, I am actually leading to much less criticism, giving the said organisation the benefit of the doubt as well as getting to the entire point and even wishing the entire event success!)

In our ever ‘connected’ world it appears – at least to me – that there is much more discussion about automatically solving problems without any human interaction (I’m not referring to things like calculating a new value for Pi, puzzles, mazes or anything like that; that IS interesting and that is indeed useful, including to security, even if indirectly – and yes, this is about security, but security on a whole-system scale). I find this ironic, and in a potentially dangerous way. Why are we having everything connected if we are to detach ourselves from the devices (or, in some cases, become so attached to the device that we are detached from the entire world)? (Besides brief examples, I’ll ignore the part where so many are so attached to their bloody phone – which is, as noted, the same thing as being detached from the world, despite the idea that they are ever more attached or, perhaps better stated, ‘connected’ – that they walk into walls, into people – like someone did to me the other day, same cause – and even walk off a pier in Australia while checking Facebook! Why would I ignore that? Because I find it so pathetic yet so funny that I hope more people do stupid things like that, which I absolutely will laugh at, as should be done, as long as they are not risking others’ lives [including rescuers’ lives, mind you; that’s the only potential problem with the last example: it could have been worse, and thanks to some klutz the emergency crew could have been put at risk instead of taking care of someone else]. After all, those who are going to do it don’t get the problem, so I may as well get a laugh at their idiocy, just like everyone should. Laughing is healthy. Beyond that, it is irrelevant to this post.) Of course, the idea of having everything connected also brings the thought of automation. Well, that’s a problem for many things, including security.

I just read that DARPA (the agency that created ARPANet – you know, the predecessor to the Internet – and ARPA is still referred to in DNS, for example) is running a competition as such:

“Over the next two years, innovators worldwide are invited to answer the call of Cyber Grand Challenge. In 2016, DARPA will hold the world’s first all-computer Capture the Flag tournament live on stage co-located with the DEF CON Conference in Las Vegas where automated systems may take the first steps towards a defensible, connected future.”

Now, first, a disclaimer of sorts. I have had (and still have) friends who have been to DefCon – the earlier years of DefCon included – and continue to go; not only did they go, they bugged me relentlessly (you know who you are!) for years to go too (which I always refused, much to their dismay, for multiple reasons, including the complete truth: the smoke there would kill me if nothing else did first). So I know very well that there are highly skilled individuals there. I have indirectly written about groups that go there, even. Yes, they’re highly capable, and as the article I read about this competition points out, DefCon has had a capture-the-flag style tournament for many years (and they really are skilled; I would suggest that, charlatans like Carolyn P. Meinel excepted, many are much more skilled than me, and I’m fine with that – it only makes sense anyway: they have a much less hectic life). Of course, the difference here is fully automated play, without any human intervention. And that is a potentially dangerous thing. I would like to believe they (DARPA) would know better, seeing as how the Internet (and its predecessor) was never designed with security in mind (and there is never enough foresight) – security as in computer security, anyway. The original reason for it was a network of networks capable of withstanding a nuclear attack. Yes, folks, the Cold War brought one good thing to the world: the Internet. Imagine that: paranoia leading us to the wonderful Internet. Perhaps, though, it wasn’t paranoia. It is hard to know; after all, a certain President of the United States of America considered the Soviet Union “the Evil Empire” and, as far as I know, wanted to delve further into that theme, which is not only sheer idiocy but complete lunacy (actually it is much worse than that)! To liken a country to that – it boggles the mind.
Regardless, that is just why I view it as ironic (and would like to think they would know better). Nevertheless, I know that they (DARPA, that is) mean well (or so I hope). But there is still a danger here.

Here is the danger: by allowing a computer to operate on its own and assuming it will always work, you are taking a great risk – and no one will forget what assuming does, either. I think this is actually a bit understated, because you’re relying on trust. And as anyone who has been into security for 15-20 years (or more) knows, trust is given far too easily. It is a major cause of security mishaps. People are too trusting. I know I’ve written about this before, but I’ll just mention the names of the utilities (rsh, rcp, …) that were at one point the norm, and briefly explain the problem: a configuration option – that was often used! – allowed logging in to a certain host WITHOUT a password from ANY IP, as long as you logged in as a CERTAIN login! (Note: there is a system-wide configuration file for this as well as a per-user one, which makes it even more of a nightmare.) People have refuted this with the logic that they don’t have a login with that name. Well, if it is the attacker’s own system, or one they have compromised, guess what they can do? Exactly – create a login with that name. Now they’re more or less a local user, which is so much closer to rooting (or, put another way, gaining complete control of) the system (which potentially allows further systems to be compromised).
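For those who never saw the rsh era first-hand, the trust files in question were a user’s ~/.rhosts and the system-wide /etc/hosts.equiv, and a single line was enough. A sketch of the dangerous pattern described above (the username is a made-up example):

```
# ~/.rhosts -- each line is "hostname [username]".
# The '+' wildcard means ANY host, so this line says: anyone,
# connecting from anywhere, may log in to this account with
# NO password, simply by presenting the login name 'backup':
+ backup
```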

So why is DARPA even considering fully automated intervention/protection? While I would like to claim that I am the first to notice this (much less to put it in similar words), I am not, but it is true: the only thing we learn from history is that we don’t learn a damned thing from history (or we don’t pay attention, which is even worse because it is flat-out stupidity). The very fact that systems have been compromised by something that was ignored, not thought of beforehand (or thought of only in a certain way – yes, different angles provide different insights), or because new innovations came along to trample over what was once considered safe, is all that should be needed to understand this. But if not, perhaps this question will resonate better: does lacking encryption mean anything to you, your company, or anyone else? Consider telnet, a service that allows authentication and isn’t encrypted (it sends login and password in the clear over the wire). If THAT was not foreseen, you can be sure that there will ALWAYS be something that cannot be predicted. Something I have experienced – as I am sure everyone has – is that things go wrong when you least expect them to. Not only that: much as I wrote not long ago, it is as if a curse has been cast on you and things start crashing down in a long sequence of horrible luck.

Make no mistake: I expect nothing but greatness from the folks at DefCon. However, there is never a fool-proof, 100% secure solution (and they know that!). The best you can do is always be on the lookout: always making sure things are OK, keeping up to date on new techniques, new vulnerabilities, and so on – software in addition to humans! This is exactly why you cannot teach security; you can only learn it – by applying knowledge, thinking power and something else that schools cannot give you: real-life experience. No matter how good someone is, there is going to be someone who can get the better of that person. I’m no different. Security websites have been compromised before and they will be in the future. Just like pretty much every other kind of site (example: for one of my other sites, before this website and when I wasn’t hosting on my own, the host made a terrible blunder, one that compromised their entire network and put them out of business. And guess what? Indeed, the websites they hosted were defaced, including that other site of mine. And you know what? It is not exactly uncommon for websites to be defaced in mass batches simply because the webhost had a server – or servers – compromised*, and the defacer had to make their point and their name). So while I know DefCon will deliver, I also know it to be a mistake for DARPA to think there will at some point be no need for human intervention (and I truly hope they mean it to be in addition to humans; I did not, after all, read the entire statement, but it makes for a topic to write about, and that’s all that matters). Well, there is one time this will happen: when either the Sun is dead (and so life here is dead) or humans obliterate each other, directly or indirectly. But computers will hardly care at that point. The best-case scenario is that automated systems can intercept certain (perhaps even many) attacks. But there will never be a 100% accurate way to do this.
If there were, the heuristics and many other tricks that anti-virus products (and malware itself) deploy would be much more successful and would have no need for updates. But has this happened? No. That is why it is a constant battle between malware writers and anti-malware writers: new techniques, new people in the field, things changing (or, more generally, technology evolving like it always will) – in general, a volatile environment will always keep things interesting. Lastly, there is one other thing: humans are the weakest link in the security chain. That is a critical thing to remember.

*I have had my server scanned from different sites, and they didn’t know they had a customer (or, in some cases, a system – owned by a student, perhaps – on a school campus) whose account had been compromised. Indeed, I tipped the webhosts (and schools) off that they had a rogue scanner trying to find vulnerable systems (all of which my server filtered, but nothing is fool-proof, remember that). They were thankful, for the most part, that I informed them. But here’s the thing: they’re human, and even though they are companies that should be looking out for that kind of thing, they aren’t perfect (because they are human). In other words: no site is 100% immune 100% of the time.

Good luck, however, to those in the competition. I would mention particular (potential) participants by name (like the people who bugged me for years to go there!) but I’d rather not, for quite a few specific reasons. Regardless, I wish them all good luck (those I know and those I do not). It WILL be interesting. But one can hope (and I want to believe they are keeping this in mind) that DARPA knows this is only ONE of the MANY LAYERS that make up security (security is, always has been and always will be, made up of multiple layers).