Using ‘script’ and ‘tail’ to watch a shell session in real-time

This is an old trick that my longest-standing friend Mark and I used years ago on one of his UltraSPARC stations while having fun doing any number of things. It can be used for all sorts of needs (showing someone how to do something, or letting someone help debug your problem, to name two of many), but the main idea is this: one person is running tasks and more generally using the shell (for the purpose of this article I will pretend this person is the victim), while the other person (pretending that this person is the snoop) watches everything, even if they're across the world. It works as long as both are on the same system and the victim writes the output to a file that the snoop can open for reading.

Before I get to how to do this, I want to point something else out. If you look at the man page for script, you will see the following block of text under OPTIONS:

-f, --flush
Flush output after each write. This is nice for telecooperation: one person does `mkfifo foo; script -f foo', and another can supervise real-time what is being done using `cat foo'.

But there are two problems with this method, both due to the fact that the watching party (as I put it for amusement, the snoop) has control. For example, if I do indeed type at the shell:

$ mkfifo /tmp/$(whoami).log ; script --flush -f /tmp/$(whoami).log

… then my session will block, waiting for the snoop to type at their prompt:

$ cat /tmp/luser.log

(assuming my login name is indeed luser). And until that happens, even if I type a command, no output appears on my end (the command is not ignored, however). Once the other person does type that, I will see the output of script (showing that the output is being written to /tmp/luser.log, plus any output from commands I might have typed). The other user will see the output too, including which file is being written to. Secondly, the snoop decides when to stop. When they hit ctrl-c, then once I begin to type I will see something like this at my end:

$ lScript done, file is /tmp/luser.log
$

Note that I hit the letter l, as if I was going to type ls (for example), and then I see the script done output. If I finish the command, say by typing s and then hitting enter, then instead of the output of ls I will see the following (since typing ls hardly takes any time, I will show it as it would appear on my screen, with the command completed, or so one would suspect):

$ lScript done, file is /tmp/luser.log
$ s
-bash: s: command not found

Yes, that means that the first character closes my end (the lScript is not a typo, that is what appears on my screen), shows me the typical message after script is done and then and only then do I get to enter a command proper.

So the question is, is there a way that I can control the starting of the file, and even more than that, could the snoop check on the file later (doesn’t watch in the beginning) or stop in the middle and then start watching again? Absolutely. Here’s how:

  • Instead of making a fifo (first in first out, i.e., a queue) I specify a file to write the script output to (a normal file, with a caveat as below), or alternatively let the default file name be used instead. So what I type is:
    $ script --flush -f /tmp/$(whoami).log
    Script started, file is /tmp/luser.log
    $
  • After that is done, I inform the snoop (somewhere else, or otherwise they use the --retry option of tail, to repeatedly try until interrupted or until the file can be followed). Now THAT is something you don't expect to ever be true, is it? Why would I inform a snoop of anything at all?! That is of course WHY I chose the analogy in the first place. They then type (a short sketch of the snoop's side follows this list):
    $ tail -f /tmp/luser.log

    And they will see – by default – the last ten lines of the session (the session meaning the script log, so not the last ten lines of my screen!). They could of course specify how many lines, but the point is they will now be following (that's what -f does) the output of the file, which means whenever I type a command they will see it as well as any output. This continues until they hit ctrl-c or I type 'exit' (and if I do that, they will still be trying to follow the file, so they will need to hit ctrl-c too). Note that even if I remove the log file while they're watching it, they will still see the output until I exit the script session. This is because both script and tail still hold open file descriptors on the log file; removing it only unlinks the name from the directory, while the inode and its data live on until the last descriptor is closed, so script keeps writing and tail keeps following.
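
To put the snoop's side together in one place, here is a minimal sketch (the /tmp/luser.log path is just the running example from above; all three options are standard GNU tail):

$ tail -f /tmp/luser.log           # follow the log, as above
$ tail --retry -f /tmp/luser.log   # keep retrying the open until the file exists (some versions warn that --retry is mainly meant for --follow=name, i.e. tail -F)
$ tail -n 100 -f /tmp/luser.log    # start from the last 100 lines instead of the default ten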

As for the caveat I referred to, it is simply this: control and escape sequences are also written to the file, so it isn't plain text. For the same reason, full-screen programs (text editors such as vi, for example) will not display correctly on the snoop's end.

In the end, this is probably not often used, but it is very useful when it is indeed needed. Lastly, if you were to cat the output file afterwards, you'd see the session replayed much as it appeared in real time. Most importantly: do not ever do anything that would reveal confidential information, and if you do have anything you don't want shown to the world, do not use /tmp or any world-readable file (and rm it when done, too!). Yes, you can have someone read a file in your own directory as long as they know the full path and have the proper permissions on the directory and the file.
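
If you do want the log somewhere other than /tmp, a minimal sketch along these lines works (luser and snoop are the example names from above; setfacl needs the acl tools and a filesystem with ACL support, and the snoop also needs search permission on the directories leading to the file):

$ install -m 640 /dev/null ~/session.log   # pre-create the log with tight permissions
$ setfacl -m u:snoop:r ~/session.log       # optionally let exactly one user read it
$ script --flush ~/session.log             # script reuses the existing file, so the permissions stay put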

Encryption IS Critical

I admit that I'm not big on mobile phones (and I also admit this starts out about phones, but it is a general thing and the rules apply to all types of nodes). I've pointed this out before, especially with regards to so-called smart technology. However, just because I personally don't have much use for them most of the time does not mean that the devices should not be as secure as possible. Thus I am, first of all, giving credit to Apple (which, all things considered, is exceptionally rare) and Google (which is also very rare). I don't like Apple, particularly because of Steve Jobs' arrogance (which I've also written about), but that is only part of it. At the same time, I do have fond memories of the early Apple computers. As for Google, I have serious issues with them, but I haven't actually put words to it here (or anywhere, actually). But just because I don't like them does not mean they can never do something right or something I approve of. To suggest that would make me exactly what I call them out for being. Well, since Apple and Google recently announced that they will enable encryption by default for iOS and Android, I want to commend them for it: encryption is important.

There is the suggestion, most often by authorities (but not always, as – and this is something I was planning on writing about and might still – Kaspersky showed not too long ago when they suggested similar things), that encryption (and more generally, privacy) is a problem and a risk to one's safety (and others' safety). The problem here is that they are lying to themselves or they are simply ignorant (ignore the obvious, please; I know it, but it is beside the point for this discussion). They are also risking the people they claim to want to protect, and they risk themselves as well. Indeed, how many times has government data been stolen? More times than I would like to believe, and I don't even try to keep track of it (statistics can be interesting, but I don't find the subject of government – or indeed other entities' – failures all that interesting. Well, not usually). The problem really comes down to this, doesn't it? If someone has access to your or another person's private details, and it is not protected (or poorly protected), then what can be done to you or that other person if someone ELSE gets that information? Identity theft? Yes. An easier time gathering other information about you, who you work for, your friends, family, your friends' families, etc.? Yes. One of the first things an attacker will do is gather information, because it is that useful in attacks, isn't it? And yet those are only two issues of many more, and both of them are serious.

On the subject of encryption and the suggestion that "if you have nothing to hide you have nothing to fear", there is a simple way to obliterate it. All one needs to do is ask a certain (or similar) question, with an explanation following, directed at the very naive and foolish person (Facebook's founder has suggested something similar, as an example). The question is along these lines: is that why you keep your bank account, credit cards, keys, passwords, etc., away from others? You suggest that you shouldn't need to keep something private, because you have nothing to hide unless you did something wrong (and so the only time you need to fear is when you are in fact doing something wrong). Yet here you are hiding something you wouldn't want others knowing – and by your own logic it would follow that you did something wrong – hiding your private information all the same. The truth is that if you have that mentality, you are either lying to yourself (and ironically hiding something from yourself, therefore not exactly following your own suggestion) or you have hidden intent or reasons for wanting others' information (which, ironically enough, is also hiding something – your intent). And at the same time, you know full well that YOU do want your information private (and YOU should want it private!).

But while I'm not surprised here, I still find it hard to fathom how certain people, corporations and other entities still think strong encryption is a bad thing. Never mind the fact that in many high-profile cases criminal data confiscated by police has been encrypted and yet still revealed. Never mind the above. It is about control and power, and we all know that the only people worthy of power are those who do not seek it but are somehow bestowed with it. So what am I getting at? It seems that, according to the BBC, the FBI boss is concerned about Apple's and Google's plans. Now, I'm not going to be critical of this person, the FBI in general or anything of the sort. I made clear in the past that I won't get in to the cesspool that is politics. However, what I will do is remark on something this person said, though not on it by itself; rather, I will refer to something most amusing. What he said is this:

“What concerns me about this is companies marketing something expressly to allow people to place themselves beyond the law,” he said.

“I am a huge believer in the rule of law, but I am also a believer that no-one in this country is beyond the law,” he added.

But yet, if you look at the man page of expect, which allows interactive things that a Unix shell cannot do by itself, you’ll note the following capability:

  • Cause your computer to dial you back, so that you can login without paying for the call.

That is, as far as I am aware, a type of toll fraud. Why am I even bringing this up, though? What does this have to do with the topic? Well, if you look further at the man page, you’ll see the following:

ACKNOWLEDGMENTS
Thanks to John Ousterhout for Tcl, and Scott Paisley for inspiration. Thanks to Rob Savoye for Expect's autoconfiguration code.

The HISTORY file documents much of the evolution of expect. It makes interesting reading and might give you further insight to this software. Thanks to the people mentioned in it who sent me bug fixes and gave other assistance.

Design and implementation of Expect was paid for in part by the U.S. government and is therefore in the public domain. However the author and NIST would like credit if this program and documentation or portions of them are used.

29 December 1994

I'm not at all suggesting that the FBI paid for this, and I'm not at all suggesting anyone in the government paid for it (it is, after all, from 1994). And I'm not suggesting they approve of this. But I AM pointing out the irony. This is what I meant earlier – it all comes down to WHO is saying WHAT and WHY they are saying it. And it isn't always what it appears to be or is claimed to be. Make no mistake, people: encryption IS important, just like PCI compliance and auditing (regular corporate auditing of different types, auditing of medical professionals, auditing in everything), and anyone suggesting otherwise is ignoring some very critical truths. So consider that a reminder, if you will, of why encryption is a good thing. Like it or not, many humans have no problem with theft, no problem with manipulation, no problem with destroying animals or their habitat (the Amazon forest, anyone?). It is by no means a good thing, but it is still reality, and not thinking about it is a grave mistake (including, indeed, literally – and yes, I admit that is a pun). We cannot control others in everything, but that doesn't mean we aren't responsible for our own actions, and ignoring something that puts yourself at risk (never mind others here) places the blame on you, not someone else.

shell aliases: the good, the bad and the ugly


2014/10/07:
Please observe the irony (which actually further proves my point, and that itself is ironic as well) that I suggest using the absolute path name and then I do not (with sed). This is what I mean when I say I am guilty of the same mistakes. It is something I have done over the years: work on getting into the habit (of using absolute paths), then it slides, and then it happens all over again. This is why it is so important to get it right the first time (and this rule applies to security in general!). To make it worse, I knew it before I ever had root access to any machine, years back. But this is also what I discussed about convenience getting in the way of security (and aliases only add to the convenience/security conflict, especially with how certain aliases enable coloured output or some other feature). Be aware of what you are doing, always, and beware of not taking this all to heart. (And you can bet I've made a mental note to do this. Again.) Note that this rule won't apply to shell built-ins unless you use the program too (some – e.g., echo – have both). The command 'type' is a built-in, though, and not a program; you can check with the command itself (type -P type will show nothing, because there is no file on disk for type). Note also that I've not updated the commands where I show how aliases work (or commands that might be aliased). I've also not updated ls (and truthfully it is probably less of an issue, unless you are root, of course), but do note how to determine all the ways a command can be invoked:

$ type -a ls
ls is aliased to `ls --color=auto'
ls is /usr/bin/ls

This could in theory be a Unix-and-derivatives-only issue, but I feel there are similar risks in other environments. For instance, in DOS, program extensions had a priority, so that if you didn't type 'DOOM2.EXE' it would check 'DOOM2.COM', then 'DOOM2.EXE' and then 'DOOM2.BAT' – and with no privilege separation you had the ability to rename files, so if you wanted to write a wrapper around DOOM2 you could do it easily enough (I use DOOM2 in the example because not only was it one of my favourite graphical computer games – one I beat repeatedly, I enjoyed it so much, much more than the original DOOM – I also happened to write a wrapper for DOOM2 itself, back then). Similarly, Windows doesn't show extensions at all (by default, last I knew anyway), so if a file is called 'doom.txt.exe' then double-clicking on it would actually execute the executable instead of opening a text file (but the user would only see the name 'doom.txt'). This is a serious flaw in multiple ways. Unix has its own issues with paths (but at least you can redefine them and there IS privilege separation), though it isn't without its faults. Indeed, Unix wasn't designed with security in mind, and that is why so many changes have been implemented over the years (the same goes for the Internet's main protocols – e.g., IP, TCP, UDP, ICMP – as well as other protocols at, say, the application layer – all in their own ways). This is why things are so easy to exploit. This time I will discuss the issue of shell aliases.

The general idea is that finding the program (or script or…) to execute also follows a priority, a search order. This is why, when you are root (or using a privileged command), you should always use a fully-qualified name (primarily known as using the absolute file name). It is arguably better to always do this, because what if someone modified your PATH, added a file to your bin directory, updated your aliases, …? Then you risk running what you did not intend to. There is a way to determine all the ways a command could be invoked, but you should not rely on that, either; there is a short demonstration of the PATH risk just below. So: the good, then the bad and then the ugly of the way this works (remember, security and convenience conflict with each other a lot, which is quite unfortunate but something that cannot be forgotten!). When I refer to aliases, understand that aliases are even worse than the others (PATH and $HOME/bin/) in some ways, which I will get to in the ugly.
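
To make the PATH point concrete, here is a harmless demonstration you can try as an unprivileged user (/tmp/evil and its fake ls are made up purely for the demo):

$ mkdir /tmp/evil
$ printf '#!/bin/sh\necho "not the real ls"\n' > /tmp/evil/ls
$ chmod +x /tmp/evil/ls
$ PATH=/tmp/evil:$PATH
$ hash -r          # make bash forget where it previously found ls
$ ls
not the real ls
$ /usr/bin/ls      # the absolute path is not fooled
$ rm -r /tmp/evil  # clean up (and start a new shell to restore PATH)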


THE GOOD


There is one case where aliases are fine (or at least not as bad as the others; by 'the others' I mean aliases that add options). It isn't without flaws, however. Either way: let's say you're like me and you're a member of the Cult of VI (as opposed to the Church of Emacs). You have vi installed but you also like vim's features (and so have it installed too). You might want vi in some cases but vim in others (for instance, root uses vi and other users use vim; contrived example or not is up to your own interpretation). If you place the following line in $HOME/.bashrc, then you can override what happens when you type the command in question:

$ /usr/bin/grep alias $HOME/.bashrc
alias vi='vim'

Then typing 'vi' at the shell will open vim. Equally, if you type 'vi -O file1 file2' it will be run as 'vim -O file1 file2'. This is useful, but even then it has its risks. It is up to the user to decide, however (and after all, if a user account is compromised you should assume the system is compromised, because if it hasn't been already it likely will be – so what's the harm? Well, I would disagree that there is no harm – indeed there is – but…)
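
If you do alias something and occasionally want the original behaviour back, the shell gives you a few escape hatches (a quick sketch, using the vi/vim alias from above; all of these are standard bash behaviour):

$ \vi file         # a leading backslash suppresses alias expansion for this one invocation
$ command vi file  # the 'command' built-in skips aliases and shell functions
$ /usr/bin/vi file # or the absolute path, which is the habit argued for above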


THE BAD AND THE UGLY


Indeed, this is both bad and ugly. First, the bad part: confusion. Some utilities have conflicting options. So if you alias a command to use your favourite options, what happens the day you want to use another option (or see if you like it) and you are used to typing the basename (not the absolute name)? You get an error about conflicting options, or results you don't expect. Is it a bug in the program itself? Well, check your aliases, as well as anywhere else the problem might come from. In bash (for example) you can use:

$ type -P diff
/usr/bin/diff

However, is that necessarily what is executed? Let’s take a further look:

$ type -a diff
diff is aliased to `diff -N -p -u'
diff is /usr/bin/diff

So no, it isn’t necessarily the case. What happens if I use -y, which is a conflicting output type? Let’s see:

$ diff -y
diff: conflicting output style options
diff: Try 'diff --help' for more information.

Note that I didn't even finish the command line! It detected invalid output styles and that was it. Yet it appears I did not actually specify conflicting output style types – clearly I only specified one option, so this means the alias was used, which means that the option I typed was added to the aliased options rather than replacing them (certain programs take the last option given as the one that rules, but not all do, and diff does not here). If, however, I were to do:

$ /usr/bin/diff -y
/usr/bin/diff: missing operand after '-y'
/usr/bin/diff: Try '/usr/bin/diff --help' for more information.

There we go: the error I expected. That's how you get around it. But let's move on to the ugly, because "getting around it" only works if you remember – and more to the point, do not ever rely on aliases! Especially do not rely on them for certain commands. This cannot be overstated! The ugly is this:

It is unfortunate, but Red Hat Linux based distributions have this by default, and not only is it baby-sitting (which is both risky and obnoxious much of the time… something about the two being related), it has an inherent risk. Let's take a look at the default alias for root's 'rm':

# type -a rm
rm is aliased to `rm -i'
rm is /usr/bin/rm

-i means interactive; rm is of course remove. Okay, so what is the big deal? Surely this is helpful, because as root you can wipe out the entire file system? Fine, but you can argue the same for chown and chmod (always be careful with these utilities when used recursively – well, in general, even – but these specifically are dangerous; they can break the system with ease). I'll get to those in a bit. The risk is quite simple. You rely on the alias, which means you never think about the risks involved; indeed, you just type 'n' if you don't want to delete the files encountered (and you can answer yes to everything by piping in 'yes', among other ways, if you want to avoid the nuisance just the once). The risk, then, is this: what if by chance you become an administrator (a new administrator) on another (different) system and it does not have the -i alias? You then go to do something like the following (and one hopes you aren't root, but I'm going to show it as if I were root – in fact I'm not running this command – because it is serious):

# /usr/bin/pwd
/etc
# rm *
#

The pwd command was more to show you a possibility. Sure, there are directories there that won't be wiped out because there was no recursive option (directories, empty or not, need -r; -f only suppresses prompts and errors), but even if you are fast with sending an interrupt (usually ctrl-c, though it can be shown and also set with the stty command; see stty --help for more info), you are going to have lost files. The above would actually have shown, after the rm * but before the last #, that some of the entries were directories, but all the regular files directly in /etc would be gone. And this is indeed an example of "the problem is that which is between the keyboard and chair", or "PEBKAC" ("problem exists between keyboard and chair"), or even "PICNIC" ("problem in chair, not in computer"), among others. Why is that? Because you relied on something being one way and therefore never got into the habit of being careful (either always specifying -i, or using the command in a safe manner, like always making sure you know exactly what you are typing). As for chown and chmod? Well, if you look at the man pages, you see the following options (for both):

--no-preserve-root
 do not treat '/' specially (the default)
--preserve-root
 fail to operate recursively on '/'

Now if you look at the man page for rm, and see these options, you’ll note a different default:

--no-preserve-root
 do not treat '/' specially
--preserve-root
 do not remove '/' (default)

The problem? You might get used to the supposedly helpful behaviour of rm, which would show you:

rm: it is dangerous to operate recursively on ‘/’
 rm: use --no-preserve-root to override this failsafe

So you are protected from your carelessness (you shouldn't be careless… yes, it happens and I'm guilty of it too, but this is one of the things backups were invented for, as well as only being as privileged as necessary and only for the task at hand). But that protection is a mistake in itself. This is especially true when you then look at chown and chmod, both of which are ALSO dangerous when run recursively on / (actually on many directories; /etc is another example not to do it on, as that will break a lot of things, too). And don't even get me started on the mistake of: chown -R luser.luser .*/ – because .* matches .. (the parent directory), so even if you are in /home/luser/lusers, then as long as you are root (it is a risk to let users change owners, which is why only root can do that) you will be changing /home/luser and everything under it – and had the parent been / (say you were sitting in /root), you would be changing the entire root file system (/etc, /bin, /dev, everything) – to be owned by luser as the user and luser as the group. Hope you had backups. You'll definitely need them. Oh, and yes, any recursive action on .* is a risky thing indeed. To see this in action in a safe manner, as some user, in their home directory or even a sub-directory of their home directory, try the following:

$ /usr/bin/ls -alR .*

… and you'll notice it descending into the parent directory and everything below it! Run it from your home directory and it will happily list all of /home; had the parent been /, it would have been everything. The reason is the way pathname globbing works: .* matches . and .. as well as your dot files (try man -s 7 glob). I'd suggest you read the whole thing, but the part in question is under Pathnames.
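
A completely safe way to see what that glob actually expands to is to let echo show you (the dot files listed here are just examples and yours will differ; the . and .. entries are the point, because .. is the parent directory):

$ cd
$ echo .*
. .. .bash_history .bash_profile .bashrc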

So yes, if you rely on aliases – which is relying on not thinking (a problem in itself in so many ways) – then you're setting yourself up for a disaster. Whether that disaster in fact happens is not guaranteed, but one should be prepared and not set themselves up for it in the first place. And unfortunately some distributions set you up for this by default. I'm somewhat of a mind to alias rm to 'rm --no-preserve-root', but I think most would consider me crazy (they're probably more correct than they think). As for the rm alias in /root/.bashrc, here's how you remove it (or, if you prefer, comment it out). Just like everything else there are many ways; this one is at the command prompt:

# /usr/bin/sed -i 's,alias \(rm\|cp\|mv\),#alias \1,g' /root/.bashrc

Oh, by the way: yes, cp and mv (hence the command above commenting all three out) are also aliased in root's .bashrc to use interactive mode, and yes, the risks are the same. You risk overwriting files when you aren't on an aliased account – it might even be the same system you are used to as root, but you don't have the alias on all of your accounts, which means: if you were just root, remembered the alias was there and so felt safe, then logged back out to your normal, non-privileged user and did some maintenance there, what happens when you use one of those commands that is not aliased to -i? Indeed, aliases can be slightly good, bad and very ugly. Note also that (although you should log out anyway) even if you were to source the file again ('source /root/.bashrc' or equally '. /root/.bashrc'), the aliases would still exist, because sourcing does not unalias them (you could of course run unalias too, but better to log out; the next time you log in you won't have that curse upon you).
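
For the shell you are sitting in right now, the immediate fix is simply unalias (a small sketch; the sed command above only takes care of future logins):

# unalias rm cp mv
# type -a rm
rm is /usr/bin/rm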

One more thing that I think others should be aware of as it further proves my point about forgetting aliases (whether you have them or not). The reason I wrote this is twofold:

  • First, I've put off the alias issue with rm (and similar commands) for a long time, but it is something I've long thought about and it is indeed a serious trap.
  • Second, and this is where I really make the point: the reason this came up is one of my accounts on my server had the alias for diff as above. I don’t even remember setting it up! In fact, I don’t even know what I might have used diff for, with that account! That right there proves my point entirely (and yes, I removed it). Be aware of aliases and always be careful especially as a privileged user…

 

The Hidden Dangers of Migrating Configuration Files

One of the things I have suggested (in person, here, elsewhere) time and again is that the user is, more often than not, the real problem. It is the truth, it really is. I also tell others, and more often write, about how there should be no shame in admitting to mistakes. The only real mistake is not admitting to your mistakes, because if you don't admit to them you cannot learn from them; indeed, hiding behind a mask is not going to make the problem go away but will actually make it worse (while making it appear not to be a problem at the same time). So then let me make something very clear, and this too is something I've written about (and mentioned to people otherwise) before: administrators are users too. Any administrator not admitting to making blunders is either lying (and a poor liar at that, I might add) or their idea of administration is logging into a computer remotely, running a few basic commands, and logging out. Anyone that uses a computer in any way is a user; it is as simple as that. So what does this have to do with migrating configuration files? It is something I just noticed in full, and it is a huge blunder on my part. It is actually really stupid, but it is something to learn from, like everything else in life.

At somewhere around 5 PM / 17:00 PST on June 16, my server had been up for 2 years, 330 days, 23 hours, 29 minutes and 17 seconds. I know this because of the uptime daemon I wrote some time ago. However, around that time there was also a problem with the server. I did not know it until the next morning at about 4:00, because I had gone for the night. The problem was that the keyboard would not wake the monitor (once turned on) and I could not ssh in to the server from this computer; indeed, the network appeared down. In fact, it was down. However, the LEDs on the motherboard (thanks to a side window in the case) were lit, the fan lights were lit and the fans were indeed moving. The only thing is, the system itself was unresponsive. The suspected cause is something that I cannot prove one way or another, but it is this: an out-of-memory condition, the thinking being that the Linux OOM killer killed a critical process (and/or was not able to resolve the issue in time). I backed up every log file at that time, in case I ever want to look in to it further (it probably won't be enough information, but there was enough to tell me at about what time things stopped). There had been a recent library update (glibc, which is, well, very much tied into everything in userspace), but Linux is really good about this, so it really is anyone's guess. All I know is when the logs stopped updating. The end result is that I had to do a hard reboot. And since CentOS 7 came out a month or two later, I figured why not? True, I don't like systemd, but there are other things I do like about CentOS 7, and the programmer in me really liked the idea of GCC 4.8.x and C11/C++11 support. Besides, I manage on Fedora Core (a remote server and the computer I write from), so I can manage on CentOS 7. Well, here's the problem: I originally had trouble (the day was bad and I naively went against my intuition, which was telling me repeatedly, "this is a big mistake" – it was). Then I got it working the next day, when I was more clear. However, just as CentOS 5 to CentOS 6 had certain major services go through major releases (in that case it was Dovecot), the same happened here, only this time it was Apache. And there were quite some configuration changes indeed, as it was a major release (from 2.2 to 2.4). I made a critical mistake, however:

I migrated the old configuration files for Apache. Here is what happened, why I finally noticed it, and why I did not notice it before. Migrating old files is indeed dangerous if you are not very careful (keep in mind that major changes mean that, unless you have other systems with the same layout, you will not be 100% aware of all – keyword – changes). Even if you are careful, and even if things appear fine (no error, no warning, everything seems to work), there is always the danger that something that changed is in fact a problem. And that is exactly what happened. Let me explain.

In Apache 2.2.x you had the main config file /etc/httpd/conf/httpd.conf and you also had the directory /etc/httpd/conf.d (with extra configuration files, like the ones for mod_ssl, mod_security, and so on). In the main config file, however, near the beginning, you had the LoadModule directives, so everything worked fine. And since the configuration file has <IfModule></IfModule> blocks, as long as the module in question is not required there is no harm: you can consider it optional. In Apache 2.4.x, however, early in /etc/httpd/conf/httpd.conf there is an Include of the directory /etc/httpd/conf.modules.d, which has, among other files, 00-base.conf, and in that file are the LoadModule directives. And here is where the problem arose. I had made a test run of the install, but without thinking of the <IfModule></IfModule> blocks and non-required modules, and since the other Include directive is at the end of the file, surely there was no harm in shifting things around, right? Well, looking back it is easy to see where I screwed up and how. But yes, there was harm. And while I noticed this issue, it didn't exactly register (perhaps something to do with sleep deprivation combined with reading daily logs in the early morning and, more than that, being human, i.e., not perfect by any means, not even close). Furthermore, the error log was fine, and so in the logwatch output I did indeed see httpd logs. But something didn't register until I saw the following:


0.00 MB transferred in 2 responses (1xx 0, 2xx 1, 3xx 1, 4xx 0, 5xx 0)
2 Content pages (0.00 MB)


Certainly that could not be right! I had looked at my website just yesterday, and more than once. But then something else occurred to me. I began to think about it, and it had been some time since I had seen anything beyond the typical scanning for vulnerabilities that every webserver gets. I had not, in fact, seen much more. The closest would be:


2.79 MB transferred in 228 responses (1xx 0, 2xx 191, 3xx 36, 4xx 1, 5xx 0)
4 Images (0.00 MB),
224 Content pages (2.79 MB),


And yet, I knew I had custom scripts for logwatch that I made some time back (showing other information that I want to see, which isn't in the default logwatch httpd script/config). But I figured that maybe I had forgotten to restore them. The simple solution was to move the Include directives to before the <IfModule></IfModule> blocks – in other words, much earlier in the file, not at the end.
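
For what it's worth, a few quick checks would have caught this much sooner (a sketch only; the paths are the stock Red Hat/CentOS layout and the grep is just to eyeball the ordering):

# httpd -t                                    # syntax check only; this passed even with my blunder
# httpd -M | grep log_config                  # is mod_log_config actually loaded?
# grep -n -E '^(Include|LoadModule|LogFormat)' /etc/httpd/conf/httpd.conf
# tail -n 1 /var/log/httpd/access_log         # the real test: is the custom format being applied?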

To be true to my nature and word, I’m going to share what I actually saw in logs. This, I hope, will show exactly how sincere I am when I suggest that people admit to their mistakes and to not worry about so-called weaknesses. If there is any human weakness, it is the inability to understand that perfection isn’t possible. But that is more as I put it before: blinded by a fallacy. If you cannot admit to mistakes then you are hiding from the truth and ironically you are not fooling anyone but yourself.

The log entries looked like this:


vhost


Yes, really. I'm 100% serious. How could I screw up that badly? It is quite simple: it evaluated to that because, at the end of the config file, I include a separate directory that holds the vhosts themselves. But the CustomLog format I use, which I cleverly named vhost (because it shows the vhost as well as some other vhost specifics), was never actually defined (the log modules were not loaded at the time the LogFormat line was read, so its <IfModule> block was skipped). And in the <VirtualHost></VirtualHost> blocks I have CustomLog directives which would normally refer to that format by name. This means the custom log format was not used. The reason the error logs worked is that I did not define a custom error log format. But since the log modules were loaded after the configuration of the log formats, the access logs used the literal string "vhost" as their format, and that is it. A brilliant example of "the problem is that which is between the keyboard and chair", as I worded it circa 1999 (and others have put it other ways, longer than mine, for sure). And to continue with sharing such a stupid blunder, I'm going to point out that it had been this way for about 41 days and 3 hours. Yes, I noticed it, but I only noticed it in a (local) test log file (test virtual host). Either way, it did not register as a problem (it should have, but it absolutely did not!). I have no idea why it didn't, but it didn't. True, I have had serious sleep issues, but that is irrelevant. The fact is: I made a huge mistake with the migration of configuration files. It was my own fault, I am 100% to blame, and there is nothing else to it. But this is indeed something to consider, because no one is perfect, and when there is a major restructure of a configuration directory (or any other significant restructure) there are risks to keep in mind. This is just nature: significant changes require getting accustomed to things, and all it takes is being distracted, not thinking of one tiny thing, or even not understanding something in particular, for a problem to occur. Thankfully, though, most of the time problems are noticed quickly and fixed quickly, too. But in this case I really screwed up, and I am only thankful it wasn't something more serious. Something to learn from, however, and that is exactly what I've done.

 

chkconfig: alternatives system

This will be a somewhat quick write-up. Today I wanted to link a library into a program that is in the main Fedora Core repository (but which excludes the library due to policy). In the past I had done this by making my own RPM package with the release number one above the main release or, if you will excuse the pun, alternatively by not installing the Fedora Core version at all but only mine. I then had the thought: why not use the alternatives system? After all, if I wanted to change the default I could do that. This RPM isn't going to be in any of my repositories (I added one for CentOS 7 in the past few months), but realistically it could be. There was one thing that bothered me about the alternatives system, however:

I could never quite remember the proper way to install an alternatives group, because I had never actually looked at it with a clear head, and although the description is clear once I looked into it more intently, I always confused the link versus the path. Regardless, today I decided to sort it out once and for all. This is how each option works:


alternatives --install <link> <name> <path> <priority> [--initscript <service>] [--slave <link> <name> <path>]*

The link parameter is the generic, public name – the path users actually run – which becomes a symlink pointing into /etc/alternatives. The name parameter is the name of the alternatives group itself (and also the name of the symlink under /etc/alternatives). The path is the actual target of that symlink: the real file this member of the group provides. --initscript is Red Hat Linux specific, and although I primarily work with Red Hat I will not cover it. Priority is a number; the highest number is selected in auto mode (see below). --slave is for groupings: for instance, the program I was building has a man page, but so does the main one (the one from the Fedora Core repository), so what happens when I run man on the program name? With the groups, the slave files are updated along with the master. For the example I will use a program I wrote even though there is another one out there (also in the main Fedora Core repository): an uptime daemon. Let's say mine is called 'suptimed' and the other is 'uptimed', so the files /usr/bin/suptimed and /usr/bin/uptimed exist. Further, the man pages for suptimed and uptimed are /usr/share/man/man1/suptimed.1.gz and /usr/share/man/man1/uptimed.1.gz. That is just enough files to explain the syntax.

alternatives --install /usr/bin/uptimed uptimed /usr/bin/suptimed 1000 --slave /usr/share/man/man1/uptimed.1.gz uptimed.1.gz /usr/share/man/man1/suptimed.1.gz

While this is a hypothetical example (as in there might be more files to include in the slaves [1]), it should explain it well enough. After this, if you were to run uptimed it would run suptimed instead. Furthermore, if you were to type 'man uptimed' it would show suptimed's man page. Under /etc/alternatives you would see symlinks called uptimed and uptimed.1.gz, with the first pointing to /usr/bin/suptimed and the second pointing to /usr/share/man/man1/suptimed.1.gz.
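
To see the resulting chain of symlinks (a sketch using the hypothetical suptimed/uptimed names from above):

# alternatives --display uptimed       # show the group, its members and their priorities
# readlink /usr/bin/uptimed            # -> /etc/alternatives/uptimed
# readlink /etc/alternatives/uptimed   # -> /usr/bin/suptimed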

The syntax given above has [--slave <link> <name> <path>]* and the * after it specifically means you can use it more than once, depending on how many slaves there are. As for the [ and ], that is the typical way of showing options (not required) and their parameters (which may or may not be required for the option). The angle brackets indicate required arguments. This is a general rule, or perhaps even a de-facto standard.


alternatives --remove <name> <path>

The parameters have exactly the same meanings. So to remove suptimed from the group (after which the remaining entry might not even need alternatives at all – the trick only matters when there IS an alternative) I would use:

alternatives --remove uptimed /usr/bin/suptimed

alternatives --auto <name>

This switches the group to auto mode, as explained above (the alternative with the highest priority wins). Name has the same meaning.


alternatives --config <name>

Allows you to choose the alternative for the group interactively. It is a TUI (text user interface).
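
Roughly, the interaction looks like this (a sketch from memory, using the hypothetical group from above; the exact wording and layout vary by version):

# alternatives --config uptimed

There are 2 programs which provide 'uptimed'.

  Selection    Command
-----------------------------------------------
*+ 1           /usr/bin/suptimed
   2           /usr/bin/uptimed

Enter to keep the current selection[+], or type selection number: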

The rest I won't get into. The --config option is also not part of the original implementation (Debian's update-alternatives). --display <name> and --list are quite straightforward.

Debugging With GDB: Conditional Breakpoints

On Monday I discovered a script error in what is one of many scripts running in a process. I knew that it was not the script itself but rather a change I had made in the source code (so not the script but the script engine, because I implemented a bug in it) that caused the error. But in this case, the script in question is one of many of the same type, each running in a sequence. This means that if I am to attach the debugger and set a breakpoint, I have to check that it is the proper instance and, if not, continue until the next instance, repeating this until I get to the proper instance. Even then, I have to make sure that I don't make a mistake and that I ultimately find the problem (unless I want to repeat the process again, which usually is not preferred).

I never really thought of having to do this because I rarely use the debugger at all, let alone to debug something like this. But when I do, it is time-consuming. So I had a brilliant idea: make a simple function that the script could call and then trap that function; when I reach that breakpoint I then step into the script. Well, I was going to write about this, but on a whim I decided to look at GDB's help for breakpoints. I saw something interesting but did not look into it until today. As it turns out, GDB already has this functionality, only better. So this is how it works:

  • While GDB is attached to the process you set a breakpoint to the function you need to debug. For example, you want to set a breakpoint at cow: break cow
  • GDB will tell you which breakpoint number it is. It'll look something like: 'Breakpoint 1 at 0xdeadbeef: cow', where '0xdeadbeef' is the address of the function 'cow' in the program space and 'cow' is the function you set the breakpoint on. Okay, the function cow is probably not there, and it almost assuredly does not have the address '0xdeadbeef', although it could happen (and it would be very ironic yet amusing indeed), but this is just to show the output (and show how much fun hexadecimal can be, at least how fun it is to me). Regardless, you have the breakpoint number, and that is critical for the next step, which is – if you will excuse the pun – the beef of the entire process.
  • So one might ask, does GDB have the ability to check the variable passed in to the function for the condition? Yes, it does, and it also has the ability to dereference a pointer (or access a member function or variable on an object) passed in to the function. So if cow has a parameter of Cow *c and c has a function idnum (or a member variable idnum) then you can indeed make use of it in the condition. This brings us to the last step (besides debugging the bug you implemented, that is).
  • Command in gdb: 'cond 1 c->idnum == 57005' (without the quotes) will instruct GDB to stop at function cow (at 0xdeadbeef) only when c->idnum is 57005 (or, if you specified the condition as c->idnum() == 57005, only when c->idnum() returns 57005). Why 57005 for the example? Because 0xdead is 57005 in decimal. So all you have to do now is tell GDB to continue: 'c' (also without the quotes). When it stops you'll be at function 'cow' and c->idnum will be equal to 57005. By contrast, if you had made the condition c->idnum != 57005, then it would break whenever the cow is alive (to further the example above). A condensed session is sketched just below.
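
Put together, the session looks roughly like this (the file name, line number and addresses are invented to match the example above):

(gdb) break cow
Breakpoint 1 at 0xdeadbeef: file cow.cc, line 42.
(gdb) cond 1 c->idnum == 57005
(gdb) c
Continuing.

Breakpoint 1, cow (c=0x603010) at cow.cc:42
(gdb) print c->idnum
$1 = 57005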

That’s all there is to it!

Open Source and Security

One of the things I often write about is how open source is in fact good for security. Some will argue the opposite to the end, but what they are relying on, at best, is security through obscurity. Just because the source code is not readily available does not mean it is not possible to find flaws or even reverse engineer it. It doesn't mean it cannot be modified, either. I could find – as could anyone else – countless examples of this. I have personally added a feature to a Windows DLL – a rather important one, namely shell32.dll – in the past. I then went on to convince the Windows file integrity check not only to see the modified file as correct, but such that if I had replaced it with the original, unmodified one, it would have put my modified version back. And how did I add a feature without the source code? My point exactly. So to believe you cannot uncover how it works (or, as some would have you believe, modify it and/or add features) is a huge mistake. But whatever; this is about open source and security. Before I can get in to that, however, I want to bring up something else I often write about.

That thing I write about is this: one should always admit to mistakes. You shouldn’t get angry and you shouldn’t take it as anything but a learning opportunity. Indeed, if you use it to better yourself, better whatever you made the mistake in (let’s say you are working on a project at work and you make a mistake that throws the project off in some way) and therefore better everything and everyone involved (or around), then you have gained and not lost. Sure, you might in some cases actually lose something (time and/or money, for example) but all good comes with bad and the reverse is true too: all bad comes with good. Put another way, I am by no means suggesting open source is perfect.

The only thing that is perfect is imperfection.
— Xexyl

I thought of that the other day. Or, better put, I actually got around to putting it in my fortune file (I keep a file of pending ideas for quotes as well as the fortune file itself). The idea is incredibly simple: the only thing that will consistently happen without failure is 'failure', time and time again. In other words, there is no perfection. 'Failure' in quotes, because it isn't a failure if you learn from it; it is instead successfully learning yet another thing, and another opportunity to grow. On the subject of failure or not, I want to add a more recent quote (this one came to me later in August, after I originally posted this piece on 15 August) that I think really nails the idea very well:

There is no such thing as human weakness, there is only
strength and… those blinded by… the fallacy of perfection.
— Xexyl

In short, the only weakness is a product of one's mind. There is no perfection, but if you accept this you will be much further ahead (if you don't accept it, you will be less able to take advantage of what imperfection offers). All of this together is important, though. I refer to admitting mistakes and how it is only a good thing. I also suggest that open source is by no means perfect, and that therefore being critical of it, as if it were somehow less secure, is flawed. But here's the thing. I can think of a rather critical open source library, used by a lot of servers, that has had a terrible year. One might think that because of this, and specifically what the library is (which I will get to in a moment), it is somehow less secure or more problematic. What is this software? Well, let me start by noting the following CVE fixes that were pushed into update repositories yesterday:


- fix CVE-2014-3505 - doublefree in DTLS packet processing
- fix CVE-2014-3506 - avoid memory exhaustion in DTLS
- fix CVE-2014-3507 - avoid memory leak in DTLS
- fix CVE-2014-3508 - fix OID handling to avoid information leak
- fix CVE-2014-3509 - fix race condition when parsing server hello
- fix CVE-2014-3510 - fix DoS in anonymous (EC)DH handling in DTLS
- fix CVE-2014-3511 - disallow protocol downgrade via fragmentation


To those who are not aware, I refer to the same software that had the Heartbleed vulnerability, and therefore also the same software as some other CVE fixes not too long after that. And indeed it seems that OpenSSL is having a bad year. Well, whatever – or perhaps better put, whoever – is the source (and yes, I truly do love puns) of the flaws is irrelevant. What is relevant is this: they clearly are having issues. Someone, or some people, adding the changes are clearly not doing proper sanity checks and in general not auditing well enough. This just happens, however; it is part of life. It is a bad patch (to those that do not like puns, I am sorry, but yes, there goes another one) of time for them. They'll get over it. Everyone does, even though it is inevitable that it happens again. As I put it: this just happens.

To those who want to be critical, and not constructively critical, I would like to remind you of the following points:

  • Many websites use OpenSSL to encrypt your data, and this includes your online purchases and other credit card transactions. Maybe instead of being only negative you should think more about your own mistakes rather than attacking others? I'm not suggesting that you are not considering yours, but in case you are not, think about this. If nothing else, consider that this type of criticism will lead to nothing, and since OpenSSL is critical (I am not consciously and deliberately making all these puns; it is just in my nature), it can lead to no good and certainly is of no help.
  • No one is perfect, as I not only suggested above but have also suggested at other times, and I'll bring it up again in the future. Thinking yourself infallible is going to lead to more embarrassment than understanding this and preparing yourself, always being on the lookout for mistakes and reacting appropriately.
  • Most importantly: this has nothing to do with open source versus closed source. Closed source has its issues too, including fewer people being able to audit it. The Linux kernel, for example – and I mean the source code thereof – is on many people's computers, and that is a lot of eyes to catch any issues. Issues still have happened and still will happen, however.

With that, I would like to end with one more thought. Apache, the organization that maintains the popular web server as well as other projects, is really to be commended for their post-attack analyses. They have a section on their website which details attacks, and the details include what mistakes they made, what happened, what they did to fix it, as well as what they learned. That is impressive. I don't know if any closed source corporations do that, but either way, it is something to really think about. It is genuine, it takes real courage to do it, and it benefits everyone. This is one example; there are immature comments there, but that only shows how impressive it is that Apache does this (they have other incident reports too, as I recall). The specific incident is reported here.

Steve Gibson: Self-proclaimed Security Expert, King of Charlatans


2014/08/28:
Just to clarify a few points. Firstly, I have five usable IP addresses; that is because, as I explain below, some of the IPs in the block are not usable for systems but instead have other functions. Secondly, on the ports detected as closed and my firewall returning ICMP errors: it is true that I do return those, but of course there are ports missing there, and none of the others are open (that is, none have services bound to them), either. There are times I flat out drop packets on the floor, but even if I have the logs I'm not sure which log file to check (due to log rotation) to be sure. There are indeed some inconsistencies. But the point remains the same: there was absolutely nothing running on any of those ports, just like the ports it detected as 'stealth' (which is more like not receiving a response, and what some might call filtered, but in the end it does not mean nothing is there and it does not mean you are somehow immune to attacks). Third, I revised the footnote about FQDNs, IP addresses and what they resolve to. There were a few things I was not clear about, and in some ways I was unfair, too. I was taking issue with one thing in particular and I did a very poor job of it, I might add (something I am highly successful at, I admit).


One might think I have better things to do than write about a known charlatan, but I have always been somewhat bemused by his idea of security (perhaps because he is clueless and his suggestions are unhelpful to those who believe him, which is a risk to everyone). More importantly, though, I want to dispel the mythical value of what he likes to call stealth ports (and, even more than that, the idea that anything that is not stealth is somehow a risk). This, however, will not only tackle that; it will be done in what some might consider an immature way. I am admitting that up front, though. I'm bored and I wanted to show just how useless his scans are by making a mockery of those scans. So while this may seem childish to some, I am merely having fun while writing about ONE of MANY flaws Steve Gibson is LITTERED with (I use the word littered figuratively and literally).

So let's begin, shall we? I'll go in the order of the pages you go through to have his ShieldsUP! scan begin. On the first page, I see the following:

Greetings!

Without your knowledge or explicit permission, the Windows networking technology which connects your computer to the Internet may be offering some or all of your computer’s data to the entire world at this very moment!

Greetings indeed. Firstly, I am very well aware of what my system reveals. I also know that this has nothing to do with permission (anyone who thinks they have a say in what their machine reveals when connecting to the Internet – or a phone to a phone network, or … – is very naive, and anyone suggesting that there IS permission involved is a complete fool). On the other hand, I was not aware I am running Windows. You cannot detect that, yet you scan ports, which would give you one way to determine the OS? Here's a funny part of that: since I run a passive fingerprinting service (p0f), MY SYSTEM determined your server's OS (well, technically the kernel, but all things considered that is the most important bit, isn't it? It is not 100% accurate, admittedly, but that goes with fingerprinting in general, and I know that it DOES detect MY system correctly). So not only is MY host revealing information, YOURS is too. Ironic? Absolutely not! Amusing? Yes. And lastly, let's finish this part up: "all of your computer's data to the entire world at this very moment!" You know, if it were not for the fact that people believe you, that would be hilarious too. Let's break it into two parts. First, ALL of my computer's data? Really now? Anyone who can think rationally knows that this is nothing but sensationalism at best, but much more than that: it is you proclaiming to be an expert and then ABUSING that claim to MANIPULATE others into believing you (put another way: it is by no means revealing ALL data, not in the logical – data – sense or the physical – hardware – sense). And the entire world? So you're telling me that every single host on the Internet is analyzing my host at this very moment? If that were the case, my system's resources would be too busy to even connect to your website. Okay, context would suggest that you mean COULD, but frankly I already covered that this is not the case (I challenge you to name the directory that is most often my current working directory, let alone know that said directory even exists on my system).

If you are using a personal firewall product which LOGS contacts by other systems, you should expect to see entries from this site’s probing IP addresses: 4.79.142.192 -thru- 4.79.142.207. Since we own this IP range, these packets will …

Well, technically, based on that range, your block is 4.79.142.192/28. And technically your block includes (a) the network address, (b) the broadcast address and (c), in all likelihood, a gateway somewhere in the usable range. That means the IPs that would actually be probing are, at most, in the range 4.79.142.193 – 4.79.142.206. And people really trust you? You don't even know basic networking, and they trust you with security?
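
For anyone who wants to check the arithmetic, Red Hat's ipcalc (or any similar tool) will do it; for 4.79.142.192/28 the network address is .192, the broadcast is .207, and .193 through .206 are the 14 usable host addresses:

$ ipcalc --network --broadcast 4.79.142.192/28   # reports the network and broadcast addresses of the /28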

Your Internet connection’s IP address is uniquely associated with the following “machine name”:

wolfenstein.xexyl.net

Technically that is the FQDN (fully-qualified domain name[1]), not “machine name” as you put it. You continue in this paragraph:

The string of text above is known as your Internet connection’s “reverse DNS.” The end of the string is probably a domain name related to your ISP. This will be common to all customers of this ISP. But the beginning of the string uniquely identifies your Internet connection. The question is: Is the beginning of the string an “account ID” that is uniquely and permanently tied to you, or is it merely related to your current public IP address and thus subject to change?

Again, your terminology is rather mixed up. While it is true that you did a reverse lookup on my IP, it isn't exactly "reverse DNS". But since you are trying to simplify (read: dumb it down to your level) it for others, and since I know I can be seriously pedantic, I'll let it slide. But it has nothing to do with my Internet connection itself (I have exactly one). It has to do with my IP addresses, of which I have many (many if you consider my IPv6 block, but only 5 if you consider IPv4). You don't exactly have the same FQDN on more than one machine any more than you have the same IP on more than one network interface (even on the same system). So no, it is NOT my Internet connection but THE specific host that went to your website, and in particular the IP assigned to that host I connected from. And the "string" has nothing to do with an "account ID" either. But I'll get back to that in a minute.
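(To be concrete about what his site is actually doing: it takes the connecting address and asks the resolver for the name behind it, which in C looks roughly like the sketch below. The address used is a documentation placeholder, not mine and not his.)

    /* revlookup.c - resolve an IP address back to a name (a PTR query under
     * the hood). The address is a documentation placeholder, nothing more.
     * Build with: cc -o revlookup revlookup.c
     */
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <netdb.h>

    int main(void)
    {
        struct sockaddr_in sa;
        char host[256];

        memset(&sa, 0, sizeof sa);
        sa.sin_family = AF_INET;
        if (inet_pton(AF_INET, "198.51.100.42", &sa.sin_addr) != 1) {
            fprintf(stderr, "bad address\n");
            return 1;
        }

        /* NI_NAMEREQD makes the call fail if there is no PTR record at all,
         * which is exactly the point above: not every IP resolves to a name. */
        int rc = getnameinfo((struct sockaddr *)&sa, sizeof sa,
                             host, sizeof host, NULL, 0, NI_NAMEREQD);
        if (rc != 0) {
            fprintf(stderr, "no PTR record: %s\n", gai_strerror(rc));
            return 1;
        }
        printf("%s\n", host);
        return 0;
    }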

The concern is that any web site can easily retrieve this unique “machine name” (just as we have) whenever you visit. It may be used to uniquely identify you on the Internet. In that way it’s like a “supercookie” over which you have no control. You can not disable, delete, or change it. Due to the rapid erosion of online privacy, and the diminishing respect for the sanctity of the user, we wanted to make you aware of this possibility. Note also that reverse DNS may disclose your geographic location.

I can actually request a different block from my ISP and I can also change the IP on my network card. Then the only thing left is my old IP and its FQDN (which is no longer in use, and I can change the FQDN anyway since I have reverse delegation – yet according to you I cannot do any of that). I love your ridiculous terminology though. Supercookie? Whatever. As for it giving away my geographic location, let me make something very clear: the FQDN is irrelevant without the IP address. While it is true that the name will (sometimes) refer to a city, it isn't necessarily the same city or even county as the person USING it. The IP address is related to the network; the hostname is a CONVENIENCE for humans. You know, it used to be that host -> IP mapping was done without DNS (since DNS didn't exist) but rather with a file that maintained the mapping (which is still used, albeit very little). The reason DNS exists is convenience, and in general because no one would be able to know the IP of every domain name. Lastly, not all IPs resolve into a name.

If the machine name shown above is only a version of the IP address, then there is less cause for concern because the name will change as, when, and if your Internet IP changes. But if the machine name is a fixed account ID assigned by your ISP, as is often the case, then it will follow you and not change when your IP address does change. It can be used to persistently identify you as long as you use this ISP.

The occasions it resembles the IP are when the ISP has authority over the in-addr.arpa DNS zone for (your) IP and therefore has their own 'default' PTR record (but they don't always have a PTR record, which your suggestion does not account for; indeed, I could have removed the PTR record for my IP and then you'd have seen no hostname). But this does not indicate whether the IP is static or not. Indeed, even dynamic IPs typically (not always) have a PTR record. Again, the name does not necessarily imply static: it is the IP that matters. And welcome to yesteryear… these days you typically pay extra for static IPs, yet you suggest it is quite often the case that your "machine name is a fixed account ID" (which is itself a complete misuse of terminology). On the other hand, you're right: it won't change when your IP address changes, because the IP is what is relevant, not the hostname! And if your IP changes, then it isn't so persistent in identifying you, is it? It might identify your location, but as multiple (dynamic) IPs and not a single IP.

There is no standard governing the format of these machine names, so this is not something we can automatically determine for you. If several of the numbers from your current IP address (23.120.238.106) appear in the machine name, then it is likely that the name is only related to the IP address and not to you.

Except ISP authentication logs and timestamps… And I repeat the above: the name can include exactly what you suggest and still be static!

But you may wish to make a note of the machine name shown above and check back from time to time to see whether the name follows any changes to your IP address, or whether it, instead, follows you.

Thanks for the suggestion but I think I’m fine since I’m the one that named it.

Now, let’s get to the last bit of the ShieldsUP! nonsense.

GRC Port Authority Report created on UTC: 2014-07-16 at 13:20:16

Results from scan of ports: 0-1055

    0 Ports Open
   72 Ports Closed
  984 Ports Stealth
---------------------
 1056 Ports Tested

NO PORTS were found to be OPEN.

Ports found to be CLOSED were: 0, 1, 2, 3, 4, 5, 6, 36, 37,
                               64, 66, 96, 97, 128, 159, 160,
                               189, 190, 219, 220, 249, 250,
                               279, 280, 306, 311, 340, 341,
                               369, 371, 399, 400, 429, 430,
                               460, 461, 490, 491, 520, 521,
                               550, 551, 581, 582, 608, 612,
                               641, 642, 672, 673, 734, 735,
                               765, 766, 795, 796, 825, 826,
                               855, 856, 884, 885, 915, 916,
                               945, 946, 975, 976, 1005, 1006,
                               1035, 1036

Other than what is listed above, all ports are STEALTH.

TruStealth: FAILED – NOT all tested ports were STEALTH,
                   – NO unsolicited packets were received,
                   – A PING REPLY (ICMP Echo) WAS RECEIVED.

The ports you detected as "CLOSED" and not "STEALTH" were in fact returning an ICMP host-unreachable. You fail to take into account the golden rule of firewalls: that which is not explicitly permitted is forbidden. That means that even though I have no service running on any of those ports, I still reject the packets sent to them. Incidentally, some ports you declared as "STEALTH" did exactly the same thing (because I only allow those ports from a specific source IP block). The only time I drop packets on the floor is when state checks fail (e.g., a TCP SYN flag is set but it is already a known connection). I could prove that too: I actually had you do the scan a second time, but this time I added specific iptables rules for your IP block, which changed the results quite a bit – and indeed I used the same ICMP error code.
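To make the difference concrete, in iptables terms it is roughly the difference between the following two rules (illustrative only – port 23 is an arbitrary example and this is not my actual rule set):

    $ iptables -A INPUT -p tcp --dport 23 -j REJECT --reject-with icmp-host-unreachable
    $ iptables -A INPUT -p tcp --dport 23 -j DROP

The first is what his scan reports as "CLOSED" (the probe gets an ICMP error back immediately); the second is what he calls "STEALTH" (the probe is silently discarded and the scanner waits for a timeout).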

As for ping: administrators that block ping outright need to be hit over the head with a pot of sense. Rate limit by all means, that is more than understandable, but blocking ICMP echo requests (and indeed replies) only makes troubleshooting network connectivity issues more of a hassle and at the same time does absolutely nothing for security (fragmented packets and anything else that can be abused are obviously dealt with differently, because they are different!). Indeed, if they are going to attack, they don't really care whether you respond to ICMP requests. If there is a vulnerability they will go after that, and frankly hiding behind your "stealth" ports is only a false sense of security and/or security through obscurity (which is a false sense of security and even more harmful at the same time). Here are two examples. First, if someone sends you a link (for example, in email) and it seems legitimate and you click on it (there has been a lot of this in recent years and it is ever increasing), the fact that you have no services running does not mean you are somehow immune to XSS, phishing attacks, malware, or anything else. Security is, always has been and always will be a many layered thing. Secondly: social engineering.

And with that, I want to finish with the following:

If anyone wants the REAL story about Steve Gibson, you need only Google for "steve gibson charlatan" and see the many results. I can vouch for some of them but there really is no need for it – the evidence is so overwhelming that it doesn't need any more validation. Here's a good one though (which also shows his ignorance as well as how credible his proclamations are): http://www.theregister.co.uk/2002/02/25/steve_gibson_invents_broken_syncookies/ (actually it is a really good one). If you want a list of items, check the search result that refers to Attrition.org and you will see just how credible he is NOT. A good example is the one that links to a page about Gibson and XSS flaws, which itself links to: http://seclists.org/vuln-dev/2002/May/25 which itself offers a great amount of amusement (note that some of the links are no longer valid, as it was years ago, but that is the one at seclists.org, and it is not the only incident).

[1] Technically, what his host is doing is taking the IP address and resolving it to a name (which is querying the PTR record, as I refer to above). Since I have reverse delegation (so have authority) and have my own domain (which I also have authority over), I have my IPs resolve to fully-qualified domain names as such. FQDN is perhaps not the best wording (nor fair, especially) on my part, in that I was abusing the fact that he is expecting the normal PTR records an ISP has rather than a server with a proper A record and a matching PTR record. What he refers to is as above: resolving the IP address to a name – and the IP does not have to have a name. Equally, even if a domain exists by name, it does not have to resolve to an IP ("it is only registered"). He just gave it his own name for his own ego (or whatever else).

Death Valley, California, Safety Tips and Harry Potter

I guess this might be the most bizarre title for a post yet but it is a take on real life and fantasy and particularly the Harry Potter series. I am implying two things with real life. I will get to the Harry Potter part later. While it is a specific tragedy in Death Valley it is not an uncommon event and since I have many fond memories of Death Valley (and know the risks), I want to reflect on it all (because indeed fantasy is very much part of me, perhaps too much so).

For the first ten years of my life (approximate) I visited Death Valley each year, in November. It is a beautiful place with many wonderful sights. I have many fond memories of playing on the old kind of Tonka trucks (which is a very good example of “they don’t make [it] like they used to” as nowadays it is made out of plastic and what I’m about to describe would be impossible). My brother and I would take a quick climb up the hill right behind our tent, get on our Tonka trucks (each our own) and ride down, crashing or not, but having a lot of fun regardless. I remember the amazing sand dunes with the wind blowing like it tends to in a desert. I remember being fortunate enough that there was a ghost town with a person living there who could supply me with electricity for my nebulizer for an asthma attack (and fortunate enough to see many ghost towns from where miners in the California Gold Rush would have resided). I remember, absolutely, Furnace Creek with the visitor centre and how nice everyone was there. I even remember the garbage truck driver who let my brother and me activate the mechanism to pick up the bin. I remember the many rides on family friends’ dune buggies. The amazing hikes in the many canyons is probably a highlight (but certainly not the only highlight). Then there is Scotty’s Castle (they had a haunted house during Halloween if I recall). There is actually an underground river (which is an inspiration to another work I did but that is another story entirely). They have a swimming pool that is naturally warm. I remember all these things and more even if most of it is vague. It truly is a wonderful place.

Unfortunately, the vast area – more than 3,373,000 acres (according to Wiki, which I seem to remember is about right; I'm sure the official Death Valley site would have more on this) – combined with the very fact that it is the hottest place on Earth (despite some claims; I am referring to officially acknowledged records) at 134.6 F / 57 C, makes it a genuinely dangerous place. That record was, ironically enough, set this very month in 1913, on July 10 (again according to Wiki but, from memory, other sources also place it in the early 1900s). This is an important bit (the day of the month in particular) for when I get to fantasy, by the way. Interestingly, the area I live in has a higher record for December and January than Death Valley by a few degrees (Death Valley: December and January at 89 F / 32 C; at my location I know I have seen at least 95 F / 35 C on the thermostat for both months, although it could have been higher too). Regardless, Death Valley's overall record is about 10 C higher (my location's record: 47 C / 116.6 F; Death Valley as above). And if you think of the size (as listed above) and that much of it is unknown territory for all but seasoned campers (a category my family fits), you have to be prepared. Make no mistake, people: Death Valley, and deserts in general, can be very, very dangerous. Always make sure you keep yourself hydrated. What is hydration though, for humans? It is keeping your electrolytes at a balanced level. This means that too much water is as dangerous as too little water. As a general rule of thumb that was given to me by the RN (registered nurse) for a hematologist I had (another story entirely, as is why I had one): if you are thirsty, you waited too long. Furthermore, for Death Valley (for example), make sure you either have a guide or you know your way around (and keep track – no matter how you do this – of where you go). That may include maps, compass, landmarks, and any number of other techniques. But it is absolutely critical. I have time and again read articles on the BBC where someone (or some people) from the UK or parts of Europe were unprepared and were found dead. It is a wonderful place, but be prepared. Although this should be obvious, it often isn't: Death Valley is better visited in the cooler months (close to Winter or even in Winter). I promise you this: it won't be cold by any means. Even if you are used to blizzards in your area, you will still have plenty of heat year round in Death Valley. I should restate that slightly, thinking about a specific risk (and possibility): deserts can drop to freezing temperatures! It is rare, yes, but when it does happen it will still be cold. Furthermore, deserts can see lots of rain, even flash floods! Yes, I've experienced this exactly. As for risks: if it looks cloudy (or if you have a sense of smell like mine, where you can smell rain that is about to drop – and no, that is not an exaggeration, my sense of smell is incredibly strong), or there is a drizzle (or otherwise light rain) or more than that, do not even think about hiking the canyons! It is incredibly dangerous to attempt it! This cannot be stressed enough. As for deserts and freezing temperatures, I live in a desert (most of Southern California is a desert) and, while it was over 22 years ago (approximately), we have still seen snow in our yard. So desert does not mean no rain or no snow. I've seen people write about hot and dry climates and deserts (comparing the two), but that is exactly what a desert is: a hot and dry climate!
But climate does not by any means somehow restrict what can or cannot happen. Just like Europe can see mid 30s (centigrade) so too can deserts see less than zero. And all this brings me to the last part: fantasy.

One of my favourite genres (reading – I rarely watch TV or films) is fantasy. While it is not the only series I have read, the Harry Potter series is the one I am referring to in particular, as I already highlighted. Indeed, everything in Harry Potter has a reason, has a purpose and in general will be part of the entire story! That is how good it is and that is how much I enjoyed it (I also love puzzles, so putting things together, or rather the need to do that, was a very nice treat indeed). I'm thankful for a friend who finally got me to read it (I had the books, actually, but never got around to reading the ones that were out, which would be up to and including book 3, Harry Potter and the Prisoner of Azkaban). The last two books I read the day each came out, in full, with hours to spare. Well, why on Earth would I be writing about fantasy, specifically Harry Potter, and Death Valley, together? I just read on the BBC that Harry Potter actor Dave Legeno has been found dead in Death Valley. He played the werewolf Fenrir Greyback. I will note the irony that today, the 12th of July, this year, it is a full moon. I will also readily admit that in fantasy, not counting races by themselves (e.g., Elves, Dwarves, Trolls, …), werewolves are my favourite type of creature. I find the idea fascinating and there is a large part of me that wishes they were real. (As for my favourite race, it would likely be Elves.) I didn't know the actor, of course, but the very fact he was British makes me think he too fell to the – if you will excuse the pun, which is by no means meant to be offensive to his family or anyone else – fantasy of experiencing Death Valley, and unfortunately it was fatal. And remember I specifically wrote 1913, July 10 as the date of the record temperature for Death Valley? Well, I did mean it when I wrote that it has significance here: he was found dead on July 11 of this year. Whether that means he died on the 11th is not exactly known yet (it is indeed a very large expanse and it is only because hikers found him that it is known at all), but that it was one day off is ironic indeed. It is entirely possible he died on the 10th, and it is also possible it was days before, or even on the 11th. This is one of those things that will be known after the autopsy occurs, as well as backtracking (by witnesses and other evidence), and not until then. Until then, it is anyone's guess (and merely speculation). Regardless, it is another person who was unaware of the risks, of which there are many (depending on where in Death Valley you might be in a vehicle: what happens if you run out of fuel and only have enough water for three days? There are so many scenarios, but they are far too often not thought of or simply neglected). Two other critical bits of advice: don't ignore the signs left all around the park (giving warnings), and always, without fail, tell someone where you will be! If someone had known where he was and approximately when he should be back (which should always be considered when telling someone else where you'll be), they could have gone looking for him. This piece of advice, I might add, goes for hiking, canoeing and anything else (outside of Death Valley too; this is a general rule), especially if you are alone (but truthfully – and I get the impression he WAS alone – you should not be alone in a place as large as Death Valley, because there are many places to fall, there are animals that could harm you, and instead of having a story to bring home you risk not coming home at all).
There are just so many risks, so always be aware of that and prepare ahead of time. Regardless, I want to thank Dave for playing Fenrir Greyback. I don't know whether you played in any other films and I do not know anything about you or your past, but I wish you had known the risks beforehand, and my condolences (for whatever they can be and whatever they are worth) to your friends and family. I know that most will find this post out of character (again, if you will excuse the indeed intended pun) for what I typically write about, but fantasy is something I am very fond of, and I have fond memories of Death Valley as well.

“I ‘Told’ You So!”

Update on 2014/06/25: Added a word that makes something more clear (specifically the pilots were not BEING responsible but I wrote “were not responsible”).

I was just checking the BBC live news feed I have in my bookmark bar in Firefox and I noticed something of interest. What is that? How automated vehicle systems (whether or not they are controlled by humans, they are still created by humans – and automation itself has its own flaws) are indeed dangerous. Now why is that interesting to me? Because I have written about this before in more than one way! So let us break this article down a bit:

The crew of the Asiana flight that crashed in San Francisco “over-relied on automated systems” the head of the US transport safety agency has said.

How many times have I written about things being dumbed down to the point where people are unable – or refuse – to think and act according to X, Y and Z? I know it has been more than once but apparently it was not enough! Actually, I would rather state: apparently not enough people are thinking at all. That is certainly a concern to any rational being. Or it should be.

Chris Hart, acting chairman of the National Transportation Safety Board (NTSB), said such systems were allowing serious errors to occur.

Clearly. As the title suggests: I ‘told’ you so!

The NTSB said the 6 July 2013 crash, which killed three, was caused by pilot mismanagement of the plane’s descent.

Again: relying on "smart" technology is relying on the smartest of the designer and the user (which doesn't leave much chance, does it?). But actually, in this case it is even worse. The reasons: first, they are endangering others' lives (and three died – is that enough yet?). Second is the fact that they are operating machinery, not using a stupid (which is what a "smart" phone is) phone. I specifically wrote about emergency vehicles and this very issue, and here we are, with exactly that situation arising: there are events that absolutely cannot be accounted for automatically and that require a person paying attention and using the tool responsibly!

During the meeting on Tuesday, Mr Hart said the Asiana crew did not fully understand the automated systems on the Boeing 777, but the issues they encountered were not unique.

This is also called “dumbing the system down” isn’t it? Yes, because when you are no longer required to think and know how something works, you cannot fix problems!

“In their efforts to compensate for the unreliability of human performance, the designers of automated control systems have unwittingly created opportunities for new error types that can be even more serious than those they were seeking to avoid,” Mr Hart said.

Much like what I have written about before – computer security, computer problems, emergency vehicles and automated vehicles in general, and then some. This is another example.

The South Korea-based airline said those flying the plane reasonably believed the automatic throttle would keep the plane flying fast enough to land safely.

Making assumptions at the risk of others' lives is irresponsible and frankly reprehensible! I would argue it is potentially – and in this case, is – murderous!

But that feature was shut off after a pilot idled it to correct an unexplained climb earlier in the landing.

Does all of this start to make sense? No? It should. Look at what the pilot did. Why? A stupid mistake, or did an evil gremlin take him over momentarily? Maybe the gremlin IS their stupidity.

The airline argued the automated system should have been designed so that the auto throttle would maintain the proper speed after the pilot put it in “hold mode”.

They should rather be saying sorry and then some. They should also be taking care of the mistake THEY made (at least as much as they can; they already killed – and yes, that is the proper way of wording it – three people)!

Boeing has been warned about this feature by US and European airline regulators.

The blame shouldn't be placed on Boeing if they weren't actually negligent, and they are doing what it seems everyone wants: automation. Is that such a good idea? As I have pointed out many times: no. Let me reword that a bit. Is Honda responsible for a drunk getting behind the wheel and then killing a family of five, four, three, two or even one person (themselves included – realistically that would be the only one who is not innocent!)? No? Then why the hell should Boeing be blamed for a pilot misusing the equipment? The pilot is not being responsible, and the reason (and how) the pilot is not being responsible is irrelevant!

“Asiana has a point, but this is not the first time it has happened,” John Cox, an aviation safety consultant, told the Associated Press news agency.

It won’t be the last, either. Mark my words. I wish I was wrong but until people wake up it won’t be fixed (that isn’t even including the planes already in commission).

“Any of these highly automated airplanes have these conditions that require special training and pilot awareness. … This is something that has been known for many years.”

And neglected. Because why? Here I go again: it is so dumbed down, so automatic that the burden shouldn’t be placed on the operators! Well guess what? Life isn’t fair. Maybe you didn’t notice that or you like to ignore the bad parts of life, but the fact remains life isn’t fair and they (the pilots and in general the airliner) are playing the pathetic blame game (which really is saying “I’m too immature and irresponsible and not only that I cannot dare admit that I am not perfect. Because of that it HAS to be someone else who is at fault!”).

Among the recommendations the NTSB made in its report:

  • The Federal Aviation Administration should require Boeing to develop “enhanced” training for automated systems, including editing the training manual to adequately describe the auto-throttle programme.
  • Asiana should change its automated flying policy to include more manual flight both in training and during normal operations
  • Boeing should develop a change to its automatic flight control systems to make sure the plane “energy state” remains at or above minimum level needed to stay aloft during the entire flight.

My rebuttal to the three points:

  • They should actually insist upon “improving” the fully automated system (like scrapping the idea). True, this wasn’t completely automated but it seems that many want that (Google self driving cars, anyone?). Because let’s all be real, are they of use here? No, they are not. They’re killing – scrap that, murdering! – people. And that is how it always will be! There is never enough training. There is always the need to stay in the loop. The same applies to medicine, science, security (computer, network and otherwise), and pretty much everything in life!
  • Great idea. A bit late of them though, isn’t it? In fact, a bit late of all airliners that rely on such a stupid design!
  • Well they could always improve but the same thing can be said for cars, computers, medicinal science, other science, and here we go again: everything in this world! But bottom line is this: it is not at all Boeing’s fault. They’re doing what everyone seems to want.

And people STILL want flying cars? Really? How can anyone be THAT stupid? While I don’t find it hard to believe such people exist, I still find it shocking. To close this, I’ll make a few final remarks:

This might be the wrong time, according to some, since it has just been reported. But it is not! If it is not the right time now, then when? This same thing happens with everything of this nature! Humans always wait until a disaster (natural or man made) happens before doing something. And then they pretend (lying about it in the process) to be better, but what happens next? They do the same thing all over again. And guess what also happens at that time? The same damned discussions (that I dissected, above) occur! Here's a computer security example: I've lost count of the number of times NASA has suggested they would be improving policies with their network, and I have also lost count of the times they then went on to LATER be compromised AGAIN with the SAME or an EQUALLY stupid CAUSE! Why is this? Irresponsibility and complete and utter stupidity. Aside from the fact that the only thing we learn from history is that – and yes, the pun is most definitely intended – we do not learn a bloody thing from history! And that is because of stupidity and irresponsibility.

Make no mistake, people:

  1. This will continue happening until humans wake up (which I fear that since even in 2014 ‘they’ have not woken up, they never will!).
  2. I told you so, I was right then and I am still right!
  3. Not only did I tell you so about computer security (in the context of automation) I also told you about real life incidents, including emergencies. And I was right then and I am still right!

Hurts? Well, sometimes that's the best way. Build some pain threshold, as you'll certainly need it. If only it were everyone's head at risk, because they're so thick that they'd survive! Instead we are all at risk because of others (including ourselves, our families, everyone's families, et al.). Even those like me who suggest this time and again are at risk (because they are either forced into using the automation or they are surrounded by drones – any pun is much intended here, as well – who willingly use their "smart" everything… smart everything except their brain, that is!).

SELinux, Security and Irony Involved

I've thought of this in the past and I've been trying to do more things (than usual) to keep me busy (there are too few things that I spend time doing, more often than not), and so I thought I would finally get to this. So, when it comes to SELinux, there are two schools of thought:

  1. Enable and learn it.
  2. Disable it; it isn’t worth the trouble.

There is also a combination of the two: put it in permissive mode so you can at least see alerts (much as logs might be used). But for the purpose of this post, I'm going to only include the mainstream thoughts (so 1 and 2, above). Before that though, I want to point something out. It is indeed true I put this in the security category, but there is a bit more to it than that, as those who read it will find out at the end (anyone who knows me – and yes, this is a hint – will know I am referring to irony, as the title suggests). I am not going to give a suggestion on the debate over SELinux (and that is what it is – a debate). I don't enjoy, and there is no use in, endless debates on what is good, what is bad, what should be done, debating whether or not to do something, or even debating about debating (and yes, the latter two DO happen – I've been involved in a project that had this and I stayed out of it and did what I knew to be best for the project overall). That is all a waste of time and I'll leave it to those who enjoy that kind of thing. Because indeed the two schools of thought do involve quite a bit of emotion (something I try to avoid) – they are so passionate about it, so involved in it, that it really doesn't matter what is suggested from the other side. It is taking "we all see and hear what we want to see and hear" to the extreme. This is all the more likely when you have two sides. It doesn't matter what they are debating, it doesn't matter how knowledgeable they are or are not, and nothing else matters either: they believe their side's purpose so strongly, so passionately, that much of it is mindless and futile (neither side sees anything but its own view). Now then…

Let’s start with the first. Yes, security is indeed – as I’ve written about before – layered and multiple layers it always has been and always should be. And indeed there are things SELinux can protect against. On the other hand, security has to have a balance or else there is even less security (password aging + many different accounts so different passwords + password requirements/restrictions = a recipe for disaster). In fact, it is a false sense of security and that is a very bad thing. So let’s get to point two. Yes, that’s all I’m going to write on the first point. As I already wrote, there isn’t much to it notwithstanding endless debates: it has pros and it has cons, that’s all there is to it.

Then there's the school of thought that SELinux is not worth the time and so should just be disabled. I know what they mean, and not only with the labelling of the file systems (I wrote before about how SELinux itself has issues at times, all because of labels, and so you have to have it relabel – fix – itself). That labelling issue is bad enough by itself, but then consider how it affects maintenance (even worse for administrators that maintain many systems) – adding new directories to an Apache configuration, as one example. Yes, part of this is laziness, but again there's a balance. While this machine (the one I write from, not xexyl.net) does not use it, I still practice safe computing, I only deal with software in the main repositories, and in general I follow exactly as I preach: multiple layers of security. And finally, to end this post, we get to some irony. I know those who know me well enough will also know very well that I absolutely love irony, sarcasm, satire, puns and in general wit. So here it is – and as a warning – this is very potent irony, so for those who don't know what I'm about to write, prepare yourselves:

You know all that talk about the NSA and its spying (nothing alarming, mind you, nor anything new… they’ve been this way a long long time and which country doesn’t have a spy network anyway? Be honest!), it supposedly placing backdoors in software and even deliberately weakening portions of encryption schemes? Yeah, that agency that there’s been fuss about ever since Snowden started releasing the information last year. Guess who is involved with SELinux? Exactly folks: the NSA is very much part of SELinux. In fact, the credit to SELinux belongs to the NSA. I’m not at all suggesting they tampered with any of it, mind you. I don’t know and as I already pointed out I don’t care to debate or throw around conspiracy theories. It is all a waste of time and I’m not about to libel the NSA (or anyone, any company or anything) about anything (not only is it illegal it is pathetic and simply unethical, none of which is appealing to me), directly or indirectly. All I’m doing is pointing out the irony to those that forget (or never knew of) SELinux in its infancy and linking it with the heated discussion about the NSA of today. So which is it? Should you use SELinux or not? That’s for each administrator to decide but I think actually the real thing I don’t understand (but do ponder about) is: where do people come up with the energy, motivation and time to bicker about the most unimportant, futile things? And more than that, why do they bother? I guess we’re all guilty to some extent but some take it too far too often.

As a quick clarification, primarily for those who might misunderstand what I find ironic in the situation, the irony isn’t that the NSA is part of (or was if not still) SELinux. It isn’t anything like that. What IS ironic is that many are beyond surprised (and I still don’t know why they are but then again I know of NSA’s history) at the revelations (about the NSA) and/or fearful of their actions and I would assume that some of those people are those who encourage use of SELinux. Whether that is the case and whether it did or did not change their views, I obviously cannot know. So put simply, the irony is that many have faith in SELinux which the NSA was (or is) an essential part of and now there is much furor about the NSA after the revelations (“revelations” is how I would put it, at least for a decent amount of it).

Fully-automated ‘Security’ is Dangerous


2014/06/11:
Thought of a better name. Still leaving the aside, below, as much of it is still relevant.


(As a brief aside, before I get to the point: This could probably be better named. By a fair bit, even. The main idea is security is a many layered concept and it involves computers – and its software – as well as people, and not either or and in fact it might involve multiples of each kind. Indeed, humans are the weakest link in the chain but as an interesting paradox, humans are still a very necessary part of the chain. Also, while it may seem I’m being critical in much of this, I am actually leading to much less criticism, giving the said organisation the benefit of the doubt as well as getting to the entire point and even wishing the entire event success!)

In our ever 'connected' world it appears – at least to me – that there is much more discussion about automatically solving problems without any human interaction (I'm not referring to things like calculating a new value for Pi, puzzles, mazes or anything like that; that IS interesting and that is indeed useful, including to security, even if indirectly – and yes, this is about security, but security on the whole). I find this ironic, and in a potentially dangerous way. Why are we having everything connected if we are to detach ourselves from the devices (or, in some cases, become so attached to the device that we are detached from the entire world)? (Besides brief examples, I'll ignore the part where so many are so attached to their bloody phone – which is, as noted, the same thing as being detached from the world, despite the idea that they are ever more attached or, perhaps better stated, 'connected' – that they walk into walls, into people – like someone did to me the other day, same cause – and even walk off a pier in Australia while checking Facebook! Why would I ignore that? Because I find it so pathetic yet so funny that I hope more people do stupid things like that, which I absolutely will laugh at, as should be done, as long as they are not risking others' lives [including rescuers' lives, mind you; that's the only potential problem with the last example: it could have been worse, and due to some klutz the emergency crew could be at risk instead of taking care of someone else]. After all, those that are going to do it don't get the problem, so I may as well get a laugh at their idiocy, just like everyone should. Laughing is healthy. Besides those points, it is irrelevant to this post.) Of course, the idea of having everything connected also brings the thought of automation. Well, that's a problem for many things, including security.

I just read that DARPA (the agency that created ARPANet – you know, the predecessor to the Internet; and ARPA is still referred to in DNS, for example in in-addr.arpa) is running a competition as follows:

“Over the next two years, innovators worldwide are invited to answer the call of Cyber Grand Challenge. In 2016, DARPA will hold the world’s first all-computer Capture the Flag tournament live on stage co-located with the DEF CON Conference in Las Vegas where automated systems may take the first steps towards a defensible, connected future.”

Now, first, a disclaimer of sorts. Having had (and still do) friends who have been to (and continue to go to) DefCon (and I’m referring to the earlier years of DefCon as well) and not only did they go there, they bugged me relentlessly (you know who you are!) for years to go there too (which I always refused, much to their dismay, but I refused for multiple reasons, including the complete truth: the smoke there would kill me if nothing else did before that), I know very well that there are highly skilled individuals there. I have indirectly written about groups that go there, even. Yes, they’re highly capable, and as the article I read about this competition points out, DefCon already has a capture the flag style tournament, and has for many years (and they really are skilled, I would suggest that except charlatans like Carolyn P. Meinel, many are much more skilled than me and I’m fine with that. It only makes sense anyway: they have a much less hectic life). Of course the difference here is fully automated without any human intervention. And that is a potentially dangerous thing. I would like to believe they (DARPA) would know better seeing as how the Internet (and the predecessor thereof) was never designed with security in mind (and there is never enough foresight) – security as in computer security, anyway. The original reason for it was a network of networks capable of withstanding a nuclear attack. Yes, folks, the Cold War brought one good thing to the world: the Internet. Imagine that paranoia would lead us to the wonderful Internet. Perhaps though, it wasn’t paranoia. It is quite hard to know, as after all a certain United States of America President considered the Soviet Union “the Evil Empire” and as far as I know wanted to delve further on that theme, which is not only sheer idiocy it is complete lunacy (actually it is much worse than that)! To liken a country to that, it boggles the mind. Regardless, that’s just why I view it ironic (and would like to think they would know better). Nevertheless, I know that they (DARPA, that is) mean well (or well, I hope so). But there is still a dangerous thing here.

Here is the danger: by allowing a computer to operate on its own and assuming it will always work, you are essentially taking a great risk – and no one will forget what assuming does, either. I think that this is actually a bit understated, because you're relying on trust. And as anyone who has been into security for 15-20 years (or more) knows, trust is given far too easily. It is a major cause of security mishaps. People are too trusting. I know I've written about this before, but I'll just mention the names of the utilities (rsh, rcp, …) that were at one point the norm and briefly explain the problem: the configuration option – that was often used! – which allowed logging in to a certain host WITHOUT a password from ANY IP, as long as you logged in as a CERTAIN login! And people have refuted this with the logic of "they don't have a login with that name" (and note: there is a system-wide configuration file for this and also a per-user one, which makes it even more of a nightmare). Well, if it is their own system, or if they compromised it, guess what they can do? Exactly – create a login with that name. Now they're more or less a local user, which is so much closer to rooting (or, put another way, gaining complete control of) the system (which potentially allows further systems to be compromised).
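For those who never had the misfortune of dealing with those files, the trust configuration in question lived in /etc/hosts.equiv (system wide) and ~/.rhosts (per user), with '+' acting as a wildcard. This is from memory, so treat the exact semantics as illustrative:

    +        (in ~/.rhosts: trust ANY host to log in as this user, no password asked)
    +  +     (the infamous worst case: any host AND any remote user)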

So why is DARPA even considering fully automated intervention/protection? While I would like to claim that I am the first one to notice this (and more so to put it in similar words), I am not, but it is true: the only thing we learn from history is that we don't learn a damned thing from history (or we don't pay attention, which is even worse because it is flat out stupidity). The very fact that systems have been compromised by something that was ignored, not thought of beforehand (or thought of only from a certain angle – yes, different angles provide different insights), or because new innovations came along to trample over what was once considered safe, is all that should be needed in order to understand this. But if not, perhaps this question will resonate better: does lacking encryption mean anything to you, your company, or anyone else? For instance, telnet: a service that allows authentication and isn't encrypted (logging in, as in sending login and password in the clear over the wire). If THAT was not foreseen, you can be sure that there will ALWAYS be something that cannot be predicted. Something I have experienced – as I am sure everyone has – is that things will go wrong when you least expect them to. Not only that, much like I wrote not too long ago, it is as if a curse has been cast on you and things start to come crashing down in a long sequence of horrible luck.

Make no mistake: I expect nothing but greatness from the folks at DefCon. However, there's never a fool-proof, 100% secure solution (and they know that!). The best you can expect is to always be on the lookout, always making sure things are OK, keeping up to date on new techniques, new vulnerabilities, and so on – so software in addition to humans! This is exactly why you cannot teach security; you can only learn it – by applying knowledge, thinking power and something else that schools cannot give you: real life experience. No matter how good someone is, there's going to be someone who can get the better of that person. I'm no different. Security websites have been compromised before; they will be in the future. Just like pretty much every other kind of site (example: one of my other sites, before this website existed and when I wasn't hosting on my own – the host made a terrible blunder, one that compromised their entire network and put them out of business. But guess what? Indeed, the websites they hosted were defaced, including that other site of mine. And you know what? That's not exactly uncommon, for defaced websites to occur in mass batches simply because the webhost had a server – or servers – compromised*, and well, the defacer had to make their point and their name). So while I know DefCon will deliver, I also know it to be a mistake for DARPA to think there will at some point be no need for human intervention (and I truly hope they actually mean it to be in addition to humans; I did not, after all, read the entire statement, but it makes for a topic to write about and that's all that matters). Well, there is one time this will happen: when either the Sun is dead (and so life here is dead) or humans obliterate each other, directly or indirectly. But computers will hardly care at that point. The best case scenario is that they can intervene in certain (and indeed perhaps many) attacks. But there will never be a 100% accurate way to do this. If there were, heuristics and the many other tricks that anti-virus products (and malware itself) deploy would be much more successful and have no need for updates. But has this happened? No. That's why it is a constant battle between malware writers and anti-malware writers: new techniques, new people in the field, things changing (or, more generally, technology evolving like it always will) and in general a volatile environment will always keep things interesting. Lastly, there is one other thing: humans are the weakest link in the security chain. That is a critical thing to remember.

*I have had my server scanned from hosts at various webhosts (and, in some cases, from a system – owned by a student, perhaps – on a school campus) where the webhost or school didn't know that one of their accounts had been compromised. Indeed, I tipped the webhosts (and schools) off that they had a rogue scanner trying to find vulnerable systems (all of which my server filtered, but nothing is fool-proof, remember that). They were thankful, for the most part, that I informed them. But here's the thing: they're human, and even though they are companies that should be looking out for that kind of thing, they aren't perfect (because they are human). In other words: no site is 100% immune 100% of the time.

Good luck, however, to those in the competition. I would specifically mention particular (potential) participants (like the people who bugged me for years to go there!) but I'd rather not state them here by name, for specific (quite a few) reasons. Regardless, I do wish them all good luck (those I know and those I do not know). It WILL be interesting. But one can hope (and I want to believe they are keeping this in mind) that DARPA knows this is only ONE of the MANY LAYERS that make up security (security is, always has been and always will be, made up of multiple layers).

Implementing TCP Keepalive Via Socket Options in C

Update on 2014/06/08: Fixed an error with the IPv6 problem (which I refer to but do not elaborate on too much). Obviously an MTU of 14800 is not less than 1500 and, well, I won't go beyond that: I meant 1480 (although I found a reason for a different, lower MTU, but I don't remember the specifics and it is beside the point of TCP keepalives and manipulating them with the setsockopt call).

Update on 2014/05/21: I added the reference [1] that I forgot to add after suggesting that there would be a note on the specific topic (gateway in ISP terms versus the more general network gateway).

Important Update (fix) on 2014/05/13: I forgot a very important #include in the source file I link to at the end of the post. While I don't include (pardon the irony) creation of the socket, and I don't have any of the source in a function, I DO use the proper #include files, because without those the functions that I call will not be declared, which will result in compiler errors. The problem is that, because I have #ifdef .. #endif blocks for the relevant socket options, without this file (that is now #include'd) they would silently be skipped. The file in question is netinet/tcp.h (relative to /usr/include/). Without that file included, the socket options would not be #define'd and therefore this post would be less than useful and in fact less than useless.

This will be fairly short as I am quite preoccupied and it is a fairly simple topic (which is actually the reason I'm able to discuss it). In recent times I noticed a couple of issues with a network connection on which I am often idle but which should not be dropped, as the application itself uses the setsockopt(2) call to enable TCP keepalives at the socket level prior to binding itself to its ports (and this option should be inherited by the connecting clients). While they did inherit this property, there was a problem, and it only showed itself over IPv4 (IPv6 had another problem, and that was resolved by changing the MTU to 1480, down from 1500, via my network configuration. It didn't always have this problem – serious latency, well over two minutes, when being sent a page worth of text – but I have this vague memory that my modem/router, in such a context known as a "gateway"[1], used to have its MTU at 1480 but is now at 1500). While I initially did think of TCP keepalives, I did not actually think beyond the fact that the application enables SO_KEEPALIVE through the setsockopt call (which should have resolved the issue, in my thinking). But a friend suggested – after I mentioned that it happened within 1 to 2 hours of idleness – that it might be the actual time before the initial keepalive is sent, the number of probes to send if nothing is received by the other side, and how often to send the probes.

This thought was especially interesting because the initial keepalive is sent after (by default, under Linux) 7200 seconds (which is 2 hours). Since it usually took an hour before I noticed it (by actually having a reason to no longer be idle), it would stand to reason that the keepalive time was too high. So to test this, I initially set a much shorter keepalive time on both sides (server and client) via the sysctl command. This did not seem to help, however. So, to really figure this out, I sniffed the traffic on both ends. This means I could see when the server sends a keepalive probe, when (or if) the client sends a response, and, if there is no response, I would (presumably) see the next probe. It turns out that my end did receive the keepalive. However, it only received it one time. In other words, if I set the time to 10 minutes (600 seconds) and was idle for 10 minutes, I would receive a keepalive and respond. But 10 minutes later (so 20 minutes of being idle), the keepalive was NOT received. This is when I saw the further probes being sent by the server (none of which were received at the client end, and so the connection was considered 'dead' after a certain number of probes). Well, as it turns out, this can be remedied by taking advantage of setsockopt a bit further.
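(For reference, the system-wide knobs I was adjusting are the following Linux sysctls; the values shown are merely the sort of values I experimented with, not recommendations:)

    $ sysctl -w net.ipv4.tcp_keepalive_time=600
    $ sysctl -w net.ipv4.tcp_keepalive_intvl=60
    $ sysctl -w net.ipv4.tcp_keepalive_probes=5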

As far as I am aware, keepalive is not set by default on sockets (even TCP sockets), so you would in that case need to set that option first. Here is an example of how to set all the related options. Note that I was not interested in playing around with finding the optimal time for the server and client (actually, in this case it is for the client even though the server is the one that sets the socket options). Therefore, the time could potentially be higher than I set it to. This applies to all keepalive values. Nevertheless, for my issue, after adding the last three calls to setsockopt, recompiling the server, rebooting it and trying again, the problem was resolved (in fact I might not actually need two of the three additional calls, but again I was not wanting to play around with the settings for long). The first call turns on keepalive support and the following three set the options related to TCP keepalives, which I will comment on. This should be considered a mixture of pseudo-code and real code. That is, I am not including error reporting of any real degree, nor am I gracefully handling errors. I'm also not including the creation of the socket or the other related things. This is strictly setting keepalive, printing a basic error (with the perror call) and exiting. Further, I'm not explaining how setsockopt works except for what is in the file, and note that in the example the file descriptor referring to the socket is the variable 's' (which, again, is not being created for you).

The actual snippet can be found here.
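For those who would rather see the idea inline, below is a minimal reconstruction of what the snippet amounts to. The original is a bare fragment (not inside a function), so this is not it verbatim: the descriptor 's' is assumed to be an already-created TCP socket, and the timing values are only examples.

    /* A rough reconstruction of the idea; values are illustrative only. */
    #include <stdio.h>       /* perror */
    #include <stdlib.h>      /* exit */
    #include <sys/socket.h>  /* setsockopt, SOL_SOCKET, SO_KEEPALIVE */
    #include <netinet/in.h>  /* IPPROTO_TCP */
    #include <netinet/tcp.h> /* TCP_KEEPIDLE, TCP_KEEPINTVL, TCP_KEEPCNT */

    /* 's' is an already-created TCP socket (creation not shown, as in the post). */
    static void enable_tcp_keepalive(int s)
    {
        int on = 1;     /* enable SO_KEEPALIVE */
        int idle = 600; /* seconds of idleness before the first probe */
        int intvl = 60; /* seconds between probes */
        int cnt = 5;    /* unanswered probes before the peer is declared dead */

        if (setsockopt(s, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof on) < 0) {
            perror("setsockopt(SO_KEEPALIVE)");
            exit(EXIT_FAILURE);
        }
    #ifdef TCP_KEEPIDLE
        if (setsockopt(s, IPPROTO_TCP, TCP_KEEPIDLE, &idle, sizeof idle) < 0) {
            perror("setsockopt(TCP_KEEPIDLE)");
            exit(EXIT_FAILURE);
        }
    #endif
    #ifdef TCP_KEEPINTVL
        if (setsockopt(s, IPPROTO_TCP, TCP_KEEPINTVL, &intvl, sizeof intvl) < 0) {
            perror("setsockopt(TCP_KEEPINTVL)");
            exit(EXIT_FAILURE);
        }
    #endif
    #ifdef TCP_KEEPCNT
        if (setsockopt(s, IPPROTO_TCP, TCP_KEEPCNT, &cnt, sizeof cnt) < 0) {
            perror("setsockopt(TCP_KEEPCNT)");
            exit(EXIT_FAILURE);
        }
    #endif
    }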

[1] Gateway is actually much more than a router/modem combination. Indeed, there is the default gateway, which allows traffic destined for another network to actually get there (when there is no other gateway in the routing tables to route the traffic through). In general, a gateway is a router (which allows traffic destined for another network to actually get there and is therefore similar to the default gateway, because the default gateway IS a gateway). It can have other features along with it, but in the ISP sense a gateway is often a modem and router combination. The modem, though, in general serves a different purpose than a gateway, which is why I initially brought this up (but forgot to actually – if you will pardon any word play – address).
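(On a Linux host, the default gateway in question is simply whatever shows up from something like the following; the address and device in the output are made up for the example:)

    $ ip route show default
    default via 192.168.1.1 dev eth0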

Programmers are Human Too

Yes, as of 2014/06/08 this has changed titles and is quite different. I think this is better overall because it is more to the point (that I was trying to get across). It was originally about the Heartbleed vulnerability in OpenSSL. I have some remarks about that, below, and then I will write about the new title. I could argue that this title is not even the best. Really it is about how things will never go exactly as planned, 100% of the time. That’s a universal truth. First, though, about OpenSSL.

I have, since the time of writing the original post (quite some time ago, even), seen the actual source code and it was worse than I thought (there were absolutely no sanity checks, no checks at all, which is a ghastly error and very naive: you cannot ever be too careful, especially when dealing with input, whether from a file, a user or anything else). Of course, I noticed some days ago that more vulnerabilities were found in OpenSSL. The question is then: why do I tend to harp on the point that open source is, generally, more secure? Because generally it IS. The reason is that the source exists on many people's computers, which means more people can verify the source (for security bugs or any bugs, as well as whether or not it has been tampered with) and also many more people can view it and find errors (and the open source community is really good at fixing errors, simply because they care about programming; it is a passion, and no programmer who is – if you will excuse the much intended word play, encryption and all – worth their salt will not be bothered by a bug in their software). True, others can find errors too, but that itself is good, because let's be completely honest: how many find bugs/errors (security included) in Windows? MacOS? Other proprietary software? Exactly: many. The only difference is that with open source it is easier to find (or rather, spot) the errors, and if the finder is a programmer they might very well fix it and send a patch in (and/or report it – you may not believe that, but if you look at the bugzilla for different software, including the security sections, you will find quite some entries). Relying on closed source for security (as in: if they cannot see the source code then they cannot find bugs to exploit – which, by the way, is a fallacy unless there is no way to read the symbols in the software and also no way to, for example, use a hex editor or even a disassembler on it) is not security at all but rather security through obscurity (which I would rather call "insecurity", "a false sense of security through denial" or even – to be more blunt – "asking for a security problem when and where you don't expect it", to give three examples of how bad it is). Indeed, security through obscurity is, just like a poorly designed firewall, worse than none, because you believe you are safe (all the while not being safe and truly having no idea how bad it is or isn't), and since you believe you have a safe setup you won't look into the issue further (rather than constantly adapting as new ideas, new risks, new anything come up). Nevertheless, the fact is programmers are human too, and while some things might seem completely stupid, blind or anything in between, we all make mistakes.
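To illustrate the class of bug in the abstract – and this is NOT the OpenSSL code, just a hedged sketch of the general pattern of trusting a length field that arrived over the wire:

    /* A request arrives as [claimed length][payload] and the reply echoes
     * the payload back. The broken version trusts the claimed length. */
    #include <stddef.h>
    #include <string.h>

    /* Broken: copies 'claimed' bytes even if only 'received' bytes actually
     * arrived - the remainder is whatever happens to sit next to the buffer. */
    size_t echo_broken(char *out, const char *payload,
                       size_t claimed, size_t received)
    {
        (void)received;               /* no sanity check at all */
        memcpy(out, payload, claimed);
        return claimed;
    }

    /* Sane: never copy more than was actually received. */
    size_t echo_checked(char *out, const char *payload,
                        size_t claimed, size_t received)
    {
        size_t n = claimed < received ? claimed : received;
        memcpy(out, payload, n);
        return n;
    }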

So, essentially: yes, bugs in software or even hardware (which has happened and will happen again) can be beyond frustrating for the user (and they can also be the same for the programmers involved, mind you, as well as for other programmers who need them fixed but cannot fix them themselves). But so can a leak in your house, plumbing problems or even nature (a tree falling onto your house, for example). The truth of the matter is, unless you have somehow forgotten and deemed yourself perfect (which I assure you, no one is, especially not programmers – I'm not the only programmer that has observed this, mind you – but no one else is either, which means you are not perfect, either), you cannot realistically expect anyone else to be perfect. Problems will happen, always.

To bring the issue with OpenSSL into perspective – or, better stated, to give a non-computer, real-life analogy – think about a time when something went wrong beyond the usual things everyone experiences on a daily basis. For instance, the motor in your car needs replacement. That’s not a daily occurrence (or one can hope not!). While this won’t always happen, I know from experience – and others I have discussed this with agree – that often when things start breaking down, it feels like one thing after another, as if you were cursed. How long it goes on (days, weeks, …) will vary, but the fact of the matter is that multiple things go wrong, and often at the worst and/or least expected time (things are going incredibly well and suddenly something horrible happens).

Well, so too can this happen with software. I think the best way to look at it is this: the bugs have already been fixed, and while it is true that bug fixes often introduce new bugs (because, as I put it, programmers – myself included – implement bugs, and that is completely true), the same goes for any new feature: any modification to software is bound to introduce bugs – it will not always happen, but it always has the potential. The only kind of software or design (of anything else, for that matter) that has zero problems is the kind that doesn’t exist. This is why the RFCs obsolete things; this is why telnet and rcp/rsh were replaced with ssh (over time, even – some were very slow to change over, and when you look at the vulnerabilities, especially those related to trust with rcp/rsh, it is shocking how slow administrators were to replace them). This is why TCP syn cookies were introduced, and this is why everything in the universe (and I use the word universe in the literal sense: the universe that holds the planets we all know of, including Earth) changes. In short: no matter what safety mechanisms are in place, something else will eventually happen (as for the universe, do solar storms mean anything to Earth? What about the sun dying? Yes, both mean something to the Earth!).

So what is the way to go about this? Address problems as they come, in whatever way you can. That includes, by the way, giving constructive criticism as well as help where you can (which isn’t always possible – e.g., I’m not an electrician, so I sure as hell cannot offer advice on an electrical problem, except that I can refer you to an electrician I know to be good, trustworthy and experienced). I think that is the only way to stay semi-sane in such a chaotic world. Whether anyone agrees or not, I cannot change their view, nor will I try to. All I am doing is reminding others (admittedly probably not many – but I don’t mind: I’m not exactly outgoing or social, so I don’t mind not having a widespread audience; I write for the sake of writing, anyway) that nothing is perfect – not humans, not anything else. If you can understand that, you can actually better yourself, and even if you don’t deliberately use that fact to better others, you will better them too, if only indirectly: how you feel is contagious, and if you’re in a better mood or have insight into something, then those around you – those you work with, deal with or correspond with – will also feel that vibe and/or gain that insight.

Windows XP End of Life: Is Microsoft Irresponsible?

With my being very critical of Microsoft, one might make the (usually accurate) assumption that I’m about to blast Microsoft. Whether anyone is expecting me to or not, I don’t know, but I will make something very clear: I fully take responsibility for my actions, I fully accept my mistakes, and I make the best of situations. As such, I would like to hope everyone else does too. I know that is unrealistic at best; indeed, too many people are too afraid to admit when they don’t know something (and therefore irresponsibly make up pathetic and incorrect responses) or when they make a mistake. But the fact of the matter is this: humans aren’t perfect. Learn from your mistakes and better yourself in the process.

No, I am not going to blast Microsoft. Microsoft _was_ responsible: they announced the end of life _SEVEN YEARS AGO_! I am instead blasting those who are complaining (and complaining is putting it VERY nicely – it is more like the behaviour of a whiny little brat who throws a temper tantrum when they don’t get their own way on every single thing, despite having been told in advance this would happen) about how they now have to upgrade quickly or stop getting updates, security updates included. Let’s take two different groups. First, the manager Rosemayre Barry of the London-based business The Pet Chip Company, who stated the following (to the BBC, or at least it was reported by the BBC):

“XP has been excellent,” she says. “I’m very put out. When you purchase a product you don’t expect it to be discontinued, especially when it’s one of [Microsoft's] most used products.”

Sorry to burst your bubble, Rosemayre, but ALL software will eventually be discontinued (just as smoke detectors, carbon monoxide detectors and the like have to be replaced and/or are improved over time, and that is not even considering maintenance like battery replacement). You can complain all you want, but discontinuing it is not only the correct thing technically, it is economically unfeasible to keep supporting a product as old as Windows XP. I don’t care how used it is or isn’t (I hardly expect it to be Microsoft’s most used product, however; I would argue its office suite is more used, as it works across multiple versions of Windows and corporations rely on it heavily). I also know for a fact that corporations tend to have contracts with computer manufacturers under which they LEASE computers for a few years at a time, and when the time comes for the next lease they get the more recent software, the operating system included. Why would they do that? Because, again, it is economically better for the company.

And here’s some food for thought: Windows XP was released in 2001, and according to my trusty calculator (i.e., my brain) that makes it almost a 13 year old product (it was released in August and we’re only in April). Now compare that with Community ENTerprise OS (CentOS), a distribution of Linux largely used for servers, which has a product life cycle, as far as I remember, of only 10 years. And you know something else? CentOS is very stable precisely because it doesn’t take many updates – in other words, it is not on the bleeding edge. When a security flaw is fixed, the fix is backported into the older libraries and/or programs. Indeed, the current GCC release is 4.8.2, while CentOS’s current version (unless you count my backport of 4.7.2, which you can find more about at The Xexyl RPM Repository – possibly others exist elsewhere, but for the time being I have not updated the packages I maintain to the 4.8.x tree) is 4.4.7, released on the 13th of March 2012. Yes, that is over _two years ago_. It means you don’t get the newest standards (and even though the most recent C and C++ standards were ratified in 2011, that is not to say that anything released after the ratification date magically supports it all; in fact, some features are still incomplete in the most recent compiler versions), but it also means your system remains stable, and that is what a server needs to be: what good is a server if the service it offers is unstable (and I’m not referring to Internet connection stability – that is another issue entirely and has nothing to do with the operating system) and hard to use? Very little indeed. Realistically, 10 years is very reasonable, if not more than reasonable. Over the span of 10 years a lot changes, including a lot of core changes (and let’s not forget standards changing), which means maintaining something for 10 years is quite significant, and I cannot give anything but my highest praise to the team at CentOS – an open source and FREE operating system.
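As an aside, if you have to build the same code with both an older system compiler (such as CentOS’s GCC 4.4.7) and a newer one, the usual trick is to test the standard macros at compile time instead of assuming the newest features exist. A minimal sketch of my own, not tied to any particular project:

    #include <stdio.h>

    int main(void)
    {
    /* __STDC_VERSION__ is 201112L for C11 and 199901L for C99; compilers
     * running in C89 mode may not define it at all. */
    #if defined(__STDC_VERSION__) && __STDC_VERSION__ >= 201112L
        puts("C11 or later: features such as _Static_assert are available");
    #else
        puts("Pre-C11 compiler/mode: stick to C99 or C89 features");
    #endif

    #ifdef __GNUC__
        /* GCC also reports its own version, handy when checking backports. */
        printf("GCC %d.%d.%d\n", __GNUC__, __GNUC_MINOR__, __GNUC_PATCHLEVEL__);
    #endif
        return 0;
    }

Built with GCC 4.4.7 in its default mode this should print the pre-C11 message; the point is simply that code which guards on the standard keeps building on a stable, older toolchain.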
To be fair to this manager, they at least DID upgrade to a more recent Windows, but the complaint itself is unfair to Microsoft (and this is coming from someone who is quite critical of Microsoft in general and has railed on them more than once – on security and otherwise, in quotes about their capabilities and in articles alike – and quite harshly too; examples, one of which even includes a satirical image I made directed at Windows in general: Microsoft’s Irresponsible Failures and Why Microsoft Fails at the Global Spam Issue). It is also unrealistic to expect anything else.

Next, let’s look at what the UK government has done: they are paying Microsoft £5.5m to extend updates for Windows XP, Office 2003 and Exchange 2003 for ONE year. That is absolutely astonishing and, I would think – to UK tax payers – atrocious. What the hell are they thinking? If SEVEN years of warning was not enough time, what makes ONE extra year worth THAT MUCH? Furthermore, and most importantly, if they could not upgrade in SEVEN YEARS, what makes any rational being expect them to upgrade WITHIN A YEAR? They claim they’re saving money; that is a very odd idea of saving money. Not only are they paying to get updates for another year, they will STILL have to upgrade in due time if they are to keep getting updates. Think of it this way: when a major part of your car dies, you might consider fixing it, and it will likely be pricey. If, however, shortly thereafter (let’s say within a year or two) another major part dies – and the car has been driven for quite some years and is certainly out of warranty – what is the most logical and financially sound choice? Is it to assume this will be the last part to die (surely nothing else can go wrong!), pay for it, and then wait until the third part dies (which it almost certainly will; it is mechanical, and mechanical things die)? Or is it maybe better to cut your losses and get a new car? I think we all know the answer. “We”, of course, does not include the UK government. Then again, most versions of Windows might be the car that is breaking down, but that is another story entirely.

The bottom line here is quite simple, though: no, Microsoft is not being irresponsible, and they are not being unreasonable either – they gave SEVEN YEARS of notice. The only irresponsible and unreasonable people – and companies and/or government[s] – are those who STILL use Windows XP, and especially those who are now forced to upgrade and at the same time are whining worse than a spoiled brat who is used to getting his way and throws a tantrum the one time he doesn’t. Lastly, I want to point out the very dangerous fallacy these people are aligning themselves with. To those of us who remember when the TELNET and RSH protocols were prevalent, there came a time when enough was enough and the standards had to change (hence secure shell, aka ssh). Those who had any amount of logic in them UPGRADED. Many (though not as many as should have) had seen the problems with those protocols for far too long, among them the following (and note that these are on Unix systems, and yes, that means NO system is immune to security problems, be it Windows, Mac, Unix or anything else; incidentally, Unix systems are what servers typically run, which means customer data sitting in databases on those servers was at risk – especially then, as Windows NT was still in its infancy by the time most, but probably not all, sites changed over):

  1. The fact that a common configuration would allow “you” to remotely log in to a machine as a user from ANY HOST WITH NO PASSWORD (see the sketch after this list). And of course it was PERFECTLY SAFE because, after all, the other host won’t have a user with the same name, right? Well, did it ever occur to you that they could CREATE a user with that name? And ever hear of grabbing the password file remotely to find user names? Or an unscrupulous employee who could do the same? An employee who was fired and wants revenge (and happens to have user names, or maybe even stole data before they were completely locked out after being fired – maybe they even left a backdoor in)? For those who are slow, that is sarcasm; it was NEVER safe and it WAS ALWAYS naive at best (this is the problem of trust relationships, and that is one of the biggest problems in security – too much trust is given far too easily). And indeed, Unix – just like the predecessor to the Internet – was NEVER designed with security in mind. That is why new standards are a good thing: to address problems and to extend, deprecate or obsolete standards (like, I don’t know, IPv6 as opposed to IPv4, anyone?).
  2. No encryption, which means sniffing could reveal the user name and password (as well as other information in the traffic) to the sniffing party. Assuming that there is no one around to sniff your traffic is security through obscurity at best, and that is arguably worse than no security (it is a false sense of security, and taken to the extreme some will refuse to believe it is a problem and are therefore blinded to the fact that they already are – or could at any moment be – compromised).
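To make item 1 concrete, here is an illustration of the kind of trust configuration that made rsh/rlogin so dangerous; the host name and user name are hypothetical. In a user’s ~/.rhosts (or system-wide in /etc/hosts.equiv), each line names a trusted host and, optionally, a trusted remote user, and a ‘+’ acts as a wildcard:

    trustedhost.example.com  alice
    + +

The first line lets remote user alice on trustedhost.example.com log in as the local account with no password; the second trusts every user on every host, which is exactly the “ANY HOST WITH NO PASSWORD” scenario above. Even the “careful” first line is weak, since rsh trusted the client’s claimed identity and source address – and combine that with item 2, everything crossing the wire in cleartext, and it is not hard to see why ssh replaced the lot.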

Consider those two examples for a moment. Then take the logic of “most people use it” or “it is so convenient and we shouldn’t have to upgrade” and ask where you end up: exactly where those refusing to upgrade from Windows XP are – throwing tantrums about having to move to something more recent (despite the seven years of notice and it being in the news more and more as the deadline approached) or else not receiving updates. In other words, you are staying behind the times AND risking your data, your customers’ data and your system (and that means your network, if you have one). And you know something? You had it coming, so enjoy the problems YOU allowed, and let’s hope that only you or your company is affected and not your customers (because it would be YOUR fault).