The Art of Easter Eggs

This is obviously something best classified in the general topic, simply because the software I write about is Unix and its derivatives (primarily Linux). Two things in particular inspired it:

  • I discovered an easter egg in the editor vim earlier this year (which is to say, late September, early October).
  • Besides fond memories of easter eggs I discovered over the years, I enjoy designing them myself for programs I write (or at least for one that others will see, i.e., a specific MUD).

This is just for fun (as is expected of a topic about easter eggs): basically a list of easter eggs that are memorable to me, ones I either discovered on my own or remember reading about at some point over the years (I’ll specify which is which). I won’t list any I have implemented anywhere, and I never will. These easter eggs, mind you, are all old, with perhaps the exception of the vim one (which I suspect has been there for quite some time, but it is new to me). Therefore I don’t think this is harmful. If, however, you enjoy finding them on your own, don’t read this. That is my warning.

  • Colossal Cave Adventure, also known as Advent, is an old text-based game, somewhat like a MUD, only single-player. You interact with objects, open and close doors, you can get lost, you can die, and you gain points, too. While the version I am looking at now (version 4, one I fixed a segfault in for a friend and therefore have locally) is not the one I played years ago (that was an earlier version), it is still fun and absolutely has easter eggs. The narrator (I guess you could call it that) doesn’t take kindly to swearing. There are many responses to swearing, and many words it sees as swearing. I’ll leave them to your imagination except for the one I find most amusing (literally, it is amusing – it contradicts itself):

    ? screw you

    I trust you know what “you” might be, ’cause I don’t.

    Interestingly, when said friend referred to a crash, she didn’t know exactly what triggered it (it was for her friend, who has a Mac; first it failed to compile, which I fixed), except that it occurred after a command was typed. What that command was, I don’t remember (she didn’t know either, and in fact it wasn’t one specific command – it happened more than once), but I had the idea to play with exactly the above: as I was swearing at the computer, it gave me the information I needed. It was a segfault, so I recompiled with debugging symbols (the source is actually obfuscated and I didn’t think of running it through a beautifier; the programmer in me thought instead to make it drop a core), removed the limit on core size, caused it to crash (therefore dumping core) and found a dereference of a NULL pointer (which, as I’ve discussed before, is much preferable to a pointer that was never assigned to anything at all, or is otherwise pointing to garbage). I added an if, recompiled, and all was fine (a sketch of this workflow follows the list below).

  • I liked this one a lot, although I admit I enjoyed figuring out how to defeat the boss (and therefore win the game) more than the easter egg (which I also discovered on my own, if memory serves me correctly). I played the game a lot and I beat it many times. The last area – Icon of Sin – is one hell (indeed, that is intended) of a toxic dump, full of demons and monsters alike… but very well worth playing through (unless you are very easily frustrated). This is one of the few computer games I played – most were console games. The game in question is DOOM 2. The easter egg is the severed head of one of the developers, John Romero. If you are curious, check http://doom.wikia.com/wiki/ as they have a picture and (for those wondering) how it is found. What that Wiki page informed me of, something I did not know, is that at the beginning of the last area the voice says something that – once you decipher it – explains how to defeat the last boss. I had (have is a much better word) a knack for figuring out how things work and how to solve things (puzzles, games, …) and so I beat it without the hint (there are quite a few things in the area that can make or break your success, but I quite enjoy these things).
  • The Mortal Kombat series is another game I really enjoyed for a lot of years. These are perhaps more well known, but there are hidden characters in the series. One character, named by reversing the last names of two of the developers (Ed Boon and John Tobias), is Noob Saibot. He appeared at some points, though I don’t remember the specifics (the “Toasty!” exclamation, another easter egg of the series, also comes to mind). While checking the Mortal Kombat Wiki, I saw two other names that ring a bell: Smoke and Jade. Looking further, it seems that I did indeed go beyond seeing them in the background (definitely this) and in fact fought against them (whether I figured out how to do this on my own I really cannot remember – I suspect not fully by myself).
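
For anyone curious about the debugging workflow described in the first item, here is a minimal sketch of the recipe (the source file name is hypothetical; the steps – recompile with symbols, lift the core size limit, inspect the core – are the point):

$ cc -g -O0 -o advent main.c   # recompile with debugging symbols, no optimisation
$ ulimit -c unlimited          # remove the limit on core file size
$ ./advent                     # reproduce the crash; the segfault dumps core
$ gdb ./advent core            # load the program together with its core file
(gdb) bt                       # the backtrace points at the offending dereference

The fix is then often as simple as guarding the dereference with an if, exactly as described above.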

More generally, I know there are many others I discovered (or was told about and enjoyed) over the years. I’ll reflect on a theme, one I did not discover at all but remember reading about way back when. Then I’ll get to the vim easter egg.

Search Google for ‘Bill Gates is the antichrist’. The entry on http://urbanlegends.about.com is much of what it used to be (if not all). It is unfortunate that it isn’t the original, the one I saw so long ago: the original was lost because it was on Geocities, and that is long dead. An easter egg in one of the Microsoft Office programs (Excel 95, maybe?) is listed there. There’s also some maths with Bill Gates’ name (think: decimal values being added up) and what it equates to. Funnily enough, among those listed is this (not so much related to Bill Gates, but still relevant to the fact that I mention the editor vim – though in this case it is more vi):

Note that the internet is also commonly known as the World Wide Web or WWW... One way to write W is V/ (VI):

WWW
V/ V/ V/
666

Something to ponder upon, right?

Why is that amusing? Because of the editor wars between vi and emacs. This is one of those wars that is not hell-bent (can’t help it) on flaming, but rather on wit and humour. Wikipedia has an entry on it, in which it is claimed vi is the editor of the devil for the above reason (‘VI’ being the Roman numeral for 6). (There were more examples in that Wikipedia article, but that’s the relevant one.)

As for the easter egg in vim, I will give an explanation of why and how I discovered it (because I found that more useful than the easter egg itself), allowing those who are curious to try it themselves (tip: you can change it as well!). Of all the programs I use, the one I use the most (perhaps better stated: of all the utilities) is the shell, in my case inside ‘konsole’ (at my server I don’t have a GUI, so I just use the console itself). I usually have 5-10 tabs (or more) open, which means 5-10+ shells at any time. I use vim as my editor of choice (I used to use vi, but years ago I tried vim and I agree with the name: vim is indeed VI iMproved), and since it allows you to open one file and then switch to another without exiting (you can also open more than one ‘window’, each with another file, and this applies equally), the konsole tab title kept showing the original invocation rather than the file currently being edited. This was annoying for many reasons. Looking into how to fix it, the answer is something like putting this in your runtime file (per-user would be ~/.vimrc; you could also do it system-wide, but I tend to frown upon enforcing changes on all accounts, even if they can disable it):
:auto BufEnter * let &titlestring = hostname() . ":vim " . expand("%:p")
:set title titlestring=%<%F%=%l/%L-%P titlelen=70

Now if you open vim with (for example) ‘vim file1’ you will note, as before, that the konsole tab shows ‘vim file1’ (it might show other information like the hostname, or however you configure it, but this is up to you in the profile settings[1]). However, if you were to be in command mode and then use ‘:e file2’ you would see the tab has been updated to show ‘vim file2’. Now if you quit vim (command mode ‘:q’) you will see the tab title change once again: “Thanks for flying vim!” As for how you can change that, I’ll leave it to you, but it is noted in the help file (‘:help title’; read that entry as well as the entries below it, about titles). As an interesting bit, because I wanted to confirm that the two changes are exactly what is needed, I commented out the first line (prepend a double quote), saved, and (in another shell) started vim. It then shows as the title the name of the file, followed by much whitespace, then what is usually in the status line (bottom of screen by default): current line/line count-percentage, where current line is the line the cursor is on, line count is how many lines there are in total, and the percentage is how far into the file the cursor is.
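
As an aside on the mechanism: there is no magic in the ‘title’ option. Vim simply emits the escape sequence that xterm-compatible terminals (konsole included, provided the tab title format uses the window title) interpret as “set the window/tab title”. You can reproduce the effect straight from the shell:

$ printf '\033]0;%s\007' "vim file2"   # OSC 0: set the terminal/tab title

which is handy to know if you ever want a script, and not just vim, to label your konsole tabs.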

As a final note: enjoy easter eggs, whether you find them on your own or not: we put them there for our own enjoyment as well as yours! Although I am obviously biased, I think it really shows how clever programmers are and how easily they are amused. It is a good thing, though: it is a good way to release frustrations, and some of the time programmers are not really appreciated (or the amount of effort they put in is not always respected), so these things show that they too can have fun, and when others find them, they hopefully enjoy them as much as, if not more than, the actual program.

Viewpoint: The Attack on Sony

2014/12/21:
I am redacting my original post because, while I strongly believe what is being claimed (by the US government) is misguided and unhelpful, I also think the way I addressed it was not helpful either. Certainly it detracts from my main point. While I often will keep things I’ve written, as I put it in my about section, I also believe in fixing mistakes where necessary. I’ve also noted that some of my writing will come off as a rant or otherwise aggressive, and that I fix it where I can (I always try to get a point across but often fail because of aggressiveness, whether intentional or not – yesterday’s aggressiveness was not intentional by any means). It is interesting to note that the other day I actually went to write something about this issue and decided to delay it because I felt I was not in the right mindset. Apparently I was still not in the right mindset yesterday. Of course, even though I’m fixing the post, that does not at all mean what was public will not remain public: once on the Internet, it is as good as on the Internet (and even if all references were removed, it still doesn’t mean there isn’t a single person who saw it and potentially captured it: I’ve done exactly this and I know others have too). But I still believe in taking responsibility and addressing mistakes where possible, and addressing means fixing the issue(s). So with that, my modified view on the attack on Sony. Do note that the title could very well be better worded (it is the same as yesterday’s, too); I’m not sure how else to word it, so it’ll suffice.


The last time, Sony brought in (after the ‘first’ – the quotes are important here – attack) a security professional. Yet, while some might find it ironic (it isn’t), there was another attack. First tip of old: if you consider security after an attack, or after deployment (e.g. in software development), then you’re behind at best, and you may very well be too late. Second tip of old: in general, notwithstanding certain (rare and still not well advised) cases, if your network was compromised (there is one thing to consider here[1]), the only safe way is to start all over with improved policies, based on what you learned from the attack. There is a reason I quoted ‘first’: while it isn’t a guarantee (by all means, given what was claimed, it could have been individuals, but that makes it even worse, not better!), I wouldn’t be surprised if the earlier attackers had left a backdoor or otherwise hadn’t truly left. This very bit is a common thing, isn’t it? Why would it not be? While some do it for a challenge (and I would argue this is far less common these days), there is the simple fact that attackers use the breached network for many things. This includes bouncing (and that isn’t counting bouncing off of proxies). This brings me to the first real point about the ‘evidence’: the IP address.

I could elaborate on why an IP address doesn’t mean much, certainly not as proof, but I think I have a better way. If you were to lose your mobile phone, or if someone were to steal it, who is the rightful owner? You? Yes. Okay, so what happens if they then use that phone to pull pranks, make threatening calls, or otherwise abuse the fact that it isn’t their phone? Is it your fault, is it your responsibility? No? Then what makes you think an IP address is any different? It isn’t. There are far too many possibilities. Worse, even if the IPs are from North Korea (I haven’t seen them, nor do I really care – it is irrelevant to the point), it doesn’t mean the attack is state sponsored. It also doesn’t mean it isn’t. And that is exactly the problem: it is speculation, and until it is actually confirmed it may as well be slander. I’m sorry to say that being confident (as at least one US official has stated) doesn’t equate to reality, most certainly not 100% of the time; I know this personally, as does anyone who has been delusional but is not currently having said delusion: I was confident that traffic signals were spying on me and me alone, as one example. Yes, I’m able to admit this publicly. Why? Why not is the better question. While I am by no means suggesting they are delusional here, my point is that being confident does not equate to reality (and this applies to those who are not delusional). While this is not necessarily any better an example, something that specifically makes my point that many things are not as they seem: Mirror Lake in Yosemite National Park, to name one of several (I seem to remember there is one in Canada, too). This should all be kept in mind when dealing with accusations. I know, I know, there is the addition to the IP claim that the attack ‘looks’ similar to a previous attack: it is still speculation until proven otherwise. Again I’m going to give a non-technical example: some countries purchase aircraft from other countries, but that doesn’t mean the jet flying over a country IS from the country that manufactured it. Similarly, some countries share flags while others have flags similar to another’s. That doesn’t mean the countries are the same.

There’s also been the claim that this attack is unprecedented. I don’t think so. Neither was it impossible to prevent. Yes, there is always someone who can best another, but that doesn’t mean there is never room for improvement; there is always room for improvement! Always. Just like some attacks are not prevented, many more are. But to throw blame elsewhere, and to not address the real problem is a problem itself. This is not the first time Sony has come under attack. They also aren’t the only one to be compromised more than once. I know for a fact that Kevin Mitnick, what many would call a notorious hacker (and he calls himself a hacker too, or at least he did) fell prey to some that bested his him and consequently compromised his network. His company after his release from federal prison (2001 comes to mind as his release date but I’d have to check to confirm). In addition, the reason he was caught the second time around (indeed his arrest in the mid 1990s was not his first time being in trouble with the law) was because someone bested him then, as I recall someone he had attacked himself. I certainly do not call him a hacker, not even by the media’s definition of hacker: he is excellent at social engineering, that much is true. Regardless, this was not the first time Sony was compromised. You would think that someone like Mitnick would be able to not fall prey here, given his title. But then it is easier to forgive a company like Sony. The only thing that matters is (and I really hope they do exactly this) they re-evaluate their policies and implement the improvements. This goes for every entity. The only true mistake is not learning from your mistakes. The only failure is not learning from your supposed failures; we all make mistakes and none of us succeed in everything.

I would like to leave you with some final thoughts: the group that claimed responsibility for the attack on Sony, GOP – Guardians of Peace – only started to invoke the film about North Korea’s leader after it was suggested the attack was related to it. Because of this, and since North Korea was enraged about the film prior to the attack (I have some thoughts here, which I share below[2]), it was now North Korea’s fault. This is a fallacy. A fallacy is an illogical deduction, and even if they are responsible, the logic used above is flawed.

In the end, an IP address, similarity to a previous attack and other such things are not indicative of anything – not unless you wish to believe only what you want to believe. As a final example: Robert Tappan Morris, the author of the infamous ‘Internet Worm’, aka ‘The Morris Worm’, made his worm appear to come from a different university than his own. This was to throw the authorities off his trail. Due to mistakes on his part, however, the worm brought the affected machines to their knees. The effects of it were out and he was tracked down. Combine this with the fact that (for instance) many viruses, worms, trojan horses, backdoors and other malware form families (which are not necessarily written by the same person, and indeed often are not), and this shows too that similarity does not imply equivalence (nor the same source).


[1] Does web defacement constitute the network being compromised? It could. But it could also not. File integrity checks would help determine this (but they are not perfect either, if the attacker gets root access). It is true that with content management systems, a web defacement is easier and need not involve compromising the system itself (especially if there is a configuration in the web files that specifically denies the CMS from modifying those files; i.e. you can only use the interface for the content and nothing else). In the end, the only safe way is to start over – but mind the fact that, depending on how long ago the attack was (and attack implies original access, not defacement!), backups could also contain a backdoor or indeed anything else. On this latter bit, backups: this is one of many reasons that the backup volume should either only be mounted when backing up (or restoring) or be made immutable (except while writing to the volume). Another reason is user error (and yes, as I’ve made clear, administrators count as users): if a command you run, a script you write or something else you run (or that is affected by a bug) goes badly, what if you wipe out (or damage) your backup? (A redundant backup isn’t necessarily the answer, any more than redundant storage is; the point of backup is having the data in multiple locations (e.g. off-site (no, this does not include the cloud or anything you do not have complete control over!) and on-site), not merely having multiple copies: the difference is subtle but should be understood.)
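
To make the mount-only-during-backup idea concrete, here is a minimal sketch of the sort of wrapper I mean (the device, mount point and paths are hypothetical, and chattr assumes an ext2/3/4 filesystem):

# mount the backup volume only for the duration of the run
mount /dev/backupvg/backup /mnt/backup
chattr -R -i /mnt/backup/home            # lift the immutable flag for the write
rsync -a --delete /home/ /mnt/backup/home/
chattr -R +i /mnt/backup/home            # make it immutable again between runs
umount /mnt/backup

Either measure alone (unmounted or immutable) already protects against a stray rm or a buggy script; both together also narrow the window an attacker has.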

[2] As for North Korea being enraged: the fact remains that North Korea and the United States are not on good terms. So when you consider the film’s plot, and you consider the different culture, it isn’t all that surprising, is it? And you can see how it can provoke them. The subject of free speech (and more specifically freedom of expression) is usually brought up, and indeed it is here too. Unfortunately, though, like many things in this world, it is very often taken too far. It is especially taken too far when defending someone or something that you agree with (or sympathise with). It is also defended when you disagree with or dislike that which is offended or upset by the expression. As something I am unfortunately familiar with far too well, many people excuse bullies (even minor bullying is wrong, but minor bullying develops further into moderate, to extreme, to beyond extreme) with “kids will be kids”. The problem is, you can only abuse someone so much before they snap. So yes, kids will be kids – until victims of bullying – also kids – get revenge on the bullies. Then those kids are now horrible, and the “kids will be kids” is nowhere to be seen or heard. I am eternally grateful that there isn’t a violent streak in me, because I would have been another example of the above. I did get revenge in ways, but I did it subtly and non-violently. I also enjoyed outsmarting them, making them look like fools without them even knowing just how much so. The reality is violence doesn’t solve anything, at least not in positive ways, but if you subject someone to abuse, when they get revenge (which indeed includes violence), it is natural and expected. To explain this, there is a phenomenon called ‘identifying with the aggressor’. This is exactly why domestic abuse runs in families: the victims are not in power, are helpless and suffer because of it. But they also see that in order to gain control, which means to stop the abuse, they can become abusive themselves. So continues a vicious cycle…

George Boole: 150 Years Later

This will be fairly quick, as there isn’t much to write and I’m trying to turn off the lights for the day (the wording is… intentional). By chance I saw an entry in my BBC feed called George Boole and the AND OR NOT gates. Being a programmer, I immediately knew what it was referring to (of course programmers aren’t the only ones who would know it, but they definitely do). What I did not know (yes, I know, I can’t help it) is that it was 150 years ago that Boole himself died.

I find it rather interesting because within the last few months I started (and have not yet finished) an article on boolean logic, looking at it in a different way than I’ve seen it explained. The reason it isn’t finished is, along with that thing called ‘real life’ (or that’s what I am told, anyway – I’ve yet to confirm it to my satisfaction…), that it was a multiple-topic article (more specifically, about how C handles booleans, which is not the same as many other languages, e.g. Java). I just never finished it. I will in time, but I could not pass this day up without remarking on the fact that he is indirectly responsible for so many things that many take for granted (even simple things like flashlights – if I am thinking right (and I’m not really putting much thought into this at all, for the above noted reason), it is indeed the NOT gate that makes it work). It is simply amazing that something as simple as boolean logic (which can get complicated, yes, but everything can get complicated, and the point remains that boolean gates themselves are simple in design) can give life to so many things. But this is something that is common: some of the most crazy, stupid-sounding (and stupidly simple) ideas are actually brilliant, and perhaps much more than the person had in mind when they thought of it in the first place (and boolean logic is only one example).
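
Since the unfinished article came up, the difference I mean is this: in C there is (traditionally) no separate boolean type – any nonzero value counts as true, and the comparison and logical operators evaluate to the plain integers 0 or 1 – whereas Java has a distinct boolean that an int will not implicitly convert to. As it happens, shell arithmetic borrows the C convention, so you can see it without compiling anything:

$ echo $(( 5 && 2 )) $(( 5 || 0 )) $(( !5 )) $(( !0 ))
1 1 0 1

Nonzero operands are true, and the operators hand back 0 or 1: exactly the C behaviour.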

As an aside, tomorrow, December 9, as I recall, is Grace Hopper’s birthday. She also played a significant role in computing (and indeed she is the one who popularised the term debugging).

101 Years of Xexyl…

… for those who can count in binary, at least; indeed it was five years ago yesterday that I registered xexyl.net. I would never have suspected I would have what I have here today. I would never have imagined having my own RPM repository, and yet that is only one of the accomplishments here (for whatever each is worth).


I fully admit I live in something of a fantasy world (which is something of a paradox: if I admit it, does that make it real? And if it is real, then what is fantasy and how real is it?) and so it seems appropriate that, given the anniversary of xexyl.net, I reflect upon the history of xexyl.net and some of the views I have shared. (The namesake is much older, as I have made clear in the about section. It was many years ago that I played the game Xexyz, and it clearly made an impact in me – perhaps not unlike the space rocket that was launched into the moon some years back… But xexyl.net is only five years old, and while I have older domains, this is the first one I really feel is part of me.)

I have written quite some off-the-wall, completely bizarre and (pseudo)random articles, but I try to always have some meaning to them (no matter how small or large, and no matter how obvious or not), even if the meanings are somewhat ambiguous, cryptic and vague (as hard as that is to imagine of someone who elaborates as much as I do on any one topic, I do in fact abuse ambiguity and vagueness, and much of what I write – and indeed say – is cryptic). I do know, however, that I do not always succeed in this attempt. To suggest anything else would be to believe in perfection in the sense of no room for improvement.

I strongly believe that there is one kind of perfection that people should strive for, something that many might not think of as ‘perfect’: constantly improving yourself, eternally evolving as a person. When you learn something new or accomplish something (no matter how small or large), rather than think you are finished (something that one definition of ‘perfect’ suggests), you should think of it as a percentage: every time you reach ‘perfection’ – as 100% – you should strive for 200% of the last mile (200% of 1 is 2, 200% of 2 is 4, 200% of 4 is 8, etc.). This is, interestingly enough, exactly like binary: 1, 2, 4, 8, 16, 32, 64, 128, 256, 512 and so on (each increment is two times the previous value). In between the powers of 2 you make use of the other bits. For example, 1b + 1b (1 + 1 decimal) is 10b (2 decimal). 10b + 1b (2 + 1 decimal) is 11b (3 decimal). 11b + 1b (3 + 1 decimal) is 100b (4 decimal). This repeats in powers of 2 because binary is base 2. I’ve written about this before, but from this point onward I will call it ‘binary perfection’. It is also the only ideal perfection, exactly because it is constantly evolving. This may very well be an eccentric way to look at it, but I am an incredibly eccentric person. Still, this is the ‘perfect analogy’ and I daresay a brilliant and accurate one.
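
If you’d like to verify the binary arithmetic above rather than take my word for it, bc will do the base conversion for you (obase=2 makes it print results in binary):

$ echo 'obase=2; 1+1; 2+1; 3+1; 2^9' | bc
10
11
100
1000000000

which matches the progression: each doubling adds a new bit, and the values in between use the lower bits.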

As always, true to my word, I will continue this when I can. As long as I admit my mistakes I am not in denial; as long as I am not in denial, I can learn more, improve myself and those around me. While I do it for myself (this is one of the rare things I consider mine and mine alone), if it betters anyone else, then I will consider that a positive side effect. But indeed there are times when I am inactive for long periods, and other times when I have a couple or more posts in a month (or a fortnight, or whatever it is). This is because of what I have pointed out: I do this for me, but I also believe in openness with respect to sharing knowledge and experience. This includes, but is not limited to, programming (and by programming I refer to experience and concepts as well as published works, whether my work alone or my contributions to others’ works). But I am not an open person and I never have been. Perhaps this is best: I am a rather dark, twisted individual, an individual possessed by many demons. These demons are insidious monsters of fire that lash out at me (and at times my surroundings), but they are MY demons and I’ll be damned if anyone tries to take them away from me.

I am Xexyl and this is my manifesto of and for eternal madness…

The Secret: Trust and Privacy

First, as is typical of me, the title is deliberate: beyond the pun there is actually an important thing to consider, which is what I’m going to tackle here. ‘The secret’ does indeed imply multiple things, including the secret to secrets, the relation between privacy and security, and how trust is involved in all of this. I was going to write a revision to my post about encryption being important (and I might still, to amend one thing: to give credit to the FBI boss about something, something commendable), but I just read an article on the BBC that I feel gives me much more to write about. So let me begin with trust.

Trust is something I refer to a lot, in person and here and pretty much everywhere I write: considering trust is a good thing. Indeed, trust is given far too easily. As I have outlined before, even a little bit of trust – seemingly harmless – can be abused. Make no mistake: it is abused. The problem is, if you’re too willing to trust, how do you know when you’ve been too trusting? While I understand people need to have some established trust within their social circles, there are some things that do not need to be shared, and there are things that really should not be entrusted to anyone except yourself – and that potentially includes your significant other. Computer security is something that fits here; security in general does. The other problem is ignorance. Ignorance is not wrong, but it does hurt, and if you don’t understand that something is risky (and I would argue many of the more fanatical, and especially the younger, users of Facebook and other social media do not), how do you proceed? For kids it is harder, as it is known that kids just do not seem to understand that they are not immortal, not immune to things that really are quite dangerous. If you are too trusting with computers, you are opening a – yes, I know – huge can of worms, and it can cause all sorts of problems (anything from someone taking complete ownership of your computer, to monitoring your activities – which can lead to identity theft, phishing and many other things…). The list of issues that granting trust can lead to is, I fear, unlimited in size. It is that serious. You have to find a balance, and that is incredibly hard to do, no matter how experienced you are. I’ve made the general ideas clear before, but I don’t think I’ve actually tackled this issue with privacy and secrecy. I think it is time I do that.

In the wake of the Edward Snowden leaks, many more people are concerned for their privacy. While they should have always been concerned, it doesn’t really change the fact that they are now at least somewhat more cautious (or many are, at least). I have put this thought to words in multiple ways. The most recent is when I made a really stupid mistake (leading to me – perhaps a bit too critically, but the point is the same – awarding myself the ID 10 T award), all because I was far more exhausted than I thought. Had I been clear headed I wouldn’t have had the problem. But I wasn’t clear headed, and how could I know it? You only know it once it is too late (this goes for driving too, and that makes it even more scary, because you could hurt someone else, whether someone you care about or someone you don’t even know). The best way to word this is new on my part: despite the dismissal people suggest (“what you don’t know cannot hurt you” is 100% wrong), the reality is this: what you don’t know can hurt you, it likely will, and worse, it could even kill you! This is not an exaggeration at all. I don’t really need to get into examples. The point is these people had no idea to what extent spying was taking place. Worse still, they didn’t suspect anything of the sort. (You should actually expect the worst in this type of thing, but I suppose that takes some time to learn and come to terms with.) Regardless, they do now. It has been reported – and this is not surprising, really, is it? – that a great portion of the population of the United States is now very concerned with privacy and has much less trust in governments (not just the US government, folks – don’t fall for the trap that only some countries do it; you’re only risking harm to yourselves if you do!) when it comes to privacy.

What some might not think of (although certainly more and more do and will over time), and this is something I was somewhat getting at with the encryption post, is this: if the NSA could not keep their own activities (‘own’ is keyword number one) secret (and safe!) – and that is ironic in itself, isn’t it? Very ironic, to the point of hilarity – then how can you expect them to keep YOUR (keyword number two) information secret and safe? You cannot. There is no excuse for it, and they aren’t the only ones: government, corporations, it really doesn’t matter; too many think of security after the fact (and those that do think of it in the beginning are still prone to making a mistake, or not thinking of all angles… or a bug in a critical component of their system leads to the hard work in place being much less useful or relevant). The fact that they are a spying agency and they couldn’t keep that secret is, to someone who is easily amused (like myself), hilarious. But it is also serious, isn’t it? Yes, and it actually strengthens (or further shows) my point about secrets, which I will get to in the end. To make matters worse (as hard as that is to fathom), you have the increase (and I will tell everyone this: it is not going to go away and it is not going to be contained – no, it will only get worse) in point of sale attacks (e.g. through malware) that has in less than a year led to more corporations having major leaks of confidential information than I would like to see in five or even ten years. And that is the number of corporations – the number of victims is in the millions (per corporation, even)! This information includes credit card details, email addresses, home addresses… basically the information that can help phish you, even enough to steal your identity (to name one of the more extreme possibilities). Even if they don’t use it for phishing, you would be naive to expect them not to use the stolen information.

I know I elaborate a lot, and unfortunately I haven’t tied it all together yet. I promise the point is short, however (although I do give some examples below, too, which add up in length). There is only one way to keep something safe, and it is this: don’t share it. The moment you share something with anyone, the moment you write it down, type it (even if you don’t save it to disk), or do some activity that is seen by a third party (webcam or video tape, anyone?), it is not a secret. While the latter (being seen by camera) is not always applicable, the rest is. And what good is a secret that is no longer a secret? Exactly: it is no longer secret and therefore cannot be considered one. Considering it safe because you trust someone – regardless of what you think they will do to keep it safe and regardless of how much you think you know them – is a dangerous thing (case in point: the phenomenon called, as I seem to remember, revenge porn). In the end, always be careful with what you reveal. No one is immune to these risks (if you are careless, someone will be quite pleased to abuse it) and I consider myself just as vulnerable, exactly because I AM vulnerable!

On the whole, here is a summary of secrets, trust and security: the secret to staying safe and as secure as possible is to not give out trust for things that need not be shared with anyone in the first place. If you think you must share something, think twice, really hard, and consider it again: you might not need to, no matter how much the person (or entity) claims it will benefit you. Do you really, honestly, need to turn your thermostat on from your computer or phone? No, you do not, and some thermostats have recently been shown to have security flaws. It isn’t that important. What might seem to be convenient might actually be the opposite in the end.

The bottom line is this: if someone insists you need something from them or their company, they do not have your best interest at heart! Who is anyone else to judge whether you need their service or product?

A classic example, and a funny story where the con-artist was exposed. If you go to a specialist to have an antique valued, them telling you it is worth X is one thing; it is another thing entirely to tell you it is worth X and then offer to buy it from you. The story: years ago, my mother caught a smog-check service in their fraud (and they were consequently shut down for it, as they should be) because, despite being female – and therefore what the con-artist thought would be easy prey; nice try, loser – she is incredibly smart and he was a complete moron. He was so moronic that, despite my mother being there listening to the previous transaction between the customer (“victim”) and himself, he told my mother the same story: you have a certain problem and I’ll charge X to fix it. The moron didn’t change the story at all – he used it word for word, same problem, same price, right in front of my mother. In short: those telling you the value of something and then telling you they’re willing to buy/fix/whatever are liars. Some are better liars, but they’re still liars.

It is even worse when they are (for example) calling you – i.e., you didn’t go to them! Unsolicited tech support calls, anyone? Yes, this happened not long ago. I really pissed off this person by turning the tables on him. While what I did is commendable (as he claimed, I was wasting his time, which means he lost time he could have spent cheating someone else), do note that some would have instead fallen victim. The reason he kept up until I decided to play along (and make a fool of him, as you’ll see if you read) is exactly because they are trained: trying to manipulate, trying to keep me on the line as long as possible (which means more time to try to convince me I need their service), and they only wanted to cheat me out of money (or worse: cause a problem with my computer that they were claiming to fix). Even though I got the better of them (as I always have), to the point of him claiming I was wasting HIS time, they will just continue on and try the next person until they find a victim. It is just like spam: as long as it pays, they will keep it up. People do respond (directly and indirectly) to spam, and it will not end because of this, as annoying as it may be. Again, if some entity is telling you that you need their service or product, it is not in your best interest but theirs! That is undeniable: even if you initially went to them, if they are insisting you need their product or service, they are only there to gain and not to help. This is very different from going to a doctor and being told something serious (although certainly there are quacks out there, there is a difference and it isn’t too difficult to discern). Always be thinking!

2014 ID10T World Champion


2014/11/02:
There are two things I want to point out. The first is that my mistake is not as bad as it initially seems, because prior to systemd this would not have been a problem at all. Second, I want to remark on why I admit to these types of things:

First, and perhaps the most frustrating for me (but what is done is done; I cannot change it, only accept it and move on), is that previously – before /bin, /sbin, /lib and /lib64 were made symbolic links to /usr/bin, /usr/sbin, /usr/lib and /usr/lib64 – I would have been fine. Indeed, I can see that is where my mind was, besides the other part I discussed (about how files can be deleted yet still used as long as a reference is available; it is only once all references to the file are closed that the file is no longer usable). Where were mount and umount before this? And did mount use /usr/lib64, or was it /lib64? The annoying thing: mount was under /bin and used /lib64, which means that it used to be – but on a systemd-era system is not – on the root volume. So unmounting /usr would have meant /usr was gone, but /bin would still be there, so I would still have had access to /bin/mount. Alas, that is one of the things I didn’t like about some changes over the years, and it hit me hard. Eventually I will laugh at it entirely, but for now I can only laugh in some ways (it IS funny, but I’m more annoyed at myself currently). As I get to in my second point, I’m not renaming this post (dignity remains strong) even though it is not as bad as I made it sound initially. While I would argue it was a rather stupid mistake, I don’t know if champion is still correct. Maybe ‘last place in the final round’ is more correct. Maybe not even that. Regardless, the title (for once the pun is not intended) is remaining the same.

Second, some might wonder why I admit to such a thing as below (as well as other things, like when I messed up Apache logs… or other things I’m sure I have written about before… and will in the future…) when xexyl.net is more about computers in general, primarily focusing on programming, Linux (typically Red Hat based distributions) and security. The reason I include things like the below is that I know my greatest strength is that I’m willing to accept the mistakes I make; I don’t ever place the blame on someone or something else if I am responsible. Equally, I address my mistakes in the best way possible. Now ask yourself this: if I don’t accept my mistakes, can I possibly take care of the problem? If I did not make a mistake – which is what being in denial really amounts to – then there isn’t a problem at all. So how can I fix a problem that isn’t a problem? No one is perfect, and my typical joke aside (I consider myself, much of the time, to be no one, and “no one is perfect”), it is my thinking that if I can publicly admit to mistakes, then it shows just how serious I am when I suggest to others (for example, here) that the only mistake is not accepting your own mistakes. So to that end: I made a mistake. Life goes on…


There are various web pages out there about computer user errors. A fun one that I’m aware of is a top 10 worst mistakes at the command line. While I certainly cannot lay claim to some of the obvious ones, I am by no means perfect. Indeed, I have made many mistakes over the years and I wouldn’t have it any other way: the only mistake would be to not accept the mistake(s) and therefore not learn from them (although the mistake I’ll reveal here is hard to learn from in some ways, as I explain: fatigue is very hard to judge, and by extension, being tired means you don’t even know how tired you are). Since I often call myself a no-one or nobody (exactly what Nemo, as in Captain Nemo of 20,000 Leagues Under the Sea, means in Latin), I take a great deal of amusement from the idea of “no one is perfect”, exactly because of what I consider myself. But humour aside, I am not perfect at all. While I have remarked on this before, I think the gem of them all is this:

There is no such thing as human weakness, there is only
strength and… those blinded by… the fallacy of perfection.
— Xexyl

If you can accept that truth then you can always learn, always expand yourself, always improve yourself and potentially those around you. This is hard for some to accept but those who do accept it know exactly what I mean. I assure everyone, you are not perfect!

So with that out of the way, let me get to the point of this post. I admit that mistakes of the past fail to come to mind, although I know I’ve made many, some more idiotic than others. However, around 6:00 today I made what is absolutely my worst mistake ever, one that gives me the honour and privilege of holding the title: 2014 ID10T World Champion.

What is it? Prepare yourselves, and challenge yourself as well. A while back I renamed the LVM volume group on my server. Something occurred to me, however: obviously, some file systems cannot be unmounted (and so cannot be remounted under the new volume group name) while the system is running. That doesn’t mean that files at the current mount point cannot be accessed. What it does mean is that if I update the kernel, the bootloader will have a reference to the old volume group, so I will have to update the entry the next time I reboot. I did keep this in mind, and I almost went this route, until this morning when I got the wise (which is to say really, really stupid) idea of running:

# init S

in order to get to single user mode, thereby making most filesystems easier to unmount. Of course, I had already fixed /home, /opt and a few others that don’t have to stay open. I was not thinking in full here, however, and it went from this to much worse. After logging in as root (again, obviously) to “fix” things, I went to tackle /usr, which is where all hell broke loose…

It used to be that /bin and /sbin would be on a different file system (or, if nothing else, not the same one) than /usr/bin and /usr/sbin. However, on more modern systems, you have the following:

$ ls -l /{,s}bin
lrwxrwxrwx. 1 root root 7 Dec 18  2013 /bin -> usr/bin
lrwxrwxrwx. 1 root root 8 Dec 18  2013 /sbin -> usr/sbin

which means that anything that used to be under /bin is now under /usr/bin. In addition, there used to be /lib and (for 64-bit builds) /lib64. Similar to the above, you now have:

$ ls -l /lib{,64}
lrwxrwxrwx. 1 root root 7 Dec 18  2013 /lib -> usr/lib
lrwxrwxrwx. 1 root root 9 Dec 18  2013 /lib64 -> usr/lib64

which means you absolutely need /usr to be mounted! Even if I had busybox (or similar) installed for statically linked commands – and a recent upgrade to the latest release on the server, combined with me not reinstalling busybox, meant I did not – I would have been screwed by the simple fact that once /usr is unmounted, I have no way to run mount again! Most disturbing is that I knew what I was about to do was risky – risky because I was going to use an option that had potential for harm – yet without considering the failure mode I just described. However, as soon as I typed the command, but before I confirmed it, I knew I would be forced to do a hard reboot. The command was this:

# /usr/bin/umount -l /usr

Indeed, I had just made it impossible to mount, change run level, or do much of anything other than reboot (and not by command! That was already made impossible by my idiocy!). And so I did. Of course, I still had to update the boot entry. While that was the least of my worries (no problem at all), it is ironic indeed, because I would have had to do that regardless of when I rebooted next. So all things considered, for the time being, I am, I fear, the 2014 holder of the ID 10 T award. Indeed, I’m calling myself an idiot. I would argue that idiot is putting it way too nicely.
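
If you want to see for yourself why this is unrecoverable, you don’t have to unmount anything: just look at where mount lives and what it is linked against (output trimmed; addresses and exact paths will differ per system):

$ command -v mount
/usr/bin/mount
$ ldd /usr/bin/mount | grep libmount
	libmount.so.1 => /lib64/libmount.so.1 (0x...)

Both the binary and its library resolve to /usr (remember: /lib64 is itself a symlink to /usr/lib64), so once /usr is detached there is nothing left to run.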

As for the -l option: given the description in umount(1), the hour it was, and the sleep I did (not) get last night, I was thinking along the lines of (and this is why I didn’t think beyond it, stupid as that is!) the rule that as long as you have a reference to a file, even if it is deleted, you can still use it and even have the chance to restore it (or execute it or… keep it running). Once all file references are gone, if it is deleted, then it is gone. So when I read:

-l, --lazy
Lazy unmount. Detach the filesystem from the filesystem hierarchy now, and cleanup all references to the filesystem as soon as it is not busy anymore. (Requires kernel 2.4.11 or later.)

I only thought of the latter part and not the “detach now” portion. In addition, I wasn’t thinking of the commands themselves: clearly, if the programs are under /usr, then I need /usr mounted to… run mount! This is a perfect example, I might add, of how dangerous being tired is: you might think you have the clarity to work on something, but if you don’t have that clarity, then you don’t have the clarity to determine whether you can judge any of it in the first place. This implies I likely won’t get much done today, but at least I did do one thing: I fixed the logical volume rename issue. That is something, even if it obliterated my (good) system uptime while at the same time revealing how bad MY uptime was (I should not have been at the server, let alone up at all!).

Using ‘script’ and ‘tail’ to watch a shell session in real-time

This is an old trick that my longest-standing friend Mark and I used years ago on one of his UltraSPARC stations while having fun doing any number of things. It can be used for all sorts of needs (e.g., showing someone how to do something, or allowing someone to help debug your problem, to name two of many others), but the main idea is that one person is running tasks (for the purpose of this article I will pretend this person is the victim) and more generally using the shell, while the other person (pretending that this person is the snoop) is watching everything, even if they’re across the world. It works as long as both are on the same system and the victim writes output to a file that the snoop can read (as in, open for reading).

Before I get to how to do this, I want to point something else out. If you look at the man page for script, you will see the following block of text under OPTIONS:

-f, --flush
Flush output after each write. This is nice for telecooperation: one person does `mkfifo foo; script -f foo', and another can supervise real-time what is being done using `cat foo'.

But there are two problems with this method, both due to the fact that the watching party (as I put it for amusement, the snoop) has control. For example, if I do indeed type at the shell:

$ mkfifo /tmp/$(whoami).log ; script --flush -f /tmp/$(whoami).log

… then my session will block, waiting for the snoop to type at their prompt:

$ cat /tmp/luser.log

(assuming my login name is indeed luser). And until that happens, even if I type a command, no output occurs on my end (the command is not ignored, however). Once the other person does type that, I will see the output of script (showing that the output is being written to /tmp/luser.log) as well as any output from commands I might have typed. The other user will see the output too, including which file is being written to. Secondly, the snoop decides when to stop. When they hit ctrl-c, then once I begin to type, I will see at my end something like this:

$ lScript done, file is /tmp/luser.log
$

Note that I hit the letter l, as if I were going to type ls (for example), and then I see the script done output. If I finish the command, let’s say by typing s and then hitting enter, then instead of seeing the output of ls I will see the following (since typing ls hardly takes any time, I show it as it would appear on my screen, with the command completed, or so one would suspect):

$ lScript done, file is /tmp/luser.log
$ s
-bash: s: command not found

Yes, that means the first character closes my end (the lScript is not a typo; that is what appears on my screen), shows me the typical message after script is done, and only then do I get to enter a command proper.

So the question is: is there a way I can control the starting of the file, and more than that, could the snoop check on the file later (not watching from the beginning), or stop in the middle and then start watching again? Absolutely. Here’s how:

  • Instead of making a fifo (first in, first out, i.e., a queue), I specify a regular file to write the script output to (a normal file, with a caveat as below), or alternatively let the default file name be used instead. So what I type is:
    $ script --flush -f /tmp/$(whoami).log
    Script started, file is /tmp/luser.log
    $
  • After that is done, I inform the snoop (somewhere else; or otherwise they use the --retry option of tail to repeatedly try until interrupted or until the file can be followed). Now THAT is something you don’t expect to ever be true, is it? Why would I inform a snoop of anything at all?! This is of course WHY I chose the analogy in the first place. They then type:
    $ tail -f /tmp/luser.log

    And they will see – by default – the last ten lines of the session (the session meaning the script log, so not the last ten lines of my screen!). They could of course specify how many lines, but the point is they will now be following (that’s what -f does) the output of the file, which means whenever I type a command they will see it, as well as any output. This will happen until they hit ctrl-c or I type ‘exit’ (and if I do that, they will still try to follow the file, so they will need to hit ctrl-c too). Note that even if I remove the log file while they’re watching it, they will still see the output until I exit the script session. This is because they hold a file descriptor for the log file: the name is gone, but the inode is still being written to and still being followed (this is how inodes work; a quick demonstration follows this list).
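
To see the inode behaviour for yourself, no script session is needed – two shells and a throwaway file will do (a minimal sketch; the file name is arbitrary). In the victim’s shell, start a writer that holds the file open:

$ ( while sleep 1; do date; done ) > /tmp/inode-demo &

In the snoop’s shell, follow it:

$ tail -f /tmp/inode-demo

Now have the victim run rm /tmp/inode-demo: the name disappears from /tmp, but both the writing shell and tail still hold descriptors for the inode, so the dates keep flowing until the writer exits.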

As for the caveat I referred to, it is simply this: control characters are also written to the file, so it isn’t plain readable text. Furthermore, for the same reason, sessions using full-screen programs (text editors such as vi, for example) will not display correctly for the snoop.

In the end, this is probably not often used, but it is very useful when it is indeed needed. Lastly, if you were to cat the output file afterwards, you’d see the session replayed as it appeared on screen. Most importantly: do not ever do anything that would reveal confidential information, and if you do have anything you don’t want shown to the world, do not use /tmp or any publicly readable file (and rm it when done, too!). Yes, you can have someone read a file in your directory as long as they know the full path and have the proper permissions on the directory and file.
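
On that last point, here is a minimal sketch of one way to let a single user – and not the world – read the log (the user name snoop and the paths are hypothetical; setfacl requires a filesystem with ACL support, which is the default on modern Linux):

$ mkdir -m 0711 ~/watch                     # others may traverse, but not list
$ touch ~/watch/session.log
$ chmod 0640 ~/watch/session.log            # no world access at all
$ setfacl -m u:snoop:r ~/watch/session.log  # grant read to that one user only
$ script --flush -f ~/watch/session.log

(The snoop also needs execute permission on your home directory to reach ~/watch; if your home is 0700, grant that the same way, e.g. setfacl -m u:snoop:x ~.)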

Encryption IS Critical

I admit that I’m not big on mobile phones (and I also admit this starts out on phones but it is a general thing and the rules apply to all types of nodes). I’ve pointed this out before, especially with regards to so-called smart technology. However, just because I personally don’t have much use for it, most of the time, does not mean that the devices should not be as secure as possible. Thus, I am firstly, giving credit to Apple (which all things considered is exceptionally rare) and Google (which is also very rare). I don’t like Apple particularly because of Steve Jobs’ arrogance (which I’ve also written about) but that is only part of it. At the same time, I do have fond memories of the early Apple computers. As for Google, I have serious issues with them but I haven’t actually put words to it here (or anywhere actually). But just because I don’t like them does not mean they can never do something right or that I approve of. To suggest that would be me being exactly as I call them out for. Well, since Apple and Google recently suggested they were to enable encryption by default for iOS and Android, I want to commend them for it: encryption is important.

There is the suggestion, most often by authorities (but not always, as – and this is something I was planning on writing about and might still – Kaspersky showed not too long ago when they suggested similar things), that encryption (and more generally, privacy) is a problem and a risk to one’s safety (and others’ safety). The problem here is that they are lying to themselves or they are simply ignorant (ignore the obvious, please; I know it, but it is beside the point for this discussion). They are also risking the people they claim to want to protect, and they risk themselves. Indeed, how many times has government data been stolen? More times than I would like to believe, and I don’t even try to keep track of it (statistics can be interesting, but I don’t find the subject of government – or indeed other entities’ – failures all that interesting. Well, not usually). The problem really comes down to this, doesn’t it? If someone has access to your or another person’s private details, and it is not protected (or is poorly protected), then what can be done to you or that other person if someone ELSE gets that information? Identity theft? Yes. An easier time gathering other information about you, who you work for, your friends, family, your friends’ families, etc.? Yes. One of the first things an attacker will do is gather information, because it is that useful in attacks, isn’t it? And yet those are only two issues of many more, and both are serious.

On the subject of encryption and the suggestion that “if you have nothing to hide you have nothing to fear”, there is a simple way to obliterate it. All one needs to do is ask a certain (or similar) question, with an explanation following, directed at the very naive and foolish person (the Facebook founder has suggested similar, as an example). The question is along the lines of: is that why you keep your bank account, credit cards, keys, passwords, etc., away from others? You suggest that you shouldn’t need to keep something private unless you did something wrong (and so the only time you need to fear is when you are in fact doing something wrong). Yet here you are hiding your private information – something you wouldn’t want others knowing, in other words, and by your logic it follows that you did something wrong. The truth is that if you have that mentality, you are either lying to yourself (and ironically hiding something from yourself, therefore not exactly following your own suggestion) or you have hidden intent or reasons to want others’ information (which, ironically enough, is also hiding something: your intent). And all the while, you know full well that YOU do want your information private (and YOU should want it private!).

But while I’m not surprised here, I still find it hard to fathom how certain people, corporations and other entities still think strong encryption is a bad thing. Never mind the fact that in many high-profile criminal cases, data confiscated by police has been encrypted and yet still revealed. Never mind the above. It is about control and power, and we all know that the only people worthy of power are those who do not seek it but are somehow bestowed with it. So what am I getting at? It seems that, according to the BBC, the FBI boss is concerned about Apple’s and Google’s plans. Now, I’m not going to be critical of this person, the FBI in general or anything of the sort; I made it clear in the past that I won’t get into the cesspool that is politics. What I will do is remark on something this person said – not on it by itself, but by referring to something most amusing. What he said is this:

“What concerns me about this is companies marketing something expressly to allow people to place themselves beyond the law,” he said.

“I am a huge believer in the rule of law, but I am also a believer that no-one in this country is beyond the law,” he added.

And yet, if you look at the man page of expect, which automates interactive programs in ways a Unix shell cannot do by itself, you’ll note the following capability:

  • Cause your computer to dial you back, so that you can login without paying for the call.

That is, as far as I am aware, a type of toll fraud. Why am I even bringing this up, though? What does this have to do with the topic? Well, if you look further at the man page, you’ll see the following:

ACKNOWLEDGMENTS

Thanks to John Ousterhout for Tcl, and Scott Paisley for inspiration. Thanks to Rob Savoye for Expect’s autoconfiguration code.

The HISTORY file documents much of the evolution of expect. It makes interesting reading and might give you further insight to this software. Thanks to the people mentioned in it who sent me bug fixes and gave other assistance.

Design and implementation of Expect was paid for in part by the U.S. government and is therefore in the public domain. However the author and NIST would like credit if this program and documentation or portions of them are used.

29 December 1994

I’m not at all suggesting that the FBI paid for this, and I’m not suggesting anyone in the government paid for that specific feature (it is, after all, from 1994), nor that they approve of it. But I AM pointing out the irony. This is what I meant earlier – it all comes down to WHO is saying WHAT and WHY they are saying it, and it isn’t always what it appears or is claimed to be. Make no mistake, people: encryption IS important, just like PCI compliance and auditing (regular corporate audits of different types, auditing of medical professionals, auditing in everything), and anyone suggesting otherwise is ignoring some very critical truths. So consider this a reminder, if you will, of why encryption is a good thing. Like it or not, many humans have no problem with theft, no problem with manipulation, no problem with destroying animals or their habitat (Amazon forest, anyone?). It is by no means a good thing but it is still reality, and not thinking about it is a grave mistake (including, indeed, literally – and yes, I admit that is pointing out a pun). We cannot control others in everything, but that doesn’t mean we aren’t responsible for our own actions, and ignoring something that risks yourself (never mind others) places the blame on you, not someone else.

shell aliases: the good, the bad and the ugly


2014/11/11:
I erroneously claimed that the -f option is required with the rm command to remove non-empty directories. This is only a partial truth. You need -r, which is for recursion: rm will descend into every directory it encounters until there are no more directories found (and indeed file system loops can occur, which programs do consider). But -f isn’t for non-empty directories as such; it is write-permission related. Specifically, in relation to recursion (-r): if you specify -r alone, you’ll still be prompted whether to descend into a directory if it is write-protected (or a write-protected file is in it). If you also specify -f, you will not be prompted. Of course, there are other reasons you might not be able to remove the directory or any files in it, but that is another issue entirely. Furthermore, root need not be concerned with write permission, at least in the sense that root can override it.
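A quick illustration (GNU rm; the exact prompt wording may differ between versions, and the file names are of course made up):

$ mkdir dir && touch dir/file && chmod a-w dir/file
$ rm -r dir
rm: remove write-protected regular empty file ‘dir/file’? n
$ rm -rf dir
$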


2014/10/07:
Please observe the irony (which actually further proves my point, and that itself is ironic as well) that I suggest using the absolute path name and then I do not (with sed). This is what I mean by being guilty of the same mistakes. It is something I have done over the years: work on getting into the habit (of using absolute paths), then it slides, and then it happens all over again. This is why it is so important to get it right the first time (and this rule applies to security in general!). To make it worse, I knew it before I ever had root access to any machine, years back. But this is also what I discussed with convenience getting in the way of security (and aliases only add to the convenience/security conflict, especially with how certain aliases enable coloured output or some other feature). Be aware of what you are doing, always, and beware of not taking this all to heart. (And you can bet I’ve made a mental note to do this. Again.) Note that this rule won’t apply to shell built-ins unless you use the program too (some – e.g., echo – exist as both). The command ‘type’ is a built-in only, not a program; you can check by using the command on itself (‘type -P type’ will show nothing, because there is no file on disk for type). Note also that I’ve not updated the commands where I show how aliases work (or commands that might be aliased). I’ve also not updated ls (truthfully it is probably less of an issue, unless you are root, of course), but do note how to determine all the ways a command can be invoked:

$ type -a ls
ls is aliased to `ls --color=auto'
ls is /usr/bin/ls
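And to see the built-in point above (in bash; -t reports what kind of command the shell considers a name to be):

$ type -P type
$ type -t type
builtin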

This could in theory be only about Unix and its derivatives, but I feel there are similar risks in other environments. For instance, in DOS, program extensions had a search priority: if you typed ‘DOOM2’ rather than ‘DOOM2.EXE’, it would check for ‘DOOM2.COM’, then ‘DOOM2.EXE’, then ‘DOOM2.BAT’. With no privilege separation you had the ability to rename files, so if you wanted to write a wrapper for DOOM2 you could do it easily enough (I use DOOM2 in the example because not only was it one of my favourite graphical computer games – one I beat repeatedly, I enjoyed it so much, much more than the original DOOM – I also happened to write a wrapper for DOOM2 itself, back then). Similarly, Windows hides known file extensions (by default, last I knew anyway), so if a file is called ‘doom.txt.exe’ then double clicking on it would actually execute the executable instead of opening a text file (the user would only see the name ‘doom.txt’). This is a serious flaw in multiple ways. Unix has its own issues with paths (but at least you can redefine them and there IS privilege separation). It isn’t without its faults, though. Indeed, Unix wasn’t designed with security in mind, and that is why so many changes have been implemented over the years (the same goes for the Internet’s main protocols – e.g., IP, TCP, UDP, ICMP – as well as protocols at, say, the application layer, all in their own ways). This is why things are so easy to exploit. This time I will discuss the issue of shell aliases.

The general idea of finding the program (or script or…) to execute is also a matter of priority. This is why, when you are root (or using a privileged command), you should always use the fully-qualified name (primarily known as the absolute file name). It is arguably better to always do this because: what if someone modified your PATH, added a file to your bin directory, updated your aliases…? Then you risk running what you don’t intend to. There is a way to determine all the ways a command could be invoked, but you should not rely on that, either. So: the good, then the bad, and then the ugly of the way this works (remember, security and convenience often conflict with each other, which is quite unfortunate but something that cannot be forgotten!). When I refer to aliases, understand that aliases are even worse than the others (PATH and $HOME/bin/) in some ways, which I will get to at the ugly.


THE GOOD


There is one case where aliases are fine (or at least not as bad as the others; the others are when you use options). It isn’t without flaws, however. Either way: let’s say you’re like me and you’re a member of the Cult of VI (as opposed to the Church of Emacs). You have vi installed but you also like vim’s features (and so have it installed too). You might want vi in some cases but vim in others (for instance, root uses vi and other users use vim; contrived example or not is up to your own interpretation). If you place the following line in $HOME/.bashrc, then you can override what happens when you type the command in question:

$ /usr/bin/grep alias $HOME/.bashrc
alias vi='vim'

Then typing ‘vi’ at the shell will open vim. Equally, if you type ‘vi -O file1 file2’ it will be run as ‘vim -O file1 file2’. This is useful, but even then it has its risks. It is up to the user to decide, however (and after all, if a user account is compromised you should assume the system is compromised, because if it hasn’t been already it likely will be – so what’s the harm? Well, I would disagree that there is no harm – indeed there is – but…)


THE BAD AND THE UGLY


Indeed, this is both bad and ugly. First, the bad part: confusion. Some utilities have conflicting options. So if you alias a command to use your favourite options, what happens one day when you want to use another option (or see if you like it) and you are used to typing the basename (not the absolute name)? You get an error about conflicting options (or you get results you don’t expect). Is it a bug in the program itself? Well, check aliases as well as wherever else the problem might occur. In bash (for example) you can use:

$ type -P diff
/usr/bin/diff

However, is that necessarily what is executed? Let’s take a further look:

$ type -a diff
diff is aliased to `diff -N -p -u'
diff is /usr/bin/diff

So no, it isn’t necessarily the case. What happens if I use -y, which is a conflicting output type? Let’s see:

$ diff -y
diff: conflicting output style options
diff: Try 'diff --help' for more information.

Note that I didn’t even give it anything to compare! It detected conflicting output styles and that was it. Yet it appears I did not actually specify conflicting output style options – clearly I only specified one – so this means the alias was indeed used: the options I typed were added to the aliased ones rather than replacing them (certain programs take the last option given as the one that rules, but not all do, and diff does not here). If, however, I were to do:

$ /usr/bin/diff -y
/usr/bin/diff: missing operand after '-y'
/usr/bin/diff: Try '/usr/bin/diff --help' for more information.

There we go: the error as expected. That’s how you get around it. But let’s move on to the ugly, because “getting around it” only works if you remember – and more to the point, you should not ever rely on aliases! Especially do not rely on them for certain commands. This cannot be overstated! The ugly is this:

It is unfortunate, but Red Hat based distributions have this by default, and not only is it baby-sitting (which is both risky and obnoxious much of the time… something about the two being related), it has an inherent risk. Let’s take a look at the default alias for root’s ‘rm’:

# type -a rm
rm is aliased to `rm -i'
rm is /usr/bin/rm

-i means interactive. rm is of course remove. Okay, so what is the big deal? Surely this is helpful, because as root you can wipe out the entire file system? Fine, but you can argue the same about chown and chmod (always be careful when recursing with these – well, in general even – utilities… but these specifically are dangerous; they can break the system with ease). I’ll get to those in a bit. The risk is quite simple. You rely on the alias, which means you never think about the risks involved; indeed, you just type ‘n’ if you don’t want to delete the files encountered (and you can answer yes to everything by piping the output of ‘yes’ into it, among other ways, if you wanted to avoid the nuisance one time). The risk then is: what if by chance you become an administrator (a new administrator) on another (different) system, and it does not have the -i alias? You then go and do something like this (one hopes you aren’t root, but I’m going to show it as if I were root – in fact I’m not running this command – because it is serious):

# /usr/bin/pwd
/etc
# rm *
#

The pwd command was there to show you a possibility. Sure, there are directories in /etc that won’t be wiped out because there was no recursive option, and even if you are fast with sending an interrupt (usually ctrl-c, but it can be shown and also set with the stty command; see stty --help for more info), you are going to have lost files. The above would actually have shown that some files were directories, after the rm * but before the last #, but all the regular files directly in /etc would be gone. And this is indeed an example of “the problem is that which is between the keyboard and chair”, or “PEBKAC” (“problem exists between keyboard and chair”), or even “PICNIC” (“problem in chair not in computer”), among others. Why is that? Because you relied on something one way and therefore never got into the habit of being careful (either always specifying -i or using the command in a safe manner, like always making sure you know exactly what you are typing). As for chown and chmod? Well, if you look at the man pages, you see the following options (for both):

--no-preserve-root
 do not treat '/' specially (the default)
--preserve-root
 fail to operate recursively on '/'

Now if you look at the man page for rm, and see these options, you’ll note a different default:

--no-preserve-root
 do not treat '/' specially
--preserve-root
 do not remove '/' (default)

The problem? You might get used to the supposed helpful behaviour with rm which would show you:

rm: it is dangerous to operate recursively on ‘/’
rm: use --no-preserve-root to override this failsafe

So you are protected from your carelessness (you shouldn’t be careless… yes, it happens, and I’m guilty of it too, but this is one of the things backups were invented for, as well as only being as privileged as is necessary and only for the task in hand). But that protection is a mistake itself. This is especially true when you then look at chown and chmod, both of which are ALSO dangerous when run recursively on / (actually on many directories; an example not to do it on is /etc, as that will break a lot of things, too). And don’t even get me started on the mistake of: chown -R luser:luser .* – because the glob .* matches .. as well. Even if you are in /home/luser/lusers, then as long as you are root (it is a risk for users to change owners, so only root can do that), you will be changing the parent directory /home/luser and everything under it to be owned by luser as the user and luser as the group; run it one directory higher and you’ve changed every home directory, and the principle scales all the way up. Hope you had backups. You’ll definitely need them. Oh, and yes, any recursive action on .* is a risky thing indeed. To see this in action in a safe manner, as some user, in their home directory or even a sub-directory of their home directory, try the following:

$ /usr/bin/ls -alR .*

… and you’ll notice it listing the parent directory – from your home directory that means /home – and everything below it! The reason is the way path globbing works: .* matches the special entries . and .. (try man -s 7 glob; I’d suggest you read the whole thing, but the part in question is under Pathnames).
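To see what the glob actually expands to, without touching anything (the dotfile names will of course vary):

$ cd && echo .*
. .. .bash_history .bash_profile .bashrc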

So yes, if you rely on aliases – which is relying on not thinking (a problem in itself, in so many ways) – then you’re setting yourself up for a disaster. Whether that disaster in fact happens is not guaranteed, but one should be prepared and not set themselves up for it in the first place. And unfortunately some distributions set you up for this by default. I’m somewhat of the mind to alias rm to ‘rm --no-preserve-root’, but I think most would consider me crazy (they’re probably more correct than they think). As for the aliases in /root/.bashrc, here’s how you remove them (or, if you prefer, comment them out). Just like everything else there are many ways; this is at the command prompt:

# /usr/bin/sed -i 's,alias \(rm\|cp\|mv\),#alias \1,g' /root/.bashrc

Oh, by the way, yes, cp and mv (hence the command above commenting all three out) are also aliased in root’s .bashrc to use interactive mode, and yes, the risks are the same: you risk overwriting files when you are not on an aliased account. This might even be on the same system you are used to: you have the alias as root but not on all your accounts, so when you log out back to your normal, non-privileged user and do some maintenance there, what happens when you then use one of those commands that is not aliased to -i? Indeed, aliases can be slightly good, bad and very ugly. Note that even if you were to source the file again (by ‘source /root/.bashrc’ or equally ‘. /root/.bashrc’) the aliases would still exist, because commenting them out does not unalias them (you could of course run unalias too, but better is to log out, and the next time you are logged in you won’t have that curse upon you).
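For completeness, a sketch of clearing them in the current shell without logging out (assuming all three are still defined):

# unalias rm cp mv
# type -a rm
rm is /usr/bin/rm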

One more thing that others should be aware of, as it further proves my point about forgetting aliases (whether you have them or not). The reason I wrote this is twofold:

  • First, I’ve long put off writing about the alias issue with rm (and similar commands), but it is something I’ve thought about for a long time and it is indeed a serious trap.
  • Second, and this is where I really make the point: the reason this came up is that one of my accounts on my server had the alias for diff shown above. I don’t even remember setting it up! In fact, I don’t even know what I might have used diff for with that account! That right there proves my point entirely (and yes, I removed it). Be aware of aliases and always be careful, especially as a privileged user…

 

The Hidden Dangers of Migrating Configuration Files

One of the things I have suggested (in person, here, elsewhere) time and again is that the user is, more often than not, the real problem. It is the truth, it really is. I also tell others, and more often write about, how there should be no shame in admitting to mistakes. The only mistake you can make is not admitting to your mistakes, because if you don’t admit to them then you cannot learn from them; indeed, hiding behind a mask is not going to make the problem go away but will actually make it worse (while appearing to not be a problem at the same time). So let me make something very clear, and this too is something I’ve written about (and mentioned to people otherwise) before: administrators are users too. Any administrator not admitting to blunders is either lying (and a poor liar at that, I might add) or their idea of administration is logging into a computer remotely, running a few basic commands, and logging out. Anyone who uses a computer in any way is a user; it is as simple as that. So what does this have to do with migrating configuration files? It is something I just noticed in full, and it is a huge blunder on my part. It is actually really stupid, but it is something to learn from, like everything else in life.

At somewhere around 5 PM / 17:00 PST on June 16, my server had been up for 2 years, 330 days, 23 hours, 29 minutes and 17 seconds. I know this because of the uptime daemon I wrote some time ago. However, around that time there was also a problem with the server. I did not know it until the next morning at about 4:00, because I had gone to bed for the night. The problem: the keyboard would not waken the monitor (once turned on) and I could not ssh in to the server from this computer; indeed, the network appeared down. In fact, it was down. However, the LEDs on the motherboard (visible thanks to a side window in the case) were lit, the fan lights were lit and the fans were indeed moving. The only thing is, the system itself was unresponsive. The suspect is something I cannot prove one way or another, but it is this: an out of memory issue, the thinking being that the Linux OOM killer killed a critical process (and/or was not able to resolve the issue in time). I backed up every log file at that time, in case I ever wanted to look into it further (probably not enough information, but there was enough to tell me at about what time it stopped). There had been a recent update to glibc (which, well, nearly everything in userspace links against), but Linux distributions are really good about handling that, so it really is anyone’s guess. All I know is when the logs stopped updating. The end result is I had to do a hard reboot. And since CentOS 7 came out a month or two later, I figured why not? True, I don’t like systemd, but there are other things I do like about CentOS 7, and the programmer in me really liked the idea of GCC 4.8.x and C11/C++11 support. Besides, I manage on Fedora Core (a remote server and the computer I write from), so I can manage on CentOS 7. Well, here’s the problem: I originally had trouble (the day was bad and I naively went against my intuition, which was telling me repeatedly, “this is a big mistake” – it was). Then I got it working the next day, when I was more clear-headed. However, just as the CentOS 5 to CentOS 6 upgrade had major new releases of certain services (in that case it was Dovecot), the same happened here, only this time it was Apache. And there were quite some configuration changes indeed, as it was a major release (from 2.2 to 2.4). I made a critical mistake, however:

I migrated old configuration files for Apache. Here is what happened, why I finally noticed it, and why I did not notice it before. Migrating old files is indeed dangerous if you are not very careful (keep in mind that major changes mean that unless you have other systems with the same layout, you will not be 100% aware of all – keyword – changes). Even if you are careful, and even if things appear fine (no error, no warning, everything seems to work), there is always the danger that something that changed is in fact a problem. And that is exactly what happened. Let me explain.

In Apache 2.2.x you had the main config file /etc/httpd/conf/httpd.conf and you also had the directory /etc/httpd/conf.d (with extra configuration files, like the ones for mod_ssl, mod_security, and so on). In the main config file, however, at the beginning, you had the LoadModule directives, so that everything works fine. And since the configuration file has <IfModule></IfModule> blocks, as long as the module in question is not required, there is no harm; you can consider it optional. In Apache 2.4.x, however, early in /etc/httpd/conf/httpd.conf there is an Include of the directory /etc/httpd/conf.modules.d, which has, among other files, 00-base.conf, and in that file are the LoadModule directives. And here is where the problem arose. I had made a test run of the install, but without thinking of <IfModule></IfModule> and non-required modules, and since the other Include directive is at the end of the file, there surely was no harm in shifting things around, right? Well, looking back it is easy to see where I screwed up and how. But yes, there was harm. And while I noticed this issue, it didn’t exactly register (perhaps something to do with sleep deprivation combined with reading daily logs in the early morning and, more than that, being human, i.e., not perfect by any means, not even close). Furthermore, the error log was fine, and so in logwatch output I did indeed see httpd logs. But something didn’t register until I saw the following:


0.00 MB transferred in 2 responses (1xx 0, 2xx 1, 3xx 1, 4xx 0, 5xx 0)
2 Content pages (0.00 MB)


Certainly that could not be right! I had looked at my website just yesterday, even, and more than once. But then something else occurred to me. I began to think about it, and it had been some time since I saw anything other than the typical scanning for vulnerabilities that every webserver gets. I had not in fact seen much more. The closest would be:


2.79 MB transferred in 228 responses (1xx 0, 2xx 191, 3xx 36, 4xx 1, 5xx 0)
4 Images (0.00 MB),
224 Content pages (2.79 MB),


And yet I knew I had custom scripts for logwatch, made some time back (showing other information I want to see that isn’t in the default logwatch httpd script/config). But I figured that maybe I had forgotten to restore them. The simple solution was to move the module Include directives to before the <IfModule></IfModule> blocks – in other words, much earlier in the file, not at the end.
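To illustrate with a minimal sketch (paths as in the CentOS 7 layout described above; the LogFormat string is shortened for the example – my real one differs), the ordering that works is:

# /etc/httpd/conf/httpd.conf – load modules FIRST…
Include conf.modules.d/*.conf

# …so that this block is actually evaluated:
<IfModule log_config_module>
    LogFormat "%v %h %t \"%r\" %>s %b" vhost
</IfModule>

# …and vhosts using ‘CustomLog … vhost’ can then resolve the format:
IncludeOptional conf.d/*.conf

With the module Include at the end instead, the <IfModule> block is skipped and any CustomLog referring to vhost treats it as a literal format string.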

To be true to my nature and word, I’m going to share what I actually saw in logs. This, I hope, will show exactly how sincere I am when I suggest that people admit to their mistakes and to not worry about so-called weaknesses. If there is any human weakness, it is the inability to understand that perfection isn’t possible. But that is more as I put it before: blinded by a fallacy. If you cannot admit to mistakes then you are hiding from the truth and ironically you are not fooling anyone but yourself.

The log entries looked like this:


vhost


Yes, really. I’m 100% serious. How could I screw up that badly? It is quite simple: it evaluated to that because, at the end of the config file, I include a separate directory that holds the vhosts themselves. But the CustomLog format I use, which I cleverly named vhost (because it shows the vhost as well as some other vhost specifics), was not in fact defined: the log modules were not loaded at the point the format definitions were evaluated. And in the <VirtualHost></VirtualHost> blocks I have CustomLog directives which would normally refer to that format. This means the custom log format was not used. The reason the error logs worked is that I did not make a custom error log format. But since the log modules were loaded after the configuration of the log formats, the access logs had the literal string “vhost” as their format, and that is it. A brilliant example of “the problem is that which is between the keyboard and chair”, as I worded it circa 1999 (and others have put it other ways, longer than mine, for sure). And to continue with sharing such a stupid blunder, I’m going to point out that it had been this way for about 41 days and 3 hours. Yes, I noticed it, but only in a (local) test log file (test virtual host). Either way, it did not register as a problem (it should have but it absolutely did not!). I have no idea why it didn’t, but it didn’t. True, I have had serious sleep issues, but that is irrelevant. The fact is: I made a huge mistake with the migration of configuration files. It was my own fault, I am 100% to blame, and there is nothing else to it. But this is indeed something to consider, because no one is perfect, and when there is a major configuration directory restructure (or any type of significant restructure) there are risks to keep in mind. This is just nature: significant changes do require getting accustomed to things, and all it takes is being distracted, not thinking of one tiny thing, or even not understanding something in particular, for a problem to occur. Thankfully, most of the time problems are noticed quickly and fixed quickly, too. But in this case I really screwed up, and I am only thankful it wasn’t something more serious. Something to learn from, however, and that is exactly what I’ve done.

 

chkconfig: alternatives system

This will be a somewhat quick write-up. Today I wanted to link a library into a program that is in the main Fedora Core repository (but which excludes the library due to policy). In the past I had done this by making my own RPM package with the release number one above the main release or, if you will excuse the pun, alternatively not installing the Fedora Core version at all but only mine. I then had the thought: why not use the alternatives system? After all, if I wanted to change the default I could do that. This RPM isn’t going to be in any of my repositories (I added one for CentOS 7 in the past few months), but realistically it could be. There was one thing that bothered me about the alternatives system, however:

I could never quite remember the proper installation of an alternatives group, because I never actually looked at it with a clear head; although the description is clear once I looked into it more intently, I always confused the link versus the path. Regardless, today I decided to sort it out once and for all. This is how each option works:


alternatives --install <link> <name> <path> <priority> [--initscript <service>] [--slave <link> <name> <path>]*

The link parameter is the generic symlink users actually invoke (e.g. /usr/bin/uptimed); it points to the symlink in /etc/alternatives named by the name parameter, which is the name of the alternatives group itself. The path is the actual target: the real file that the group’s symlink in /etc/alternatives should point at. --initscript is Red Hat specific, and although I primarily work with Red Hat I will not cover it. Priority is a number; the highest number is selected in auto mode (see below). --slave is for groupings: for instance, if the program I was building had a man page, but so does the main one (the one from the Fedora Core repository), what happens when I use man on the program name? With slave links, the group updates them based on the master. For the example I will use a program I wrote where another one exists out there (also in the main Fedora Core repository): an uptime daemon. Let’s say mine is called ‘suptimed’ and the other is ‘uptimed’. So the files ‘/usr/bin/suptimed’ and ‘/usr/bin/uptimed’ exist. Further, the man pages for suptimed and uptimed are ‘/usr/share/man/man1/suptimed.1.gz’ and ‘/usr/share/man/man1/uptimed.1.gz’ (without the quotes). This is just enough files to explain the syntax.

alternatives --install /usr/bin/uptimed uptimed /usr/bin/suptimed 1000 --slave /usr/share/man/man1/uptimed.1.gz uptimed.1.gz /usr/share/man/man1/suptimed.1.gz

While this is a hypothetical example (as in there might be more files to include as slaves [1]), it should explain it well enough. After this, if you were to run uptimed it would run suptimed instead. Furthermore, if you were to type ‘man uptimed’ it would show suptimed’s man page. Under /etc/alternatives you would see symlinks called uptimed and uptimed.1.gz, with the first pointing to /usr/bin/suptimed and the second pointing to /usr/share/man/man1/suptimed.1.gz.
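A quick way to verify the chain (a sketch; this assumes the generic name is managed by alternatives – i.e., /usr/bin/uptimed is the master link rather than a regular binary – with the link details trimmed):

$ ls -l /usr/bin/uptimed /etc/alternatives/uptimed
lrwxrwxrwx. 1 root root … /usr/bin/uptimed -> /etc/alternatives/uptimed
lrwxrwxrwx. 1 root root … /etc/alternatives/uptimed -> /usr/bin/suptimed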

The syntax given above has [--slave <link> <name> <path>]*, and the * after it means you can use it more than once, depending on how many slaves there are. As for the [ and ], that is the typical way of showing options (not required) and their parameters (which may or may not be required for the option). The angle brackets indicate required arguments. This is a general rule, or perhaps even a de facto standard.


alternatives --remove <name> <path>

Exact same parameter meanings. So to remove suptimed from the group (after which, with the master gone, it might not even use alternatives any more – the trick is when there IS an alternative) I would use:

alternatives --remove uptimed /usr/bin/suptimed

alternatives --auto <name>

As explained above. Name has the same meaning.


alternatives --config <name>

Allows you to configure which alternative is selected for the group. It is a TUI (text user interface).

The rest I won’t get in to. The --config option is also not part of the original implementation (from Debian). --display <name> and --list are quite straightforward.

Debugging With GDB: Conditional Breakpoints

Monday I discovered a script error in one of many scripts running in a process. I knew that it was not the script itself but rather a change I had made in the source code (so not the script but the script engine, because I implemented a bug in it) that caused the error. But in this case the script in question is one of many of the same type, each running in sequence. This means that if I attach the debugger and set a breakpoint, I have to check that it is the proper instance and, if not, continue until the next instance, repeating this until I get to the proper one. Even then, I have to make sure that I don’t make a mistake and that ultimately I find the problem (unless I want to repeat the process again, which usually is not preferred).

I never really thought of having to do this because I rarely use the debugger at all, let alone debug something like this. But when I do, it is time-consuming. So I had a brilliant idea: make a simple function that the script could call, and then trap that function. When I reach that breakpoint I then step into the script. Well, I was going to write about this, but on a whim I decided to look at GDB’s help for breakpoints. I saw something interesting, but I did not look into it until today. As it turns out, GDB already has this functionality, only better. This is how it works:

  • While GDB is attached to the process, you set a breakpoint on the function you need to debug. For example, to set a breakpoint at cow: break cow
  • GDB will tell you which breakpoint number it is. It’ll look something like: ‘Breakpoint 1 at 0xdeadbeef: cow’ where ‘0xdeadbeef’ is the address of the function ‘cow’ in the program space and ‘cow’ is the function you set a breakpoint on. Okay, the function cow is probably not there and it almost assuredly does not have the address ‘0xdeadbeef’, although it could happen (and it would be very ironic yet amusing indeed), but this is just to show the output (and show how much fun hexadecimal can be, at least how fun it is to me). Regardless, you have the breakpoint number, and that is critical for the next step, which is – if you will excuse the pun – the beef of the entire process.
  • So one might ask, does GDB have the ability to check the variable passed in to the function for the condition? Yes, it does, and it also has the ability to dereference a pointer (or access a member function or variable on an object) passed in to the function. So if cow has a parameter of Cow *c and c has a function idnum (or a member variable idnum) then you can indeed make use of it in the condition. This brings us to the last step (besides debugging the bug you implemented, that is).
  • Command in gdb: ‘cond 1 c->idnum == 57005’ (without the quotes) will instruct GDB to only stop at function cow (at 0xdeadbeef) when c->idnum is 57005 (or, if you prefer and you specified the condition as c->idnum() == 57005, when c->idnum() returns 57005). Why 57005 for the example? Because 0xdead is 57005 in decimal. So all you have to do now is tell GDB to continue: ‘c’ (also without the quotes). When it stops, you’ll be at function ‘cow’ and c->idnum will be equal to 57005. To contrast, if you had made the condition c->idnum != 57005, then it would break whenever the cow is alive (to further the example above).

That’s all there is to it!
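To put the steps together, here is a minimal sketch of such a session (the function, parameter and member names are the hypothetical ones from above; the address, file name and line numbers are made up):

(gdb) break cow
Breakpoint 1 at 0x401234: file herd.c, line 57.
(gdb) cond 1 c->idnum == 57005
(gdb) c
Continuing.

Breakpoint 1, cow (c=0x603010) at herd.c:57
(gdb) print c->idnum
$1 = 57005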

Open Source and Security

One of the things I often write about is how open source is in fact good for security. Some will argue the opposite to the end. But what they are relying on, at best, is security through obscurity. Just because the source code is not readily available does not mean it is not possible to find flaws or even reverse engineer it. It doesn’t mean it cannot be modified, either. I can find – as could anyone else – countless examples of this. I have personally added a feature to a Windows dll file – a rather important one, namely shell32.dll – in the past. I then went on to convince Windows’ file integrity checking not only to see the modified file as correct, but also such that if I had replaced it with the original, unmodified one, it would have restored my modified version. And how did I add a feature without the source code? My point exactly. So to believe you cannot uncover how it works (or, as some will have you believe, modify and/or add features) is a huge mistake. But whatever. This is about open source and security. Before I can get into that, however, I want to bring up something else I write about at times.

That thing is this: one should always admit to mistakes. You shouldn’t get angry and you shouldn’t take it as anything but a learning opportunity. Indeed, if you use it to better yourself, better whatever you made the mistake in (say you are working on a project at work and you make a mistake that throws the project off in some way), and therefore better everything and everyone involved (or around), then you have gained and not lost. Sure, you might in some cases actually lose something (time and/or money, for example), but all good comes with bad, and the reverse is true too: all bad comes with good. Put another way, I am by no means suggesting open source is perfect.

The only thing that is perfect is imperfection.
— Xexyl

I thought of that the other day. Or, better put, I actually got around to putting it in my fortune file (I keep a file of pending ideas for quotes as well as the fortune file itself). The idea is incredibly simple: the only thing that will consistently happen without failure is ‘failure’, time and time again. In other words, there is no perfection. ‘Failure’ in quotes, because it isn’t a failure if you learn from it; it is instead successfully learning yet another thing, and another opportunity to grow. On the subject of failure or not, I want to add a more recent quote (thought of later in August than when I originally posted this, which was 15 August) that I think really nails this idea:

There is no such thing as human weakness, there is only
strength and… those blinded by… the fallacy of perfection.
— Xexyl

In short, the only weakness is the product of one’s mind. There is no perfection, but if you accept this you will be much further ahead (if you don’t accept it, you will be less able to take advantage of what imperfection offers). All of this together is important, though. I refer to admitting mistakes and how it is only a good thing. I also suggest that open source is by no means perfect, and that therefore being critical of it, as if it were less secure, is flawed. But here’s the thing. I can think of a rather critical open source library, used by a great many servers, that has had a terrible year. One might think, given what it is (which I will get to in a moment), that it is somehow less secure or more problematic. What is this software? Well, let me start by noting the following CVE fixes that were pushed into update repositories yesterday:


– fix CVE-2014-3505 – doublefree in DTLS packet processing
– fix CVE-2014-3506 – avoid memory exhaustion in DTLS
– fix CVE-2014-3507 – avoid memory leak in DTLS
– fix CVE-2014-3508 – fix OID handling to avoid information leak
– fix CVE-2014-3509 – fix race condition when parsing server hello
– fix CVE-2014-3510 – fix DoS in anonymous (EC)DH handling in DTLS
– fix CVE-2014-3511 – disallow protocol downgrade via fragmentation


To those who are not aware, I refer to the same software that had the Heartbleed vulnerability, and therefore also the same software as some other CVE fixes not long after that. Indeed, it seems that OpenSSL is having a bad year. Well, whatever – or perhaps better put, whoever – is the source (and yes, I truly do love puns) of the flaws is irrelevant. What is relevant is this: they clearly are having issues. Someone, or some people, adding the changes are clearly not doing proper sanity checks and in general not auditing well enough. This just happens, however. It is part of life. It is a bad patch (to those who do not like puns, I am sorry, but yes, there goes another one) of time for them. They’ll get over it. Everyone does, even though it is inevitable that it happens again. As I put it: this just happens.

To those who want to be critical, and not constructively critical, I would like to remind you of the following points:

  • Many websites use OpenSSL to encrypt your data, and this includes your online purchases and other credit card transactions. Maybe instead of being only negative you should think about your own mistakes more, rather than attacking others? I’m not suggesting that you aren’t considering yours, but in case you are not, think about this. If nothing else, consider that this type of criticism will lead to nothing, and since OpenSSL is critical (I am not consciously and deliberately making all these puns; it is just in my nature), this can lead to no good and certainly is of no help.
  • No one is perfect, as I not only suggested above, but also suggested at other times. I’ll also bring it up again, in the future. Because thinking yourself infallible is going to lead to more embarrassment than if you understand this and prepare yourself, always being on the look out for mistakes and reacting appropriately.
  • Most importantly: this has nothing to do with open source versus closed source. Closed source has its issues too, including fewer people who can audit it. The source code of the Linux kernel, for example, is on many people’s computers, and that is a lot of eyes to catch issues. Issues still have happened and still will, however.

With that, I would like to end with one more thought. Apache, the organization that maintains the popular web server as well as other projects, is really to be commended for their post-attack analyses. They have a section on their website which details attacks, and the details include what mistakes they made, what happened, what they did to fix it, as well as what they learned. That is impressive. I don’t know if any closed source corporations do that, but either way, it is something to really think about. It is genuine, it takes real courage to do it, and it benefits everyone. This is one example. There are immature comments there, but that only underscores how impressive it is that Apache does this (they have other incident reports as well, I seem to recall). The specific incident is reported here.

Steve Gibson: Self-proclaimed Security Expert, King of Charlatans


2014/08/28:
Just to clarify a few points. Firstly, I have five usable IP addresses; that is because, as I explain below, some of the IPs in the block are not usable for systems but instead have other functions. Secondly, about the ports detected as closed and my firewall returning ICMP errors: it is true that I do return that, but of course there are ports missing from that list, and none of the others are open (that is, none have services bound to them) either. There are times I flat out drop packets to the floor, but without knowing which log file to check (due to log rotation), I can’t say for sure which was which. So there are indeed some inconsistencies. But the point remains the same: there was absolutely nothing running on any of those ports, just like the ports it detected as ‘stealth’ (which is more like not receiving a response, and what some might call filtered; in the end it does not mean nothing is there, and it does not mean you are somehow immune to attacks). Third, I revised the footnote about FQDNs, IP addresses and what they resolve to. There were a few things I was not clear about, and in some ways unfair about, too. I was taking issue with one thing in particular and I did a very poor job of it, I might add (something I am highly successful at, I admit).


One might think I have better things to do than write about a known charlatan, but I have always been somewhat bemused by his idea of security (perhaps because he is clueless and his suggestions are unhelpful to those who believe him, which is a risk to everyone). More importantly, though, I want to dispel the mythical value of what he likes to call stealth ports (and, even more than that, the idea that anything that is not stealth is somehow a risk). This, however, will not only tackle that; it will be done in what some might consider an immature way. I am admitting it full on, though. I’m bored, and I wanted to show just how useless his scans are by making a mockery of them. So while this may seem childish to some, I am merely having fun while writing about ONE of MANY flaws Steve Gibson is LITTERED with (I use the word littered figuratively and literally).

So let’s begin, shall we? I’ll go in the order of pages you go through to have his ShieldsUP! scan begin. First page I get I see the following:

Greetings!

Without your knowledge or explicit permission, the Windows networking technology which connects your computer to the Internet may be offering some or all of your computer’s data to the entire world at this very moment!

Greetings indeed. Firstly, I am very well aware of what my system reveals. I also know that this has nothing to do with permission (anyone who thinks they have a say in what their machine reveals when connecting to the Internet – or a phone to a phone network, or… – is very naive, and anyone suggesting that there IS permission involved is a complete fool). On the other hand, I was not aware I am running Windows. You cannot detect that, yet you scan ports, which would give you one way to determine the OS? Here’s a funny part of that: since I run a passive fingerprinting service (p0f), MY SYSTEM determined your server’s OS (well, technically, the kernel, but all things considered that is the most important bit, isn’t it? Indeed it is not 100% correct, but that goes with fingerprinting in general, and I know that it DOES detect MY system correctly). So not only is MY host revealing information, YOURS is too. Ironic? Absolutely not! Amusing? Yes. And lastly, let’s finish this part up: “all of your computer’s data to the entire world at this very moment!” You know, if it were not for the fact that people believe you, that would be hilarious too. Let’s break that into two parts. First, ALL of my computer’s data? Really now? Anyone who can think rationally knows that this is nothing but sensationalism at best, but much more than that: it is you proclaiming to be an expert and then ABUSING that claim to MANIPULATE others into believing you (put another way: it is by no means revealing ALL data, in neither the logical – data – sense nor the physical – hardware – sense). And the entire world? So you’re telling me that every single host on the Internet is analyzing my host at this very moment? If that were the case, my system’s resources would be too consumed to even connect to your website. Okay, context would suggest that you mean COULD, but frankly I already covered that this is not the case (I challenge you to name the directory that is most often my current working directory, let alone know that said directory even exists on my system).

If you are using a personal firewall product which LOGS contacts by other systems, you should expect to see entries from this site’s probing IP addresses: 4.79.142.192 -thru- 4.79.142.207. Since we own this IP range, these packets will …

Well, technically, based on that range, your block is 4.79.142.192/28. And technically your block includes (a) the network address, (b) the default gateway and (c) the broadcast address. That means the IPs that would be probing are more like the range 4.79.142.193 – 4.79.142.206. And people really trust you? You don’t even know basic networking, and they trust you with security?
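(If you don’t want to do the arithmetic in your head, an IP calculator will confirm it; this sketch uses Red Hat’s ipcalc – the exact flags and output format vary between ipcalc implementations:)

$ ipcalc -n 4.79.142.192/28
NETWORK=4.79.142.192
$ ipcalc -b 4.79.142.192/28
BROADCAST=4.79.142.207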

Your Internet connection’s IP address is uniquely associated with the following “machine name”:

wolfenstein.xexyl.net

Technically that is the FQDN (fully-qualified domain name[1]), not a “machine name” as you put it. You continue in this paragraph:

The string of text above is known as your Internet connection’s “reverse DNS.” The end of the string is probably a domain name related to your ISP. This will be common to all customers of this ISP. But the beginning of the string uniquely identifies your Internet connection. The question is: Is the beginning of the string an “account ID” that is uniquely and permanently tied to you, or is it merely related to your current public IP address and thus subject to change?

Again, your terminology is rather mixed up. While it is true that you did a reverse lookup on my IP, it isn’t exactly “reverse DNS”. But since you are trying to simplify (read: dumb it down to your level) for others, and since I know I can be seriously pedantic, I’ll let it slide. But it has nothing to do with my Internet connection itself (I have exactly one). It has to do with my IP address, of which I have many (many if you count my IPv6 block, but only 5 if you count IPv4). You don’t have the same FQDN on more than one machine any more than you have the same IP on more than one network interface (even on the same system). So no, it is NOT my Internet connection but THE specific host that went to your website, and in particular the IP assigned to the host I connected from. And the “string” has nothing to do with an “account ID” either. But I’ll get back to that in a minute.

The concern is that any web site can easily retrieve this unique “machine name” (just as we have) whenever you visit. It may be used to uniquely identify you on the Internet. In that way it’s like a “supercookie” over which you have no control. You can not disable, delete, or change it. Due to the rapid erosion of online privacy, and the diminishing respect for the sanctity of the user, we wanted to make you aware of this possibility. Note also that reverse DNS may disclose your geographic location.

I can actually request a different block from my ISP, and I can also change the IP on my network card. Then the only thing left is my old IP and its FQDN (no longer in use – and I can change the FQDN too, as I have reverse delegation; yet according to you I cannot do any of that). I love your ridiculous terminology, though. Supercookie? Whatever. As for it giving away my geographic location, let me make something very clear: the FQDN is irrelevant without the IP address. While it is true that the name will (sometimes) refer to a city, it isn’t necessarily the same city or even county as the person USING it. The IP address is related to the network; the hostname is a CONVENIENCE for humans. You know, it used to be that host-to-IP mapping was done without DNS (since it didn’t exist), with a file that maintained the mapping (and it still is used, albeit very little). DNS exists for convenience, and in general because no one would be able to know the IP of every domain name. Lastly, not all IPs resolve to a name.

If the machine name shown above is only a version of the IP address, then there is less cause for concern because the name will change as, when, and if your Internet IP changes. But if the machine name is a fixed account ID assigned by your ISP, as is often the case, then it will follow you and not change when your IP address does change. It can be used to persistently identify you as long as you use this ISP.

The occasions it resembles the IP are when the ISP has authority over the in-addr.arpa DNS zone of (your) IP and therefore has their own ‘default’ PTR record (but they don’t always have a PTR record, which your suggestion does not account for; indeed, I could have removed the PTR record for my IP and then you’d have seen no hostname). But this does not indicate whether it is static or not. Indeed, even dynamic IPs typically (not always) have a PTR record. Again, the name does not necessarily imply static: it is the IP that matters. And welcome to yesteryear… these days you typically pay extra for static IPs, yet you suggest it is quite often that your “machine name is a fixed account ID” (which is itself a complete misuse of terminology). On the other hand, you’re right: it won’t change when your IP address changes, because the IP is what is relevant, not the hostname! And if your IP changes, then it isn’t so persistent in identifying you, is it? It might identify your location, but as multiple (dynamic) IPs and not a single one.

There is no standard governing the format of these machine names, so this is not something we can automatically determine for you. If several of the numbers from your current IP address (23.120.238.106) appear in the machine name, then it is likely that the name is only related to the IP address and not to you.

Except for ISP authentication logs and timestamps… And I repeat the above: the name can include exactly what you suggest and still be static!

But you may wish to make a note of the machine name shown above and check back from time to time to see whether the name follows any changes to your IP address, or whether it, instead, follows you.

Thanks for the suggestion but I think I’m fine since I’m the one that named it.

Now, let’s get to the last bit of the ShieldsUP! nonsense.

GRC Port Authority Report created on UTC: 2014-07-16 at 13:20:16

Results from scan of ports: 0-1055

    0 Ports Open
   72 Ports Closed
  984 Ports Stealth
---------------------
 1056 Ports Tested

NO PORTS were found to be OPEN.

Ports found to be CLOSED were: 0, 1, 2, 3, 4, 5, 6, 36, 37,
64, 66, 96, 97, 128, 159, 160, 189, 190, 219, 220, 249, 250,
279, 280, 306, 311, 340, 341, 369, 371, 399, 400, 429, 430,
460, 461, 490, 491, 520, 521, 550, 551, 581, 582, 608, 612,
641, 642, 672, 673, 734, 735, 765, 766, 795, 796, 825, 826,
855, 856, 884, 885, 915, 916, 945, 946, 975, 976, 1005, 1006,
1035, 1036

Other than what is listed above, all ports are STEALTH.

TruStealth: FAILED – NOT all tested ports were STEALTH,
                   – NO unsolicited packets were received,
                   – A PING REPLY (ICMP Echo) WAS RECEIVED.

The ports you detected as “CLOSED” rather than “STEALTH” were in fact returning an ICMP host-unreachable. You fail to take into account the golden rule of firewalls: that which is not explicitly permitted is forbidden. That means that even though I have no service running on any of those ports, I still reject the packets to them. Incidentally, some ports you declared as “STEALTH” did exactly the same (because on those I only allow a specific IP block as the source network). The only time I drop packets to the floor is when state checks fail (e.g., a TCP SYN flag is set but it is already a known connection). I could prove that, too, because I actually had you do the scan a second time, but this time I added specific iptables rules for your IP block, which changed the results quite a bit – and indeed I used the same ICMP error code.
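For the curious, a minimal iptables sketch of that golden rule as described above (the port and source block are illustrative, not my actual rules):

# keep established traffic flowing
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
# allow a service only from a specific source block
iptables -A INPUT -p tcp --dport 22 -s 192.0.2.0/24 -j ACCEPT
# state check failure (new connection without SYN): drop to the floor
iptables -A INPUT -p tcp ! --syn -m state --state NEW -j DROP
# everything else was not explicitly permitted, so reject it
iptables -A INPUT -j REJECT --reject-with icmp-host-unreachable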

As for ping: administrators who block ping outright need to be hit over the head with a pot of sense. Rate limit by all means – that is more than understandable – but blocking ICMP echo requests (and indeed replies) outright only makes troubleshooting network connectivity issues more of a hassle, and at the same time does absolutely nothing for security (fragmented packets and anything else that can be abused are obviously dealt with differently, because they are different!). Indeed, if someone is going to attack, they don’t really care whether you respond to ICMP requests. If there is a vulnerability they will go after that, and frankly, hiding behind your “stealth” ports is only a false sense of security and/or security through obscurity (which is itself a false sense of security and even more harmful at the same time). Here are two examples. First, if someone sends you a link (for example, in email) that seems legit and you click on it (there’s a lot of this in recent years and it is ever increasing), the fact that you have no services running does not mean you are somehow immune to XSS, phishing attacks, malware or anything else. Security is, always has been and always will be a many-layered thing. Second: social engineering.
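(And rate limiting, since I brought it up, is easy enough; a sketch with iptables’ limit match – the numbers are arbitrary:)

# answer at most 5 echo requests per second, with a small burst allowance
iptables -A INPUT -p icmp --icmp-type echo-request -m limit --limit 5/second --limit-burst 10 -j ACCEPT
iptables -A INPUT -p icmp --icmp-type echo-request -j DROP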

And with that, I want to finish with the following:

If anyone wants the REAL story about Steve Gibson, you need only Google for “steve gibson charlatan” and see the many results. I can vouch for some of them, but there really is no need to – the evidence is so overwhelming that it doesn’t need any more validation. Here’s a particularly good one, though (which also shows his ignorance, as well as how credible his proclamations are): http://www.theregister.co.uk/2002/02/25/steve_gibson_invents_broken_syncookies/. If you want a list of items, check the search result that refers to Attrition.org and you will see just how credible he is NOT. A good example is the one that links to a page about Gibson and XSS flaws, which itself links to http://seclists.org/vuln-dev/2002/May/25 – itself a great amount of amusement (note that some of the links are no longer valid, as it was years ago, but that is the one at seclists.org and not the only incident).

[1] Technically, what his host is doing is taking the IP address and resolving it to a name (i.e., querying the PTR record, as I refer to above). Since I have reverse delegation (and so have authority over it) and my own domain (which I also have authority over), I have my IPs resolve to fully-qualified domain names. FQDN is perhaps not the best (or fairest) wording on my part, in that I was abusing the fact that he expects the kind of generic PTR records an ISP hands out, rather than a server with a proper A record and a matching PTR record. What he refers to is as above: resolving the IP address to a name, although an IP does not have to have a name. Equally, even if a domain exists by name, it does not have to resolve to an IP (“it is only registered”). He just gave it his own name for his own ego (or whatever else).
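
To make that concrete (placeholder name and an address from the documentation ranges, not my actual records), a proper forward and reverse pair agree with each other:

$ dig -x 203.0.113.5 +short
host.example.com.
$ dig host.example.com +short
203.0.113.5

The PTR query answers what name the IP claims; the A query confirms the name points back at the same IP, which is what distinguishes a properly delegated server from an ISP’s boilerplate naming.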

Death Valley, California, Safety Tips and Harry Potter

I guess this might be the most bizarre title for a post yet, but it is a take on real life and fantasy, particularly the Harry Potter series. By real life I am implying two things; I will get to the Harry Potter part later. While the trigger is a specific tragedy in Death Valley, it is not an uncommon event, and since I have many fond memories of Death Valley (and know the risks), I want to reflect on it all (because indeed fantasy is very much a part of me, perhaps too much so).

For (approximately) the first ten years of my life I visited Death Valley each year, in November. It is a beautiful place with many wonderful sights. I have many fond memories of playing on the old kind of Tonka trucks (a very good example of “they don’t make [it] like they used to”, as nowadays they are made of plastic and what I’m about to describe would be impossible). My brother and I would take a quick climb up the hill right behind our tent, get on our Tonka trucks (each our own) and ride down, crashing or not, but having a lot of fun regardless. I remember the amazing sand dunes with the wind blowing the way it tends to in a desert. I remember being fortunate that a ghost town had a resident who could supply electricity for my nebulizer during an asthma attack (and fortunate to see many ghost towns where miners of the California Gold Rush era would have resided). I remember, absolutely, Furnace Creek with the visitor centre and how nice everyone was there. I even remember the garbage truck driver who let my brother and me activate the mechanism that picks up the bin. I remember the many rides on family friends’ dune buggies. The amazing hikes in the many canyons are probably a highlight (but certainly not the only one). Then there is Scotty’s Castle (they had a haunted house during Halloween, if I recall correctly). There is actually an underground river (an inspiration for another work of mine, but that is another story entirely). They have a swimming pool that is naturally warm. I remember all these things and more, even if most of it is vague. It truly is a wonderful place.

Unfortunately, the vast area – more than 3,373,000 acres (according to Wikipedia, which I seem to remember is about right; I’m sure the official Death Valley site would have more on this) – combined with the very fact that it is the hottest place on Earth (despite some claims; I am referring to officially acknowledged records) at 134.6 F / 57 C, makes it genuinely dangerous. That record was, ironically enough, set this very month: July 10, 1913 (again per Wikipedia, from memory; other sources also place it in the early 1900s). This is an important bit (the day of the month in particular) for when I get to fantasy, by the way. Interestingly, the area I live in has a higher record for December and January than Death Valley by a few degrees (Death Valley: 89 F / 32 C for both months; I have seen at least 95 F / 35 C on our thermostat in both months, and it could have been higher). Regardless, Death Valley’s overall record is 10 C above my location’s (my location: 47 C / 116.6 F; Death Valley as above).

When you consider the size (as listed above) and that much of it is unknown territory for all but seasoned campers (a category my family fits), you have to be prepared. Make no mistake: Death Valley, and deserts in general, can be very, very dangerous. Always keep yourself hydrated. What is hydration, though, for humans? It is keeping your electrolytes balanced, which means too much water is as dangerous as too little. As a general rule of thumb, given to me by the registered nurse of a hematologist I once had (why I had one is another story entirely): if you are thirsty, you waited too long. Furthermore, in Death Valley make sure you either have a guide or know your way around, and keep track, however you do it, of where you go. That may mean maps, a compass, landmarks or any number of other techniques, but it is absolutely critical. I have time and again read articles on the BBC about someone from the UK or elsewhere in Europe who went unprepared and was found dead. It is a wonderful place, but be prepared.

Although this should be obvious, it often isn’t: Death Valley is better visited in the cooler months (close to winter or even in winter). I promise you it won’t be cold by any means; even if you are used to blizzards at home, you will find plenty of heat year round in Death Valley. Actually, I should restate that slightly, thinking of a specific risk: deserts can drop to freezing temperatures! It is rare, yes, but when it happens it really is cold. Deserts can also see lots of rain, even flash floods – yes, I have experienced exactly that. And on the subject of risks: if it looks cloudy, or there is a drizzle or anything more (or you can smell rain about to drop – no, that is not an exaggeration; my sense of smell is incredibly strong), do not even think about hiking the canyons! It is incredibly dangerous to attempt; this cannot be stressed enough. As for deserts and freezing temperatures: I live in a desert (most of Southern California is desert), and although it was over 22 years ago (approximately), we have seen snow in our yard. So desert does not mean no rain or no snow. I’ve seen people compare hot and dry climates with deserts, but that is exactly what a desert is: a hot and dry climate! Climate does not by any means restrict what can or cannot happen; just as Europe can see the mid 30s (centigrade), so too can deserts see less than zero. And all this brings me to the last part: fantasy.

One of my favourite genres (reading, that is; I rarely watch TV or films) is fantasy. While it is not the only series I have read, the Harry Potter series is the one I am referring to in particular, as I already highlighted. Indeed, everything in Harry Potter has a reason and a purpose, and in general will be part of the entire story! That is how good it is and that is how much I enjoyed it (I also love puzzles, so the need to put things together was a very nice treat indeed). I’m thankful for the friend who finally got me to read it (I actually had the books but never got around to reading the ones that were out at the time, which would be up to and including book 3, Harry Potter and the Prisoner of Azkaban). Each of the last two books I read in full the day it came out, with hours to spare.

Well, why on Earth would I be writing about fantasy, specifically Harry Potter, and Death Valley together? I just read on the BBC that Harry Potter actor Dave Legeno has been found dead in Death Valley. He played the werewolf Fenrir Greyback. I will note the irony that today, the 12th of July, is this year a full moon. I will also readily admit that in fantasy, not counting races by themselves (e.g., Elves, Dwarves, Trolls, …), werewolves are my favourite type of creature; I find the idea fascinating and a large part of me wishes they were real (as for my favourite race, it would likely be Elves). I didn’t know the actor, of course, but the very fact he was British makes me think he too fell to the, if you will excuse the pun (by no means meant to offend his family or anyone else), fantasy of experiencing Death Valley, and unfortunately it was fatal. And remember that I specifically gave July 10, 1913 as the date of the record temperature for Death Valley? I meant it when I wrote that it has significance here: he was found dead on July 11 of this year. Whether he actually died on the 11th is not yet known (it is a very large expanse, and it is only because hikers found him that it is known at all), but being one day off is ironic indeed. It is entirely possible he died on the 10th, or days before, or on the 11th itself; that will only be known after an autopsy and backtracking (witnesses and other evidence). Until then it is anyone’s guess, merely speculation.

Regardless, he is another person who was unaware of the risks, of which there are many (depending on where in Death Valley you are; what happens if you are in a vehicle, run out of fuel, and have only enough water for three days? There are so many scenarios, and they are far too often not thought of or simply neglected). Two other critical bits of advice: do not ignore the warning signs posted all around the park, and always, without fail, tell someone where you will be! If someone had known where he was and approximately when he should have been back (which should always be part of telling someone where you’ll be), they could have gone looking for him. This advice goes for hiking, canoeing and anything else, in Death Valley or anywhere, especially if you are alone (though truthfully, and I get the impression he WAS alone, you should not be alone in a place as large as Death Valley: there are many places to fall, there are animals that could harm you, and instead of having a story to bring home you risk not coming home at all).
There are just so many risks; always be aware of them and prepare ahead of time. Regardless, I want to thank Dave for playing Fenrir Greyback. I don’t know if you played in any other films, and I know nothing about you or your past, but I wish you had known the risks beforehand; my condolences (for whatever they can be and whatever they are worth) to your friends and family. I know most will find this post out of character (again, if you will excuse the indeed intended pun) for what I typically write about, but fantasy is something I am very fond of, and I have fond memories of Death Valley as well.