Heartbleed: A Letter to Robin Seggelman From a Fellow Programmer

Edit on 2014/04/18: This post was a real mess. I realised that the other day, but the reason it was a mess is the same reason I am only getting to update it now: too much going on at the moment. The original version deserved significant changes, so I have made them. I have removed the anger and spite (which were themselves a symptom of the "too much going on"). While I do think people take things for granted, and it annoys me a great deal, dwelling on that detracted from my original point, which is this: no one is perfect, and by understanding that you can better not only yourself but others too (which is why I shared a story where I made a very stupid mistake, hated myself for it, was offered some wisdom and got over it). Unfortunately, the anger (I am not sure that is even a strong enough word, sadly, and it is how I have been feeling overall) and spite thrown in pretty much defeated my entire credibility on the subject: being respectful, and being understanding of others' mistakes as well as your own. I have removed the worst of it and fixed a couple of thoughts. As for the rest, while it had good intentions it was not the right approach, and it is something I may write about at some point in the future, or not. I don't think it matters until then.

This is (or was) primarily addressed to Robin, the person responsible for the mistake that led to the Heartbleed vulnerability. Realistically, though, it is addressed to every person who makes a mistake (which includes me, for sure – see above for a rather big example – and every single person on the face of the Earth). More than that, it is addressed to those who feel bad about themselves for making a mistake. While I don't think Robin was feeling bad about himself, I can relate through a very similar bug in a program I work on: memory corruption by way of a miscalculation that resulted in going out of the bounds of a buffer. It is an insidious type of bug; very hard to track down if it ends up crashing the program (which is inevitable for programs meant to run 24/7), because by then the stack is completely trashed and the original erroneous instruction has long since executed. The same type of bug can be abused both by causing a crash (and therefore potentially running arbitrary code) and by doing what Heartbleed allows: leaking information that should not be available. (Interestingly, a core dump created from a crash can ALSO leak information, because it represents the process's memory space at the time of the dump; now imagine deliberately crashing the program at a moment when something confidential is in memory. That is rather like RAM scrapers, only by crashing a program at the right time.) For the purpose of the 'letter' in question, you can read 'Robin' as 'you' or, if you prefer (by all means – I fully admit it), as 'Xexyl'. In fact, I think that is quite appropriate. Furthermore, wherever I write 'you', you can take it to mean yourself, your worst enemy or – more appropriately – me. The bottom line is that no one is perfect, and I share some wisdom I was given by a good friend when I needed it.
This is one of the – admittedly probably few – times where I am actually trying to give something to others I don't even know (maybe the only time, as I am emotionally and socially cold). One could argue this is because, by giving this out, I am allowing others to accept that they too are not perfect, and accepting mistakes can only make the world better (which makes me better). Indeed, the cynical me (which is to say, me as I always have been) would believe – no, knows – this to be the case. But still, if it helps anyone, so be it.


When you made the mistake you were being exactly what is expected of you: you were being human. Humans are imperfect and no one is immune to making mistakes. Further, when you wrote the following (which I first saw on the BBC):

“It’s tempting to assume that, after the disclosure of the spying activities of the NSA and other agencies, but in this case it was a simple programming error in a new feature, which unfortunately occurred in a security-relevant area,” he told Fairfax Media.


“It was not intended at all, especially since I have previously fixed OpenSSL bugs myself, and was trying to contribute to the project.”

It somewhat disheartens me (in some way or another). This is coming from someone who does not really experience what most would call positive emotions, or indeed many emotions at all*. Why do I feel this way, then? Because, as a fellow programmer, I know all too well how a mistake of serious consequence can affect your morale. But make no mistake here: you can only learn and better yourself from it.

And as for you trying to contribute to the project: don't think for a moment that you were merely trying. No, you WERE DOING, which is much better than those who take for granted what OpenSSL provides (and taking it for granted is shameful, but an unfortunate reality). The fact you made a mistake? Well, who doesn't? There is no programmer who has never made a serious error. Even when calculating the size of an array (or memory block) to allocate, all it takes is getting distracted for one nanosecond (the phone rings) or being tired (without realising it). I have done exactly this: I calculated an array size, made the other appropriate adjustments, committed the changes to the CVS repository, and later got corrupt core dumps until it was fixed. The failures were seemingly random in nature (as is expected with buffer overflows). I looked through the diffs of recent revisions and finally fixed it a fortnight later; it drove me crazy until I got it sorted. What had gone wrong? I remember chatting at the time with someone involved in the project (though not as a programmer) and telling her I was feeling very sleepy. I then went to rest, but the mistake (and not sensing how tired I was) was already made. And yes, the bug was a buffer overflow, because I calculated the size incorrectly (meanwhile, I had fixed most if not all bugs in the bug file of said project). But the fact it drove me mad shows just how much programmers (especially those who, like you and I, work unpaid on free projects) CARE about resolving problems, and prefer to FIX rather than work around problems (which only masks something else), sometimes EVERY WAKING SECOND. That is nothing short of commendable and beyond a doubt a very respectable trait! You should be respected for your contributions. OpenSSL is very important and you should be proud of yourself!

I have some wisdom for you. While I'm only a year or so older than you, I have a good friend from Holland who is something of a mentor to me. He surpasses me at programming and Unix, both of which I'm very proficient with, and he is also older than me (he's in his 40s). I know him because that bug I implemented (the buffer overflow) was in his project. He offered me some advice then, and I have since offered it to others in similar (even if not programming) circumstances – a mistake that has the person down on themselves, disappointed, or even just wondering how it happened. Now it is your turn. I am omitting the parts of the message that are project-specific (they would need more background and are irrelevant to the point; essentially he was saying he had made far worse mistakes in the past):

Don’t get stressed because you made a mistake. This
just happens.

In holland we have a saying which, when directly translated, means:

Where there is lumber work, there’s wood chips.

Which means as much as: When you do something, you are bound to make
a mess :)

And what it all boils down to is this: you accept that you made a mistake (you already did), you address it to the best of your ability (which presumably you did, or would have if someone else had not beaten you to it), and ultimately you learn, which is growth. I strongly suspect you learned from this too. That you were able to admit it publicly (let's set aside the fact that revision control can show it anyway, since most people wouldn't know how to look even if they had access to the repository) shows a huge amount of integrity, and that, my fellow programmer, is very respectable. You deserve nothing but praise – your honesty and integrity shine, and you contribute to an important project.

Kind regards, and keep up the great work!


(If you'll excuse the irony: I said the recipient could be Xexyl, but now the sender is Xexyl. Which is it, then? Who knows? I would suggest every person knows their own truth; for me, it is both. As for saying "keep up the good work" to myself? Well, I admit I try not to act like a narcissist, but perhaps this time there is some sign of it, even if unintentional. Alternatively, it might just be the unintended irony in the first place, or even that by fixing what was a huge rant full of rage I am now "doing good" and "should keep it up". Choose your poison, whatever it may be: if you have to poison yourself, at least make it somewhat pleasant, right?)

*My point is specifically that I don't relate to people very well, if at all. In addition, I don't feel many emotions, and what I do feel is quite negative. I don't identify with people, and that shows itself in various ways. What I can relate to, though, is being a programmer and knowing how – at least for the programmers I know – we strive to fix the bugs we implement (and this is true: bugs aren't created by accident – they are implemented) and are not satisfied until the program works properly. By "works" I don't mean seems to work; I mean it truly is functioning properly. Indeed, memory corruption – much like, interestingly enough, faulty RAM – can cause seemingly random problems at times, while at other times things seem OK when in reality they are not.

Windows XP End of Life: Is Microsoft Irresponsible?

Given how critical I am of Microsoft, one might make the (usually accurate) assumption that I'm about to blast them. Whether anyone expects me to or not, I don't know, but I will make something very clear: I fully take responsibility for my actions, I fully accept my mistakes, and I make the best of situations. I would like to hope everyone else does too. I know that is unrealistic at best; too many people are too afraid to admit when they don't know something (and therefore irresponsibly make up pathetic and incorrect responses) or when they make a mistake. But the fact of the matter is this: humans aren't perfect. Learn from your mistakes and better yourself in the process.

No, I am not going to blast Microsoft. Microsoft _was_ responsible: they announced the end of life _SEVEN YEARS AGO_! I am instead blasting those who are complaining (and "complaining" is putting it VERY nicely – it is more like a whiny little brat throwing a temper tantrum at not getting their own way on every single thing, despite having been told in advance this would happen) about how they now have to upgrade quickly or stop getting updates, security updates included. Let's take two examples. First, Rosemayre Barry, manager of the London-based business The Pet Chip Company, who stated the following (as reported by the BBC):

“XP has been excellent,” she says. “I’m very put out. When you purchase a product you don’t expect it to be discontinued, especially when it’s one of [Microsoft's] most used products.”


Sorry to burst your bubble, Rosemayre, but ALL software is eventually discontinued (just as smoke detectors, carbon monoxide detectors and the like have to be replaced and/or are improved over time, and that is not even considering maintenance like battery replacement). You can complain all you want, but this is not only the correct thing technically; it is economically unfeasible to keep maintaining a product as old as Windows XP. I don't care how widely used it is or isn't (I hardly expect it to be Microsoft's most used product anyway; I would argue their office suite is more used, as it works across multiple versions of Windows and corporations rely on it heavily). I also know for a fact that corporations tend to have contracts under which they LEASE computers from manufacturers for a few years at a time, and when the next lease begins they get more recent software, operating system included. Why would they do that? Again, because it is economically better for the company. And here's some food for thought: Windows XP was released in 2001, which according to my trusty calculator (i.e., my brain) makes it almost a 13-year-old product (it was released in August and we're only in April). Now consider this: Community ENTerprise OS (CentOS), a distribution of Linux largely used for servers, has a support lifetime of, as far as I remember, only 10 years. And you know something else? CentOS is very stable precisely because it does not have many updates – in other words, it is not on the bleeding edge. When a security flaw is fixed, the fixes are backported into the older versions of the affected libraries and/or programs.
Indeed, the current GCC version is 4.8.2, while CentOS's current version (unless you count my backport of 4.7.2, which you can find more about at The Xexyl RPM Repository – possibly others exist elsewhere, but the packages I maintain have not yet been updated to the 4.8.x tree) is 4.4.7, released on 2012-03-13 – the 13th of March, 2012. Yes, that is over _two years ago_. It means you don't get the newer standards (and even though the most recent C and C++ standards were ratified in 2011, that is not to say anything past the ratification date magically has it all; in fact, some features are still incomplete in the most recent compiler versions), but it also means your system remains stable, and that is what a server needs to be. What good is a server if the service it offers is unstable (and I'm not referring to Internet connection stability – that is another issue entirely and has nothing to do with the operating system) and hard to use? Very little indeed. Realistically, 10 years is very reasonable, if not more than reasonable: over a span of 10 years a lot changes, including core changes (and let's not forget changing standards), which makes maintaining something for 10 years quite a feat, and I can give nothing but my highest praise to the team at CentOS – an open source and FREE operating system. To be fair to this manager, she at least DID upgrade to a more recent Windows, but the complaint itself is pathetic, irresponsible and naive at best, and stupid at worst.
It is also unrealistic and unfair to Microsoft – and this from someone who is quite critical of Microsoft in general and has railed on them more than once, in security and otherwise, in quotes about their capabilities and in articles alike, and quite harshly too. Examples, one of which even includes a satirical image I made directed at Windows in general: Microsoft's Irresponsible Failures and Why Microsoft Fails at the Global Spam Issue.

Next, let's look at what the UK government has done: they are paying Microsoft £5.5m to extend updates for Windows XP, Office 2003 and Exchange 2003 for ONE year. That is absolutely astonishing and – to UK taxpayers, I would think – atrocious. What the hell are they thinking? If SEVEN years of warning was not enough time, what makes ONE extra year worth THAT MUCH? Furthermore, and most importantly, if they could not UPGRADE in SEVEN YEARS, what makes any rational being expect them to UPGRADE WITHIN A YEAR? They claim they're saving money. Yeah, right. Not only are they paying to get updates for another year; they will STILL have to upgrade eventually if they want updates after that. Think of it this way. When a major part of your car dies, you might consider fixing it; it will likely be pricey. Then, shortly thereafter (say within a year or two), another major part dies. The car has also been driven for quite some years and is certainly out of warranty. What is the most logical and financially sound choice? To assume this will be the last part to die – surely nothing else can go wrong! – pay for it, and then wait for the third part to fail (which it almost certainly will; it is mechanical, and mechanical things die)? Or to cut your losses and get a new car? I think we all know the answer. "We", of course, does not include the UK government.

The bottom line here is quite simple: no, Microsoft is not being irresponsible, and they are not being unreasonable either – they gave SEVEN YEARS' notice. The only irresponsible and unreasonable people – and companies and/or governments – are those who STILL use Windows XP, especially those now forced to upgrade while whining worse than a spoiled brat who, used to getting his way, throws a tantrum the one time he doesn't. Lastly, I want to point out the very dangerous fallacy these people are aligning themselves with. Those of us who remember when the TELNET and RSH protocols were prevalent also remember that there came a time when enough was enough and standards had to change (e.g., to the secure shell, ssh). Those with any amount of logic in them UPGRADED. Many (though not as many as should have) tolerated the problems with those protocols for far too long, among them the following. (Note that these are Unix systems – so yes, NO system is immune to security problems, be it Windows, Mac, Unix or anything else. Incidentally, Unix systems are what typically run servers, which means customer data in databases on those servers – especially then, as Windows NT was still in its infancy by the time most, but probably not all, changed over.)

  1. A common configuration would allow "you" to remotely log in to a machine as a user from ANY HOST WITH NO PASSWORD. And of course it was PERFECTLY SAFE, because after all, they won't have a user with the same name, right? Well, did it ever occur to you that they could CREATE a user with that name? Ever hear of grabbing the password file remotely to find user names? Or an unscrupulous employee who could do the same? An employee who was fired and wants revenge (and happens to have user names, or maybe even stole data before being completely locked out – maybe they even left a backdoor in)? For those who are slow, that is sarcasm: it was NEVER safe and it was ALWAYS naive at best. (This problem is one of trust relationships, and that is one of the biggest problems in security – too much trust given far too easily.) And indeed, Unix – just like the predecessor to the Internet – was NEVER designed with security in mind. That is why new standards are a good thing: to address problems and to extend, deprecate or obsolete old standards (like, I don't know, IPv6 as opposed to IPv4, anyone?).
  2. No encryption means sniffing could show the user and password (as well as other information in the traffic) to the sniffing party. Assuming that there is no one to sniff your traffic is security through obscurity at best and that is arguably worse than no security (it is a false sense of security and when taken to extreme some will refuse to believe it is a problem and therefore are blinded to the fact they already are or could be compromised any moment).

Consider those two examples for a moment. Then take the logic of "most people use it" or "it is so convenient and we shouldn't have to upgrade", and where do you end up? Exactly like those not upgrading from Windows XP, or throwing tantrums about having to upgrade to something more recent (despite the seven years' notice and it being in the news more and more as the deadline approached) or else not receive updates. In other words, you are staying behind the times AND risking your data, your customers' data and your systems (and that means your network, if you have one). And you know something? You had it coming, so enjoy the problems YOU allowed, and let's hope only you or your company is affected and not your customers (because it would be YOUR fault).

Preventing systemd-journald and crond from flooding logs

I will come out and admit it fully: there has always been at least one thing about systemd that bothers me a great deal. To be brutally honest, quite a few things bother me, but one of the most obnoxious is something the developers seem not to regard as a problem (despite the bug reports, and despite some people worrying their system had been compromised because of the way the message is written): every time cron runs a task, systemd writes not one but two messages to the system log (/var/log/messages) and to the journal. It is absolutely infuriating: it fills the log files, which then get rotated out (once they reach their size cap), and besides that it makes it REALLY hard (short of grep -v on a pattern across multiple log files, which should NOT be necessary!) to find other important messages in the huge ugly disaster the log file is left in. Equally bad, there is already a log file – check this out – called /var/log/cron, with the information that should be ALL that is needed. But no: not only does the message NEED to be in /var/log/messages, and not only does it NEED to be in /var/log/cron, it ALSO NEEDS to be in the journal, the so-called improvement over logs. /sarcasm. Three places for the same bloody message? Really? What the hell is that? Anyone who knows enough to check logs knows there are MULTIPLE LOGS for DIFFERENT reasons! So while I titled this post as being about preventing flooded logs, that is realistically FAR too nice. It should be more like: making systemd shut the hell up and knock off the stupid log flooding (which, incidentally, some might consider a DoS – denial of service – since it makes it much more difficult to manage and review logs normally).

Well, I had had WAY too much of this crap and while I’m easily irritated (and agitated lately) I think I’m not the only one who is completely fed up with the way they are handling it (or not handling it rather). So here is how you can make this flood stop. First, though, the message would look like this:

Mar 16 04:55:01 server systemd: Starting Session 3880 of user luser.
Mar 16 04:55:01 server systemd: Started Session 3880 of user luser.

in /var/log/messages. As you can imagine, an unsuspecting user might see that for one of the system cron jobs (e.g., those in /etc/cron.hourly/, which run as root) and think that someone had logged in as root on their system (when in fact it was cron). Conveniently, the clowns responsible make it end up in that file even though it is already in the journal. Why is that? Oh, something like this, taken from journald.conf(5) (that is: man 5 journald.conf):

       ForwardToSyslog=, ForwardToKMsg=, ForwardToConsole=
Control whether log messages received by the journal daemon shall be forwarded to a traditional syslog daemon, to the kernel log buffer (kmsg), or to the system console. These options take boolean arguments. If forwarding to syslog is enabled but no syslog daemon is running, the respective option has no effect. By default, only forwarding to syslog is enabled. These settings may be overridden at boot time with the kernel command line options “systemd.journald.forward_to_syslog=”, “systemd.journald.forward_to_kmsg=” and “systemd.journald.forward_to_console=”.

Someone remind me: wasn't the idea for Fedora 20 to REMOVE the syslog daemon from the default install, because the journal was deemed sufficient, logs were being stored twice (ha – nice number, except it is lower than the truth, at least in the case of cron, where it is three times) and the journal had had enough time to show it works? No, no need: that absolutely was their idea! Yet they clearly didn't think it through, did they? If journald forwards to syslog, what about systems that are upgraded rather than freshly installed? The syslog daemon will still be installed, geniuses! Yet here you are, forwarding to syslog by default. Brilliant – if your idea of brilliant is beyond stupid.

Oh, and if you think the rant is done, I'm sorry to say it is not. What you also find for cron jobs is the following, in /var/log/cron as it always has been. (It is not the same entry or instance as above, but it shows the idea – in fact, it shows more specific information, namely WHAT was executed, rather than just a vague and unhelpful "session started" for the user, and it is not two nonsense lines about "starting" and then "started". Whatever happened to "no news is good news", i.e., if there is no output there is no error?)

Mar 29 20:55:01 server CROND[2926]: (luser) CMD (/home/luser/bin/script.sh)

(There are also the normal run-parts entries for the hourly, daily and monthly cron jobs, but those likewise show the commands executed.)

And then there is the third copy: the journal, which includes BOTH of the above:

Feb 16 18:05:01 server systemd[1]: Starting Session 1544 of user luser.
Feb 16 18:05:01 server systemd[1]: Started Session 1544 of user luser.
Feb 16 18:05:01 server CROND[13241]: (luser) CMD (/home/luser/bin/script.sh)

Redundancy is good in computing but NOT in this way. Redundancy is good with logs but again, NOT in this way. No, this is just pure stupidity.

Now then, here’s how you can make journald cut this nonsense out.

  1. In /etc/systemd/ you will find several files. The first one to edit is “journald.conf”. In it, uncomment (remove the # at the start of) the line that starts with: #Storage=
    Then change whatever is after the = to syslog (without quotes).
  2. The next file (same directory) is “user.conf”. Again, uncomment a line to activate the option – this time the line is #LogTarget= – and change what is after the = to syslog (again, without quotes).
  3. Next, edit “system.conf” (same directory still) and make the same change as in “user.conf”. (Note: I am not 100% sure whether both “user.conf” and “system.conf” are needed; if only one is required, I don't know which one, nor do I care.)
  4. Now, this may vary depending on what syslog daemon you have. I’m assuming rsyslogd. If that is the case change to the directory: /etc/rsyslog.d/
  5. Once in /etc/rsyslog.d/ create a file that does not exist – maybe cron.conf – and add the following lines:
    :msg, regex, "^.*Starting Session [0-9]* of user" stop
    :msg, regex, "^.*Started Session [0-9]* of user" stop

    Note on this: “stop” is for newer versions of rsyslogd, which you will have if you're using Fedora. For older versions, change the “stop” to a tilde (a “~”). If you check /var/log/messages after restarting rsyslogd and notice a problem with “stop”, try the other form (and if you try “~” first, rsyslogd will tell you it is deprecated). These two rules, combined with the changes to the systemd files, ensure that only syslog receives the message, where it is then discarded (which is fine because, as I already noted, /var/log/cron has the information).

  6. To enable all of this, run the following (you need to be root – in fact, you need to be root for all of the steps):


# service rsyslog restart
# systemctl restart systemd-journald.service
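For reference, the edits in steps 1 through 3 amount to the following uncommented lines, one per file (everything else in the three files stays as it was; I am only restating my steps above, so do check journald.conf(5) and systemd-system.conf(5) for the values your version accepts):

```ini
# /etc/systemd/journald.conf
Storage=syslog

# /etc/systemd/user.conf
LogTarget=syslog

# /etc/systemd/system.conf
LogTarget=syslog
```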

Note the following: the second command may or may not be enough. I only did this on a remote server, and I was not about to play the game of "is it because I didn't restart the right service, or is something else not properly configured?". I've yet to do it on any local machines, so I cannot remark on it further. If rebooting is an option and it does not work as described above, rebooting could be one way around it.
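If you want to sanity-check the two patterns before touching the rsyslog configuration, you can approximate them with grep -E on sample input (the sample lines below are made up to mimic the messages shown earlier; rsyslog's regex dialect is close enough to POSIX extended regular expressions for this purpose):

```shell
# Write a few sample log lines mimicking the messages shown above.
printf '%s\n' \
  'Mar 16 04:55:01 server systemd: Starting Session 3880 of user luser.' \
  'Mar 16 04:55:01 server systemd: Started Session 3880 of user luser.' \
  'Mar 29 20:55:01 server CROND[2926]: (luser) CMD (/home/luser/bin/script.sh)' \
  > /tmp/sample.log

# Lines matching either pattern are the ones the rsyslog rules discard;
# whatever this prints is what would still reach /var/log/messages.
grep -vE 'Start(ing|ed) Session [0-9]* of user' /tmp/sample.log
```

Only the CROND line should survive; if either systemd line still shows up, the pattern needs adjusting before you put it into rsyslog.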

Questions that might come to mind for some:

  1. Since we redirect the journal to syslog, do we still see the usual log messages? Yes, you do. For instance, you'll still see in /var/log/messages when someone uses ‘su’, or when (for example) you restart a service that logs its stops and starts to syslog.
  2. What about the fact this shunts cron messages out of the syslog? As I mentioned, the information is stored (in more thorough form) in /var/log/cron, so you won't lose it. The only place that loses it is the place it should never have been in the first place: /var/log/messages.
  3. How does this affect the journal? Good question – and I actually don't care: the journal uses more disk space, and log rotation, backup, remote storage and compression of plain log files all work just as well (if you have them set up). My guess is that future log messages will go only to syslog and not to the journal, but I am not 100% certain; I will know in time, if I bother to check. It likely depends on how the journal interprets the options – indeed, many other options I thought might solve the problem were definitely not interpreted as I guessed – so the question comes down to whether directing to syslog diverts messages from the journal or duplicates them. For those low on disk space, though, note that the journal uses far more. If you run ‘systemctl status systemd-journald.service’ you might see something like: Runtime journal is using 6.2M (max allowed 49.7M, trying to leave 74.6M free of 491.1M available → current limit 49.7M), and another line like: Permanent journal is using 384.6M (max allowed 2.8G, trying to leave 4.0G free of 22.5G available → current limit 2.8G).
  4. Perhaps most importantly: does this prevent showing users logging in? No. You’ll still see, for example, the following:
    Mar 29 21:49:37 server systemd-logind: New session 167 of user luser.
    and when they log out:
    Mar 29 21:49:39 server systemd-logind: Removed session 167.

All that noted, hopefully someone will see this and be helped by it. What would be more ideal, however, is if the maintainers actually fixed the problem in the first place. Alas, they are only – just like you and me – human, and to be fair to them, they aren’t being paid for the work either.

whois and whatmask: dealing with abusive networks

(Update on 2013/03/11: I added another grep command, as I just discovered another line that gives the netblock of an address directly from whois, so you do not have to worry about finding the proper CIDR notation yourself. Ironically, the IP in question was from the same ISP I wrote about originally – hinet.net. Regardless, the second grep's output shows one of the many differences between whois servers' output formats.)

My longest-standing friend decided at the end of last year that he wanted to get me some books (thanks a great deal, by the way, Mark – it means a damn lot, and I'm eternally grateful we've stayed in contact throughout the years). He lives in England and I live in California, and we've “known” each other for almost 18 years. Between a problem with Amazon.com and his being in New York for part of this time as part of his job, the gifts did not arrive until yesterday. Now, of course I could not know every detail of the books in advance, but one of them was a Linux networking book. It is more like a recipe book, and while there is some material I know (some of it very well), and some that is not useful to me, there is also some I find interesting or useful. Which brings me to this post. Obviously I know of the whois protocol, but what I did not know about is the utility ‘whatmask’. There is a similar utility called ‘ipcalc’, but on CentOS it is very different from what you would expect and I found many problems with it. So I was skimming sections of the book (the name fails to come to mind at this time) and noticed it discussed this very thing and mentioned the alternative ‘whatmask’ on CentOS and Fedora Core.

I thought this would be very interesting to see. Sure, you can do it by hand, but this is much more time efficient and gives you a quick summary. Further, with whois, you can confirm your suspicions. Yes, I know that if whois shows a netblock as (this is of course a private block) – the CIDR notation is /8. But that is beside the point, and if I were to consider that, then I would have nothing to write about (and it has been quite a while since I have written anything strictly technical – something I’ve been wanting to correct since my birthday last month, but I have been too busy working on a project that is pretty important to me).

Now, then, about dealing with abusive networks. Firstly, there are many ways to take care of a network. I am obviously not condoning or suggesting anything malicious, nor anything aimed at their end. The Linux kernel has netfilter, which is what the IPv4 and IPv6 firewalls iptables and ip6tables (respectively) use. Yes, I could write an iptables rule to stop all traffic from a certain network, but this is less efficient than simply adding a blackhole route for that address. The problem was: how do you determine the entire range of IPs that they own? I seem to remember that they had different blocks. Further, a whois on the domain won’t show the network block (forget for a moment that it does when you use an IP in the netblock). Either way, the procedure below can be done for any IP.

The network in question is hinet.net and it is located in Taiwan. The abuse is not so much attack attempts, and it is not necessarily the owner’s fault (it is an ISP). What it is, though, is a lot of spam attempts (to accounts that don’t exist on my end, plus relay attempts to other hosts, neither of which I allow, just like all responsible administrators; indeed, running an open relay – notwithstanding an administrator who unknowingly makes a mistake or has a flaw exploited on their server – is nothing but malicious, as far as I am concerned). Since this is an ISP (I know it is, in fact, because I remember seeing dynamic IPs in their block or blocks before), they don’t need anything from my network. And even if they have customers who are corporations, the fact of the matter is: I am not a customer of said corporations, I’ve never seen such a corporation, and I don’t actually care. Abusive networks are not something anyone on the receiving end would tolerate (just as you would not exactly tolerate someone walking up behind you and hitting you in the back). So, let us take an IP in their network and see the ways to determine all IPs in the block that IP is in:

One of the IPs is ’′. This is one that I specifically added a blackhole route to, and that means one thing and one thing only: I saw it attempt what I described above. So what do you do? Well, firstly, I run fail2ban (one option of many) and I’m fairly restrictive about how many failures I allow (like, 1) before an IP is blocked. But let’s assume you want to take care of ALL IPs in that block (because you’ve seen many over the years) and you don’t even want to give them a chance to connect to your services. Then what you do is the following. Note that I am limiting the output here.

$ whois | grep -E 'NetRange|inetnum|CIDR'
inetnum: -

Note that if you see CIDR in the output (see also the end of this post, where I give another whois command piped to grep, with another line that shows the CIDR notation), then you have the network block right there. If, however, you see NetRange or inetnum (there may be others that I’ve not seen, so your mileage may vary and it may be wise not to pipe the output to grep), then you don’t have the block – at least not in a notation that setting a blackhole route will accept (again, see the end of the post, where I note another field that gives the entire network mask).
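To make the field names concrete, here is a fabricated whois response run through the same grep; the 203.0.113.0/24 documentation range (RFC 5737) stands in for a real block, and real registry output varies considerably between RIRs:

```shell
# A fabricated whois response, purely to illustrate which fields are
# worth grepping for. Real output differs between registries (RIRs).
sample='inetnum:        203.0.113.0 - 203.0.113.255
netname:        EXAMPLE-NET
country:        ZZ
CIDR:           203.0.113.0/24'

# Pull out only the lines that reveal the network block.
printf '%s\n' "$sample" | grep -E 'NetRange|inetnum|CIDR'
```

When a CIDR line is present (as in this sample) you can use it directly; otherwise you only have the start and end of the range.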

Now, the inetnum output above would tell me that the CIDR notation is /16, so if I add a blackhole route for then I am set. But assume for a moment that you don’t know that. Well, here is where whatmask comes in handy. Sort of. It needs a CIDR notation, with or without an address. /32 is a single address (which whatmask will show as 0 usable addresses, because it treats what you specified as a network block whose network and broadcast addresses are that same address), /31 is two addresses (again with 0 usable hosts, since a network – in IPv4 – needs both a network and a broadcast address), and /0 is every single IPv4 address (2 ^ 32 of them, much as IPv6 has 2 ^ 128). More generally, the common network block sizes in CIDR notation are /8, /16 and /24 (/8 having the most addresses, /16 fewer than /8 but more than /24, and /24 the fewest of the three). So the possible CIDR numbers you can specify fall between /0 and /32; for a real network block it won’t be /0, /31 or /32, so you can just play around with it if you don’t know. Over time you get used to recognising the proper CIDR notation, but understand this: the number after the slash is how many bits are reserved for the network portion of the address. So if it is /8, then 32 – 8 = 24 is how many bits are available to hosts, which is why the higher the number after the slash, the fewer IPs are available. When you find the right number, you can then do this:

$ whatmask
IP Entered = ..................:
CIDR = ........................: /16
Netmask = .....................:
Netmask (hex) = ...............: 0xffff0000
Wildcard Bits = ...............:
Network Address = .............:
Broadcast Address = ...........:
Usable IP Addresses = .........: 65,534
First Usable IP Address = .....:
Last Usable IP Address = ......:

Now observe the following things:

  • The result of the filtered whois output shows: -
  • The Network Address line in the whatmask output is:
  • The Broadcast Address line in the whatmask output is:
  • The First Usable IP Address line in the whatmask output is:
  • The Last Usable IP Address line in the whatmask output is:
  • Add these together, and you know that the netblock IS - which means that the proper netblock in CIDR notation IS
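If you would rather compute the mask and usable-address count yourself – or double-check whatmask – the underlying bit arithmetic is simple. This is just a sketch of the math (POSIX shell assumed), not anything whatmask itself runs:

```shell
# Derive the dotted-quad netmask and usable host count from a CIDR
# prefix length. /16 matches the whatmask example above.
prefix=16

# The netmask is the top 'prefix' bits of a 32-bit word set to 1.
mask=$(( (0xFFFFFFFF << (32 - prefix)) & 0xFFFFFFFF ))
netmask="$(( (mask >> 24) & 255 )).$(( (mask >> 16) & 255 )).$(( (mask >> 8) & 255 )).$(( mask & 255 ))"

# 2^(host bits), minus the network and broadcast addresses.
hosts=$(( (1 << (32 - prefix)) - 2 ))

echo "Netmask: $netmask"          # prints: Netmask: 255.255.0.0
echo "Usable addresses: $hosts"   # prints: Usable addresses: 65534
```

The 65,534 here agrees with the “Usable IP Addresses” line in the whatmask output above.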

Putting that together, you can add a command like the following to your firewall script or some other script that runs at boot (so the route survives a reboot). Note the # prompt – you need to be root to do this, so either prefix the commands with sudo or su to root, do what you need to do, and then log out of root:

# ip route add blackhole
# ip route show

(Technically, yes, the ip route show command will show more output, but I am showing only the route we added, for the sake of brevity.)

After this, no IP in that range will EVER reach your box directly (I won’t get into what happens if they breach another box in your network and connect from there, nor will I discuss segregating networks, because those are other issues entirely).
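Before adding such a route, it can be worth sanity-checking that a particular offending address really falls inside the block you derived. The same bit logic works; this sketch uses the RFC 5737 documentation range 203.0.113.0/24 as a stand-in for a real block:

```shell
# Convert a dotted-quad IPv4 address to a 32-bit integer.
ip_to_int() {
  old_ifs=$IFS; IFS=.
  set -- $1           # split the address into its four octets
  IFS=$old_ifs
  echo $(( ($1 << 24) | ($2 << 16) | ($3 << 8) | $4 ))
}

net=$(ip_to_int 203.0.113.0)    # network address of the block
prefix=24
mask=$(( (0xFFFFFFFF << (32 - prefix)) & 0xFFFFFFFF ))
addr=$(ip_to_int 203.0.113.57)  # candidate (hypothetical) offender

# An address belongs to the block when it equals the network address
# under the netmask.
if [ $(( addr & mask )) -eq $(( net & mask )) ]; then
  result="in block"
else
  result="not in block"
fi
echo "$result"   # prints: in block
```

If the check says “not in block”, the range you looked up is wrong (or the offender sits in a different block owned by the same network) and blackholing it would miss the target.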

As for the second grep output, where whois directly gives you the CIDR notation (note that I’m only searching for one string this time, because I already showed the others I’m aware of, and this specific IP uses the one I’m searching for – indeed, I first ran whois on the IP with no grep, and that’s when I discovered this line):

$ whois|grep Netblock

So from that, as root, you could add a route to that range (or do whatever; put an iptables rule in or some such – blackhole routes are much more useful when blocking an entire subnet, because they use fewer resources, though by how much I don’t know and I have no real way to benchmark it. I don’t actually care, though. The entire point of the post was not adding routes or firewall rules but rather dealing with abusive networks. The same can be applied if you take some of the published lists of networks known to be the source of attacks, or if you want to block some network for abuse or some other reason entirely).

Dangerous and Stupid: Making Computer Programming Compulsory in School

This is something I thought of quite some time ago, but for whatever reason I never got around to writing about it. However, since it seems England is now doing exactly what I would advise against, and since I’m not actually programming (for once), I will take the time to write this. And it’s probably for the best that I’m not programming today, given how tired I am. But I guess the article and its clarity will either show that or not. Either way, here goes:

So firstly, what is this all about? To put it simply: in September, all primary and secondary state schools in England will require students to learn programming (“coding” is the way many word it). To make it worse, it seems they do not know that there is a real difference between programming and coding (and I hope this is just, for example, the BBC wording it that way).

Although I discussed that very topic before, let me make it clear (since it is pretty important, even if those involved won’t see this). You write code, yes, but much like a building contractor NEEDS plans and a layout of the building BEFORE construction, you REALLY NEED a plan BEFORE you start to write the code. If you don’t, then what are you really going to accomplish? If you don’t even know what you’re TRYING to write, then how WILL you write it? You might as well rewrite it SEVERAL times, and guess what? That is EXACTLY what you will be doing! How do I know? Because I’ve worked on real programming projects as well as stupid/no-real-use programs, that’s how. If you don’t have a purpose (what will it do, how will it behave if the user inputs invalid input, how will output look, etc.), you are not going to learn, because all you’re doing is writing code with no meaning. Besides not learning properly, you’re more likely to pick up bad programming practices (because, after all, you’re not really working on anything, so “surely it is OK if I just use a hack or don’t use proper memory management!”). The real danger there is that the fact it APPEARS to work further strengthens your reasons to use said bad practices in REAL projects (just because a computer program does not crash immediately, in the first hour of run time, or even all the way to the program finishing – for those that are meant to finish, anyway – does NOT mean it is functioning properly; sorry, but it is NOT that simple). There are many quotes about debugging, and there is a saying (I cannot recall the ratio, but I want to say 80:20 or 90:10) that X percent of the time on a programming project is spent debugging – and it is not exactly a low number, either.

The problem is this: computer programming involves a certain aptitude, and not only will some students resent this (and just one student resenting it is a problem with this type of thing) just as they resent other things, some might still enjoy it even if they don’t learn properly, which is a risk to others (see the end of this post). Also, you cannot teach security, and if you cannot teach security you sure as hell cannot teach secure programming (and it’s true: they don’t, and that is why there are organisations that guide programmers in secure programming – OWASP for web security alone, and others for system and application programming). As for resentment, take me, for example, back in high school. I didn’t want to take a foreign language because I had no need for it, I was very, very ill at the time (much more than I am now) and I have problems hearing certain sounds (of course the school’s naive “hearing tests” told them otherwise, even though I elaborated time and again that, yes, I hear the beeps, but that doesn’t mean much in the way of letters, words and communication when it comes to learning, does it? The irony is that I had hearing tubes put in when I was three – perhaps the school needed them? – so you would think they could figure this out, but they were like all schools are: complete failures), which ultimately would (and indeed DID) make it VERY difficult to learn another language. But I was required to take a foreign language. So what did I do? I took the simplest of the offered languages (simplest in terms of whatever those ‘in the know’ suggested), for the least number of years required (two), and I basically learned only what I absolutely needed to pass (in other words, I barely got the lowest passing mark, which by itself was below average) and forgot it in no time after getting past the course.

The fact that programmers in the industry just increase static-sized arrays to account for users inputting too many characters – instead of properly allocating the right amount of memory (and remembering to deallocate it when finished) or using a dynamically sized container (or string) like C++’s vector (or string class) – says it all. To make it more amusing (albeit in a bad way), there is this very relevant report, noted on the BBC, in February of 2013. I quote part of it and give the full link below.

Children as young as 11 years old are writing malicious computer code to hack accounts on gaming sites and social networks, experts have said.


“As more schools are educating people for programming in this early stage, before they are adults and understand the impact of what they’re doing, this will continue to grow.” said Yuval Ben-Itzhak, chief technology officer at AVG.

Too bad adults still do these things then, isn’t it? But yes, this definitely will continue, for sure. More below.

Most were written using basic coding languages such as Visual Basic and C#, and were written in a way that contain quite literal schoolboy errors that professional hackers were unlikely to make – many exposing the original source of the code.

My point exactly: you’ll teach mistakes (see below also), and in programming there is no room for mistakes; thankfully here, at least, it was not for stealing credit card numbers, stealing identities or anything to that degree of seriousness. Sadly, malware these days has no real art to it and takes little skill to write (anyone remember some of the graphical and sound effects in the payload of the old malware? At least back then any harm – bad as it could be – was done to the user, rather than mass theft, fraud and the like on a global scale. Plus, the fact that most viruses in the old days were written in assembly, more often than not, shows how much has changed, skill-wise, and for the worse).

The program, Runescape Gold Hack, promised to give the gamer free virtual currency to use in the game – but it in fact was being used to steal log-in details from unsuspecting users.


“When the researchers looked at the source code we found interesting information,” explained Mr Ben-Itzhak to the BBC.

“We found that the malware was trying to steal the data from people and send it to a specific email address.


“The malware author included in that code the exact email address and password and additional information – more experienced hackers would never put these type of details in malware.”


That email address belonged, Mr Ben-Itzhak said, to an 11-year-old boy in Canada.


Enough information was discoverable, thanks to the malware’s source code, that researchers were even able to find out which town the boy lived in – and that his parents had recently treated him to a new iPhone.

Purely classic, isn’t it? Sad though, that his parents gave him an iPhone while he was doing this (rather than teaching him right from wrong). But who am I to judge parenting? I’m not a parent…

Linda Sandvik is the co-founder of Code Club, an initiative that teaches children aged nine and up how to code.

She told the BBC that the benefits from teaching children to code far outweighed any of the risks that were outlined in the AVG report.

“We teach English, maths and science to all students because they are fundamental to understanding society,” she said.

“The same is true of digital technology. When we gain literacy, we not only learn to read, but also to write. It is not enough to just use computer programs.”

No, it isn’t. You’re just very naive, or an idiot. I try to avoid direct insults, but it is the truth and the truth cannot be ignored. It IS enough to use computer programs, and most people don’t even want to know how computers work: THEY JUST WANT [IT] TO WORK AND THAT IS IT. There are few – arguably no – so-called benefits. Why? Because those with the right mindset (hence aptitude) will either get into it or not. When they do get into it, though, at least it’s more likely to be done properly. If they don’t, then it wasn’t meant for them. Programming is a very peculiar thing, in that it is in fact one of the only black-and-whites in the world: you either have it in you or you don’t. Perhaps instead of defending the kids (which ultimately puts the blame on them, and even I, someone who doesn’t like being around kids, see that that is not entirely fair – shameful!) by suggesting that the gains outweigh the risks, you should be defending yourself! That is to say, you should be working on teaching ethical programming (and if you cannot do that – because, say, it’s up to the parents – then don’t teach it at all) rather than taking the “here it is, do as you wish” (i.e., lazy way out) attitude. Either way, those who are into programming will learn far more on their own, and much quicker too (maybe with a reference manual, but still: they don’t need a teacher to tell them how to do this or that; you learn by KNOWING combined with DOING and EVALUATING the outcome, then STARTING ALL OVER). Full article here: http://www.bbc.co.uk/news/technology-21371609

To give a quick summary of everything, there is a well known quote that goes like this:

“90% of the code is written by 10% of the programmers.” –Robert C. Martin

Unfortunately, though, while that may be true (referring to programming productivity), there is a lot of badly written code out there, and that puts EVERYONE at risk (even if my system is not vulnerable to a certain flaw being abused by criminals, I can be caught up in the crossfire, even if that only amounts to bandwidth and log file consumption on my end; worse, however, is when a big company has a vulnerable system in use, which ultimately risks customers’ credit card information, home addresses and any other confidential information). This, folks, is why I filed it under security and not programming.

Fedora Core 20 Oddities

Addendum on 2013/12/30:

As I hoped (and somehow expected it to be, but I was in a really bad state of mind and very impatient, hence not giving it time, which I admit is very shameful and even hypocritical on my end), the issue with libselinux was in fact a bug. So that makes updating remote servers (in a VM) much less nerve-wracking (the delay on the relabel, for instance, would be concerning, as there would be no way to know whether there was a problem until later on):

- revert unexplained change to rhat.patch which broke SELinux disablement

I still find it odd that they would remove the MTA and syslog, but to be fair even on that point, at least they are not removed from the OS itself but merely from the core group of packages. Then there is the question of why I keep this post at all. Because I find it odd (even if some of the most brilliant things that seem normal now were originally deemed odd), and what is done is done, which means it would be more fake on my end to suddenly remove it (besides, I do give credit to the Fedora Project too, which is a good thing for anyone who might only see the negative at times, as I myself was doing at the time of writing the post). That’s why. Unrelated: to anyone who happens to see this around the time of the edit date, while I don’t really see New Year’s as anything special (most holidays, in fact), I wish everyone a happy new year.

Addendum on 2013/12/27:

Two things I want to point out. The first is a specific part of my original post. The second is actually giving much more credit to Fedora Core than I may seem to give in the post. In all honesty I really value Fedora Core, what they have done and how far they have come, and the projects I maintain under Fedora need a more up-to-date distribution because I use it to its full potential (e.g., the 2011 C and C++ standards are not supported by the older libraries on less frequently updated distributions). So despite my complaints in this post, I want to thank the Fedora Project for how far they have come, for continuing the work and for it actually being a very good distribution. Keep it up, Fedora. Nothing is expected to be perfect and you cannot please everyone, but the fact that the software in question can still be installed (it has not been removed completely) is really enough to make anything that might otherwise be very annoying merely a nuisance for new installs. Here are my two notes, then:

First, the part about log file size. Last night, after posting this, I realised something that at first seemed as if it might make my point less valid (and realistically that would be nice, as it would be one less valid complaint) but actually makes it a little worse. The problem? The journal could be viewed as the equivalent of just /var/log/messages, which means the size difference is worse than I stated: instead of the journal being 313MB bigger than all of /var/log, it is (if you consider all journal files) 313MB minus the 7.9MB of all my /var/log/messages files (again: the former compressed, the latter not) – about 305.1MB larger – for one log type (the syslog) alone.

Second, to be fair to Fedora Core: I could probably have worded ‘Quality Control’ better. I was a bit irked by the SELinux issue (the configuration file not acting as a configuration file) and, as I noted, I’ve been fairly agitated lately, too. In fact, to be even more fair, Fedora has actually come a very long way (I remember trying it at release 1 or 2 and having quite a lot of trouble with it due to hardware support, or lack thereof – but realistically it was probably not even that bad when you consider that it is not a single piece of software; you have the toolchain, the kernel, editors, the desktops and much more, and all of this takes time to get stable – which it is – and fully functional), and while I find some of the things they decided to change this time around quite laughable, I still value their work and I will still use the distribution. As they point out, there is no harm in leaving a syslog package or an MTA installed (I was just dumbfounded that a Linux distribution – no matter which one – would think it a good idea to remove the syslog and, at the same time, the MTA from the install). So even if this post seems rather aggressive and thankless towards Fedora Core, I really am quite thankful for their work. As I noted: it is a rant, and as a rant it is critical of certain things and usually not constructive criticism (which is why I added this note here). In fact, I will change the title of this post and the link, too. It is only fair and the right thing to do.

The original post follows (I’m not updating it to reflect the title change, as I already made the point clear above; but for what it’s worth, this is not really about quality control – the update went quite smoothly – I just find some of the changes rather unlike a Linux distribution).

Need an example of horrible quality control? Well, let me tell you about this operating system I use, one that I’m usually quite fond of, one that is quite old nowadays and that I feel has gone back to its early days as far as quality is concerned. Yes, Fedora Core 20. As a programmer myself, I am very well aware that mistakes happen and that programmers are just as guilty of them, and that creating things (like software) involves risks (as does using the created thing). I’m also usually very tolerant of mistakes in programming, for I know very well how it works. I could also be angry at myself (and believe me – I absolutely am!) and blame myself for going ahead with the upgrade despite having a really bad feeling about it (the first time I have had a very bad feeling while reading release notes, as well as while following the release plans during development). But at the same time, what can I do? At best I can wait, but I can only wait for so long (and “long” is not at all a long time – more like fairly short, when you consider the end of life of the release) and hope the next release is better.

But I cannot remember many times when I have been as stunned (in a bad way) as I am now, with any software. I don’t even know where to begin, so I’ll just come out and write this first: yes, this is a rant, and although I am irritable lately, I also admit that in general I go full on with rants. Either way, it is also something I feel needs to be written (even if just for me), as FC 20 is a horrible example of quality software. I was going to skip writing this until today, when I ran into two little issues that really bothered me (and I admit fully that, with the way things have been lately, bothering me is pretty easy, but…) enough that I had to look into what the hell was going on in more detail. Before I get to that, though, I’m going to take a stab at two of the changes in Fedora Core (both by the same person, mind you) that I thought were quite idiotic when I first read them (perhaps because they are, in many respects) and still do (plus the gains they claim are exactly the opposite, or at least not true, and I provide proof of that).

On today’s Internet most SMTP hosts do not accept mail from a server which is not configured as a mail exchange for a real domain, hence the default configuration of sendmail is seldom useful. Even if the server is not tied to a real mail domain, it can be configured to authenticate as a user on the target server, but again, this requires explicit configuration on both ends and is fairly awkward. Something that doesn’t work without manual configuration should not be in the default install.

So let me get this straight. SMTP hosts do not accept mail from a server which is not configured as a mail exchange for a real domain? And even if the server is not tied to a real mail domain, it can be configured to authenticate as a user on the target server, BUT AGAIN it requires explicit configuration on BOTH ends? Furthermore, since it doesn’t “work” without manual configuration, it should not be in the default install? Okay then, so I ask: why is static IP networking in the default install? I guess that would be because of the part where it is “useful”, yes? Or maybe it is because Unix (and therefore Fedora Core) is a network operating system, so it HAS to have support for static IP addresses? Well, with that logic, here is something one would think is VERY OBVIOUS but clearly IS NOT: just because not every system is part of a network (for example, part of a domain or even just an intranet) does not mean an MTA is useless. Also quite amusing is this: I use an MTA (guess which one?) in at least one cronjob, and I only had to configure the MAIL SERVER. I wonder why and how that might be possible? /sarcasm

Most MUAs we ship (especially those we install by default) do not deliver to a local MTA anyway but rather include an SMTP client. Usually, they will not pick up mail delivered to local users. This means that unless the user knows about local mail and takes steps to receive local mail addressed to root, such messages are likely to be ignored. Our current setup in many ways hence currently operates as reliable /dev/null for important messages intended for root. Even worse, there is no rotation for this mail spool, meaning that this mailbox if it is unchecked will slowly eat up disk space in /var until disk space is entirely unavailable.

Wait a minute. Were we not referring to MTAs? Now you’re on about MUAs? Most bizarre is this part:
“Most MUAs we ship (especially those we install by default) do not deliver to a local MTA anyway but rather include an SMTP client.”

What the hell is an MUA if not an email client? And how ironic that you mention the word ‘local’ (even if an MTA is typically both client and server, so there is some redundancy here). One would think you could put that together with the first block of text I quoted. Sadly, that seems unlikely. I won’t even bother going at the rest of that and will instead continue to the next part.

Many other distributions do not install an MTA by default anymore (Including Ubuntu since 2007), and so should we. Running systems without MTA is already widely tested.

The various tools (such as cron) which previously required a local MTA for operation have been updated already to deliver their job output to syslog rather than sendmail, which is a good default.

I will delay the part about syslog for a moment, as I find it especially amusing and another issue entirely. So: just because other distributions do not include an MTA, you should follow suit? Clearly followers, and not the state of the art Fedora was meant to be. A shame, that. And sure, running systems without an MTA is widely tested, but not all machines are MAIL SERVERS, and even more to the point, not all need to SEND MAIL. Did you know that running systems without an HTTPD is widely tested too? In fact, you could substitute any number of other services and make the same stupid proclamation. /sarcasm

As for cronjob and specifically mail versus syslog, let us go to the next idiotic move!

Let’s change the default install to no longer install a syslog service by default — let’s remove rsyslog from the “comps” default.

The journal has been around for a few releases and is well tested. F19 already enabled persistent journal logging on disk, thus all logs have been stored twice on disk, once in journal files and once in /var/log/messages. This feature hence recommends no longer installing rsyslog by default, leaving only the journal in place.

A purely classic example of irony, I must admit. Okay, sure, the journal acts as a log, but really, to call it the syslog is rather weak. Even worse are the supposed benefits to Fedora. Let me unravel that now.

Our default install will need less footprint on disk and at runtime (especially since logs will not be kept around twice anymore). This is significant on systems with limited resources, like the Fedora Cloud image.

Oh really? The journal uses fewer resources than plain logs? I would like that to be a (stupid) joke, but sadly it is for real. Here, let me refute that complete and utter nonsense with proof:

With journal we have this:
# du -sh /var/log
368M /var/log
With journal excluded:
# du -sh /var/log --exclude='journal'
55M /var/log

Less resources what? If you do the math (368 – 55), you will note that the journal uses 313MB MORE than the regular logs. Even more to consider: the journal is compressed, while my log files (including those that have been rotated as normal) are not.
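The subtraction is trivial, but for completeness it can be checked right in the shell, using the figures from the du output above:

```shell
# Figures (in MB) taken from the du output above.
with_journal=368
without_journal=55

# The difference is the space the journal alone consumes.
echo "Journal overhead: $(( with_journal - without_journal )) MB"   # prints: Journal overhead: 313 MB
```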

Two more things, one leading into the next.

Also, we’ll boot a bit faster, which is always nice.

How lovely. I’ve actually noticed booting taking longer since Fedora 20 was installed, compared to Fedora 19 (I don’t dare update the other – remote – server I have Fedora 19 installed on!). And I especially noticed it taking longer today, because SELinux (which I have disabled on the machine I’m writing from, for reasons that will be made clear soon) decided – after an update of libselinux yesterday – to ignore /etc/sysconfig/selinux, and so was enabled, AND since it had been disabled it needed to relabel the whole file system (except the file systems I have read-only). How brilliant an idea is THAT? Ignore the configuration file so that it wastes disk space, basically (there’s more disk usage being eaten up). What is the purpose of the configuration file, then? sestatus showed that SELinux was indeed enabled, while the configuration file said disabled – too bad the configuration file was ignored, which means it doesn’t matter what the configuration file contains, which is, to be blunt, completely stupid; the configuration file is there to – here’s a thought – configure something!

As for why I have SELinux disabled on this machine? Well, I'll give you an example of the problem it causes when it doesn't work right. A denial came up and it turned the process into a zombie, as shown here:
7544 0.0 0.0 0 0 ? Z 14:09 0:00 [kcmshell4]
Of course, I could be lying about the cause, so let's take a look at the relevant entry in /var/log/audit/audit.log:

type=AVC msg=audit(1388095776.191:191): avc: denied { write } for pid=7544 comm="kcmshell4" name="icon-cache.kcache" dev="dm-6" ino=263190 scontext=unconfined_u:unconfined_r:mozilla_plugin_t:s0-s0:c0.c1023 tcontext=system_u:object_r:tmp_t:s0 tclass=file

Everything was working fine until then. But after that point Firefox hung on me (I had to send it a SIGTERM) and then, on restarting it, either closed almost immediately or hung again (and it was Firefox that was trying to write to the file in question).

And lastly, I'll just quote the very descriptions of SELinux modes exactly from a website, as to why I find it more a problem than a gain (and hint: security is only security if it is not so much of a hassle that people want to override it – e.g., being baby-sat when installing software, or having to have 20 passwords and so writing them down in plain sight to compensate; there has to be a balance):

Setting this parameter will cause the machine to boot in permissive mode. If your machine will not boot in enforcing mode, this can allow you to boot it and figure out what is wrong. Sometimes you file system can get so messed up that this parameter is your only option.

This parameter will force the system to relabel. It does the same thing as “touch /.autorelabe; reboot”. Sometimes, if the machines labeling is really bad, you will need to boot in permissive mode in order for the autorelabel to succeed. An example of this is switching from strict to targeted policy. In strict policy shared libraries are labeled as shlib_t while ordinary files in /lib directories are labeled lib_t. strict policy only allows confined apps to execute shlib_t. In targeted policy shlib_t and lib_t are aliases. (Having these files labeled differently is of little security importance and leads to labeling problems in my opinion). So every file in /lib directories gets the label lib_t.
When you boot a machine that is labeled for targeted with strict policy the confined apps try to execute lib_t labeled shared libraries so and they are denied. /sbin/init tries this and blows up. So booting in permissive mode allows the system to relabel the shared libraries as shlib_t and then the next boot can be done in enforcing.

So you cannot even successfully relabel unless it is in permissive mode. How lovely is that when you remember how important labels are to SELinux.

That’s it. I just hope Fedora gets their act together and soon. Rant done.

In Memory of C.S. Lewis: 50 Years Later

Some time earlier this year or perhaps last year, I found out that C.S. Lewis died on the same day that JFK was assassinated. As would be expected, this meant hardly anything was said of Lewis and I find this sad to say the least. Since this is not at all a political site (and I assure you it never ever will be turned into such a cesspool!) or a news site, and since I have written before about fantasy – albeit briefly – I think it is about time C.S. Lewis is remembered. To be fair, the BBC did mention this fact the other day, but of course the real interest to most is that it is 50 years since JFK was assassinated and not 50 years since C.S. Lewis died. Well, for me it is 50 years since Lewis died, too.

I remember when I was in grade school the class had to read The Chronicles of Narnia: The Lion, The Witch and the Wardrobe and how much I enjoyed it. I was probably 5 and I read the entire series (my choice – the class only had to read the first) and I thoroughly enjoyed each and every one of them. It was my first exposure to fantasy and I've never looked back. Sure, my favourite author is Jules Verne, who wrote adventure and science fiction (and combinations of the two), but the truth is fantasy is very much a part of my life. Perhaps because of a specific multiuser dungeon (MUD) that I am a developer and designer for (which by itself is a wonderful thing: use my mind with programming and at the same time use my imagination), it is one of the most important things to me. Many would find MUDs destroyed their life because of addiction (and I admit I was at a time addicted to this MUD, but my pleasure from playing was always overpowered by the prospect of programming for it, which is another 'addiction' of mine, albeit a healthy one, as it brings me a lot of experience at the same time as joy) but it is the exact opposite for me. It was the first real project I was part of (a significant project, anyway) and it was a team project at that. But who cares about that? I'm going off topic. The point is fantasy is something that matters to me a great deal and C.S. Lewis is the author of the very first book I read in that genre.

There isn't much to be said at this time, I admit, and part of that is I delayed this until the end of the day (I forgot to write it earlier) and I want to finish up. But one thing I find most interesting is that he was friends with Tolkien, and while I'm not into religion, it is interesting to note that Tolkien was religious and is, I believe, the very reason that Lewis opened up to religion. Yet, even though I'm not into [that], I can find a sense of enjoyment from what he wrote. True, Narnia was in the fantasy genre and not a work of theology, but it really shows how variety and differences are not always a bad thing. Indeed, we would be extinct, I am sure of it, if we were all the same (not to mention it would be a boring life, at least to me). But the more we are open to others, the more we can learn and the more we can better ourselves. This very concept is how and why technology evolves, as does anything else that evolves. This very concept is part of evolving, period. Naturally we each go our own path and some will agree and some will disagree. That doesn't matter to me either, because that is exactly why we're still here. After all, if everyone agreed with everything I said, this world might not be boring, but that's because I'm something of a lunatic – not because everyone agreed with me (I would find it pretty awkward if everyone did agree with me, and I'm not always right; I'm willing to accept and admit that). Everyone has their own belief structure and their own goals, and I approve of that just as I approve of Lewis' having his own beliefs (or what beliefs he had).

Thanks, C.S. Lewis, for your wonderful series involving the wonderful fantasy world called ‘Narnia’. It provided me much enjoyment and still does when I think of it.

Login-specific ssh commands restrictions

Updated on 2014/02/17 to fix a typo and to add two caveats with this method that I thought of (caveat #5, if it affects you, can be a rather bad problem, but if it does not it won't matter at all; caveat #6 is something that should always be kept in mind when using smrsh).

One of the projects I work on involves a VirtualBox install of Fedora Core. The reason Fedora Core is used is twofold:

  1. It is on the bleeding edge. This is important because it has the latest and greatest standards; in this specific project that means I have access to the most recent C and C++ standards. The server the VirtualBox runs on runs a more stable distribution that backports security fixes but otherwise maintains stability: there are fewer updates, which means less chance of things going wrong.
  2. As for it being a binary distribution (which is beneficial to production – I used to love Linux From Scratch and Gentoo, but the truth is compiling all the time takes a lot of time and therefore leaves less time for the things I want to be working on), I am mostly biased toward Red Hat-based distributions. This, together with reason 1 above, makes Fedora Core the perfect distribution for the virtual machine.

However, from inside the VM I need to access a CVS repository on my server (the server hosting the virtual machine is not the same one). While I could use ssh with passwords, or pass-phrases on the SSH keys, I don't allow password logins, and (in the case of these two users) I don't require pass phrases on the keys. This of course leaves a problem with security: anyone who can access the virtual machine as those users can reach the repository. That is not likely – there are no passwords, and only sudo allows it, restricted to the users in question – but not likely does not mean impossible.

However, while everyone that has access to that virtual machine (the iptables policy is to reject, selectively allowing certain IPs access to SSH – which is how firewalls should be built, mind you: if it is not stated that it is allowed, then deny it flat out) is someone I trust, the truth of the matter is I would be naive and foolish not to consider this problem!

So there are a few ways of going about this. The first is one I am not particularly fond of: the ForceCommand option (in a Match block) in /etc/ssh/sshd_config. While it has its uses, why I don't like it is simple: ForceCommand implies that I know the single command that is necessary, and that means the arguments to the command too. I might be misinterpreting that and I honestly cannot be completely sure (I tested this some time ago, so memory is indeed a problem) but I seem to remember this is the case. That means that I cannot use cvs, because I cannot do 'cvs update', 'cvs commit', 'cvs diff', etc. So what can be done? Well, there is, I'm sure, more than one way (as is often the case in UNIX and its derivatives), but the way I went about it is this (I am not showing how to create the account or do anything but limit the set of commands):

  1. Install sendmail (mostly you are after the binary smrsh – sendmail restricted shell). Note that if you use (for example) postfix (or any other MTA) then depending on your distribution (and how you install sendmail) it might mean you’ll need to adjust which MTA is used (see man alternatives) as the default!
  2. Assuming you now have /usr/bin/smrsh, you are ready to restrict the commands for the user(s) in question. First, set their shell to /usr/bin/smrsh, either by updating the /etc/passwd file or – the better way – by running (as root) /usr/bin/chsh [user], where obviously [user] is the user in question. It goes without saying, but DO NOT EVEN THINK ABOUT SETTING root TO USE THIS SHELL!
  3. Now you need to decide which commands they are allowed to use. For the purpose of the example it will be cvs (but do see the caveats!). So what you do is something like this (again, as root as you’re writing to system file):
    ln -s /usr/bin/cvs /etc/smrsh/cvs

which creates a symbolic link in /etc/smrsh called cvs, pointing to /usr/bin/cvs, which is the command the user is allowed to use. If you wanted them to be able to use another command, you would create a link for it just as with cvs.

  4. Assuming you already have the user's ssh key installed, you'll want to edit the ~user/.ssh/authorized_keys file (if you have not installed their key, do that first). Now, what do you need to add to the file, and where? For whichever key you want to restrict (probably all of them, unless you want the hole of the user copying the key from their 'free' machine to the 'restricted' machine), add the following to the beginning of the line:

no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty

That means that after you edit it, the line will be in the form of (assuming that you use ssh-rsa):

no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty ssh-rsa [key] [user@host]

Obviously [key] is the authorized key and [user@host] is the user@host they connect from. The [user@host] part is in fact just a comment (ssh-keygen puts user@host there by default), serving as a reminder of whose key it is and which host it is for. Everything else, however, needs to be there.

A few caveats need to be considered.

  1. Most importantly: If you don’t trust the user at all then they SHOULD NOT HAVE ACCESS AT ALL! This setup is used for when you have someone that needs access to the server but as a safety catch you don’t allow most commands as they don’t need it and/or there is a risk that someone else could abuse their access (for example if they are working in an office and they decide to step away for just a moment but keep their session unlocked).
  2. Those with physical access are irrelevant to this setup. Physical access = root access, period. Not debatable. If they want it they can have it.
  3. If they have more than one key installed then you either need to make sure all of their keys are set up the same way (preventing pty allocation, etc.) or that it is impossible for them to change the authorized_keys file (and to be brutally honest, being sure of the latter is a guarantee of one thing: a false sense of security).
  4. This involves ssh keys. If you allow logins without ssh keys (e.g., with a password alone) then this is not really for you. ssh keys are ideal anyway, and you can have ssh keys with pass phrases and even a user password too (think of granting them sudo access: now there's an ssh key, a required pass phrase for that key, and for sudo their password [though, just like su, if you're too lax in allowing it, it can be an issue]). That means that not only does a user need to have an authorized key, they also need to know the pass phrase to that key. This is like having a 'magical key' that only fits in a lock when you also have a pass phrase you can input into a device that opens the keyhole. This is the safest approach, especially if there is any chance that someone can steal the ssh key and you don't happen to have ssh blocked at the firewall level by default (which, depending on who needs access and from where, is a very good thing to consider doing).
  5. Something I neglected at first (not intentionally, just did not think to write it until a few days ago, today being 2014/02/17) is that if you have more than one login using this shell this may not be the most appropriate method to tackle the problem in question. The reason should be obvious but if it isn’t it is simply that the users might not have the same tasks (or, to put it another way, the commands they should be restricted to might be very different). This is definitely one downside to this method and is something to keep in mind.
  6. Even though caveat #1 somewhat tackles this (it all comes down to trust, which is given out too easily far too often), I feel it would be irresponsible not to discuss it. Depending on what you allow (and you should always be careful when adding a command to /etc/smrsh, or in fact whenever you are deciding on something that changes the way things are dealt with by the system, especially if it involves users) you can run into trouble. Also consider that smrsh does allow && and ||. While I only allow certain IPs and only connections with an ssh key (that is in the authorized_keys file), I only now thought to test this (which shows all the more how easy it is to forget, or even be unaware of, something that is actually a problem). The result? Because I disallow PTY allocation (in this setup), which means it is not possible to log in (unless of course through another user, which is another can of worms entirely), it should be OK. But do understand that breaking out of restricted shells – just like chroots – is not something that should be dismissed with "there is only one way and I know I prevent it" (a dangerous and incorrect assumption). This entire point (caveat #6) would be more of a problem if I did not disable PTY allocation, but it is still something to strongly consider (again, if anyone can log on as this user through another user, then they aren't exactly as restricted). On that note, you should never add shell scripts, or any program that can be told to run other programs (like procmail with its procmailrc file), to /etc/smrsh; that also includes perl (or similar) and shells (that should be quite obvious, but I'm stating it anyway). sendmail's restricted shell also allows certain built-ins like 'exec', so do keep that in mind too.
Lastly, on the note of programs allowed, be mindful of the fact that this method DOES allow specifying command options and arguments, so even more care should be given when adding programs to /etc/smrsh (imagine you allow a program that can run another program – passed by name – and imagine now that the user uses it to run a program you don't want to allow; will it work? I've not tested it, but it would be valuable to try before deploying, to close any possible loophole). If you do know the exact command in advance then you should take the safer (more restrictive) approach built into sshd (as always, you should take the safest approach possible). I could go on about other things to consider but I won't even try, because I cannot possibly list everything (or even think of everything – I am human and not even remotely close to perfect). The bottom line is this: there is always more to consider, and it isn't going to be a simple yes/no decision, so do consider all your options and their implications _before_ making any decision (in my case, with only one user with smrsh and only one command – which only runs scripts if configured on the server side – I would rather prevent normal logon and limit the commands than not).

With all that done, you should now have a login that can run cvs commands (but see caveat #6) and nothing else (sort of; see below). If they try something simple – even the following – they will get an error, as shown (and again, see caveat #6!):

$ ssh luser@server "ls -al"
smrsh: "ls" not available for sendmail programs (stat failed)

$ ssh luser@server
PTY allocation request failed on channel 0
Usage: -smrsh -c command
Connection to server closed.

On caveat #6, observe the following:

$ ssh luser@server "exec echo test && exec source ~/.bashrc"
smrsh: "source" not available for sendmail programs (stat failed)

Yes, that means that 'echo test' was in fact executed successfully (the exec before 'echo test' was not strictly necessary for this, in case you wondered) – see below – but source was not (which is good; if you don't know what source does, do look it up, as it is relevant). Note the following output:

$ ssh luser@server "exec ls || echo test"
smrsh: "ls" not available for sendmail programs (stat failed)
$ ssh luser@server "echo test"

And when looking at the above, specifically note the way the commands are invoked (or, in the first case, the command that I tried to invoke).

Rest In Peace Lou Reed

This will be fairly quick (or so I hope) because things have not been that great ("what is sleep?" is the story) but I must write at least something before I do in fact try to sleep.

I just saw that Lou Reed has passed away. Now, those who know me well enough will know why I feel this is important: my favourite band collaborated with Lou Reed in 2011. I admit fully that I did not buy it (among the rare things of the band’s work I did not buy although this news may change that) because I did not like Lou’s voice. It was not that it was different that I did not like about the recording. No, that is something I have a huge amount of respect for: Metallica happens to do whatever it is they want and that includes shocking their fans. With shock comes (at times) disappointment. But at the end of the day the reality is they do what they want for themselves (and also for their fans, honestly – though some would disagree it is irrefutable) and that they are willing to risk upsetting someone for themselves shows not weakness but strength. Yes, strength, courage and let us all be realistic: we might not like change but without change the human species would be EXTINCT. So, good on Metallica for change. I don’t even have that much courage – I won’t deny that. Would I like to change that? Yes and no, which I think is how a lot of people view courage (or lack thereof and wanting to change/improve it). Regardless, them being comfortable doing this type of thing brings out their true colours and it is a beautiful rainbow of colours at that. They made mistakes. They are only human. Lars Ulrich pissed off a lot of people with Napster. But you know something? He also realised that perhaps his approach was not the best, and when a store in France (by mistake) released Death Magnetic a day early, not only did the band welcome it, Lars himself welcomed it and noted that things have changed.  They have.  
Anyone who does not believe that is ignoring reality and also (in the case of them accepting Lars making a mistake) being unable to accept that no one is perfect but what matters is not perfection but instead always improving yourself and always being the best you can be. He does that and he does it quite well, regardless of how it comes across to some.  Don’t like him? That’s fine. No one likes everyone or everything. For instance: I did not think Lou Reed’s collaboration with Metallica was great at all. I didn’t dislike Lou Reed but I did dislike the way the recording sounded to my ears (his voice sort of drowned out the rest, for me). Still, I know a lot of fellow Clubbers respected his work and I know many more not part of the Metallica Camp respected him, too.

As for Metallica doing things for their fans and it being irrefutable, I have the following words to write: 30 Year Anniversary Celebration. Those who were fortunate enough to be there (and I was only there for one of the four nights) would fully agree, for sure. They truly do care about their fans and their fans care about them (I met people from Mexico, Denmark and Australia, to name three different locations in the world, that fans came from, while I was in San Francisco).

Lou Reed: The legend you left behind will never be forgotten, and while I maybe did not like your voice (at least on Lulu) I still respect you, your personality, and you, period (and I always will). Rest in Peace, Lou, and thanks for allowing me to learn of you and what you are about (by collaborating with my favourite band).

“Smart” Technology Is Still Dumb

I can imagine it clearly: a group of kids forced to read why their “smart” tablet, phone or latest gadget is not actually all that smart. There would likely be outrage and flat out denial. Of course, neither of those reactions would change the fact that I am, firstly, correct, and secondly, I do not really care what kids (of today) think any more than I did when I was a kid.

The truth of the matter is that too much reliance on technology (reliance in the sense of letting it take over for human interaction) is a foolish thing to do. Yet I see and hear of it far more often than I would like. An interesting quote (that has apparently been widely mis-attributed to Albert Einstein) comes to mind:

Computers are incredibly fast, accurate and stupid; humans are incredibly slow, inaccurate and brilliant; together they are powerful beyond imagination.

I won't even get into the definitions of accurate, stupid or brilliant, as the definitions really do not matter so much per se. Some might argue that the quote is not actually helpful to my point, but actually, it is a perfect quote. The key is the last part. Think about "together they are powerful beyond imagination" for a minute. Indeed, the problem is not that technology is advanced, but rather that it is advanced enough that people think it knows best. That is a dangerous, fallacious trap to fall into. It really is quite simple: who created the technology? Yes, that is correct: humans! So it makes perfect sense that they are to be used together and not in place of each other (besides the issue of trust being given far too easily, would you really rather have computers employed instead of humans? I guess their pay raises would be hardware upgrades!).

When something is dumbed down so much that anything can use it, there is a problem. The problem is that one need not pay attention. Some might wonder what I am getting at. Well, here is what I am getting at: the extreme of allowing an automobile to do all the work – accelerating, braking, turning, changing lanes, detecting traffic signals and stop signs. While that might be better if the person is under the influence (which they should not be anyway!), I really have to wonder where else it might be useful. What is even more scary is going even further than we already have (vehicles that fly their occupants somewhere without any human involvement, anyone? I really hope that one is never allowed, although scary enough are the cars that supposedly can lift off too). This is not about making things do more than what they were made to do. No, this is about giving a device so much trust that it endangers others. While some of this may be hypothetical – considering it is not common yet, I would say it is hypothetical in a sense – the very idea is incredibly scary. Let me elaborate:

As it is right now we have traffic signals (lights), traffic signs and traffic rules to help maintain the traffic flow and sanity (in addition to sometimes having traffic controlled by a police officer). But if you think about it further, there is one other thing that changes those rules, isn't there? When you hear (or see; after all, a deaf person is allowed to drive but a blind person is not) a siren or emergency lights on a vehicle with said siren, you are obligated – for the safety of yourself, the safety of the driver and those having the emergency – to find where it is and get out of its way. And for good reason. If you were in the ambulance you would want the same thing done for you. Further, if a fire department truck is headed towards your house that is on fire, would you want some selfish person blocking or otherwise delaying the emergency crew? Of course you wouldn't!

So here's where it gets interesting: let's say there is a major change in society and vehicles do all the driving (do I need to remind anyone that even when horses pulled wagons, there were accidents and indeed deaths?). First, if there's the idea that no traffic signals are needed (and I have read exactly this from those who want this type of change), I can predict two very dangerous assumptions:

  1. For those who own old vehicles (hot rods, anyone?), particularly when it is a hobby for the owner (they take it to car shows, for example), there is not a chance that there would not be outrage if they were told they can no longer drive it or have it on the road.
  2. Worse than that: how is the state/county/city going to be absolutely 100% positive there are no cars that are in fact controlled by humans (besides the fact humans created the car, that is)? If anyone thinks insurance, license and registration are sufficient then I have some shocking news for you: some people drive without these things. Is it illegal? Yes. But that is irrelevant when they still are driving in this hypothetical condition. In fact, it is more scary: if it happens now, how do you expect it not to happen when these new conditions are common?

Even if the county could be 100% sure there were no drivers, I have to ask: what about emergency vehicles? As someone who has been in emergencies, I can't even begin to fathom how anyone would trust vehicles programmed by humans (therefore imperfect; as a programmer of many years who is also realistic, I can say without a doubt that there is no such thing as real software that is 100% bug free 100% of the time, and that is a fact only a complete idiot would deny!) to detect the sirens or see the emergency lights and, at the same time, react in a safe way along with all the other vehicles doing the same thing. And only a very ignorant person – or a damned (perhaps literally) fool – would think it is a good idea to let an emergency vehicle drive itself as it deems appropriate.

Bottom line is this: technology – whether it is a phone or a large machine – is only as smart as the least smart of the operator and the creator of the technology in question (regardless of whether the item in question is “smart”). We’re all human, however, and even those that are brilliant in general can still be dumb about some things and there is this other word “mistake” that comes to mind as incredibly significant.

POSIX Alarms: Deadlock Protection and Debugging

A concern for programmers writing software that is to run in the background for a long time (hours, days, weeks, months, longer) is infinite loops and the like. It's very unlikely that this will happen, but that does not mean one should not program defensively.

One way to protect against this (at least in so far as breaking out of it, whether the user is there or not) is by setting up a timer to send the task a signal – e.g., SIGALRM. So, for instance, if you want it checked every 3 minutes you could set the timer to 180 seconds and set it so the timer repeats itself. Then, every normal pass (might be a second, might be more often; it doesn't matter as long as the interval is small enough that the timer in question will be useful) you increase an integer. This integer is a counter that you can view, more or less, as a tick that is updated as long as the program is not caught in an infinite loop (which would prevent it from being incremented). Along with this you handle the signal, and the signal handler checks whether the counter is 0. If it is 0 then under any normal circumstances the process would be in a state that it should not be in, so the handler simply calls the abort(3) library call. Otherwise it resets the counter to 0, and in 3 minutes' time the signal will be sent again.

There is, however, one potential pitfall for developers using such a scheme: when debugging a process, the most effective and safest way to find a problem (or do whatever it is you're using the debugger for) is by stepping one instruction at a time. But because the developer is going to be doing more than just setting a breakpoint and stepping instruction by instruction (debugging truly is a science that becomes easier as you gain more experience doing it), it is very possible that the developer runs out of time. If the counter is not zero and the debugger passes the signal along, the counter becomes zero. What happens when the signal is passed again, though? It's very possible the process will call abort(), which means the process terminates, and that means your debugging session does too (which – depending on how close you are to isolating the problem – can be really annoying). There is naturally a solution to this. One solution has its own problems, though.

Before POSIX.1-2008 – if one is following their recommendations, which admittedly aren't always correct, though that is not the case here – one could simply use the setitimer(2) system call with the ITIMER_VIRTUAL timer. From the man page of setitimer:

ITIMER_VIRTUAL decrements only when the process is executing, and delivers SIGVTALRM upon expiration.

So the solution then would be to handle SIGVTALRM and not SIGALRM. POSIX.1-2008, however, obsoletes the setitimer and getitimer calls in favour of the POSIX timers. Now, one could argue that obsolete does not mean there is a real problem with using them. That is valid, but what is also valid is that POSIX is a standard, and a standard for portability – POSIX meaning Portable Operating System Interface – and for portability it is best to use the newer API. Sure, POSIX has made big blunders before (Linus Torvalds is quoted on one of their blunders in the man page for accept, for the curious), but on the other hand the socket (networking) API that allows IPv4 and IPv6 to work together in the same application is documented in RFC 2553 and is part of POSIX.1-2001. However, there is the 'problem' that there is no dedicated virtual timer among the POSIX timers (though on systems that support it, timer_create can be given the CLOCK_PROCESS_CPUTIME_ID clock for a similar CPU-time-based effect). This means that while debugging, the developer may run into the same problem (as above) if they do indeed use POSIX timers instead of the traditional BSD interval timers.

This morning I was pondering how best to handle (pun very much intended) this particular problem. I had a few thoughts, and one of them led me to the real solution, which requires absolutely no change in source code, only a settings change in the debugger. This is GDB specific (as GDB is the common debugger in the Linux world) but I would hope any decent debugger has an equivalent. It is actually incredibly, stupidly simple, and something I have played around with before (in a different way), but until today I just had a variable that I could set when in the debugger (truly this is one of those things that are right in front of you but too small or simple to really see). If you only use SIGALRM for this purpose then there should be no harm in this solution, as it is only relevant while the debugger is active (whether it affects other programs that do need to see SIGALRM is not something I can help with, other than pointing out that you do not have to keep it in a file; you could make it a GDB command that you run when necessary, for instance). At the GDB prompt (where you type any number of debugging, or indeed debugger, commands) you can ask GDB to show you how signals are dealt with. The command:

info signals

lists each signal, how GDB acts on it, and a description of the signal. For instance, after making the change below, you would see (among the other signals) the following:

Signal  Stop Print Pass to program Description
SIGALRM No   No    No              Alarm clock

Indeed, all you need to do is tell GDB to ignore the signal. How do you do that? The following is all you need (you can put it into your .gdbinit file, type it when necessary, or even make it a GDB command; if you’re curious about that, try ‘help define’ and ‘help document’ while in GDB):

handle SIGALRM nopass

That is all you should need to do (hopefully I have not missed something else obvious that I had done in the past, but this should be fine).
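If you would rather have it as a command than a .gdbinit entry, a sketch using GDB’s define and document commands looks like this (the command name noalrm is made up for illustration):

```
define noalrm
  handle SIGALRM nopass
end
document noalrm
Tell GDB not to pass SIGALRM to the program, so the interval timer
cannot abort the process in the middle of a debugging session.
end
```

With that in place you simply type noalrm at the GDB prompt when you need it, and ‘help noalrm’ shows the documentation text.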

Intellectual Property EQUALS Innovation Prevention

There are certain things that companies just love to do, and unfortunately they stifle innovation at best. The patent-protection lawsuits that I have seen over the years are beyond absurd, yet they still go on. There is another legal issue that bothers me to no end, but at least it is not companies attacking each other over complete and utter nonsense. Yes, this is something of a rant, and at the same time part of it is something I really believe (and indeed KNOW) more people need to look at differently.

I admit fully that I dislike Apple’s attitude, but the main point of this post is not about Apple (even though I have some words to write about them). I would be rather blind, though, if I were to say the real problem is that Apple is a for-profit corporation with proprietary products. Even though I will show a rather pathetic image of Apple, I fear Apple is not as bad as one other company (which is also not the company in question). Oh no, not at all. Caldera (now the SCO Group) has made legal claims so ridiculous that one would think it impossible to get worse. I do not have an original source, but I do remember this controversy from its beginning, and Wikipedia cites sources. The following comes from Wikipedia and is sadly not a joke:

SCO has stated that they did not know their own code was in Linux, so releasing it under the GPL does not count. However, as late as July and August 2006, long after that claim was made, they were still distributing ELF files (the subject of one of SCO’s claims regarding SVRx) under the GPL.

That claim (cited from Wikipedia’s article on the SCO-Linux controversies) is not only stupid, it is hypocritical at best. Besides that, the law does not work the way they claim it does. The suggestion that they did not know their own source code was in the Linux kernel (which is ridiculous by itself: in one of the cases they tried to sue Linux users because Linux contained their source code, yet they did not know it contained their source code? WHAT?!) is akin to someone breaking into SCO’s headquarters in the middle of the night and, when arrested, claiming: “I could not see very well in the dark. I thought it was the building I had accidentally locked myself out of just moments before, so it does not count that I targeted the wrong building!” It is truly a shame that the courts cannot seem to figure that out, or even suggest something like my example to them.

Unfortunately the cases about open source are often in the wrong context (like the above), and worse still, there are other cases unrelated to open source. I will go back to Apple as an example (Apple is not at all the only corporation doing this type of thing, but it is one of the most interesting cases for certain reasons that I will get to shortly). Like all the other companies, Apple knows that it is abusing a certain truth that should be obvious, but many just cannot seem to grasp the concept, so I will spell it out: technology evolves! This is why a mobile/cell phone is a mobile version of a phone. If it were not a telephone (which ‘phone’ is short for) it would not be called a mobile phone, would it? Yet there are many recent cases where Apple abuses this fact for profit (and only profit) with patents that protect their so-called innovative, and therefore intellectual, property.

But I recall three specific things in Steve Jobs’ lifetime that make this very interesting. First is the story behind one of my all-time favourite games: Breakout. The short version is that Jobs got his friend Steve Wozniak to do all the technical work, with the agreement that he (Jobs) would split the money offered by Atari with Wozniak 50/50. The problem is that Jobs never mentioned the bonus payout. Jobs only had the job in the first place because he lied about his supposed employer Hewlett-Packard (which he did not work for) and about his technical experience (which he did not have). The agreement was that they would each get 350 USD. However, Jobs insisted Wozniak finish the job in four days, because that was the deadline Atari gave Jobs in order to get a bonus (which he did not tell Wozniak about). The bonus happened to be 5000 USD (yes, five thousand dollars) and Jobs took it all for himself, leaving Wozniak with 350 USD. And Wozniak was his friend? Shameful. The full story can be found by Googling “The story behind Breakout”; the article is on classicgaming.gamespy.com with article id 395. I am not including a direct link because the page is unfortunately littered with spam comments and I am not about to link to such a thing. It is a shame they have not cleaned it up, but the article itself is an interesting read.

The two other things are two quotes from Jobs at different times of his life about ideas and theft. I remember reading about each one at the times he said them but for reference they are the following:

We have always been shameless about stealing great ideas. – Steve Jobs, 1996


I’m going to destroy Android because it’s a stolen product. I’m willing to go thermo-nuclear war on this. – Steve Jobs, 2011

Of course that is only one example among others. As for Apple’s innovations? I don’t know who originally created this collection, but HUGE props to them. If you look at the image (which I am only linking to, because this is already too long and I have not even gotten to the main company I originally wanted to write about!) you will find that Jobs’ quotes are quite consistent with his company’s actions. The image can be found here.

Now I will get to the real intention of this post (or what inspired me to finally write it). Yesterday I was reading about patent trolls, in particular a certain company known as Uniloc. I thank the judge in Texas for being sensible, and I think all users of Linux (and anyone with a bit of common sense, not blinded by bias, who does not use Linux) would thank him too. The claim Uniloc made against Rackspace (which owns and hosts servers running Red Hat Linux) is absolutely the worst I have ever heard, and that is saying a lot given all the above (SCO, anyone?). Their so-called patent (which actually was, and hopefully no longer is, a patent) included the following claim:

A method for processing floating-point numbers, each floating-point number having at least a sign portion, an exponent portion and a mantissa portion, comprising the steps of:

  • converting a floating-point number memory register representation to a floating-point register representation;
  • rounding the converted floating-point number;
  • performing an arithmetic computation upon said rounded number resulting in a new floating-point value; and
  • converting the resulting new floating-point register value to a floating-point memory register representation.

The lawyers for Uniloc suggested the following:

“This step requires an arithmetic computation to be performed, but is not limited to any particular one. It could be multiplication, division, a logarithmic operation, etc. Likewise, the conversion steps do not require the application of any particular mathematical formula, let alone recite a mathematical formula,” Uniloc’s brief explains.

Besides the fact that what they claim is a major improvement over the IEEE 754 specification is the SAME thing as the standard (see the end of this post for an example), a patent on algorithms involving mathematical formulas would be devastating to mankind. As the judge stated, it would be a very bad thing indeed:

“Even when tied to computing, since floating-point numbers are a computerized numeric format, the conversion of floating-point numbers has applications across fields as diverse as science, math, communications, security, graphics, and games. Thus, a patent on Claim 1 would cover vast end uses, impeding the onward march of science.”

Which is completely true. If this patent had been upheld, then instead of innovation you would have companies spending their time and money defending themselves against such utter bollocks, potentially putting them out of business!

Want to know something else? TCP/IP (which you are using if you connected to this website) calculates a checksum which is quite important to the network, using a fairly simple algorithm. Now imagine if it were somehow possible to patent that. Good luck using the Internet: the Internet Protocol is required (and although IPv6 itself does not have a checksum, that does not mean no checksum is used with IPv6, so the point is still relevant; besides, it is one of many examples). As for Uniloc, I have two additional things to point out:

First, their suggested improvement is that you reverse the order of the calculation. Well, guess what? If 4 = 2 + x then you can also say that x = 4 - 2, and you can also say 4 = x + 2, but no matter the case x is 2! There IS NO improvement at all; mathematics is mathematics and cannot be patented (and be very thankful for that!).

Second, I find it quite amusing that if you query Uniloc’s web site, the server will return the following (this is not shown to the user by default in, e.g., Firefox, IE or Chrome, but the fact of the matter is it is still there):

Server    Apache

(Indeed this can be changed; mine, for example, shows Hell’s Daemon, a bit of word play on how daemon is pronounced and what a daemon is in computer terms. However, I find it highly unlikely they have changed theirs to show Apache when it isn’t.)

What does this mean? Oh, I don’t know, maybe that the operating system it runs on very likely uses the same standard they are supposedly protecting? Linux is not the only operating system that Apache runs on, and the same operations are done by other operating systems and libraries (I wonder why that might be, seeing as they claim it’s an improvement over a STANDARD?). I’m not about to fingerprint their server, but I would be very surprised if the company is anything but money-hungry, arrogant and hypocritical fools.

Finding C Style Typecasts During C++ Compiling

(As of 2013/05/11 there is a quick addendum at the end of the post)

This is a very quick post because it is a very simple update to two posts from quite some time ago. I had discussed how to find C style casts in C++ source code, and at the time of the first post I was thinking only of casts to pointers. Then I made another post to note that fact and a possible way to find them all.

Well, there is a much better way. What is it? With the g++ compiler you let it show you, that’s the way. How do you do it? Use this warning option to g++:

-Wold-style-cast

and recompile all object files (if you use make you may very well have a target called ‘clean’, in which case you can just run ‘make clean’ and then ‘make’).

g++ will then show you where every old style cast is used in the project’s source tree (that it encounters). Now it is simply a matter of determining which new-style cast to use in each case, fixing it, and recompiling. It’s that simple.

Addendum: Okay, to be truthful it is not entirely ‘that simple’. Yes, this is how to find the old style casts in YOUR source code, but it should be noted that some system interfaces are implemented as #define macros that themselves use old style casts. There is nothing you can (or should) do about those. Examples that come to mind are:

FD_* macros used for the select(2) system call.

WEXITSTATUS (and most likely the other related) macros for the wait(2) system call.

That all said the option is useful to use at times if you want to be sure your own source code does not use old style type casts.


Rest in Peace Jeff Hanneman

This is obviously not a technical piece, but it is something important to me, so I am writing it before I forget. I regret that I missed writing about several other significant deaths, including Ronnie James Dio of Black Sabbath (and other bands) and, more recently, Jon Lord of Deep Purple. Today is yet another sad day for metal music, an important part of my life.

I am even more upset now that health prevented me from seeing Slayer when I actually had tickets (in 2011), because Jeff is now gone. He was an amazing guitarist and his music will be sorely missed. The fact that he died of liver failure, when I have a very close friend with liver disease and have lost family to liver disease, really hits me hard. I typically do not think about the past, and although that may be sad at times (it is hard to recall good memories without thinking of the past) I would say it generally helps me cope with losses.

There really is not much else to say as this is truly a horrible loss to the metal heads of the world.

R.I.P. Jeff and thanks for your time and dedication with Slayer.

Solution: warning: do not list domain example.com in BOTH mydestination and virtual_mailbox_domains

It has been a LONG time since I wrote anything here, and I have updated very little elsewhere (I did update some of the documents at http://docs.xexyl.net and I changed the main menu here a little bit, but that is about it). The reason is a lot of health problems; such is life. While there is a lot I have thought about writing, the truth of the matter is that it is all quite low on my priority list. Nevertheless, since the topic in question is easy enough to cover, and since I have seen many people complain about this very warning from the Postfix mail server without finding a solution, I decided to spend a few minutes writing up how to solve it. The complaints typically follow two themes:

But I do not have my domain in both mydestination and virtual_mailbox_domains!


There is no explanation of how to get rid of this warning (aside from the ‘work around’ option of disabling warnings) in the documentation.

Well, each has its problems, but I will sort this out for those who see the warning I refer to. For the first complaint, you must understand that just because you do not explicitly set mydestination does not mean it is not set; it has a default value. Further, there is the chance that more than one instance of Postfix is running (though to be fair, this is less likely to be the problem). The second complaint is not entirely true, although perhaps it should be made very clear in the documentation. That is not up to me, though.

So how do you solve the problem? First, we’ll assume that your domain names are two of the for-documentation-purposes domains:

  • example.com
  • example.org

We’ll also assume that the mail server is specifically example.com, and that you want either both domains to be virtual domains or only example.org to be the virtual domain. Choose your poison when reading this, as it really does not matter in the slightest with this configuration.

So given the above parameters, this is what you need to do:

  1. The server’s name should be what mydestination is set to, so in the configuration file (main.cf, generally found at /etc/postfix/main.cf) you need the line:
    • mydestination = example.com
  2. Next, assuming your virtual domains file (a plain text file, one domain name per line) is named virtual_domains and also lives in /etc/postfix/, you should (adjusting for your actual file name) have the following in your main.cf file:
    • virtual_mailbox_domains = /etc/postfix/virtual_domains

    Note that if your file is actually a database file, then you need to adjust the line above to let Postfix know (e.g., by prepending ‘hash:’, without the quotes, to the file name), and if you have updated the file, do not forget to rebuild the database with postmap, for example. It should go without saying, but example.com (the final destination) should NOT be in the virtual domains file.

  3. The last step is essentially masquerading mydestination as a virtual domain (in the sense of the end result). Here Postfix is informed that local transport should use the virtual transport service and that the local recipients are included in the virtual maps. That is possibly not the best wording for what is going on, but the two lines should show all that is needed to understand it:
    • local_transport = virtual
    • local_recipient_maps = $alias_maps $virtual_mailbox_maps

    Assuming that Postfix virtual mailboxes and virtual domains already worked, then this, combined with reloading or restarting Postfix, should be all you need to do. There is a way to ‘force’ Postfix to issue the warning (in the case that it is not configured properly), but I do not have the motivation to write about it.
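Putting the pieces together, the whole change amounts to a main.cf fragment like this sketch (example.com and the file name virtual_domains are the assumptions used throughout this walkthrough; adjust both to your setup):

```
# /etc/postfix/main.cf (fragment)
mydestination           = example.com
virtual_mailbox_domains = /etc/postfix/virtual_domains
local_transport         = virtual
local_recipient_maps    = $alias_maps $virtual_mailbox_maps
```

After editing, reload or restart Postfix so the new settings take effect.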

Hopefully this has been of help to some people, and to those for whom questions remain, I am sorry in advance. The real key is that your server name is the final destination and is masqueraded as a virtual domain (though it is not one in the truest sense of the word). This does mean you might need to set up local users’ mail if they are to receive mail. I’m afraid you’re on your own there, but if you only wanted virtual domains this should not be a real problem anyway. Lastly, I am sorry if I have left anything out. As I said, I have been quite unwell for some time and I may very well have neglected to mention something important. No idea when I’ll write next. Until then, so long and thanks for all the fish (yes, a little tribute to Douglas Adams, as I missed his 61st birthday in March).