chkconfig: alternatives system

This will be a somewhat quick write-up. Today I wanted to link a library into a program that is in the main Fedora Core repository (the official build excludes the library due to policy). In the past I had done this by making my own RPM package with the release number one above the main release, or, if you will excuse the pun, alternatively not installing the Fedora Core version at all but only mine. Then I had the thought: why not use the alternatives system? After all, if I wanted to change the default I could do that. This RPM isn’t going to be in any of my repositories (I added one for CentOS 7 in the past few months) but realistically it could be. There was one thing that bothered me about the alternatives system, however:

I could never quite remember the proper installation of an alternatives group because I never actually looked at it with a clear head, and although the description is clear once I looked into it more intently, I always confused the link versus the path. Regardless, today I decided to sort it out once and for all. This is how each option works:

alternatives --install <link> <name> <path> <priority> [--initscript <service>] [--slave <link> <name> <path>]*

The link parameter is the generic symlink that users actually invoke (for example /usr/bin/uptimed); it points to a symlink under /etc/alternatives. The name parameter is the name of the alternatives group itself, which is also the name of that symlink under /etc/alternatives. The path is the real file, i.e. the final target that the /etc/alternatives symlink points to. --initscript is Red Hat Linux specific and although I primarily work with Red Hat I will not cover it. Priority is a number; the highest number is selected in auto mode (see below). --slave is for groupings; for instance, if the program I was building had a man page but so does the main one (the one from the Fedora Core repository), what happens when I run man on the program name? Slave links allow the related files to be switched along with the master. For the example I will use a program I wrote for which there is another implementation out there (also in the main Fedora Core repository): uptimed. Let’s say mine is called ‘suptimed’ and the other is ‘uptimed’, so the files ‘/usr/bin/suptimed’ and ‘/usr/bin/uptimed’ exist. Further, the man pages for suptimed and uptimed are ‘/usr/share/man/man1/suptimed.1.gz’ and ‘/usr/share/man/man1/uptimed.1.gz’ (without the quotes). This is just enough files to explain the syntax.

alternatives --install /usr/bin/uptimed uptimed /usr/bin/suptimed 1000 --slave /usr/share/man/man1/uptimed.1.gz uptimed.1.gz /usr/share/man/man1/suptimed.1.gz

While this is a hypothetical example (as in there might be more files to include in the slaves), it should explain it well enough. After this, if you were to run uptimed it would run suptimed instead. Furthermore, if you were to type ‘man uptimed’ it would show suptimed’s man page. Under /etc/alternatives you would see symlinks called uptimed and uptimed.1.gz, the first pointing to /usr/bin/suptimed and the second pointing to /usr/share/man/man1/suptimed.1.gz.
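To make the link/name/path chain concrete, here is a sketch you can run without root: it simulates, with plain ln -s in a scratch directory, the two-step symlink chain that alternatives --install sets up (the echoing suptimed script and the directory layout are invented for illustration):

```shell
# Simulate the chain 'alternatives --install' creates:
#   <link>  usr/bin/uptimed          -> etc/alternatives/uptimed
#   <name>  etc/alternatives/uptimed -> <path> usr/bin/suptimed
# Everything lives under a scratch directory, so no root is needed.
set -e
root=$(mktemp -d)
mkdir -p "$root/usr/bin" "$root/etc/alternatives"

# A stand-in for my suptimed binary:
printf '#!/bin/sh\necho suptimed running\n' > "$root/usr/bin/suptimed"
chmod +x "$root/usr/bin/suptimed"

# The two symlinks of the chain (the name's symlink first, then the generic link):
ln -s "$root/usr/bin/suptimed" "$root/etc/alternatives/uptimed"
ln -s "$root/etc/alternatives/uptimed" "$root/usr/bin/uptimed"

# Running the generic name runs the currently preferred alternative:
out=$("$root/usr/bin/uptimed")
echo "$out"
```

Switching the default is then just re-pointing the /etc/alternatives symlink, which is exactly what auto and manual mode do behind the scenes.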

The syntax given above has [--slave <link> <name> <path>]* and the * after it means you can use the option more than once, depending on how many slaves there are. As for the [ and ], that is the typical way of showing options (not required) and their parameters (which may or may not be required for the option). The angle brackets indicate required arguments. This is a general rule, or perhaps even a de-facto standard.

alternatives --remove <name> <path>


Exact same parameter meanings. So to remove suptimed from the group (and since it is the master, with it gone the group might not even need alternatives any more; the trick is when there IS an alternative) I would use:

alternatives --remove uptimed /usr/bin/suptimed

alternatives --auto <name>

As explained above. Name has the same meaning.

alternatives --config <name>

Allows you to configure the preferred alternative for the group. It is a TUI (text user interface).

The rest I won’t get into. The --config option is also not part of the original implementation (Debian’s update-alternatives). --display <name> and --list are quite straightforward.
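For completeness, --display shows the whole group at once; for the hypothetical group above the output would look roughly like the following (this is an illustrative sketch from memory, not verbatim output):

```
alternatives --display uptimed
uptimed - status is auto.
 link currently points to /usr/bin/suptimed
/usr/bin/suptimed - priority 1000
 slave uptimed.1.gz: /usr/share/man/man1/suptimed.1.gz
Current `best' version is /usr/bin/suptimed.
```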




Debugging With GDB: Conditional Breakpoints

Monday I discovered a script error in one of many scripts running in a process. I knew that it was not the script itself but rather a change I had made in the source code (so not the script but the script engine, because I implemented a bug in it) that caused the error. But in this case, the script in question is one of many of the same type, each running in a sequence. This means that if I attach the debugger and set a breakpoint I have to check that it is the proper instance and, if not, continue until the next instance, repeating this until I get to the proper instance. Even then, I have to make sure that I don’t make a mistake and that ultimately I find the problem (unless I want to repeat the process again, which usually is not preferred).

I never really thought about having to do this because I rarely use the debugger at all, let alone to debug something like this. But when I do, it is time-consuming. So I had a brilliant idea: make a simple function that the script could call and then trap that function. When I reach that breakpoint I then step into the script. Well, I was going to write about this, but on a whim I decided to look at GDB’s help for breakpoints. I saw something interesting but I did not look into it until today. As it turns out, GDB already has this functionality, only better. This is how it works:

  • While GDB is attached to the process you set a breakpoint on the function you need to debug. For example, to set a breakpoint at cow: break cow
  • GDB will tell you which breakpoint number it is. It’ll look something like: ‘Breakpoint 1 at 0xdeadbeef: cow’ where ‘0xdeadbeef’ is the address of the function ‘cow’ in the program space and ‘cow’ is the function you set a breakpoint on. Okay, the function cow is probably not there and it almost assuredly does not have the address ‘0xdeadbeef’, although it could happen (and it would be very ironic yet amusing indeed), but this is just to show the output (and show how much fun hexadecimal can be, at least to me). Regardless, you now have the breakpoint number and that is critical for the next step, which is – if you will excuse the pun – the beef of the entire process.
  • So one might ask, does GDB have the ability to check the variable passed into the function for a condition? Yes, it does, and it also has the ability to dereference a pointer (or access a member function or variable on an object) passed into the function. So if cow has a parameter of Cow *c and c has a function idnum (or a member variable idnum) then you can indeed make use of it in the condition. This brings us to the last step (besides debugging the bug you implemented, that is).
  • Command in gdb: ‘cond 1 c->idnum == 57005’ (without the quotes) will instruct GDB to only stop at function cow (at 0xdeadbeef) when c->idnum (or, if you prefer and you specified the condition as c->idnum() == 57005, then c->idnum()) is 57005. Why 57005 for the example? Because 0xdead is 57005 in decimal. So all you have to do now is tell GDB to continue: ‘c’ (also without the quotes). When it stops you’ll be at function ‘cow’ and c->idnum will be equal to 57005. To contrast, if you had made the condition c->idnum != 57005 then it will break whenever the cow is alive (to further the example above).

That’s all there is to it!
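Putting the steps together, a session might look roughly like this (the cow function, the idnum member and every address here are invented, as above; treat it as a sketch rather than real output):

```
(gdb) break cow
Breakpoint 1 at 0xdeadbeef: file cow.c, line 42.
(gdb) cond 1 c->idnum == 57005
(gdb) c
Continuing.

Breakpoint 1, cow (c=0x...) at cow.c:42
(gdb) print c->idnum
$1 = 57005
```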

Open Source and Security

One of the things I often write about is how open source is in fact good for security. Some will argue the opposite to the end. But what they are relying on, at best, is security through obscurity. Just because the source code is not readily available does not mean it is not possible to find flaws or even reverse engineer it. It doesn’t mean it cannot be modified, either. I can find – as could anyone else – countless examples of this. I have personally added a feature to a Windows dll file – a rather important one, shell32.dll – in the past. I then went on to convince the Windows file integrity check to not only see the modified file as correct, but had I replaced it with the original, unmodified one, it would have put back my modified version. And how did I add a feature without the source code? My point exactly. So to believe you cannot uncover how it works (or, as some would have you believe, modify and/or add features) is a huge mistake. But whatever. This is about open source and security. Before I can get into that, however, I want to bring up something else I write about at times.

That thing I write about is this: one should always admit to mistakes. You shouldn’t get angry and you shouldn’t take it as anything but a learning opportunity. Indeed, if you use it to better yourself, better whatever you made the mistake in (let’s say you are working on a project at work and you make a mistake that throws the project off in some way) and therefore better everything and everyone involved (or around), then you have gained and not lost. Sure, you might in some cases actually lose something (time and/or money, for example) but all good comes with bad and the reverse is true too: all bad comes with good. Put another way, I am by no means suggesting open source is perfect.

The only thing that is perfect is imperfection.
— Xexyl

I thought of that the other day. Or, maybe better put, I actually got around to putting it in my fortune file (I keep a file of pending ideas for quotes as well as the fortune file itself). The idea is incredibly simple: the only thing that will consistently happen without failure is ‘failure’, time and time again. In other words, there is no perfection. ‘Failure’ because it isn’t a failure if you learn from it; it is instead successfully learning yet another thing and is another opportunity to grow. On the subject of failure or not, I want to add a more recent quote (thought of later in August than when I originally posted this, which was 15 August) that I think really nails this idea:

There is no such thing as human weakness, there is only
strength and… those blinded by… the fallacy of perfection.
— Xexyl

In short, the only weakness is the product of one’s mind. There is no perfection, but if you accept this you will be much further ahead (if you don’t accept it you will be less able to take advantage of what imperfection offers). All of this together is important, though. I refer to admitting mistakes and how it is only a good thing. I also suggest that open source is by no means perfect, and therefore to be critical of it, as if it were less secure, is flawed. But here’s the thing. I can think of a rather critical open source library that is used by a lot of servers, and that has had a terrible year. One might think with this, and specifically what it is (which I will get to in a moment), it is somehow less secure or more problematic. What is this software? Well, let me start by noting the following CVE fixes that were pushed into update repositories yesterday:

– fix CVE-2014-3505 – doublefree in DTLS packet processing
– fix CVE-2014-3506 – avoid memory exhaustion in DTLS
– fix CVE-2014-3507 – avoid memory leak in DTLS
– fix CVE-2014-3508 – fix OID handling to avoid information leak
– fix CVE-2014-3509 – fix race condition when parsing server hello
– fix CVE-2014-3510 – fix DoS in anonymous (EC)DH handling in DTLS
– fix CVE-2014-3511 – disallow protocol downgrade via fragmentation

To those who are not aware, I refer to the same software that had the Heartbleed vulnerability, and therefore also the same software behind some other CVE fixes not too long after that. And indeed it seems that OpenSSL is having a bad year. Well, whatever – or perhaps better put, whoever – is the source (and yes, I truly do love puns) of the flaws, is irrelevant. What is relevant is this: they clearly are having issues. Someone or some people adding the changes are clearly not doing proper sanity checks and in general not auditing well enough. This just happens, however. It is part of life. It is a bad patch (to those that do not like puns, I am sorry, but yes, there goes another one) of time for them. They’ll get over it. Everyone does, even though it is inevitable that it happens again. As I put it: this just happens.

To those who want to be critical, and not constructively critical, I would like to remind you of the following points:

  • Many websites use OpenSSL to encrypt your data, and this includes your online purchases and other credit card transactions. Maybe instead of being only negative you should think about your own mistakes more rather than attacking others? I’m not suggesting that you are not considering yours, but in case you are not, think about this. If nothing else, consider that this type of criticism will lead to nothing, and since OpenSSL is critical (I am not consciously and deliberately making all these puns, it is just in my nature) it can lead to no good and certainly is of no help.
  • No one is perfect, as I not only suggested above, but also suggested at other times. I’ll also bring it up again, in the future. Because thinking yourself infallible is going to lead to more embarrassment than if you understand this and prepare yourself, always being on the look out for mistakes and reacting appropriately.
  • Most importantly: this has nothing to do with open source versus closed source. Closed source has its issues too, including fewer people who can audit it. The source code of the Linux kernel, for example, is on many people’s computers, and that is a lot of eyes to spot issues. Issues still have happened and still will happen, however.

With that, I would like to end with one more thought. Apache, the organization that maintains the popular web server as well as other projects, is really to be commended for their post-attack analyses. They have a section on their website which details attacks: what mistakes they made, what happened, what they did to fix it, as well as what they learned. That is impressive. I don’t know if any closed source corporations do that, but either way, it is something to really think about. It is genuine, it takes real courage to do it, and it benefits everyone. This is one example. There are immature comments there, but that only shows how impressive it is that Apache does this (they have other incident reports as I recall). The specific incident is reported here.

Steve Gibson: Self-proclaimed Security Expert, King of Charlatans

2014/08/28: Just to clarify a few points. Firstly, I have five usable IP addresses. That is because, as I explain below, some of the IPs are not usable for systems but instead have other functions. Secondly, regarding the ports detected as closed and my firewall returning ICMP errors: it is true I do return that, but of course there are ports missing there, and none of the others are open (that is, none have services bound to them) either. There are times I flat out drop packets to the floor, but if I have the logs I’m not sure which log file it is (due to log rotation) to check for sure. There are indeed some inconsistencies. But the point remains the same: there was absolutely nothing running on any of those ports, just like the ports it detected as ‘stealth’ (which is more like not receiving a response, and what some might call filtered, but in the end it does not mean nothing is there and it does not mean you are somehow immune to attacks). Third, I revised the footnote about FQDNs, IP addresses and what they resolve to. There were a few things that I was not clear with, and in some ways unfair with, too. I was taking issue with one thing in particular and I did a very poor job of it, I might add (something I am highly successful at, I admit).

One might think I have better things to worry about than writing about a known charlatan, but I have always been somewhat bemused by his idea of security (perhaps because he is clueless and his suggestions are unhelpful to those who believe him, which is a risk to everyone). More importantly, though, I want to dispel the mythical value of what he likes to call stealth ports (and, even more than that, the idea that anything that is not stealth is somehow a risk). This, however, will not only tackle that; it will be done in what some might consider an immature way. I am admitting it full on though. I’m bored and I wanted to show just how useless his scans are by making a mockery of those scans. So while this may seem childish to some, I am merely having fun while writing about ONE of MANY flaws Steve Gibson is LITTERED with (I use the word littered figuratively and literally).

So let’s begin, shall we? I’ll go in the order of pages you go through to have his ShieldsUP! scan begin. First page I get I see the following:


Without your knowledge or explicit permission, the Windows networking technology which connects your computer to the Internet may be offering some or all of your computer’s data to the entire world at this very moment!

Greetings indeed. Firstly, I am very well aware of what my system reveals. I also know that this has nothing to do with permission (anyone who thinks they have a say in what their machine reveals when connecting to the Internet – or a phone to a phone network, or … – is very naive, and anyone suggesting that there IS permission involved is a complete fool). On the other hand, I was not aware I am running Windows. You cannot detect that, yet you scan ports, which would give you one way to determine the OS? Here’s a funny part of that: since I run a passive fingerprinting service (p0f), MY SYSTEM determined your server’s OS (well, technically, the kernel, but all things considered, that is the most important bit, isn’t it? Indeed it is not 100% correct, but that goes with fingerprinting in general, and I know that it DOES detect MY system correctly). So not only is MY host revealing information, YOURS is too. Ironic? Absolutely not! Amusing? Yes. And lastly, let’s finish this part up: “all of your computer’s data to the entire world at this very moment!” You know, if it were not for the fact people believe you, that would be hilarious too. Let’s break that into two parts. First, ALL of my computer’s data? Really now? Anyone who can think rationally knows that this is nothing but sensationalism at best, but much more than that: it is you proclaiming to be an expert and then ABUSING that claim to MANIPULATE others into believing you (put another way: it is by no means revealing ALL data, not in the logical – data – sense or the physical – hardware – sense). And the entire world? So you’re telling me that every single host on the Internet is analyzing my host at this very moment? If that were the case, my system’s resources would be too full to even connect to your website. Okay, context would suggest that you mean COULD, but frankly I already covered that this is not the case (I challenge you to name the directory that is most often my current working directory, let alone know that said directory even exists on my system).

If you are using a personal firewall product which LOGS contacts by other systems, you should expect to see entries from this site’s probing IP addresses: -thru- Since we own this IP range, these packets will …

Well, technically, based on that range, your block is And technically your block includes (a) the network address, (b) the default gateway and (c) the broadcast address. That means the IPs that would be doing the probing are in a range more like ‘’ – ‘’. And people really trust you? You don’t even know basic networking, and they trust you with security?

Your Internet connection’s IP address is uniquely associated with the following “machine name”:

Technically that is the FQDN (fully-qualified domain name[1]), not “machine name” as you put it. You continue in this paragraph:

The string of text above is known as your Internet connection’s “reverse DNS.” The end of the string is probably a domain name related to your ISP. This will be common to all customers of this ISP. But the beginning of the string uniquely identifies your Internet connection. The question is: Is the beginning of the string an “account ID” that is uniquely and permanently tied to you, or is it merely related to your current public IP address and thus subject to change?

Again, your terminology is rather mixed up. While it is true that you did a reverse lookup on my IP, it isn’t exactly “reverse DNS.” But since you are trying to simplify it (read: dumb it down to your level) for others, and since I know I can be seriously pedantic, I’ll let it slide. But it has nothing to do with my Internet connection itself (I have exactly one). It has to do with my IP address, of which I have many (many if you consider my IPv6 block, but only 5 if you consider IPv4). You don’t exactly have the same FQDN on more than one machine any more than you have the same IP on more than one network interface (even on the same system). So no, it is NOT my Internet connection but THE specific host that went to your website, and in particular the IP assigned to the host I connected from. And the “string” has nothing to do with an “account ID” either. But I’ll get back to that in a minute.

The concern is that any web site can easily retrieve this unique “machine name” (just as we have) whenever you visit. It may be used to uniquely identify you on the Internet. In that way it’s like a “supercookie” over which you have no control. You can not disable, delete, or change it. Due to the rapid erosion of online privacy, and the diminishing respect for the sanctity of the user, we wanted to make you aware of this possibility. Note also that reverse DNS may disclose your geographic location.

I can actually request a different block from my ISP and I can also change the IP on my network card. Then the only thing left is my IP and its FQDN (which is not in use, and I can change the FQDN too, as I have reverse delegation; yet according to you I cannot do any of that). I love your ridiculous terminology though. Supercookie? Whatever. As for it giving away my geographic location, let me make something very clear: the FQDN is irrelevant without the IP address. While it is true that the name will (sometimes) refer to a city, it isn’t necessarily the same city or even county as the person USING it. The IP address is related to the network; the hostname is a CONVENIENCE for humans. You know, it used to be that host -> IP mapping was done without DNS (since it didn’t exist), with a file that maintained the mapping instead (and that file is still used, albeit very little). The reason DNS exists is convenience, and in general because no one would be able to know the IP of every domain name. Lastly, not all IPs resolve into a name.
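That pre-DNS mapping file still exists today as /etc/hosts; a minimal sketch of what it looks like (the second entry’s name and address are invented, and 192.0.2.0/24 is a documentation-only range):

```   localhost localhost.localdomain
192.0.2.10  box.example.com box
```

Whether it is consulted before or after DNS depends on the ‘hosts’ line in /etc/nsswitch.conf.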

If the machine name shown above is only a version of the IP address, then there is less cause for concern because the name will change as, when, and if your Internet IP changes. But if the machine name is a fixed account ID assigned by your ISP, as is often the case, then it will follow you and not change when your IP address does change. It can be used to persistently identify you as long as you use this ISP.

The occasions it resembles the IP are when the ISP has authority over the DNS zone of (your) IP and therefore has their own ‘default’ PTR record (but they don’t always have a PTR record, which your suggestion does not account for; indeed, I could have removed the PTR record for my IP and then you’d have seen no hostname). But this does not indicate whether it is static or not. Indeed, even dynamic IPs typically (not always) have a PTR record. Again, the name does not necessarily imply static: it is the IP that matters. And welcome to yesteryear… these days you typically pay extra for static IPs, yet you suggest it is quite often that your “machine name is a fixed account ID” (which is itself a complete misuse of terminology). On the other hand, you’re right: it won’t change when your IP address changes, because the IP is what is relevant, not the hostname! And if your IP changes then it isn’t so persistent in identifying you, is it? It might identify your location, but as multiple (dynamic) IPs and not a single IP.

There is no standard governing the format of these machine names, so this is not something we can automatically determine for you. If several of the numbers from your current IP address ( appear in the machine name, then it is likely that the name is only related to the IP address and not to you.

Except ISP authentication logs and timestamps… And I repeat the above: the name can include exactly as you suggest and still be static!

But you may wish to make a note of the machine name shown above and check back from time to time to see whether the name follows any changes to your IP address, or whether it, instead, follows you.

Thanks for the suggestion but I think I’m fine since I’m the one that named it.

Now, let’s get to the last bit of the ShieldsUP! nonsense.

GRC Port Authority Report created on UTC: 2014-07-16 at 13:20:16
Results from scan of ports: 0-1055

0 Ports Open
72 Ports Closed
984 Ports Stealth
1056 Ports Tested

NO PORTS were found to be OPEN.

Ports found to be CLOSED were: 0, 1, 2, 3, 4, 5, 6, 36, 37,
64, 66, 96, 97, 128, 159, 160,
189, 190, 219, 220, 249, 250,
279, 280, 306, 311, 340, 341,
369, 371, 399, 400, 429, 430,
460, 461, 490, 491, 520, 521,
550, 551, 581, 582, 608, 612,
641, 642, 672, 673, 734, 735,
765, 766, 795, 796, 825, 826,
855, 856, 884, 885, 915, 916,
945, 946, 975, 976, 1005, 1006,
1035, 1036

Other than what is listed above, all ports are STEALTH.

TruStealth: FAILED – NOT all tested ports were STEALTH,
– NO unsolicited packets were received,

The ports you detected as “CLOSED” and not “STEALTH” were in fact returning an ICMP host-unreachable. You fail to take into account the golden rule of firewalls: that which is not explicitly permitted is forbidden. That means that even though I have no service running on any of those ports, I still reject the packets sent to them. Incidentally, some ports you declared as “STEALTH” did exactly the same (because I only allow those ports from a specific IP block as the source network). The only time I drop packets to the floor is when state checks fail (e.g., a TCP SYN flag is set but it is already a known connection). I could prove that too: I actually had you do the scan a second time, but this time I added specific iptables rules for your IP block, which changed the results quite a bit, and indeed I used the same ICMP error code.
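The reject-versus-drop distinction I describe can be sketched in iptables roughly as follows (the port is a placeholder, and these commands need root, so treat this as an illustration rather than a drop-in ruleset):

```
# 'Closed' in GRC's terms: actively refuse with an ICMP error.
# Nothing is listening, and the prober is told so.
iptables -A INPUT -p tcp --dport 1005 -j REJECT --reject-with icmp-host-unreachable

# 'Stealth' in GRC's terms: drop the packet on the floor, no reply at
# all -- here only for packets failing the state check (a SYN on an
# already known connection, as in the example above).
iptables -A INPUT -p tcp --syn -m state --state ESTABLISHED -j DROP
```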

As for ping: administrators who block ping outright need to be hit over the head with a pot of sense. Rate limit by all means, that is more than understandable, but blocking ICMP echo requests (and indeed replies) only makes troubleshooting network connectivity issues more of a hassle and at the same time does absolutely nothing for security (fragmented packets and anything else that can be abused are obviously dealt with differently, because they are different!). Indeed, if someone is going to attack, they don’t really care whether you respond to ICMP requests. If there is a vulnerability they will go after that, and frankly hiding behind your “stealth” ports is only a false sense of security and/or security through obscurity (which is a false sense of security and even more harmful at the same time). Here are two examples. First, if someone sends you a link (for example in email) and it seems legitimate and you click on it (there’s a lot of this in recent years and it is ever increasing), the fact that you have no services running does not mean you are somehow immune to XSS, phishing attacks, malware, or anything else. Security is, always has been, and always will be a many-layered thing. Second: social engineering.

And with that, I want to finish with the following:

If anyone wants the REAL story about Steve Gibson, you need only Google for “steve gibson charlatan” and see the many results. I can vouch for some of them but there really is no need for it – the evidence is so overwhelming that it doesn’t need any more validation. Here’s a good one though (which also shows his ignorance as well as how credible his proclamations are): (actually it is a really good one). If you want a list of items, check the search result that refers to and you will see just how credible he is NOT. A good example is the one that links to a page about Gibson and XSS flaws, which itself links to: which itself offers a great amount of amusement (note that some of the links are no longer valid, as it was years ago, but that is the one at and not the only incident).

[1] Technically, what his host is doing is taking the IP address and resolving it to a name (which is querying the PTR record, as I refer to above). Since I have reverse delegation (so have authority) and have my own domain (of which I also have authority) I have my IPs resolve to fully-qualified domain names as such. FQDN is perhaps not the best wording (nor fair, especially) on my part, in that I was abusing the fact that he is expecting the normal PTR records that an ISP has, rather than a server with a proper A record and a matching PTR record. What he refers to is as above: resolving the IP address to a name, which it does not have to have. Equally, even if a domain exists by name, it does not have to resolve to an IP (“it is only registered”). He just gave it his own name for his own ego (or whatever else).

Death Valley, California, Safety Tips and Harry Potter

I guess this might be the most bizarre title for a post yet, but it is a take on real life and fantasy, and particularly the Harry Potter series. I am implying two things with real life. I will get to the Harry Potter part later. While it is a specific tragedy in Death Valley, it is not an uncommon event, and since I have many fond memories of Death Valley (and know the risks), I want to reflect on it all (because indeed fantasy is very much part of me, perhaps too much so).

For the first ten years of my life (approximate) I visited Death Valley each year, in November. It is a beautiful place with many wonderful sights. I have many fond memories of playing on the old kind of Tonka trucks (which is a very good example of “they don’t make [it] like they used to” as nowadays it is made out of plastic and what I’m about to describe would be impossible). My brother and I would take a quick climb up the hill right behind our tent, get on our Tonka trucks (each our own) and ride down, crashing or not, but having a lot of fun regardless. I remember the amazing sand dunes with the wind blowing like it tends to in a desert. I remember being fortunate enough that there was a ghost town with a person living there who could supply me with electricity for my nebulizer for an asthma attack (and fortunate enough to see many ghost towns from where miners in the California Gold Rush would have resided). I remember, absolutely, Furnace Creek with the visitor centre and how nice everyone was there. I even remember the garbage truck driver who let my brother and me activate the mechanism to pick up the bin. I remember the many rides on family friends’ dune buggies. The amazing hikes in the many canyons is probably a highlight (but certainly not the only highlight). Then there is Scotty’s Castle (they had a haunted house during Halloween if I recall). There is actually an underground river (which is an inspiration to another work I did but that is another story entirely). They have a swimming pool that is naturally warm. I remember all these things and more even if most of it is vague. It truly is a wonderful place.

Unfortunately, it is a vast area, spanning more than 3,373,000 acres (according to Wiki, which I seem to remember is about right; I’m sure the official Death Valley site would have more on this), and it is the hottest place on Earth (despite some claims; I am referring to officially acknowledged records) at 134.6 F / 57 C. That was, ironically enough, recorded this very month in 1913, on July 10 (again according to Wiki, but from memory other sources do have it in the early 1900s). This is an important bit (the day of the month in particular) for when I get to fantasy, by the way. Interestingly, the area I live in has a higher record for December and January than Death Valley by a few degrees (Death Valley: December and January at 89 F / 32 C; my location I know I have seen on the thermostat at least 95 F / 35 C for both months, although it could have been higher too). Regardless, Death Valley’s overall record is 10 C higher (my location’s record: 47 C / 116.6 F; Death Valley as above). And if you think of the size (as listed above) and that much of it is unknown territory for all but seasoned campers (my family would fit that category), you have to be prepared. Make no mistake, people: Death Valley, and deserts in general, can be very, very dangerous. Always make sure you keep yourself hydrated. What is hydration though, for humans? It is keeping your electrolytes at a balanced level. This means that indeed too much water is as dangerous as too little water. As a general rule of thumb that was given to me by the RN (registered nurse) for a hematologist I had (another story entirely, as for why I had one): if you are thirsty, you waited too long. Furthermore, for Death Valley (for example), make sure you either have a guide or you know your way around (and keep track – no matter how you do this – of where you go). That may include maps, a compass, landmarks, and any number of other techniques. But it is absolutely critical.
I have time and again read articles on the BBC where someone (or some people) from the UK or parts of Europe were unprepared and were found dead. It is a wonderful place but be prepared. Although this should be obvious, it often isn’t: Death Valley is better visited in the cooler months (close to Winter or even in Winter). I promise you this: it won’t be cold by any means. Even if you are used to blizzards in your area, you will still have plenty of heat year round in Death Valley. I should actually restate that slightly, thinking about a specific risk (and possibility). Deserts can drop to freezing temperatures! It is rare, yes, but when it happens it will still be cold. Furthermore, deserts can see lots of rain, even flash floods! Yes, I’ve experienced this exactly. Furthermore, as for risks, if it looks cloudy (or if you have a sense of smell like mine, where you can smell rain that is about to fall – and no, that is not an exaggeration: my sense of smell is incredibly strong) or there is a drizzle (or otherwise light rain) or more than that, do not even think about hiking the canyons! It is incredibly dangerous to attempt it! This cannot be stressed enough. As for deserts and freezing temperatures: I live in a desert (most of Southern California is a desert) and, while it was over 22 years ago (approximately), we have seen snow in our yard. So desert does not mean no rain or no snow. I’ve seen people write about hot and dry climates and deserts (comparing the two) but that is exactly what a desert is: a hot and dry climate! But climate does not by any means somehow restrict what can or cannot happen. Just like Europe can see mid 30s (centigrade), so too can deserts see less than zero. And all this brings me to the last part: fantasy.

One of my favourite genres (reading – I rarely watch TV or films) is fantasy. While it is not the only series I have read, the Harry Potter series is the one I am referring to in particular, as I already highlighted. Indeed, everything in Harry Potter has a reason, has a purpose and in general will be part of the entire story! That is how good it is and that is how much I enjoyed it (I also love puzzles, so putting things together, or rather the need to do that, was a very nice treat indeed). I’m thankful to a friend who finally got me to read it (I actually had the books but never got around to reading the ones that were out, which would be up to and including book 3, Harry Potter and the Prisoner of Azkaban). The last two books I read the day they came out, in full, with hours to spare. Well, why on Earth would I be writing about fantasy, specifically Harry Potter, and Death Valley, together? I just read on the BBC that Harry Potter actor Dave Legeno has been found dead in Death Valley. He played the werewolf Fenrir Greyback. I will note the irony that today, the 12th of July of this year, there is a full moon. I will also readily admit that in fantasy, not counting races by themselves (e.g., Elves, Dwarves, Trolls, …), werewolves are my favourite type of creature. I find the idea fascinating and there is a large part of me that wishes they were real. (As for my favourite race, it would likely be Elves.) I didn’t know the actor, of course, but the very fact he was British makes me think he too fell to the – if you will excuse the pun, which is by no means meant to be offensive to his family or anyone else – fantasy of experiencing Death Valley, and unfortunately it was fatal. And remember I specifically wrote 1913, July 10 as the record temperature for Death Valley? Well, I did mean it when I wrote that it has significance here: he was found dead on July 11 of this year.
Whether that means he died on the 11th is not exactly known yet (it is indeed a very large expanse, and it is only known at all because hikers found him) but that it was one day off is ironic indeed. It is completely possible he died on the 10th and it is also possible it was days before, or even on the 11th. This is one of those things that will be known after an autopsy occurs, as well as backtracking (by witnesses and other evidence), and not until then. Until then, it is anyone’s guess (and merely speculation). Regardless of this, it is another person who was unaware of the risks, of which there are many (depending on where in Death Valley you might be, in a vehicle: what happens if you run out of fuel and only have enough water for three days? There are so many scenarios but they are far too often not thought of, or simply neglected). Two other critical bits of advice: don’t ignore the signs left all around the park (giving warnings) and always, without fail, tell someone where you will be! If someone knew where he was and knew approximately when he should be back (which should always be considered when telling someone else where you’ll be) they could have gone looking for him. This piece of advice, I might add, goes for hiking, canoeing and anything else (outside of Death Valley too – this is a general rule), especially if you are alone (but truthfully – and I get the impression he WAS alone – you should not be alone in a place as large as Death Valley, because there are many places to fall, there are animals that could harm you, and instead of having a story to bring home you risk not coming home at all). There are just so many risks, so always be aware of that and prepare ahead of time. Regardless, I want to thank Dave for playing Fenrir Greyback.
I don’t know if you played in any other films and I do not know anything about you or your past, but I wish you had known the risks beforehand, and my condolences (for whatever they can be and whatever they are worth) to your friends and family. I know that most will find this post out of character (again, if you will excuse the indeed intended pun) for what I typically write about, but fantasy is something I am very fond of, and I have fond memories of Death Valley as well.

“I ‘Told’ You So!”

Update on 2014/06/25: Added a word that makes something more clear (specifically the pilots were not BEING responsible but I wrote “were not responsible”).

I was just checking the BBC live news feed I have in my bookmark bar in Firefox and I noticed something of interest. What is that? How automated vehicle systems (whether controlled by humans or not, they are still created by humans, and automation itself has its own flaws) are indeed dangerous. Now why is that interesting to me? Because I have written about this before in more than one way! So let us break this article down a bit:

The crew of the Asiana flight that crashed in San Francisco “over-relied on automated systems” the head of the US transport safety agency has said.

How many times have I written about things being dumbed down to the point where people are unable – or refuse – to think and act accordingly to X, Y and Z? I know it has been more than once but apparently it was not enough! Actually, I would rather state: apparently not enough people are thinking at all. That is certainly a concern to any rational being. Or it should be.

Chris Hart, acting chairman of the National Transportation Safety Board (NTSB), said such systems were allowing serious errors to occur.

Clearly. As the title suggests: I ‘told’ you so!

The NTSB said the 6 July 2013 crash, which killed three, was caused by pilot mismanagement of the plane’s descent.

Again: relying on “smart” technology is relying on the smarts of the designer and the user (which doesn’t leave much chance, does it?). But actually in this case it is even worse. The reasons: First, they are endangering others’ lives (and three died – is that enough yet?). Second is the fact that they are operating machinery, not using a stupid phone (which is what a “smart” phone is). I specifically wrote about emergency vehicles and this, and here we are, where exactly that situation arises: there are events that absolutely cannot be accounted for automatically and require that a person is paying attention and using the tool responsibly!

During the meeting on Tuesday, Mr Hart said the Asiana crew did not fully understand the automated systems on the Boeing 777, but the issues they encountered were not unique.

This is also called “dumbing the system down” isn’t it? Yes, because when you are no longer required to think and know how something works, you cannot fix problems!

“In their efforts to compensate for the unreliability of human performance, the designers of automated control systems have unwittingly created opportunities for new error types that can be even more serious than those they were seeking to avoid,” Mr Hart said.

Much like I wrote about regarding all of the following and then some: computer security, computer problems, emergency vehicles and automated vehicles in general. This is another example.

The South Korea-based airline said those flying the plane reasonably believed the automatic throttle would keep the plane flying fast enough to land safely.

Making assumptions at the risk of others’ lives is irresponsible and frankly reprehensible! I would argue it is potentially – and in this case, is – murderous!

But that feature was shut off after a pilot idled it to correct an unexplained climb earlier in the landing.

Does all of this start to make sense? No? It should. Look at what the pilot did. Why? A stupid mistake, or did an evil gremlin take him over momentarily? Maybe the gremlin IS their stupidity.

The airline argued the automated system should have been designed so that the auto throttle would maintain the proper speed after the pilot put it in “hold mode”.

They should rather be saying sorry and then some. They should also be taking care of the mistake THEY made (at least as much as they can; they already killed – and yes, that is the proper way of wording it – three people)!

Boeing has been warned about this feature by US and European airline regulators.

The blame shouldn’t be placed on Boeing if they weren’t actually negligent and they are doing what it seems everyone wants: automation. Is that such a good idea? As I pointed out many times: no. Let me reword that a bit. Is Honda responsible for a drunk getting behind the wheel and then killing a family of five, four, three, two or even one person (themselves included – realistically the drunk would be the only one who is not innocent!)? No? Then why the hell should Boeing be blamed for a pilot misusing the equipment? The pilot is not being responsible and the reason (and how) the pilot is not being responsible is irrelevant!

“Asiana has a point, but this is not the first time it has happened,” John Cox, an aviation safety consultant, told the Associated Press news agency.

It won’t be the last, either. Mark my words. I wish I was wrong but until people wake up it won’t be fixed (that isn’t even including the planes already in commission).

“Any of these highly automated airplanes have these conditions that require special training and pilot awareness. … This is something that has been known for many years.”

And neglected. Because why? Here I go again: it is so dumbed down, so automatic, that supposedly the burden shouldn’t be placed on the operators! Well guess what? Life isn’t fair. Maybe you didn’t notice that or you like to ignore the bad parts of life, but the fact remains life isn’t fair and they (the pilots and in general the airline) are playing the pathetic blame game (which really is saying “I’m too immature and irresponsible and, not only that, I cannot dare admit that I am not perfect. Because of that it HAS to be someone else who is at fault!”).

Among the recommendations the NTSB made in its report:

  • The Federal Aviation Administration should require Boeing to develop “enhanced” training for automated systems, including editing the training manual to adequately describe the auto-throttle programme.
  • Asiana should change its automated flying policy to include more manual flight, both in training and during normal operations.
  • Boeing should develop a change to its automatic flight control systems to make sure the plane’s “energy state” remains at or above the minimum level needed to stay aloft during the entire flight.

My rebuttal to the three points:

  • They should actually insist upon “improving” the fully automated system (like scrapping the idea). True, this wasn’t completely automated but it seems that many want that (Google self-driving cars, anyone?). Because let’s all be real: are they of use here? No, they are not. They’re killing – scrap that, murdering! – people. And that is how it always will be! There is never enough training. There is always the need to stay in the loop. The same applies to medicine, science, security (computer, network and otherwise), and pretty much everything in life!
  • Great idea. A bit late of them though, isn’t it? In fact, a bit late of all airlines that rely on such a stupid design!
  • Well they could always improve but the same thing can be said for cars, computers, medicinal science, other science, and here we go again: everything in this world! But bottom line is this: it is not at all Boeing’s fault. They’re doing what everyone seems to want.

And people STILL want flying cars? Really? How can anyone be THAT stupid? While I don’t find it hard to believe such people exist, I still find it shocking. To close this, I’ll make a few final remarks:

This might be the wrong time, according to some, since it was just reported. But it is not! If it is not the right time now, then when? This same thing happens with everything of this nature! Humans always wait until a disaster (natural or man-made) happens before doing something. And then they pretend (lying about it in the process) to be better, but what happens next? They do the same thing all over again. And guess what also happens at that time? The same damned discussions (that I dissected, above) occur! Here’s a computer security example: I’ve lost count of the number of times NASA has suggested they would be improving policies with their network, and I have also lost count of the times they then went on to LATER be compromised AGAIN with the SAME or an EQUALLY stupid CAUSE! Why is this? Irresponsibility and complete and utter stupidity. That, and the fact that the only thing we learn from history is that – and yes, the pun is most definitely intended – we do not learn a bloody thing from history! And that is because of stupidity and irresponsibility.

Make no mistake, people:

  1. This will continue happening until humans wake up (which I fear that since even in 2014 ‘they’ have not woken up, they never will!).
  2. I told you so, I was right then and I am still right!
  3. Not only did I tell you so about computer security (in the context of automation) I also told you about real life incidents, including emergencies. And I was right then and I am still right!

Hurts? Well, sometimes that’s the best way. Build some pain threshold, as you’ll certainly need it. If only it was everyone’s head at risk, because they’re so thick that they’d survive! Instead we are all at risk because of others (including ourselves, our families, everyone’s families, et al.). Even those like me who suggest this time and again are at risk (because they are either forced into using the automation or they are surrounded by drones – any pun is much intended here, as well – who willingly use their “smart” everything… smart everything except their brain, that is!).

SELinux, Security and Irony Involved

I’ve thought of this in the past and I’ve been trying to do more things (than usual) to keep me busy (there are too few things that I spend time doing, more often than not), and so I thought I would finally get to this. So, when it comes to SELinux, there are two schools of thought:

  1. Enable and learn it.
  2. Disable it; it isn’t worth the trouble.

There is also a combination of the two: put it in permissive mode so you can at least see alerts (much like logs might be used). But for the purpose of this post, I’m going to only include the mainstream thoughts (so 1 and 2, above). Before that though, I want to point something out. It is indeed true I put this in the security category but there is a bit more to it than that, as those who read it will find out at the end (anyone who knows me – and yes, this is a hint – will know I am referring to irony, as the title indicates). I am not going to give a suggestion on the debate of SELinux (and that is what it is – a debate). I don’t enjoy, and there is no use in, endless debates on what is good, what is bad, what should be done, debating whether or not to do something, or even debating about debating (and yes, the latter two DO happen – I’ve been involved in a project that had this and I stayed out of it and did what I knew to be best for the project overall). That is all a waste of time and I’ll leave it to those who enjoy that kind of thing. Because indeed the two schools of thought do involve quite a bit of emotion (something I try to avoid, even) – they are so passionate about it, so involved in it, that it really doesn’t matter what is suggested from the other side. It is taking “we all see and hear what we want to see and hear” to the extreme. This is all the more likely when you have two sides. It doesn’t matter what they are debating, and it doesn’t matter how knowledgeable they are or are not, and nothing else matters either: they believe their side’s purpose so strongly, so passionately, that much of it is mindless and futile (neither side sees anything but its own view). Now then…

Let’s start with the first. Yes, security is indeed – as I’ve written about before – layered; multiple layers it always has been and always should be. And indeed there are things SELinux can protect against. On the other hand, security has to have a balance or else there is even less security (password aging + many different accounts, so different passwords + password requirements/restrictions = a recipe for disaster). In fact, it is a false sense of security and that is a very bad thing. So let’s get to point two. Yes, that’s all I’m going to write on the first point. As I already wrote, there isn’t much to it notwithstanding endless debates: it has pros and it has cons, and that’s all there is to it.

Then there’s the school of thought that SELinux is not worth the time and so should just be disabled. I know what they mean, and not only with the labelling of the file systems (I wrote about this before: how SELinux itself has issues at times, all because of labels, and so you have to have it fix itself). That labelling issue is bad in itself, but then consider how it affects maintenance (even worse for administrators who maintain many systems). For instance, new directories in an Apache configuration, as one example. Yes, part of this is laziness but again there’s a balance. While this machine (the one I write from) does not use it, I still practice safe computing: I only deal with software in the main repositories, and in general I follow exactly what I preach: multiple layers of security. And finally, to end this post, we get to some irony. I know those who know me well enough will also know very well that I absolutely love irony, sarcasm, satire, puns and in general wit. So here it is – and as a warning – this is very potent irony, so for those who don’t know what I’m about to write, prepare yourselves:

You know all that talk about the NSA and its spying (nothing alarming, mind you, nor anything new… they’ve been this way a long long time and which country doesn’t have a spy network anyway? Be honest!), it supposedly placing backdoors in software and even deliberately weakening portions of encryption schemes? Yeah, that agency that there’s been fuss about ever since Snowden started releasing the information last year. Guess who is involved with SELinux? Exactly folks: the NSA is very much part of SELinux. In fact, the credit to SELinux belongs to the NSA. I’m not at all suggesting they tampered with any of it, mind you. I don’t know and as I already pointed out I don’t care to debate or throw around conspiracy theories. It is all a waste of time and I’m not about to libel the NSA (or anyone, any company or anything) about anything (not only is it illegal it is pathetic and simply unethical, none of which is appealing to me), directly or indirectly. All I’m doing is pointing out the irony to those that forget (or never knew of) SELinux in its infancy and linking it with the heated discussion about the NSA of today. So which is it? Should you use SELinux or not? That’s for each administrator to decide but I think actually the real thing I don’t understand (but do ponder about) is: where do people come up with the energy, motivation and time to bicker about the most unimportant, futile things? And more than that, why do they bother? I guess we’re all guilty to some extent but some take it too far too often.

As a quick clarification, primarily for those who might misunderstand what I find ironic in the situation, the irony isn’t that the NSA is part of (or was if not still) SELinux. It isn’t anything like that. What IS ironic is that many are beyond surprised (and I still don’t know why they are but then again I know of NSA’s history) at the revelations (about the NSA) and/or fearful of their actions and I would assume that some of those people are those who encourage use of SELinux. Whether that is the case and whether it did or did not change their views, I obviously cannot know. So put simply, the irony is that many have faith in SELinux which the NSA was (or is) an essential part of and now there is much furor about the NSA after the revelations (“revelations” is how I would put it, at least for a decent amount of it).

Fully-automated ‘Security’ is Dangerous

Thought of a better name on 2014/06/11. Still leaving the aside, below, as much of it is still relevant.

(As a brief aside, before I get to the point: This could probably be better named. By a fair bit, even. The main idea is security is a many layered concept and it involves computers – and its software – as well as people, and not either or and in fact it might involve multiples of each kind. Indeed, humans are the weakest link in the chain but as an interesting paradox, humans are still a very necessary part of the chain. Also, while it may seem I’m being critical in much of this, I am actually leading to much less criticism, giving the said organisation the benefit of the doubt as well as getting to the entire point and even wishing the entire event success!)

In our ever ‘connected’ world it appears – at least to me – that there is so much more discussion about automatically solving problems without any human interaction (I’m not referring to things like calculating a new value for Pi, puzzles, mazes or anything like that; that IS interesting and that is indeed useful, including to security, even if indirectly – and yes, this is about security, but security on a whole scale). I find this ironic, and in a potentially dangerous way. Why are we having everything connected if we are to detach ourselves from the devices (or, in some cases, people are so attached to the device that they are detached from the entire world)? (Besides brief examples, I’ll ignore the part where so many are so attached to their bloody phone – which is, as noted, the same thing as being detached from the world, despite the idea that they are ever more attached or, perhaps better stated, ‘connected’ – that they walk into walls, into people – like someone did to me the other day, same cause – and even walk off a pier in Australia while checking Facebook! Why would I ignore that? Because I find it so pathetic yet so funny that I hope more people do stupid things like that, things I absolutely will laugh at as should be done, as long as they are not risking others’ lives [including rescuers’ lives, mind you; that's the only potential problem with the last example: it could have been worse, and due to some klutz the emergency crew could be at risk instead of taking care of someone else]. After all, those that are going to do it don’t get the problem, so I may as well get a laugh at their idiocy, just like everyone should do. Laughing is healthy. Besides those points, it is irrelevant to this post.) Of course, the idea of having everything connected also brings the thought of automation. Well, that’s a problem for many things, including security.

I just read that DARPA (the agency that created ARPANet – you know, the predecessor to the Internet; ARPA is still referred to in DNS, for example) is running a competition as such:

“Over the next two years, innovators worldwide are invited to answer the call of Cyber Grand Challenge. In 2016, DARPA will hold the world’s first all-computer Capture the Flag tournament live on stage co-located with the DEF CON Conference in Las Vegas where automated systems may take the first steps towards a defensible, connected future.”

Now, first, a disclaimer of sorts. I have had (and still do have) friends who have been to (and continue to go to) DefCon – and I’m referring to the earlier years of DefCon as well – and not only did they go there, they bugged me relentlessly (you know who you are!) for years to go there too (which I always refused, much to their dismay, but I refused for multiple reasons, including the complete truth: the smoke there would kill me if nothing else did before that). So I know very well that there are highly skilled individuals there. I have indirectly written about groups that go there, even. Yes, they’re highly capable, and as the article I read about this competition points out, DefCon already has a capture the flag style tournament, and has for many years (and they really are skilled; I would suggest that, charlatans like Carolyn P. Meinel excepted, many are much more skilled than me, and I’m fine with that. It only makes sense anyway: they have a much less hectic life). Of course the difference here is fully automated, without any human intervention. And that is a potentially dangerous thing. I would like to believe they (DARPA) would know better, seeing as how the Internet (and the predecessor thereof) was never designed with security in mind (and there is never enough foresight) – security as in computer security, anyway. The original reason for it was a network of networks capable of withstanding a nuclear attack. Yes, folks, the Cold War brought one good thing to the world: the Internet. Imagine that: paranoia would lead us to the wonderful Internet. Perhaps though, it wasn’t paranoia. It is quite hard to know, as after all a certain United States of America President considered the Soviet Union “the Evil Empire” and as far as I know wanted to delve further on that theme, which is not only sheer idiocy, it is complete lunacy (actually it is much worse than that)! To liken a country to that – it boggles the mind.
Regardless, that’s just why I view it ironic (and would like to think they would know better). Nevertheless, I know that they (DARPA, that is) mean well (or well, I hope so). But there is still a dangerous thing here.

Here is the danger: by allowing a computer to operate on its own and assuming it will always work, you are essentially taking a great risk, and no one will forget what assuming does, either. I think that this is actually a bit understated, because you’re relying on trust. And as anyone who has been into security for 15-20 years (or more) knows, trust is given far too easily. It is a major cause of security mishaps. People are too trusting. I know I’ve written about this before, but I’ll just mention the names of the utilities (rsh, rcp, …) that were at one point the norm and briefly explain the problem: the configuration option – that was often used! – which allowed logging in to a certain host WITHOUT a password from ANY IP, as long as you log in as a CERTAIN login! And people have refuted this by using the logic of: they don’t have a login with that name (and note: there is a system-wide configuration file for this and also a per-user one, which makes it even more of a nightmare). Well, if it is their own system, or if they compromised it, guess what they can do? Exactly – create a login with that name. Now they’re more or less a local user, which is so much closer to rooting (or, put another way, gaining complete control of) the system (which potentially allows further systems to be compromised).
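To make that concrete, here is a sketch of what such trust entries looked like (the hostname and login are made up for this example; the real files were a per-user ~/.rhosts and the system-wide /etc/hosts.equiv):

```
# ~/.rhosts – trust entries for the rsh/rlogin/rcp family.
# "host user" means: that user, connecting from that host, may log in
# here as me with NO password asked.
trusted.example.com  alice

# The notorious wildcard form: ANY host is trusted, so long as the
# remote login name is "alice".
+  alice
```

With the wildcard entry, an attacker who controls any machine anywhere simply creates a local “alice” account and logs straight in – exactly the scenario described above, and one reason the whole r-utilities family was eventually displaced by ssh.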

So why is DARPA even considering fully automated intervention/protection? While I would like to claim that I am the first one to notice this (and more so put it in similar words), I am not, but it is true: the only thing we learn from history is that we don’t learn a damned thing from history (or we don’t pay attention, which is even worse because it is flat out stupidity). The very fact that systems were compromised by something that was ignored, not thought of prior (or thought of in a certain way – yes, different angles provide different insights), or because new innovations came along to trample over what was once considered safe, is all that should be needed in order to understand this. But if not, perhaps this question will resonate better: does a lack of encryption mean anything to you, your company, or anyone else? For instance, telnet: a service that allows authentication and isn’t encrypted (logging in, as in sending login and password in the clear over the wire). If THAT was not foreseen you can be sure that there will ALWAYS be something that cannot be predicted. Something I have – as I am sure everyone has – experienced is that things will go wrong when you least expect them to. Not only that, much like I wrote not too long ago, it is as if a curse has been cast on you and things start to come crashing down in a long sequence of horrible luck.

Make no mistake: I expect nothing but greatness from the folks at DefCon. However, there’s never a fool-proof, 100% secure solution (and they know that!). The best you can expect is to always be on the lookout, always making sure things are OK, keeping up to date on new techniques, new vulnerabilities, and so on – so software in addition to humans! This is exactly why you cannot teach security; you can only learn it – by applying knowledge, thinking power and something else that schools cannot give you: real life experience. No matter how good someone is, there’s going to be someone who can get the better of that person. I’m no different. Security websites have been compromised before and they will be in the future. Just like pretty much every other kind of site (example: with one of my other sites, before this website and when I wasn’t hosting on my own, the host made a terrible blunder, one that compromised their entire network and put them out of business. But guess what? Indeed, the websites they hosted were defaced, including that other site of mine. And you know what? That’s not exactly uncommon – defaced websites occur in mass batches simply because the webhost had a server – or servers – compromised* and, well, the defacer had to make their point and make a name for themselves). So while I know DefCon will deliver, I know also it to be a mistake for DARPA to think there will at some point be no need for human intervention (and I truly hope they actually mean it to be in addition to humans; I did not, after all, read the entire statement, but it makes for a topic to write about and that’s all that matters). Well, there is one time this will happen: when either the Sun is dead (and so life here is dead) or humans obliterate each other, directly or indirectly. But computers will hardly care at that point. The best case scenario is that they can intervene in certain (and indeed perhaps many) attacks. But there will never be a 100% accurate way to do this.
If it were so, heuristics and the many other tricks that anti-virus products (and malware itself) deploy would be much more successful and have no need for updates. But has this happened? No. That’s why it is a constant battle between malware writers and anti-malware writers: new techniques, new people in the field, things changing (or, more generally, technology evolving like it always will) and in general a volatile environment will always keep things interesting. Lastly, there is one other thing: humans are the weakest link in the security chain. That is a critical thing to remember.

*I have had my server scanned from systems at different sites whose operators didn’t know they had a customer (or in some cases a system – owned by a student, perhaps – on a school campus) whose account had been compromised. Indeed, I tipped the webhosts (and schools) off that they had a rogue scanner trying to find vulnerable systems (all of which my server filtered, but nothing is fool-proof, remember that). They were thankful, for the most part, that I informed them. But here’s the thing: they’re human, and even though they are companies that should be looking out for that kind of thing, they aren’t perfect (because they are human). In other words: no site is 100% immune 100% of the time.

Good luck, however, to those in the competition. I would specifically mention particular (potential) participants (like the people who bugged me for years to go there!) but I’d rather not state them here by name, for specific (quite a few) reasons. Regardless, I do wish them all good luck (those I know and those I do not know). It WILL be interesting. But one can hope (and I want to believe they are keeping this in mind) that DARPA knows this is only ONE of the MANY LAYERS that make up security (security is, always has been and always will be, made up of multiple layers).

Implementing TCP Keepalive Via Socket Options in C

Update on 2014/06/08: Fixed an error with the IPv6 problem (that I refer to but do not elaborate much on). Obviously an MTU of 14800 is not less than 1500, and well, I won’t go beyond that: I meant 1480 (although I found a reason for a different, lower MTU, but I don’t remember the specifics and it is beside the point of TCP keepalives and manipulating them with the setsockopt call).

Update on 2014/05/21: I added the reference [1] that I forgot to add after suggesting that there would be a note on the specific topic (gateway in ISP terms versus the more general network gateway).

Important Update (fix) on 2014/05/13: I forgot a very important #include in the source file I link to at the end of the post. While I don’t include (pardon the irony) the creation of the socket, and I don’t have any of the source in a function, I DO use the proper #include files, because without those the functions that I call will not be declared, which will result in compiler errors. The problem is, because I have #ifdef .. #endif blocks for the relevant socket options, without this file (that is now #include’d) they would silently be skipped. The file in question is netinet/tcp.h (relative to /usr/include/). Without that file included, the socket options would not be #define’d and therefore this post would be less than useful – in fact worse than useless.

This will be fairly short as I am quite preoccupied and it is a fairly simple topic (which is actually the reason I’m able to discuss it). In recent times I noticed a couple of issues with a network connection on which I am often idle but which should not be dropped, as the application itself uses the setsockopt(2) call to enable TCP keepalives at the socket level prior to binding itself to its ports (and this option should be inherited by the connecting clients). While they did inherit this property, there was a problem and it only showed itself over IPv4 (IPv6 had another problem, and that was resolved by changing the MTU to 1480, down from 1500, via my network configuration. It didn’t always have this problem – serious latency, well over two minutes, when being sent a page worth of text – but I have this vague memory that my modem/router, in such context known as a “gateway”[1], used to have its MTU at 1480 but is now at 1500). While I initially did think of TCP keepalives, I did not actually think beyond the fact that the application enables SO_KEEPALIVE through the setsockopt call (which should have resolved the issue, in my thinking). But a friend suggested – after I mentioned that it happened within 1 to 2 hours of idleness – that it might be the actual time before the initial keepalive is sent, the number of probes to send if nothing is received from the other side, and how often to send the probes.

This thought was especially interesting because the initial keepalive is sent after (by default, under Linux) 7200 seconds (which is 2 hours). Since it usually took an hour before I noticed it (by actually having a reason to no longer be idle), it would stand to reason that the keepalive time was too high. So to test this, I initially set both sides (server and client) to use a much shorter keepalive time (via the sysctl command). This did not seem to help, however. So, to really figure this out, I sniffed the traffic on both ends. This meant I could see when the server sends a keepalive probe, when (or if) the client sends a response, and, if there is no response, the next probe (presumably). It turns out that my end did receive the keepalive. However, it only received it one time. In other words, if I set the time to 10 minutes (600 seconds) and was idle for 10 minutes, I would receive a keepalive and respond. But 10 minutes later (so 20 minutes of being idle), the keepalive was NOT received. This is when I saw the further probes being sent by the server (none of which were received at the client end, and so the connection was considered ‘dead’ after a certain number of probes). Well, as it turns out, this can be remedied by taking advantage of setsockopt a bit further.
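For reference, the system-wide defaults I was testing against live in sysctl; a sketch of how to inspect and (temporarily) change them follows. The names are the real Linux sysctl keys; the value 600 is just the illustrative figure used above, and writing them requires root:

```shell
# Read the current system-wide TCP keepalive defaults (Linux)
sysctl net.ipv4.tcp_keepalive_time    # seconds of idleness before the first probe (default 7200)
sysctl net.ipv4.tcp_keepalive_intvl   # seconds between unanswered probes (default 75)
sysctl net.ipv4.tcp_keepalive_probes  # unanswered probes before the connection is declared dead (default 9)

# Temporarily lower the idle time to 600 seconds (10 minutes) for testing; requires root
# (this is lost on reboot unless persisted in /etc/sysctl.conf or /etc/sysctl.d/)
sysctl -w net.ipv4.tcp_keepalive_time=600
```

Note these affect every TCP socket with keepalive enabled, which is exactly why the per-socket setsockopt approach below ended up being the better fix.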

As far as I am aware, keepalive is not set by default on sockets (even TCP sockets), so you would in that case need to set that option first. Here is an example of how to set all the related options. Note that I was not interested in playing around with finding the optimal times for the server and client (actually in this case it is for the client, even though the server is the one that sets the socket options). Therefore, the times could potentially be higher than I set them to. This applies to all keepalive values. Nevertheless, for my issue, after the last three calls to setsockopt, recompiling the server, rebooting it and trying again, the problem was resolved (in fact I might not actually need two of the three additional calls, but again I was not wanting to play around with the settings for long). The first call turns on keepalive support and the following three set the options related to TCP keepalives, which I will comment on. This should be considered a mixture of pseudo-code and real code. That is, I am not including error reporting of any real degree, nor am I gracefully handling the error. I’m also not including the creation of the socket or the other related things. This is strictly setting keepalive, printing a basic error (with the perror call) and exiting. Further, I’m not explaining how setsockopt works except for what is in the file, and in the example the file descriptor referring to the socket is the variable ‘s’ (which again is not created for you).

The actual snippet can be found here.

[1]A gateway is actually much more than a router/modem combination. Indeed, there is the default gateway, which allows traffic destined for another network to actually get there (when there is no other gateway in the routing tables to route the traffic through). In general, a gateway is a router (which allows traffic destined for another network to actually get there, and is therefore similar to the default gateway, because the default gateway IS a gateway). It can have other features along with it, but in the sense of ISPs a gateway is often a modem and router combination. A modem, however, in general serves a different purpose than a gateway, which is why I initially brought this up (but forgot to actually – if you pardon any word play – address).

Programmers are Human Too

Yes, as of 2014/06/08 this has a changed title and is quite different. I think this is better overall because it is more to the point (the one I was trying to get across). It was originally about the Heartbleed vulnerability in OpenSSL. I have some remarks about that, below, and then I will write about the new title. I could argue that even this title is not the best: really it is about how things will never go exactly as planned, 100% of the time. That’s a universal truth. First, though, about OpenSSL.

I have, since the time of writing the original post (quite some time ago even), seen the actual source code, and it was worse than I thought (there was absolutely no sanity check – no checks at all – which is a ghastly error and very naive: you cannot ever be too careful, especially when dealing with input, whether from a file, a user or anything else). Of course, I noticed some days ago that more vulnerabilities were found in OpenSSL. The question is then: why do I tend to harp on open source being more secure, generally? Because generally it IS. The reason is the source exists on many people’s computers, which means more can verify the source (for security bugs, or any bugs, as well as whether or not it has been tampered with) and also many more people can view it and find errors (and the open source community is really good about fixing errors, simply because they care about programming; it is a passion, and no programmer who is – if you will excuse the much intended word play, encryption and all – worth their salt will not be bothered by a bug in their software). True, others can find errors too, but that itself is good, because let’s be completely honest: how many find bugs/errors (security included) in Windows? MacOS? Other proprietary software? Exactly: many. The only difference is that with open source it is easier to find (or rather, spot) the errors, and if it is a programmer they might very well fix it and send a patch in (and/or report it – you may not believe that, but if you look at the bugzilla for different software, including security sections, you will find quite a few entries).
Relying on closed source for security (as in: if they cannot see the source code then they cannot find bugs to exploit – which, by the way, is a fallacy unless there is no way to read the symbols in the software and also no way to, for example, use a hex editor or even a disassembler on it) is – in the context of security – not security at all but rather security through obscurity (which I would rather call “insecurity”, “a false sense of security through denial” or even – to be more blunt – “asking for a security problem when and where you don’t expect it”, to give three examples of how bad it is). Indeed, security through obscurity is, just like a poorly designed firewall, worse than none, because you believe you are safe (all the while not being safe, and truly having no idea how bad it is or isn’t), and since you believe you have a safe setup you won’t look into the issue further (rather than constantly changing as new ideas, new risks, new anything, come up). Nevertheless, the fact is programmers are human too, and while some things might seem completely stupid, blind or anything in between, we all make mistakes.

So, essentially: yes, bugs in software or even hardware (which has happened and will happen again) can be beyond frustrating for the user (and they can be just as frustrating to the programmers involved, mind you, as well as to other programmers who need them fixed but cannot do the fixing). But so can a leak in your house, plumbing problems or even nature (a tree falling onto your house, for example). The truth of the matter is, unless you have somehow forgotten and deemed yourself perfect (which I assure you, no one is, especially not programmers – I’m not the only programmer who has observed this, mind you – but no one else is either, which means you are not perfect, either), you cannot realistically expect anyone else to be perfect. Problems will happen, always.

To bring the issue with OpenSSL into perspective – or maybe better stated, to give a non-computer, real life analogy – think about a time when something went wrong (more than the usual things that everyone experiences on a daily basis). For instance, the motor in your car needs replacement. That’s not a daily occurrence (or one can hope not!). While this won’t always happen, I know from experience – and others I have discussed this with agree – that often when things start breaking down, you feel like it is one thing after another, as if you were cursed. How long it goes on (days, weeks, …) will vary, but the fact of the matter is, multiple things go wrong, and often at the worst time and/or when least expected (things are going incredibly well and suddenly something horrible happens).

Well, so too can this happen with software. I think the best way to look at it is this: the bugs have already been fixed, and while it is true that bug fixes often introduce new bugs (because, as I put it: programmers – myself included – implement bugs, and that is completely true), that goes for any new feature (any modification to software is bound to introduce bugs – it will not always happen but it always has the potential). The only kind of software or design (of anything else) that has zero problems is the kind that doesn’t exist. This is why the RFCs obsolete things; this is why telnet and rcp/rsh were replaced with ssh (over time, even! Some were very slow to change over, and when you look at the vulnerabilities, especially those related to trust with rcp/rsh, it is shocking how slow administrators were to replace them!). This is why TCP syn cookies were introduced; this is why everything in the universe (and I use the word universe in the literal sense: indeed, the universe that holds the planets we all know of, including Earth) changes. In short: no matter what safety mechanisms are in place, something else will eventually happen (as for the universe, do solar storms mean anything to Earth? What about the Sun dying? Yes, both do mean something to the Earth!).

So what is the way to go about this? Address problems as they come, in the way you can. That includes, by the way, giving constructive criticism as well as help where you can (which isn’t always possible – e.g., I’m not an electrician so I sure as hell cannot offer advice on an electrical situation, except that I can refer you to an electrician I know to be good, trustworthy and experienced). I think that is the only way to stay semi-sane in such a chaotic world. Whether anyone agrees, I cannot change nor will I try to change their view. All I am doing is reminding others (which I admit is probably not many – but I don’t mind: I’m not exactly outgoing or social, so I don’t mind that I don’t have a widespread audience. I write for the sake of writing, anyway) that nothing is perfect – not humans, not anything else. If you can understand that, you can actually better yourself (and even if you don’t use that fact to better others deliberately, you at least are better for yourself, and incidentally you will better others too, even if only indirectly: how you feel is not only contagious, but if you’re in a better mood or you have insight into something, then others who are around you or work with, deal with or correspond with you will also feel that vibe and/or gain that insight).

Windows XP End of Life: Is Microsoft Irresponsible?

With my being very critical of Microsoft one might make the (usually accurate) assumption that I’m about to blast Microsoft. Whether any one is expecting me to or not, I don’t know but I will make something very clear: I fully take responsibility for my actions and I fully accept my mistakes. I further make the best of situations. As such, I would like to hope everyone else does too. I know that is unrealistic at best; indeed, too many people are too afraid to admit when they don’t know something (and therefore irresponsibly make up pathetic and incorrect responses) or when they make a mistake. But the fact of the matter is this: humans aren’t perfect. Learn from your mistakes and better yourself in the process.

No, I am not going to blast Microsoft. Microsoft _was_ responsible: they announced the end of life _SEVEN YEARS AGO_! I am instead blasting those who are complaining (and complaining is putting it VERY nicely – it is more like the whining of a spoiled brat who throws a temper tantrum when they don’t get their own way on every single thing, despite having been told in advance this would happen) about how they now have to quickly upgrade or stop getting updates, security updates included. For instance, let’s take two different groups. Let’s start with Rosemayre Barry, manager of the London-based business The Pet Chip Company, who stated the following (to the BBC, or at least it was reported by the BBC):

“XP has been excellent,” she says. “I’m very put out. When you purchase a product you don’t expect it to be discontinued, especially when it’s one of [Microsoft's] most used products.”


Sorry to burst your bubble, Rosemayre, but ALL software will eventually be discontinued (just as smoke detectors, carbon monoxide detectors and the like have to be replaced over time and/or are improved over time, and that is not even considering maintenance like battery replacement). You can complain all you want, but this is not only the correct thing technically, it is also economically unfeasible to continue with a product as old as Windows XP. I don’t care how used it is or isn’t (I hardly expect it to be Microsoft’s most used product, however; I would argue its office suite is more used, as it works on multiple versions of Windows and corporations rely on it a lot). I also know for a fact that corporations tend to have contracts with computer manufacturers under which they LEASE computers for a few years at a time, and when the time comes for the next lease they get the more recent software, operating system included. Why would they do that? Well, again, it is economically better for the company, that’s why. And here’s some food for thought: Windows XP was released in 2001, and according to my trusty calculator (i.e., my brain) that means it is almost a 13 year old product (it was released in August and we’re only in April). Well, check this: Community ENTerprise OS (CentOS), a distribution of Linux which is largely useful for servers, has a product lifetime, as far as I remember, of only 10 years. And you know something else? CentOS is very stable precisely because it doesn’t have many updates – in other words, it is not on the bleeding edge. When a security flaw is fixed, the fixes are backported into the older versions of the affected libraries and/or programs.
Indeed, the current GCC version is 4.8.2, and CentOS’s current version (unless you count my backport of 4.7.2, which you can find more info about at The Xexyl RPM Repository – possibly others exist somewhere else, but for the time being the packages I maintain have not been updated to the 4.8.x tree) is 4.4.7, which was released on 2012-03-13, or in other words the 13th of March 2012. Yes, that is over _two years ago_. It means you don’t get the newer standards (even though the most recent C and C++ standards were ratified in 2011, that is not to say that anything past the ratification date is somehow magically going to have it all; in fact, some features are still not complete in the most recent versions), but it also means your system remains stable, and that is what a server needs to be: what good is a server if the service it offers is unstable (and I’m not referring to Internet connection stability! – that is another issue entirely and nothing to do with the operating system) and hard to use? Very little indeed. And realistically, 10 years is very reasonable, if not more than reasonable. Over the span of 10 years a lot changes, including a lot of core changes (and let’s not forget standards changing), which means maintaining it for 10 years is quite significant, and I cannot give anything but my highest praise to the team at CentOS – an open source and FREE operating system. To be fair to this manager, they at least DID upgrade to a more recent Windows, but the complaint itself is beyond pathetic: irresponsible, outright naive and ignorant at best, and stupid at worst.
It is also unrealistic and unfair to Microsoft (and this is coming from someone who is quite critical of Microsoft in general, someone who has railed on them more than once – in security and otherwise, in quotes about their capabilities and in articles alike – and quite harshly too; examples, one of which even includes a satirical image I made that is directed at Windows in general: Microsoft’s Irresponsible Failures and Why Microsoft Fails at the Global Spam Issue).

Next, let’s look at what the UK government has done: they are paying Microsoft £5.5m to extend updates of Microsoft Windows XP, Office 2003 and Exchange 2003 for ONE year. That is absolutely astonishing and, I would think – to UK tax payers – atrocious. What the hell are they thinking? If SEVEN years of warning was not enough time, what makes ONE extra year worth THAT MUCH? Furthermore, and most importantly, if they could not UPGRADE in SEVEN YEARS, what makes any rational being expect them to UPGRADE WITHIN A YEAR? They claim they’re saving money. Yeah, right. Not only are they paying money to get updates for another year, they will STILL have to upgrade in due time if they are to keep getting updates. Think of it this way: when a major part of your car dies, you might consider fixing it. It will likely be pricey. If, however, shortly thereafter (let’s say within a year or two) another major part of your car dies, and the car has been used for quite some years and is certainly out of warranty, what is the most logical and financially sound choice? Is it to assume that this will be the last part to die – surely nothing else can go wrong! – and to pay for it and then wait until the third part dies (which it almost certainly will; it is mechanical, and mechanical things die)? Or is it maybe better to cut your losses and get a new car? I think we all know the answer. ‘We’, of course, does not include the UK government.

The bottom line here is quite simple, though: no, Microsoft is not being irresponsible, and they are not being unreasonable either – they gave SEVEN YEARS notice. The only irresponsible and unreasonable people – and companies and/or government[s] – are those who STILL use Windows XP, and especially those who are now forced to upgrade and at the same time whine worse than a spoiled brat who is used to getting his way but throws a tantrum the one time he doesn’t. Lastly, I want to point out the very dangerous fallacy these people are actually aligning themselves with. Those of us who remember when the TELNET and RSH protocols were prevalent also remember that there came a time when enough was enough and standards had to change (e.g., to secure shell, aka ssh). Those who had any amount of logic in them UPGRADED. Many (though not as many as should have) saw the many problems with those protocols for far too long, among them the following (and note that these are on Unix systems, and yes, that means NO system is immune to security problems, be it Windows, Mac, Unix or anything else; incidentally, Unix systems are what are typically used for servers, which means customer data in databases running on those servers – especially then, as Windows NT was still in its infancy by the time most, but probably not all, changed over):

  1. The fact that a common configuration would allow “you” to remotely log in to a machine as a user from ANY HOST WITH NO PASSWORD. And of course it was PERFECTLY SAFE because, after all, they won’t have a user with the same name, right? Well, did it ever occur to you that they could CREATE a user with that name? And ever hear of grabbing the password file remotely to find user names? Or an unscrupulous employee who could do the same? An employee who was fired and wants revenge (and happens to have user names, or maybe even stole data before they were completely locked out after being fired? Maybe they even left a backdoor in!)? For those who are slow, that is sarcasm; it was NEVER safe and it WAS ALWAYS naive at best (this same problem is the trust relationship, and that is one of the biggest problems with security – too much trust is given far too easily). And indeed, Unix – just like the predecessor to the Internet – was NEVER designed with security in mind. That is why new standards are a good thing: to address problems and to extend, deprecate or obsolete standards (like, I don’t know, IPv6 as opposed to IPv4, anyone?).
  2. No encryption means sniffing could show the user and password (as well as other information in the traffic) to the sniffing party. Assuming that there is no one to sniff your traffic is security through obscurity at best, and that is arguably worse than no security (it is a false sense of security, and when taken to the extreme some will refuse to believe it is a problem and are therefore blinded to the fact that they already are, or could at any moment be, compromised).
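To make item 1 concrete: the trust relationship in question was configured through files like ~/.rhosts (or the system-wide /etc/hosts.equiv). A sketch of the kind of entries involved – the hostnames here are made-up, and the trailing comments are for exposition only, not part of the file format:

```
trusted.example.com          any user with the same name on this host may log in, no password
trusted.example.com alice    the user alice on this host may log in as me, no password
+ +                          the infamous wildcard: ANY user from ANY host, no password
```

One forged hostname, stolen account or careless wildcard, and the “trusted” host becomes a free pass – which is exactly the too-much-trust problem described above.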

Consider those two examples for a moment. Finally, take the logic of “most people use it” or “it is so convenient and we shouldn’t have to upgrade” and where do you end up? Exactly like those not upgrading from Windows XP, or otherwise throwing tantrums about having to upgrade from Windows XP to something more recent (despite the seven years notice and it being in the news more and more as the deadline approached) or else not receive updates. In other words, you are staying behind the times AND risking your data, your customers’ data and your system (and that means your network, if you have a network). And you know something? You had it coming to you, so enjoy the problems YOU allowed, and let’s hope that only you or your company is affected and not your customers (because it would be YOUR fault).

Preventing systemd-journald and crond from flooding logs

Update on 2014/05/21: I should point out that the change I suggest in /etc/systemd/journald.conf in fact produces a (non-fatal) error. But it is simply ignored. I’ve not bothered to play with it beyond that. I somewhat suspect (but could very well be wrong) that uncommenting the line and leaving it empty will either not work or be set to the default. However, since it simply puts a warning in the logs but still does as I want, I don’t see it as harmful or a problem (certainly not enough to test more).

I will come out and admit it fully: there has always been at least one thing that bothered me a great deal about systemd. To be brutally honest there are quite a few things that have bothered me. But one of the most obnoxious is something the developers seem to not understand as a problem (despite the bug reports, and despite some people’s concern that someone had compromised their system, due to the way the message is written): every time cron runs a task it shows not one but two messages in the system log (/var/log/messages) and the journal. It is absolutely infuriating, as it fills the log files, which then get rotated out (due to size reaching its cap), and besides that, it is REALLY hard (short of grep -v on a pattern over multiple log files, which should NOT be necessary!) to find other important log messages in the huge ugly disaster the log file is left in. Equally bad is that there is a log file, called – check this out – /var/log/cron, with the information that should be ALL that is needed. But of course not; not only does it NEED to be in /var/log/messages and not only does it NEED to be in /var/log/cron, it ALSO NEEDS to be in the journal, the so-called improvement over logs. /sarcasm. Three places for the same bloody message? Really? What the hell is that? Anyone who knows enough to check logs will know that there are MULTIPLE LOGS for DIFFERENT reasons! So while I titled this post as being about preventing the flooded logs, that realistically is FAR too nice. It should be more like: making systemd shut the hell up and knock off the stupid log flooding (which incidentally could be considered by some a DoS – denial of service – attack, since it makes it much more difficult to normally manage and review logs).

Well, I have had WAY too much of this crap, and while I’m easily irritated (and agitated lately) I think I’m not the only one who is completely fed up with the way they are handling it (or not handling it, rather). So here is how you can make this flood stop. First, though, the messages look like this:

Mar 16 04:55:01 server systemd: Starting Session 3880 of user luser.
Mar 16 04:55:01 server systemd: Started Session 3880 of user luser.

in /var/log/messages. As you can imagine, an unsuspecting user might see that for some of the system cron jobs (e.g., those in /etc/cron.hourly/, which are run by root) and think that someone has logged in as root on their system (when in fact it is cron). Conveniently, the clowns responsible make it end up in that file even though it is already in the journal. Why is that? Oh, something like this, taken from journald.conf(5) (that is: man 5 journald.conf):

       ForwardToSyslog=, ForwardToKMsg=, ForwardToConsole=
Control whether log messages received by the journal daemon shall be forwarded to a traditional syslog daemon, to the kernel log buffer (kmsg), or to the system console. These options take boolean arguments. If forwarding to syslog is enabled but no syslog daemon is running, the respective option has no effect. By default, only forwarding to syslog is enabled. These settings may be overridden at boot time with the kernel command line options “systemd.journald.forward_to_syslog=”, “systemd.journald.forward_to_kmsg=” and “systemd.journald.forward_to_console=”.

Someone remind me: wasn’t the idea of Fedora Core 20 to REMOVE the syslog daemon from the default install because the journal was sufficient, because storing logs twice was wasteful (Ha! Nice number, but too bad it is lower than the truth in at least the case of cron) and because the journal has had enough time to show it works? No, no need to remind me: that absolutely was their idea! Yet they clearly didn’t think it through very well, did they? If they forward to syslog, then what about systems that are upgraded rather than freshly installed? The syslog daemon will be installed, geniuses! Yet here you are, forwarding to syslog. Brilliant – if your idea of brilliant is beyond stupid.

Oh, and if you think the rant is done, I’m sorry to say it is not. What you also find with cron jobs is this, in /var/log/cron as it always has been (not the same entry or the same instance, but it shows you the info – in fact, it shows more specific info, like WHAT was executed, rather than just a vague and unhelpful ‘session started’ for the user – and not two nonsense lines about ‘starting’ and then ‘started’; whatever happened to “no news is good news”, i.e., if there is no output there is no error?):

Mar 29 20:55:01 server CROND[2926]: (luser) CMD (/home/luser/bin/

(There also exist the normal run-scripts entries for the hourly, daily and monthly cron jobs, but those also show the commands executed.)

And then there is the third copy: the journal, which includes BOTH of the above:

Feb 16 18:05:01 server systemd[1]: Starting Session 1544 of user luser.
Feb 16 18:05:01 server systemd[1]: Started Session 1544 of user luser.
Feb 16 18:05:01 server CROND[13241]: (luser) CMD (/home/luser/bin/

Redundancy is good in computing, but NOT in this way. Redundancy is good with logs, but again, NOT in this way. No, this is just pure stupidity.

Now then, here’s how you can make journald cut this nonsense out.

  1. In /etc/systemd/ you will find several files. The first one to edit is “journald.conf”. In it, you need to uncomment (remove the # at the start of the line) the line that starts with: #Storage=
    You then need to change whatever is after the = to “syslog” (without the quotes).
  2. The next file (same directory) is “user.conf”. Again, you need to uncomment a line to activate the option. The line is #LogTarget= and you want to change what is after the = to “syslog” (again, without the quotes).
  3. Next you need to edit “system.conf” (same directory still) and do the same change as in “user.conf” (note: I am not 100% sure that you need to do it for both “user.conf” and “system.conf” and if only one is required I don’t know which one nor do I care).
  4. Now, this may vary depending on what syslog daemon you have. I’m assuming rsyslogd. If that is the case change to the directory: /etc/rsyslog.d/
  5. Once in /etc/rsyslog.d/ create a file that does not exist – maybe cron.conf – and add the following lines:
    :msg, regex, "^.*Starting Session [0-9]* of user" stop
    :msg, regex, "^.*Started Session [0-9]* of user" stop
    Note on this: "stop" is the syntax for newer versions of rsyslogd, which you will have if you're using Fedora. For older versions, change the "stop" to a tilde (a "~"). If you check /var/log/messages after restarting rsyslogd and notice a problem with "stop", then try the other form (likewise, if you try "~" first, it will warn you that "~" is deprecated). Combined with the changes to the systemd files, these two rules mean that only the syslog receives the session messages, and it discards them (which is fine because, as I already noted, /var/log/cron has that info).
  6. To enable all of this, do the following (you need to be root here – in fact, you need to be root for all of the steps):


# service rsyslog restart
# systemctl restart systemd-journald.service

Note the following: the second command may or may not be enough. I only did this on a remote server, and I was not about to play the game of "is it because I didn't restart the right service, or is something else not properly configured?", so I restarted both. I've yet to do it on any local machines, so I cannot say more than that. If rebooting is an option and it does not work as described above, a reboot could be one way around it.
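As a sanity check before touching the real files, the edits from steps 1–3 and the filter patterns from step 5 can be tried out on scratch copies. Here is a sketch: the file contents below are minimal stand-ins for the stock ones (the real files in /etc/systemd/ have many more commented lines), and grep stands in for rsyslogd's regex engine, which is fair here because the patterns use nothing engine-specific:

```shell
# Try the systemd config edits on throwaway copies first.
workdir=$(mktemp -d)
printf '[Journal]\n#Storage=auto\n' > "$workdir/journald.conf"
printf '[Manager]\n#LogTarget=journal-or-kmsg\n' > "$workdir/user.conf"
printf '[Manager]\n#LogTarget=journal-or-kmsg\n' > "$workdir/system.conf"

# Uncomment the option (if commented) and set it to syslog, one pass per file.
sed -i 's/^#\{0,1\}Storage=.*/Storage=syslog/' "$workdir/journald.conf"
sed -i 's/^#\{0,1\}LogTarget=.*/LogTarget=syslog/' "$workdir/user.conf" "$workdir/system.conf"
grep '^Storage=' "$workdir/journald.conf"    # Storage=syslog
grep '^LogTarget=' "$workdir/user.conf"      # LogTarget=syslog

# Check the two rsyslog filter patterns against sample journal lines:
# they must catch the session noise but leave a real cron line alone.
pat1='^.*Starting Session [0-9]* of user'
pat2='^.*Started Session [0-9]* of user'
echo 'systemd[1]: Starting Session 1544 of user luser.' | grep -q "$pat1" && echo "pat1 matches"
echo 'systemd[1]: Started Session 1544 of user luser.'  | grep -q "$pat2" && echo "pat2 matches"
# A made-up cron line (the command is hypothetical) must not match either pattern.
echo 'CROND[13241]: (luser) CMD (/usr/bin/true)' | grep -q -e "$pat1" -e "$pat2" || echo "cron line untouched"
```

Once the output of the two greps looks right on the copies, the same sed commands can be run against the real files (after backing them up).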

Questions that might come to mind for some:

  1. Since we redirect the journal to syslog, do we still see the usual log messages? Yes, you do. For instance, you'll see when someone uses 'su', and you'll see (for example) when a restarted service writes its stop and/or start messages to the syslog – in /var/log/messages too.
  2. What about the fact that this shunts cron messages out of the syslog? Well, as I mentioned, the information is stored (in more thorough form) in /var/log/cron, so you won't lose it. The only place that loses it is the place it should not have been stored in the first place: /var/log/messages.
  3. How does this affect the journal? Good question. My guess is that future log messages will not be stored in the journal but only sent to syslog; I am not 100% certain of this, however, and I will know in time if I bother to check. It would depend on how the journal interprets the options – indeed, many other options that I thought might solve the problem were definitely not interpreted as I guessed. So the question really comes down to whether directing to syslog keeps messages out of the journal or sends them to both. For those low on disk space, though, note that the journal uses far more than plain log files, while log rotation, backup, remote storage and compression (if you have them set up) all work just as well with the latter. If you do a 'systemctl status systemd-journald.service' you might see something like:
    Runtime journal is using 6.2M (max allowed 49.7M, trying to leave 74.6M free of 491.1M available → current limit 49.7M)
    Permanent journal is using 384.6M (max allowed 2.8G, trying to leave 4.0G free of 22.5G available → current limit 2.8G)
  4. Perhaps most importantly: does this prevent showing users logging in? No. You’ll still see, for example, the following:
    Mar 29 21:49:37 server systemd-logind: New session 167 of user luser.
    and when they log out:
    Mar 29 21:49:39 server systemd-logind: Removed session 167.

All that noted, hopefully someone will see this and be helped by it. What would be more ideal, however, is if the maintainers actually fixed the problem in the first place. Alas, they are – just like you and me – only human, and to be fair to them, they aren't being paid for the work either.

whois and whatmask: dealing with abusive networks

(Update on 2013/03/11: I added another grep command, as I just discovered another line that gives the netblock of an address directly from whois, so you do not have to work out the proper CIDR notation yourself. Ironically, the IP in question was from the same ISP I wrote about originally. Regardless, the second grep output shows one of the many differences between whois server outputs.)

My longest standing friend decided at the end of last year that he wanted to get me some books (thanks a great deal, by the way, Mark – it means a damn lot, and I'm eternally grateful we've stayed in contact throughout the years). He lives in England and I live in California, and we've "known" each other for almost 18 years. There was a problem with the delivery, however, and he was also in New York for part of this time as part of his job, so the gifts did not arrive until yesterday. Now, of course I could not know every detail of the books, but one of them was a Linux networking book. It is more like a recipe book, and while there is some of it I know (some very well), and some that is not useful to me, there is going to be something in it of interest or use. Which brings me to this post. Obviously I know of the whois protocol, but what I did not know about is the utility 'whatmask'. There is a similar utility called 'ipcalc', but on CentOS it is very different from what one expects and I found many problems with it. So I was looking at the book (the name fails to come to mind at this time), briefly skimming sections, and I noticed they discussed this very thing and mentioned the alternative 'whatmask' on CentOS and Fedora Core.

I thought this would be very interesting to see. Sure, you can do it by hand, but this is much more time efficient and gives you a quick summary. Further, with whois, you can confirm your suspicions. Yes, I know that when whois shows the bounds of a well-known private block, I can tell that the CIDR notation is /8. But that is beside the point, and if I were to consider that, then I would have nothing to write about (and it has been quite a while since I have written anything strictly technical – something I've been wanting to correct since my birthday last month, but I have been too busy working on a project that is pretty important to me).

Now, then, about dealing with abusive networks. Firstly, there are many ways to handle one; I am obviously not condoning or suggesting anything malicious towards their end. The Linux kernel has netfilter, which is what iptables (and ip6tables) uses – the IPv4 (and IPv6) firewalls, respectively. Yes, I could write an iptables rule to stop all traffic from a certain network, but this is less efficient than simply adding a blackhole route for that range. The problem was: how do you determine the entire range of IPs that they own? I seem to remember that they had several blocks. Further, a whois on the domain won't show the network block (forget for a moment that it does when you use an IP in the netblock). Either way, the procedure below can be done for any IP.

The network in question is located in Taiwan. The abuse is not so much attack attempts, and it is not necessarily the owner's fault (it is an ISP). But what it is is a lot of spam attempts – to accounts that don't exist on my end, and relay attempts to other hosts, neither of which I allow, just like all responsible administrators; indeed, running an open relay (notwithstanding an administrator who unknowingly makes a mistake or has a flaw exploited on their server) is nothing but malicious, as far as I am concerned. Since this is an ISP (I know it is, because I remember seeing dynamic IPs in their blocks before), they don't need anything from my network. And even if they have corporate customers, the fact of the matter is that I am not a customer of said corporations, I've never dealt with them, and I don't actually care: abuse is not something anyone on the receiving end would tolerate (just as, if someone walked up behind you and hit you in the back, you would not exactly tolerate it). So let us take an IP in their network and see how to determine all the IPs in the block it belongs to:

One of the IPs is one that I specifically added a blackhole route to, and that means one thing and one thing only: I saw it attempt what I described above. So what do you do? Well, firstly, I run fail2ban (one option of many) and I'm fairly restrictive on how many failures I allow (like, 1) before they are blocked. But let's assume you want to take care of ALL IPs in that block (because you've seen many over the years) and you don't even want to give them a chance to connect to your services. Then you do the following. Note that I am limiting the output here.

$ whois | grep -E 'NetRange|inetnum|CIDR'
inetnum: -

Note that if you see CIDR in the output (see also the end of this post, where I give another whois command piped to grep that shows a line with the CIDR notation directly), then you have the network block right there. If, however, you see NetRange or inetnum (there may be other field names I've not seen, so your mileage may vary – it may be wise not to pipe the output to grep at first), then you don't have the block, at least not in a notation that setting a blackhole route will accept (again, see the end of the post, where I note another field that gives the entire netblock).
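Since the field name varies by registry (ARIN uses NetRange and CIDR; RIPE and APNIC use inetnum; and some outputs carry a Netblock line), one low-tech approach is to save the whois output once and grep it for every name you know about. A sketch – the sample output below is fabricated, using documentation address ranges, standing in for a real `whois <ip> > out.txt` run:

```shell
# Grep a saved whois output for every netblock field name mentioned in
# this post. The contents of out.txt are made up (TEST-NET documentation
# addresses); in practice you would run: whois <ip> > out.txt
cat > out.txt <<'EOF'
NetRange: -
CIDR: Example Net
EOF
grep -E 'NetRange|inetnum|CIDR|Netblock' out.txt
rm -f out.txt
```

The netname line is deliberately not matched: only the fields that carry the block's bounds are of interest here.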

Now, the inetnum output above tells me that the CIDR notation is /16, so if I add a blackhole route for that /16, I am set. But assume for a moment that you don't know that. Here is where whatmask comes in handy – sort of. It needs a CIDR notation, with or without an address. Some anchors: /32 is a single address (which whatmask will report as 0 usable addresses, because it treats what you give it as a network block, whose network and broadcast addresses are then the address itself); /31 is 2 addresses (again 0 usable by that reckoning, since a block – in IPv4 – needs a network address and a broadcast address); and /0 is every single IPv4 address (2^32 of them, much as IPv6 has 2^128). The common block sizes, in CIDR notation, are /8, /16 and /24 (/8 having the most addresses, /16 fewer, /24 the fewest of the three). So the possible CIDR numbers lie between /0 and /32, and for a real network block it won't be /0, /31 or /32 – you can simply experiment if you don't know. Over time you get used to recognising the proper CIDR notation, but understand this: the number after the slash is how many bits are reserved for the network portion of the address. So if it is /8, then 32 − 8 = 24 bits are available for hosts, which is why the higher the number after the slash, the fewer IPs are available. When you find the right number, you can then do this:
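That arithmetic is easy to check in the shell: a /n block leaves 32 − n host bits, so it holds 2^(32−n) addresses, two of which (network and broadcast) are not usable:

```shell
# Total and usable addresses per common CIDR prefix: 2^(32-n) total,
# minus the network and broadcast addresses (valid for n <= 30).
for n in 8 16 24; do
    total=$(( 1 << (32 - n) ))
    printf '/%s: %s total, %s usable\n' "$n" "$total" "$(( total - 2 ))"
done
```

The /16 row comes out to 65,534 usable addresses, agreeing with the "Usable IP Addresses" line in the whatmask output below.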

$ whatmask
IP Entered = ..................:
CIDR = ........................: /16
Netmask = .....................:
Netmask (hex) = ...............: 0xffff0000
Wildcard Bits = ...............:
Network Address = .............:
Broadcast Address = ...........:
Usable IP Addresses = .........: 65,534
First Usable IP Address = .....:
Last Usable IP Address = ......:

Now observe the following things:

  • The result of the filtered whois output shows: -
  • The Network Address line in the whatmask output is:
  • The Broadcast Address line in the whatmask output is:
  • The First Usable IP Address line in the whatmask output is:
  • The Last Usable IP Address line in the whatmask output is:
  • Add these together, and you know that the netblock IS - which means that the proper netblock in CIDR notation IS
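whatmask's core computation can be reproduced with plain bit arithmetic in the shell. A sketch using – a private address standing in for the real one, since I am not reproducing the actual IPs here – with the /16 mask from above:

```shell
# Derive network and broadcast for a stand-in address ( with
# a /16 prefix, using plain bit arithmetic (what whatmask does internally).
ip=
prefix=16
IFS=. read -r a b c d <<EOF
$ip
EOF
addr=$(( (a << 24) | (b << 16) | (c << 8) | d ))
mask=$(( (0xffffffff << (32 - prefix)) & 0xffffffff ))
net=$(( addr & mask ))
bcast=$(( net | (~mask & 0xffffffff) ))
# Turn a 32-bit value back into dotted-quad form.
tostr() { echo "$(( $1 >> 24 )).$(( ($1 >> 16) & 255 )).$(( ($1 >> 8) & 255 )).$(( $1 & 255 ))"; }
echo "Network Address   = $(tostr "$net")"     #
echo "Broadcast Address = $(tostr "$bcast")"   #
echo "Usable addresses  = $(( (1 << (32 - prefix)) - 2 ))"  # 65534
```

The first and last usable addresses are then simply the network address plus one and the broadcast address minus one.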

Putting that together, you can add a command like the following to your firewall script, or to some other script that runs when you boot your computer (so the route is restored when you next reboot). Note the # prompt: you need to be root to do this, so either put sudo in front of the commands, or su to root, do what you need to do, and then log out of root:

# ip route add blackhole
# ip route show

(Technically, yes, the ip route show command will show more output, but for the sake of brevity I am showing only the route we added.)
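If you don't already have a firewall or boot script to put the command in, one way on a systemd machine is a small oneshot unit. This is only a hedged sketch: the unit name is made up for illustration, and the netblock shown is (a documentation range) standing in for whatever CIDR you worked out above:

```ini
# /etc/systemd/system/blackhole-routes.service  (hypothetical name)
[Unit]
Description=Install blackhole routes for abusive networks
After=network.target

[Service]
Type=oneshot
RemainAfterExit=yes
# is a documentation range standing in for the real netblock.
ExecStart=/sbin/ip route add blackhole

[Install]
WantedBy=multi-user.target
```

Enable it once with `systemctl enable blackhole-routes.service` and the route is installed on every boot without editing any existing script.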

After this, no IP in that range will ever reach your box directly (I won't get into what happens if they breach another box in your network and connect from there, nor will I discuss segregating networks, because those are other issues entirely).

As for the second grep command, regarding whois directly giving you the netblock (note that I'm only searching for one string this time, because I already showed the others I'm aware of and this specific IP uses the field I'm searching for – indeed, I first ran whois on the IP with no grep, and that's when I discovered this line):

$ whois | grep Netblock

So from that, as root, you could add a route to that range (or do whatever – put in an iptables rule or some such; blackhole routes are much more useful when blocking an entire subnet because they use fewer resources, though by how much I don't know, and I have no real way to benchmark it. I don't actually care, though: the point of this post was not adding routes or firewall rules, but dealing with abusive networks. The same can be applied to the published lists of networks known to be sources of attacks, or to any network you want to block for abuse or for some other reason entirely).

Dangerous and Stupid: Making Computer Programming Compulsory in School

This is something I thought of quite some time ago, but for whatever reason I never got around to writing about it. However, since it seems England is now doing exactly what I would advise against, and since I'm not actually programming (for once), I will take the time to write this. And it's probably for the best that I'm not programming today, given how tired I am – though I guess the article and its clarity will either show that or not. Either way, here goes:

So firstly, what is this all about? To put it simply: in September, all primary and secondary state schools in England will require students to learn programming ("coding" is the way many word it). To make it worse, it seems they do not know that there is a real difference between programming and coding (and I hope this is just, for example, the BBC wording it that way).

Although I discussed that very topic before, let me make it clear again, since it is pretty important (even if those involved won't see this). You write code, yes, but much like a building contractor NEEDS plans and a layout of the building BEFORE construction, you REALLY NEED a plan BEFORE you start to write the code. If you don't, then what are you really going to accomplish? If you don't even know what you're TRYING to write, then how WILL you write it? You might as well rewrite it SEVERAL times and guess what? That is EXACTLY what you will be doing! How do I know? Because I've worked on real programming projects as well as stupid programs with no real use, that's how. If you don't have a purpose (what will it do, how will it behave if the user gives invalid input, how will the output look, etc.) you are not going to learn, because all you're doing is writing code with no meaning. Besides not learning properly, you're more likely to pick up bad programming practices (because, after all, you're not really working on anything, so "surely it is OK if I just use a hack or don't use proper memory management!"). The real danger is that the fact it APPEARS to work further strengthens your reasons to use said bad practices in REAL projects (just because a computer program does not crash immediately, in the first hour of run time, or even all the way to the program finishing – for those that are meant to finish, anyway – does NOT mean it is functioning properly; sorry, but it is NOT that simple). There are many quotes about debugging, and there is a saying (I cannot recall the exact ratio, but I want to say 80:20 or 90:10) that some large percentage of the time on a programming project is spent debugging – and it is not exactly a low number, either.

The problem is this: computer programming involves a certain aptitude, and not only will some students resent this (and just one student resenting it is a problem with this type of thing), just as they resent other subjects, some might still enjoy it without learning it properly, which is a risk to others (see the end of this post). Also, you cannot teach security, and if you cannot teach security, you sure as hell cannot teach secure programming (and it's true: schools don't, which is why there are organisations that guide programmers in secure programming – OWASP for web security alone, and others for system and application programming). As for resentment, take me, back in high school. I didn't want to take a foreign language because I had no need for it, I was very, very ill at the time (much more than I am now), and I have problems hearing certain sounds. Of course, the school's naive "hearing tests" told them otherwise, even though I explained time and again that, yes, I hear the beeps, but that does not mean much in the way of letters, words and communication when it comes to learning a language (the irony is that I had hearing tubes put in when I was three – perhaps the school needed them? – so you would think they could figure this out, but they were like all schools: complete failures). All of which ultimately would (and indeed DID) make it VERY difficult to learn another language. But I was required to take a foreign language. So what did I do? I took the simplest of the offered languages (simplest in terms of whatever those 'in the know' suggested), for the least number of years required (two), and I basically learned only what I absolutely needed to pass (in other words, I barely got the lowest passing mark, which by itself was below average) and forgot it in no time after getting past the course.

The fact that programmers in the industry just increase statically sized arrays to account for users inputting too many characters – instead of allocating the right amount of memory (and remembering to deallocate it when finished), or using a dynamically sized container (or string) like C++'s vector (or string class) – says it all. To make it more amusing (albeit in a bad way), there is a very relevant report, noted on the BBC, from February of 2013. I quote parts of it below and give the full link at the end.

Children as young as 11 years old are writing malicious computer code to hack accounts on gaming sites and social networks, experts have said.


“As more schools are educating people for programming in this early stage, before they are adults and understand the impact of what they’re doing, this will continue to grow.” said Yuval Ben-Itzhak, chief technology officer at AVG.

Too bad adults still do these things then, isn’t it? But yes, this definitely will continue, for sure. More below.

Most were written using basic coding languages such as Visual Basic and C#, and were written in a way that contain quite literal schoolboy errors that professional hackers were unlikely to make – many exposing the original source of the code.

My point exactly: you'll teach mistakes (see below also), and in programming there is no room for mistakes. Thankfully here, at least, it was not stealing credit card numbers, stealing identities or anything of that degree of seriousness. Sadly, malware these days has no real art to it and takes little skill to write (anyone remember some of the graphical and sound effects in the payloads of the old malware? At least back then any harm – bad as it could be – was done to the user, rather than on a global scale for mass theft, fraud and the like. Plus, the fact that most viruses in the old days were written in assembly shows how much has changed, skill-wise, and for the worse).

The program, Runescape Gold Hack, promised to give the gamer free virtual currency to use in the game – but it in fact was being used to steal log-in details from unsuspecting users.


“When the researchers looked at the source code we found interesting information,” explained Mr Ben-Itzhak to the BBC.

“We found that the malware was trying to steal the data from people and send it to a specific email address.


“The malware author included in that code the exact email address and password and additional information – more experienced hackers would never put these type of details in malware.”


That email address belonged, Mr Ben-Itzhak said, to an 11-year-old boy in Canada.


Enough information was discoverable, thanks to the malware’s source code, that researchers were even able to find out which town the boy lived in – and that his parents had recently treated him to a new iPhone.

Purely classic, isn’t it? Sad though, that his parents gave him an iPhone while he was doing this (rather than teaching him right from wrong). But who am I to judge parenting? I’m not a parent…

Linda Sandvik is the co-founder of Code Club, an initiative that teaches children aged nine and up how to code.

She told the BBC that the benefits from teaching children to code far outweighed any of the risks that were outlined in the AVG report.

“We teach English, maths and science to all students because they are fundamental to understanding society,” she said.

“The same is true of digital technology. When we gain literacy, we not only learn to read, but also to write. It is not enough to just use computer programs.”

No, it isn't. You're just very naive, or an idiot. I try to avoid direct insults, but it is the truth, and the truth cannot be ignored. It IS enough to use computer programs; most people don't even want to know how computers work: THEY JUST WANT [IT] TO WORK AND THAT IS IT. There are few – arguably no – so-called benefits. Why? Because those with the right mindset (hence aptitude) will either get into it or not. When they do get into it, at least it's more likely to be done properly; if they don't, then it wasn't meant for them. Programming is a very peculiar thing in that it is one of the only black-and-whites in the world: you either have it in you or you don't. Perhaps instead of defending the kids by suggesting that the gains outweigh the risks (which ultimately puts the blame on them, and even I, someone who doesn't like being around kids, can see that that is not entirely fair – shameful!), you should be defending yourself! That is to say, you should be working on teaching ethical programming (and if you cannot do that – because, say, it's up to the parents – then don't teach it at all), rather than taking the "here it is, do as you wish" (i.e., lazy) way out. Either way, those who are into programming will learn far more on their own, and much quicker too (maybe with a reference manual, but still: they don't need a teacher to tell them how to do this or that; you learn by KNOWING combined with DOING and EVALUATING the outcome, then STARTING ALL OVER). Full article here:

To give a quick summary of everything, there is a well known quote that goes like this:

“90% of the code is written by 10% of the programmers.” –Robert C. Martin

Unfortunately, though, while that may be true (referring to programming productivity), there is a lot of code out there that is badly written, and that risks EVERYONE (even if my system is not vulnerable to a certain flaw that is abused directly by criminals, I can be caught up in the fire, if that only means bandwidth and log file consumption on my end; worse, however, is when a big company has a vulnerable system in use, which ultimately risks customers' credit card information, home addresses and any other confidential information). This, folks, is why I put it under security, and not programming.

Fedora Core 20 Oddities

Fixing a mistake that I had right the first time but erroneously ‘fixed’.
In actuality, the journal does include more than /var/log/messages does (I noticed this after the fact, which is when I documented how to prevent systemd from flooding the logs… but forgot about this post until yesterday). Still, as pointed out, neither the journal nor messages needs to show information on (for example) cron jobs. Furthermore, the fact remains that /var/log/messages – and /var/log/ itself, excluding the journal – is smaller than the journal by a fair bit (at this time: logs + journal = 818MB, logs − journal = 57MB).

Addendum on 2013/12/30:

As I hoped (and somehow expected it to be, but I was in a really bad state of mind and very impatient, hence not giving it time – which I admit is very shameful and even hypocritical on my end), the issue with libselinux was in fact a bug. So that makes updating remote servers (in a VM) much less nerve-wracking (the delay on the relabel, for instance, would be concerning, as there would be no way to know if there was a problem until later on):

- revert unexplained change to rhat.patch which broke SELinux disablement

I still find it odd that they would remove the MTA and syslog, but to be fair, at least they are not removed from the OS itself but merely from the core group of packages. Then there is the question of why I keep this post at all. Because I find it odd (even if some of the most brilliant things that seem normal now were originally deemed odd), and what is done is done, which means it would be fake on my end to suddenly remove it (besides, I do give credit to the Fedora Project too, which is a good thing for anyone who might only see the negative at times, as I myself was doing at the time of writing). That's why. Unrelated: to anyone who happens to see this around the time of the edit date – while I don't really see New Year's (most holidays, in fact) as anything special, I wish everyone a happy new year.

Addendum on 2013/12/27:

Two things I want to point out. The first is a specific part of my original post. The second is actually giving much more credit to Fedora Core than I may seem to give in the post. In all honesty, I really value Fedora Core, what they have done and how far they have come; also, the projects I maintain under Fedora need an up-to-date distribution, because I use it to its full potential (e.g., the 2011 C and C++ standards are not supported by the older libraries on less frequently updated distributions). So despite my complaints in this post, I want to thank the Fedora project for how far they have come, for continuing the work, and for it actually being a very good distribution. Keep it up, Fedora. Nothing is expected to be perfect and you cannot please everyone, but the fact that the software in question can still be installed (and is not removed completely) is really enough to make anything that might otherwise be very annoying merely a nuisance for new installs. Here are my two notes, then:

First, the part about log file size. Last night, after posting this, I realised something that – while at first thought it might make my point less valid, and realistically that would be nice, as it would be one less complaint – actually makes it a little worse. The problem? The journal should really be compared to /var/log/messages alone, which means the extra size is worse than I stated: instead of the journal being bigger than all of /var/log/ by 313MB, it is (considering all journal files) 313MB minus the 7.9MB of all my /var/log/messages files (again: the former compressed, the latter not) – that is, 305.1MB larger than the one log type it replaces (the syslog).

Second, to be fair to Fedora Core: I could probably have worded 'Quality Control' better. I was a bit irked by the SELinux issue (the configuration file not being a configuration file) and, as I noted, I've been fairly agitated lately, too. In fact, to be even more fair, Fedora has come a very long way (I remember trying it at release 1 or 2 and having quite a lot of trouble due to hardware support, or lack thereof – but realistically it was probably not even that bad when you consider that a distribution is not a single piece of software: there is the toolchain, the kernel, the editors, the desktops and much more, and all of that takes time to become stable – which it now is – and fully functional). While I find some of the things they decided to change this time around quite laughable, I still value their work and I will still use the distribution, and as they point out, there is no harm in leaving a syslog package or an MTA installed (I was just dumbfounded that a Linux distribution – no matter which one – would think it a good idea to remove both the syslog daemon and the MTA from the install). So even if this post seems rather aggressive and thankless towards Fedora Core, I really am quite thankful for their work. As I noted: it is a rant, and as a rant it is critical of certain things, and usually not constructively so (which is why I added this note). In fact, I will change the title of this post and the link, too. It is only fair, and it is the right thing to do.

The original post is as follows. (I'm not updating it to reflect the title change, as I already made the point clear above, but for what it's worth this is not really about quality control: the update went quite smoothly – I just find some of the things changed rather unlike a Linux distribution.)

Need an example of horrible quality control? Well, let me tell you about this operating system I use – one that I'm usually quite fond of, one that is quite old nowadays, and one which I feel has gone back to its early days as far as quality is concerned. Yes, Fedora Core 20. As a programmer myself, I am well aware that mistakes happen, that programmers are as guilty of them as anyone, and that creating things (like software) involves risks (as does using the created thing). I'm also usually very tolerant of mistakes in programming, for I know very well how it works. I could be angry (and believe me – I absolutely am!) and at the same time I could be blaming myself for going ahead with the upgrade despite having a really bad feeling about it (the first time I have had a very bad feeling while looking at release notes and following the release plans during development). But at the same time, what can I do? At best I can wait, but I can only wait for so long (and 'long' is not at all a long time – more like fairly short, when you consider the end of life of the release) and hope the next release is better.

But I cannot remember many times when I have been as stunned (in a bad way) as I am with this, by any software. I don't even know where to begin, so I'll just come out and write this first: yes, this is a rant, and although I am irritable lately, I also admit that in general I go full on with rants. Either way, it is also something that I feel needs to be written (even if just for me), as FC 20 is a horrible example of quality software. I was going to skip writing this until today, when I ran into two little issues that bothered me (and I admit fully that, with the way things have been lately, bothering me is pretty easy, but…) enough that I had to look into what the hell was going on in more detail. Before I get to that, though, I'm going to take a stab at two of the changes in Fedora Core (both by the same person, mind you) that I thought were quite idiotic when I first read them – perhaps because they are, in many respects – and I still do (plus the actual gains they claim are exactly the opposite, or at the very least not true, and I provide proof of that).

On today’s Internet most SMTP hosts do not accept mail from a server which is not configured as a mail exchange for a real domain, hence the default configuration of sendmail is seldom useful. Even if the server is not tied to a real mail domain, it can be configured to authenticate as a user on the target server, but again, this requires explicit configuration on both ends and is fairly awkward. Something that doesn’t work without manual configuration should not be in the default install.

So let me get this straight. SMTP hosts do not accept mail from a server which is not configured as a mail exchange for a real domain? And even if the server is not tied to a real mail domain, it can be configured to authenticate as a user on the target server BUT AGAIN it requires explicit configuration on BOTH ends? Furthermore, since it doesn't "work" without manual configuration it should not be in the default install? Okay then, I ask: why is static IP networking in the default install? I guess that would be because of the part where it is "useful", yes? Or maybe it is because Unix (and therefore Fedora Core) is a network operating system, so it HAS to have support for static IP addresses? Well, with that logic, here is something that one would think is VERY OBVIOUS but clearly IS NOT: just because a system is not part of a network (for example, part of a domain or even just an intranet) does not mean local mail is useless. Also quite amusing is this: I use an MTA (guess which one?) in at least one cronjob and I only had to configure the MAIL SERVER. I wonder why and how that might be possible? /sarcasm

Most MUAs we ship (especially those we install by default) do not deliver to a local MTA anyway but rather include an SMTP client. Usually, they will not pick up mail delivered to local users. This means that unless the user knows about local mail and takes steps to receive local mail addressed to root, such messages are likely to be ignored. Our current setup in many ways hence currently operates as reliable /dev/null for important messages intended for root. Even worse, there is no rotation for this mail spool, meaning that this mailbox if it is unchecked will slowly eat up disk space in /var until disk space is entirely unavailable.

Wait a minute. Were we not referring to MTAs? Now you’re on about MUAs? Most bizarre is this part:
“Most MUAs we ship (especially those we install by default) do not deliver to a local MTA anyway but rather include an SMTP client.”

What the hell is an MUA if it is not an email client? And how ironic that you mention the word 'local' (even if an MTA is typically both client and server, so there is some redundancy here). One would think you could put that together with the first block of text I quoted. Sadly that seems unlikely. I won't even bother picking apart the rest of that and instead will continue to the next part.

Many other distributions do not install an MTA by default anymore (Including Ubuntu since 2007), and so should we. Running systems without MTA is already widely tested.

The various tools (such as cron) which previously required a local MTA for operation have been updated already to deliver their job output to syslog rather than sendmail, which is a good default.

I will save the part about syslog for a moment, as I find it especially amusing and another issue entirely. So: just because other distributions do not include an MTA, neither should you? Clearly followers now, and not the state of the art that Fedora was meant to be. A shame, that. And sure, running systems without an MTA is widely tested, but not all machines are MAIL SERVERS, and even more to the point, not all need to SEND MAIL. Did you know that running systems without an HTTPD is also widely tested? In fact, you could substitute any number of other services and make the same stupid proclamation. /sarcasm

As for cronjob and specifically mail versus syslog, let us go to the next idiotic move!

Let’s change the default install to no longer install a syslog service by default — let’s remove rsyslog from the “comps” default.

The journal has been around for a few releases and is well tested. F19 already enabled persistent journal logging on disk, thus all logs have been stored twice on disk, once in journal files and once in /var/log/messages. This feature hence recommends no longer installing rsyslog by default, leaving only the journal in place.

A classic example of irony, I must admit. Okay, sure, the journal acts as a log, but to call it the syslog is rather weak. Even worse are the supposed benefits to Fedora. Let me unravel those now.

Our default install will need less footprint on disk and at runtime (especially since logs will not be kept around twice anymore). This is significant on systems with limited resources, like the Fedora Cloud image.

Oh really? Journals use less resources than logs? I would like that to be a (stupid) joke but sadly it is for real. Here, let me just refute that complete and utter nonsense with proof:

With journal we have this:
# du -sh /var/log
368M /var/log
With journal excluded:
# du -sh /var/log --exclude='journal'
55M /var/log

Less resources what? If you do the math (368 – 55) you will note that the journal uses 313MB MORE than the regular logs. Even more to consider is that the journal is compressed and my log files (including those that have been rotating like normal) are not compressed.
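Incidentally, journald can report its own disk usage, which makes for a quicker cross-check than du; a small sketch (it falls back to du on systems where journalctl is unavailable, and to a message if there is no journal directory at all):

```shell
# Report journal disk usage; journalctl --disk-usage asks journald directly,
# du is the fallback if journalctl is not installed.
usage=$(journalctl --disk-usage 2>/dev/null \
  || du -sh /var/log/journal 2>/dev/null \
  || echo "no journal found")
echo "$usage"
```

Either way you measure it, the numbers above speak for themselves.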

Two more things, one leading into the next.

Also, we’ll boot a bit faster, which is always nice.

How lovely. I've actually noticed it taking longer since Fedora 20 was installed, compared to Fedora 19 (I don't dare upgrade the other, remote, server I have Fedora 19 installed on!). And I especially noticed it taking longer today, because SELinux (which I have disabled on the machine I'm writing from, for reasons that will be made clear soon) decided, after an update of libselinux yesterday, to ignore /etc/sysconfig/selinux. So it was enabled, and since it had previously been disabled it needed to relabel the whole file system (except the file systems I have mounted read-only). How brilliant of an idea is THAT? Ignore the configuration file so that it wastes disk space, basically (there's more disk usage being eaten up). What is the purpose of the configuration file then? sestatus showed that SELinux was indeed enabled while the configuration file said disabled. Too bad the configuration file was ignored, which means it doesn't matter what the configuration file contains; that is, to be blunt, completely stupid, because the configuration file exists to, here's a thought, configure something!
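The mismatch is easy to demonstrate side by side; a hedged sketch, assuming the standard Fedora paths and tools (it prints "unknown" for any value it cannot read, e.g. on a system without SELinux):

```shell
# Compare the SELinux mode requested in the config file with the mode the
# kernel actually reports via getenforce.
requested=$(sed -n 's/^SELINUX=//p' /etc/sysconfig/selinux 2>/dev/null)
actual=$(getenforce 2>/dev/null)
echo "configured: ${requested:-unknown}, running: ${actual:-unknown}"
```

When those two values disagree, the configuration file is being ignored, which is exactly the situation described above.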

As for why I have SELinux disabled on this machine? Well, I'll give you an example of the problem it causes when it doesn't work right. A denial came up and it turned the process into a zombie, as shown here:
7544 0.0 0.0 0 0 ? Z 14:09 0:00 [kcmshell4]
Of course, I could be making that up, so let's take a look at the relevant entry in /var/log/audit/audit.log:

type=AVC msg=audit(1388095776.191:191): avc: denied { write } for pid=7544 comm="kcmshell4" name="icon-cache.kcache" dev="dm-6" ino=263190 scontext=unconfined_u:unconfined_r:mozilla_plugin_t:s0-s0:c0.c1023 tcontext=system_u:object_r:tmp_t:s0 tclass=file

Everything was working fine until then. But after that point, Firefox both hung on me (I had to send it a SIGTERM) and, on restarting it, sometimes closed almost immediately and other times hung again (and it was Firefox that was trying to write to the file in question).
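For the record, a denial like the one above can be fished out of the raw log with plain grep; a small sketch based on the log format shown above (the function name is mine, and ausearch from the audit package is the more robust tool if you have it installed):

```shell
# Print AVC denial lines that match a given comm= value, reading the
# audit.log format on stdin. On systems with the audit package,
# "ausearch -m avc" is the proper way to do this.
avc_denials_for() {
  grep 'avc: *denied' | grep "comm=\"$1\""
}

# Usage: avc_denials_for kcmshell4 < /var/log/audit/audit.log
```

This only filters; interpreting the scontext/tcontext pair (as sealert does) is still up to you.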

And lastly, I'll just quote the descriptions of the relevant SELinux boot parameters exactly from a website, as to why I find it more of a problem than a gain (and a hint: security is only security if it is not such a hassle that people want to override it, e.g., being baby-sat when installing software, or having twenty passwords and therefore writing them down in plain sight to compensate; there has to be a balance):

Setting this parameter will cause the machine to boot in permissive mode. If your machine will not boot in enforcing mode, this can allow you to boot it and figure out what is wrong. Sometimes your file system can get so messed up that this parameter is your only option.

This parameter will force the system to relabel. It does the same thing as "touch /.autorelabel; reboot". Sometimes, if the machine's labeling is really bad, you will need to boot in permissive mode in order for the autorelabel to succeed. An example of this is switching from strict to targeted policy. In strict policy, shared libraries are labeled shlib_t while ordinary files in /lib directories are labeled lib_t, and strict policy only allows confined apps to execute shlib_t. In targeted policy, shlib_t and lib_t are aliases. (Having these files labeled differently is of little security importance and leads to labeling problems, in my opinion.) So every file in /lib directories gets the label lib_t.
When you boot a machine that is labeled for targeted policy with strict policy, the confined apps try to execute lib_t-labeled shared libraries and they are denied. /sbin/init tries this and blows up. So booting in permissive mode allows the system to relabel the shared libraries as shlib_t, and then the next boot can be done in enforcing mode.

So you cannot even successfully relabel unless the system is in permissive mode. How lovely is that, when you remember how important labels are to SELinux.

That’s it. I just hope Fedora gets their act together and soon. Rant done.