Solution: systemd-sysv-generator: Could not find init script for

This is something I've seen in a Fedora VM. It seemed odd to me because the services were not enabled and in fact weren't even installed any more (though they had been). I never really looked into it, however; after all, it is just a warning and I had seen odd things in systemd before (as I've documented). But just a bit ago I decided to look into it. It is quite simple really (however, I'm elaborating on different parts, and this includes safe computing practises; indeed, there are some things to consider, and if you don't know how certain things work in full, you would be wise to learn them).

systemd-sysv-generator is a simple wrapper that allows services which are not systemd enabled (or perhaps 'capable' is better), that is, those that don't have systemd unit files (and are instead, as the name suggests, SysV init scripts), to work with systemd. All one needs to do is determine this (which isn't all that complicated) and it becomes quite natural to suspect that something refers to what used to exist but no longer does. That suggests a dangling symlink (much like a dangling pointer in C: the symlink points to something that doesn't exist but at one point did). This means you need only rm the symlink itself. To find all broken symlinks in /etc/rc.d (which is the directory in which all the SysV init scripts are located) you can use the following. Note that I pass the option --color=auto to ls for those who have colour terminals (which means most) and can therefore see that the links are indeed broken. However, I'll explain the reason this works as it does. I'll show how to remove the links, but I'm first including how to print the link names. It should be obvious why, but just in case: never copy and paste a command that deletes (or truncates or otherwise overwrites) files without knowing the full consequences. Sometimes broken links are installed deliberately, and I wouldn't go around deleting just any link (it is one of those "don't do this unless you truly know what you are doing" things; it just isn't safe practise to delete files because they appear useless to you; see below, also).
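(As an aside on the 'determine this' part: if you want to see what the generator actually produced, then on my Fedora install the synthesised units end up under the late generator directory, though I would not be surprised if the exact path varies between systemd versions:
$ ls /run/systemd/generator.late/
Anything listed there is a unit systemd created from a SysV init script rather than from a native unit file.)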

To find broken links in /etc/rc.d you would use the following command (you shouldn't need to be root to display them; you would need to be to delete them – whether through sudo, su with the option -c, or a root shell is up to you):
$ find -L /etc/rc.d -type l -a -exec ls --color=auto -l '{}' \;
The -L option means dereference symlinks. We only care about files under /etc/rc.d so we specify that as where to find the files. -type l means symbolic link; with -L in effect, a working link is dereferenced to its target (which is some other type), so -type l only matches links whose target no longer exists, which is exactly the broken links we are after. The -a (you can omit the -a because two expressions, one following the next, imply it) stands for 'and', which means the ls command is only executed if the previous expression is true, i.e. only for the broken links. The -exec syntax of find is something that I think throws people off. I never had this problem; I read it in a book many years ago and it made sense to me. The idea, however, is this: you quote (in this case with single quotes) the {} (and therefore you have '{}'); that is the current file. ls --color=auto is for colouring, as I explained earlier (you don't have to use that option, of course). The \; you can read as the end of the command. You could also write ';' but the general form is escaping it (hence \; ). Thus, in my case, I saw (not including colour) the following (snipping some of it for the sake of brevity):

lrwxrwxrwx 1 root root 18 Jan 15 2013 /etc/rc.d/rc2.d/K85ebtables -> ../init.d/ebtables
lrwxrwxrwx. 1 root root 14 Sep 27 2012 /etc/rc.d/rc2.d/S90tcsd -> ../init.d/tcsd
lrwxrwxrwx 1 root root 18 Jan 15 2013 /etc/rc.d/rc6.d/K85ebtables -> ../init.d/ebtables
lrwxrwxrwx. 1 root root 14 Sep 27 2012 /etc/rc.d/rc6.d/K10tcsd -> ../init.d/tcsd
lrwxrwxrwx 1 root root 18 Jan 15 2013 /etc/rc.d/rc4.d/K85ebtables -> ../init.d/ebtables
lrwxrwxrwx. 1 root root 14 Sep 27 2012 /etc/rc.d/rc4.d/S90tcsd -> ../init.d/tcsd
lrwxrwxrwx 1 root root 18 Jan 15 2013 /etc/rc.d/rc0.d/K85ebtables -> ../init.d/ebtables
lrwxrwxrwx. 1 root root 14 Sep 27 2012 /etc/rc.d/rc0.d/K10tcsd -> ../init.d/tcsd

Therefore, to delete the files, I would do the following. Note, however, that I am passing the -i (interactive) option to rm for demonstration purposes: it lets you confirm each deletion, which is useful if you want to check that you copied and pasted the command right (you should rather type it!) or that there isn't anything else wrong. This is a good idea in general; I mean no harm, but if I make a typo myself, leave out some information by accident or... if I were being malicious... it would not serve you well. While I am biased because I am incredibly cynical in life, being cynical when it comes to instructions in computer-land is a very good thing! Trust is far too easily given and this has been abused many times over the years, and I mean in the extreme: rooting a system, which means gaining root access, which means complete control of the system and potentially more of the network! This is the command:
# find -L /etc/rc.d -type l -a -exec rm -i '{}' \;
After that, assuming you confirmed every file, and assuming no errors, if you were to try the first find command (the one invoking ls), you would have no output because all broken links would be gone.
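As an aside, GNU find has a shorter idiom for the same check: -xtype follows the link, so a broken link falls back to matching type l without needing -L. Shown here in the listing form only, so nothing is deleted by accident:
$ find /etc/rc.d -xtype l -exec ls --color=auto -l '{}' \;
If the output matches what the first command showed, the two are interchangeable for this purpose.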

As for deleting files (that I referred to earlier, with regards to files that seem useless): this is more general, and by more general I mean the way symbolic links work in general. Depending on options to commands and otherwise how commands are invoked, you can run into problems: is a symbolic link dereferenced, for example? If it is, then acting on it will act on the target. Otherwise it will act on the link itself (this is akin to C, also, where you can have a pointer to a structure but if you don't dereference it you are only accessing the pointer, not what it points to!). In short: unless you know which commands do what under which conditions, you can run into problems (whether you actually do or don't is another matter entirely; taking the risk is, however, not exactly the best choice). This is also what I described in a post some years back called The Spirit of Pranks – Technology Style, and in particular the entry that I titled (one of the entries that were my own doing) 'The "don't type a command if you don't know what it does" trick'. Of course, type also implies (maybe more so!) copy and paste. I want to point something out, finally, on this: this rule is especially important because what if you run a command recursively, or with a wildcard, on a directory? It might be that with the recursive option the command does not dereference the links (and for good reason). However, if you don't use that option and instead use a wildcard (or otherwise a file globbing pattern), you might cause damage rather than solve your problem. I'll give you an example, and one that almost bit me hard (and since that time I've never had a link like this again): if you have a symlink to /dev/null and you run a command that changes the file (it was, I believe, the owner, since only root can change the owner and therefore the command was run as root!), you can break your system. It might seem absurd, but /dev/null is critical. Furthermore, changing the owners, modes, and so on, of files (/etc is a good example but it isn't the only one) can break your system. Indeed, a big mistake is when people accidentally (or because they don't know better) recursively change ownership with the pattern .* as an argument. The problem is: .* does not mean only the dot files/directories below the current working directory. It also matches . and .. themselves, so when acting recursively the command climbs into the parent directory and, from there, can end up walking much of the filesystem! It might seem like it would behave, because after all a* shows files that start with an a, but it isn't that simple (if you want more information here, read glob(7) in the manpages; and while globbing is similar to regular expressions it is NOT the same!). As I think I've made clear before (on this subject, in fact), the pattern would be '.??*' (without the quotes, of course). However, the proper way is to be in the starting directory and then recurse on the path '.' (without the quotes). But again, this can be very risky; if you are in /etc you can cause a lot of problems (same with /usr ...).
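A harmless way to see the problem for yourself (nothing here modifies anything), at least in the bash of that era and most other shells:
$ cd /tmp && echo .*
Note that . and .. appear in the output alongside any real dot files, which is exactly why a recursive chown or chmod given .* will also climb into the parent directory.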

Multiple Default Routes Per Network Interface

Problem: You have two NICs in one system, where the first NIC is for Internet-bound traffic and the second NIC is for an intranet (in my case attached to the switch but not allowed to communicate with the first NIC, and isolated from the Internet through ingress and egress filtering). Furthermore, the first NIC is associated (by the IP addresses assigned to it) with more than one network. How do you make sure that inbound and outbound traffic for one network uses its default gateway and every other network uses its own default gateway as well? After all, if you have a /32 (single IP) and a /29 block, then you would expect the default routes to be different. But both are on the same interface!

Taking the method of configuring multiple default routes found here, with a slight change, will solve the problem. I did this a while back and I offered some hints there. But somewhat related to a previous post of mine, that of IP masquerading based on destination port, found here, I have decided to explain it more thoroughly (actually, there really isn't much to explain at all). So what is this slight change? In fact, the change is only that instead of dealing with multiple NICs (one or more for each network), you concern yourself with just one NIC. Yes, it is that simple. I do have a bonus, however, albeit nothing much. If you use SysV init scripts (e.g. Red Hat), this is how you configure it so that it persists across reboots; contrary to what is often suggested, you typically do not need to write your own script, and neither do you have to modify a script that runs on boot, for these types of changes: the support is already there and for good reason – no one would expect you to reconfigure your network every time you reboot! This is all you do:
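First, the routing itself, typed at the shell. This is only a sketch with made-up values (a /29 of 198.51.100.8/29 with gateway 198.51.100.9 on eth0, and a table name I invented); the original document linked above has the full treatment:
# echo "100 staticblock" >> /etc/iproute2/rt_tables
# ip route add 198.51.100.8/29 dev eth0 src 198.51.100.10 table staticblock
# ip route add default via 198.51.100.9 dev eth0 table staticblock
# ip rule add from 198.51.100.8/29 table staticblock
The main routing table keeps the default route for the other network; the rule simply says that anything sourced from the /29 consults the staticblock table, which has its own default gateway.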

After replacing the IP addresses (this includes all of them: the network, the gateway and the IP associated with the interface in question) with your own, and making sure the configuration works (after typing it – and remember, if you type it you're more likely to remember it, as opposed to copying and pasting! – and testing it at the shell as well as from other hosts that are relevant), you take each object type (route and rule) and create a file for each interface and each object type in /etc/sysconfig/network-scripts, with a few tweaks. For rules, if your NIC is eth0, you'd have the following file:
/etc/sysconfig/network-scripts/rule-eth0

In this file, you would take what follows the ':' in the output of the command 'ip rule show' and place it into the file. Note that you only want the rules that refer to the table you set up for this purpose; you don't want the rules for the other tables!
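For illustration only (re-using the made-up /29 and table name from the sketch above), rule-eth0 might then contain a single line such as:
from 198.51.100.8/29 table staticblock
which is exactly what was handed to 'ip rule add' at the shell (ip rule show may print 'lookup' instead of 'table'; both are accepted).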

You would also have the file:
/etc/sysconfig/network-scripts/route-eth0

and inside it you would place the ip route add commands that were typed previously, except that you don't include the 'ip route add' itself, only what follows it (to the right).
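Again purely as an illustration, with the same made-up values, route-eth0 might read:
198.51.100.8/29 dev eth0 src 198.51.100.10 table staticblock
default via 198.51.100.9 dev eth0 table staticblock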

That’s all there is to it!

(Of course, that's all there is to it if you've read the original document and, equally important, if my being distracted while writing this – which I was – has not created a null ... route.)

Rant: The State of Cyber War


2015/02/19:
Prepended 'Rant:' to the title. I'm on the fence about whether it would be better as 'Viewpoint:' or 'Rant:' but for now I'm erring on the side of 'Rant:' because there is aggression here, and 'rant' is therefore more likely what people would think of (even though there certainly are personal viewpoints here, too). Yes, the point is valid – cyber war is a dangerous game indeed, and a game that is unlikely to stop (this is not War Games, after all – at least not as the movie portrays). But sarcasm is in my blood, it really is (and it has always been this way, even as a toddler), and if I were a god I would likely be the god of sarcasm with a quick and nasty temper. I won't deny it; the way I express myself is probably not the best, and certainly it is not the way most people express themselves. I do try to be reasonable – and fair – and this is one of the reasons I try to always have a point, but the fact remains that it is often made in a vague and cryptic manner with a tendency to go too far.


I want to take some time to clarify some things that I wrote yesterday, that being 2015/01/20. The intent is that this be less aggressive. With that in mind, here goes:

Perhaps it is true that I should not have tackled what I did yesterday. Based on how I’ve felt for quite some time, it wouldn’t surprise me one way or another. But even though I might have gone too far in some ways (or in some parts), there is the fact that the point remains the same: attacking someone (or some entity) as well as provoking them, followed by whining (and that is EXACTLY what it is) when the same happens in return, is stupid and hypocritical. Would you really lunge at someone and then complain about how much harm they caused you in return? That is what it is – you reap what you sow.

The fact remains that participating in cyber war, in general, is a dangerous game (and directly attacking other nations' computers is NOT defence – it is the cyber equivalent of declaring war on and invading the country). In other words, yes, the suggestion that if all this activity were happening in the real world it would be a nuclear holocaust isn't so far off the radar (if you will excuse the pun).

Lastly, the very idea that nations need laws in order to share more information with corporations is an absurd and dangerous lie. That they also claim it is for the citizens' own safety and good is actually scary. Shockingly scary. Yesterday I didn't want to elaborate on why I chose Nazi Germany as the comparison. I'll get to the point momentarily, but for anyone who might want to know (not that I believe I have many readers): there are many reasons, one of which is that I'm a WW2 buff. I studied Nazi Germany extensively. While there is much I don't remember (compared to before), I still remember a lot. So here it is: the Schutzstaffel, otherwise known as the SS, and the secret police, the Gestapo? They made these same claims too: that they were doing it for everyone's good. The world still hasn't learned from the war, and they didn't learn much about tolerance, either, did they? Not even close. Tolerance for group one and not group two is not any more tolerant than being tolerant of group two and not group one. But that is exactly what happens to this day and it is a double standard (at best). I'll not get into the history aside from this final point, a point that should not be dismissed (I know it is and will continue to be, though): the problem isn't how far someone travels (i.e. the severity of each act); the problem is how willing they are to create the path that allows the travelling (i.e. what laws come into existence to remove the blockades). In other words, the very idea that they need more and more control (notice how this doesn't stop after they get the next amount? And then the next? And the next? And start all over ad infinitum? Ask yourself how new you really believe that is), always for the same purpose – for the safety and the good of the population – is scarily similar to what should never be ignored. But it is ignored. Ironically, as I already suggested, the rationale of 'why they need this' remains the same. The part that does change is that they continue to need more and more (of what they already supposedly have!) to fulfil the same(!) task. The more they need, the more they abuse it, and the more they abuse the more power they crave. Do you truly feel safer? What makes you feel that way? What actually changed? Sadly some (many, obviously) will fall for this claim – repeatedly. It is, ironically, just like history.


This will be a piece that is mixed with ridicule as well as a warning. I fully acknowledge that the warning will be largely dismissed, at least dismissed by those who actually should not ignore it. But then you can’t really reason with politicians, can you? No – one of the definitions of a politician is a complete idiot that is dangerously high on a power trip, desperate for power, one that spreads fear, incites hate and anger. That is about as far as I’ll go in that regard because, as I’ve made clear before, politics is one of the most potent cesspools known to mankind (I have no problem, however, with actually attacking their arguments, their claims, when it comes to computers). To this end, another warning: while I do have good intent here, it was a thoroughly obnoxious day. So the ridicule might go too far. I’ll be in good company though, won’t I? Not sure it is the right company but such is life. So with that:

It seems to me that, given the circumstances, now is as good a time as ever to write about the risks of cyber war. What circumstances? It has been claimed by the New York Times, as well as Germany's Der Spiegel, that the reason US officials suggested North Korea was responsible for the attacks on Sony is that they had access to North Korea's network. Yes, that means they compromised North Korea's network. Yes, that means more hypocrisy indeed (there is never enough of that, is there?). These two news sources, I might add, are among the agencies that Edward Snowden leaked the spying revelations to. As I've made clear before, the NSA has a long history of hissy fits about encryption (and otherwise needing control), so really the only news to me is what specifically they were up to – as a spying agency, and as a spying agency with the history that the NSA has, it isn't really surprising. Sadly this makes it even more believable. Let's see, what are the claims that are known, and what is the US saying in its own defence?

“While no two situations are the same, it is our shared goal to prevent bad actors from exploiting, disrupting or damaging US commercial networks and cyber infrastructure,” said spokesman Brian Hale.

Noted. That says quite a lot, doesn't it? The fact that no two situations are the same, and the fact that you want to prevent bad actors from 'exploiting, disrupting or damaging US commercial networks and cyber infrastructure' – this is why you don't count yourselves, right? Because it is a different situation when it is you. Makes sense – and since there are often a lot of assumptions in this topic, I don't see why I shouldn't assume here, too, that my suggestion makes sense. That isn't wrong, is it? Or perhaps it is because you aren't actually actors but instead being your usual selves? That seems quite plausible, too, I must admit. Then again, maybe it is simply that you don't consider reputation all that important? That would explain why you find it perfectly acceptable to BREAK THE LAW as long as it is for those you (supposedly) are protecting? But it gets better, doesn't it?

"When it becomes clear that cyber criminals have the ability and intent to do damage, we work cooperatively to defend networks."

You work to defend the networks – if they are on US soil, of course. But is that really the full truth? I don't know if I agree. The BBC claims that the paper reporting this new information says that you – that is, the officials supposedly investigating the crime – believed that North Korea was mapping the network for two months prior to the attack. Considering that one of the very first things an attacker will do, when going after a network, is gather as much information as they can about the organisation (its employees, its hours of work, its network, the hosts, the services – everything, as much as they can possibly gather), this would make sense, wouldn't it? Yet it took you two months to figure this out? Weren't you monitoring them? Why would you not alert them? Why would you not help them? (One hopes you don't do this with your own, although seeing as how there are issues at times, it makes me wonder.) Is it because there is some other use to be had from it? I don't know, maybe making it an excuse to give more power, more control, to the government with regard to what capabilities you are allowed? Something like this:

A senior Democrat on the House Intelligence Committee on Friday will reintroduce a controversial bill that would help the public and private sectors share information about cybersecurity threats.

“The reason I’m putting bill in now is I want to keep the momentum going on what’s happening out there in the world,” Rep. Dutch Ruppersberger (D-Md.), told The Hill in an interview, referring to the recent Sony hack, which the FBI blamed on North Korea.

Or maybe it was because then more sanctions could hit North Korea? Because a cyber attack is the right reason for that over all other things, yes (and of course, sanctioning an isolated state isn't provocative – it is actually helping the relationship, that's what sanctions are for, right? It won't hurt the citizens, and actually it will make them feel better that other nations care enough to cause them more grief 'as long as it teaches the nation to do what's right')? On the other hand, it is possible the attacks are being abused to do exactly that (some might argue all of the above – they're probably right). My problem with this goes further. This is encouraging cyber attacks (and more recently there is the suggestion that the US and the UK will participate in cyber war games; the irony in it all is too much when you consider why the two nations want to do this). "I want to keep the momentum going on what's happening out there in the world" implies that you honestly don't care why, you just have to have more control, as if it is, as I put it, a drug. Nazi Germany did similar things and their excuses were similar too; they were protecting others from whatever they didn't like (even though the claim was a farce). Indeed, that 'keep the momentum' remark also implies that you're fine with the attack, an attack that revealed confidential information about people whose only crime was working for Sony. You're fine with it because it gives you an excuse to tighten the noose (ironically the law allows more spying on the citizens of the nation you claim to be protecting – again, just like Nazi Germany). (Yes, I'm deliberately comparing it to Nazi Germany even though there are plenty of other examples. There are many reasons for it. I won't elaborate on it aside from that.) And the law you want to introduce isn't just making it easier to share information about cybersecurity threats. There is not a single thing that prevents you from doing that now! Nothing prevents the corporations from informing you of attacks (as they recently showed – and this despite the government refusing to warn them even though it knew an attack was imminent, a shameful act indeed). The only thing that prevents the government from sharing that information is that they would rather not. But that isn't a legal issue. No, the law you refer to is CISPA – the law that allows spying on citizens (among other things). Anything else is an absurd lie – a lie designed to manipulate the situation (yet again just like Nazi Germany)! No, it wouldn't have prevented the attack, either. Ironically, given the claim by the government, it seems the one thing that could have prevented it would be the government themselves; the government that claims they need more power (yes, because having been monitoring North Korea wasn't enough!). Indeed, people in power can never get enough, no matter what they already have, and the more they get (and they are often so insistent on it that it is expected they will) the more they want, in a vicious cycle, just like the addictive properties (and effects) of drugs.

The fact of the matter is: the United States is not just a victim of cyber crime; they are perpetrators just as much as other nations are (perhaps in some ways more so). It is just that the United States has the status to get away with it more easily. The reality is that by participating in cyber wars you're actually creating more problems, and I do not refer just to problems between nations (but that is what this is about). The end result: each and every nation that adds fuel to the fire rationally has only themselves to blame when they fall prey. In addition, corporations that are attacked because of where they reside could also rationally blame their nation (whether they are allowed to be there or not is frankly an irrelevant point (and keep in mind that Sony is a Japanese corporation, not an American corporation, and this, I fear, makes that argument – of nationality and otherwise location – even less relevant)). If nations acted this way in the real world – and let's be honest: this is already far too extreme – there would be a nuclear holocaust. Much of what is claimed to be defence is actually direct provocation. As for North Korea versus the United States, I will remind you again: the two nations are not allies and in fact are more like enemies; consequently, provoking them and then complaining about the response is hardly helpful. It is actually about as stupid and arrogant as when a human goes into the ocean, even with warning signs about recent shark activity, gets killed (why go into its territory?) and what do officials do? They hunt the shark down and viciously slaughter it, as if it was deliberately in the territory of humans (because humans live in the oceans, yes?) to slaughter as many (humans) as possible (never mind the fact that many shark attacks are accidental). Yes, that is a great analogy: go into their territory, run into problems and then it is their fault; they shouldn't have been where I wanted to be! Indeed.

Source IP Masquerading by Destination Port


2015/01/11:
Fixed the previous mixup on ports (which I noted but did not fix). I also cleaned up some less-relevant (if not outright irrelevant) material and tried to make some things a bit clearer. In addition, while I haven't yet, I'm considering going through all my how-tos here and moving them (or adding them) to http://docs.xexyl.net (which has more documents I've not mentioned here). This would be among them. If I do this I'd likely try to clean this up even more and make it clearer still. For now, however, there should be fewer inconsistencies and errors.


(Another name might be: Source IP selection based on destination port or service)

Problem: you have a static IP (singular or a block) and you also have a dynamic IP. For some hosts (e.g. a server) you want the static IP always; for other hosts (e.g. a workstation or desktop) you might have both a static IP and a dynamic IP. As for the latter, it might be – since you have a static IP block – that you have a fully-qualified domain name (FQDN). You might also have a PTR record (reverse the octets and append .in-addr.arpa: 127.0.0.1 (localhost) becomes 1.0.0.127.in-addr.arpa) so that if you claim to be host.example.com, the host you are claiming this to can verify it by confirming that your IP does indeed resolve to host.example.com and not something else (if anything). This is most common with sending email. What do you do when you sometimes want to use the dynamic IP (e.g. for web browsing) but certain services should really use the static IP?
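As a quick aside, the reverse lookup itself is easy to check with dig (the address here is just a documentation example, not mine):
$ dig -x 192.0.2.25 +short
which queries the PTR record for 25.2.0.192.in-addr.arpa and prints whatever name, if any, it resolves to.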

There are two approaches, depending on what you need. You can use ip routes (and rules) so that, depending on the source (and/or destination) IP, traffic uses one IP and otherwise uses a different IP. This is how I had my desktop configured, because the hosts I ssh into do ingress filtering by IP block (so I have to use my static IP – they're declared static for a reason and it is the most reliable IP to allow). But something I did not like made me think of (a spur-of-the-moment solution came to mind) a way to avoid adding many IPs to the static routing table simply to solve this problem. The issue is that when I send mail while connected from my dynamic IP, that IP obviously has no PTR record matching the host I claim to be. The fact that I claim to be a certain host which the PTR record does not match can mean any number of things to the receiving server (and that is another subject entirely). But surely I don't want to maintain a list of destination IPs that should use one route (and therefore one source IP) while everything else uses the default (dynamic) IP. So what can you do? Well, if you know that certain services are going to prefer (if not require) you to be who you claim to be, then you can use iptables to masquerade your connection so that it comes from the static IP.

Yes, this involves NAT, and specifically source NAT. (And yes, I generally hate NAT, but there are uses for it; this is definitely one of them.) Now there are two ways to do this. The usual way ('usual' might be 'more common') is through the iptables target MASQUERADE (yes, all caps indeed). As the man page shows, though, that is more for the case where you have a dynamic IP on the outgoing interface (it still works otherwise, though). As I recall, its options differ somewhat (MASQUERADE uses the address of the outgoing interface rather than taking a --to-source), so I'm going to use the SNAT target regardless.

If I send mail through port 465 or 587 (SMTP is port 25 and mail servers do use it, but if you use the former ports – i.e. for authenticated, encrypted submission – then those are the ports that matter; and you can do this for multiple ports and indeed for completely different services), then the following rules will masquerade any packet sent to port 465 or 587 (to any host that matches the rule) so that the source IP will be the one you specify. A note: originally I included what I have in my own setup, which only matches if the source IP is not in the 10.0.0.0/8 block. I'm removing this, however, because I feel it makes it harder to understand; therefore I am only including the bare minimum. The IP below, 192.168.1.66, would be YOUR static IP (or whichever IP you want to send and receive packets from).


# iptables -t nat -I POSTROUTING -p tcp --dport 465 -j SNAT --to-source 192.168.1.66
# iptables -t nat -I POSTROUTING -p tcp --dport 587 -j SNAT --to-source 192.168.1.66
# echo "Additional rules can be added for other ports"


# is the prompt in this case, so don't type it (I have to state this because, firstly, if you were to copy it to the shell it would be a comment and therefore not run, and secondly, you should always be careful at the command line, especially when root). So to break down the rules (I will only explain the first – the second is the same except that we care about a different port), I'll explain it in one go and then explain certain parts in more detail. The -t option specifies which table to add the rule to (in this case nat). We insert this rule at the front of the POSTROUTING chain (-I POSTROUTING) (in the nat table, as noted), specifying that the protocol must be tcp (-p tcp, yes, lowercase here) and that the destination port is 465 (--dport 465). If the packet matches these criteria, then we jump to the target SNAT (-j SNAT), passing the option --to-source with the IP we want to masquerade as: 192.168.1.66 would be the IP you want to send the related packets from (and have your host translate the related inbound packets to). The basic form, then, is that what follows --to-source is YOUR IP, the one you want to send and receive packets FROM when communicating TO PORT 465. So in my live setup, the IP after --to-source would be my static IP and the rest would be the same. This way, whenever I send to port 465 on any host, I will use my static IP; when I send to a different port (not specified in any other rule or manipulated elsewhere), I will use my dynamic IP. You can add additional rules, you can change the ports, you can specify port ranges (in one rule) and you can adapt the rules as you see fit. The idea, however, is hopefully clear. I did try rearranging this several times and I think it is mostly understandable now, but if not I'll hopefully be able to explain it better in the future.
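For instance, the two rules above could be collapsed into one with the multiport match (same made-up IP as before; adjust to your own):
# iptables -t nat -I POSTROUTING -p tcp -m multiport --dports 465,587 -j SNAT --to-source 192.168.1.66
and you can inspect the result afterwards with:
# iptables -t nat -nvL POSTROUTING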

In any case, the above is a much cleaner way to deal with the combined problems: specific services (and so ports) regardless of the destination IP (of which there could be many, including some for which you might only have host names; using hostnames in firewall rules is definitely not recommended, which means you would be better off resolving them to IPs, but it is still possible). Compare this to keeping a list of all the IPs (and keeping it updated) just so that certain services work properly (there are multiple definitions of 'work properly', two of which I described in this write-up). Anything else will be fine per your configuration. I should lastly point out that this obviously won't be of help if you have IPv6 enabled and default to using it (over IPv4): I specifically force thunderbird and firefox to use IPv4, so this works for me (there are other ways around this, too, but they are out of the scope of this article). At my end this is important because I don't have authority over my IPv6 block (and it would be different with ip6tables).

The Consequences of the Sony Attack

While this should maybe be under security, I want to highlight some other things, too. It is rather interesting to me that so many people, every year around this time, talk about resolutions. While I'm going to get to the issue the title refers to, I actually think security (and therefore that issue) is a perfect thing to discuss alongside resolutions. Indeed, I find this interesting yet also something of a farce. I call New Year's resolutions what they are: nonsense.

Why on Earth do people think that a certain time is any better than another to be better about (or accomplish, or...) something? Is that not absurd? If the idea is to improve yourself, why not always do so? If you can only improve yourself when you're 'supposed' to, you're not actually improving yourself: you're sealing your fate in a vicious cycle of only doing something (that supposedly is better for you) for a short time in the year and then waiting until next year. What is so special about this time of year, and why does it happen every year? The answer to the rhetorical question is, of course: because (they) only last a few weeks before giving up, which really means they only pretend to care – you want to improve or you don't, it is that simple. The only other part of it is that some people believe they want to change in some way but they actually don't want to (which conveniently fits in 'or you don't') – they try to convince themselves of it but they don't really want to. I'm a perfect example, actually, but not in the sense of New Year's and not of actually bettering myself (but it is indeed cyclical). I often try to tell myself I need to be more social (I am incredibly asocial – I'm essentially a hermit who has Internet access and will go to doctors, but aside from that I tend to shy away from gatherings). But it doesn't last, and I then come to the conclusion (in a repeated cycle like the one I described for New Year's resolutions) that no, I was only thinking I wanted to change this. In reality I was lying to myself (something I admit I do probably far more often than I'd like to – wait for it – believe) about it.

Where does this go with security, then? Correct: you should always be improving your standards (just like everything else, if you truly do want to improve), and this goes for security in normal cases but it goes double (if not triple or quadruple or...) after an attack. It is most interesting that those into security (I'm not even going to include myself here, simply because I don't – as I suggested moments ago – generally like to be included in a group) are calling the claim that North Korea is solely responsible nonsense. Yet at the same time, those who should be paying attention to them are just pointing the finger (perhaps figuratively and literally pointing the finger!). I find it rather sad that even despite two groups admitting their role in the attack, the authorities then decide to re-frame it to... North Korea decided to contract the work out. Why not admit to a lost cause? Not only is it impossible to verify (let's also remember justice is a farce (and unfair first impressions do not help here) and people actually admit to committing crimes they were wrongly accused of (and then they are serving time for a crime they didn't commit and in fact someone else is free – justice indeed)), it isn't as if no other country would do similar. That includes the United States of America. So what does this equate to? Instead of trying to figure out what can be learned from the attack, it is playing the game of victim only – a victim of the same thing in the past (and in the past...) and not changing because of it. While I don't know what Sony's point of view is now, what I do know is that there was a leaked email from Sony and, if I recall, it was from the CEO himself. And what was included but the idea that there was nothing they could have done (and the so-called security experts they called in (as if security is only a once in a while thing!) made that claim to Sony!), that it was unprecedented and nothing like it had been seen before (if I had a dollar for every time I've heard/read/been told that, I would be filthy rich...). They're wrong though. This isn't the first time Sony has been subjected to serious attacks. I doubt it'll be the last. The last one was not the first, either, as I recall.

The fact is: attacks happen, and more attempts happen. Sony is not the only victim. I see a lot of attempts on my (low profile) server; they're a huge, international company, so of course they're going to see attacks. Make the best of old news or repeat history (we already know where much of society fits in this selection). It is expected, and here is the brutal truth, folks, and this is something Sony (and others) would do well to understand (because the mentality that there was nothing that could be done is actually exactly what I'm going to describe): if you want a surefire way to get breached, all you need to do is not care about it, tell yourself there is nothing that can be done and just accept the future. In short, do nothing but admit defeat, even before it has happened. Ironically, by doing this you've actually already lost (yet if you don't go this route you haven't lost). Indeed this is a self-fulfilling prophecy: if you so wish to meet this fate, by all means, don't learn from it. If that makes you feel better then who is anyone to judge you over it? Certainly I won't. But I also won't feel sympathy (for those who won't change – the fact others are affected is another issue entirely).

As for the United States, I find this rather amusing. I know I've explained this before: I think (because it is the case) the idea of freedom of expression is often taken too far ("kids will be kids" – yes, kids will be kids until, that is, they get revenge on bullies, and then how dare they, how dare their parents for not teaching them right from wrong...). But since many in the United States are champions of this idea, that freedom of expression (the problem is taking it to the extreme – the problem is not the idea itself: all good comes with bad and all bad comes with good (the two are subtly different: the latter means even bad people have some good in them, even if it is hard for most people to see)) is ever important, I have to ask: why can't other countries, other people, also express things that they find important to them (no matter what it is)? I'm not suggesting that anyone who reads this fails to follow that principle, but nevertheless, some do. So to elaborate on the rhetorical question: if someone (or some country, or...) attacks someone, regardless of the legalities, regardless of the ethics, morals, or whatever else (that comes to the mind of the ones judging), they are technically expressing themselves. To express something is to convey a thought or feeling through words, gestures or conduct. If they were responsible for the attack, and for the reasons given, then they are by definition expressing themselves! Call it a paradox if you want (but being a paradox does not mean it isn't valid, remember that), but the reality is the anger they had (and how they displayed it) is expression; you might not like how they do it or in what way, but they could say the same about you, couldn't they? It is a healthy thing to believe strongly in something. I feel strongly about some things (but arguably less than I should). But I would like to believe that those who do feel strongly about something do not let it cloud their judgement and that they don't apply it only to themselves (or to someone/something they agree with (I already gave an example of this)). Yes, I'm pointing the meaning of expression out because it is true and something that really should be considered. I do look at things from a lot of angles and, if you will excuse me, I feel strongly that it is a good thing to do (and yes, I am conveniently expressing myself on the whole issue (see how I did that?)).

 

The Art of Easter Eggs

This is obviously something that is best classified in the general topic, simply because the software I write about is Unix and its derivatives (primarily Linux). What inspired this are two things in particular:

  • I discovered an easter egg in the editor vim earlier this year (which is to say, late September, early October).
  • Besides fond memories of easter eggs I discovered over the years, I enjoy designing them myself, for programs I write (or at least one that others will see, i.e., a specific MUD).

This is just for fun (which is expected with a topic about easter eggs) and basically a list of easter eggs that are memorable to me, that I either discovered on my own or remember reading about at some point over the years (I'll specify which is which). I won't list any I have implemented anywhere at all, and I never will. These easter eggs, mind you, are all old, with perhaps the exception of the vim one (which I suspect has been there for quite some time, but it is new to me). Therefore I don't think this is harmful. If, however, you enjoy finding them on your own, don't read this. That is my warning.

  • Colossal Cave Adventure, also known as Advent, is an old text-based game, somewhat like a MUD only single player. You interact with objects, open/close doors, you can get lost, you can die, and you gain points, too. While the version I am looking at now (version 4, one I fixed a segfault in for a friend and therefore have locally) is not the one I played years ago (it was an earlier version that I played), it is still fun and absolutely has easter eggs. The narrator (I guess you could call it that) doesn't take kindly to swearing. There are many responses to swearing and there are many words it sees as swearing. I'll leave them to your imagination except for the one I find most amusing (at least, literally it is amusing – it contradicts itself):

    ? screw you

    I trust you know what “you” might be, ’cause I don’t.

    Interestingly, when said friend referred to a crash, they didn't know exactly what triggered it (it was actually for her friend, who has a Mac; first it failed to compile, which I fixed), only that it occurred after a command was typed. What the command in question was, I don't remember (they didn't know, and in fact it wasn't one specific command – it happened more than once), but I had the idea to play with exactly the above: as I was swearing at the computer, it gave me the information I needed. It was a segfault, so I recompiled with debugging symbols (the source is actually obfuscated and I didn't think of running it through a beautifier; the programmer in me thought to make it drop a core instead), removed the limit on core size, caused it to crash (therefore dumping core) and found that there was a dereference of a NULL pointer (which, as I've discussed before, is much preferable to a pointer that was never assigned anything at all or is otherwise pointing at garbage). Added an if, recompiled and it was all fine. (A short sketch of that core-dump dance appears after this list.)

  • I liked this one a lot, although I admit I enjoyed figuring out how to defeat the boss (and therefore win the game) even more than the easter egg (which I also discovered on my own, if memory serves me correctly). I played the game a lot and I beat it many times. The last area – Icon of Sin – is one hell – indeed, that is intended – of a toxic dump full of demons and monsters alike... but very well worth playing through (unless you are very easily frustrated). This is one of the few computer games I played – most were console games. The game in question is DOOM 2. The easter egg is the severed head of one of the developers, John Romero. If you are curious, check http://doom.wikia.com/wiki/ as they have a picture and (for those wondering) how to find it. What that Wiki page informed me of, something I did not know, is that at the beginning of the last area – Icon of Sin – the voice says something that explains, once you decipher it, how to defeat the last boss. I had (have is a much better word) a knack for figuring out how things work and how to solve things (puzzles, games, ...) and so I beat it without the hint (there are quite a few things in the area that can make or break your success, but I quite enjoy these things).
  • The Mortal Kombat series is another set of games I really enjoyed for a lot of years. These features are perhaps more well known, but there are hidden characters in the series. One character, named by reversing the last names of two of the developers (Ed Boon and John Tobias), is Noob Saibot. He appeared at some points (I don't remember the specifics) and says "Toasty". While checking the Mortal Kombat Wiki, I saw two other names that ring a bell: Smoke and Jade. Looking further it seems that I did indeed go beyond seeing them in the background (definitely this) and in fact fought against them (whether I figured out how to do this on my own or not I really cannot remember – I suspect not entirely by myself).
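Since I mentioned the core-dump dance above, here is roughly what it looks like at the shell (a sketch only; the file names are stand-ins and the real build of Advent involves more than one step):
$ ulimit -c unlimited
$ cc -g -o advent advent.c
$ ./advent
$ gdb ./advent core
ulimit -c unlimited removes the core size limit for that shell, -g adds the debugging symbols, running the program again reproduces the crash (and dumps the core), and gdb's bt command then shows exactly where the NULL pointer was dereferenced.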

More generally, I know there are many others I discovered (or was told about and enjoyed) over the years. I'll reflect on one I did not discover at all but remember reading about way back when. Then I'll get to the vim easter egg.

Search Google for 'Bill Gates is the antichrist'. The entry on http://urbanlegends.about.com is much of what it used to be (if not all of it). It is unfortunate that it isn't the original, the one I saw so long ago: the original was lost because it was on Geocities and that is long dead. There is an easter egg in one of the Microsoft Office programs (Excel 95 maybe?) that is listed. There's also some maths with Bill Gates' name (think: the decimal values of the letters being added up) and what it equates to. Funnily enough, among those listed is this (not so much related to Bill Gates, but it is still relevant given that I mention the editor vim – though in this case it is vi more so):

Note that the internet is also commonly known as the World Wide Web or WWW... One way to write W is V/ (VI):

WWW = V/ V/ V/ = VI VI VI = 666

Something to ponder upon, right?

Why is that amusing? Because of the editor wars between vi and emacs. This is one of those wars that is not hell-bent (can't help it) on flaming but rather on wit and humour. Wikipedia has an entry on it, in which it is claimed vi is the editor of the devil for the above reason ('VI' being the Roman numeral for 6). (There were more examples in that Wikipedia article but that's the relevant one.)

As for the easter egg in Vim, I will give an explanation of why and how I discovered it (because I found that more useful than the easter egg itself), allowing those who are curious to try it themselves (tip: you can change it as well!). Of all the programs I use, the one I use the most (perhaps better stated: of all the utilities) is the shell, in my case under 'konsole' (at my server I don't have a GUI and so I just use the console itself). I usually have 5-10 tabs (or more) open, which means 5-10+ shells open at any time. Since vim is my editor of choice (I used to use vi but years ago tried vim and I agree with the name: vim is indeed VI iMproved), and since it allows you to open one file and then switch to another file without exiting (you can also open more than one 'window', each with another file, and this applies equally), the current task shown in the konsole tab was only the original invocation. This was annoying for many reasons. Looking into how to fix it, it comes down to putting something like this in your runtime file (per-user would be ~/.vimrc; you could also do it system-wide, but I tend to frown upon enforcing changes on all accounts, even if they can disable them):
:auto BufEnter * let &titlestring = hostname() . ":vim " . expand("%p")
:set title titlestring=%<%F%=%l/%L-%P titlelen=70

Now if you open vim with (for example) 'vim file1' you would note, as before, that the konsole tab shows 'vim file1' (it might show other information like the hostname, or however you configure it, but this is up to you in the profile settings[1]). However, if you were then to be in command mode and use ':e file2' you would see the tab has been updated to show 'vim file2'. Now if you quit vim (command mode ':q') you will see the tab title has changed again: "Thanks for flying vim!" As for how you can change it, I'll leave that to you, but it is noted in the help file (':help title'; read that entry as well as the entries below it, about titles). As an interesting bit, because I wanted to confirm that the two lines are exactly what is needed, I commented out (prepend a double quote) the first line, saved and (in another shell) started vim. It then shows as the title the name of the file, followed by much whitespace, then what is usually in the status line (bottom of screen by default): current line/line count and a percentage, where current line is the line the cursor is on, line count is how many lines there are in total, and the percentage is how far through the file the cursor is.

As a final note: enjoy easter eggs, whether you find them on your own or not: we put them there for our own enjoyment as well as yours! Although I am obviously biased, I think it really shows how clever programmers are and how easily they are amused. It is a good thing, though: it is a good way to release frustrations, and some of the time programmers are not really appreciated (or the amount of effort they put in is not always respected), so these things show that they too can have fun, and when others find them, they hopefully enjoy them as much as, if not more than, the actual program.

Viewpoint: The Attack on Sony

2014/12/21:
I am redacting my original post because, while I strongly believe that what is being claimed (by the US government) is misguided and unhelpful, I think the way I addressed it was not helpful, either. Certainly it detracts from my main point. While I often will keep things I've written, as I put it in my about section, I also believe in fixing mistakes where necessary. I've also noted that some of my writing will come off as a rant or otherwise aggressive and that I fix it where I can (and I always try to get a point across but often fail because of aggressiveness, whether intentional or not – yesterday's aggressiveness was not intentional by any means). It is interesting to note that the other day I actually went to write something about this issue and I decided to delay it because I felt I was not in the right mindset. Apparently I was still not in the right mindset yesterday. Of course, even though I'm fixing the post, that does not at all mean what was public will not remain public: once on the Internet it is as good as on the Internet forever (and even if all references were removed it still doesn't mean there isn't a single person who saw it and potentially captured it: I've done exactly this and I know others have too). But I still believe in taking responsibility and addressing mistakes where possible, and addressing means fixing the issue(s). So with that, my modified view on the attack on Sony. Do note that the title could very well be better worded (and this is how it was yesterday, too); I'm not sure how else to word it so it'll suffice.


After the 'first' attack (the quotes are important here), Sony brought in a security professional. Yet, while some might find it ironic (it isn't), there was another attack. First tip of old: if you consider security after an attack, or after deployment (e.g. in software development), then you're behind at best and you may very well be too late, too. Second tip of old: in general, notwithstanding certain (rare and still not well advised) cases, if your network was compromised (there is one thing to consider here[1]), the only safe way is to start all over with improved policies, based on what you learned from the attack. There is a reason I quoted 'first': while it isn't a guarantee (by all means, given what was claimed, it could have been an individual, but that makes it even worse, not better!), I wouldn't be surprised if the earlier attackers had left a backdoor or otherwise hadn't truly left. This very bit is a common thing, isn't it? Why would it not be? While some do it for a challenge (and I would argue this is far less common these days), there is the simple fact that attackers use a breached network for many things. This includes bouncing off of it (and this isn't counting bouncing off of proxies). Which brings me to the first real point about the 'evidence': IP addresses.

I could elaborate on why an IP address doesn't mean much, certainly not as proof, but I think I have a better way. If you were to lose your mobile phone, or if someone were to steal it, who is the rightful owner? You? Yes. Okay, so what happens if they then use that phone to pull pranks, make threatening calls, or otherwise abuse the fact it isn't their phone? Is it your fault, is it your responsibility? No? Then what makes you think an IP address is any different? It isn't. There are far too many possibilities. Worse is that even if the IPs are from North Korea (which I haven't seen, nor do I really care – it is irrelevant to the point), it doesn't mean it is state sponsored. It also doesn't mean it isn't. And that is exactly the problem: it is speculation, and until it is actually confirmed it may as well be slander. I'm sorry to say that being confident (as at least one US official has stated they are) doesn't equate to reality, most certainly not 100% of the time; I know this personally, as does anyone who has been delusional but is not currently having said delusion: I was once confident that traffic signals were spying on me and me alone, as one example. Yes, I'm able to admit this publicly. Why? Why not is the better question. While I am by no means suggesting they are delusional here, my point is that being confident does not equate to reality (and this applies to those who are not delusional). While this is not necessarily any better an example, this is something that specifically makes my point that many things are not as they seem: Mirror Lake in Yosemite National Park, to name one of several (I seem to remember there is one in Canada, too). This should all be kept in mind when dealing with accusations. I know, I know: in addition to the IP there is the claim that the attack 'looks' similar to a previous attack. It is still speculation until proven otherwise. Again I'm going to give a non-technical example: some countries purchase aircraft from other countries, but that doesn't mean the jet flying over a country IS from the country that manufactured the jet. Similarly, some countries share flags while others have flags that are similar to another's. That doesn't mean the countries are the same.

There has also been the claim that this attack is unprecedented. I don't think so. Neither was it impossible to prevent. Yes, there is always someone who can best another, but that doesn't mean there is never room for improvement; there is always room for improvement! Always. Just as some attacks are not prevented, many more are. But to throw blame elsewhere, and to not address the real problem, is a problem itself. This is not the first time Sony has come under attack. They also aren't the only ones to be compromised more than once. I know for a fact that Kevin Mitnick, whom many would call a notorious hacker (and he calls himself a hacker too, or at least he did), fell prey to someone who bested him and consequently compromised his network. That was at his company, after his release from federal prison (2001 comes to mind as his release date but I'd have to check to confirm). In addition, the reason he was caught the second time around (indeed his arrest in the mid 1990s was not his first time in trouble with the law) was that someone bested him then too, as I recall someone he had attacked himself. I certainly do not call him a hacker, not even by the media's definition of hacker: he is excellent at social engineering, that much is true. Regardless, this was not the first time Sony was compromised. You would think that someone like Mitnick would be able not to fall prey here, given his title; so then it is easier to forgive a company like Sony. The only thing that matters is (and I really hope they do exactly this) that they re-evaluate their policies and implement the improvements. This goes for every entity. The only true mistake is not learning from your mistakes. The only failure is not learning from your supposed failures; we all make mistakes and none of us succeed in everything.

I would like to leave with some final thoughts: The group that claimed responsibility for the attack on Sony, GOP – Guardians of Peace – only started to use the film about North Korea’s leader after it was suggested it was related. Because of this, and since North Korea was enraged about the film (I have some thoughts here, which I share below[2]) prior to the attack, it was now North Korea’s fault. This is a fallacy. A fallacy is an illogical deduction, and even if they are responsible, the logic used above is flawed.

In the end, IP address, similarity in attack and other such things are not indicative of anything, not indicative unless you wish to believe only what you want to believe. As a final example: Robert Tappan Morris, indeed the author of the infamous ‘Internet Worm’, aka ‘The Morris Worm’, made his worm appear to come from a different university than his. This was to throw the authorities off his trail. Due to mistakes on his part, however, the worm brought the affected machines to their knees. The effects were plain for all to see and he was tracked down. Combine this with the fact that (for instance) many viruses, worms, trojan horses, backdoors and other malware belong to families of malware (which are not necessarily written by the same person, and indeed often are not), and this shows too that similarity does not imply equivalence (nor the same source).


[1] Does web defacement constitute the network being compromised? It could. But it also might not. File integrity checks would help determine this here (but are not perfect either, if the attacker gets root access). It is true that with content management systems a web defacement is easier and doesn’t require compromising the system itself (especially true if there is a configuration in the web files that specifically denies the CMS from modifying those files; i.e. you can only use the interface for the content and nothing else). In the end, the only safe way is to restore from a known-good state (but mind the fact that depending on how long ago the attack was (and attack implies original access, not defacement!) backups could also have a backdoor or indeed anything else). On this latter bit, backups: this is one of many reasons that the backup volume should either only be mounted when backing up (or restoring) or made immutable (except during writing to the volume). Another reason is user error (and yes, as I’ve made clear, administrators count as users): if a command you run, a script you write or something else you run (or is affected by a bug) goes badly, what if you wipe out (or damage) your backup? (Redundant backup isn’t necessarily the answer any more than redundant storage; the point of backup is having it in multiple locations (e.g. off-site (no, this does not include the cloud or anything you do not have complete control over!) and on-site), not having multiple copies: the difference is subtle but something that should be understood).

[2] As for North Korea being enraged: the fact remains that North Korea and the United States are not on good terms. So when you consider the film’s plot, and you consider the different culture, it isn’t all that surprising, is it? And you can see how it can provoke them. The subject of free speech (and more specifically freedom of expression) is usually brought up and indeed it is here too. Unfortunately though, like many things in this world, it is very often taken too far. It is especially taken too far when defending someone or something that (you) agree with (or sympathise with). It is also defended when you disagree or dislike that which is offended or upset by the expressions. As something I am unfortunately familiar with far too well, many people excuse bullies (even minor bullying is wrong but minor bullying develops further into moderate, to extreme, to beyond extreme) as “kids will be kids”. The problem is, you can only abuse someone so much before they snap. So yes, kids will be kids – until victims of bullying – also kids – get revenge on bullies. Then the kids are now horrible, and the “kids will be kids” is nowhere to be seen or heard. I am eternally grateful that there isn’t a violent streak in me, because I would have been another example of the above. I did get revenge in ways, but I did it subtly and non-violently. I also enjoyed outsmarting them, making them look like fools without them even knowing just how much so. The reality is violence doesn’t solve anything, at least not in positive ways, but if you subject someone to abuse, when they get revenge (which indeed includes violence), it is natural and expected. To explain this, there is a phenomenon called ‘identifying with the aggressor’. This is exactly why domestic abuse runs in families: the victims are not in power, are helpless and suffer because of it. But they also see that in order to gain control, which means to stop the abuse, they can become abusive themselves. So continues a vicious cycle…

George Boole: 150 Years Later

This will be fairly quick as there isn’t much to write and I’m trying to turn off the lights for the day (the wording is… intentional). By chance I saw an entry in my BBC feed called George Boole and the AND OR NOT gates. Being a programmer, I immediately knew what it was referring to (of course programmers aren’t the only ones who would know it but they definitely do). What I did not know (yes, I know, I can’t help it) is that it was 150 years ago that Boole himself died.

I find it rather interesting because within the last few months I started (and have not finished it yet) an article on boolean logic, looking at it in a different way than I’ve seen it explained. The reason it isn’t finished is, along with that thing called ‘real life’ (or that’s what I am told, anyway – I’ve yet to confirm it to my satisfaction …), it was a multiple-topic article (and it is more specifically how C handles boolean which is not the same as many other languages, e.g. Java). I just never finished it. I will in time but I could not pass this day up without remarking on the fact that he is indirectly responsible for so many things that many take for granted (even simple things like flashlights – if I am thinking right (and I’m not really putting any thought in to this at all for above noted reason) it is indeed the NOT gate that makes it work). It is simply amazing that such simple things as boolean (which can get complicated, yes, but everything can get complicated and the point remains that boolean gates themselves are simple in design) can give life to so many things. But yet this is something that is common: some of the most crazy, stupid sounding (and stupid simple) ideas are actually brilliant and perhaps much more than the person had in mind, when they thought of it in the first place (and boolean logic is only one example).

As an aside, tomorrow, December 9, as I recall, is Grace Hopper’s birthday. She also played a significant role in computing (and indeed she is the one who made popular the term debugging).

101 Years of Xexyl…

… for those who can count in binary, at least; indeed it was five years ago yesterday that I registered xexyl.net. I would never have suspected I would have what I have here today. I would never have imagined having my own RPM repository, and yet that is only one of the accomplishments here (for whatever they are each worth).


I fully admit I live in something of a fantasy world (which is something of a paradox: if I admit it does that make it real? If it is real then what is fantasy and how real is it?) and so it seems appropriate that, given the anniversary of xexyl.net, I reflect upon the history of xexyl.net and some of the views I have shared (the namesake is much older, as I have noted in the about section. It was many years ago that I played the game Xexyz and it clearly made an impact – perhaps not unlike the space rocket that was launched into the moon some years back… – in me. But xexyl.net is only five years old and while I have older domains, this is the first one I really feel is part of me).

I have written quite some off the wall, completely bizarre and (pseudo) random articles, but I try to always have some meaning to them (no matter how small or large and no matter how obvious or not) even if the meanings are somewhat ambiguous, cryptic and vague (as hard as it is to imagine of someone who elaborates as much as I do on any one topic, I do in fact abuse ambiguity and vagueness, and much of what I write – and indeed say – is cryptic). I do know, however, that I do not always succeed in this attempt. To suggest anything else is to believe in perfection in the sense of no room for improvement.

I strongly believe that there is one kind of perfection that people should strive for, something that many might not think of as ‘perfect': constantly improving yourself, eternally evolving as a person. When you learn something new or accomplish something (no matter how small or large), rather than think you are finished (something that one definition of ‘perfect’ suggests) you should think of it as a percentage: every time you reach ‘perfection’ – as 100% – you should strive for 200% of the last mile (200% of 1 is 2, 200% of 2 is 4, 200% of 4 is 8, etc.). This is, interestingly enough, exactly like binary: 1, 2, 4, 8, 16, 32, 64, 128, 256, 512 and so on (each value is two times the previous one). In between the powers of 2 you make use of the other bits. For example, 1b + 1b (1 + 1 decimal) is 10b (2 decimal). 10b + 1b (2 + 1 decimal) is 11b (3 decimal). 11b + 1b (3 + 1 decimal) is 100b (4 decimal). This repeats in powers of 2 because binary is base 2. I’ve written about this before but this is what I will call – from this point onward – ‘binary perfection’. It is also the only ideal perfection exactly because it is constantly evolving. This may very well be an eccentric way to look at it but I am an incredibly eccentric person. Still, this is the ‘perfect analogy’ and I daresay a brilliant and accurate one.
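If you want to watch the doubling happen in binary for yourself, bc will happily do the conversion (a small sketch, assuming bc is installed; the output simply shows each power of two in base 2):

$ for i in 1 2 4 8 16; do echo "obase=2; $i" | bc; done
1
10
100
1000
10000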

As always, true to my word, I will continue this when I can. Because as long as I admit my mistakes I am not in denial; as long as I am not in denial, I can learn more, improve myself and those around me. While I do it for myself (this is one of the rare things I consider myself and myself alone), if it betters anyone else, then I will consider it a positive side effect. But indeed there are times where I am inactive for long periods of time and there are other times where I have a couple or more posts in a month (or a fortnight or whatever it is). This is because of what I have pointed out: I do this for me but I also believe in openness with respect to sharing knowledge and experience. This includes but is not limited to programming (and by programming I refer to experience, concepts as well as published works, whether my work alone or my contributions to others’ works). But I am not an open person and I never have been. Perhaps this is best: I am a rather dark, twisted individual, an individual possessed by many demons. These demons are insidious monsters of fire that lash out at me (and at times my surroundings) but they are MY demons and I’ll be damned if anyone tries to take them away from me.

I am Xexyl and this is my manifesto of and for eternal madness…

The Secret: Trust and Privacy

First, as is typical of me, the title is deliberate but beyond the pun it actually is an important thing to consider, which is what I’m going to tackle here. The secret does indeed imply multiple things and that includes the secret to secrets, the relation between privacy and security and how trust is involved in all of this. I was going to write a revision to my post about encryption being important (and I might still to amend one thing, to give credit to the FBI boss about something, something commendable) but I just read an article on the BBC that I feel gives me much more to write about. So let me begin with trust.

Trust is something I refer to a lot – in person, here and pretty much everywhere I write – because considering trust is a good thing. Indeed, trust is given far too easily. As I have outlined before, even a little bit of trust – seemingly harmless – can be abused. Make no mistake: it is abused. The problem is, if you’re too willing to trust, how do you know when you’ve been too trusting? While I understand people need to have some established trust within their social circles, there are some things that do not need to be shared and there are things that really should not be entrusted to anyone except yourself, and that potentially includes your significant other. Computer security is something that fits here. Security in general is. The other problem is ignorance. Ignorance is not wrong but it does hurt, and if you don’t understand that something is risky (which I would argue applies to the fanatical and especially the younger Facebook and other social media users), how do you proceed? For kids it is harder as it is known that kids just do not seem to understand that they are not immortal, not immune to things that really are quite dangerous. However, if you are too trusting with computers, you are opening – yes, I know – a huge can of worms, and it can cause all sorts of problems (anything from complete takeover of your computer to monitoring your activities, which can lead to identity theft, phishing and many other things …). The list of issues that granting trust can lead to is, I fear, unlimited in size. It is that serious. You have to find a balance and it is incredibly hard to do, no matter how experienced you are. I’ve made the general ideas clear before, but I don’t think I’ve actually tackled this issue with privacy and secrecy. I think it is time I do that.

In the wake of the Edward Snowden leaks, many more people are concerned for their privacy. While they should have always been concerned, it doesn’t really change the fact that they are now at least somewhat more cautious (or many are, at least). I have put this thought to words in multiple ways. The most recent is when I made a really stupid mistake (leading to me – perhaps a bit too critical but the point is the same – awarding myself the ID 10 T award), all because I was far more exhausted than I thought. Had I been clear-headed I wouldn’t have had the problem. But I wasn’t, and how could I know it? You only know it once it is too late (this goes for driving too, and that makes it even more scary because you could hurt someone else, whether someone you care about or someone you don’t even know). The best way to word this is new on my part: Despite the dismissal people suggest (“what you don’t know cannot hurt you” is 100% wrong), the reality is this: what you don’t know can hurt you, it likely will, and worse, it could even kill you! This is not an exaggeration at all. I don’t really need to get in to examples. The point is these people had no idea to what extent spying was taking place. Worse still, they didn’t suspect anything of the sort. (You should actually expect the worst in this type of thing but I suppose that takes some time to learn and come to terms with.) Regardless, they do now. It has been reported – and this is not surprising really, is it? – that a great portion of the United States population is now very concerned with privacy and has much less trust in governments (not just the US government, folks – don’t fall for the trap that only some countries do it; you’re only risking harm to yourselves if you do!) when it comes to privacy. What some might not think of (although certainly more and more do and will over time), and this is something I somewhat was getting at with the encryption post, is this: If the NSA could not keep their own activities (own is keyword number one) secret (and safe! – and that is ironic in itself, isn’t it? Very ironic, to the point of hilarity) then how can you expect them to keep YOUR (keyword number two) information secret and safe? You cannot. There is no excuse for it: they aren’t the only ones; government, corporations, it really doesn’t matter, too many think of security after the fact (and those that do think of it in the beginning are still prone to making a mistake or not thinking of all angles… or a bug in a critical component of their system makes the hard work in place much less useful or relevant). The fact they are a spying agency and they couldn’t keep that secret is, to someone who is easily amused (like myself), hilarious. But it is also serious, isn’t it? Yes, and it actually strengthens (or further shows) my point that I will get to in the end (about secrets). To make matters worse (as hard as that is to fathom), you have the increase (and I will tell everyone this, this is not going to go away and it is not going to be contained – no, it will only get worse) in point of sale attacks (e.g. through malware) that have in less than a year led to more corporations having major leaks of confidential information than I would like to see in five or even ten years. And that is just the number of corporations – the number of victims is in the millions (per corporation, even)!
This information includes credit card details, email addresses, home addresses, … basically the information that can help someone phish you, even to the point of stealing your identity (to name one of the more extreme possibilities). Even if they don’t use it for phishing, you would be naive to expect them not to use the stolen information.

I know I elaborate a lot and unfortunately I haven’t tied it all together yet. I promise it is short, however (although I do give some examples below, too, that do add up in length). There is only one way to keep something safe, and that is this: don’t share it. The moment you share something with anyone, the moment you write it down, type it (even if you don’t save it to disk), do some activity that is seen by a third party (webcam or video tape, anyone?), it is not a secret. While the latter (being seen by camera) is not always applicable, the rest is. And what good is a secret that is no longer a secret? Exactly: it is no longer secret and therefore cannot be considered a secret. Considering it safe because you trust someone – regardless of what you think they will do to keep it safe and regardless of how much you think you know them – is a dangerous thing (case in point: the phenomenon called, as I seem to remember, revenge porn). In the end, always be careful with what you reveal. No one is immune to these risks (if you are careless someone will be quite pleased to abuse it) and I consider myself just as vulnerable exactly because I AM vulnerable!

On the whole, here is a summary of secrets, trust and security: the secret to staying safe and as secure as possible is to not give out trust for things that need not be shared with anyone in the first place. If you think you must share something, think twice really hard and consider it again: you might not need to, no matter how much the person (or entity) claims it will benefit you. Do you really, honestly, need to turn your thermostat on by your computer or phone? No, you do not – and some thermostats have been shown (in recent times) to have security flaws. It isn’t that important. What might seem to be convenient might actually be the opposite in the end.

Bottom line there is this: If someone insists you need something from them or their company, they do not have your best interest at heart! Who is anyone else to judge whether you need their service or product?

A classic example and a funny story where the con-artist was exposed: If you go to a specialist to have an antique valued and they offer to buy it, you should never take the offer: for them to tell you something is worth X is one thing; it is another thing entirely to tell you it is worth X and then offer to buy it from you. The story: years ago, my mother caught a smog-check service in their fraud (and they were consequently shut down for it, as they should have been) because despite being female – and therefore what the con-artist thought would be easy prey, nice try loser – she is incredibly smart and he was a complete moron. He was so moronic that despite my mother being there listening to the previous transaction between the customer (“victim”) and himself, he told my mother the same story: you have a certain problem and I’ll charge X to fix it. The moron didn’t even change the story at all – he used it word for word, same problem, same price, right in front of my mother. In short: those telling you the value of something and then telling you they’re willing to buy/fix/whatever, are liars. Some are better liars but they’re still liars.

It is even worse when they are (for example) calling you – i.e., you didn’t go to them! Unsolicited tech support calls, anyone? Yes, this happened not long ago. I really pissed off this person by turning the tables on him. While what I did is commendable (as he claimed, I was wasting his time, which means he lost time he could have spent cheating someone else), do note that some would have instead fallen victim, and the reason he kept up until I decided to play along (and make a fool of him, as you’ll see if you read) is exactly because they are trained: trying to manipulate, trying to keep me on the line as long as possible (which means more time to try to convince me I need their service), and they only wanted to cheat me out of money (or worse: cause a problem with my computer that they were claiming to fix). Even though I got the better of them (as I always have), and to the point of him claiming I was wasting HIS time, they will just continue on and try the next until they find a victim. It is just like spam: as long as it pays they will keep it up. People do respond (directly and indirectly) to spam and it will not end because of this, as annoying as it may be. Again, if some entity is telling you you need their service or product, it is not with your best interest but their interest! That is undeniable; even if you initially went to them, if they are insisting you need their product or service, they are only there to gain and not to help. This is very different from going to a doctor and them telling you something serious (although certainly there are quacks out there, there is a difference and it isn’t too difficult to discern). Always be thinking!

2014 ID10T World Champion


2014/11/02:
There are two things I want to point out. The first one is noting that my mistake is not as bad as it initially seems because prior to systemd, this would not have been a problem at all. Second, I am remarking on why I admit to these types of things:

First, and perhaps the most frustrating for me (but what is done is done and I cannot change it but only accept it and move on) is that previously, before /bin, /sbin, /lib and /lib64 were made symbolic links to /usr/bin, /usr/sbin, /usr/lib and /usr/lib64, I would have been fine. Indeed, I can see that is where my mind was, besides the other part I discussed (about how files can be deleted yet still used as long as a reference is available; it is only once all references to the file are closed that the file is no longer usable). Where were mount and umount before this? And did they use /usr/lib64 or was it /lib64? The annoying thing is: they were under /bin and /lib64, which means they used to be – but now, with the symlinks, are not – on the root volume. So umount on /usr would have meant /usr was gone but /bin would still be there. So I would have still had access to /bin/mount. Alas, that is one of the things I didn’t like about some changes over the years, and it hit me hard. Eventually I will laugh at it entirely but for now I can only laugh in some ways (it IS funny but I’m more annoyed at myself currently). As I get to in my second point, I’m not renaming this post (dignity remains strong) even though it is not as bad as I made it sound initially. While I would argue it was a rather stupid mistake, I don’t know if champion is still correct. Maybe last place in the final round is more accurate. Maybe not even that. Regardless, the title (for once the pun is not intended) is remaining the same.

Second, some might wonder why I admit to such a thing as below (as well as other things like when I messed up Apache logs… or other things I’m sure I have written about, before… and will in the future…) when xexyl.net is more about computers in general, primarily focusing on programming, Linux (typically Red Hat based distributions) and security. The reason I include things like the below is that I know that my greatest strength is that I’m willing to accept mistakes that I make; I don’t ever place the blame on someone or something else if I am responsible. Equally I address my mistakes in the best way possible. Now ask yourself this: If I don’t accept my mistakes, can I possibly take care of the problem? If I did not make a mistake – which is what being in denial really is – then there isn’t a problem at all. So how can I fix a problem that isn’t a problem? No one is perfect, and my typical joke aside (I consider myself, much of the time, to be no one, and “no one is perfect”), it is my thinking that if I can publicly admit to mistakes then it shows just how serious I am when I suggest to others (for example, here) that the only mistake is not accepting your own mistakes. So to that end, I made a mistake. Life goes on…


There are various web pages out there about computer user errors. A fun one that I’m aware of is top 10 worst mistakes at the command line. While I certainly cannot make claim to some of the obvious ones known, I am by no means perfect. Indeed, I have made many mistakes over the years and I wouldn’t have it any other way: the only mistake would be to not accept the mistake(s) and therefore not learn from them (although the mistake I’ll reveal here is one that is hard to learn from in some ways, as I explain: fatigue is something that is very hard to determine and by extension being tired means you don’t even know you are as tired as you are). Since I often call myself a no-one or nobody (exactly what Nemo in Captain Nemo in 20,000 Leagues Under the Sea means, in Latin), I have a great deal of amusement from the idea of “no one is perfect” exactly because of what I consider myself. But humour aside I am not perfect at all. While I have remarked on this before, I think the gem of them all is this:

There is no such thing as human weakness, there is only
strength and… those blinded by… the fallacy of perfection.
— Xexyl

If you can accept that truth then you can always learn, always expand yourself, always improve yourself and potentially those around you. This is hard for some to accept but those who do accept it know exactly what I mean. I assure everyone, you are not perfect!

So with that out of the way, let me get to the point of this post. I admit that mistakes of the past fail to come to my mind although I know I’ve made many and some more idiotic than others. However, around 6:00 today I made what is absolutely my worst mistake ever, and one that gives me the honour and privilege to be the holder of the title: 2014 ID10T World Champion.

What is it? Prepare yourselves and challenge yourself as well. A while back I renamed the LVM volume group on my server. Something, however, occurred to me: obviously some file systems cannot be umounted in order to be remounted under the new volume group name. That doesn’t mean that files at the current mount point cannot be accessed. What it does mean, however, is that if I update the kernel I will have in the bootloader a reference to the old volume group. This means I will have to update the entry the next time I reboot. I did keep this in mind and I almost went this route until this morning when I got the wise (which is to say really, really stupid) idea of running:

# init S

in order to get to single user mode, thereby making most filesystems easier to umount. Of course, I had already fixed /home, /opt and a few others that don’t have to be open. I was not thinking in full here, however, and it went from this to much worse. After logging in as root (again, obviously) to “fix” things, I went to tackle /usr which is where all hell broke loose…

It used to be that you would have /bin and /sbin on a different file system from (or if nothing else, not the same as) /usr/bin and /usr/sbin. However, in more modern systems, you have the following:

$ ls -l /{,s}bin
lrwxrwxrwx. 1 root root 7 Dec 18  2013 /bin -> usr/bin
lrwxrwxrwx. 1 root root 8 Dec 18  2013 /sbin -> usr/sbin

which means that anything that used to be under /bin would now be /usr/bin. In addition, you also had /lib and (for 64-bit builds) /lib64. However, similar to the above, you also have:

$ ls -l /lib{,64}
lrwxrwxrwx. 1 root root 7 Dec 18  2013 /lib -> usr/lib
lrwxrwxrwx. 1 root root 9 Dec 18  2013 /lib64 -> usr/lib64

which means you absolutely need /usr to be mounted! Even if I had busybox (or similar) installed for statically linked commands – and I did not: a recent upgrade to the latest release on the server, combined with me not installing busybox again, saw to that – I would have been screwed over by the simple fact that once /usr is umounted I have no way to run mount again! Most disturbing is that I knew what I was about to do was risky – risky because I was going to use an option that had potential for harm – yet without the worry I just described in mind. However, as soon as I ran the command, but before I confirmed it, I knew I would be forced to do a hard reboot. The command is as such:

# /usr/bin/umount -l /usr

Indeed, I just made it impossible to mount, change run level, do much of anything other than reboot (and not by command! That was already made impossible by my idiocy!). And so I did. Of course, I still had to update the boot entry. While that is the least of my worries (was no problem), it is ironic indeed because I would have had to do that regardless of when I rebooted next. So all things considered, for the time being, I am, I fear, the 2014 World Holder of the ID 10 T award. Indeed, I’m calling myself an idiot. I would argue that idiot is putting it way too nicely.
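You can see the dependency for yourself: mount is dynamically linked, and its libraries resolve through /lib64, which is itself just a pointer into /usr (a rough sketch; output abbreviated and it will differ per system):

$ file /usr/bin/mount
/usr/bin/mount: ELF 64-bit LSB executable, x86-64, ... dynamically linked ..., stripped
$ ldd /usr/bin/mount | grep libc
        libc.so.6 => /lib64/libc.so.6 (0x00007f...)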

As for the -l option, given the description in umount(1), the hour it was and the sleep I did (not) get last night, I was thinking along the lines of (and this is why I didn’t think beyond it, stupid as that is!) as long as you have a reference to a file, even if it is deleted, you still can use it and even have the chance to restore it (or execute it or… keep it running). Once all file references are gone, if it is deleted, then it is gone. So when I read:

-l, --lazy
Lazy unmount. Detach the filesystem from the filesystem hierarchy now, and cleanup all references to the filesystem as soon as it is not busy anymore. (Requires kernel 2.4.11 or later.)

I only thought of the latter part and not the detach NOW portion. In addition, I wasn’t thinking of the commands themselves. Clearly if programs are under /usr then I might need /usr to … run mount! This is a perfect example, I might add, of how dangerous being tired is: you might think you have the clarity to work on something but the reality is, if you don’t have that clarity then you don’t have the clarity to determine whether you have the ability to judge any of it in the first place. This implies I likely won’t get much done today but at least I did do one thing: I fixed the logical volume rename issue. That is something, even if it obliterated my (good) system uptime and at the same time revealed how bad MY uptime was (I should not have been at the server, let alone up at all!).
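For what it’s worth, on a Red Hat style system the fix amounts to roughly the following (a sketch only: the volume group names here are made up, you should check each file before letting sed loose on it, and EFI systems keep grub.cfg elsewhere):

# vgrename oldvg newvg                         # rename the volume group (if not already done)
# sed -i 's/oldvg/newvg/g' /etc/fstab          # update the mount entries
# sed -i 's/oldvg/newvg/g' /etc/default/grub   # rd.lvm.lv= and similar kernel parameters
# grub2-mkconfig -o /boot/grub2/grub.cfg       # regenerate the boot entries
# dracut -f                                    # rebuild the initramfs so early boot knows the new name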

Using ‘script’ and ‘tail’ to watch a shell session in real-time

This is an old trick that my longest standing friend Mark and I used years ago on one of his UltraSPARC stations while having fun doing any number of things. It can be used for all sorts of needs (e.g., showing someone how to do something, allowing someone to help debug your problem, to name two of many others) but the main idea is that one person is running tasks (for the purpose of this article I will pretend this person is the victim) and more generally using the shell, while the other person (pretending that this person is the snoop) is watching everything, even if they’re across the world. It works as long as both are on the same system and the victim writes (directs) output to a file that the snoop can read (as in open for reading).

Before I get to how to do this, I want to point something else out. If you look at the man page for script, you will see the following block of text under OPTIONS:

-f, --flush
Flush output after each write. This is nice for telecooperation: one person does `mkfifo foo; script -f foo’, and another can supervise real-time what is being done using `cat foo’.

But there are two problems with this method, both due to the fact that the watching party (as I put it for amusement, the snoop) has control. For example, if I do indeed type at the shell:

$ mkfifo /tmp/$(whoami).log ; script --flush -f /tmp/$(whoami).log

… then my session will block, waiting for the snoop to type at their prompt:

$ cat /tmp/luser.log

(assuming my login name is indeed luser). And until that happens, even if I type a command, no output occurs on my end (the command is not ignored, however). Once the other person does type that, I will see the output of script (showing that the output is being written to /tmp/luser.log and any output from commands that I might have typed). The other user will see the output too, including which file is being written to. Secondly, the snoop decides when to stop. When they hit ctrl-c, then once I begin to type, I will see at my end something like this:

$ lScript done, file is /tmp/luser.log
$

Note that I hit the letter l, as if I was going to type ls (for example), and then I see the script done output. If I finish the command, let’s say by typing s and then hitting enter, then instead of seeing the output of ls I will see the following (since typing ls hardly takes any time I will show it as it would appear on my screen, with the command completed, or so one would suspect):

$ lScript done, file is /tmp/luser.log
$ s
-bash: s: command not found

Yes, that means that the first character closes my end (the lScript is not a typo, that is what appears on my screen), shows me the typical message after script is done and then and only then do I get to enter a command proper.

So the question is, is there a way that I can control the starting of the file, and even more than that, could the snoop check on the file later (doesn’t watch in the beginning) or stop in the middle and then start watching again? Absolutely. Here’s how:

  • Instead of making a fifo (first in first out, i.e., a queue) I specify a file to write the script output to (a normal file with a caveat as below), or alternatively let the default file name be the output, instead. So what I type is:
    $ script --flush -f /tmp/$(whoami).log
    Script started, file is /tmp/luser.log
    $
  • After that is done, I inform the snoop (somewhere else; or otherwise they use the --retry option of tail to repeatedly try until interrupted or until the file can be followed) – and now THAT is something you don’t expect to ever be true, is it? Why would I inform a snoop of anything at all?! This is of course WHY I chose the analogy in the first place – and they then type:
    $ tail -f /tmp/luser.log

    And they will see – by default – the last ten lines of the session (the session implies the script log, so not the last ten lines of my screen!). They could of course specify how many lines but the point is they will now be following (that’s what -f does) the output of the file, which means whenever I type a command, they will see that as well as any output. This will happen until they hit ctrl-c or I type ‘exit’ (and if I do that they will still try to follow the file so they will need to hit ctrl-c too). Note that even if I remove the log file while they’re watching it, they will still see the output until I exit the script session. This is because they have a file descriptor for the log file, and even though the file name is gone the inode is still there and still being written to, so they keep following it (this is how inodes work; see the short demonstration after this list).
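A rough way to see that ‘deleted but still open’ behaviour for yourself (only a sketch: the fd number, date and the process pgrep picks will vary):

$ rm /tmp/luser.log
$ ls -l /proc/$(pgrep -xn script)/fd | grep deleted
l-wx------. 1 luser luser 64 Dec  8 06:00 3 -> /tmp/luser.log (deleted)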

As for the caveat I referred to, it is simply this: control characters are also sent to the file and so it isn’t plain text. Furthermore, for the same reason, using text editors (e.g., vi) will not show correctly to the snoop.
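If you want something closer to readable text afterwards, a couple of partial remedies (col will not strip every escape sequence, mind you):

$ col -b < /tmp/luser.log > /tmp/luser.txt   # drops backspaces and carriage returns
$ less -R /tmp/luser.log                     # or view it with the colour sequences interpreted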

In the end, this is probably not often used but it is very useful when it is indeed needed. Lastly, if you were to cat the output file, you’d see it as if you were watching the session in real-time. Most importantly: do not ever do anything that would reveal confidential information, and if you do have anything you don’t want shown to the world, do not use /tmp or any world-readable file (and rm it when done too!). Yes, you can have someone read a file in your directory as long as they know the full path and have proper permissions to the directory and file.
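If you do want to share from your home directory rather than /tmp, something along these lines is the idea – the group name ‘snoops’ is made up, and you (or root) would need it to exist with the watcher as a member:

$ chmod o+x $HOME                                  # search (x) only, not read, on the home directory
$ mkdir -m 0750 ~/watch && chgrp snoops ~/watch    # only the owner and the 'snoops' group can get in
$ touch ~/watch/session.log && chgrp snoops ~/watch/session.log && chmod 640 ~/watch/session.log
$ script --flush -f ~/watch/session.log            # script re-uses the existing file and its permissions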

Encryption IS Critical

I admit that I’m not big on mobile phones (and I also admit this starts out on phones but it is a general thing and the rules apply to all types of nodes). I’ve pointed this out before, especially with regards to so-called smart technology. However, just because I personally don’t have much use for it, most of the time, does not mean that the devices should not be as secure as possible. Thus, I am, firstly, giving credit to Apple (which all things considered is exceptionally rare) and Google (which is also very rare). I don’t like Apple particularly because of Steve Jobs’ arrogance (which I’ve also written about) but that is only part of it. At the same time, I do have fond memories of the early Apple computers. As for Google, I have serious issues with them but I haven’t actually put words to it here (or anywhere actually). But just because I don’t like them does not mean they can never do something right or that I approve of. To suggest that would be me being exactly as I call them out for. Well, since Apple and Google recently suggested they would enable encryption by default for iOS and Android, I want to commend them for it: encryption is important.

There is the suggestion, most often by authorities (but not always as – and this is something I was planning on and I might still write about – Kaspersky showed not too long ago when they suggested similar things), that encryption (and more generally, privacy) is a problem and a risk to one’s safety (and others’ safety). The problem here is that they are lying to themselves or they are simply ignorant (ignore the obvious please, I know it but it is beside the point for this discussion). They are also risking the people they claim to want to protect and they also risk themselves. Indeed, how many times has government data been stolen? More than I would like to believe and I don’t even try to keep track of it (statistics can be interesting but I don’t find the subject of government – or indeed other entity – failures all that interesting. Well, not usually). The problem really comes down to this, doesn’t it? If someone has access to your or another person’s private details, and it is not protected (or poorly protected), then what can be done to you or that other person if someone ELSE gets that information? Identity theft? Yes. Easier time gathering other information about you, who you work for, your friends, family, your friends’ families, etc.? Yes. One of the first things an attacker will do is gather information because it is that useful in attacks, isn’t it? And yet, that’s only two issues of many more, and both of those are serious.

On the subject of encryption and the suggestion that “if you have nothing to hide you have nothing to fear”, there is a simple way to obliterate it. All one needs to do is ask a certain (or similar) question, with an explanation following, directed at the very naive and foolish person (Facebook’s founder has suggested similar, as an example). The question is along the lines of: Is that why you keep your bank account, credit cards, keys, passwords, etc., away from others? You suggest that you shouldn’t have a need to keep something private because you have nothing to hide unless you did something wrong (and so the only time you need to fear is when you are in fact doing something wrong). But here you are hiding something you wouldn’t want others knowing – your private information – and with your logic it follows that you did something wrong. The truth is that if you have that mentality, you are either lying to yourself (and ironically hiding something from yourself and therefore not exactly following your suggestion) or you have hidden intent or reasons to want others’ information (which, ironically enough, is also hiding something – your intent). And at the same time, you know full well that YOU do want your information private (and YOU should want it private!).

But while I’m not surprised here, I still find it hard to fathom how certain people, corporations and other entities still think strong encryption is a bad thing. Never mind the fact that in many high-profile cases criminal data confiscated by police has been encrypted and yet still revealed. Never mind the above. It is about control and power and we all know that the only people worthy of power are those who do not seek it but are somehow bestowed with it. So what am I getting at? It seems that, according to the BBC, the FBI boss is concerned about Apple’s and Google’s plans. Now I’m not going to be critical of this person, the FBI in general or anything of the sort. I made it clear in the past that I won’t get in to the cesspool that is politics. However, what I will do is remark on something this person said, but not remark on it by itself. Rather I will refer to something most amusing. What he said is this:

“What concerns me about this is companies marketing something expressly to allow people to place themselves beyond the law,” he said.

“I am a huge believer in the rule of law, but I am also a believer that no-one in this country is beyond the law,” he added.

But yet, if you look at the man page of expect, which allows interactive things that a Unix shell cannot do by itself, you’ll note the following capability:

  • Cause your computer to dial you back, so that you can login without paying for the call.

That is, as far as I am aware, a type of toll fraud. Why am I even bringing this up, though? What does this have to do with the topic? Well, if you look further at the man page, you’ll see the following:

ACKNOWLEDGMENTS
Thanks to John Ousterhout for Tcl, and Scott Paisley for inspiration. Thanks to Rob Savoye for Expect’s autoconfiguration code.

The HISTORY file documents much of the evolution of expect. It makes interesting reading and might give you further insight to this software. Thanks to the people mentioned in it who sent me bug fixes and gave other assistance.

Design and implementation of Expect was paid for in part by the U.S. government and is therefore in the public domain. However the author and NIST would like credit if this program and documentation or portions of them are used.
29 December 1994

I’m not at all suggesting that the FBI paid for this, and I’m not at all suggesting anyone in the government paid for it (it is, after all, from 1994). And I’m not suggesting they approve of this. But I AM pointing out the irony. This is what I meant earlier – it all comes down to WHO is saying WHAT and WHY they are saying it. And it isn’t always what it appears or is claimed. Make no mistake people, encryption IS Important, just like PCI compliance, auditing (regular corporation auditing of different types, auditing of medical professionals, auditing in everything), and anyone suggesting otherwise is ignoring some very critical truths. So consider that a reminder, if you will, of why encryption is a good thing. Like it or not, many humans have no problem with theft, no problem with manipulation, no problem with destroying animals or their habitat (Amazon forest, anyone?). It is by no means a good thing but it is still reality and not thinking about it is a grave mistake (including indeed literally, and yes, I admit that that is pointing out a pun). We cannot control others in everything but that doesn’t mean we aren’t responsible for our own actions and ignoring something that risks yourself (never mind others here) places the blame on you, not someone else.

shell aliases: the good, the bad and the ugly


2014/11/11:
I erroneously claimed that the -f option is required with the rm command to remove non-empty directories. This is only a partial truth. You need -r, as that is for recursion: when traversing a file system it descends into each directory encountered until there are no more directories to be found (and indeed file system loops can occur, which programs do consider). But -f isn’t for non-empty directories as such; it is write-permission related. Specifically, in relation to recursion (-r), if you specify -r you’ll still be prompted whether to descend into a directory, or remove a file, if it is write-protected. If you specify -f, you will not be prompted. Of course, there are other reasons you might not be able to remove the directory or any files in it, but that is another issue entirely. Furthermore, root need not concern themselves with write permission, at least in the sense that they can override it.
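A quick (and safe) way to see the difference for yourself, in a throwaway directory (the exact prompt wording is GNU rm’s and may differ slightly):

$ mkdir -p demo/sub && touch demo/sub/file && chmod a-w demo/sub/file
$ rm -r demo
rm: remove write-protected regular empty file 'demo/sub/file'? n
$ rm -rf demo     # -f: no prompt, everything goes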


2014/10/07:
Please observe the irony (that actually further proves my point, and that itself is ironic as well) that I suggest using the absolute path name and then I do not (with sed). This is what I mean by I am guilty of the same mistakes. It is something I have done over the years: work on getting in to the habit (of using absolute paths) and then it slides and then it happens all over again. This is why it is so important to get it right the first time (and this rule applies to security in general!). To make it worse, I knew it before I had root access to any machine (ever), years back. But this is also what I discussed with convenience getting in the way of security (and aliases only further add to the convenience/security conflict, especially with how certain aliases enable coloured output or some other feature). Be aware of what you are doing, always, and beware of not taking this all to heart. (And you can bet I’ve made a mental note to do this. Again.) Note that this rule won’t apply to shell built-ins unless you use the program too (some – e.g., echo – have both). The command ‘type’ is a built-in, though, and it is not a program. You can check by using the command on itself: type -P type will show nothing because there is no file on disk for type. Note also that I’ve not updated the commands where I show how aliases work (or commands that might be aliased). I’ve also not updated ls (and truthfully it probably is less of an issue, unless you are root, of course) but do note how to determine all ways a command can be invoked:

$ type -a ls
ls is aliased to `ls --color=auto'
ls is /usr/bin/ls
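And to see the point about ‘type’ itself being a built-in with no file on disk (output from bash on this kind of system; other shells differ):

$ type -P type    # prints nothing: there is no file on disk for it
$ type -a type
type is a shell builtin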

This could in theory be only for Unix and its derivatives, but I feel there are similar risks in other environments. For instance, in DOS, extensions of programs had a priority so that if you didn’t type ‘DOOM2.EXE’ it would check – if I recall correctly – ‘DOOM2.BAT’ and ‘DOOM2.COM’ and then ‘DOOM2.EXE’. I don’t remember if that is the exact order but with no privilege separation you had the ability to rename files, so that if you wanted to write a wrapper for DOOM2 you could do it easily enough (I use DOOM2 in the example because not only was it one of my favourite graphical computer games, one I beat repeatedly because I enjoyed it so much, much more than the original DOOM… I also happened to write a wrapper for DOOM2 itself, back then). Similarly, Windows doesn’t show known file extensions (by default, last I knew anyway) and so if a file is called ‘doom.txt.exe’ then double clicking on it would actually execute the executable instead of opening a text file (but the user would only see the name ‘doom.txt’). This is a serious flaw in multiple ways. Unix has its own issues with paths (but at least you can redefine them and there IS privilege separation). But it isn’t without its faults. Indeed, Unix wasn’t designed with security in mind and that is why so many changes have been implemented over the years (the same goes for the Internet main protocols – e.g., IP, TCP, UDP, ICMP – as well as other protocols at, say, the application layer – all in their own ways). This is why things are so easy to exploit. This time I will discuss the issue of shell aliases.

The general idea for finding the program (or script or…) to execute is also based on priority. This is why when you are root (or using a privileged command) you should always use a fully-qualified name (primarily known as using the absolute file name). It is arguably better to always do this because, what if someone modified your PATH, added a file in your bin directory, updated your aliases, … ? Now you risk running what you don’t intend to. There is a way to determine all the ways it could be invoked but you should not rely on this, either. So the good, then the bad and then the ugly of the way this works (remember, security and convenience conflict with each other a lot, which is quite unfortunate but something that cannot be forgotten!). When I refer to aliases, understand that aliases are even worse than the others (PATH and $HOME/bin/) in some ways, which I will get to in the ugly.
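To make the PATH point concrete, here is a harmless sketch of the sort of thing a stray file in your own bin directory can do (the ‘impostor’ script is of course made up; clean it up afterwards):

$ mkdir -p ~/bin
$ printf '#!/bin/sh\necho "not the ls you expected"\n' > ~/bin/ls
$ chmod +x ~/bin/ls
$ export PATH=$HOME/bin:$PATH   # or an attacker edits your dot files for you…
$ hash -r; ls                   # hash -r clears bash's cached command locations, just in case
not the ls you expected
$ /usr/bin/ls                   # the absolute name does not care about PATH or aliases
$ rm ~/bin/ls                   # tidy up the impostor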


THE GOOD


There is one case where aliases are fine (or at least not as bad as the others; the others being when you bake options in). It isn’t without flaws, however. Either way: let’s say you’re like me and you’re a member of the Cult of VI (as opposed to the Church of Emacs). You have vi installed but you also like vim features (and so have it installed too). You might want vi in some cases but vim in others (for instance, root uses vi and other users use vim; contrived example or not is up to your own interpretation). If you place in $HOME/.bashrc the following line, then you can override what happens when you type the command in question as follows:

$ /usr/bin/grep alias $HOME/.bashrc
alias vi='vim'

Then typing 'vi' at the shell will open vim. Equally, if you type 'vi -O file1 file2' it will be run as 'vim -O file1 file2'. This is useful but even then it has its risks. It is up to the user to decide, however (and after all, if a user is compromised you should assume the system is compromised because if it hasn’t been already it likely will be, so what’s the harm? Well I would disagree that there is no harm – indeed there is – but…)
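If you do want the real vi for a single invocation despite the alias, a few of the usual ways around it in bash:

$ \vi file1           # a leading backslash skips alias expansion just this once
$ command vi file1    # 'command' also bypasses aliases (and shell functions)
$ /usr/bin/vi file1   # and the absolute name bypasses everything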


THE BAD AND THE UGLY


Indeed, this is both bad and ugly. First, the bad part: confusion. Some utilities have conflicting options. So if you alias a command to use your favourite options, what if one day you want to use another option (or see if you like it) and you are used to typing the basename (so not the absolute name)? You might get an error about conflicting options (or you get results you don’t expect). Is it a bug in the program itself? Well, check aliases as well as where else the problem might occur. In bash (for example) you can use:

$ type -P diff
/usr/bin/diff

However, is that necessarily what is executed? Let’s take a further look:

$ type -a diff
diff is aliased to `diff -N -p -u'
diff is /usr/bin/diff

So no, it isn’t necessarily the case. What happens if I use -y, which is a conflicting output type? Let’s see:

$ diff -y
diff: conflicting output style options
diff: Try 'diff --help' for more information.

Note that I didn’t even finish the command line! It detected conflicting output styles and that was it. Yet it appears I did not actually specify conflicting output style options – clearly I only specified one – so this means the alias was indeed used, which means that the options I typed were added to the aliased options rather than replacing them (certain programs will take the last option as the one that rules, but not all do, and diff does not here). If, however, I were to do:

$ /usr/bin/diff -y
/usr/bin/diff: missing operand after '-y'
/usr/bin/diff: Try '/usr/bin/diff --help' for more information.

There we go: the error as expected. That’s how you get around it. But let’s move on to the ugly, because “getting around it” only works if you remember – and more to the point, do not ever rely on aliases! Especially do not rely on them for certain commands. This cannot be overstated! The ugly is this:

It is unfortunate but Red Hat based distributions have this by default, and not only is it baby-sitting (which is both risky and obnoxious much of the time … something about the two being related), it has an inherent risk. Let’s take a look at the default alias for root’s ‘rm’:

# type -a rm
rm is aliased to `rm -i'
rm is /usr/bin/rm

-i means interactive. rm is of course remove. Okay, so what is the big deal? Surely this is helpful because as root you can wipe out the entire file system? Okay, that’s fine, but you can also argue the same with chown and chmod (always be careful with these – well, in general with any – utilities used recursively… but these specifically are dangerous; they can break the system with ease). I’ll get to those in a bit. The risk is quite simple. You rely on the alias, which means you never think about the risks involved; indeed, you just type ‘n’ if you don’t want to delete the files encountered (and you can send yes to all by piping ‘yes’, among other ways, if you wanted to avoid the nuisance one time). The risk then is, what if by chance you are an administrator (a new administrator) on another (different) system and it does not have the -i alias? You then go to do something like (and one hopes you aren’t root but I’m going to show it as if I was root – in fact I’m not running this command – because it is serious):

# /usr/bin/pwd
/etc
# rm *
#

The pwd command was more to show you the possibility. Sure, there are directories there that won’t be wiped out because there was no recursive option, but even if you are fast with sending an interrupt (usually ctrl-c but it can be shown and also set with the stty command, see stty --help for more info), you are going to have lost files. The above would actually have shown that some entries were directories, after the rm * but before the last #, but all the regular files in /etc itself would be gone. And this is indeed an example of “the problem is that which is between the keyboard and chair” or “PEBKAC” (“problem exists between keyboard and chair”) or even “PICNIC” (“problem in chair, not in computer”), among others. Why is that? Because you relied on something one way and therefore never thought to get in the habit of being careful (and either always specifying -i or using the command in a safe manner, like always making sure you know exactly what you are typing). As for chown and chmod? Well, if you look at the man pages, you see the following options (for both):

--no-preserve-root
 do not treat '/' specially (the default)
--preserve-root
 fail to operate recursively on '/'

Now if you look at the man page for rm, and see these options, you’ll note a different default:

--no-preserve-root
 do not treat '/' specially
--preserve-root
 do not remove '/' (default)

The problem? You might get used to the supposed helpful behaviour with rm which would show you:

rm: it is dangerous to operate recursively on ‘/’
 rm: use --no-preserve-root to override this failsafe

So you are protected from your carelessness (you shouldn’t be careless… yes, it happens and I’m guilty of it too, but this is one of the things backups were invented for, as well as only being as privileged as is necessary and only for the task at hand). But that protection is a mistake itself. This is especially true when you then look at chown and chmod, both of which are ALSO dangerous when run recursively on / (actually on many directories recursively; an example not to do it on is /etc as that will break a lot of things, too). And don’t even get me started on the mistake of: chown -R luser.luser .*/ because .* matches .. as well, so even if you are in /home/luser/lusers, then as long as you are root (it is a risk for users to change owners and so only root can do that) you will be changing the parent directory and everything under it to be owned by luser as the user and luser as the group – and run high enough up the tree, that is the entire file system (/etc, /bin/, /dev/, everything). Hope you had backups. You’ll definitely need them. Oh, and yes, any recursive action on .* is a risky thing indeed. To see this in action in a safe manner, as some user, in their home directory or even a sub-directory of their home directory, try the following:

$ /usr/bin/ls -alR .*

… and you’ll notice it listing the parent directory and recursing through everything below it (the higher up the tree you run it, the more that covers)! The reason is the way path globbing works: .* matches .. (try man -s 7 glob). I’d suggest you read the whole thing but the part in particular is under Pathnames.
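A harmless way to see what .* actually expands to (the exact list of dot files will of course differ on your system):

$ cd && echo .*
. .. .bash_history .bash_profile .bashrc
$ echo .*/
./ ../ .config/ .mozilla/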

So yes, if you rely on aliases, which is relying on not thinking (a problem in itself, in so many ways), then you’re setting yourself up for a disaster. Whether that disaster in fact happens is not guaranteed, but one should be prepared and not set themselves up for it in the first place. And unfortunately some distributions set you up for this by default. I’m somewhat of the mind to alias rm to ‘rm --no-preserve-root’ but I think most would consider me crazy (they’re probably more correct than they think). As for the rm alias in /root/.bashrc, here’s how you comment it out (you could just as well delete the line if you prefer). Just like everything else, there are many ways; this is at the command prompt:

# /usr/bin/sed -i 's,alias \(rm\|cp\|mv\),#alias \1,g' /root/.bashrc

Oh, by the way, yes, cp and mv (hence the command above commenting all three out) are also aliased in root’s .bashrc to use interactive mode, and yes, the risks are the same: you risk overwriting files when you aren’t on an aliased account. This might even be on the same system: root has the aliases but your other accounts do not, so if you just did something as root, remembered that it was fine there, then logged back out to your normal, non-privileged user to do some maintenance, what happens when you use one of those commands that is not aliased to -i? Indeed, aliases can be slightly good, bad and very ugly. Note also that (although you should do this anyway) even if you were to source the file again (by ‘source /root/.bashrc’ or, equally, ‘. /root/.bashrc’), the aliases would still exist in the current shell, because the sed above only commented out the definitions; it did not unalias them. You could of course run unalias too, but better is to log out, and the next time you log in you won’t have that curse upon you.
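For instance, in a still-running root shell you could check for and clear the aliases like this (type and unalias are standard bash builtins; the output shown is what you would typically see with the stock aliases):

# type rm cp mv
rm is aliased to `rm -i'
cp is aliased to `cp -i'
mv is aliased to `mv -i'
# unalias rm cp mv
# type rm
rm is /usr/bin/rm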

One more thing that I think others should be aware of, as it further proves my point about forgetting aliases (whether you have them or not). The reason I wrote this is twofold:

  • First, I’ve long put off addressing the alias issue with rm (and similar commands), but it is something I’ve thought about for a long time and it is indeed a serious trap.
  • Second, and this is where I really make the point: the reason this came up is one of my accounts on my server had the alias for diff as above. I don’t even remember setting it up! In fact, I don’t even know what I might have used diff for, with that account! That right there proves my point entirely (and yes, I removed it). Be aware of aliases and always be careful especially as a privileged user…

 

The Hidden Dangers of Migrating Configuration Files

One of the things I have suggested (in person, here, and elsewhere) time and again is that the user is, more often than not, the real problem. It is the truth, it really is. I also tell others, and more often write, about how there should be no shame in admitting to mistakes. The only real mistake you can make is not admitting to your mistakes, because if you don’t admit to them you cannot learn from them; hiding behind a mask is not going to make the problem go away but will actually make it worse (while appearing not to be a problem at the same time). So let me make something very clear, and this too is something I’ve written about (and mentioned to people otherwise) before: administrators are users too. Any administrator not admitting to blunders is either lying (and a poor liar at that, I might add) or their idea of administration is logging in to a computer remotely, running a few basic commands, and logging out. Anyone that uses a computer in any way is a user; it is as simple as that. So what does this have to do with migrating configuration files? It is something I just noticed in full, and it is a huge blunder on my part. It is actually really stupid, but it is something to learn from, like everything else in life.

At somewhere around 5 PM / 17:00 PST on June 16, my server had been up for 2 years, 330 days, 23 hours, 29 minutes and 17 seconds. I know this because of the uptime daemon I wrote some time ago. Around that time, however, there was a problem with the server. I did not know it until the next morning at about 4:00, because I had gone for the night. The problem was that the keyboard would not wake the monitor (once it was turned on) and I could not ssh in to the server from this computer; indeed, the network appeared down. In fact, it was down. The LEDs on the motherboard (visible thanks to a side window in the case) were lit, the fan lights were lit and the fans were indeed moving. The only thing is, the system itself was unresponsive. The suspect is something I cannot prove one way or another, but it is this: an out of memory condition, the thinking being that the Linux OOM killer killed a critical process (and/or was not able to resolve the issue in time). I backed up every log file at that time, in case I ever wanted to look in to it further (probably not enough information, but there was enough to tell me roughly when it stopped). There had been a recent update to glibc (which, while not part of the kernel itself, is the library nearly everything in userspace is linked against), but Linux is really good about this sort of thing, so it really is anyone’s guess. All I know is when the logs stopped updating. The end result is that I had to do a hard reboot. And since CentOS 7 came out a month or two later, I figured why not? True, I don’t like systemd, but there are other things I do like about CentOS 7, and the programmer in me really liked the idea of GCC 4.8.x with its C11 and C++11 support. Besides, I manage on Fedora (a remote server and the computer I write from), so I can manage on CentOS 7. Well, here’s the problem: I originally had trouble (the day was bad and I naively went against my intuition, which was telling me repeatedly, “this is a big mistake”; it was). I got it working the next day, when I was more clear-headed. However, just as the CentOS 5 to CentOS 6 upgrade had certain major services jump a major release (in that case it was Dovecot), the same happened here, only this time it was Apache. And there were quite some configuration changes indeed, as it was a major release (from 2.2 to 2.4). I made a critical mistake, however:

I migrated the old configuration files for Apache. Here is what happened, why I finally noticed it, and why I did not notice it before. Migrating old files is indeed dangerous if you are not very careful (keep in mind that a major version change means that, unless you have other systems with the same layout, you will not be aware of all, and that is the keyword, all, of the changes). Even if you are careful, and even if things appear fine (no error, no warning, everything seems to work), there is always the danger that something which changed is in fact a problem. And that is exactly what happened. Let me explain.

In Apache 2.2.x you had the main config file /etc/httpd/conf/httpd.conf and you also had the directory /etc/httpd/conf.d (with extra configuration files, like the ones for mod_ssl, mod_security, and so on). In the main config file, near the beginning, you had the LoadModule directives, so everything worked fine. And since the configuration file has <IfModule></IfModule> blocks, as long as the module in question is not required, there is no harm; you can consider such a module optional. In Apache 2.4.x, however, early in /etc/httpd/conf/httpd.conf there is an Include of the directory /etc/httpd/conf.modules.d, which holds, among other files, 00-base.conf, and in that file are the LoadModule directives. And here is where the problem arose. I had made a test run of the install, but without thinking of the <IfModule></IfModule> blocks and non-required modules, and since the other Include directive is at the end of the file, there surely was no harm in shifting things around, right? Well, looking back it is easy to see where I screwed up and how. But yes, there was harm. And while I noticed the issue, it didn’t exactly register (perhaps something to do with sleep deprivation, combined with reading daily logs in the early morning and, more than that, being human, i.e., not perfect by any means, not even close). Furthermore, the error log was fine, and so in the logwatch output I did indeed see httpd logs. But something didn’t register until I saw the following:


0.00 MB transferred in 2 responses (1xx 0, 2xx 1, 3xx 1, 4xx 0, 5xx 0)
2 Content pages (0.00 MB)


Certainly that could not be right! I had looked at my website only yesterday, even, and more than once. But then something else occurred to me. I began to think about it, and realised that for some time all I had seen was the typical scanning for vulnerabilities that every webserver gets; I had not, in fact, seen much more. The closest would be:


2.79 MB transferred in 228 responses (1xx 0, 2xx 191, 3xx 36, 4xx 1, 5xx 0)
4 Images (0.00 MB),
224 Content pages (2.79 MB),


And yet, I knew I had custom scripts for logwatch, made some time back (they show other information I want to see that isn’t in the default logwatch httpd script/config). But I figured that maybe I had forgotten to restore them. The simple solution was to move the Include directives to before the <IfModule></IfModule> blocks; in other words, much earlier in the file, not at the end.
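To make the ordering concrete, here is a stripped-down sketch of how the stock CentOS 7 httpd.conf is laid out (simplified by me, and the LogFormat line is only an illustration, not my actual vhost format):

ServerRoot "/etc/httpd"

# Pulls in conf.modules.d/00-base.conf and friends, i.e. the LoadModule lines
# (mod_log_config among them); it must come before anything that tests for,
# or relies on, those modules.
Include conf.modules.d/*.conf

<IfModule log_config_module>
    # Only evaluated if mod_log_config is loaded by this point; otherwise the
    # named format below silently never gets defined.
    LogFormat "%v %h %l %u %t \"%r\" %>s %b" vhost
</IfModule>

# The virtual hosts (with their CustomLog ... vhost directives) come last.
IncludeOptional conf.d/*.conf

In my migrated file the module Include ended up after the <IfModule> blocks instead, so the test failed, the vhost format was never defined, and every CustomLog that referred to it fell back to treating the name as a literal format string.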

To be true to my nature and word, I’m going to share what I actually saw in the logs. This, I hope, will show exactly how sincere I am when I suggest that people admit to their mistakes and not worry about so-called weaknesses. If there is any human weakness, it is the inability to understand that perfection isn’t possible. But that is more as I put it before: being blinded by a fallacy. If you cannot admit to mistakes then you are hiding from the truth, and ironically you are not fooling anyone but yourself.

The log entries looked like this:


vhost


Yes, really. I’m 100% serious. How could I screw up that badly? It is quite simple: it evaluated to that because, at the end of the config file, I include a separate directory that holds the vhosts themselves. But the CustomLog format I use, which I cleverly named vhost (because it shows the vhost as well as some other vhost specifics), was never defined: the log modules were not loaded at the point where the LogFormat directives appear. In the <VirtualHost></VirtualHost> blocks I have CustomLog directives which would normally refer to that format by name. The reason the error logs worked is that I did not make a custom error log. But since the log modules were loaded after the configuration of the log formats, the custom format was never used, and the access logs simply had the literal string “vhost” as their format, and that is it. Brilliant example of “the problem is that which is between the keyboard and chair”, as I worded it circa 1999 (and others have put it other ways, and for longer than I have, for sure). And to continue sharing such a stupid blunder, I’ll point out that it had been this way for about 41 days and 3 hours. Yes, I noticed it, but I only noticed it in a (local) test log file (a test virtual host). Either way, it did not register as a problem (it should have, but it absolutely did not!). I have no idea why it didn’t, but it didn’t. True, I have had serious sleep issues, but that is irrelevant. The fact is: I made a huge mistake with the migration of configuration files. It was my own fault, I am 100% to blame, and there is nothing else to it. But this is indeed something to consider, because no one is perfect, and when there is a major restructure of a configuration directory (or any type of significant restructure) there are risks to keep in mind. That is just the nature of it: significant changes require getting accustomed to, and all it takes is being distracted, not thinking of one tiny thing, or not understanding something in particular, for a problem to occur. Thankfully, most of the time problems are noticed quickly and fixed quickly, too. But in this case I really screwed up and I am only thankful it wasn’t something more serious. Something to learn from, however, and that is exactly what I’ve done.
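For what it is worth, a couple of standard httpd commands would have made this easier to catch after the migration (nothing here is specific to my setup):

# /usr/sbin/apachectl configtest
# /usr/sbin/httpd -M | grep log_config

The first only checks syntax, so it would not have flagged a skipped <IfModule> block; the second lists the modules actually loaded (log_config_module should appear in the output). And a quick look at the access logs themselves, after any restructure like this, is clearly worth the minute it takes.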