Steve Gibson: Self-proclaimed Security Expert, King of Charlatans

One might think I have better things to worry about than writing about a known charlatan, but I have always been somewhat bemused by his idea of security (perhaps because he is clueless and his suggestions are unhelpful to those who believe him, which is a risk to everyone). More importantly, though, I want to dispel the mythical value of what he likes to call stealth ports (and, even more than that, the notion that anything which is not stealth is somehow a risk). I will not only tackle that, I will do it in what some might consider an immature way, and I admit that up front: I'm bored and I wanted to show just how useless his scans are by making a mockery of them. So while this may seem childish to some, I am merely having fun while writing about ONE of MANY flaws Steve Gibson is LITTERED with (I use the word littered figuratively and literally).

So let's begin, shall we? I'll go in the order of the pages you pass through to start his ShieldsUP! scan. On the first page I see the following:

Greetings!

Without your knowledge or explicit permission, the Windows networking technology which connects your computer to the Internet may be offering some or all of your computer’s data to the entire world at this very moment!

Greetings indeed. Firstly, I am very well aware of what my system reveals. I also know that this has nothing to do with permission (anyone who thinks they have a say in what their machine reveals when connecting to the Internet – or a phone to a phone network, or … – is very naive, and anyone suggesting that there IS permission involved is a complete fool). On the other hand, I was not aware I am running Windows. You cannot detect that, yet you scan ports, which would give you one way to determine the OS? Here's a funny part of that: since I run a passive fingerprinting service (p0f), MY SYSTEM determined your server's OS (well, technically the kernel, but all things considered that is the most important bit, isn't it? It is not 100% accurate, but that goes with fingerprinting in general, and I know that it DOES detect MY system correctly). So not only is MY host revealing information, YOURS is too. Ironic? Absolutely not! Amusing? Yes. And lastly, let's finish this part up: "all of your computer's data to the entire world at this very moment!" You know, if it were not for the fact that people believe you, that would be hilarious too. Let's break that into two parts. First, ALL of my computer's data? Really now? Anyone who can think rationally knows that this is nothing but sensationalism at best, but it is much more than that: it is you proclaiming to be an expert and then ABUSING that claim to MANIPULATE others into believing you (put another way: it is by no means revealing ALL data, not in the logical – data – sense or the physical – hardware – sense). And the entire world? So you're telling me that every single host on the Internet is analyzing my host at this very moment? If that were the case, my system's resources would be too exhausted to even connect to your website. Okay, context would suggest that you mean COULD, but frankly I have already covered why that is not the case (I challenge you to name the directory that is most often my current working directory, let alone know that said directory even exists on my system).

If you are using a personal firewall product which LOGS contacts by other systems, you should expect to see entries from this site’s probing IP addresses: 4.79.142.192 -thru- 4.79.142.207. Since we own this IP range, these packets will …

Well, technically, based on that range, your block is 4.79.142.192/28. And technically your block includes (a) the network address, (b) the broadcast address and (c), conventionally, a gateway taken from one of the usable addresses. That means the IPs doing the probing would be at most 4.79.142.193 through 4.79.142.206. And people really trust you? You don't even know basic networking and they trust you with security?
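To make the arithmetic concrete, here is a small C program of my own (nothing from GRC) that derives the network and broadcast addresses of that /28 and prints the first and last usable host addresses:

    /* A small sketch of my own (nothing from GRC): derive the network and
     * broadcast addresses of 4.79.142.192/28 to show which addresses are
     * actually usable. */
    #include <stdio.h>
    #include <stdint.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>

    int main(void)
    {
        struct in_addr base;
        char buf[INET_ADDRSTRLEN];

        if (inet_pton(AF_INET, "4.79.142.192", &base) != 1)
            return 1;

        unsigned prefix = 28;                           /* the /28 from the text */
        uint32_t mask = 0xffffffffU << (32 - prefix);   /* 255.255.255.240 */
        uint32_t net = ntohl(base.s_addr) & mask;       /* network address */
        uint32_t bcast = net | ~mask;                   /* broadcast address */

        struct in_addr a;
        a.s_addr = htonl(net);
        printf("network:    %s\n", inet_ntop(AF_INET, &a, buf, sizeof buf));
        a.s_addr = htonl(net + 1);
        printf("first host: %s\n", inet_ntop(AF_INET, &a, buf, sizeof buf));
        a.s_addr = htonl(bcast - 1);
        printf("last host:  %s\n", inet_ntop(AF_INET, &a, buf, sizeof buf));
        a.s_addr = htonl(bcast);
        printf("broadcast:  %s\n", inet_ntop(AF_INET, &a, buf, sizeof buf));
        return 0;
    }

Compiled and run, it prints 4.79.142.192 and 4.79.142.207 as the network and broadcast addresses and .193/.206 as the first and last usable hosts, which is exactly the range above.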

Your Internet connection’s IP address is uniquely associated with the following “machine name”:

wolfenstein.xexyl.net

Technically that is the FQDN (fully-qualified domain name[1]), not “machine name” as you put it. You continue in this paragraph:

The string of text above is known as your Internet connection’s “reverse DNS.” The end of the string is probably a domain name related to your ISP. This will be common to all customers of this ISP. But the beginning of the string uniquely identifies your Internet connection. The question is: Is the beginning of the string an “account ID” that is uniquely and permanently tied to you, or is it merely related to your current public IP address and thus subject to change?

Again, your terminology is rather mixed up. While it is true that you did a reverse lookup on my IP (a PTR query), calling the result my "reverse DNS" isn't exactly right. But since you are trying to simplify it (read: dumb it down to your level) for others, and since I know I can be seriously pedantic, I'll let it slide. It has nothing to do with my Internet connection itself (I have exactly one), though. It has to do with my IP address, of which I have many (many if you count my IPv6 block, but only 5 if you count IPv4). You don't exactly have the same FQDN on more than one machine any more than you have the same IP on more than one network interface (even on the same system). So no, it is NOT my Internet connection but THE specific host that went to your website, and in particular the IP assigned to that host, that I connected from. And the "string" has nothing to do with an "account ID" either. But I'll get back to that in a minute.

The concern is that any web site can easily retrieve this unique “machine name” (just as we have) whenever you visit. It may be used to uniquely identify you on the Internet. In that way it’s like a “supercookie” over which you have no control. You can not disable, delete, or change it. Due to the rapid erosion of online privacy, and the diminishing respect for the sanctity of the user, we wanted to make you aware of this possibility. Note also that reverse DNS may disclose your geographic location.

I can actually request a different block from my ISP and I can also change the IP on my network card. Then the only thing left is the old IP and its FQDN (which is no longer in use, and I can change the FQDN as well since I have reverse delegation, yet according to you I cannot do any of that). I love your ridiculous terminology though. Supercookie? Whatever. As for it giving away my geographic location, let me make something very clear: the FQDN is irrelevant without the IP address. While it is true that the name will (sometimes) refer to a city, it isn't necessarily the same city or even county as the person USING it. The IP address is related to the network; the hostname is a CONVENIENCE for humans. You know, it used to be that host -> IP mapping was done without DNS (since it didn't exist) via a hosts file that maintained the mapping (and that file is still used, albeit very little). The reason DNS exists is convenience, and in general because no one could know the IP of every domain name. Lastly, not all IPs resolve into a name.
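For the curious, here is roughly what a web site has to do to get that "machine name": a PTR lookup on the connecting address. This is a minimal sketch of my own using the standard getnameinfo() call; the address is simply the one quoted in the report further down, so substitute whatever you like:

    /* A minimal sketch of my own showing what a site actually does to get the
     * "machine name": a PTR lookup on the connecting address via getnameinfo().
     * The address below is just the one quoted in the report; use any you like. */
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <netdb.h>

    int main(void)
    {
        struct sockaddr_in sa;
        memset(&sa, 0, sizeof sa);
        sa.sin_family = AF_INET;
        inet_pton(AF_INET, "23.120.238.106", &sa.sin_addr);

        char host[1025];    /* plenty of room for any host name */
        int rc = getnameinfo((struct sockaddr *)&sa, sizeof sa,
                             host, sizeof host, NULL, 0, NI_NAMEREQD);
        if (rc != 0) {
            /* no PTR record: there simply is no name to show */
            fprintf(stderr, "no reverse mapping: %s\n", gai_strerror(rc));
            return 1;
        }
        printf("PTR result: %s\n", host);
        return 0;
    }

If there is no PTR record, the NI_NAMEREQD flag makes the call fail, which is exactly the "not all IPs resolve into a name" case above.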

If the machine name shown above is only a version of the IP address, then there is less cause for concern because the name will change as, when, and if your Internet IP changes. But if the machine name is a fixed account ID assigned by your ISP, as is often the case, then it will follow you and not change when your IP address does change. It can be used to persistently identify you as long as you use this ISP.

The occasions it resembles the IP are when the ISP has authority over the in-addr.arpa DNS zone for (your) IP and therefore has its own 'default' PTR record (but there is not always a PTR record, which your suggestion does not account for; indeed, I could have removed the PTR record for my IP and then you'd have seen no hostname at all). But this does not indicate whether the address is static or not. Indeed, even dynamic IPs typically (not always) have a PTR record. Again, the name does not necessarily imply static: it is the IP that matters. And welcome to yesteryear… these days you typically pay extra for static IPs, yet you suggest it is quite common that your "machine name is a fixed account ID" (which is itself a complete misuse of terminology). On the other hand, you're right about one thing: the name won't change when your IP address changes, because it is the IP that is relevant, not the hostname! And if your IP changes then it isn't so persistent in identifying you, is it? It might identify your location, but as multiple (dynamic) IPs and not a single IP.

There is no standard governing the format of these machine names, so this is not something we can automatically determine for you. If several of the numbers from your current IP address (23.120.238.106) appear in the machine name, then it is likely that the name is only related to the IP address and not to you.

Except that ISP authentication logs and timestamps can determine that anyway… And I repeat the above: the name can include numbers from the IP exactly as you suggest and still be static!

But you may wish to make a note of the machine name shown above and check back from time to time to see whether the name follows any changes to your IP address, or whether it, instead, follows you.

Thanks for the suggestion but I think I’m fine since I’m the one that named it.

Now, let’s get to the last bit of the ShieldsUP! nonsense.

GRC Port Authority Report created on UTC: 2014-07-16 at 13:20:16

Results from scan of ports: 0-1055

    0 Ports Open
   72 Ports Closed
  984 Ports Stealth
---------------------
 1056 Ports Tested

NO PORTS were found to be OPEN.

Ports found to be CLOSED were: 0, 1, 2, 3, 4, 5, 6, 36, 37,
64, 66, 96, 97, 128, 159, 160,
189, 190, 219, 220, 249, 250,
279, 280, 306, 311, 340, 341,
369, 371, 399, 400, 429, 430,
460, 461, 490, 491, 520, 521,
550, 551, 581, 582, 608, 612,
641, 642, 672, 673, 734, 735,
765, 766, 795, 796, 825, 826,
855, 856, 884, 885, 915, 916,
945, 946, 975, 976, 1005, 1006,
1035, 1036

Other than what is listed above, all ports are STEALTH.

TruStealth: FAILED – NOT all tested ports were STEALTH,
- NO unsolicited packets were received,
- A PING REPLY (ICMP Echo) WAS RECEIVED.

The ports you detected as "CLOSED" and not "STEALTH" were in fact returning an ICMP host-unreachable. You fail to take into account the golden rule of firewalls: that which is not explicitly permitted is forbidden. That means that even though I have no service running on any of those ports, I still reject packets sent to them. Incidentally, some ports you declared as "STEALTH" did exactly the same (because I only allow those ports from a specific IP block as the source network). The only time I drop packets on the floor is when state checks fail (e.g., a TCP SYN flag is set on what is already a known connection). I could prove that too: I had you run the scan a second time, but with specific iptables rules added for your IP block, which changed the results quite a bit, and indeed I used the same ICMP error code.
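To show what the difference actually looks like from the scanner's side, here is a small C sketch of my own (nothing to do with GRC's scanner; the target address and port are simply whatever you pass on the command line) that probes one TCP port and classifies the outcome the way these scans do:

    /* A sketch of my own (nothing to do with GRC's scanner): probe one TCP port
     * and classify the result the way these scans do.  Target address and port
     * are whatever you pass on the command line. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <fcntl.h>
    #include <poll.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    int main(int argc, char **argv)
    {
        if (argc != 3) {
            fprintf(stderr, "usage: %s <ipv4-address> <port>\n", argv[0]);
            return 1;
        }

        struct sockaddr_in sa;
        memset(&sa, 0, sizeof sa);
        sa.sin_family = AF_INET;
        sa.sin_port = htons((unsigned short)atoi(argv[2]));
        if (inet_pton(AF_INET, argv[1], &sa.sin_addr) != 1) {
            fprintf(stderr, "bad address\n");
            return 1;
        }

        int s = socket(AF_INET, SOCK_STREAM, 0);
        if (s < 0) {
            perror("socket");
            return 1;
        }

        /* non-blocking connect so we can give up after a few seconds */
        fcntl(s, F_SETFL, O_NONBLOCK);

        if (connect(s, (struct sockaddr *)&sa, sizeof sa) == 0) {
            puts("open (connection accepted)");
            return 0;
        }
        if (errno != EINPROGRESS) {
            perror("connect");
            return 1;
        }

        struct pollfd pfd = { .fd = s, .events = POLLOUT };
        if (poll(&pfd, 1, 5000) == 0) {
            /* no RST, no ICMP error: the probe was silently dropped */
            puts("filtered (what GRC calls \"stealth\")");
            return 0;
        }

        int err = 0;
        socklen_t len = sizeof err;
        getsockopt(s, SOL_SOCKET, SO_ERROR, &err, &len);
        if (err == 0)
            puts("open (connection accepted)");
        else if (err == ECONNREFUSED || err == EHOSTUNREACH)
            /* a RST or an ICMP unreachable: the host answered, i.e. "closed" */
            printf("closed (%s)\n", strerror(err));
        else
            printf("error: %s\n", strerror(err));

        close(s);
        return 0;
    }

A REJECT rule (RST or ICMP unreachable) produces the "closed" answer almost instantly; a DROP rule produces nothing but silence until the timeout, which is all that "stealth" really is.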

As for ping: administrators who block ping outright need to be hit over the head with a pot of sense. Rate limit by all means, that is more than understandable, but blocking ICMP echo requests (and indeed replies) only makes troubleshooting network connectivity issues more of a hassle while doing absolutely nothing for security (fragmented packets and anything else that can be abused are obviously dealt with differently, because they are different!). Indeed, if someone is going to attack, they don't really care whether you respond to ICMP echo requests. If there is a vulnerability they will go after that, and frankly hiding behind your "stealth" ports is only a false sense of security and/or security through obscurity (which is a false sense of security and even more harmful at the same time). Here are two examples. First, if someone sends you a link (in email, say) that seems legitimate and you click on it (there's a lot of this in recent years and it is ever increasing), the fact that you have no services running does not mean you are somehow immune to XSS, phishing attacks, malware, or anything else. Security is, always has been and always will be a many layered thing. Second: social engineering.

And with that, I want to finish with the following:

If anyone wants the REAL story about Steve Gibson, you need only Google for "steve gibson charlatan" and see the many results. I can vouch for some of them but there really is no need – the evidence is so overwhelming that it doesn't need any more validation. Here's a good one, though (which also shows his ignorance as well as how credible his proclamations are): http://www.theregister.co.uk/2002/02/25/steve_gibson_invents_broken_syncookies/. If you want a list of items, check the search result that refers to Attrition.org and you will see just how credible he is NOT. A good example is the one that links to a page about Gibson and XSS flaws, which itself links to http://seclists.org/vuln-dev/2002/May/25, which offers a great amount of amusement (note that some of the links there are no longer valid, as it was years ago, but that is the page at seclists.org and not the only incident).

[1] In this case. What he is referring to is taking the IP address and resolving it to a name (which is querying the PTR record as I refer to above). Since I have reverse delegation (so have authority) and have my own domain (which I also have authority of) I therefore can have my IPs resolve to fully-qualified domain names as such. In other words: it is true that the name above is a FQDN but it isn’t necessarily always the case (depending on how you want to interpret fully-qualified). Just as a clarification.

Death Valley, California, Safety Tips and Harry Potter

I guess this might be the most bizarre title for a post yet, but it is a take on real life and fantasy, particularly the Harry Potter series. I am implying two things with 'real life', and I will get to the Harry Potter part later. While the prompt is a specific tragedy in Death Valley, it is not an uncommon event, and since I have many fond memories of Death Valley (and know the risks), I want to reflect on it all (because indeed fantasy is very much part of me, perhaps too much so).

For the first ten years of my life (approximate) I visited Death Valley each year, in November. It is a beautiful place with many wonderful sights. I have many fond memories of playing on the old kind of Tonka trucks (which is a very good example of “they don’t make [it] like they used to” as nowadays it is made out of plastic and what I’m about to describe would be impossible). My brother and I would take a quick climb up the hill right behind our tent, get on our Tonka trucks (each our own) and ride down, crashing or not, but having a lot of fun regardless. I remember the amazing sand dunes with the wind blowing like it tends to in a desert. I remember being fortunate enough that there was a ghost town with a person living there who could supply me with electricity for my nebulizer for an asthma attack (and fortunate enough to see many ghost towns from where miners in the California Gold Rush would have resided). I remember, absolutely, Furnace Creek with the visitor centre and how nice everyone was there. I even remember the garbage truck driver who let my brother and me activate the mechanism to pick up the bin. I remember the many rides on family friends’ dune buggies. The amazing hikes in the many canyons is probably a highlight (but certainly not the only highlight). Then there is Scotty’s Castle (they had a haunted house during Halloween if I recall). There is actually an underground river (which is an inspiration to another work I did but that is another story entirely). They have a swimming pool that is naturally warm. I remember all these things and more even if most of it is vague. It truly is a wonderful place.

Unfortunately, the vast area, which spans more than 3,373,000 acres (according to Wiki, which I seem to remember is about right – I'm sure the official Death Valley site would have more on this), combined with the very fact it is the hottest place on Earth (despite some claims; I am referring to officially acknowledged records) at 134.6 F / 57 C, makes it a place you must take seriously. That record was, ironically enough, set this very month in 1913, on July 10 (again according to Wiki, but from memory other sources also have it in the early 1900s). This is an important bit (the day of the month in particular) for when I get to fantasy, by the way. Interestingly, the area I live in has a higher record for December and January than Death Valley by a few degrees (Death Valley: December and January at 89 F / 32 C; my location I know I have seen on the thermostat at least 95 F / 35 C for both months, although it could have been higher too). Regardless, Death Valley's overall record is about 10 C higher (my location's record: 47 C / 116.6 F; Death Valley as above). And if you think of the size (as listed above) and that much of it is unknown territory for all but seasoned campers (a category my family fits), you have to be prepared.

Make no mistake, people: Death Valley, and deserts in general, can be very, very dangerous. Always make sure you keep yourself hydrated. What is hydration though, for humans? It is keeping your electrolytes at a balanced level. This means that too much water is as dangerous as too little water. As a general rule of thumb that was given to me by the RN (registered nurse) for a hematologist I had (another story entirely, as is why I had one): if you are thirsty, you waited too long. Furthermore, for Death Valley (for example) make sure you either have a guide or you know your way around (and keep track – no matter how you do this – of where you go). That may include maps, a compass, landmarks, and any number of other techniques. But it is absolutely critical. I have time and again read articles on the BBC where someone (or some people) from the UK or parts of Europe were unprepared and were found dead. It is a wonderful place, but be prepared.

Although this should be obvious, it often isn't: Death Valley is better visited in the cooler months (close to Winter or even in Winter). I promise you this: it won't be cold by any means. Even if you are used to blizzards in your area, you will still have plenty of heat year round in Death Valley. I should actually restate that slightly, thinking about a specific risk (and possibility): deserts can drop to freezing temperatures! It is rare, yes, but when it happens it still will be cold. Furthermore, deserts can see lots of rain, even flash floods! Yes, I've experienced this exactly. As for risks, if it looks cloudy (or if you have a sense of smell like mine where you can smell rain that is about to drop, and no that is not an exaggeration – my sense of smell is incredibly strong) or there is a drizzle (or otherwise light rain) or more than that, do not even think about hiking the canyons! It is incredibly dangerous to attempt it! This cannot be stressed enough. As for deserts and freezing temperatures, I live in a desert (most of Southern California is a desert) and, while it was over 22 years ago (approximately), we have seen snow in our yard. So desert does not mean no rain or no snow. I've seen people write about hot and dry climates and deserts (comparing the two) but that is exactly what a desert is: a hot and dry climate!
But climate does not by any means somehow restrict what can or cannot happen. Just like Europe can see mid 30s (centigrade) so too can deserts see less than zero. And all this brings me to the last part: fantasy.

One of my favourite genres (reading – I rarely watch TV or films) is fantasy. While it is not the only series I have read, the Harry Potter series is the one I am referring to in particular, as I already highlighted. Indeed, everything in Harry Potter has a reason, has a purpose and in general will be part of the entire story! That is how good it is and that is how much I enjoyed it (I also love puzzles, so putting things together, or rather the need to do that, was a very nice treat indeed). I'm thankful for a friend who finally got me to read it (I actually had the books but never got around to reading the ones that were out, which would be up to and including book 3, Harry Potter and the Prisoner of Azkaban). The last two books I read the day they came out, in full, with hours to spare. Well, why on Earth would I be writing about fantasy, specifically Harry Potter, and Death Valley, together? I just read on the BBC that Harry Potter actor Dave Legeno has been found dead in Death Valley. He played the werewolf Fenrir Greyback. I will note the irony that today, the 12th of July, is a full moon this year. I will also readily admit that in fantasy, not counting races by themselves (e.g., Elves, Dwarves, Trolls, …), werewolves are my favourite type of creature. I find the idea fascinating and there is a large part of me that wishes they were real. (As for my favourite race, it would likely be Elves.) I didn't know the actor, of course, but the very fact he was British makes me think he too fell to the – if you will excuse the pun, which is by no means meant to be offensive to his family or anyone else – fantasy of experiencing Death Valley, and unfortunately it was fatal. And remember I specifically wrote 1913, July 10 as the record temperature for Death Valley? Well, I did mean it when I wrote that it has significance here: he was found dead on July 11 of this year. Whether that means he died on the 11th is not exactly known yet (it is indeed a very large expanse, and it is only because hikers found him that it is known at all), but that it was one day off is ironic indeed. It is completely possible he died on the 10th and it is also possible it was days before, or even on the 11th. This is one of those things that will be known after an autopsy occurs, as well as backtracking (by witnesses and other evidence), and not until then. Until then, it is anyone's guess (and merely speculation). Regardless, it is another person who was unaware of the risks, of which there are many (depending on where in Death Valley you might be in a vehicle; what happens if you run out of fuel and only have enough water for three days? There are so many scenarios but they are far too often not thought of, or simply neglected). Two other critical bits of advice: don't ignore the signs left all around the park (giving warnings) and always, without fail, tell someone where you will be! If someone had known where he was and approximately when he should be back (which should always be part of telling someone else where you'll be), they could have gone looking for him. This piece of advice, I might add, goes for hiking, canoeing and anything else (outside of Death Valley too; this is a general rule), especially if you are alone (but truthfully – and I get the impression he WAS alone – you should not be alone in a place as large as Death Valley, because there are many places to fall, there are animals that could harm you, and instead of having a story to bring home you risk not coming home at all).
There are just so many risks, so always be aware of that and prepare ahead of time. Regardless, I want to thank Dave for playing Fenrir Greyback. I don't know if you played in any other films and I do not know anything about you or your past, but I wish you had known the risks beforehand, and my condolences (for whatever they can be and whatever they are worth) to your friends and family. I know that most will find this post out of character (again, if you will excuse the indeed intended pun) for what I typically write about, but fantasy is something I am very fond of, and I have fond memories of Death Valley as well.

“I ‘Told’ You So!”

Update on 2014/06/25: Added a word that makes something more clear (specifically the pilots were not BEING responsible but I wrote “were not responsible”).

I was just checking the BBC live news feed I have in my bookmark bar in Firefox and I noticed something of interest. What is that? How automated vehicle systems (whether controlled by humans or not, they are still created by humans, and automation itself has its own flaws) are indeed dangerous. Now why is that interesting to me? Because I have written about this before in more than one way! So let us break this article down a bit:

The crew of the Asiana flight that crashed in San Francisco “over-relied on automated systems” the head of the US transport safety agency has said.

How many times have I written about things being dumbed down to the point where people are unable – or refuse – to think and act according to X, Y and Z? I know it has been more than once but apparently it was not enough! Actually, I would rather state it this way: apparently not enough people are thinking at all. That is certainly a concern to any rational being. Or it should be.

Chris Hart, acting chairman of the National Transportation Safety Board (NTSB), said such systems were allowing serious errors to occur.

Clearly. As the title suggests: I ‘told’ you so!

The NTSB said the 6 July 2013 crash, which killed three, was caused by pilot mismanagement of the plane’s descent.

Again: relying on "smart" technology is relying on the intelligence of its designer and its user (which doesn't leave much room, does it?). But actually, in this case it is even worse. The reasons: First, they are endangering others' lives (and three died – is that enough yet?). Second is the fact that they are operating machinery, not using a stupid phone (which is what a "smart" phone is). I specifically wrote about emergency vehicles and this very issue, and here we are, where exactly that situation arises: there are events that absolutely cannot be accounted for automatically and which require that a person is paying attention and using the tool responsibly!

During the meeting on Tuesday, Mr Hart said the Asiana crew did not fully understand the automated systems on the Boeing 777, but the issues they encountered were not unique.

This is also called “dumbing the system down” isn’t it? Yes, because when you are no longer required to think and know how something works, you cannot fix problems!

“In their efforts to compensate for the unreliability of human performance, the designers of automated control systems have unwittingly created opportunities for new error types that can be even more serious than those they were seeking to avoid,” Mr Hart said.

Much like what I wrote about all of the following (and then some): computer security, computer problems, emergency vehicles and automated vehicles in general. This is another example.

The South Korea-based airline said those flying the plane reasonably believed the automatic throttle would keep the plane flying fast enough to land safely.

Making assumptions at the risk of others' lives is irresponsible and frankly reprehensible! I would argue it is potentially – and in this case, is – murderous!

But that feature was shut off after a pilot idled it to correct an unexplained climb earlier in the landing.

Does all of this start to make sense? No? It should. Look at what the pilot did. Why? A stupid mistake, or did an evil gremlin take over him momentarily? Maybe the gremlin IS their stupidity.

The airline argued the automated system should have been designed so that the auto throttle would maintain the proper speed after the pilot put it in “hold mode”.

They should rather be saying sorry and then some. They should also be taking care of the mistake THEY made (at least as much as they can; they already killed – and yes, that is the proper way of wording it – three people)!

Boeing has been warned about this feature by US and European airline regulators.

The blame shouldn't be placed on Boeing if they weren't actually negligent; they are doing what it seems everyone wants: automation. Is that such a good idea? As I have pointed out many times: no. Let me reword that a bit. Is Honda responsible for a drunk getting behind the wheel and then killing a family of five, four, three, two or even one person (themselves included – realistically the drunk would be the only one who is not innocent!)? No? Then why the hell should Boeing be blamed for a pilot misusing the equipment? The pilot is not being responsible, and the reason (and the way) the pilot is not being responsible is irrelevant!

“Asiana has a point, but this is not the first time it has happened,” John Cox, an aviation safety consultant, told the Associated Press news agency.

It won’t be the last, either. Mark my words. I wish I was wrong but until people wake up it won’t be fixed (that isn’t even including the planes already in commission).

“Any of these highly automated airplanes have these conditions that require special training and pilot awareness. … This is something that has been known for many years.”

And neglected. Why? Here I go again: the attitude is that it is so dumbed down, so automatic, that the burden shouldn't be placed on the operators! Well, guess what? Life isn't fair. Maybe you didn't notice that or you like to ignore the bad parts of life, but the fact remains life isn't fair, and they (the pilots and the airline in general) are playing the pathetic blame game (which really is saying "I'm too immature and irresponsible, and not only that, I cannot dare admit that I am not perfect. Because of that it HAS to be someone else who is at fault!").

Among the recommendations the NTSB made in its report:

  • The Federal Aviation Administration should require Boeing to develop “enhanced” training for automated systems, including editing the training manual to adequately describe the auto-throttle programme.
  • Asiana should change its automated flying policy to include more manual flight both in training and during normal operations
  • Boeing should develop a change to its automatic flight control systems to make sure the plane “energy state” remains at or above minimum level needed to stay aloft during the entire flight.

My rebuttal to the three points:

  • They should actually insist upon “improving” the fully automated system (like scrapping the idea). True, this wasn’t completely automated but it seems that many want that (Google self driving cars, anyone?). Because let’s all be real, are they of use here? No, they are not. They’re killing – scrap that, murdering! – people. And that is how it always will be! There is never enough training. There is always the need to stay in the loop. The same applies to medicine, science, security (computer, network and otherwise), and pretty much everything in life!
  • Great idea. A bit late of them though, isn't it? In fact, a bit late of all airlines that rely on such a stupid design!
  • Well they could always improve but the same thing can be said for cars, computers, medicinal science, other science, and here we go again: everything in this world! But bottom line is this: it is not at all Boeing’s fault. They’re doing what everyone seems to want.

And people STILL want flying cars? Really? How can anyone be THAT stupid? While I don’t find it hard to believe such people exist, I still find it shocking. To close this, I’ll make a few final remarks:

This might be the wrong time, according to some, since it was just reported. But it is not! If now is not the right time, then when? This same thing happens with everything of this nature! Humans always wait until a disaster (natural or man made) happens before doing something. And then they pretend (lying about it in the process) to be better, but what happens next? They do the same thing all over again. And guess what also happens at that time? The same damned discussions (that I dissected, above) occur! Here's a computer security example: I've lost count of the number of times NASA has suggested they would be improving policies on their network, and I have also lost count of the times they then went on to LATER be compromised AGAIN with the SAME or an EQUALLY stupid CAUSE! Why is this? Irresponsibility and complete and utter stupidity. Not to mention the fact that the only thing we learn from history is that – and yes, the pun is most definitely intended – we do not learn a bloody thing from history! And that is because of stupidity and irresponsibility.

Make no mistake, people:

  1. This will continue happening until humans wake up (which I fear that since even in 2014 ‘they’ have not woken up, they never will!).
  2. I told you so, I was right then and I am still right!
  3. Not only did I tell you so about computer security (in the context of automation) I also told you about real life incidents, including emergencies. And I was right then and I am still right!

Hurts? Well, sometimes that's the best way. Build some pain threshold, as you'll certainly need it. If only it were everyone's head at risk, because they're so thick that they'd survive! Instead we are all at risk because of others (including ourselves, our families, everyone's families, et al.). Even those like me who point this out time and again are at risk (because they are either forced into using the automation or they are surrounded by drones – any pun is much intended here, as well – who willingly use their "smart" everything… smart everything except their brain, that is!).

SELinux, Security and Irony Involved

I've thought of this in the past, and I've been trying to do more things (than usual) to keep myself busy (there are too few things that I spend time doing, more often than not), so I thought I would finally get to this. When it comes to SELinux, there are two schools of thought:

  1. Enable and learn it.
  2. Disable it; it isn’t worth the trouble.

There is also a combination of the two: put it in permissive mode so you can at least see alerts (much like logs might be used). But for the purpose of this post, I'm going to only include the mainstream thoughts (so 1 and 2, above). Before that, though, I want to point something out. It is indeed true I put this in the security category, but there is a bit more to it than that, as those who read to the end will find out (anyone who knows me – and yes, this is a hint – will know I am referring to irony, as the title suggests). I am not going to give a suggestion on the debate over SELinux (and that is what it is – a debate). I don't enjoy, and there is no use in, endless debates about what is good, what is bad, what should be done, whether or not to do something, or even debating about debating (and yes, the latter two DO happen – I've been involved in a project that had this, and I stayed out of it and did what I knew to be best for the project overall). That is all a waste of time and I'll leave it to those who enjoy that kind of thing. Indeed, the two schools of thought involve quite a bit of emotion (something I try to avoid, even) – people are so passionate about it, so involved in it, that it really doesn't matter what is suggested from the other side. It is taking "we all see and hear what we want to see and hear" to the extreme. This is all the more likely when you have two sides. It doesn't matter what they are debating and it doesn't matter how knowledgeable they are or are not, and nothing else matters either: they believe their side's purpose so strongly, so passionately, that much of it is mindless and futile (neither side sees anything but its own view). Now then…
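As an aside, a program can ask which of those modes it is running under. Here is a minimal sketch, assuming libselinux and its headers are installed and that you link with -lselinux (the is_selinux_enabled() and security_getenforce() calls come from that library):

    /* A minimal sketch, assuming libselinux and its headers are installed and
     * that you link with -lselinux: report which of the modes discussed above
     * the system is currently in. */
    #include <stdio.h>
    #include <selinux/selinux.h>

    int main(void)
    {
        if (!is_selinux_enabled()) {
            puts("SELinux is disabled (school of thought #2)");
            return 0;
        }

        /* security_getenforce() returns 1 for enforcing, 0 for permissive */
        switch (security_getenforce()) {
        case 1:
            puts("SELinux is enforcing (school of thought #1)");
            break;
        case 0:
            puts("SELinux is permissive (alerts only, the middle ground)");
            break;
        default:
            fprintf(stderr, "could not determine the current mode\n");
            return 1;
        }
        return 0;
    }

Whichever school you belong to, at least finding out where you stand is cheap.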

Let’s start with the first. Yes, security is indeed – as I’ve written about before – layered and multiple layers it always has been and always should be. And indeed there are things SELinux can protect against. On the other hand, security has to have a balance or else there is even less security (password aging + many different accounts so different passwords + password requirements/restrictions = a recipe for disaster). In fact, it is a false sense of security and that is a very bad thing. So let’s get to point two. Yes, that’s all I’m going to write on the first point. As I already wrote, there isn’t much to it notwithstanding endless debates: it has pros and it has cons, that’s all there is to it.

Then there's the school of thought that SELinux is not worth the time and so should just be disabled. I know what they mean, and not only with the labelling of the file systems (I wrote before about how SELinux itself has issues at times all because of labels, so that you have to have it relabel to fix itself). That labelling issue is bad enough by itself, but then consider how it affects maintenance (even worse for administrators who maintain many systems). For instance, new directories added to an Apache configuration need the right labels or access will be denied, as one example. Yes, part of this is laziness, but again there's a balance. While this machine (the one I write from, not xexyl.net) does not use it, I still practice safe computing, I only deal with software in the main repositories, and in general I follow exactly what I preach: multiple layers of security. And finally, to end this post, we get to some irony. Those who know me well enough will also know very well that I absolutely love irony, sarcasm, satire, puns and wit in general. So here it is – and as a warning, this is very potent irony, so for those who don't know what I'm about to write, prepare yourselves:

You know all that talk about the NSA and its spying (nothing alarming, mind you, nor anything new… they’ve been this way a long long time and which country doesn’t have a spy network anyway? Be honest!), it supposedly placing backdoors in software and even deliberately weakening portions of encryption schemes? Yeah, that agency that there’s been fuss about ever since Snowden started releasing the information last year. Guess who is involved with SELinux? Exactly folks: the NSA is very much part of SELinux. In fact, the credit to SELinux belongs to the NSA. I’m not at all suggesting they tampered with any of it, mind you. I don’t know and as I already pointed out I don’t care to debate or throw around conspiracy theories. It is all a waste of time and I’m not about to libel the NSA (or anyone, any company or anything) about anything (not only is it illegal it is pathetic and simply unethical, none of which is appealing to me), directly or indirectly. All I’m doing is pointing out the irony to those that forget (or never knew of) SELinux in its infancy and linking it with the heated discussion about the NSA of today. So which is it? Should you use SELinux or not? That’s for each administrator to decide but I think actually the real thing I don’t understand (but do ponder about) is: where do people come up with the energy, motivation and time to bicker about the most unimportant, futile things? And more than that, why do they bother? I guess we’re all guilty to some extent but some take it too far too often.

As a quick clarification, primarily for those who might misunderstand what I find ironic in the situation: the irony isn't that the NSA is part of (or was, if not still) SELinux. It isn't anything like that. What IS ironic is that many are beyond surprised (I still don't know why they are, but then again I know the NSA's history) at the revelations about the NSA and/or fearful of its actions, and I would assume that some of those people are the same people who encourage use of SELinux. Whether that is the case, and whether it did or did not change their views, I obviously cannot know. So, put simply, the irony is that many have faith in SELinux, which the NSA was (or is) an essential part of, while there is now much furor about the NSA after the revelations ("revelations" is how I would put it, at least for a decent amount of it).

Fully-automated ‘Security’ is Dangerous

Thought of a better name on 2014/06/11. Still leaving the aside, below, as much of it is still relevant.

(As a brief aside, before I get to the point: this could probably be better named. By a fair bit, even. The main idea is that security is a many layered concept involving computers – and their software – as well as people, not either/or, and in fact it might involve multiples of each kind. Indeed, humans are the weakest link in the chain, but as an interesting paradox, humans are still a very necessary part of the chain. Also, while it may seem I'm being critical in much of this, I am actually leading to much less criticism, giving the said organisation the benefit of the doubt, getting to the entire point and even wishing the entire event success!)

In our ever 'connected' world it appears – at least to me – that there is much more discussion about automatically solving problems without any human interaction (I'm not referring to things like calculating more digits of Pi, solving puzzles or mazes or anything like that; that IS interesting and that is indeed useful, including to security, even if indirectly, and yes, this is about security, but security on the whole). I find this ironic, and in a potentially dangerous way. Why are we having everything connected if we are to detach ourselves from the devices (or, in some cases, become so attached to the device that we are detached from the entire world)? (Besides brief examples, I'll ignore the part where so many are so attached to their bloody phone – which is, as noted, the same thing as being detached from the world despite the idea that they are ever more attached, or perhaps better stated, 'connected' – that they walk into walls, into people – like someone did to me the other day, same cause – and even walk off a pier in Australia while checking Facebook! Why would I ignore that? Because I find it so pathetic yet so funny that I hope more people do stupid things like that, things I absolutely will laugh at as should be done, as long as they are not risking others' lives [including rescuers' lives, mind you; that's the only potential problem with the last example: it could have been worse, and due to some klutz the emergency crew could be at risk instead of taking care of someone else]. After all, those who are going to do it don't get the problem, so I may as well get a laugh at their idiocy, just like everyone should. Laughing is healthy. Besides those points it is irrelevant to this post.) Of course, the idea of having everything connected also brings the thought of automation. Well, that's a problem for many things, including security.

I just read that DARPA (the agency that created ARPANET, you know, the predecessor to the Internet – and ARPA is still referred to in DNS, for example in in-addr.arpa) is running a competition, described as follows:

“Over the next two years, innovators worldwide are invited to answer the call of Cyber Grand Challenge. In 2016, DARPA will hold the world’s first all-computer Capture the Flag tournament live on stage co-located with the DEF CON Conference in Las Vegas where automated systems may take the first steps towards a defensible, connected future.”

Now, first, a disclaimer of sorts. I have had (and still have) friends who have been to (and continue to go to) DefCon – and I'm referring to the earlier years of DefCon as well – and not only did they go there, they bugged me relentlessly (you know who you are!) for years to go there too (which I always refused, much to their dismay, for multiple reasons, including the complete truth: the smoke there would kill me if nothing else did first). So I know very well that there are highly skilled individuals there. I have indirectly written about groups that go there, even. Yes, they're highly capable, and as the article I read about this competition points out, DefCon already has a capture the flag style tournament, and has for many years (and they really are skilled; I would suggest that, charlatans like Carolyn P. Meinel aside, many are much more skilled than I am, and I'm fine with that. It only makes sense anyway: they have a much less hectic life). Of course, the difference here is fully automated, without any human intervention. And that is a potentially dangerous thing. I would like to believe they (DARPA) would know better, seeing as how the Internet (and the predecessor thereof) was never designed with security in mind (and there is never enough foresight) – security as in computer security, anyway. The original reason for it was a network of networks capable of withstanding a nuclear attack. Yes, folks, the Cold War brought one good thing to the world: the Internet. Imagine that: paranoia would lead us to the wonderful Internet. Perhaps, though, it wasn't paranoia. It is quite hard to know, as after all a certain President of the United States of America considered the Soviet Union "the Evil Empire" and, as far as I know, wanted to delve further into that theme, which is not only sheer idiocy, it is complete lunacy (actually it is much worse than that)! To liken a country to that – it boggles the mind. Regardless, that's just why I view it as ironic (and would like to think they would know better). Nevertheless, I know that they (DARPA, that is) mean well (or so I hope). But there is still a dangerous thing here.

Here is the danger: by allowing a computer to operate on its own and assuming it will always work, you are essentially taking a great risk, and no one will forget what assuming does, either. I think that this is actually a bit understated, because you're relying on trust. And as anyone who has been in security for 15-20 years (or more) knows, trust is given far too easily. It is a major cause of security mishaps. People are too trusting. I know I've written about this before, but I'll just mention the names of the utilities (rsh, rcp, …) that were at one point the norm and briefly explain the problem: the configuration option – one that was often used! – which allowed logging in to a certain host WITHOUT a password from ANY IP, as long as you logged in as a CERTAIN login! And people have refuted this by using the logic of: they don't have a login with that name (and note: there is a system-wide configuration file for this, hosts.equiv, and also a per-user one, .rhosts, which makes it even more of a nightmare). Well, if it is their own system, or if they compromised it, guess what they can do? Exactly – create a login with that name. Now they're more or less a local user, which is so much closer to rooting (or, put another way, gaining complete control of) the system (which potentially allows further systems to be compromised).

So why is DARPA even considering fully automated intervention/protection? While I would like to claim that I am the first one to notice this (and more so to put it in similar words), I am not, but it is true: the only thing we learn from history is that we don't learn a damned thing from history (or we don't pay attention, which is even worse because it is flat out stupidity). The very fact that systems have been compromised by something that was ignored, not thought of beforehand (or thought of in a certain way – yes, different angles provide different insights), or by new innovations coming along to trample over what was once considered safe, is all that should be needed to understand this. But if not, perhaps this question will resonate better: does lacking encryption mean anything to you, your company, or anyone else? For instance, telnet, a service that allows authentication and isn't encrypted (logging in, as in sending login and password in the clear over the wire). If THAT was not foreseen, you can be sure that there will ALWAYS be something that cannot be predicted. Something I have experienced – as I am sure everyone has – is that things will go wrong when you least expect them to. Not only that, much like I wrote not too long ago, it is as if a curse has been cast on you and things start to come crashing down in a long sequence of horrible luck.

Make no mistake: I expect nothing but greatness from the folks at DefCon. However, there's never a fool-proof, 100% secure solution (and they know that!). The best you can do is always be on the lookout, always making sure things are OK, keeping up to date on new techniques, new vulnerabilities, and so on – so software in addition to humans! This is exactly why you cannot teach security; you can only learn it – by applying knowledge, thinking power and something else that schools cannot give you: real life experience. No matter how good someone is, there's going to be someone who can get the better of that person. I'm no different. Security websites have been compromised before and they will be in the future. Just like pretty much every other kind of site (example: for one of my other sites, before this website and when I wasn't hosting on my own, the host made a terrible blunder, one that compromised their entire network and put them out of business. But guess what? Indeed, the websites they hosted were defaced, including that other site of mine. And you know what? That's not exactly uncommon, for websites to be defaced in mass batches simply because the webhost had a server – or servers – compromised* and, well, the defacer had to make their point and their name). So while I know DefCon will deliver, I also know it to be a mistake for DARPA to think there will at some point be no need for human intervention (and I truly hope they actually mean it to be in addition to humans; I did not, after all, read the entire statement, but it makes for a topic to write about and that's all that matters). Well, there is one time this will happen: when either the Sun is dead (and so life here is dead) or humans obliterate each other, directly or indirectly. But computers will hardly care at that point. The best case scenario is that they can intervene in certain (and indeed perhaps many) attacks. But there will never be a 100% accurate way to do this. If there were, heuristics and the many other tricks that anti-virus products (and malware itself) deploy would be much more successful and have no need for updates. But has this happened? No. That's why it is a constant battle between malware writers and anti-malware writers: new techniques, new people in the field, things changing (or, more generally, technology evolving like it always will) and in general a volatile environment will always keep things interesting. Lastly, there is one other thing: humans are the weakest link in the security chain. That is a critical thing to remember.

*I have had my server scanned by different sites and they didn’t know they had a customer (or in some cases, a system – owned by a student, perhaps – at a school campus) that had their account compromised. Indeed, I tipped the webhosts (and schools) off that they had a rogue scanner trying to find vulnerable systems (all of which my server filtered but nothing is fool-proof, remember that). They were thankful, for the most part, that I informed them. But here’s the thing: they’re human and even though they are a company that should be looking out for that kind of thing, they aren’t perfect (because they are human). In other words: no site is 100% immune 100% of the time.

Good luck, however, to those in the competition. I would specifically mention particular (potential) participants (like the people who bugged me for years to go there!) but I'd rather not name them here, for specific (quite a few) reasons. Regardless, I do wish them all good luck (those I know and those I do not know). It WILL be interesting. But one can hope (and I want to believe they are keeping this in mind) that DARPA knows this is only ONE of the MANY LAYERS that make up security (security is, always has been and always will be, made up of multiple layers).

Implementing TCP Keepalive Via Socket Options in C

Update on 2014/06/08: Fixed an error with the IPv6 problem (one that I refer to but do not elaborate on much). Obviously an MTU of 14800 is not less than 1500 and, well, I won't go beyond that: I meant 1480 (although I found a reason for a different, lower MTU, but I don't remember the specifics and it is beside the point of TCP keepalives and manipulating them with the setsockopt call).

Update on 2014/05/21: I added the reference [1] that I forgot to add after suggesting that there would be a note on the specific topic (gateway in ISP terms versus the more general network gateway).

Important Update (fix) on 2014/05/13: I forgot a very important #include in the source file I link to at the end of the post. While I don't include (pardon the irony) the creation of the socket, and I don't have any of the source in a function, I DO use the proper #include files, because without those the functions that I call would not be declared, which would result in compiler errors. The problem is that, because I have #ifdef .. #endif blocks for the relevant socket options, without this file (which is now #include'd) they would silently be skipped. The file in question is netinet/tcp.h (relative to /usr/include/). Without that file included, the socket options would not be #define'd and therefore this post would be less than useful, in fact less than useless.

This will be fairly short as I am quite preoccupied and it is a fairly simple topic (which is actually the reason I'm able to discuss it). In recent times I noticed a couple of issues with a network connection on which I am often idle but which should not be dropped, as the application itself uses the setsockopt(2) call to enable TCP keepalives at the socket level prior to binding itself to its ports (and this option should be inherited by the connecting clients). While the clients did inherit this property, there was a problem, and it only showed itself over IPv4 (IPv6 had another problem, and that was resolved by changing the MTU to 1480, down from 1500, via my network configuration. It didn't always have this problem – serious latency, well over two minutes, when being sent a page worth of text – but I have this vague memory that my modem/router, in this context known as a "gateway"[1], used to have its MTU at 1480 but is now at 1500). While I initially did think of TCP keepalives, I did not actually think beyond the fact that the application enables SO_KEEPALIVE (at the SOL_SOCKET level) through the setsockopt call (which should have resolved the issue, in my thinking). But a friend suggested – after I mentioned that it happened within 1 to 2 hours of idleness – that it might be the actual time before the initial keepalive is sent, the number of probes to send if nothing is received from the other side, and how often to send the probes.

This thought was especially interesting because the initial keepalive is sent after (by default, under Linux) 7200 seconds (which is 2 hours). Since it usually took an hour before I noticed it (by actually having a reason to no longer be idle), it would stand to reason that the keepalive time was too high. So to test this, I initially set a much shorter keepalive time on both sides (server and client) via the sysctl command (the net.ipv4.tcp_keepalive_time setting). This did not seem to help, however. So, to really figure this out, I sniffed the traffic on both ends. This means I could see when the server sends a keepalive probe, when (or if) the client sends a response, and, if there is no response, I would see the next probe (presumably). It turns out that my end did receive the keepalive. However, it only received it one time. In other words, if I set the time to 10 minutes (600 seconds) and was idle for 10 minutes, I would receive a keepalive and respond. But 10 minutes later (so 20 minutes of being idle), the keepalive was NOT received. This is when I saw the further probes being sent by the server (none of which were received at the client end, and so the connection was considered 'dead' after a certain number of probes). Well, as it turns out, this can be remedied by taking advantage of setsockopt a bit further.

As far as I am aware, keepalive is not set by default on sockets (even TCP sockets), so you would in that case need to set that option first. Here is an example of how to set all the related options. Note that I was not interested in playing around with finding the optimal times for the server and client (actually, in this case it is for the client even though the server is the one that sets the socket options). Therefore, the times could potentially be higher than I set them to; this applies to all the keepalive values. Nevertheless, for my issue, after adding the last three calls to setsockopt, recompiling the server, restarting it and trying again, the problem was resolved (in fact I might not actually need two of the three additional calls, but again I was not wanting to play around with the settings for long). The first call turns on keepalive support and the following three set the options related to TCP keepalives, which I will comment on. This should be considered a mixture of pseudo-code and real code. That is, I am not including error reporting of any real degree, nor am I gracefully handling the errors. I'm also not including the creation of the socket or the other related things. This is strictly setting keepalive, printing a basic error (with the perror call) and exiting. Further, I'm not explaining how setsockopt works beyond what is in the file, and note that in the example the file descriptor referring to the socket is the variable 's' (which again is not being created for you).

The actual snippet can be found here.
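
If you do not want to follow the link, here is a minimal sketch along the same lines. The timing values below are arbitrary examples rather than recommendations, and, as noted, the socket descriptor 's' is assumed to already exist:

/* Minimal sketch: enable TCP keepalive on an existing socket 's'.
 * The timing values are arbitrary examples; the comments note the
 * sysctl knobs that provide the system-wide defaults these override. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/tcp.h>

static void enable_keepalive(int s)
{
    int on = 1;        /* turn keepalive on (SO_KEEPALIVE) */
    int idle = 600;    /* seconds idle before the first probe
                        * (default 7200; net.ipv4.tcp_keepalive_time) */
    int intvl = 60;    /* seconds between probes (net.ipv4.tcp_keepalive_intvl) */
    int cnt = 5;       /* probes before the connection is declared dead
                        * (net.ipv4.tcp_keepalive_probes) */

    if (setsockopt(s, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof(on)) == -1) {
        perror("setsockopt(SO_KEEPALIVE)");
        exit(EXIT_FAILURE);
    }
    if (setsockopt(s, IPPROTO_TCP, TCP_KEEPIDLE, &idle, sizeof(idle)) == -1) {
        perror("setsockopt(TCP_KEEPIDLE)");
        exit(EXIT_FAILURE);
    }
    if (setsockopt(s, IPPROTO_TCP, TCP_KEEPINTVL, &intvl, sizeof(intvl)) == -1) {
        perror("setsockopt(TCP_KEEPINTVL)");
        exit(EXIT_FAILURE);
    }
    if (setsockopt(s, IPPROTO_TCP, TCP_KEEPCNT, &cnt, sizeof(cnt)) == -1) {
        perror("setsockopt(TCP_KEEPCNT)");
        exit(EXIT_FAILURE);
    }
}

With values like these, an idle connection would see its first probe after 10 minutes rather than 2 hours, which is the sort of change that resolved the problem described above.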

[1]A gateway is actually much more than a router/modem combination. Indeed, there is the default gateway, which allows traffic destined for another network to actually get there (when there is no other gateway in the routing tables to route the traffic through). In general, a gateway is a router (the default gateway IS a gateway, just the one used when no more specific route matches). It can have other features along with it, but in the sense of ISPs a gateway is often a modem and router combination. The modem, however, in general serves a different purpose than a gateway, which is why I initially brought this up (but forgot to actually – if you will pardon the word play – address).

Programmers are Human Too

Yes, as of 2014/06/08 this has changed titles and is quite different. I think this is better overall because it is more to the point (that I was trying to get across). It was originally about the Heartbleed vulnerability in OpenSSL. I have some remarks about that, below, and then I will write about the new title. I could argue that this title is not even the best. Really it is about how things will never go exactly as planned, 100% of the time. That’s a universal truth. First, though, about OpenSSL.

I have, since the time of writing the original post (quite some time ago even), seen the actual source code and it was worse than I thought (there were absolutely no sanity checks, no checks at all, which is a ghastly error and very naive: you cannot ever be too careful, especially when dealing with input, whether from a file, a user or anything else). Of course, I noticed some days ago that more vulnerabilities were found in OpenSSL. The question is then: why do I tend to harp on open source being more secure, generally? Because generally it IS. The reason is the source exists on many people's computers, which means more can verify the source (both for security bugs – or any bugs, even – as well as whether or not it has been tampered with) and also many more people can view it and find errors (and the open source community is really good about fixing errors simply because they care about programming; it is a passion, and no programmer who is – if you will excuse the much intended word play, encryption and all – worth their salt will not be bothered by a bug in their software). True, others can find errors too, but that itself is good because let's be completely honest: how many find bugs/errors (security included) in Windows? MacOS? Other proprietary software? Exactly: many. The only difference is that with open source it is easier to find (or rather, spot) the errors, and if the finder is a programmer they might very well fix it and send a patch in (and/or report it – you may not believe that, but if you look at the bugzilla for different software, including the security sections, you will find quite some entries). Relying on closed source for security (as in: if they cannot see the source code then they cannot find bugs to exploit – which, by the way, is a fallacy unless there is no way to read the symbols in the software and no way to, for example, use a hex editor or even a disassembler on it) is not security at all but rather security through obscurity (which I would rather call "insecurity", "false sense of security through denial" or even – to be more blunt – "asking for a security problem when and where you don't expect it", to give three examples of how bad it is). Indeed, security through obscurity is, just like a poorly designed firewall, worse than none, because you believe you are safe (all the while you are not safe and truly have no idea how bad it is or isn't) and since you believe you have a safe setup you won't look into the issue further (rather than constantly adapting as new ideas, new risks, new anything, come up). Nevertheless, the fact is programmers are human too and while some things might seem completely stupid, blind or anything in between, we all make mistakes.

So, essentially: yes, bugs in software or even hardware (which has happened and will happen again) can be beyond frustrating for the user (and they can also be the same to the programmers involved, mind you, as well as other programmers who need them fixed but cannot fix them). But so can a leak in your house, plumbing problems or even nature (a tree falling onto your house, for example). The truth of the matter is, unless you have somehow forgotten and deemed yourself perfect (which I assure you, no one is, especially not programmers – I'm not the only programmer that has observed this, mind you – but no one else is either, which means you are not perfect, either), you cannot realistically expect anyone else to be perfect. Problems will happen, always.

To bring the issue with OpenSSL into perspective, or maybe better stated: to give a non-computer, real-life example, think about a time when something went wrong (more than the usual things that everyone experiences on a daily basis). For instance, the motor in your car needs replacement. That's not a daily occurrence (or one can hope not!). While this won't always happen, I know from experience – and others I have discussed this with agree – that often when things start breaking down, you feel like it is one thing after another, as if you were cursed. And how long it goes on (days, weeks, …) will vary, but the fact of the matter is, multiple things go wrong, and often at the worst time and/or when least expected (things are going incredibly well and suddenly something horrible happens).

Well, so too can this happen with software. I think the best way to look at it is this: the bugs have already been fixed, and while it is true that bug fixes often introduce new bugs (because, as I put it: programmers – myself included – implement bugs, and that is completely true), that goes for any new feature (any modification to software is bound to introduce bugs – it will not always happen but it always has the potential). The only kind of software or design (of anything else) that has zero problems is the kind that doesn't exist. This is why the RFCs obsolete things, this is why telnet and rcp/rsh were replaced with ssh (over time, even! Some were very slow to change over, and when you look at the vulnerabilities, especially those related to trust with rcp/rsh, it is shocking how slow administrators were to replace them!). This is why TCP SYN cookies were introduced, and this is why everything in the universe (and I use the word universe in the literal sense: indeed, the universe that has the planets we all know of, including Earth) changes. In short: no matter what safety mechanisms are in place, something else will eventually happen (as for the universe, do solar storms mean anything to Earth? What about the sun dying? Yes, both do mean something to the Earth!).

So what is the way to go about this? Address problems as they come, in the way you can. That includes, by the way, giving constructive criticism as well as help where you can (which isn't always possible – e.g., I'm not an electrician so I sure as hell cannot offer advice on an electrical situation, except that I can refer you to an electrician I know to be good, trustworthy and experienced). I think that is the only way to stay semi-sane in such a chaotic world. Whether anyone agrees, I cannot change nor will I try to change their view. All I am doing is reminding others (which I admit is probably not many – but I don't mind: I'm not exactly outgoing or social, so I don't mind that I don't have a widespread audience. I write for the sake of writing, anyway) that nothing is perfect, not humans, not anything else. If you can understand that, you can actually better yourself (and even if you don't use that fact to better others deliberately, you are at least better for yourself, and incidentally you will better others too, even if only indirectly; how you feel is not only contagious, but if you're in a better mood or you have insight into something, then others who are around you or work/deal with/correspond with you will also feel that vibe and/or gain that insight).

Windows XP End of Life: Is Microsoft Irresponsible?

With my being very critical of Microsoft one might make the (usually accurate) assumption that I’m about to blast Microsoft. Whether any one is expecting me to or not, I don’t know but I will make something very clear: I fully take responsibility for my actions and I fully accept my mistakes. I further make the best of situations. As such, I would like to hope everyone else does too. I know that is unrealistic at best; indeed, too many people are too afraid to admit when they don’t know something (and therefore irresponsibly make up pathetic and incorrect responses) or when they make a mistake. But the fact of the matter is this: humans aren’t perfect. Learn from your mistakes and better yourself in the process.

No, I am not going to blast Microsoft. Microsoft _was_ responsible. They announced the end of life _SEVEN YEARS AGO_! I am instead blasting those who are complaining (and complaining is putting it VERY nicely – it is more like the whining of a spoiled brat who throws a temper tantrum when they don't get their own way on every single thing, despite having been told in advance this would happen) about how they now have to quickly upgrade or not get updates, security updates included. For instance, let's take two different groups. Let's start with Rosemayre Barry, manager of the London-based business The Pet Chip Company, who stated the following (to the BBC, or at least it was reported by the BBC):

“XP has been excellent,” she says. “I’m very put out. When you purchase a product you don’t expect it to be discontinued, especially when it’s one of [Microsoft's] most used products.”

 

Sorry to burst your bubble, Rosemayre, but ALL software will eventually be discontinued (just like smoke detectors, carbon monoxide detectors and the like have to be replaced over time and/or are improved over time, and that is not even considering maintenance like battery replacement). You can complain all you want but this is not only the correct thing technically, it is economically unfeasible to continue with a product as old as Windows XP. I don't care how used it is or isn't (I hardly expect it to be the most used of Microsoft's products, however; I would argue its office suite is more used, as it works on multiple versions of Windows and corporations rely on it a lot). I also know for a fact that corporations tend to have contracts with computer manufacturers where they LEASE computers for a few years at a time, and when the time comes for the next lease they will get the more recent software, and this includes the operating system. Why would they do that? Well, again, it is economically better for the company, that's why. And here's some food for thought: Windows XP was released in 2001 and according to my trusty calculator (i.e., my brain) that means it is almost a 13 year old product (as it was released in August and we're only in April). Well, check this. Community ENTerprise OS (CentOS), a distribution of Linux which is largely useful for servers, has a product life cycle, as far as I remember, of only 10 years. And you know something else? CentOS is very stable because it doesn't have many updates, or in other words it is not on the bleeding edge. When a security flaw is fixed, the fixes are backported into the older libraries and/or programs. Indeed, the current GCC version is 4.8.2, and CentOS's current version (unless you count my backport of 4.7.2, which you can find more info about at The Xexyl RPM Repository – possibly others exist somewhere else, but for the time being the packages I maintain I have not updated to the 4.8.x tree) is 4.4.7, which was released on 20120313, or in other words the 13th of March 2012. Yes, that is over _two years ago_. It means you don't get the newer standards (even though the most recent C and C++ standards were ratified in 2011, it is not as if anything released after the ratification date is somehow magically going to have it all; in fact, some features are still not complete in the most recent versions), but it also means your system remains stable, and that is what a server needs to be: what good is a server if the service it offers is unstable (and I'm not referring to Internet connection stability! – that is another issue entirely and nothing to do with the operating system) and hard to use? Very little indeed. And realistically, 10 years is very reasonable if not more than very reasonable. Over the span of 10 years a lot changes, including a lot of core changes (and let's not forget standards changing), which means maintaining it for 10 years is quite significant, and I cannot give anything but my highest praise to the team at CentOS – an open source and FREE operating system. To be fair to this manager, they at least DID upgrade to a more recent Windows, but the very complaint is beyond pathetic, irresponsible and outright naive and ignorant at best and stupid at worst.
It is also unrealistic and unfair to Microsoft (and this is coming from someone who is quite critical of Microsoft in general and who has railed on them more than once – in security and otherwise, in quotes about their capabilities and in articles alike – and quite harshly too; examples, one of which even includes a satirical image I made that is directed at Windows in general: Microsoft's Irresponsible Failures and Why Microsoft Fails at the Global Spam Issue).

Next, let's look at what the UK government has done: they are paying Microsoft £5.5m to extend updates for Microsoft Windows XP, Office 2003 and Exchange 2003 for ONE year. That is absolutely astonishing and I would think – to UK tax payers – atrocious. What the hell are they thinking? If SEVEN years' warning was not enough time, what makes ONE extra year worth THAT MUCH? Furthermore, and most importantly, if they could not UPGRADE in SEVEN YEARS, what makes any rational being expect them to UPGRADE WITHIN A YEAR? They claim they're saving money. Yeah, right. Not only are they paying money to get updates for another year, they will STILL have to upgrade in due time if they are to keep getting updates. Think of it this way. When a major part of your car dies, you might consider fixing it. It will likely be pricey. Now suppose that shortly thereafter (let's say within a year or two) another major part of your car dies, and the car has been used for quite some years and is certainly out of warranty. What is the most logical and financially best assumption and choice? Is it to assume that this will be the last part to die – surely nothing else can go wrong! – and to pay for it and then wait until the third part dies (which it almost certainly will; it is mechanical and mechanical things die!)? Or is it maybe better to cut your losses and get a new car? I think we all know the answer. "We", of course, does not include the UK government.

The bottom line here is quite simple though: no, Microsoft is not being irresponsible. They are not being unreasonable either. They gave SEVEN YEARS of notice. The only irresponsible and unreasonable people – and companies and/or government[s] – are those who STILL use Windows XP, and especially those that are now forced to upgrade and at the same time are whining worse than a spoiled brat who is used to getting his way but throws a tantrum the one time he doesn't. Lastly, I want to point out the very dangerous fallacy these people are actually aligning themselves with. To those of us who remember when the TELNET and RSH protocols were prevalent, there came a time when enough was enough and standards had to change (e.g., secure shell, aka ssh). Those who had any amount of logic in them UPGRADED. Many (though not as many as should have) saw the many problems with those protocols, problems that persisted for far too long, among them the following (and note that these are on Unix systems, and yes, that means NO system is immune to security problems, be it Windows, Mac, Unix or anything else. Incidentally, Unix systems are what typically are used for servers, which means customers' data in databases running on those servers, especially then, as Windows NT was in its infancy by the time most – but probably not all – changed over):

  1. The fact that a common configuration would allow "you" to remotely log in to a machine as a user from ANY HOST WITH NO PASSWORD. And of course it was PERFECTLY SAFE because, after all, they won't have a user with the same name, right? Well, did it ever occur to you that they could CREATE a user with that name? And ever hear of grabbing the password file remotely to find user names? Or an unscrupulous employee who could do the same? An employee who was fired and wants revenge (and happens to have user names, or maybe even stole data before they were completely locked out after being fired? Maybe they even left a backdoor in!)? For those who are slow, that is sarcasm; it was NEVER safe and it WAS ALWAYS naive at best (this same problem is one of trust relationships, and that is one of the biggest problems with security – too much trust is given far too easily). And indeed, Unix – just like the predecessor to the Internet – was NEVER designed with security in mind. That is why new standards are a good thing: to address problems and to extend, deprecate or obsolete standards (like, I don't know, IPv6 as opposed to IPv4, anyone?).
  2. No encryption means sniffing could show the user and password (as well as other information in the traffic) to the sniffing party. Assuming that there is no one to sniff your traffic is security through obscurity at best, and that is arguably worse than no security (it is a false sense of security, and when taken to the extreme some will refuse to believe it is a problem and are therefore blinded to the fact that they already are or could be compromised at any moment).

Consider those two examples for a moment. Finally, take the logic of "most people use it" or "it is so convenient and we shouldn't have to upgrade" and where do you end up? Exactly like those not upgrading from Windows XP, or otherwise throwing tantrums about having to upgrade from Windows XP to something more recent (despite the seven years of notice and it being in the news more and more as the deadline approached) or else not receive updates. In other words, you are staying behind the times AND risking your data, your customers' data, and your system (and that means your network, if you have a network). And you know something? You had it coming to you, so enjoy the problems YOU allowed, and let's hope that only you or your company is affected and not your customers (because it would be YOUR fault).

Preventing systemd-journald and crond from flooding logs

Update on 2014/05/21: I should point out that the change I suggest in /etc/systemd/journald.conf in fact produces a (non-fatal) error. But it is simply ignored. I've not bothered to play with it beyond that. I somehow suspect (but could very well be wrong) that uncommenting the line and leaving it empty will either not work or will fall back to the default. However, since it simply results in a warning in the logs but still does what I want, I don't see it as harmful or a problem (certainly not enough to test further).

I will come out and admit it fully: there has always been at least one thing that bothered me a great deal with systemd. To be brutally honest there are quite a few things that have bothered me. But one of the most obnoxious is something they seem to not understand as a problem (despite the bug reports and, for some people, the concern that someone had compromised their system, due to the way the message is written): every time cron runs a task it shows not one but two messages in the system log (/var/log/messages) and the journal. It is absolutely infuriating as it fills the log files, which then get rotated out (due to size reaching its cap), and besides that, it is REALLY hard (short of grep -v on a pattern over multiple log files, but that should NOT be necessary!) to find other important log messages in the huge ugly disaster that the log file is left in. Equally bad is that there is this log file, called – check this out – /var/log/cron, with the information that should be ALL that is needed. But of course not; not only does it NEED to be in /var/log/messages, and not only does it NEED to be in /var/log/cron, it ALSO NEEDS to be in the journal, the so-called improvement over logs. /sarcasm. Three places for the same bloody message? Really? What the hell is that? Anyone who knows enough to check logs will know that there are MULTIPLE LOGS for DIFFERENT reasons! So while I titled this post as being about preventing the flooded logs, that is realistically FAR too nice. It should be more like making systemd shut the hell up and knock off the stupid log flooding (which, incidentally, could be considered by some a DoS – denial of service – attack, since it makes it much more difficult to normally manage and review logs).

Well, I had had WAY too much of this crap and while I'm easily irritated (and agitated lately) I think I'm not the only one who is completely fed up with the way they are handling it (or rather not handling it). So here is how you can make this flood stop. First, though, the messages look like this:

 Mar 16 04:55:01 server systemd: Starting Session 3880 of user luser.
Mar 16 04:55:01 server systemd: Started Session 3880 of user luser.

in /var/log/messages. As you can imagine, an unsuspecting user might see that for some of the system cron jobs (e.g., those in /etc/cron.hourly/, which are run by root) and think that someone logged in as root on their system (when in fact it is cron). Conveniently, the clowns responsible make it end up in that file even though it is also in the journal. Why is that? Oh, something like this, taken from journald.conf(5) (that is: man 5 journald.conf):

       ForwardToSyslog=, ForwardToKMsg=, ForwardToConsole=
Control whether log messages received by the journal daemon shall be forwarded to a traditional syslog daemon, to the kernel log buffer (kmsg), or to the system console. These options take boolean arguments. If forwarding to syslog is enabled but no syslog daemon is running, the respective option has no effect. By default, only forwarding to syslog is enabled. These settings may be overridden at boot time with the kernel command line options “systemd.journald.forward_to_syslog=”, “systemd.journald.forward_to_kmsg=” and “systemd.journald.forward_to_console=”.

Someone remind me. Wasn't the idea of Fedora Core 20 to REMOVE the syslog daemon from the default install because the journal was sufficient, because logs were being stored twice (Ha! Nice number, but too bad it is lower than the truth, at least in the case of cron) and because the journal has had enough time to show it works? No, no need: that absolutely was their idea! Yet they clearly didn't think it through very well, did they? If they forward to syslog, then what about systems that are updated rather than newly installed? The syslog daemon will still be installed, geniuses! Yet here you forward to syslog. Brilliant, if your idea of brilliant is beyond stupid.

Oh, and if you think the rant is done, I'm sorry to say it is not. What you also find with cron jobs is this, in /var/log/cron as it always has been (not the same entry or same instance, but it shows you the info – in fact, it shows more specific info, like WHAT was executed, rather than just a vague and unhelpful "session started" for the user – and not two nonsense lines about 'starting' and then 'started'; whatever happened to "no news is good news", i.e., if there is no output there is no error?):

Mar 29 20:55:01 server CROND[2926]: (luser) CMD (/home/luser/bin/script.sh)

(There also exist the normal run-parts entries for the hourly, daily and monthly cron jobs, but those also show the commands executed.)

And then there is the third copy: the journal, which includes BOTH of the above:

Feb 16 18:05:01 server systemd[1]: Starting Session 1544 of user luser.
Feb 16 18:05:01 server systemd[1]: Started Session 1544 of user luser.
Feb 16 18:05:01 server CROND[13241]: (luser) CMD (/home/luser/bin/script.sh)

Redundancy is good in computing but NOT in this way. Redundancy is good with logs but again, NOT in this way. No, this is just pure stupidity.

Now then, here’s how you can make journald cut this nonsense out.

  1. In /etc/systemd/ you will find several files. The first one to edit is “journald.conf”. In it, you need to uncomment (remove the # at the start of the line) the line that starts with: #Storage=
    You then need to change whatever is after the = to “syslog” (without the quotes).
  2. The next file (same directory) is “user.conf”. Again, you need to uncomment a line to activate the option. The line is #LogTarget= and you want to change what is after the = to “syslog” (again, without the quotes).
  3. Next you need to edit “system.conf” (same directory still) and make the same change as in “user.conf” (note: I am not 100% sure that you need to do it for both “user.conf” and “system.conf”; if only one is required I don't know which one, nor do I care). A consolidated recap of these three edits appears after the restart commands below.
  4. Now, this may vary depending on what syslog daemon you have. I’m assuming rsyslogd. If that is the case change to the directory: /etc/rsyslog.d/
  5. Once in /etc/rsyslog.d/ create a file that does not exist – maybe cron.conf – and add the following lines:
    :msg, regex, "^.*Starting Session [0-9]* of user" stop
    :msg, regex, "^.*Started Session [0-9]* of user" stop
    Note on this: "stop" is for newer versions of rsyslogd, which you will have if you're using Fedora. Otherwise, for older versions, change the "stop" to a tilde (a "~"). If you check /var/log/messages after restarting rsyslogd and you notice that there is a problem with "stop" then you can try the other (it will also show you, if you try ~ first, that ~ is deprecated). Those two rules, combined with the changes to the systemd files, ensure that only syslog gets the cron session messages, and syslog then discards them (which is fine because, as I already noted, /var/log/cron has that info).
  6. To enable all of this you would want to run the commands given below (you need to be root for this – in fact you need to be root for all of the steps).

 

# service rsyslogd restart
# systemctl restart systemd-journald.service

Note the following: the second command may or may not be enough. Since I only did this on a remote server, I was not about to play the game of "is it because I didn't restart the right service or is something else not properly configured?" by experimenting further. I've yet to do this on any local machines so I cannot remark on it more than that. If rebooting is an option and it does not work as described above, then rebooting could be one way around it.
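
For reference, after steps 1 through 3 the relevant (uncommented) lines in the three files should look something like this. This is only a recap of the edits described above, with the surrounding commented lines omitted:

In /etc/systemd/journald.conf:
Storage=syslog

In /etc/systemd/user.conf:
LogTarget=syslog

In /etc/systemd/system.conf:
LogTarget=syslog

(As the update at the top of this post notes, journald complains about the Storage value but carries on regardless.)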

Questions that might come to mind for some:

  1. Since we redirect the journal to syslog, do we see the usual log messages? Yes, you do. For instance, you'll see when someone uses 'su', and you'll see when (for example) you restart a service that logs its stopping and/or starting to the syslog, in /var/log/messages too.
  2. What about the fact this shunts cron messages out of the syslog? Well, as I mentioned, it is stored (in more thorough form) in /var/log/cron so you won’t lose it. The only thing that loses this is where it should not be stored in the first place: /var/log/messages
  3. How does this affect the journal? Good question. I actually don't care – the journal uses more disk space, and with syslog, log rotation works just as well, as do backups, remote storage and compression of log files (if you have it set up to do that). My guess is this: you will find that future log messages are not sent to the journal but only to syslog. I am not 100% certain of this, however. I will know in time if I bother to check. I think it would depend on how the journal interprets the options; indeed, many other options that I thought might solve the problem were definitely not interpreted as I guessed. So the question really comes down to whether directing to syslog diverts messages from the journal or whether they go to both. For those low on disk space, though, the journal uses way more. If you do a 'systemctl status systemd-journald.service' you might see something like: Runtime journal is using 6.2M (max allowed 49.7M, trying to leave 74.6M free of 491.1M available → current limit 49.7M) and another line like: Permanent journal is using 384.6M (max allowed 2.8G, trying to leave 4.0G free of 22.5G available → current limit 2.8G).
  4. Perhaps most importantly: does this prevent showing users logging in? No. You’ll still see, for example, the following:
    Mar 29 21:49:37 server systemd-logind: New session 167 of user luser.
    and when they log out:
    Mar 29 21:49:39 server systemd-logind: Removed session 167.

All that noted, hopefully someone will see this and be helped by it. What would be more ideal, however, is if the maintainers actually fixed the problem in the first place. Alas, they are only – just like you and me – human, and to be fair to them, they aren’t being paid for the work either.

whois and whatmask: dealing with abusive networks

(Update on 2013/03/11: I added another grep command, as I just discovered another line that gives the netblock of an address directly from whois, so that you do not have to worry about finding the proper CIDR notation – whois shows it to you. Ironically, the IP in question was from the same ISP I wrote about originally – hinet.net; regardless, the second grep output will show one of the many differences between whois output formats.)

My longest-standing friend decided last year, at the end of the year, that he wanted to get me some books (thanks a great deal, by the way, Mark – it means a damn lot and I'm eternally grateful we've stayed in contact throughout the years). While he lives in England and I live in California, we've "known" each other for almost 18 years. There was a problem with Amazon.com, and he was also in New York as part of his job for part of this time, so the gifts did not arrive until yesterday. Now, of course I could not know every detail of the books, but one of them was a Linux networking book. It is more like a recipe book, and while there is some of it I know (and some I know very well), and some that is not useful to me, there's going to be some in which I find something of interest or use. Which brings me to this post. Obviously I know of the whois protocol, but what I did not know about is the utility 'whatmask'. There is a similar utility called 'ipcalc', but on CentOS it is very different from what I expected and I found many problems with it. So I was looking at the book (the name fails to come to mind at this time), briefly skimming sections, and I noticed they discussed this very thing and mentioned the alternative 'whatmask' on CentOS and Fedora Core.

I thought this would be very interesting to look at. Sure, you can do it by hand, but this is much more time efficient and gives you a quick summary. Further, with whois, you can confirm your suspicions. Yes, I know that if whois shows a netblock as (this is of course a private block) 10.0.0.0 – 10.255.255.255, the CIDR notation is /8. But that is beside the point, and if I were to consider that, then I would have nothing to write about (and it has been quite a while since I have written anything strictly technical – something I've been wanting to correct since my birthday last month, but I have been too busy working on a project that is pretty important to me).

Now, then, about dealing with abusive networks. Firstly, there are many ways to take care of a network. I am obviously not condoning nor suggesting anything malicious, nor am I condoning or suggesting doing anything at their end. The Linux kernel has netfilter, which is what iptables (and ip6tables) uses – the IPv4 (and IPv6, respectively) firewall. Yes, I could write an iptables rule to stop all traffic from a certain network, but this is less efficient than simply adding a blackhole route for that network. The problem was: how do you determine the entire range of IPs that they own? I seem to remember that they had different blocks. Further, a whois on the domain won't show the network block (whereas a whois on an IP in the netblock does). Either way, the procedure below can be done for any IP.

The network in question is hinet.net and is located in Taiwan. The abuse is not so much attack attempts, and it is not necessarily the owner's fault (it is an ISP). But what it is is a lot of spam attempts (to accounts that don't exist on my end, and relay attempts to other hosts, neither of which I allow, just like all responsible administrators; indeed, running an open relay – notwithstanding an administrator who unknowingly makes a mistake or has a flaw exploited on their server – is nothing but malicious, as far as I am concerned). Since this is an ISP (I know it is, in fact, because I remember seeing dynamic IPs in their block or blocks before), they don't need anything from my network. And even if they have customers who are corporations, the fact of the matter is, I am not a customer of said corporations, I've never seen such a corporation, and I don't actually care: abusive networks are not something anyone on the receiving end would tolerate (just like if someone walks up behind you and hits you in the back, you would not exactly tolerate it). So, let us take an IP in their network and see the ways to determine all IPs in the block that IP is in:

One of the IPs is '168.95.192.15'. This is one that I specifically added a blackhole route for, and that means one thing and one thing only: I saw it attempt what I described above. So what do you do? Well, firstly, I run fail2ban (one option of many) and I'm fairly restrictive on how many failures I allow (like, 1) before they are blocked. But let's assume you want to take care of ALL IPs in that block (because you've seen many over the years) and you don't even want to give them a chance to connect to your services. Then what you do is the following. Note that I am limiting the output here.

$ whois 168.95.192.15 | grep -E 'NetRange|inetnum|CIDR'
inetnum:        168.95.0.0 - 168.95.255.255

Note that if you see CIDR in the output (see also the end of this post, where I give another whois command piped to grep, showing another line that gives the CIDR notation) then you have the network block right there. If, however, you see NetRange or inetnum (there may be others that I've not seen, so your mileage may vary and it may be wise not to pipe the output to grep), then you don't have the block, at least not in a notation that setting a blackhole route will accept (again, see the end of the post, as I discovered another field that gives the entire network block directly).

Now, the inetnum output above would tell me that the CIDR notation is /16, so if I add a blackhole route for 168.95.0.0/16 then I am set. But assume for a moment that you don't know that. Well, here is where whatmask comes in handy. Sort of. It does need a CIDR notation, with or without an address. So if you take the fact that /32 is one single address (which whatmask will show as 0 usable addresses because it is considering a network block, which therefore includes a network address and a broadcast address – it assumes the address you specified IS the network and broadcast address), that /0 is every single IPv4 address (which is 2 ^ 32, much like IPv6 has 2 ^ 128 IPs), that /31 is 2 addresses, and more generally that the common network block sizes (in CIDR notation) are /8, /16 and /24 (/8 having the most addresses, /16 having fewer than /8 but more than /24, /24 having the fewest of those), then you know that the possible CIDR numbers you can specify are between /0 and /32. It won't be /0 and it won't be /32, and it won't even be /31 for a network block (at least not in this way; a network needs a broadcast – in IPv4 – and a network address), so you can just play around with it if you don't know. Over time you get used to recognising the proper CIDR notation, but understand this: the number after the slash is how many bits are reserved for the network portion of the address. So if it is /8 then 32 – 8 = 24 is how many bits are available to hosts, which is why the higher the number after the slash, the fewer IPs are available (there is also a small code sketch after the list of observations below that does this same arithmetic, if seeing it in code helps). When you find the right number, you can then do this:

$ whatmask 168.95.192.15/16
------------------------------------------------
TCP/IP NETWORK INFORMATION
------------------------------------------------
IP Entered = ..................: 168.95.192.15
CIDR = ........................: /16
Netmask = .....................: 255.255.0.0
Netmask (hex) = ...............: 0xffff0000
Wildcard Bits = ...............: 0.0.255.255
------------------------------------------------
Network Address = .............: 168.95.0.0
Broadcast Address = ...........: 168.95.255.255
Usable IP Addresses = .........: 65,534
First Usable IP Address = .....: 168.95.0.1
Last Usable IP Address = ......: 168.95.255.254

Now observe the following things:

  • The result of the filtered whois output shows: 168.95.0.0 - 168.95.255.255
  • The Network Address line in the whatmask output is: 168.95.0.0
  • The Broadcast Address line in the whatmask output is: 168.95.255.255
  • The First Usable IP Address line in the whatmask output is: 168.95.0.1
  • The Last Usable IP Address line in the whatmask output is: 168.95.255.254
  • Add these together, and you know that the netblock IS 168.95.0.0 - 168.95.255.255 which means that the proper netblock in CIDR notation IS 168.95.0.0/16
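
As promised above, here is a small sketch in C that does the same arithmetic whatmask did for this example. It is purely illustrative and my own sketch (the address 168.95.192.15 and the /16 prefix are hard-coded; they are not special beyond being the example at hand):

/* Minimal sketch: derive the network address, broadcast address and
 * usable host count for 168.95.192.15/16 - the same arithmetic whatmask
 * performs.  The address and prefix are hard-coded for illustration. */
#include <stdio.h>
#include <stdint.h>
#include <arpa/inet.h>

int main(void)
{
    const char *addr = "168.95.192.15";
    unsigned prefix = 16;                 /* the number after the slash */
    struct in_addr in;
    char buf[INET_ADDRSTRLEN];

    if (inet_pton(AF_INET, addr, &in) != 1) {
        fprintf(stderr, "bad address\n");
        return 1;
    }

    uint32_t ip    = ntohl(in.s_addr);
    uint32_t mask  = prefix ? 0xffffffffu << (32 - prefix) : 0;
    uint32_t net   = ip & mask;           /* network address   */
    uint32_t bcast = net | ~mask;         /* broadcast address */

    in.s_addr = htonl(net);
    printf("Network:   %s/%u\n", inet_ntop(AF_INET, &in, buf, sizeof(buf)), prefix);
    in.s_addr = htonl(bcast);
    printf("Broadcast: %s\n", inet_ntop(AF_INET, &in, buf, sizeof(buf)));
    printf("Usable:    %u\n", (unsigned)(prefix < 31 ? ~mask - 1 : 0));
    return 0;
}

If you compile and run it, it prints the same network address (168.95.0.0), broadcast address (168.95.255.255) and usable-host count (65,534) shown in the whatmask output above.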

Putting that together, you can add to your firewall script or some other script (one that runs when you boot your computer, so the route is restored on the next reboot) a command like so (note the # as the prompt – you need to be root to do this, so either add sudo in front of it or su to root, do what you need to do, and then log out of root):

# ip route add blackhole 168.95.0.0/16
# ip route show
blackhole 168.95.0.0/16

(Technically, yes, the ip route show command will show more output, but I am showing only the route we added, for the sake of brevity.)

After this, no IP in that range will EVER reach your box directly (I won't get into what happens if they breach another box in your network and connect from that box to the one you blocked them from, nor will I discuss segregating networks, because those are other issues entirely).

As for the second grep output regarding whois directly giving you the CIDR notation (note that I'm only searching for one string in this one because I already showed the others I'm aware of, and this specific IP uses the one I'm searching for – indeed, I first did a whois on the IP with no grep, and that's when I discovered this line):

$ whois 36.225.82.84|grep Netblock
Netblock: 36.225.0.0/16

So from that, as root, you could add a route for that range (or do whatever; put an iptables rule in or some such – blackhole routes are more useful when blocking an entire subnet because they use fewer resources, though by how much I don't know and I have no real way to benchmark it. I don't actually care, though. The entire point of the post was not adding routes or adding firewall rules but rather dealing with abusive networks. The same can be applied if you take some of the lists out there of networks that are known to be the source of attacks, or if you want to block some network for abuse or some other reason entirely).

Dangerous and Stupid: Making Computer Programming Compulsory in School

This is something I thought of quite some time ago but for whatever reason I never got around to writing about it. However, since it seems England is now doing exactly what I would advise against, and since I'm not actually programming (for once), I will take the time to write this. And it's probably for the best that I'm not programming today, given how tired I am. But I guess the article and its clarity will either show that or not. Either way, here goes:

So firstly, what is this all about? To put it simply, in September, all primary and secondary state schools in England will require students to learn programming ("coding" is the way many word it). To make it worse, it seems they (and I hope this is just, for example, the BBC wording it this way) do not know that there is a real difference between programming and coding.

Although I have discussed that very topic before, let me make it clear (because even if those involved won't see this, it is pretty important). You write code, yes, but much like a building contractor NEEDS plans and a layout of the building BEFORE construction, you REALLY NEED a plan BEFORE you start to write the code. If you don't, then what are you really going to accomplish? If you don't even know what you're TRYING to write, then how WILL you write it? You might as well rewrite it SEVERAL times, and guess what? That is EXACTLY what you will be doing! How do I know? Because I've worked on real programming projects as well as stupid programs with no real use, that's how. If you don't have a purpose (what will it do, how will it behave if the user enters invalid input, how will output look, etc.) you are not going to learn, because all you're doing is writing code with no meaning. Besides not learning properly, you're more likely to learn bad programming practices (because, after all, you're not really working on anything, so "surely it is OK if I just use a hack or don't use proper memory management!"). The real danger there is that the fact it APPEARS to work further strengthens your reasons to use said bad practices in REAL projects (just because a computer program does not crash immediately, in the first hour of run time, or even all the way to the program finishing – for those that are meant to finish, anyway – does NOT mean it is functioning properly; sorry, but it is NOT that simple). There are many quotes about debugging, and there's a saying out there (I cannot recall the ratio but I want to say 80:20 or 90:10) that X percent of the time on a programming project is spent debugging, and it is not exactly a low number, either.

The problem is this: computer programming involves a certain aptitude, and not only will some students resent this (and just one student resenting it is a problem with this type of thing) just as they resent other things, some might still enjoy it even if they don't learn it properly, which is a risk to others (see the end of this post). Also, you cannot teach security, and if you cannot teach security you sure as hell cannot teach secure programming (and it's true: they don't, and that is why there are organisations that guide programmers in secure programming – OWASP for web security alone, and there are others for system and application programming). As for resentment, take me, for example, back in high school. I didn't want to take a foreign language because I had no need for it, I was very, very ill at the time (much more than I am now) and I have problems hearing certain sounds (of course the school's naive "hearing tests" told them otherwise, even though I elaborated time and again that, yes, I hear the beeps, but that doesn't mean much in the way of letters, words and communication – the things that matter for learning – which I do not hear perfectly, does it? The irony is I had hearing tubes put in when I was three – perhaps the school needed them? – so you would think they could figure this out, but they were like all schools are: complete failures), all of which ultimately would (and indeed DID) make it VERY difficult to learn another language. But I was required to take a foreign language. So what did I do? I took the simplest of the offered languages (simplest in terms of whatever those 'in the know' suggested), for the least number of years required (two), and I basically learned only what I absolutely needed to pass (in other words, I barely got the lowest passing mark, which by itself was below average) and forgot it in no time after getting past the course.

The fact that programmers in the industry just increase statically sized arrays to account for users inputting too many characters, instead of properly allocating the right amount of memory (and remembering to deallocate it when finished) or using a dynamically sized container (or string) like C++'s vector (or string class), says it all. To make it more amusing (albeit in a bad way), there is this very relevant report, noted on the BBC in February of 2013. I quote part of it and give the full link below.

Children as young as 11 years old are writing malicious computer code to hack accounts on gaming sites and social networks, experts have said.

 

“As more schools are educating people for programming in this early stage, before they are adults and understand the impact of what they’re doing, this will continue to grow.” said Yuval Ben-Itzhak, chief technology officer at AVG.

Too bad adults still do these things then, isn’t it? But yes, this definitely will continue, for sure. More below.

Most were written using basic coding languages such as Visual Basic and C#, and were written in a way that contain quite literal schoolboy errors that professional hackers were unlikely to make – many exposing the original source of the code.

My point exactly: you'll teach mistakes (see below also), and in programming there is no room for mistakes; thankfully here, at least, it was not for stealing credit card numbers, stealing identities or anything of that degree of seriousness. Sadly, malware these days has no real art to it and takes little skill to write (anyone remember some of the graphical and sound effects in the payloads of the old malware? At least back then any harm – bad as it could be – was done to the user rather than on a global scale for mass theft, fraud and the like. Plus, the fact that most viruses in the old days were written in assembly, more often than not, shows how much has changed, skill wise, and for the worse).

The program, Runescape Gold Hack, promised to give the gamer free virtual currency to use in the game – but it in fact was being used to steal log-in details from unsuspecting users.

 

“When the researchers looked at the source code we found interesting information,” explained Mr Ben-Itzhak to the BBC.

“We found that the malware was trying to steal the data from people and send it to a specific email address.

 

“The malware author included in that code the exact email address and password and additional information – more experienced hackers would never put these type of details in malware.”

 

That email address belonged, Mr Ben-Itzhak said, to an 11-year-old boy in Canada.

 

Enough information was discoverable, thanks to the malware’s source code, that researchers were even able to find out which town the boy lived in – and that his parents had recently treated him to a new iPhone.

Purely classic, isn’t it? Sad though, that his parents gave him an iPhone while he was doing this (rather than teaching him right from wrong). But who am I to judge parenting? I’m not a parent…

Linda Sandvik is the co-founder of Code Club, an initiative that teaches children aged nine and up how to code.

She told the BBC that the benefits from teaching children to code far outweighed any of the risks that were outlined in the AVG report.

“We teach English, maths and science to all students because they are fundamental to understanding society,” she said.

“The same is true of digital technology. When we gain literacy, we not only learn to read, but also to write. It is not enough to just use computer programs.”

No, it isn't. You're just very naive or an idiot. I try to avoid direct insults but it is the truth and the truth cannot be ignored. It IS enough to just use computer programs, and most people don't even want to know how computers work: THEY JUST WANT [IT] TO WORK AND THAT IS IT. There are few – arguably there are no – so-called benefits. Why? Because those with the right mindset (hence aptitude) will either get into it or not. When they do get into it, though, at least it's more likely to be done properly. If they don't, then it wasn't meant for them. Programming is a very peculiar thing in that it is in fact one of the only black and whites in the world: you either have it in you or you don't. Perhaps instead of defending the kids (which ultimately puts the blame on them, and even I, someone who doesn't like being around kids, see that that is not entirely fair – shameful!) by suggesting that the gains outweigh the risks, you should be defending yourself! That is to say, you should be working on teaching ethical programming (and if you cannot do that, because, say, it's up to the parents, then don't teach it at all) rather than taking a "here it is, do as you wish" (i.e., the lazy way out) attitude. Either way, those who are into programming will learn far more on their own and much quicker too (maybe with a reference manual, but still, they don't need a teacher to tell them how to do this or how to do that; you learn by KNOWING combined with DOING and EVALUATING the outcome, then STARTING ALL OVER). Full article here: http://www.bbc.co.uk/news/technology-21371609

To give a quick summary of everything, there is a well known quote that goes like this:

“90% of the code is written by 10% of the programmers.” –Robert C. Martin

Unfortunately, though, while that may be true (referring to programming productivity), there is a lot of code out there that is badly written and that puts EVERYONE at risk (even if my system is not vulnerable to a certain flaw that is abused directly by criminals, I can be caught up in the fire, even if that only means bandwidth and log file consumption on my end; worse, however, is when a big company has a vulnerable system in use, which ultimately risks customers' credit card information, home addresses and any other confidential information). This, folks, is why I put it under security, and not programming.

Fedora Core 20 Oddities

2014/06/22:
Fixing a mistake that I had right the first time but erroneously ‘fixed’.
In actuality, the journal does include more than /var/log/messages does (and I noticed this after the fact, which is when I documented how to prevent systemd from flooding the logs… but forgot about this post until yesterday). Still, as pointed out, neither the journal nor messages needs to show information on (for example) cron jobs. Furthermore, the fact remains that /var/log/messages – and /var/log/ itself, excluding the journal – is smaller than the journal by a fair bit (at this time: logs + journal = 818MB, logs – journal = 57MB).

Addendum on 2013/12/30:

As I hoped (and somehow expected it to be, though I was in a really bad state of mind and very impatient, hence not giving it time, which I admit is very shameful and even hypocritical on my end), the issue with libselinux was in fact a bug. So that makes updating remote servers (in a VM) much less nerve-wracking (the delay on the relabel, for instance, would be concerning, as there would be no way to know if there was a problem or not until later on):

- revert unexplained change to rhat.patch which broke SELinux disablement

I still find it odd that they would remove the MTA and syslog, but to be fair, at least they are not removed from the OS itself but merely from the core group of packages. There is the question, then, of why I keep this post at all. Because I find it odd (even if some of the most brilliant things that seem normal now were originally deemed odd), and what is done is done, which means it would be more fake on my end to suddenly remove it (besides, I do give credit to the Fedora Project too, which is a good thing for anyone who might only see the negative at times, like I myself was doing at the time of writing the post). That's why. Unrelated: to anyone who happens to see this around the time of the edit date, while I don't really see New Year's as anything special (most holidays, in fact), I wish everyone a happy new year.

Addendum on 2013/12/27:

Two things I want to point out. The first is a specific part of my original post. The second is actually giving much more credit to Fedora Core than I may seem to give in the post. In all honesty I really value Fedora Core, what they have done and how far they have come along, and the projects I maintain under Fedora need a more up-to-date distribution because I use it to its full potential (e.g., the 2011 C and C++ standards are not supported by the older libraries on less frequently updated distributions). So despite my complaints in this post, I want to thank the Fedora Project for how far they have come along, for continuing the project, and for it actually being a very good distribution. Keep it up, Fedora. Nothing is expected to be perfect and you cannot please everyone, but the fact that the software in question can still be installed (and is not removed completely) is really enough to make anything that might otherwise be very annoying more of a nuisance for new installs. Here are my two notes, then:

First, the part about log file size. Last night, after posting this, I realised something that – at first thought – might make my point less valid (and realistically that would be nice, as it would be one less valid complaint) but actually makes it a little bit worse. The problem? The journal could be viewed as the counterpart of just /var/log/messages, which means that the extra size over /var/log/ in its entirety understates things; instead of the comparison being the journal versus all of /var/log (313MB bigger, if you consider all journal files), it is the journal versus my /var/log/messages files, which total 7.9MB (again: the former compressed, the latter not) – so, for one log type (the syslog) instead of all logs, that is 305.1MB larger.

Second, to be fair to Fedora Core: I could probably have worded 'Quality Control' better. I was a bit irked by the SELinux issue (a configuration file not behaving as a configuration file) and, as I noted, I've been fairly agitated lately, too. In fact, to be even more fair, Fedora has actually come a very long way (I remember trying it with release 1 or 2 and having quite a lot of trouble with it due to hardware support, or lack thereof – but realistically it was probably not even that bad when you consider that it is not a single piece of software; you have the toolchain, you have the kernel, you have editors, the desktops, and much more, and all of this is something to consider: it takes time to get stable – which it is – and fully functional), and while I find some of the things they decided to change this time around quite laughable, I still value their work, I still will use the distribution, and, as they do point out, there's no harm in leaving a syslog package or an MTA installed (I was just dumbfounded that a Linux distribution – no matter which one – would think it is a good idea to remove the syslog daemon and at the same time also remove the MTA from the default install). So even if this post seems rather aggressive and thankless towards Fedora Core, I really am quite thankful for their work. As I noted: it is a rant, and as a rant it is critical of certain things and usually not constructive criticism (which is why I added this note here). In fact, I will change the title of this post and the link, too. It is only fair and it is the right thing to do.

The original post is as follows (I'm not updating the post to reflect the title change as I already made the point clear above, but for what it's worth this is not really about quality control: the update went quite smoothly – I just find some of the things that changed rather unlike a Linux distribution).

Need an example of horrible quality control? Well, let me tell you about this operating system I use, one that I'm usually quite fond of. One that is quite old nowadays and that I feel has gone back to its early days as far as quality is concerned. Yes, Fedora Core 20. As a programmer myself I am very well aware that mistakes happen and that programmers are just as guilty of this, and also that creating things (like software) involves risks (as does using said created things). I'm also usually very tolerant of mistakes in programming, for I know very well how it works. I could also be angry at – and at the same time be blaming – myself (and believe me, I absolutely am!) for going with the upgrade despite having a really bad feeling about it (the first time I have had a very bad feeling looking at release notes as well as following the release plans during the development process). But at the same time, what can I do? At best I can wait, but I can only wait for so long (and 'long' is not at all a long time – more like fairly short when you consider the end of life of the release) and hope the next release is better.

But I cannot remember many times when I have been as stunned (in a bad way) as I am now, with any software. I don't even know where to begin, so I'll just come out and write this first: yes, this is a rant, and although I am irritable lately I also admit that, in general, I go full on with rants. Either way, it is also something that I feel needs to be written (even if just for me), as FC 20 is a horrible example of software quality. I was going to skip writing this until today, when I ran into two little issues that really bothered me (and I admit fully that, with the way things have been lately, bothering me is pretty easy, but…) enough that I had to look into what the hell was going on in more detail. Before I get to that, though, I'm going to take a stab at two of the changes in Fedora Core (both by the same person, mind you) that I thought were quite idiotic when I first read them (perhaps because they are, in many respects) and that I still do (plus, the actual gains they claim are exactly the opposite, or if nothing else not true, and I provide proof of that).

On today’s Internet most SMTP hosts do not accept mail from a server which is not configured as a mail exchange for a real domain, hence the default configuration of sendmail is seldom useful. Even if the server is not tied to a real mail domain, it can be configured to authenticate as a user on the target server, but again, this requires explicit configuration on both ends and is fairly awkward. Something that doesn’t work without manual configuration should not be in the default install.

So let me get this straight. SMTP hosts do not accept mail from a server which is not configured as a mail exchange for a real domain? And even if the server is not tied to a real mail domain, it can be configured to authenticate as a user on the target server, BUT AGAIN that requires explicit configuration on BOTH ends? Furthermore, since it doesn’t “work” without manual configuration it should not be in the default install? Okay then, I ask: why is static IP networking in the default install? I guess that would be because of the part where it is “useful”, yes? Or maybe it is because Unix (and therefore Fedora Core) is a network operating system, so it HAS to have support for static IP addresses? Well, with that logic, here is something one would think is VERY OBVIOUS but clearly IS NOT: just because not every system needs a feature (not every system is part of a domain or even just an intranet, for example) does not mean the feature is useless. Also quite amusing is this: I use an MTA (guess which one?) in at least one cronjob and I only had to configure the MAIL SERVER. I wonder why and how that might be possible? /sarcasm
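
For what it’s worth, here is roughly what I mean (a minimal sketch only – the address and script name are made up, not my actual setup). cron simply hands the job’s output to the local MTA; only the mail side needs to know where mail for that address ends up:

MAILTO=admin@example.com
# anything this job prints gets handed to the local MTA and mailed
30 3 * * * /usr/local/bin/nightly-backup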

Most MUAs we ship (especially those we install by default) do not deliver to a local MTA anyway but rather include an SMTP client. Usually, they will not pick up mail delivered to local users. This means that unless the user knows about local mail and takes steps to receive local mail addressed to root, such messages are likely to be ignored. Our current setup in many ways hence currently operates as reliable /dev/null for important messages intended for root. Even worse, there is no rotation for this mail spool, meaning that this mailbox if it is unchecked will slowly eat up disk space in /var until disk space is entirely unavailable.

Wait a minute. Were we not referring to MTAs? Now you’re on about MUAs? Most bizarre is this part:
“Most MUAs we ship (especially those we install by default) do not deliver to a local MTA anyway but rather include an SMTP client.”

What the hell is a MUA if it is not an email client? And how ironic that you mention the word ‘local’ (even if an MTA typically acts as both client and server, so there is some redundancy here). One would think you could put that together with the first block of text I quoted. Sadly that seems unlikely. I won’t even bother going through the rest of that and will instead continue to the next part.

Many other distributions do not install an MTA by default anymore (Including Ubuntu since 2007), and so should we. Running systems without MTA is already widely tested.

The various tools (such as cron) which previously required a local MTA for operation have been updated already to deliver their job output to syslog rather than sendmail, which is a good default.

I will delay the part about syslog for a moment as I find that part especially amusing and another issue entirely. So: just because other distributions do not install an MTA by default, you should follow suit? Clearly Fedora is now the follower and not the state of the art it was meant to be. A shame, that. And sure, running systems without an MTA is widely tested – not all machines are MAIL SERVERS and, indeed, not all need to SEND MAIL – but did you know that running systems without an HTTPD is widely tested, too? In fact, you could substitute any number of other services and make the same stupid proclamation. /sarcasm

As for cron jobs, and specifically mail versus syslog, let us move on to the next idiotic change!

Let’s change the default install to no longer install a syslog service by default — let’s remove rsyslog from the “comps” default.

The journal has been around for a few releases and is well tested. F19 already enabled persistent journal logging on disk, thus all logs have been stored twice on disk, once in journal files and once in /var/log/messages. This feature hence recommends no longer installing rsyslog by default, leaving only the journal in place.

A classic example of irony, I must admit. Okay, sure, the journal acts as a log, but really, to call it the syslog is rather weak. Even worse are the supposed benefits to Fedora. Let me unravel those now.

Our default install will need less footprint on disk and at runtime (especially since logs will not be kept around twice anymore). This is significant on systems with limited resources, like the Fedora Cloud image.

Oh really? The journal uses fewer resources than plain logs? I would like that to be a (stupid) joke but sadly it is for real. Here, let me just refute that complete and utter nonsense with proof:

With journal we have this:
# du -sh /var/log
368M /var/log
With journal excluded:
# du -sh /var/log --exclude='journal'
55M /var/log

Fewer resources, what? If you do the math (368 – 55) you will note that the journal uses 313MB MORE than the regular logs. Even more to consider is that the journal is compressed while my log files (including those that have been rotated as normal) are not.
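
Incidentally, if the concern really is disk footprint, the journal can simply be capped (a sketch only; the 50M figure is an arbitrary example of mine, not a recommendation):

# /etc/systemd/journald.conf
[Journal]
SystemMaxUse=50M

# then apply it:
systemctl restart systemd-journald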

Two more things, one leading into the next.

Also, we’ll boot a bit faster, which is always nice.

How lovely. I’ve actually noticed boot taking longer since Fedora 20 was installed, compared to Fedora 19 (I don’t dare update the other – remote – server I have Fedora 19 installed on!). And I especially noticed it taking longer today, because SELinux (which I have disabled on the machine I’m writing from; the reasons will be made clear soon) decided – after an update of libselinux yesterday – to ignore /etc/sysconfig/selinux, and so it was enabled, AND since it had been disabled it needed to relabel the whole file system (except the file systems I have mounted read-only). How brilliant an idea is THAT? Ignore the configuration file so that, on top of everything else, it wastes disk space (there’s more disk usage being eaten up). What is the purpose of the configuration file, then? sestatus showed that SELinux was indeed enabled while the configuration file said disabled – too bad the configuration file was ignored, which means it doesn’t matter what the configuration file contains, which is, to be blunt, completely stupid; the configuration file is there to, here’s a thought, configure something!
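
If you want to see that kind of mismatch for yourself, these are the sort of checks I mean (on Fedora, /etc/sysconfig/selinux is a symlink to /etc/selinux/config):

getenforce                                # the mode actually in effect right now
sestatus                                  # runtime status and the loaded policy
grep ^SELINUX= /etc/sysconfig/selinux     # what the (ignored) configuration file asks for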

As for why I have SELinux disabled on this machine: well, I’ll give you an example of the problem it causes when it doesn’t work right. A denial came up and turned the process into a zombie, as shown here:
7544 0.0 0.0 0 0 ? Z 14:09 0:00 [kcmshell4]
Of course, I could be lying about the cause, so let’s take a look at what sealert reports from /var/log/audit/audit.log:

type=AVC msg=audit(1388095776.191:191): avc: denied { write } for pid=7544 comm="kcmshell4" name="icon-cache.kcache" dev="dm-6" ino=263190 scontext=unconfined_u:unconfined_r:mozilla_plugin_t:s0-s0:c0.c1023 tcontext=system_u:object_r:tmp_t:s0 tclass=file

Everything was working fine until then. But after that point I had Firefox hanging on me (I had to send it a SIGTERM) and then, on restarting it, closing almost immediately (sometimes; other times it hung again) – and it was Firefox that was trying to write to the file in question.
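
For anyone who wants to dig into denials like that one, these are the usual tools (treat the package names as from memory: sealert comes with setroubleshoot-server and audit2allow with the policycoreutils Python bits):

sealert -a /var/log/audit/audit.log    # human-readable analysis of AVC denials
ausearch -m avc -ts recent             # raw AVC records from the audit log
audit2allow -a                         # what a local policy module allowing them would contain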

And lastly, I’ll just quote the descriptions of two SELinux boot parameters from a website, as to why I find it more a problem than a gain (and hint: security is only security if it is not so much of a hassle that people want to override it – for example, being baby-sat when installing software, or having to have 20 passwords and so writing them down in plain sight to compensate; there has to be a balance):

enforcing=0
Setting this parameter will cause the machine to boot in permissive mode. If your machine will not boot in enforcing mode, this can allow you to boot it and figure out what is wrong. Sometimes your file system can get so messed up that this parameter is your only option.


autorelabel=1
This parameter will force the system to relabel. It does the same thing as “touch /.autorelabel; reboot”. Sometimes, if the machine’s labeling is really bad, you will need to boot in permissive mode in order for the autorelabel to succeed. An example of this is switching from strict to targeted policy. In strict policy shared libraries are labeled as shlib_t while ordinary files in /lib directories are labeled lib_t. strict policy only allows confined apps to execute shlib_t. In targeted policy shlib_t and lib_t are aliases. (Having these files labeled differently is of little security importance and leads to labeling problems in my opinion). So every file in /lib directories gets the label lib_t.
When you boot a machine that is labeled for targeted policy with strict policy, the confined apps try to execute lib_t-labeled shared libraries and they are denied. /sbin/init tries this and blows up. So booting in permissive mode allows the system to relabel the shared libraries as shlib_t, and then the next boot can be done in enforcing mode.

So in some cases you cannot even successfully relabel unless the system is in permissive mode. How lovely is that when you remember how important labels are to SELinux.
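
For reference, this is how those two parameters are actually used, plus the running-system equivalent of autorelabel=1 (a sketch; the first two lines are comments describing what to append in the boot loader):

# append to the existing kernel command line for a single boot:
#   ... enforcing=0 autorelabel=1
# or, from a running system, the equivalent of autorelabel=1:
touch /.autorelabel && reboot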

That’s it. I just hope Fedora gets their act together and soon. Rant done.

In Memory of C.S. Lewis: 50 Years Later

Some time earlier this year or perhaps last year, I found out that C.S. Lewis died on the same day that JFK was assassinated. As would be expected, this meant hardly anything was said of Lewis and I find this sad to say the least. Since this is not at all a political site (and I assure you it never ever will be turned into such a cesspool!) or a news site, and since I have written before about fantasy – albeit briefly – I think it is about time C.S. Lewis is remembered. To be fair, the BBC did mention this fact the other day, but of course the real interest to most is that it is 50 years since JFK was assassinated and not 50 years since C.S. Lewis died. Well, for me it is 50 years since Lewis died, too.

I remember when I was in grade school the class had to read The Chronicles of Narnia: The Lion, The Witch and the Wardrobe and how much I enjoyed it. I was probably 5 and I read the entire series (my choice – the class only had to read the first) and I thoroughly enjoyed each and every one of them. It was my first exposure to fantasy and I’ve never looked back. Sure, my favourite author is Jules Verne, who wrote more adventure and science fiction (and combinations of the two), but the truth is fantasy is very much a part of my life. Perhaps because of a specific multiuser dungeon (MUD) that I am a developer and designer for (which by itself is a wonderful thing: I get to use my mind with programming and at the same time use my imagination), it is one of the most important things to me. Many would find MUDs destroyed their life because of addiction (and I admit I was at one time addicted to this MUD, but my pleasure from playing was always overpowered by the prospect of programming for it, which is another ‘addiction’ of mine – a healthy one, though, as it brings me a lot of experience as well as joy), but for me it has been the exact opposite. It was the first real project I was part of (a significant project, anyway) and it was a team project at that. But who cares about that? I’m going off topic. The point is fantasy is something that matters to me a great deal and C.S. Lewis is the author of the very first book I read in that genre.

There isn’t much to be said at this time, I admit, and part of that is because I delayed this until the end of the day (I forgot to write it earlier) and I want to finish up. But one thing I find most interesting is that he was friends with Tolkien, and while I’m not into religion, it is interesting to note that Tolkien was religious and was, I believe, the very reason Lewis opened up to religion. Yet, even though I’m not into [that], I can find a sense of enjoyment in what he wrote. True, Narnia was in the fantasy genre and not a work of theology, but it really shows how variety and differences are not always a bad thing. Indeed, we would be extinct, I am sure of it, if we were all the same (not to mention it would be a boring life, at least to me). But the more we are open to others, the more we can learn and the more we can better ourselves. This very concept is how and why technology evolves, as does anything else that evolves. This very concept is part of evolving, period. Naturally we each go our own path and some will agree and some will disagree. That doesn’t matter to me either, because that is exactly why we’re still here. After all, if everyone agreed with everything I said, this world might not be boring, but that’s because I’m something of a lunatic – not because everyone agreed with me (I would find it pretty awkward if everyone did agree with me; I’m not always right, and I’m willing to accept and admit that). Everyone has their own belief structure and their own goals, and I approve of that just as I approve of Lewis having his own beliefs (or what beliefs he had).

Thanks, C.S. Lewis, for your wonderful series involving the wonderful fantasy world called ‘Narnia’. It provided me much enjoyment and still does when I think of it.

Login-specific ssh commands restrictions

Updated on 2014/02/17 to fix a typo and to add two caveats with this method that I thought of (caveat #5, if it affects you, can be a rather bad problem, but if it does not it won’t matter at all; caveat #6 is something that should always be kept in mind when using smrsh).

One of the projects I work on involves a VirtualBox install of Fedora Core. The reason Fedora Core is used is twofold:

  1. It is on the bleeding edge. This is important because it has the latest and greatest standards; in this specific project that means I have access to the most recent C and C++ standards. The server the VirtualBox install runs on runs a more stable distribution that backports security fixes but otherwise maintains stability, because fewer updates mean less chance of things going wrong.
  2. It is a binary distribution (which is beneficial for production – I used to love Linux From Scratch and Gentoo, but the truth is compiling all the time takes a lot of time and leaves less time for working on the things I want to be working on), and I am mostly biased towards RedHat-based distributions. This, plus reason 1 above, means Fedora Core is the perfect distribution for the virtual machine.

However, from within the VM I need to access a CVS repository on my server (the server hosting the virtual machine is not the same machine). Now, while I could use ssh with passwords, or with pass phrases on the SSH keys, I don’t allow password logins, and in the case of these two users I don’t require pass phrases on the keys. This of course leaves a security problem: anyone who can access the virtual machine as those users can reach the repository (which is not likely, because there are no passwords and only sudo allows it, and that is restricted to the users in question; but not likely does not mean impossible).

However, while no one who has access to that virtual machine is someone I don’t trust (the iptables policy is to reject by default and selectively allow certain IPs access to SSH – which is how firewalls should be built, mind you: if it is not explicitly stated that something is allowed, then deny it flat out), the truth of the matter is I would be naive and foolish not to consider this problem!
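
For the curious, a minimal sketch of that kind of default-deny ruleset looks something like the following (192.0.2.10 is a documentation address standing in for the hosts I actually allow; note that running these lines in this order over a remote ssh session will cut you off before the allow rules are added):

iptables -P INPUT DROP
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -p tcp -s 192.0.2.10 --dport 22 -j ACCEPT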

So there are a few ways of going about this. The first way is one I am not particularly fond of: using the ForceCommand option in /etc/ssh/sshd_config (in a Match block). While it has its uses, why I don’t like it is simple: ForceCommand implies that I know the single command that is necessary, and that means the arguments to the command too. I might be misinterpreting that and I honestly cannot be completely sure (I tested this some time ago, so memory is indeed a problem), but I seem to remember this being the case. That means that I cannot use cvs, because I cannot do ‘cvs update’, ‘cvs commit’, ‘cvs diff’, etc. So what can be done? Well, there is, I’m sure, more than one way (as is often the case in UNIX and its derivatives), but the way I went about it is this (I am not showing how to create the account or do anything but limit the set of commands):

  1. Install sendmail (mostly you are after the binary smrsh – the sendmail restricted shell). Note that if you use, for example, postfix (or any other MTA) then, depending on your distribution (and how you install sendmail), you may need to adjust which MTA is used as the default (see man alternatives)!
  2. Assuming you now have /usr/bin/smrsh, you are ready to restrict the commands available to the user(s) in question. First, set their shell to /usr/bin/smrsh, either by updating /etc/passwd or – the better way – by running (as root) /usr/bin/chsh [user], where obviously [user] is the user in question. It goes without saying, but DO NOT EVEN THINK ABOUT SETTING root TO USE THIS SHELL!
  3. Now you need to decide which commands they are allowed to use. For the purpose of the example it will be cvs (but do see the caveats!). So what you do is something like this (again as root, since you’re writing to a system directory):
    ln -s /usr/bin/cvs /etc/smrsh/cvs

which creates a symbolic link in /etc/smrsh called cvs, pointing to /usr/bin/cvs, which is then a command the user can use. If you wanted them to be able to use another command you would create a link for it just as with cvs.
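
For instance (a made-up second command, purely to illustrate the pattern – pick your own, and do read the caveats below before adding anything):

    ln -s /usr/bin/uptime /etc/smrsh/uptime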

  4. Assuming you already have the user’s ssh key installed, you’ll want to edit the file ~user/.ssh/authorized_keys (if you have not installed their key, do that first). Now, what do you need to add to the file, and where? For whichever key you want to restrict (which probably means either allowing only one key or doing it for all keys, unless you want to leave the hole of them copying a key from their ‘free’ machine to the ‘restricted’ machine), you need to add the following to the beginning of the line:
no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty

That means that after the edit the line will be of the form (assuming that you use ssh-rsa):

no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty ssh-rsa [key] [user@host]

Obviously [key] is the authorized key and [user@host] is the user@host they connect from. Although I cannot be 100% sure (as in I cannot remember, but I am pretty sure this is not in fact a hole in my memory), I believe the user@host part is just a helpful comment – a reminder that this key is for this user and this host. Everything else will need to be there, however.
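
To recap the command-line side of steps 1 through 3 (step 4 is the authorized_keys edit shown just above) – a sketch with Fedora package and path names, ‘luser’ being a made-up account:

yum install sendmail                 # step 1: provides /usr/bin/smrsh
alternatives --config mta            # only if another MTA should remain the default
chsh -s /usr/bin/smrsh luser         # step 2: set the restricted shell
ln -s /usr/bin/cvs /etc/smrsh/cvs    # step 3: allow cvs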

A few caveats need to be considered.

  1. Most importantly: if you don’t trust the user at all then they SHOULD NOT HAVE ACCESS AT ALL! This setup is for when you have someone who needs access to the server but, as a safety catch, you don’t allow most commands because they don’t need them and/or there is a risk that someone else could abuse their access (for example, if they work in an office and step away for just a moment while leaving their session unlocked).
  2. Those with physical access are irrelevant to this setup. Physical access = root access, period. Not debatable. If they want it they can have it.
  3. If they have more than one key installed then you need to make sure either that all of their keys are set up the same way (preventing pty allocation etc.) or that it is impossible for them to change the authorized_keys file (and, to be brutally honest, being sure of the latter is a guarantee of one thing: a false sense of security).
  4. This involves ssh keys. If you allow logins without ssh keys (e.g., with a password alone) then this is not really for you. ssh keys are ideal anyway, and you can have ssh keys with pass phrases too, and even a user password on top (think of granting them sudo access: now it’s an ssh key, a pass phrase for that key [required] and, for sudo, their password [though, just like with su, it can be an issue if you are too lax in what you allow]), which means a user not only needs an authorized key, they also need to know the pass phrase for that key. This is like having a ‘magical key’ that only fits in the lock when you also have a pass phrase you can input into a device that opens the keyhole. It is the safest approach, especially if there is any chance someone could steal the ssh key and you don’t happen to have ssh blocked at the firewall level by default (which, depending on who needs access and from where, is a very good thing to consider doing).
  5. Something I neglected at first (not intentionally – I just did not think to write it until a few days ago, today being 2014/02/17): if you have more than one login using this shell, this may not be the most appropriate method for the problem in question. The reason should be obvious, but if it isn’t, it is simply that the users might not have the same tasks (or, to put it another way, the commands they should each be restricted to might be very different). This is definitely one downside to this method and is something to keep in mind.
  6. Even though I wrote caveat #1, which somewhat tackles this (it all comes down to trust, which is given out too easily far too often), I feel it would be irresponsible not to discuss this. Depending on what you allow (and you should always be careful when adding a command to /etc/smrsh, or in fact whenever you decide on something that changes the way things are dealt with by the system, especially if it involves users), you can run into trouble. Also consider that smrsh does allow && and ||. While I only allow certain IPs and only connections with an ssh key (one that is in the authorized_keys file), I only now thought to test this (which shows all the more how easy it is to forget something, or even be unaware of something, that is actually a problem). The result? Because I disallow PTY allocation (in this setup), which means it is not possible to log in interactively (unless of course through another user, which is another can of worms entirely), it should be OK – but do understand that breaking out of restricted shells, just like breaking out of chroots, is not something to be dismissed with “there is only one way and I know I prevent it” (a dangerous and incorrect assumption). This entire point (caveat #6) would be more of a problem if I did not disable PTY allocation, but it still is something to strongly consider (again, if anyone can log on as this user through another user, then they aren’t exactly as restricted). On that note, you should never add shell scripts, or any program that can be told to run other programs (like procmail with its procmailrc file), to /etc/smrsh; that also includes perl (and the like) and shells (that should be quite obvious, but I am stating it anyway). sendmail’s restricted shell also allows certain built-ins like ‘exec’, so keep that in mind too. Lastly, on the note of programs allowed, be mindful of the fact that this method DOES allow specifying command options and arguments, so even more care should be given when adding programs to /etc/smrsh (imagine you allow a program that can run another program – one passed to it by name – and imagine now that the user uses it to run a program you don’t want to allow; will it work? I’ve not tested it, but it would be valuable to try before deploying any possible loophole). If you do know the command (exactly) in advance then you should take the safer (more restrictive) approach built into sshd – a rough sketch of that follows this list – just as you should always take the safest approach possible. I could go on about other things to consider but I won’t even try, because besides the fact that I cannot possibly list everything (or even think of everything – I am human and not even remotely close to perfect), the bottom line is this: there is always more to consider and it isn’t going to be a simple yes/no decision, so do consider all your options and their implications _before_ making any decision (in my case, with only one user using smrsh and only one command – which only runs scripts if so configured on the server side – I would rather prevent normal logon and limit the commands than not).
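
Since caveat #6 mentions it: if you do know the exact command in advance, the sshd-side restriction would look roughly like the following (a sketch only – ‘luser’ and the forced command are made up, PermitTTY needs a reasonably recent OpenSSH, and sshd needs a reload afterwards). The per-key equivalent is prefixing the key in authorized_keys with command="..." alongside the no-* options shown earlier:

# /etc/ssh/sshd_config
Match User luser
    ForceCommand /usr/local/bin/the-one-command
    PermitTTY no
    AllowTcpForwarding no
    AllowAgentForwarding no
    X11Forwarding no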

With all that done, you should now have a login that can run cvs commands (but see caveat #6) and nothing else (sort of – see below). If they try anything else – even something as simple as the following – they will get an error, as shown (and again, see caveat #6!):

$ ssh luser@server "ls -al"
smrsh: "ls" not available for sendmail programs (stat failed)

$ ssh luser@server
PTY allocation request failed on channel 0
Usage: -smrsh -c command
Connection to server closed.

On caveat #6, observe the following:

$ ssh luser@server "exec echo test && exec source ~/.bashrc"
smrsh: "source" not available for sendmail programs (stat failed)

Yes, that means that ‘echo test’ was in fact executed successfully (the exec before ‘echo test’ was not strictly necessary, in case you wondered) – see below – but source was not (which is good; if you don’t know what source does, do look it up, as it is relevant). Note the following output:

$ ssh luser@server "exec ls || echo test"
smrsh: "ls" not available for sendmail programs (stat failed)
$ ssh luser@server "echo test"
test

And when looking at the above, specifically note the way the commands are invoked (or, in the first case, the commands I tried to invoke).

Rest In Peace Lou Reed

This will be fairly quick (or so I hope) because things have not been that great (“what is sleep ?” is the story) but I must write at least something before I do in fact try to sleep.

I just saw that Lou Reed has passed away. Now, those who know me well enough will know why I feel this is important: my favourite band collaborated with Lou Reed in 2011. I admit fully that I did not buy it (one of the rare pieces of the band’s work I did not buy, although this news may change that) because I did not like Lou’s voice. It was not the fact that it was different that I disliked about the recording. No, that is something I have a huge amount of respect for: Metallica happens to do whatever it is they want and that includes shocking their fans. With shock comes (at times) disappointment. But at the end of the day the reality is they do what they want for themselves (and also for their fans, honestly – though some would disagree, it is irrefutable) and that they are willing to risk upsetting someone for themselves shows not weakness but strength. Yes, strength and courage, and let us all be realistic: we might not like change but without change the human species would be EXTINCT. So, good on Metallica for change. I don’t even have that much courage – I won’t deny that. Would I like to change that? Yes and no, which I think is how a lot of people view courage (or the lack thereof and wanting to change/improve it). Regardless, them being comfortable doing this type of thing brings out their true colours and it is a beautiful rainbow of colours at that. They made mistakes. They are only human. Lars Ulrich pissed off a lot of people with Napster. But you know something? He also realised that perhaps his approach was not the best, and when a store in France (by mistake) released Death Magnetic a day early, not only did the band welcome it, Lars himself welcomed it and noted that things have changed. They have. Anyone who does not believe that is ignoring reality and also (in the case of accepting that Lars made a mistake) being unable to accept that no one is perfect – and what matters is not perfection but always improving yourself and always being the best you can be. He does that and he does it quite well, regardless of how it comes across to some. Don’t like him? That’s fine. No one likes everyone or everything. For instance: I did not think Lou Reed’s collaboration with Metallica was great at all. I didn’t dislike Lou Reed, but I did dislike the way the recording sounded to my ears (his voice sort of drowned out the rest, for me). Still, I know a lot of fellow Clubbers respected his work and I know many more not part of the Metallica Camp respected him, too.

As for Metallica doing things for their fans and it being irrefutable, I have the following words to write: 30 Year Anniversary Celebration. Those who were fortunate enough to be there (and I was only there for one of the four nights) would fully agree, for sure. They truly do care about their fans and their fans care about them (I met people from Mexico, Denmark and Australia, to name three different locations in the world, that fans came from, while I was in San Francisco).

Lou Reed: the legend you leave behind will never be forgotten, and while I maybe did not like your voice (at least on Lulu), I still respect you and your personality, period (and I always will). Rest in Peace, Lou, and thanks for allowing me to learn of you and what you were about (by collaborating with my favourite band).