Security

Things related to computer security in some way or another.

Open Source and Security

One of the things I often write about is how open source is in fact good for security. Some will argue the opposite to the end. But what they are relying on, at best, is security through obscurity. Just because the source code is not readily available does not mean it is not possible to find flaws or even reverse engineer it. It doesn’t mean it cannot be modified, either. I can find – as could anyone else – countless examples of this. I have personally added a feature to a Windows dll file – a rather important one, namely shell32.dll – in the past. I then went on to convince the Windows file integrity check not only to see the modified file as correct, but to the point that if I replaced it with the original, unmodified file, it would restore my modified version. And how did I add a feature without the source code? My point exactly. So to believe you cannot uncover how closed source software works (or, as some will have you believe, that you cannot modify it and/or add features) is a huge mistake. But whatever. This is about open source and security. Before I can get into that, however, I want to bring up something else I write about from time to time.

That thing I write about is this: one should always admit to mistakes. You shouldn’t get angry and you shouldn’t take it as anything but a learning opportunity. Indeed, if you use it to better yourself, better whatever you made the mistake in (let’s say you are working on a project at work and you make a mistake that throws the project off in some way) and therefore better everything and everyone involved (or around), then you have gained and not lost. Sure, you might in some cases actually lose something (time and/or money, for example) but all good comes with bad and the reverse is true too: all bad comes with good. Put another way, I am by no means suggesting open source is perfect.

The only thing that is perfect is imperfection.
– Xexyl

I thought of that the other day. Or maybe better put, I actually got around to putting it in my fortune file (I keep a file of pending ideas for quotes as well as the fortune file itself). The idea is incredibly simple: the only thing that will consistently happen without failure is ‘failure’, time and time again. In other words, there is no perfection. ‘Failure’ because it isn’t a failure if you learn from it; it is instead successfully learning yet another thing and is another opportunity to grow.

All of this together is important, though. I refer to admitting mistakes and how that is only a good thing. I also suggest that open source is by no means perfect, and that therefore criticising it as if it were less secure is flawed. But here’s the thing. I can think of a rather critical open source library, used by a lot of servers, that has had a terrible year. One might think, given this and specifically what it is (which I will get to in a moment), that it is somehow less secure or more problematic. What is this software? Well, let me start by noting the following CVE fixes that were pushed into update repositories yesterday:

- fix CVE-2014-3505 – doublefree in DTLS packet processing
- fix CVE-2014-3506 – avoid memory exhaustion in DTLS
- fix CVE-2014-3507 – avoid memory leak in DTLS
- fix CVE-2014-3508 – fix OID handling to avoid information leak
- fix CVE-2014-3509 – fix race condition when parsing server hello
- fix CVE-2014-3510 – fix DoS in anonymous (EC)DH handling in DTLS
- fix CVE-2014-3511 – disallow protocol downgrade via fragmentation


To those who are not aware, I refer to the same software that had the Heartbleed vulnerability. It is therefore also the same software behind some other CVE fixes not too long after that. And indeed it seems that OpenSSL is having a bad year. Well, whatever – or perhaps better put, whoever – is the source (and yes, I truly do love puns) of the flaws is irrelevant. What is relevant is this: they clearly are having issues. Someone or some people adding the changes are clearly not doing proper sanity checks and in general not auditing well enough. This just happens, however. It is part of life. It is a bad patch (to those that do not like puns, I am sorry, but yes, there goes another one) of time for them. They’ll get over it. Everyone does, even though it is inevitable that it happens again. As I put it: this just happens.
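As an aside: on an RPM-based distribution you can check whether your installed build already carries a given fix by searching the package changelog. A minimal sketch of that idea, assuming an RPM-based system where rpm is available (the CVE identifier is simply the first one from the list above):

    # Minimal sketch: grep the installed openssl package's changelog for a CVE identifier.
    # Assumes an RPM-based system (e.g. CentOS/Fedora) where `rpm -q --changelog` works.
    import subprocess

    def package_mentions_cve(package: str, cve: str) -> bool:
        changelog = subprocess.run(
            ["rpm", "-q", "--changelog", package],
            capture_output=True, text=True, check=True,
        ).stdout
        return cve in changelog

    print(package_mentions_cve("openssl", "CVE-2014-3505"))

It is not proof the fix is correct, of course, only that the vendor says it is in there; but it beats guessing.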

To those who want to be critical, and not constructively critical, I would like to remind you of the following points:

  • Many websites use OpenSSL to encrypt your data and this includes your online purchases and other credit card transactions. Maybe instead of being only negative you should think about your own mistakes more rather than attacking others? I’m not suggesting that you aren’t considering yours, but in case you are not, think about this. If nothing else, consider that this type of criticism will lead to nothing, and since OpenSSL is critical (I am not consciously and deliberately making all these puns, it is just in my nature) that can lead to no good and certainly is of no help.
  • No one is perfect, as I not only suggested above but have also suggested at other times. I’ll bring it up again in the future, too. Thinking yourself infallible is going to lead to more embarrassment than if you understand this and prepare yourself, always being on the lookout for mistakes and reacting appropriately.
  • Most importantly: this has nothing to do with open source versus closed source. Closed source has its issues too, including fewer people who can audit it. The Linux kernel, for example – and I mean the source code thereof – is on many people’s computers, and that is a lot of eyes to catch issues. Issues still have happened and still will happen, however.

With that, I would like to end with one more thought. Apache, the organization that maintains the popular web server as well as other projects, is really to be commended for its post-attack analyses. They have a section on their website detailing attacks: what mistakes they made, what happened, what they did to fix it, as well as what they learned. That is impressive. I don’t know if any closed source corporations do that, but either way, it is something to really think about. It is genuine, it takes real courage to do it, and it benefits everyone. This is one example. There are immature comments there, but that only shows how impressive it is of Apache to do this (they have other incident reports, I seem to recall). The specific incident is reported here.

Steve Gibson: Self-proclaimed Security Expert, King of Charlatans

2014/08/28: Just to clarify a few points. Firstly, I have five usable IP addresses. That is because, as I explain below, some of the IPs are not usable for systems but instead have other functions. Secondly, the ports detected as closed and my firewall returning ICMP errors: it is true that I do return that, but of course there are ports missing there and none of the others are open (that is, none have services bound to them), either. There are times I flat out drop packets to the floor, but if I have the logs I’m not sure which log file it is (due to log rotation), to check for sure. There are indeed some inconsistencies. But the point remains the same: there was absolutely nothing running on any of those ports, just like the ports it detected as ‘stealth’ (which is more like not receiving a response, and what some might call filtered, but in the end it does not mean nothing is there and it does not mean you are somehow immune to attacks). Third, I revised the footnote about FQDNs, IP addresses and what they resolve to. There were a few things that I was not clear about, and in some ways unfair with, too. I was taking issue with one thing in particular and I did a very poor job of it, I might add (something I am highly successful at, I admit).

One might think I have better things to worry about than writing about a known charlatan, but I have always been somewhat bemused by his idea of security (perhaps because he is clueless and his suggestions are unhelpful to those who believe him, which is a risk to everyone). More importantly though, I want to dispel the mythical value of what he likes to call stealth ports (and, even more than that, the notion that anything that is not stealth is somehow a risk). This, however, will not only tackle that, it will be done in what some might consider an immature way. I am admitting that up front. I’m bored and I wanted to show just how useless his scans are by making a mockery of those scans. So while this may seem childish to some, I am merely having fun while writing about ONE of MANY flaws Steve Gibson is LITTERED with (I use the word littered figuratively and literally).

So let’s begin, shall we? I’ll go in the order of pages you go through to have his ShieldsUP! scan begin. First page I get I see the following:

Greetings!

Without your knowledge or explicit permission, the Windows networking technology which connects your computer to the Internet may be offering some or all of your computer’s data to the entire world at this very moment!

Greetings indeed. Firstly, I am very well aware of what my system reveals. I also know that this has nothing to do with permission (anyone who thinks they have a say in what their machine reveals when connecting to the Internet – or a phone to a phone network, or … – is very naive, and anyone suggesting that there IS permission involved is a complete fool). On the other hand, I was not aware I am running Windows. You cannot detect that, yet you scan ports, which would give you one way to determine the OS? Here’s a funny part of that: since I run a passive fingerprinting service (p0f), MY SYSTEM determined your server’s OS (well, technically, kernel, but all things considered that is the most important bit, isn’t it? It is not 100% correct, admittedly, but that goes with fingerprinting in general, and I know that it DOES detect MY system correctly). So not only is MY host revealing information, YOURS is too. Ironic? Absolutely not! Amusing? Yes. And lastly, let’s finish this part up: “all of your computer’s data to the entire world at this very moment!” You know, if it were not for the fact people believe you, that would be hilarious too. Let’s break that into two parts. First, ALL of my computer’s data? Really now? Anyone who can think rationally knows that this is nothing but sensationalism at best, but much more than that: it is you proclaiming to be an expert and then ABUSING that claim to MANIPULATE others into believing you (put another way: it is by no means revealing ALL data, not in the logical – data – sense or the physical – hardware – sense). And the entire world? So you’re telling me that every single host on the Internet is analyzing my host at this very moment? If that were the case, my system’s resources would be too consumed to even connect to your website. Okay, context would suggest that you mean COULD, but frankly I already covered that this is not the case (I challenge you to name the directory that is most often my current working directory, let alone know that said directory even exists on my system).

If you are using a personal firewall product which LOGS contacts by other systems, you should expect to see entries from this site’s probing IP addresses: 4.79.142.192 -thru- 4.79.142.207. Since we own this IP range, these packets will …

Well, technically, based on that range, your block is 4.79.142.192/28. And technically your block includes (a) the network address, (b) the default gateway and (c) the broadcast address. That means the IPs that would actually be probing are in a range more like 4.79.142.193 – 4.79.142.206. And people really trust you? You don’t even know basic networking and they trust you with security?
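If you want to check the arithmetic yourself, Python’s ipaddress module makes it trivial. A small sketch, using the block quoted above:

    import ipaddress

    block = ipaddress.ip_network("4.79.142.192/28")
    print(block.network_address)    # 4.79.142.192 – the network address, not a usable host
    print(block.broadcast_address)  # 4.79.142.207 – the broadcast address, not a usable host
    hosts = list(block.hosts())     # everything in between: 4.79.142.193 .. 4.79.142.206
    print(len(hosts), hosts[0], hosts[-1])  # 14 usable addresses at most (fewer once a gateway takes one)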

Your Internet connection’s IP address is uniquely associated with the following “machine name”:

wolfenstein.xexyl.net

Technically that is the FQDN (fully-qualified domain name[1]), not “machine name” as you put it. You continue in this paragraph:

The string of text above is known as your Internet connection’s “reverse DNS.” The end of the string is probably a domain name related to your ISP. This will be common to all customers of this ISP. But the beginning of the string uniquely identifies your Internet connection. The question is: Is the beginning of the string an “account ID” that is uniquely and permanently tied to you, or is it merely related to your current public IP address and thus subject to change?

Again, your terminology is rather mixed up. While it is true that you did a reverse lookup on my IP, it isn’t exactly “reverse DNS”. But since you are trying to simplify (read: dumb it down to your level) it for others, and since I know I can be seriously pedantic, I’ll let it slide. But it has nothing to do with my Internet connection itself (I have exactly one). It has to do with my IP address, of which I have many (many if you consider my IPv6 block, but only 5 if you consider IPv4). You don’t exactly have the same FQDN on more than one machine any more than you have the same IP on more than one network interface (even on the same system). So no, it is NOT my Internet connection but THE specific host that went to your website, and in particular the IP assigned to that host, that I connected from. And the “string” has nothing to do with an “account ID” either. But I’ll get back to that in a minute.

The concern is that any web site can easily retrieve this unique “machine name” (just as we have) whenever you visit. It may be used to uniquely identify you on the Internet. In that way it’s like a “supercookie” over which you have no control. You can not disable, delete, or change it. Due to the rapid erosion of online privacy, and the diminishing respect for the sanctity of the user, we wanted to make you aware of this possibility. Note also that reverse DNS may disclose your geographic location.

I can actually request a different block from my ISP and I can also change the IP on my network card. Then the only thing left is my old IP and its FQDN (which is no longer in use, and I can change the FQDN as well since I have reverse delegation – yet according to you I cannot do any of that). I love your ridiculous terminology though. Supercookie? Whatever. As for it giving away my geographic location, let me make something very clear: the FQDN is irrelevant without the IP address. While it is true that the name will (sometimes) refer to a city, it isn’t necessarily the same city or even county as the person USING it. The IP address is related to the network; the hostname is a CONVENIENCE for humans. You know, it used to be that host -> IP mapping was done without DNS (since it didn’t exist yet), through a file that maintained the mapping (the hosts file, which still exists albeit is used very little). Names exist for convenience and in general because no one would be able to know the IP of every domain name. Lastly, not all IPs resolve to a name.
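That last point is easy to demonstrate: what his page calls the “machine name” is nothing more than the answer (if any) to a PTR lookup. A minimal sketch with Python’s socket module; the address used here is from a documentation range, so it is purely illustrative:

    import socket

    ip = "192.0.2.25"  # documentation address, used purely for illustration
    try:
        name, aliases, addresses = socket.gethostbyaddr(ip)  # the PTR lookup
        print("resolves to:", name)
    except socket.herror:
        print("no PTR record – this IP simply does not resolve to a name")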

If the machine name shown above is only a version of the IP address, then there is less cause for concern because the name will change as, when, and if your Internet IP changes. But if the machine name is a fixed account ID assigned by your ISP, as is often the case, then it will follow you and not change when your IP address does change. It can be used to persistently identify you as long as you use this ISP.

The occasions it resembles the IP are when the ISP has authority over the in-addr.arpa DNS zone of (your) IP and therefore has its own ‘default’ PTR record (but there isn’t always a PTR record, which your suggestion does not account for; indeed, I could have removed the PTR record for my IP and then you’d have seen no hostname). But this does not indicate whether it is static or not. Indeed, even dynamic IPs typically (though not always) have a PTR record. Again, the name does not necessarily imply static: it is the IP that matters. And welcome to yesteryear… these days you typically pay extra for static IPs, yet you suggest it is quite often the case that your “machine name is a fixed account ID” (which is itself a complete misuse of terminology). On the other hand, you’re right: it won’t change when your IP address changes, because the IP is what is relevant, not the hostname! And if your IP changes then it isn’t so persistent in identifying you, is it? It might identify your location, but as multiple IPs (dynamic) and not a single IP.

There is no standard governing the format of these machine names, so this is not something we can automatically determine for you. If several of the numbers from your current IP address (23.120.238.106) appear in the machine name, then it is likely that the name is only related to the IP address and not to you.

Except for ISP authentication logs and timestamps… And I repeat the above: the name can include exactly what you suggest and still be static!

But you may wish to make a note of the machine name shown above and check back from time to time to see whether the name follows any changes to your IP address, or whether it, instead, follows you.

Thanks for the suggestion but I think I’m fine since I’m the one that named it.

Now, let’s get to the last bit of the ShieldsUP! nonsense.

GRC Port Authority Report created on UTC: 2014-07-16 at 13:20:16

Results from scan of ports: 0-1055

    0 Ports Open
   72 Ports Closed
  984 Ports Stealth
---------------------
 1056 Ports Tested

NO PORTS were found to be OPEN.

Ports found to be CLOSED were: 0, 1, 2, 3, 4, 5, 6, 36, 37,
64, 66, 96, 97, 128, 159, 160,
189, 190, 219, 220, 249, 250,
279, 280, 306, 311, 340, 341,
369, 371, 399, 400, 429, 430,
460, 461, 490, 491, 520, 521,
550, 551, 581, 582, 608, 612,
641, 642, 672, 673, 734, 735,
765, 766, 795, 796, 825, 826,
855, 856, 884, 885, 915, 916,
945, 946, 975, 976, 1005, 1006,
1035, 1036

Other than what is listed above, all ports are STEALTH.

TruStealth: FAILED – NOT all tested ports were STEALTH,
                   – NO unsolicited packets were received,
                   – A PING REPLY (ICMP Echo) WAS RECEIVED.

The ports you detected as “CLOSED” and not “STEALTH” were in fact returning an ICMP host-unreachable. You fail to take into account the golden rule of firewalls: that which is not explicitly permitted is forbidden. That means that even though I have no service running on any of those ports, I still reject the packets sent to them. Incidentally, some ports you declared as “STEALTH” did exactly the same (because I only allow those ports from a specific source IP block). The only time I drop packets to the floor is when state checks fail (e.g., a TCP SYN flag is set but it is already a known connection). I could prove that too, because I actually had you do the scan a second time, but this time I added specific iptables rules for your IP block, which changed the results quite a bit, and indeed I used the same ICMP error code.
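For what it’s worth, the distinction a simple TCP connect probe sees boils down to three outcomes. This is a rough sketch of that classification (not how ShieldsUP! itself is implemented, and the address is again a documentation one):

    import errno
    import socket

    def probe(host: str, port: int, timeout: float = 3.0) -> str:
        """Rough classification the way a simple TCP connect scan sees it."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return "open"                      # something accepted the connection
        except ConnectionRefusedError:
            return "closed"                        # the host answered with a TCP RST
        except socket.timeout:
            return "filtered ('stealth')"          # no answer at all: packets silently dropped
        except OSError as e:
            if e.errno in (errno.EHOSTUNREACH, errno.ENETUNREACH):
                return "closed (ICMP unreachable)" # a firewall REJECTed with an ICMP error
            raise

    print(probe("192.0.2.10", 22))  # documentation address, purely illustrative

Note that the last two cases tell an attacker essentially the same thing: no service is listening. “Stealth” adds nothing beyond a slower scan.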

As for ping: administrators who block ping outright need to be hit over the head with a pot of sense. Rate limit by all means, that is more than understandable, but blocking ICMP echo requests (and indeed replies) only makes troubleshooting network connectivity issues more of a hassle and at the same time does absolutely nothing for security (fragmented packets and anything else that can be abused are obviously dealt with differently, because they are different!). Indeed, if someone is going to attack they don’t really care whether you respond to ICMP requests. If there is a vulnerability they will go after that, and frankly hiding behind your “stealth” ports is only a false sense of security and/or security through obscurity (which is a false sense of security and even more harmful at the same time). Here are two examples. First, if someone sends you a link (for example in email) and it seems legitimate to you and you click on it (there’s a lot of this in recent years and it is ever increasing), the fact that you have no services running does not mean you are somehow immune to XSS, phishing attacks, malware, or anything else. Security is, always has been and always will be a many-layered thing. Secondly: social engineering.

And with that, I want to finish with the following:

If anyone wants the REAL story about Steve Gibson, you need only Google for “steve gibson charlatan” and see the many results. I can vouch for some of them but there really is no need for it – the evidence is so overwhelming that it doesn’t need any more validation. Here’s a good one though (which also shows his ignorance as well as how credible his proclamations are): http://www.theregister.co.uk/2002/02/25/steve_gibson_invents_broken_syncookies/ (actually it is a really good one). If you want a list of items, check the search result that refers to Attrition.org and you will see just how credible he is NOT. A good example is the one that links to a page about Gibson and XSS flaws, which itself links to http://seclists.org/vuln-dev/2002/May/25 – itself offering a great amount of amusement (note that some of the links are no longer valid as it was years ago, but that is the one at seclists.org and it is not the only incident).

[1] Technically, what his host is doing is taking the IP address and resolving it to a name (which is querying the PTR record, as I refer to above). Since I have reverse delegation (and so have authority) and have my own domain (which I also have authority over), I have my IPs resolve to fully-qualified domain names as such. FQDN is perhaps not the best wording (nor especially fair) on my part, in that I was abusing the fact that he is expecting the normal PTR records an ISP has rather than a server with a proper A record and a matching PTR record. What he refers to is as above: resolving the IP address to a name, which the IP does not have to have. Equally, even if a domain exists by name, it does not have to resolve to an IP (“it is only registered”). He just gave it his own name for his own ego (or whatever else).

SELinux, Security and Irony Involved

I’ve thought of this in the past, and I’ve been trying to do more things (than usual) to keep myself busy (there are too few things that I spend time doing, more often than not), so I thought I would finally get to this. So, when it comes to SELinux, there are two schools of thought:

  1. Enable and learn it.
  2. Disable it; it isn’t worth the trouble.

There is also a combination of the two: put it in permissive mode so you can at least see alerts (much as logs might be used). But for the purpose of this post, I’m only going to include the mainstream thoughts (so 1 and 2, above). Before that though, I want to point something out. It is indeed true I put this in the security category, but there is a bit more to it than that, as those who read it will find out at the end (anyone who knows me – and yes this is a hint – will know I am referring to irony, as the title suggests). I am not going to give a suggestion on the SELinux debate (and that is what it is – a debate). I don’t enjoy, and there is no use in, having endless debates on what is good, what is bad, what should be done, debating whether or not to do something, or even debating about debating (and yes, the latter two DO happen – I’ve been involved in a project that had this and I stayed out of it and did what I knew to be best for the project overall). That is all a waste of time and I’ll leave it to those who enjoy that kind of thing. Because indeed the two schools of thought do involve quite a bit of emotion (something I try to avoid, even) – they are so passionate about it, so involved in it, that it really doesn’t matter what is suggested by the other side. It is taking “we all see and hear what we want to see and hear” to the extreme. This is all the more likely when you have two sides. It doesn’t matter what they are debating, it doesn’t matter how knowledgeable they are or are not, and nothing else matters either: they believe their side’s purpose so strongly, so passionately, that much of it is mindless and futile (neither side sees anything but their own side’s view). Now then…

Let’s start with the first. Yes, security is indeed – as I’ve written about before – layered; multiple layers it always has been and always should be. And indeed there are things SELinux can protect against. On the other hand, security has to have a balance or else there is even less security (password aging + many different accounts and therefore different passwords + password requirements/restrictions = a recipe for disaster). In fact, it becomes a false sense of security and that is a very bad thing. So let’s get to point two. Yes, that’s all I’m going to write on the first point. As I already wrote, there isn’t much to it notwithstanding endless debates: it has pros and it has cons, and that’s all there is to it.
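For anyone in the first camp (or in the middle ground of permissive mode), checking which mode a system is actually in is simple enough. A minimal sketch, assuming a Linux host with the usual selinuxfs mount point and config path (adjust if yours differ):

    # Minimal sketch: report the current SELinux mode on a Linux host.
    # Assumes the standard selinuxfs mount point; older systems may use /selinux instead.
    from pathlib import Path

    def selinux_mode() -> str:
        enforce = Path("/sys/fs/selinux/enforce")
        if not enforce.exists():
            # Either SELinux is disabled or the kernel lacks support entirely.
            return "disabled (or not supported)"
        return "enforcing" if enforce.read_text().strip() == "1" else "permissive"

    print(selinux_mode())
    # /etc/selinux/config (SELINUX=enforcing|permissive|disabled) controls the mode at boot.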

Then there’s the school of thought that SELinux is not worth the time and so should just be disabled. I know what they mean, and not only with the labelling of the file systems (I wrote before about how SELinux itself has issues at times all because of labels, and so you have to have it fix itself). That labelling issue is bad enough by itself, but then consider how it affects maintenance (even worse for administrators who maintain many systems) – new directories in an Apache configuration needing the right contexts, as one example. Yes, part of this is laziness, but again there’s a balance. While this machine (the one I write from, not xexyl.net) does not use it, I still practice safe computing, I only deal with software in the main repositories, and in general I follow exactly what I preach: multiple layers of security. And finally, to end this post, we get to some irony. I know those who know me well enough will also know very well that I absolutely love irony, sarcasm, satire, puns and in general wit. So here it is – and as a warning – this is very potent irony, so for those who don’t know what I’m about to write, prepare yourselves:

You know all that talk about the NSA and its spying (nothing alarming, mind you, nor anything new… they’ve been this way a long, long time, and which country doesn’t have a spy network anyway? Be honest!), its supposedly placing backdoors in software and even deliberately weakening portions of encryption schemes? Yeah, that agency there’s been fuss about ever since Snowden started releasing the information last year. Guess who is involved with SELinux? Exactly, folks: the NSA is very much part of SELinux. In fact, the original credit for SELinux belongs to the NSA. I’m not at all suggesting they tampered with any of it, mind you. I don’t know, and as I already pointed out I don’t care to debate or throw around conspiracy theories. It is all a waste of time and I’m not about to libel the NSA (or anyone, any company or anything) about anything (not only is it illegal, it is pathetic and simply unethical, none of which is appealing to me), directly or indirectly. All I’m doing is pointing out the irony to those who forget (or never knew of) SELinux in its infancy, and linking it with the heated discussion about the NSA of today. So which is it? Should you use SELinux or not? That’s for each administrator to decide, but I think the real thing I don’t understand (but do ponder) is: where do people come up with the energy, motivation and time to bicker about the most unimportant, futile things? And more than that, why do they bother? I guess we’re all guilty to some extent but some take it too far too often.

As a quick clarification, primarily for those who might misunderstand what I find ironic in the situation, the irony isn’t that the NSA is part of (or was if not still) SELinux. It isn’t anything like that. What IS ironic is that many are beyond surprised (and I still don’t know why they are but then again I know of NSA’s history) at the revelations (about the NSA) and/or fearful of their actions and I would assume that some of those people are those who encourage use of SELinux. Whether that is the case and whether it did or did not change their views, I obviously cannot know. So put simply, the irony is that many have faith in SELinux which the NSA was (or is) an essential part of and now there is much furor about the NSA after the revelations (“revelations” is how I would put it, at least for a decent amount of it).

Fully-automated ‘Security’ is Dangerous

Thought of a better name on 2014/06/11. Still leaving the aside, below, as much of it is still relevant.

(As a brief aside, before I get to the point: This could probably be better named. By a fair bit, even. The main idea is security is a many layered concept and it involves computers – and its software – as well as people, and not either or and in fact it might involve multiples of each kind. Indeed, humans are the weakest link in the chain but as an interesting paradox, humans are still a very necessary part of the chain. Also, while it may seem I’m being critical in much of this, I am actually leading to much less criticism, giving the said organisation the benefit of the doubt as well as getting to the entire point and even wishing the entire event success!)

In our ever ‘connected’ world it appears – at least to me – that there is so much more discussion about automatically solving problems without any human interaction (I’m not referring to things like calculating a new value for Pi, puzzles, mazes or anything like that; that IS interesting and that is indeed useful, including to security, even if indirectly, and yes, this is about security, but security on a whole scale). I find this ironic and in a potentially dangerous way. Why are we having everything connected if we are to detach ourselves from the devices (or, in some cases, be so attached to the device that we are detached from the entire world)? (Besides brief examples, I’ll ignore the part where so many are so attached to their bloody phone – which is, as noted, the same thing as being detached from the world despite the idea they are ever more attached or, perhaps better stated, ‘connected’ – that they walk into walls, into people – like someone did to me the other day, same cause – and even walk off a pier in Australia while checking Facebook! Why would I ignore that? Because I find it so pathetic yet so funny that I hope more people do stupid things like that, which I absolutely will laugh at, as should be done, as long as they are not risking others’ lives [including rescuers’ lives, mind you; that's the only potential problem with the last example: it could have been worse and due to some klutz the emergency crew could be at risk instead of taking care of someone else]. After all, those who are going to do it don’t get the problem, so I may as well get a laugh at their idiocy, just like everyone should do. Laughing is healthy. Besides those points, it is irrelevant to this post.) Of course, the idea of having everything connected also brings the thought of automation. Well, that’s a problem for many things, including security.

I just read that DARPA (the agency that created ARPANET – you know, the predecessor to the Internet – and ARPA is still referred to in DNS, for example in in-addr.arpa) is running a competition, as follows:

“Over the next two years, innovators worldwide are invited to answer the call of Cyber Grand Challenge. In 2016, DARPA will hold the world’s first all-computer Capture the Flag tournament live on stage co-located with the DEF CON Conference in Las Vegas where automated systems may take the first steps towards a defensible, connected future.”

Now, first, a disclaimer of sorts. Having had (and still having) friends who have been to (and continue to go to) DefCon (and I’m referring to the earlier years of DefCon as well) – and not only did they go there, they bugged me relentlessly (you know who you are!) for years to go there too (which I always refused, much to their dismay, but I refused for multiple reasons, including the complete truth: the smoke there would kill me if nothing else did first) – I know very well that there are highly skilled individuals there. I have even indirectly written about groups that go there. Yes, they’re highly capable, and as the article I read about this competition points out, DefCon already has a capture the flag style tournament, and has for many years (and they really are skilled; I would suggest that, except for charlatans like Carolyn P. Meinel, many are much more skilled than I am, and I’m fine with that. It only makes sense anyway: they have a much less hectic life). Of course, the difference here is that it is fully automated, without any human intervention. And that is a potentially dangerous thing. I would like to believe they (DARPA) would know better, seeing as how the Internet (and the predecessor thereof) was never designed with security in mind (and there is never enough foresight) – security as in computer security, anyway. The original reason for it was a network of networks capable of withstanding a nuclear attack. Yes, folks, the Cold War brought one good thing to the world: the Internet. Imagine that: paranoia led us to the wonderful Internet. Perhaps, though, it wasn’t paranoia. It is quite hard to know, as after all a certain United States of America President considered the Soviet Union “the Evil Empire” and as far as I know wanted to delve further on that theme, which is not only sheer idiocy, it is complete lunacy (actually it is much worse than that)! To liken a country to that boggles the mind. Regardless, that’s just why I view it as ironic (and would like to think they would know better). Nevertheless, I know that they (DARPA, that is) mean well (or so I hope). But there is still a dangerous thing here.

Here is the danger: by allowing a computer to operate on its own and assuming it will always work, you are essentially taking a great risk, and no one will forget what assuming does, either. I think this is actually a bit understated, because you’re relying on trust. And as anyone who has been into security for 15-20 years (or more) knows, trust is given far too easily. It is a major cause of security mishaps. People are too trusting. I know I’ve written about this before, but I’ll just mention the names of the utilities (rsh, rcp, …) that were at one point the norm and briefly explain the problem: the configuration option – that was often used! – which allowed logging in to a certain host WITHOUT a password from ANY IP, as long as you logged in as a CERTAIN login! And people have refuted this with the logic of “they don’t have a login with that name” (and note: there is a system-wide configuration file for this as well as a per-user one, which makes it even more of a nightmare). Well, if it is their own system, or if they compromised it, guess what they can do? Exactly – create a login with that name. Now they’re more or less a local user, which is so much closer to rooting (or, put another way, gaining complete control of) the system (which potentially allows further systems to be compromised).
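To make the shape of that trust decision concrete, here is a toy sketch. It is not the actual rsh/rlogin source, just an illustration of how a wildcard entry in such a trust file (‘+’ meaning “any host”) throws the door wide open; the entries are hypothetical:

    # Toy model of rhosts-style trust: an entry is (host, user) where '+' matches anything.
    # This is NOT the real implementation, only the shape of the trust decision.
    TRUSTED_ENTRIES = [
        ("trusted.example.com", "backup"),  # a hypothetical, relatively sane entry
        ("+", "operator"),                  # the dangerous one: ANY host, as long as the
    ]                                       # remote side *claims* the login name "operator"

    def rhosts_allows(remote_host: str, remote_user: str) -> bool:
        for host, user in TRUSTED_ENTRIES:
            if host in ("+", remote_host) and user in ("+", remote_user):
                return True
        return False

    # An attacker who controls any machine simply creates a local "operator" account:
    print(rhosts_allows("attacker-laptop.example.net", "operator"))  # True – no password asked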

So why is DARPA even considering fully automated intervention/protection? While I would like to claim that I am the first one to notice this (much less put it in similar words), I am not, but it is true: the only thing we learn from history is that we don’t learn a damned thing from history (or we don’t pay attention, which is even worse because it is flat-out stupidity). The very fact that systems were compromised by something that was ignored, not thought of beforehand (or thought of in a certain way – yes, different angles provide different insights), or by new innovations coming along to trample over what was once considered safe, is all that should be needed in order to understand this. But if not, perhaps this question will resonate better: does lacking encryption mean anything to you, your company, or anyone else? For instance, telnet, a service that allows authentication and isn’t encrypted (logging in, as in sending the login and password in the clear over the wire). If THAT was not foreseen, you can be sure that there will ALWAYS be something that cannot be predicted. Something I have – as I am sure everyone has – experienced is that things will go wrong when you least expect them to. Not only that, but much as I wrote not too long ago, it is as if a curse has been cast on you and things start to come crashing down in a long sequence of horrible luck.

Make no mistake: I expect nothing but greatness from the folks at DefCon. However, there is never a fool-proof, 100% secure solution (and they know that!). The best you can expect is to always be on the lookout, always making sure things are OK, keeping up to date on new techniques, new vulnerabilities, and so on – so software in addition to humans! This is exactly why you cannot teach security; you can only learn it – by applying knowledge, thinking power and something else that schools cannot give you: real-life experience. No matter how good someone is, there’s going to be someone who can get the better of that person. I’m no different. Security websites have been compromised before; they will be in the future. Just like pretty much every other kind of site (example: with one of my other sites, before this website and when I wasn’t hosting on my own, the host made a terrible blunder, one that compromised their entire network and put them out of business. But guess what? Indeed, the websites they hosted were defaced, including that other site of mine. And you know what? That’s not exactly uncommon – defaced websites occur in mass batches simply because the webhost had a server – or servers – compromised*, and, well, the defacer had to make their point and make a name). So while I know DefCon will deliver, I also know it to be a mistake for DARPA to think there will at some point be no need for human intervention (and I truly hope they actually mean it to be in addition to humans; I did not, after all, read the entire statement, but it makes for a topic to write about and that’s all that matters). Well, there is one time this will happen: when either the Sun is dead (and so life here is dead) or humans obliterate each other, directly or indirectly. But computers will hardly care at that point. The best-case scenario is that they can intervene in certain (and indeed perhaps many) attacks. But there will never be a 100% accurate way to do this. If there were, heuristics and the many other tricks that anti-virus products (and malware itself) deploy would be much more successful and have no need for updates. But has this happened? No. That’s why it is a constant battle between malware writers and anti-malware writers: new techniques, new people in the field, things changing (or, more generally, technology evolving like it always will) and in general a volatile environment will always keep things interesting. Lastly, there is one other thing: humans are the weakest link in the security chain. That is a critical thing to remember.

*I have had my server scanned from different sites whose operators didn’t know they had a customer (or in some cases a system – owned by a student, perhaps – on a school campus) whose account had been compromised. Indeed, I tipped the webhosts (and schools) off that they had a rogue scanner trying to find vulnerable systems (all of which my server filtered, but nothing is fool-proof, remember that). They were thankful, for the most part, that I informed them. But here’s the thing: they’re human, and even though they are companies that should be looking out for that kind of thing, they aren’t perfect (because they are human). In other words: no site is 100% immune 100% of the time.

Good luck, however, to those in the competition. I would specifically mention particular (potential) participants (like the people who bugged me for years to go there!) but I’d rather not name them here, for specific (quite a few) reasons. Regardless, I do wish them all good luck (those I know and those I do not know). It WILL be interesting. But one can hope (and I want to believe they are keeping this in mind) that DARPA knows this is only ONE of the MANY LAYERS that make up security (security is, always has been and always will be, made up of multiple layers).

Windows XP End of Life: Is Microsoft Irresponsible?

With my being very critical of Microsoft one might make the (usually accurate) assumption that I’m about to blast Microsoft. Whether any one is expecting me to or not, I don’t know but I will make something very clear: I fully take responsibility for my actions and I fully accept my mistakes. I further make the best of situations. As such, I would like to hope everyone else does too. I know that is unrealistic at best; indeed, too many people are too afraid to admit when they don’t know something (and therefore irresponsibly make up pathetic and incorrect responses) or when they make a mistake. But the fact of the matter is this: humans aren’t perfect. Learn from your mistakes and better yourself in the process.

No, I am not going to blast Microsoft. Microsoft _was_ responsible. They announced the end of life _SEVEN YEARS AGO_! I am instead blasting those who are complaining (and complaining is putting it VERY nicely – it is more like a whiny little brat throwing a temper tantrum when they don’t get their own way on every single thing, despite the fact they were told in advance this would happen) about how they now have to quickly upgrade or not get updates, security updates included. For instance, let’s take two different groups. Let’s start with the manager Rosemayre Barry of the London-based business The Pet Chip Company, who stated the following (to the BBC, or at least it was reported on the BBC):

“XP has been excellent,” she says. “I’m very put out. When you purchase a product you don’t expect it to be discontinued, especially when it’s one of [Microsoft's] most used products.”

Sorry to burst your bubble, Rosemayre, but ALL software will eventually be discontinued (just like how smoke detectors, carbon monoxide detectors and the like have to be replaced over time and/or are improved over time, and that is not even considering maintenance like battery replacement). You can complain all you want, but this is not only the correct thing technically, it is also economically unfeasible to continue with a product as old as Windows XP. I don’t care how used it is or isn’t (I hardly expect it to be the most used product of Microsoft’s, however; I would argue its office suite is more used, as it works on multiple versions of Windows and corporations rely on it a lot). I also know for a fact that corporations tend to have contracts with computer manufacturers under which they LEASE computers for a few years at a time, and when the time comes for the next lease they get the more recent software, and this includes the operating system. Why would they do that? Well, again, it is economically better for the company, that’s why. And here’s some food for thought: Windows XP was released in 2001 and according to my trusty calculator (i.e., my brain) that means it is almost a 13 year old product (as it was August and we’re only in April). Well, check this. Community ENTerprise OS (CentOS), a distribution of Linux which is largely useful for servers, has a product lifetime, as far as I remember, of only 10 years. And you know something else? CentOS is very stable because it doesn’t have many updates, or in other words is not on the bleeding edge. When a security flaw is fixed, the fixes are backported into the older libraries and/or programs. Indeed, the current GCC version is 4.8.2 and CentOS’s current version (unless you count my backport of 4.7.2, which you can find more info about at The Xexyl RPM Repository – possibly others exist somewhere else, but for the time being the packages I maintain I have not updated to the 4.8.x tree) is 4.4.7, which was released on 2012-03-13, or in other words the 13th of March in 2012. Yes, that is over _two years ago_. It means you don’t get the newer standards (even though the most recent C and C++ standards were ratified in 2011, that is not to say that anything past the ratification date is somehow magically going to have it all; in fact, some features are still not complete in the most recent versions), but it also means your system remains stable, and that is what a server needs to be: what good is a server if the service it offers is unstable (and I’m not referring to Internet connection stability! – that is another issue entirely and has nothing to do with the operating system) and hard to use? Very little indeed. And realistically, 10 years is very reasonable if not more than very reasonable. Over the span of 10 years a lot changes, including a lot of core changes (and let’s not forget standards changing), which means maintaining it for 10 years is quite significant and I cannot give anything but my highest praise to the team at CentOS – an open source and FREE operating system. To be fair to this manager, they at least DID upgrade to a more recent Windows, but the very complaint is beyond pathetic, irresponsible and outright naive and ignorant at best and stupid at worst.
It is also unrealistic and unfair to Microsoft (and this coming from someone who is quite critical of Microsoft in general and someone who has railed on them more than once – in security and otherwise; in quotes about their capabilities and in articles alike – and quite harshly too; examples, one of which even includes a satirical image I made that is directed at Windows in general: Microsoft’s Irresponsible Failures and Why Microsoft Fails at the Global Spam Issue).

Next, let’s look at what the UK government has done: they are paying Microsoft £5.5m to extend updates of Microsoft Windows XP, Office 2003 and Exchange 2003 for ONE year. That is absolutely astonishing and, I would think – to UK tax payers – atrocious. What the hell are they thinking? If SEVEN years of warning was not enough time, what makes ONE extra year worth THAT MUCH? Furthermore, and most importantly, if they could not UPGRADE in SEVEN YEARS, what makes any rational being expect them to UPGRADE WITHIN A YEAR? They claim they’re saving money. Yeah, right. Not only are they paying money to get updates for another year, they will STILL have to upgrade in due time if they are to keep getting updates. Think of it this way. When a major part of your car dies, you might consider fixing it. It will likely be pricey. Suppose, however, that shortly thereafter (let’s say within a year or two) another major part of your car dies – and the car has been used for quite some years and is certainly out of warranty. What is the most logical and financially best assumption and choice? Is it to assume that this will be the last part to die – surely nothing else can go wrong! – and to pay for it and then wait until the third part dies (which it almost certainly will; it is mechanical and mechanical things die!)? Or is it maybe better to cut your losses and get a new car? I think we all know the answer. “We”, of course, does not include the UK government.

The bottom line here is quite simple though: no, Microsoft is not being irresponsible. They are not being unreasonable either. They gave SEVEN YEARS of notice. The only irresponsible and unreasonable people – and companies and/or government[s] – are those who STILL use Windows XP, and especially those that are now forced to upgrade and at the same time whine worse than a spoiled brat who is used to getting his way but, the one time he doesn’t, throws a tantrum. Lastly, I want to point out the very dangerous fallacy these people are actually aligning themselves with. To those of us who remember when the TELNET and RSH protocols were prevalent, there came a time when enough was enough and standards had to change (e.g., secure shell aka ssh). Those who had any amount of logic in them UPGRADED. Many (though not as many as should have) saw the many problems with those protocols, problems that persisted for far too long, among them the following (and note that these are on Unix systems, and yes, that means NO system is immune to security problems, be it Windows, Mac, Unix or anything else. Incidentally, Unix systems are what are typically used for servers, which means customers’ data in databases running on those servers, especially then, as Windows NT was still in its infancy by the time most – but probably not all – changed over):

  1. The fact that a common configuration would allow “you” to remotely log in to a machine as a user from ANY HOST WITH NO PASSWORD? And of course it was PERFECTLY SAFE because, after all, they won’t have a user with the same name, right? Well, did it ever occur to you that they could CREATE a user with that name? And ever hear of grabbing the password file remotely to find user names? Or an unscrupulous employee who could do the same? An employee who was fired and wants revenge (and happens to have user names, or maybe even stole data before they were completely locked out after being fired? Maybe they even left a backdoor in!)? For those who are slow, that is sarcasm; it was NEVER safe and it WAS ALWAYS naive at best (this same problem is the trust relationship, and that is one of the biggest problems with security – too much trust is given far too easily). And indeed, Unix – just like the predecessor to the Internet – was NEVER designed with security in mind. That is why new standards are a good thing: to address problems and to extend, deprecate or obsolete standards (like, I don’t know, IPv6 as opposed to IPv4, anyone?).
  2. No encryption means sniffing could show the user and password (as well as other information in the traffic) to the sniffing party. Assuming that there is no one to sniff your traffic is security through obscurity at best, and that is arguably worse than no security (it is a false sense of security, and when taken to the extreme some will refuse to believe it is a problem and are therefore blinded to the fact that they already are, or could at any moment be, compromised).

Consider those two examples for a moment. Finally, take the logic of “most people use it” or “it is so convenient and we shouldn’t have to upgrade” and where do you end up? Exactly where not upgrading from Windows XP leaves you – or otherwise throwing tantrums about having to upgrade from Windows XP to something more recent (despite the seven years of notice and it being in the news more and more as the deadline approached) or else not receive updates. In other words, you are staying behind the times AND risking your data, your customers’ data and your system (and that means your network, if you have a network). And you know something? You had it coming, so enjoy the problems YOU allowed, and let’s hope that only you or your company is affected and not your customers (because it would be YOUR fault).