Things related to computer security in some way or another.

The Dangers of External Media (floppy diskettes, USB, etc.)

I’ve written about master boot record/boot sector infecting viruses before, albeit not specifically how they work (though I am familiar enough to do so if I really wanted to). I’ve reminded people of how similar floppy diskettes and USB drives are as sources of viruses (and malware more generally), and of how systems can boot off both of them (as well as CDs/DVDs and other devices) – installing malware in the process. I believe I’ve made references to BadUSB, and I’ve written about more of all this under Bring Your Own Demon (BYOD) and the Internet of Things (IoT). I also strongly criticised Fiat Chrysler for encouraging people to use USB sticks they receive in the mail. But now I have more to write on the matter, because besides viruses (and I include BadUSB in that category) there is a new version of USB Killer. It actually kills computers (and it seems USB Killer v1.0 did too). Yes, I of all people would find it odd that a non-sentient thing could die; I would also have argued it is, literally speaking, impossible.

But I’ve changed my mind. I’m also reminded of some old issues that many are probably not aware of. For those who don’t remember the old AT power supplies (or those who have never heard of them, or of the dangers that lurked inside them): if you plugged the power connectors (referring to the pair of P8/P9 connectors, not the single keyed 20-pin cable of later ATX supplies) into the motherboard the wrong way[1], you would have a very nasty power problem.

Then there is the old trick of sending a PSU (power supply unit) to a friend overseas, or even to a friend in the same country (therefore on the same line voltage), with the incorrect voltage selected on the PSU. Or, if you forget to change the voltage yourself (as a long-time friend from the Republic of Ireland did when another friend sent him a PSU from the US), you will also be very unhappy indeed. But he was able to laugh about it, at least.

And then there is USB Killer v2 (video of USB Killer v2.0 on a laptop). There is another version – a remake of v1.0, I gather – which in my opinion does worse damage than v2.0: I’m unsure what it would do to a laptop, but the PSU catches fire and the internals of the computer are charred. The person in the video was going to attempt to salvage the machine, but he didn’t expect the damage to be as bad as it turned out to be.

Yet, even though a version of USB Killer was released on April 1 of this year (a comment on one of the videos links to a website I have used), it was no joke. I should have linked to it in my article on the lie of security being of utmost importance, but I had forgotten about it (I only remembered BadUSB and the others, and I hadn’t seen the video of USB Killer v1.0 – I’m glad I have now). Chrysler really should be absolutely ashamed of themselves, especially for dismissing the risks as hypothetical.

People who participate in that game (I can’t remember what it is called) where they hunt for items – including USB sticks – based on published coordinates should seriously reconsider doing so. While I’ve always been against the destruction of data, I can still see just how amusing it would be to plant such a device (even though I would never do it, even if I played such games) – after all, anyone playing this game is asking for trouble (on the other hand, the fact that people might test the stick on another person’s computer is reason enough not to do it), games are meant to be amusing, and you can’t deny destruction amuses a lot of people. Still, between BadUSB and the USB Killers, not to mention other malware and associated risks, it really shows just how reckless people are. The same goes for the older floppy diskettes: just leaving one in the drive by accident, or through forgetting it, could get you infected by an MBR/BS virus, perhaps even a multipartite virus – an MBR/BS virus which also infects files. It might be due to unawareness, but how does someone who is unaware become aware if they don’t know there is a problem?
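
To make the MBR/boot sector mechanics a little more concrete, here is a minimal toy sketch (Python, purely illustrative: there is no real boot code here, and the sector number, the XOR ‘encryption’ and the helper names are all invented for the example) of how a classic boot-sector virus relocates the original MBR, and why a blind ‘repair’ of sector 0 – the fdisk /mbr approach I come back to further down – can leave you with no working loader at all:

SECTOR = 512

def make_disk(sectors=64):
    # A fake disk: a flat bytearray of empty 512-byte sectors.
    return bytearray(SECTOR * sectors)

def read_sector(disk, n):
    return bytes(disk[n * SECTOR:(n + 1) * SECTOR])

def write_sector(disk, n, data):
    assert len(data) == SECTOR
    disk[n * SECTOR:(n + 1) * SECTOR] = data

def make_boot_sector(text):
    # 510 bytes of pretend "code", then the 0x55AA boot signature.
    return text.encode().ljust(510, b"\x00") + b"\x55\xaa"

def infect(disk, hideout=33, key=0x5A):
    # Toy virus behaviour: stash the real MBR (XOR-"encrypted") in a spare
    # sector, then install its own stub in sector 0. Only the stub knows
    # where the original went and what the key is.
    original = read_sector(disk, 0)
    write_sector(disk, hideout, bytes(b ^ key for b in original))
    write_sector(disk, 0, make_boot_sector(f"VIRUS STUB -> decrypts and chains to sector {hideout}"))

def naive_repair(disk):
    # What an fdisk /mbr-style "fix" amounts to in this toy model:
    # overwrite sector 0 with generic boot code, ignoring the hidden copy.
    write_sector(disk, 0, make_boot_sector("GENERIC BOOT CODE"))

disk = make_disk()
write_sector(disk, 0, make_boot_sector("ORIGINAL BOOT CODE + partition table"))
infect(disk)
naive_repair(disk)

# The virus is gone from sector 0, but the only copy of the original MBR
# is stranded (still XOR-encrypted) in sector 33, and nothing that knows
# the key or the location is left on the disk.
print(read_sector(disk, 0)[:24])
print(read_sector(disk, 33)[:8])

The same logic applies to a real disk image; the point is simply that removing the virus stub without restoring (and, where applicable, decrypting) the relocated sector also removes the only thing on the disk that still knew how to boot the machine.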

Everyone who thinks the Internet of Things (IoT) or Bring Your Own Device (BYOD, which I have said the D is for Demon) are good ideas, really, really, and I do mean really, needs to wake up to the risks. These are very bad ideas. It is bad enough at home – it is worse in certain professional settings (social services or medical settings come to mind especially). Be aware, people. Stay vigilant or you will run into problems (you might run into problems even if you are vigilant but the chance goes up a lot when you aren’t vigilant). Be suspicious. Be concerned and careful. No, no, no, this is not paranoia. Paranoia involves no evidence that you are being targeted – it might involve evidence to the contrary. This is being intelligent. There. I’ve finally said it. I said there is such a thing as intelligence. Who knew?

[1] It was two separate male connectors that you placed (side by side) into the female port on the motherboard. But you could put them in the wrong order (the rule of thumb, as I recall, was to keep the black wires of the two connectors next to each other). If you didn’t do this correctly you would be very sorry.

Artificial Intelligence, Aliens and Computer Viruses

Okay, to be fair, Watson (which defeated the champions of the US trivia show Jeopardy!) did have to interpret the questions in order to answer them, but without all the information it was fed, its chances of winning would have been a lot lower. Storage capacity is huge compared to what it used to be (and it is a lot cheaper too) and, more generally, technology is powerful enough that feats like this are less significant than they appear. I’m all for the evolution of technology, but it is a mistake not to have serious, very long, very thorough discussions about AI – about every single concern at every level (technical, ethical and moral included). Yes, this means trying to find potential problems instead of ignoring the reality that we haven’t thought of everything (and no, we haven’t thought of everything – this is shown repeatedly, over time, whenever something new does come up).

I admit this might be childish of me, but I readily admit that I can be childish. Whatever. It seems that a so-called AI machine was given an IQ test. The results, however, say a lot about just how good AI is (not). Maybe I’m so amused because I’ve stated many times that devices are not at all smart, and maybe it is because I’ve pointed out the stupidity that many humans exhibit. But in any case, the intelligent machine scored the IQ of a four-year-old child. Yes, people, that is how intelligent AI really is, and still people have faith, despite the fact that some AI has already shown scary implications (I’m referring to the OpenWorm project). No, feeding a robot (e.g. Watson) information in order to beat masters of trivia does not count as being smart – only as being capable of retaining information. But since the machine got the result of a four-year-old, I’m going to childishly refer to a quote of mine that essentially likened human intelligence to artificial intelligence. Certainly, only fools will call themselves intelligent without any questioning, and this is unfortunately something humans tend to excel at (and revel in pointing out, as if it makes them superior to other species).

So. On Saturday, September 12, I was made aware of a most amusing, ridiculous concern from scientists at Oxford University – that we have to be careful because we might send computer viruses to our friendly aliens in outer space. Graham Cluley has an amusing video on the matter here. Yes, they genuinely believe we might spam and/or send viruses to the computers of aliens. One argues that we already spam the universe with reality soaps, and I can’t say I disagree there; but that’s a different story. But I’m going to take this as an opportunity to discuss:

  • The pros, the cons and the risks of AI
  • The treatment of (and crimes against) animals, the abuse of the environment, the destruction of planet Earth (and all its lifeforms) and the ethics of trying to find replacement planets because we’re too fucking stupid to take care of the planet we have
  • The possibility of aliens and the mentality humans tend to have about it (and them)
  • Alien computers and computer viruses (here versus there, wherever or whatever there might be)

Artificial Intelligence: The pros, the cons and the risks humans are subjecting themselves to.

I fully admit that I am mostly against AI, yet I do appreciate that there are legitimate uses for it. No good comes without bad and no bad comes without good. We all have dark and light in ourselves, despite what many will say about certain figures in the history books.


The pros:

  1. By experimenting with AI we learn more. Perhaps not enough to understand and appreciate the risks (but this is just like history), but the more we learn, the better things can be (perhaps with the exception of military advances – but even that is better for the military, I guess).
  2. A robot could be designed (or improved upon) to help rescue people trapped under rubble after a natural disaster (for instance, the magnitude-8.3 earthquake in Chile earlier this month?).
  3. A robot could do other things that are impractical for humans to do. Whether thinking is one of those things or not is another matter entirely (I would argue yes, but only if AI really takes off).


The cons:

  1. This is something I’ve never quite understood. So many people want AI to advance in order to do tasks that these same people consider tedious. Yet if this is accomplished, the robots will replace the humans doing these tedious tasks, taking their jobs (and with them actual activity of the brain and the body, both of which help slow deterioration). Machinery doesn’t need money to live (an arcade machine isn’t alive even though it expects money), but humans do need some way to barter. There is just no getting around it.

The sceptic might look at the above lists and point out that, despite the fact I’m against advancing AI, I’ve given fewer cons than pros. But besides the fact that the lists are not at all complete (and some pros might be cons to some, just as some cons might be pros to others), there is something worse than cons: the many dangers that AI poses to mankind.


Rather than include a list of risks, I’m going to remark on some things I find concerning. Most would know I’m not at all the only one to warn about these things, and some might claim I’m just another coward who is afraid of machines. But there is a reason I’m not the only one: there are actually very legitimate concerns. There is also the subject of ethics and morals (which in my view is equally important).

The fact that some countries want to develop killer robots should say enough to most people. I’m not sure if it does, but it definitely says enough – far too much – to me. It shows an extreme and disgraceful disregard for human life and it shows just how far people are willing to go for their own benefit. I’m going to call it as it is: those (nations recognised by the UN) who go so far as to develop (and/or buy into or fund) killer robots are selfish cowards to the absolute extreme.

Then there is the Israeli Harpy drone, which decides for itself whether to shoot or not. The proponents will say things like they wouldn’t launch the ‘fire and forget’ device into an area if they didn’t think there was an enemy there (does the fact that humans aren’t perfect come to mind? It should). But besides the fact that the more this technology advances, the fewer choices humans will have (I refer to a project in a bit that demonstrates this), and besides the fact that a tank is still a tank (see also the concept of ‘friendly fire’), a life is a life, is it not? If a remote-controlled drone kills innocents, what makes any rational person believe a drone controlling itself will do any better? An indiscriminate weapon is still an indiscriminate weapon and a life is still a life! (Yet, as an Israeli historian says, Israel has not learnt the full humanitarian lesson of the Holocaust as [they] should and [they] do manipulate the Holocaust but [they] also feel very, very deeply about it.) But I’m not trying to lecture anyone on this matter (there are plenty of resources, and there are faults on all sides, but the closest thing – that I’m currently aware of – to a killer robot is the Harpy drone); it would be futile and counter-productive anyway. The bottom line is that AI poses real risks to mankind and, just like history, those risks are being ignored by foolish people (which indeed includes military and government officials). AI is inevitable, but there really needs to be far more discussion of its ethics and its implementations. Yes, yes, I know many Americans and (all?) Israelis will condemn me to hell for these statements, but I also imagine they would LOVE the technology in the hands of Hamas and Hezbollah! But you know something? Just like there is no going back after the splitting of the atom, there is no going back on this type of thing. Choose your poison and choose it well.

However, if you ignore history for a moment (and not a moment more) and look at a telling experiment called the OpenWorm Project (pay particular attention to: Wriggle room; Silicon Immortality; and note Moore’s Law), then you should be able to understand exactly why killer robots are a horrible idea (besides the blatant disregard for life – life that could be your own or someone you care for deeply). Some would point out the Fighting Fate section and assume that someone like me would agree with fighting death. Well, I don’t agree with fighting death any more than I agree with the blatant disregard for life that many humans exhibit: we’re all mortal, and this is completely different from improving the lives (which includes the health) of others. The section brings up a valid point, though – Mother Nature doesn’t care what humans are capable of (or have supposedly cured); the event they refer to is a good example (a specific solar flare). There are more examples than solar flares – for instance, supervolcanoes. Another example is the Tunguska explosion in Siberia in 1908. The bottom line is that artificial intelligence could overtake humans. Whether that is a problem to anyone or not is another matter entirely.

Planet Earth: The treatment of animals, the harm to the environment and the ultimate destruction of the planet.

I was going to write about this in more detail, but after attempting it a few different ways I see this is impossible for me to do – this subject is one I feel very strongly about and it is one of the things that most disgusts me about humans. The treatment of wildlife, the damage to the environment (things like deforestation), and the fact that humans can’t even respect themselves are just beyond comprehension. Last year, it was reported that in the past 40 years, 50% of world wildlife populations have been destroyed. (For some populations it was more than 50%.) Yet some claim that because there are difficulties with establishing these statistics, they aren’t statistically valid. This claim only proves just how out of touch (or unconcerned?) humans are with the amount of damage they cause; humans cannot respect themselves, so they certainly cannot respect anything else. What I will say is this: the planet will be devoid of all life long before the Sun dies. One of the species will deserve it and the rest will not. The species that deserves it is the species that causes it – Homo sapiens (whether directly or indirectly, mankind will destroy the world).

The possibilities and implications of extraterrestrial life.

I’ve long felt that humans need to stop looking for other planets to one day occupy. The reasons should be clear already, but I’ll reiterate them anyway: we cannot take care of our own planet, so do we really have the right to populate other planets – only to destroy them as well (not that those doing so really care whether they have the right or not; humans tend to believe they inherently have the right to do whatever the hell they want)? The reality is that if we can’t take care of the planet we have, we won’t be able to take care of other planets. It is one thing if mankind wants to destroy each other (and ultimately Earth) – and this is bad enough – but it is another thing entirely to find more planets to destroy. While not all humans are this way, the overall impact humans have on the world makes me truly question whether we deserve another planet. I don’t think we do, even though some will suffer – and are suffering – because of those who don’t care about anyone or anything. But that’s not what this is about. The issue is quite simple:

If there are other lifeforms out there, and they are actually intelligent (at least by what humans call intelligence, in which case they will probably be more intelligent than us) and capable of contacting (or travelling to) us, then there are two likely outcomes:

  • They would have the capability to completely destroy us. I will not express my opinion on this matter other than to say it would be cruel irony.
  • They will stay the hell away from Earth. This would seem plausible unless the first possibility is true. Humans cause so much damage to each other and the world, and humans destroy the unknown (hence the hunts for Bigfoot and the stories of killing it; there are other examples, though) – why would aliens who are intelligent enough to contact us want to contact us? A Twilight Zone episode (or so I think it was) highlighted this quite well; I can’t recall the episode name, but the idea was that a town believed it was harbouring an alien. In the end someone was dead, and they then understood that the ‘alien’ was themselves; indeed, one of the humans had killed another human they thought was an alien. That is, sadly, a rather accurate depiction of how humans behave.

Realistically, if they were capable of travelling here, they would probably be capable of destroying us, so the fact that this hasn’t happened yet (unless they’re secretly mating with humans, silently taking over? I imagine some would like to believe – if not fantasise about – that) could be read either way: there are lifeforms intelligent enough to stay hidden, or there simply aren’t lifeforms capable of travelling to Earth. I wish we’d stop looking though, I really do, because of the tendency to destroy the unknown.

Alien Computer Viruses.

What to say on the matter. There are so many things it is hard to know where to begin or even what to include. Let’s start with the technical aspects. It is true that computer malware has been accidentally sent to the International Space Station (though off hand I don’t have references, it has happened). That is scary enough, and it is yet another reason why nations writing malware (and abusing exploits; I’m looking at the US especially) is just a very reckless and stupid idea. But whether there are computers on other planets is another matter entirely.

There is this inherent belief that just because life on planet Earth requires certain things (carbon, hydrogen, oxygen and nitrogen, to give four examples), it should be the same for other forms of life on other planets (or all species), and that therefore if a planet doesn’t offer the same things we require, it cannot possibly have life. This is just stupid and arrogant. What makes anyone here believe life on other planets has the same restrictions we do? They could have more restrictions, they could have fewer (or maybe none? At least one scientist believes that intelligent lifeforms on other planets will be machines), or it is entirely possible they live under different restrictions altogether (e.g. carbon, hydrogen, oxygen and nitrogen do not harm them, but they don’t need any of them either). They might live in fire instead of water; they might live underground instead of above ground. The reality is we just do not know, and anything else is assuming – and assuming does nothing to settle matters (aside from settling who is made an ass of).

Similarly to how we don’t know what lifeforms on other planets might require (or whether there are lifeforms at all), we also can’t say that if they had computers (I doubt it, but I also don’t think we will ever know; not in our lifetime) those computers would have the same requirements as ours. They might even be capable of real magic (including things humans have yet to accomplish – and probably never will – without illusions, e.g. invisibility). We simply do not know! Let’s assume that there is life on other planets. Let’s also assume that they have computers. For fun we’ll also assume they have the same life requirements we do. What sane person would think they will have the same operating systems (and software for them!) that we have? What really makes anyone (these scientists making jokes of themselves, for instance) think these aliens will run Microsoft Windows, Mac OS X, any of the Linux distributions, BSD Unix (any of them), or even DOS, VMS or something else we have? To worry about sending them viruses… it is just absolutely absurd. Hilarious, but an absurd reminder that we should really worry about resolving the way we abuse Earth before we worry about life on other planets. Space exploration is important (many things people take for granted were discovered through it) – but that is different from trying to find a planet to inhabit (which I’ve seen references to).

US Navy in About-face on Exploit Black Market?

2015/09/23: Several additions (+ fixes and clarified some points).


It seems that the United States of America’s Navy is working on a defence system for their ships against Internet-borne attacks – attacks which could affect the ships’ controllers. This is obviously a good idea, and my understanding is that they are implementing it (whether they have succeeded in this yet I do not know, nor do I really care much) so that the same attack will not work against more than one controller. That seems a good idea too, regardless of how well it works in the end (some might consider it layered defence, but I would call it specifically subnetting and firewalling, only in this case the hosts are the controllers on the ship instead of servers and/or other types of nodes).

It certainly isn’t unheard of for governments to work on improving security; in fact, it is quite common (as I pointed out on June 21 of 2014, the NSA is directly involved in SELinux, which is – in my opinion – quite ironic). But wouldn’t it be nice if they did their part for the global security of the Internet and never considered exploiting others (or considered it but did not act on it)? I think the answer to that question is yes. Yet sadly that isn’t the case, is it? As I wrote about in June of this year, the US Navy has already demonstrated this fact. But even if they weren’t soliciting exploits (0-days included), the government isn’t innocent in the matter.

I would really like to see governments behave as one would expect them to – setting a good example; an example of how they expect other nations and their own citizens to act. Is this move by the US Navy an about-face? I seriously doubt it. I would really like to be proven wrong here, I really, really would, but I doubt I ever will. Meanwhile, there are frequent US accusations that other nations (China is probably the most common example) are breaking into United States government and corporate systems (and networks). And you know what? Maybe some from China are doing those things. But proving that the attacks are state sponsored is another matter entirely – it is an incredibly difficult thing to do (especially when the ‘evidence’ is IP addresses). But let’s say you (e.g. the United States) somehow know with 100% certainty. Are the countries (e.g. the United States) making these accusations completely innocent? The fact that the US Navy solicited 0-day exploits earlier this year says a lot on the matter, doesn’t it? That fact makes this defence system they’re working on rather ironic; how would they feel if someone (or a nation) was devising ways to compromise it (and also selling them to others)? That would be more like cruel irony. Regardless, the countries making these claims while doing the exact same things should worry about themselves before telling other nations off for whatever those nations might – or might not – be doing. It should be made known that China, too, is a victim of computer crime. In fact, China has executed people for computer crime! Yes, really, they have. I remember reading this at the time (and possibly other things at other times). The rarity of the matter and the circumstances of each incident are irrelevant (and those who fixate on them conveniently ignore my point and the reality of the situation).

And no, I’m not in cahoots with China or the Chinese; I’m in cahoots with no one. I am, however, an individual who looks at everything with perspective (and context) kept – or attempts to; one who sees the good and the bad in everyone (or at least is aware that there is both good and bad in everyone, and even if I can’t currently see it, I know both exist in them somewhere, no matter how hidden it might seem); as well as one who stands up (where and how I can) for people (or corporations) when they are unjustifiably wronged – even if I am also critical of them. A good example of this is Microsoft.

The Corporate Lie of Security Being of Utmost Importance

Apparently Experian is a credit checking agency for T-Mobile customers. There is a certain amount of irony in that, but I suppose it is irrelevant to the sincerity of any apologies. It also seems that they (Experian) might be using a weak cipher on their (https) server (I’ve only read this – I’ve not confirmed it and I have no intention to). If this is the case, then it changes things – at least with Experian. Still, they at least have the notice at the top of their page. There will always be mistakes, but the biggest mistake is not accepting this fact; nothing is perfect in this world, and those who can accept that will improve while those who cannot will not. It’s really that simple. There certainly were faults here (because faults are everywhere), but at least they still have the notice on their front page. That’s something far too many corporations neglect.


I now have an example where security really is taken seriously by a corporation that has discovered a breach. A better job could always be done, but that is how everything is in this world; what matters is they are taking it seriously, they are investigating it and they are doing everything they can to make sure it is known. T-Mobile has made public that some of its customers might have been affected by a breach at a credit agency called Experian, which T-Mobile uses to process (certain – I’m not sure what exactly) information on subscribers. The credit agency has a note at the top of its main page that links to a thorough document on the breach (which I linked to directly). T-Mobile also has a note on its main website (they could probably have placed it above the rest of the page, but the note itself seems sincere enough to use as an example). This is how a security breach should be addressed. It is unfortunate it happened, but it is also inevitable – yet they are making the best of it (and certainly are concerned about the breach and its impact). They should be commended for their upfront, transparent approach to the matter.


I’ve thought about this for a very long time, and something inspired me to finally write about it (even though it took several days to finish). If a corporation has a product that fails security in a critical way (or any part of their network is compromised, or a flaw is otherwise made public), there are at least four typical responses (plus a combination) you might hear (there certainly are others). They go something like this:

  1. We fixed the flaw within hours of being made aware of it.
  2. We fixed the flaw as soon as we were made aware of it.
  3. We are almost positive that it is of limited impact and very few will be affected by the breach.
  4. We’re still investigating but we’re confident that they did not access confidential information.
  5. A combination of the above.

In all of the cases, they make the claim that the security and safety of their product(s) and customers are of utmost importance. That’s ultimately what this is about. But as for the above list:

The first is sometimes true but it often isn’t enough.

The second is such a pathetic lie (or exaggeration) that even the most gullible person would be able to determine the absence of truth (or at least how absurd the claim is). No, you did not fix it immediately – not unless you actually knew of the flaw (put it in deliberately?) and were waiting for someone to find it first (in which case you are completely negligent in security, if the word ‘security’ is in your vocabulary at all); it is a lie and nothing else: you did not fix it immediately, so stop claiming you did. To be fair, it could be that they fixed it before it was made public (because the flaw was reported to the vendor before the public), but that isn’t the same thing. Of course, this could be called semantics by some, but the claim is made often enough that I feel it is different (ironically, a day or two after I started writing about this claim, I read this exact claim from some web service – I don’t remember which one any more).

The third is snubbing those who are affected by it; they really couldn’t care less about everyone else – and you’d understand this if you actually thought about those who are affected rather than about how fortunate you feel that more haven’t been affected (which means you feel less burdened than you otherwise might have been [something that is always possible]).

The fourth is utterly absurd: you’re still investigating but you’re confident the attackers did not access confidential information? Then why was the attack successful and why are you still investigating, if you’re that sure? Why is it that difficult to be honest and upfront? The reality is you’re not confident of these claims; instead you’re insincerely trying to cover up your – forgive me – major fuck up, and it actually shows how dishonest and unethical you are – you only care about your business and its reputation.

Well, here is one of the very few valuable lessons I learned in school – very few, because when an educational institution is poor, it is really, really poor. And when it fails some students (this includes neglecting those with disabilities, neglecting any of the different or abnormal – typically positively different – students, and ignoring bullying), it fails so badly that the student ends up having wasted years learning very little, perhaps with the exception of just how much the education system is an utter failure. And the education system really, really failed me in every way imaginable (and they caused great harm – with impunity). Whatever. I don’t usually think about that or them – I’d rather live in the present (and I’ve always loved learning, and therefore consider everything a chance to learn something new – and make use of it as such); the point is, this lesson is valuable enough to remember and live by. The irony is that it is so incredibly simple you would think more people would understand it. The lesson is about reputation. It was something to the effect of:

A good reputation is hard to keep but a bad reputation is hard to lose.

I learned that at age five or six but it stuck with me because it is a really good piece of wisdom (something that governments and corporations woefully lack). Yet these corporations are so afraid of ruining their reputation that they will put themselves above everything else – exactly the thing that would give them a bad reputation in the first place (and remember, losing a bad reputation is extremely difficult – something that many have found out the hard way). Customer service is really important and if you’re willing to delay or manipulate the truth (if not directly lie) then you’re betraying your customers in a most disgraceful way (and you deserve the tarnished reputation). And remember, even if many customers accept the fault (and indeed some will), that doesn’t mean would-be customers will (they don’t know you except by what they hear or are told – including by those who don’t accept your dishonesty).

The reality is that these corporations almost never even say they are sorry. “Sorry” by itself doesn’t cut it, and neither does “We’re sorry for the inconvenience”. That isn’t a genuine apology (it is as sincere as a robot programmed to say those words – and only those words [perhaps they make use of such a robot?]), and it is an insult to those affected by what would otherwise be an understandable mistake but is instead wrapped in a dishonest, insincere attempt to make others think the responsibility doesn’t lie with your errors. But it does lie with you, whether you accept the responsibility or not. It is also an insult to your customers – and to the corporations that actually do apologise properly! Many corporations also don’t put the information about the breach in a very obvious place, as it should be – on the front page of their website(s), in big letters (linking to a separate page if necessary). This happens even when the breach affects your customers in a bad way, and that is taking your customers for granted. What are you without customers?

To make matters worse, many corporations – let’s say those making devices that are part of the Internet of Things (‘IoT’) – claim they fixed the issues even when they have done nothing more than (if even that) apply a workaround for a single problem without resolving its source (the device is still connected to the Internet, is it not? Does it need to be? Did you design it with security in mind?). No, Chrysler, you do not consider the safety and security of your customers above all else, as you suggest here. I quote from the WIRED article:

When WIRED reached out to Chrysler, a spokesperson responded that the USB drives are “read-only”—a fact that certainly wouldn’t protect users from a future spoofed USB mailing—and that the scenario of a mailed USB attack is only “speculation.”

Denial – even from ignorance – is not an excuse when you’re attempting to (supposedly) fix a problem you caused. Maybe it escaped your notice while you were busy allowing cars to have their engines remotely shut off or their brakes disabled, and refusing to recall your Jeeps while they were at risk of fuel tank fires (and who knows what else), but social engineering is an incredibly efficient tactic – so much so that it is probably the first choice of many attackers (Mitnick’s speciality, isn’t it, Kevin? At least you’re upfront about your lying, unlike these corporations who hide behind lies, if that is something to commend). Perhaps you also missed the potency of BadUSB? Perhaps you never knew about other external media and viruses? Did you know that, through basic techniques, the old boot viruses would move the master boot record to another sector – sometimes encrypting it – which meant the virus itself knew where the original was and how to load it (and therefore the OS), but if someone tried to ‘fix’ the infection by writing a new (default) MBR (e.g. through the DOS command fdisk /mbr, which was often suggested for removing MBR/BS viruses), they would now (essentially) have no loader for their OS (and their original sector might now be encrypted, with nothing left to decrypt it)? No? Well, I wouldn’t blame you, because you’re not in the computer (including security) industry and therefore you wouldn’t be expected to work with USB – or anything like it – but that’s exactly what you decided to use to ‘fix’ the major flaws of your Jeep anyway. Yet you call the concern speculation? You actually have the boldness, the arrogance, the idiocy to call the statements – made by those who would know more than you about security – speculation? I’m also calling your read-only claim naive ignorance, but let’s say you had a brilliant idea here (and implemented it successfully, including preventing any circumvention) – the fact remains that it encourages people to use USB devices they receive in the mail (not bought as USB devices in their original packaging – and even that has risks). It gets better, though, because you also have the typical response that almost every corporation makes, as I described above (after a successful attack, of course), don’t you?

“Consumer safety and security is our highest priority,” the spokesperson added. “We are committed to improving from this experience and working with the industry and with suppliers to develop best practices to address these risks.”

Such lies, Fiat Chrysler. Your best attempt at fixing a serious security vulnerability (with rather terrifying implications) of A JEEP is to make a VOLUNTARY recall, offer a fix ON YOUR WEBSITE, and to mail USB sticks? But to make it better, you then have the stupidity to state that the risks of these methods – risks which are once again of your own making, and which were pointed out by others who know more than you – are just speculation? It isn’t speculation; it is a risk, and it encourages dangerous practises (and it assumes that victims – yes, they are victims, victims of your irresponsible fuck ups – will know to check your website and will also know how to apply the fix once they have it on a USB flash drive). You’ve already proven you’re not able to make wise decisions when it comes to security (which would be understandable if you weren’t acting the way you are acting – your industry is indeed very different), so why should anyone believe you now? If security really is your highest priority, then the situation is far more severe than you initially let on.

Why can’t you suggest they go to a service centre where it can be done properly, by someone who should know what they’re doing (though there is the obvious question of whether they do know what they are doing, given your approach so far)? Laziness? Irresponsibility? Ignorance? Because you feel it must be done the ‘IoT way’, or through something they receive in the mail (on the theory that the method of delivering the fix isn’t vulnerable to anything itself – I return to this momentarily)? All of the above? No, you do not place customer safety and security as your highest priority. Stop lying, Chrysler. All corporations should take your disastrous attempt at disaster recovery as an example of how not to do disaster recovery (which they’ll need in time, inevitably), though they should also improve upon it even further (disaster recovery isn’t a process that never changes, and testing is always important). All corporations should also stop lying about what priority security is to them when they clearly demonstrate otherwise (the rare exceptions aside). They should also learn to apologise correctly (and this includes being upfront about the problems, so that everyone who visits their website will see it without having to know to dig for it), and they should think about security before – not after – the design phase. The reality is this: an IoT device isn’t fixed as long as it is on the Internet. There is not a single justified reason for a car to be connected to the Internet; some will refute this and give reasons, but those reasons are wants, not necessities. The cars of yesteryear did perfectly fine without being connected to the Internet and, oddly enough, those cars are still doing fine (until one is totalled or its parts wear out – both of which will eventually happen to Internet-connected cars too, perhaps even sooner). The cars that haven’t jumped on (or become) the bandwagon are still doing fine. (And no, the difficulty of the attack isn’t relevant; the attack is possible and that’s all that matters.) An article I linked to earlier makes an amusing point and I’m going to quote it:

And yes, you’ve no doubt spotted the irony that security researchers are able to overwrite cars’ software with their own home-grown code via the Internet – but Fiat Chrysler requires that the update is applied by someone with physical access to your vehicle.

The fact that they can modify the code remotely is exactly what I described in another article: a car should be controlled only by the driver, not by others outside it (and this goes back to the fix itself potentially being vulnerable to another flaw). But Chrysler criticises the way the researchers operate when they should be looking at themselves first. I’m actually shocked that politicians (especially because it is the politicians of the United States of America) are concerned about the issue that Chrysler – and other car manufacturers – are demonstrating. That they actually could do something positive – especially when it comes to the safety of others – is nothing short of amazing and impressive, and they should be commended for it (however rare it actually is). Ironically, while Chrysler criticises the researchers for how they raised the issue – an issue that really needs to be correctly and promptly addressed – Chrysler is being criticised by many – as they should be – for their poor handling of the situation. And if it weren’t for the researchers demonstrating it this way (it should be noted that the driver of the Jeep agreed to the experiment; yes, they did it on an open road, but it brought much more attention to the situation and clearly that is needed), the issue would be at a standstill, much like a Jeep in a vast pool of mud (or a tar pit) would be.

The worst of it here is that Fiat Chrysler (and any other car – or dangerous machinery – manufacturer that neglects the fact that cars – or other machinery – are dangerous tools, not toys, and dismisses risks as speculation or otherwise immaterial) is taking the lives of their customers for granted (a life is still a life, is it not?), and worse still the lives of others (passengers, pedestrians, those in other vehicles) for granted (and otherwise of little concern). That you, Chrysler, don’t have a (working) moral compass, that you lack ethics, and that you actually lie about this, is shameful to say the least.

The Dangerous Twin of Bring Your Own Demon: The Internet of Things and ‘Smart’ Technology

Earlier today I was made aware that another exploit for another car allows remote control of the car, including halting it (applying the brakes) and even disabling the brakes! All it takes is sending a specially crafted SMS message. The device is the Metromile Pulse, an OBD-II dongle. This is what Metromile’s advisory says:

At Metromile we take the security of our products and services very seriously.

The typical statement that nearly every organisation makes after a successful exploit is found (or an attack is executed). It is as dull as ever, and it is a half-truth if not an outright lie.

Recently, it was revealed to us that MDI, who makes our OBD-II dongle, the Metromile Pulse device, has a vulnerability that can remotely takeover these devices. We took immediate action and released updates to all devices in the field to resolve the discovered remote exploits and can confirm that most of the devices have successfully downloaded and applied the patch and we expect the remainder of devices to be patched by mid-August.

Immediate action that you shouldn’t have had to take in the first place, because an SMS message shouldn’t be able to control a car – the driver should! Too little, too late. The fact that not all devices are patched, when it endangers the lives of others, is worse (and even allowing that patching takes time, it still isn’t immediate action).

Connected telematics devices such as the Pulse are powerful because they have the potential to make many aspects of driving and owning a car simpler, less expensive, and more convenient. We ask that customers who are concerned about the security of Metromile systems contact us at

So the device is powerful because it has the potential to make many aspects of driving and owning a car simpler, less expensive, and more convenient, does it? Funny definition of convenient, isn’t it, seeing as the owners now have to worry about a serious blunder you made. Perhaps you weren’t aware, but security conflicts with convenience. Yet you take security seriously, do you? Cars are heavy machinery that, while useful (to get where you need to go), are deadly even under the best drivers in the best conditions. Driving a car requires discipline. There is a reason for driving licences, there is a reason you need to maintain the car’s safety (how much so varies by country), there is a reason for all these hurdles, and there is a reason you shouldn’t be driving under the influence! The reason is that it isn’t a toy, and it isn’t a game where you can start over! The fact that a car can be manipulated through an SMS by an external party is irresponsible, and it completely disregards the safety of people. To all those creating devices for the IoT: wake the hell up before you kill more people (which means they will never wake up again)!


Clarified (and added a link to) another vulnerable thing (as part of the Internet of Things) and added a few thoughts.

If a car is meant to be controlled by the driver in the car, how the hell is it being vulnerable to outside manipulation considered ‘smart’?

On February 17, 2012 I wrote a piece on the concept called Bring Your Own Device, which I renamed Bring Your Own Demon, and just how stupid and dangerous it is. I’ve also written about so-called smart technology and how dangerous (and stupid) it is. I’m bringing one of those pieces up because it is relevant to something I will cover today (in that it, too, has to do with so-called smart technology). On September 3, 2013 I wrote a piece entitled ‘Smart’ Technology Is Still Dumb. In that piece, I highlighted an incredibly dangerous situation that arises because of emergencies, be they medical, fire, or any other occasion where the rules of traffic must be broken by specific people (firefighters, police officers, paramedics, etc.) in order to help the situation (which might include preventing the loss of life, the loss of a home, or restoring peace). This warning still holds strong; the dangers still exist and they cannot ever be solved with automation: emergencies are unpredictable, unpredictable in every way. You cannot know when an emergency will occur and you cannot know what it will take to resolve it in the safest and quickest way possible! One seemingly minor variable can change things drastically! This is inherent to emergencies.

But then there is the Internet of Things (commonly IoT). Instead of bringing your own demon, you have many demons all around you. This includes medical equipment in hospitals, and that is one of the things I will refer to today. First, a brief explanation: the IoT is the idea that everything should be connected to the Internet in some way or another. This includes refrigerators, thermostats, cars, medical pumps, sniper rifles and even skateboards. I’m going to aim (and fire) at three of them now.

The Hospira LifeCare PCA Infusion System has serious flaws. The most recent is one that boggles my mind – boggles it because the flaw is so negligent, so amateurish, and has apparently been there forever. A remote attacker could log in as root through TELNET without authentication! That is a very serious flaw and it is an utter disgrace for anything to be this way, but especially when it is medical equipment. But that isn’t the only problem. There are many other problems. Apparently this researcher also knows of the TELNET flaw, and from briefly skimming that page it seems it might affect more than one of the pumps (which is even worse). Disgraceful neglect is about as nicely as it can be worded.
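
As an aside, spotting this class of flaw does not take sophisticated tooling. Below is a rough audit sketch (Python, standard library only; the target address is a placeholder, the prompt heuristic is crude, and a more careful check would also strip telnet option-negotiation bytes) for detecting a device that hands out a shell on TCP/23 without ever asking for credentials – point it only at equipment you own or are authorised to test:

import socket

def telnet_banner(host, port=23, timeout=5.0):
    # Connect and return whatever the device prints before any input is sent.
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.settimeout(timeout)
        chunks = []
        try:
            while True:
                data = s.recv(1024)
                if not data:
                    break
                chunks.append(data)
        except socket.timeout:
            pass  # device went quiet; use whatever we collected
    return b"".join(chunks)

def looks_unauthenticated(banner):
    # Crude heuristic: a shell prompt with no login/password request suggests
    # the device hands out a session without credentials. (A real tool would
    # also filter out IAC option-negotiation sequences, i.e. 0xFF bytes.)
    lowered = banner.lower()
    asks_for_credentials = any(w in lowered for w in (b"login:", b"username:", b"password:"))
    shows_prompt = banner.rstrip().endswith((b"#", b"$", b">"))
    return shows_prompt and not asks_for_credentials

if __name__ == "__main__":
    target = "192.0.2.10"  # placeholder address (TEST-NET-1), not a real device
    banner = telnet_banner(target)
    print(banner.decode(errors="replace"))
    print("unauthenticated shell?", looks_unauthenticated(banner))

If the banner ends in a shell prompt and nothing ever asked for a login or password, you are looking at exactly the class of negligence described above.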

Then there is a skateboard that can be compromised. Yes, because a skateboard needs Internet connectivity, right? If you ask many people, though, it seems they truly believe this. Even if in their minds it isn’t a need (which realistically it is not) but instead a want, it shouldn’t take much intelligence (which might be part of the problem here?) to figure out that it shouldn’t be connected to the Internet – or, for that matter, that it shouldn’t have a computer at all. But at least one does exist. To quote the researcher describing the problem:

Because the Bluetooth communication is not encrypted or authenticated, a nearby attacker can easily insert himself between the remote and the app, forcing the board to connect to his laptop. Once he achieves this, he can stop the skateboard abruptly, ejecting the rider, send a malicious exploit that causes the wheels to suddenly alter direction and go in reverse at top speed, or disable the brakes. An attacker can also simply jam the communication between the remote and the board while a driver is on a steep hill, causing the brakes to disengage.

So: unencrypted, unauthenticated, and remotely controllable – for a skateboard. Utter stupidity is putting it nicely.

Let’s now go to a sniper rifle. Yes, that is right: a sniper rifle as part of the IoT. This is from an interview given to Wired (I haven’t listened to it, I only have a quote).

The only alert a shooter might have to that hack would be a sudden jump in the scope’s view as it shifts position. But that change in view is almost indistinguishable from jostling the rifle. “Depending on how good a shooter you are, you might chalk that up to ‘I bumped it,’” says Sandvik.

As I’ve noted many times (with many more to follow, I’m sure), I strongly detest the misappropriation of the words ‘hack’ and ‘hacker’, but I can’t change that because of the influence governments and the media have (a shocking amount of power, and it is quite scary), and this is a decades-old problem. A problem that will never be resolved, because the word is forever poisoned to carry negative implications over positive ones. Which is a bloody shame, ungrateful and a damn disgrace, given what hackers have given society: without them we wouldn’t have the Internet and many other things we have today (and critically, the security problems would be far worse). It used to be a good thing but now it is a bad thing – at least in the perception many (if not most) people have of hackers. To rub salt in the wound, governments couldn’t help but become hypocritical about yet another thing (there is never enough of that in their view, see?): they poisoned the word and then did exactly what they poisoned it with, all the while whining about others doing it (and arresting them for ‘breaking the law’).

But to get away from a most touchy subject: if you look at their description, you can see the problem here. Except there is a more serious problem. Apparently the device has a remote root hole, and that means escalating to root (in this case it means adding an equally powerful user). Yes, that means whatever the interface allows, they have complete control over. Why anyone wants a sniper rifle to run embedded Linux is beyond me. But they make it worse, because it is then connected (through Wi-Fi). Then, to make it worse still, they are so irresponsible that they feel they have no need to pay attention to security whatsoever. Thankfully, pulling the trigger is still a manual thing. I really hope that stays that way forever.

Unfortunately, there are many more devices that have been compromised (or found to have holes that would lead to it), including the researchers who remotely halted a Jeep going 70mph on a highway (or maybe more like a freeway, the US version of Germany’s Autobahn – which, for those who like trivia, is popularly credited to Adolf Hitler). But that’s only in recent weeks. This isn’t a new problem and it won’t get better, because more and more companies are creating what they call smart devices (also known as things) that just have to be connected to the Internet (hence Internet of Things). Yet people still think the IoT is a good idea (they say I’m batshit crazy, but to think that some actually feel the need to have home appliances connected to the Internet …), and people actually believe these are smart devices (with the equally ‘brilliant’ idea of connecting them to the Internet). If a car is meant to be controlled by the driver in the car, how the hell is it being vulnerable to outside manipulation considered smart? No, no, the above (and there are more examples, with many more to follow) is a great example of human stupidity, something this world has in excess (the definition of Homo sapiens demonstrates this perfectly, given that the most foolish people of all are those who claim high intelligence and never challenge that claim, whereas the most intelligent will challenge what they know and have an insatiable appetite for learning and improvement, knowing that they can be a lot smarter than they are).

Yet despite all this, self-driving cars have not yet become the norm – but when they do, there will be problems. There are certainly other things in this world that are equally dangerous, but self-driving cars are high up on the list. I’ve warned about this before, and I’ve also warned about automation in general (the less you concern yourself with thinking, the less capable you are of thinking when it is required or even desired), and I later (in an admittedly arrogant manner) wrote about my warning becoming real when a pilot relied on semi-automation, ending the lives of two passengers (teenagers!). The pilot made multiple errors, but the biggest error was assuming the plane would fix them for him. You’d like to believe a pilot would not be so negligent and stupid, and would instead actually take care of the problems he caused. But no. He couldn’t acknowledge this, and two teenage girls died because of it. He might not be legally responsible, but it is still his fault and he should forever feel bad about it (that is punishment enough, and perhaps it will remind him to be cautious about being too reliant on technology). But if semi-automation fails to account for emergencies, what makes any semi-sane person think full automation will work any better? If an emergency happens in a fully automated car, what will happen? Emergencies cannot be predicted, and therefore there is no way to account for all outcomes (or solutions)! And if it can’t fix itself, how will it account for problems unrelated to itself (e.g. an ambulance on its way to the hospital)? It won’t, and this will only get worse. There are some things in this world that require manual work, and operating heavy machinery is one of them; cars are not toys – they are tools that are highly convenient, but they are dangerous nonetheless.

The fact that so many people are so glued to their bloody phones (and obsessed with social media and texting) that they walk into people and walls, and walk off piers (as I linked to in another post here – and it seems that was not an isolated incident), says a lot. The fact that Antwerp, Belgium, has, for the time being, introduced text-walking lanes (so that texters don’t walk into sane people) shows just how bad the problem is. The link there suggests that there are more mobile phones in this world than there are people; I find it hard to fathom, but I’m not surprised either: nothing surprises me in this world, because this is how the world works – it is evolution at play (if you went back centuries, very few would believe you if you claimed that one day there would be jets in the air, travelling from one place to another; they would probably think you were mental, too).