Software management in the Unix/Linux environment and software issues specific to Unix/Linux.
Update on 2012/10/15:
I fixed an unfinished thought regarding installing CentOS (below). Indeed, I have not updated anything of late as I have been quite unwell. There has been a more recent release of GCC (if I recall) but I’ve not been well enough to do anything with it for the Xexyl repository. I did at one point make a static package for gcc47 (so you don’t have to use shared libraries). Also, although the backport is potentially useful, there’s another option that, depending on the environment, may actually be better. If you need more recent GCC releases and prefer using just ‘gcc’, ‘g++’, etc., or even just want the system as ‘clean’ as possible, you could use VirtualBox and install the latest Fedora (which will have the latest GCC, generally speaking). It also gives you an environment completely separate from the server, so that everything is kept apart. Otherwise, you are welcome to use my backport (that’s what it is there for). Lastly, I do not know when I’ll be writing again (due to health) but hopefully sooner rather than later. There have been several things I’ve wanted to write about in the past three months but I’ve just not been up to it.
Update on 2012/07/11:
I have now built the packages for i386/i686 architectures. That means whether you use CentOS 6.x x86 or CentOS 6.x x86_64, you can make use of the packages. I noticed yesterday that CentOS 6.3 is out, so I will likely build for that too in time; note, however, that the packages work for 6.3 as-is, so it’s not high on my priority list for the moment.
I will document this properly, put it in a specific location, and update this post when I do, but I wanted to introduce everyone to the Xexyl CentOS 6 RPM Repository. Yes, it’s true: you can now install gcc 4.7.0 under CentOS 6 in a safe way (I stopped working on CentOS 5 because I don’t have a need for it now; considering CentOS 6, 6.1 and 6.2 are already out, I would highly suggest anyone doing a new CentOS install use the latest release). Not only that, you can do it without having to build the RPMs yourself. Indeed, thanks to a good long-time friend of mine, I have a CentOS 6 RPM repo (along with yum files so you can easily update) and it is hosted on two servers, each with a much faster connection than mine.
There is a cron job on the servers to sync from my server. It’s a nightly cron, so if I find any mistakes, or more likely – an updated GCC – it will be quickly adjusted (as long as I’m well and able to build it).
So, a quick rundown of how to get the Xexyl Repository installed:
# rpm -Uvh http://rpm.xexyl.net/redhat/6/noarch/RPMS/xexyl-release-6-1.noarch.rpm
That’s pretty much it. When first installing a Xexyl RPM package you should be asked if you want to install the GPG key, which was generated specifically for this repository. It lets you make sure the software in the repository is signed with this key (and the update program will verify it, as long as you don’t disable GPG key checking). So in short, just accept, and then you can install the programs via yum. Alternatively, you can install specific rpm files from the repository via yum or rpm.
Now, one last thing. How do you install gcc 4.7.0 on CentOS 6.x? After installing the Xexyl repo package, simply type this command:
# yum install gcc47 gcc47-c++ libstdc++47 libstdc++47-devel libgcc47
That should be more than enough to pull in everything you might need. Oh, and yes, I should say that this allows for dynamic linking; you don’t have to statically link your binaries, thanks to the GCC ABI policy: the newer runtime libraries are backward-compatible, so binaries built against the older versions keep working. Note you don’t need any of this on Fedora, because Fedora already ships more recent gcc packages, and these packages will indeed conflict with Fedora’s own.
NOTE: This post is mostly obsolete because I have my own RPM repository now. So, while the instructions are still below, I have since removed the files from my server, as they are both a waste of space and gcc 4.6.2 is not the current version any more anyway. If you want to see the spec file, just install the xexyl-release package (described above) and run ‘yumdownloader --source gcc47’. After that downloads, you can extract it and inspect it, rebuild it, or whatever else. So, yes, you will get a 404 Not Found if you run some of the commands below, because the files aren’t there (which is what prompted me to write this note, actually).
(Update 2012/March/19: I confirmed and fixed some information on the mock setup portion of the commands below – regarding the second to the last command in that part)
The past year or so I’ve been working on a complete rewrite of an old project of mine. While it wasn’t originally mine, it basically is up to me these days for updates. Though my friend who started it is still around and knows about this rewrite, he’s very busy lately with his own company and other things.
There is (was) a problem with my rewrite though. Not the rewrite itself; it’s more that it uses C++ instead of C, and I develop under Fedora 16. That’s not a problem in itself, except that Fedora is more up to date. That means it has more of the new C++ standard that was recently made official. Known as the C++11 standard (11 for 2011, the year it was ratified), it is very much an improvement over the older (now obsolete) standard. However, the server we run the project on (it happens to be an older type of game, the predecessor to MMORPGs, known as a multi user dungeon, or mud) is based on CentOS 5.x. And even if we upgraded to CentOS 6.x, that’s still too far behind for the newer features.
There are always going to be complaints about library dependencies. In the Windows world this was called DLL hell; my understanding is that it’s still a terrible problem. In Linux, however, we have .so files (shared objects): libraries that aren’t linked statically into the binary but are loaded when needed, if they can be found (else you have other issues). Now, one might think that in CentOS (or any Linux/Unix) you’re therefore just as out of luck as the others. Not so. First, though, there’s another issue:
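As a quick illustration of shared objects, ldd lists the libraries a dynamically linked binary will load at run time (the exact list varies from system to system):

```shell
# Show the shared objects (.so files) that a binary depends on.
# On nearly any Linux system, ls is dynamically linked against libc.
ldd /bin/ls
```

Every line of that output is a library that must be found at run time, which is exactly where the dependency problems below come from.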
If you’re using a binary distribution, then the programs are already compiled and therefore linked. So, when you try to install an updated package (e.g., an RPM built for Fedora) under a different system, it’s going to have issues (dependencies). So, then you think “I’ll just update those other packages.” Well, actually, you won’t. Not if you don’t want a broken system, that is. OK, you _could_ use rpm’s --nodeps option while installing it, but that is an absolutely terrible idea in this case (it might even fail, for all I know). If you did this and it ‘succeeded’, you’d at best be unable to run a lot of programs.
So what can you do then? What did I do to fix my issue, if all of this is true? The beauty of open source comes to the rescue. What if I compile the program myself and link it against the libraries that are already on the system? Some might reply that there’s a problem with that too: you either have to store it in /usr/local/ or you risk clashing files. Well, that is partly correct. However, there are other options. Much like GCC 4.4.x was backported to CentOS 5.x (e.g., the packages that start with gcc44), you can also backport GCC 4.6.2 _and_ the libraries. Red Hat already backported 4.4 (hence the naming above). I went further and backported 4.6.2 and the new libraries.
Before I show how, though, I’d like to answer ‘what is a backport?’. Simply put, CentOS (and other distributions) will backport security updates. It basically means: we take updates and merge them into the old(er) version of the program. Therefore, we don’t update it entirely (and thus risk stability and everything else, e.g., having newer libraries to worry about); we just build it against the libraries we have for that system.
That’s exactly what I did. I did it under a mock chroot (mock is an updated mach, i.e., ‘make a chroot’). I won’t explain too much of the RPM spec file, but I will link to it. Note that if I were to include the source rpm file you’d be downloading it for a long while; therefore you will have to generate the tarballs of the files in it. The rest (the patches) I will throw into a zip file and include here. So, here’s how to do it under CentOS. Note that I only build C and C++, and I disable profiling and java (I didn’t have java installed and didn’t need it either; profiling had some other issues, at least originally; additional changes I made may have fixed that, but I haven’t tested it, as I don’t need it and it is, after all, a backport and not the official package). Note also that it will NOT clash file names. All files that would otherwise be the same (e.g., gcc, g++) have a 46 suffix, so they are invoked as gcc46 or g++46, for instance. I also have the libraries provide the other version, so updates should not cause a problem. You could also just put static libraries under the proper directory (which I do for libstdc++, for example) and include an ld script so that it uses those instead.
For building, I suggest you follow these commands. It will install mock, set it up and then chroot into it. Then it’ll generate the right files and start the build.
First, in the host system do the following as root:
- yum install mock
- usermod --append -G mock username
- su - username
- mock -v --init
- mock --install yum
- mock --shell
I previously noted that the second-to-last command might not be necessary. However, I just confirmed it is necessary later, say, when you want to install some package (e.g., the ones you create). Therefore, the instructions should be correct in all cases. In addition, I realized one other thing (I mentioned it in another post related to backports, but I never updated this until now, 14 April 2012): you may have to install svn first; I added the command to the list below as of today.
Now, you should be in a mock chroot. Do the following:
- yum install svn
- cd /builddir/build/SOURCES
- svn export svn://gcc.gnu.org/svn/gcc/branches/redhat/gcc-4_6-branch@180561 gcc-4.6.2-20111027
- tar cf - gcc-4.6.2-20111027 | bzip2 -9 > gcc-4.6.2-20111027.tar.bz2
- wget http://www.mpfr.org/mpfr-2.4.2/mpfr-2.4.2.tar.bz2
- wget ftp://gcc.gnu.org/pub/gcc/infrastructure/gmp-4.3.2.tar.bz2
- cd ..
- wget http://xexyl.net/rpmbuild/gcc46.tar.bz2 && tar xvf gcc46.tar.bz2
- rpmbuild -ba SPECS/gcc46.spec
If you have everything in place it should build what you need. You can then install the resulting RPMs, which are under ‘/builddir/build/RPMS’.
That’s all there is to it if everything goes well (if it doesn’t, I’m admittedly bad about mentioning ways to get in touch with me, but if I find anything wrong I’ll update this).
Oh, and it goes without saying (but I’ll say it just to be clear) that I won’t be responsible if you do something different or something goes wrong. We all take in information, but it’s up to each of us how we use it. I suggest building and installing in a chroot for a reason: to be sure things do not go wrong. It works fine here, but you should not take that for granted and assume you needn’t be cautious.
The other day I came across something really interesting that I had never heard of. Indeed, upon a quick search, it seems this may be a little-known trick, yet one incredibly useful to (at least) me and (maybe) others. To be fair, the symbols ARE generated; however, they are not kept in the binary itself.
What if you could somehow copy the debugging symbols to a separate file, remove them from the binary and then add a link in the binary? Well, the binary size will decrease by a lot.
Here is an example of file sizes in a program I’m working on (it’s in C++). I had already optimized header files for decreased binary size and given constant strings extern linkage. The reason I use extern linkage is simple: in C++, if you define a constant (e.g., a const std::string) in a header file, a separate copy is emitted in every translation unit that includes it. This can increase object file size quite a lot, and with it the binary size as a whole. Before the header file optimization the binary size was approximately 11MB. After the optimization, I got it down to 7.7MB, even with other code added (quite a difference if you think about it). Still, a similar program in C was only around 3.5MB (both programs with debugging symbols created by the gcc/g++ -g option).
I knew there was an option in the strip(1) command to discard all but needed symbols, and I had a vague memory that there might be a similar option for debug symbols only. However, I found something even more useful, in the objcopy(1) command (which is perhaps why it is less known than it could or should be). Interestingly, it has exactly what we need: a way to move the debug symbols out of the binary and into a separate file, and then add a link to them in the binary.
To simplify things, I’ll say that the executable file is ‘program’ and it’s in the current working directory (CWD). The following three commands will do exactly what you need (assuming you did compile with debug symbols in place):
objcopy --only-keep-debug program program.dbg
objcopy --strip-debug program
objcopy --add-gnu-debuglink=program.dbg program
Now, after that, you have an additional file called program.dbg (which you could have named whatever you want). Remember the binary I said was 7.7MB? Well, even with more code added, the file size is now 2134533 bytes, or in other words 2.1MB. Very nice. Now, as for debugging?
You run the debugger the same exact way. What you’ll see is something like the following :
Reading symbols from /home/user/program…Reading symbols from /home/user/program.dbg…done.
As you can see, the debugging symbols are read in, yet the binary is a lot smaller. Is it perfect? Well, that depends on your definition; if you require debugging capability, then it’s about as close as you’re going to get. If you don’t, then there’s no point in compiling with debugging symbols anyway (especially if you have the source, which you could recompile with symbols if the need ever arises). So, in short, it’s as perfect as it can be. Some may not be bothered by larger files, and that’s fine. But I can think of several uses for this, and that doesn’t even include the fact that the binary’s footprint will be smaller.
This past week someone had some remarks about the Linux version of jAlbum. In particular, there was no file association, nor even the normal icons (outside of the launcher for the program). I’m naturally referring to the GUI version. This of course has a couple of downsides:
- If you try to open a jAlbum file type (say, by double-clicking on it), you get the generic archive manager program. Why? Because it’s technically a ZIP archive. The problem is, if you don’t know where to place the files, you won’t be able to use it. Further, you have to do the work rather than have jAlbum handle the task as it should.
- There are no icons to show the files are actually related to jAlbum.
Well, I knew the problem and gave the person a hint about it. This allowed them to at least right-click on the archive and select Open with jAlbum. That’s only part of it, though. I then decided to automate it; that is, I wanted the RPM and DEB installers/packages of jAlbum to do the work, including file association and icons. So here’s how I did it. This can be used by any software developer, or by anyone who wants to add a file type. It may be your own, or maybe you just like to know as much as you can. Whatever the reason, here is how it’s done. Note that this is the general way; it is not all there is for jAlbum. A while back I helped them with automating RPM (and possibly DEB) tasks. They use Apache Ant for building, so I helped with the Redline setup (and possibly Ant-Deb-Task) for building their RPMs and DEB packages (respectively). So in the case of jAlbum, I had to modify the ant build script again to include this. In addition, I had to add a few new files. That’s what I’m writing about.
Firstly, to use this (whether implementing it, or, for users, installing the package), you need two programs that are very likely already installed: shared-mime-info and xdg-utils from Freedesktop. Anyone running GNOME or KDE should have both. Among other directories, shared-mime-info owns /usr/share/mime and what is under it.
What you need to do is add a file under the packages directory, i.e., /usr/share/mime/packages. The file can be named whatever you want, but it’s best to name it after your program or the general type of file. So, in my case, I named it jalbum.xml; the full path will therefore be /usr/share/mime/packages/jalbum.xml. The contents:
<?xml version="1.0" encoding="utf-8"?>
<mime-info xmlns="http://www.freedesktop.org/standards/shared-mime-info">
  <mime-type type="application/x-jalbum-jaskin">
    <sub-class-of type="application/zip"/>
    <comment>jAlbum skin file</comment>
    <icon name="jalbum-jaskin"/>
    <glob pattern="*.jaskin"/>
  </mime-type>
  <mime-type type="application/x-jalbum-jap">
    <comment>jAlbum property file</comment>
    <icon name="jalbum-jap"/>
    <glob pattern="*.jap"/>
  </mime-type>
  <mime-type type="application/x-jalbum-jaext">
    <comment>jAlbum extension file</comment>
    <icon name="jalbum-jaext"/>
    <glob pattern="*.jaext"/>
  </mime-type>
</mime-info>
It’s fairly simple, but here’s an explanation anyway:
- Specify this is an XML file and has an encoding of UTF-8.
- The root element shall be mime-info; it specifies the namespace.
- mime-type declares a new file type. jAlbum has three file types; this one is application/x-jalbum-jaskin. The types (this one and the others) also appear in the desktop launcher file (under /usr/share/applications). That, however, is another topic entirely and I’d rather not get into it; just examine the files there for examples. The only entry relevant to this discussion anyway is the MimeType= line.
- Comments are shown in the properties of files of this type (e.g., right-click a file and select Properties).
- icon-name is the name of the icon, minus the extension (the icons here are PNG files). The icons will be installed by the post-install scripts of the RPM and DEB packages.
- This line says if the file ends with .jaskin then it is this file type we just specified.
- Closes the first mime-type.
The others are the same, only with different file types. They will all open with jAlbum after it installs and updates the cache.
So how do you update the cache? Very simply, you run the following commands. Given that you are usually installing programs when doing this, it goes without saying that you need root access. So, the commands for post-install:
/usr/bin/update-mime-database /usr/share/mime
/usr/bin/xdg-icon-resource install --size 48 /usr/share/jalbum/icons/JalbumSkin48.png jalbum-jaskin
/usr/bin/xdg-icon-resource install --size 48 /usr/share/jalbum/icons/JalbumDoc-48.png jalbum-jap
/usr/bin/xdg-icon-resource install --size 48 /usr/share/jalbum/icons/JalbumExtension-48.png jalbum-jaext
The first one updates the MIME cache, which means your system should now try to open the types we set up with jAlbum.
The other lines install the icons. For more info on the commands, just see their manual pages (`man update-mime-database` and `man xdg-icon-resource`).
To uninstall, you would need the following commands:
/usr/bin/xdg-icon-resource uninstall --size 48 JalbumSkin48 jalbum-jaskin
/usr/bin/xdg-icon-resource uninstall --size 48 JalbumDoc-48 jalbum-jap
/usr/bin/xdg-icon-resource uninstall --size 48 JalbumExtension-48 jalbum-jaext
I wish I could say that’s all there is to it, but there’s more to those files. However, this is all there is to adding file type associations and custom icons for those file types. Either way, jAlbum now has icons and file associations on Linux. I’m happy, and so is the developer. (And hopefully I did not make any stupid mistakes writing this up. I did check it a few times, but I’m pretty tired, too.)
I know it’s happened to me:
I have a repository enabled for some reason, and yum pulls a package in. Some time later I end up disabling the repository. Now a potential problem has cropped up: yum won’t update the package properly, because the release is different. So the troubling thing is that if you want to update, you’ll need to either remove the package and install it from the proper repository, download the RPM and install it with rpm (which means you’ll likely have to let it overwrite packages and files, not so nice even if doable), or somehow make yum update it. The problem is, if you just uninstall/erase the package you may actually have to uninstall dependencies you’d rather not remove (say, from an enabled repository). This is not always the case, but it is a potential outcome at times.
I had this problem just now. On a whim I thought of and tried something. The key is: downgrade.
If you have package A from repository A that will be removed if you remove package B from repository B, then all you need to do is this (obviously, make sure that package B is available in one of your enabled repositories):
yum downgrade [package(s)]
Obviously, replace [package(s)] with the packages you want to replace with the versions from your enabled repositories.
Of course, make sure everything goes okay; although yum is fairly nice, it’s still only a program that can’t make decisions or know the internal workings of other programs, your system or anything unrelated to it or its database. But after this you can get updates (and that’s a good thing).