Due to my current occupation, which involves numerical computations, I have to deal with Fortran in its different flavors. Since in the past I have almost exclusively programmed C/C++ and had never used Fortran, I ran into some nasty bugs, which I want to share here, guessing that there are probably more people in the same situation. Most of these bugs stem simply from projecting the familiar C semantics onto Fortran. Or, to put it differently: the bug sits between the keyboard and the chair.
In this first post I want to start with the following example:
logical :: foo = .false.
! do other stuff
which is compared to:
logical :: foo
foo = .false.
! do other stuff
Looks basically the same, but apparently it is NOT. The problem is that in Fortran every variable that is initialized at its declaration is automatically given the save attribute, which, just to make it clear, corresponds to the static keyword in the C world. To make it even more precise: the first example is the same as:
logical, save :: foo = .false. ! foo's value is persistent across subroutine calls
! do other stuff
Or the same function in C:
static bool foo = false;
// do other stuff
This is mostly a matter of habit, since in C and C++ initialization at declaration is perfectly harmless. When I hit the bug above I luckily had a simple function and a unit test, which revealed it quite fast. Otherwise one could probably keep staring at the code for quite some time, falsely assuming this part of the code to be too simple to fail.
One of the things I had wanted to do for quite some time was building a NAS from the ground up. Several weeks ago the two-year-old Buffalo Pro Duo capitulated, so I finally got my chance. Over time there had been some things I wanted to try, but which the Buffalo NAS, for several reasons, wasn't able to deliver. So there were certain requirements the new NAS would have to meet:
- Due to a design flaw, the Buffalo NAS accessed the hard disks approximately every 20 seconds, which in the end probably led to the disk failure. Hence the new setup should use an SSD for the operating-system partition in order to minimize the wear on the mechanical parts.
- Although the initial setup will only use two hard disks, I want the possibility to expand the RAID5 array in the future.
- The processor should be powerful enough to handle a software RAID.
After searching a bit I found the following combination appealing, and it then also ended up in the NAS:
- Asus E35M1-M with a passively cooled AMD E-350
- beQuiet Straight Power E9 with 400 Watts
- Samsung MZ-7PC in the 64 GB configuration
- 2x Western Digital WD20EARX with 2TB each
- 4 GB Corsair PC1333 RAM
- Xigmatek Midi Tower Asgard
The assembly was straightforward, although one immediately noticed that the midi tower is the cheapest link in the chain. The power supply's cowling protruded slightly, so it didn't fit exactly, but it worked somehow.
In the beginning I was pondering whether to choose FreeNAS or CentOS. Both are probably good choices, but since I don't consider ZFS the holy grail of file systems and I'm a passionate Fedora user, the final choice was the CentOS 6.2 minimal spin. One of the main arguments for CentOS over other Linux distributions was its long life cycle combined with quite up-to-date packages, although, for example, GRUB2 and systemd are missing. The kernel ships as version 2.6.32, which is IMHO a bit old; as it turned out, it doesn't support the CPU temperature sensors out of the box (see below).
During installation I only had to deal with one small problem: at first the installer wouldn't boot from the UNetbootin-prepared USB stick. A look at the ASUS EFI boot menu eventually revealed a second entry for the USB stick, and choosing that one did the trick. From there on there were just some minor bumps on the road to a complete NAS:
- Out of the box, CentOS minimal is configured to use NetworkManager, although NetworkManager is not installed. As described here, one has to edit /etc/sysconfig/network-scripts/ifcfg-eth0 to turn off NM and to enable eth0 on startup.
- As usual, there's always a point where you start struggling with SELinux. In my case this was fixed by calling
chcon -t samba_share_t /path/to/shares
for the samba share directory and
restorecon -R -v /root/.ssh
for the newly created ssh directory (otherwise public key authentication won’t work).
- While doing the Samba setup I searched quite some time for the option that probably everybody wants to use in a small home network:
map to guest = Bad User
which, as it says, maps everybody without proper authentication to the guest account (normally the user "nobody").
- A nice gimmick I wanted to try was AirPrint. I found a script that auto-generates the avahi service files for the installed printers, but as it turned out one also needs to add a further directive to the cups configuration.
- Unfortunately the 2.6.32 Linux kernel doesn't ship any support for the hardware sensors in the E-350 CPU, so I had to install the kmod-k10temp RPM from ELRepo. The sensor data is then available in the /sys/module/k10temp/drivers/pci:k10temp/0000:00:18.3 directory.
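For reference, the NetworkManager fix from the first point amounts to a handful of lines in /etc/sysconfig/network-scripts/ifcfg-eth0; roughly the following (the exact values, in particular BOOTPROTO, depend on your setup):

```
DEVICE=eth0
ONBOOT=yes          # bring the interface up at boot
NM_CONTROLLED=no    # let the network service manage it, not NetworkManager
BOOTPROTO=dhcp
```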
Testing the CPU-Cooling
The thing I was most curious about was whether the passively cooled CPU would hold up under full load, since there's also a PRO version of the motherboard, which ships with an additional fan. So I started twice
and monitored the CPU temperature over time. The result can be seen below:
The first plot on a physicist's blog, and it doesn't even have error bars; shame on me. These numbers are difficult to interpret, as the k10temp documentation stresses that temp1_input is given in almost arbitrary units. But k10temp also provides the following values in the same units:
temp1_max = 70000, temp1_crit = 100000, temp1_crit_hyst = 97000
According to the k10temp documentation the CPU is throttled to prevent damage once temp1_crit(_hyst) is reached, so operating a passively cooled E-350 in a NAS should be safe even under occasional load. At first I was a bit irritated by the temp1_max value, but apparently it is just a dummy value (see the k10temp source).
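Monitoring the temperature over time during such a load test only takes a few shell lines; a sketch (the helper name is mine, the sysfs path is the one mentioned above):

```shell
# sample_temp FILE N: print N timestamped readings from FILE, one per second.
sample_temp() {
    i=0
    while [ "$i" -lt "$2" ]; do
        printf '%s\t%s\n' "$(date +%s)" "$(cat "$1")"
        i=$((i + 1))
        sleep 1
    done
}

# On the NAS, e.g. for a one-minute measurement:
# sample_temp /sys/module/k10temp/drivers/pci:k10temp/0000:00:18.3/temp1_input 60
```

Piping the two columns into gnuplot then yields exactly the kind of temperature-over-time plot shown above.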
So far I’m quite content with the NAS, but there are still some benchmarks that I want to run and some options (e.g. spin-down time of the harddisks) which I want to tweak. Hopefully I’ll find some time to blog about it.
In my last post I described how to circumvent some issues when compiling WordPress 3.2.1 with Hiphop-Php. Unfortunately it turned out that the compiled binary suffered from a memory leak, which took me quite some time to find and fix.
As it turned out, hphp has a regular-expression cache which caches every regular expression indefinitely, so clearing the cache is only possible by shutting down the application. In principle this is not a problem for an application with a limited set of static regular-expression patterns (which should be the case for most applications). But once the regex pattern becomes a runtime option the cache fails. This seems to be due to the fact that hphp compares cache entries by the hash of the regex pattern, and there is no guarantee that two equal but separately allocated pattern strings end up with the same hash. In the specific case of WordPress, the user-configurable date format is mangled into a regex pattern somewhere inside the mysql2date function.
The obvious workaround is to limit the number of cache entries. The specific commit can be found in my hiphop-php branch, which, as the title says, makes the PCRECache a least-recently-used cache. I strongly recommend that those running an hphp-compiled WordPress apply that patch. Feedback is, as always, welcome.
This is a project that I started last weekend, and I just want to share some of the insights I had, because compiling with Hiphop-Php (hphp) is not as straightforward as compiling an application with gcc or clang.
The first thing you realize when looking at the hiphop-php GitHub page is that it has a long list of dependencies, which I wanted to reduce to a minimum. So I ended up forking hiphop-php and adjusting it to my needs: it should work with a minimal set of dependencies and it should be easy to deploy. At the moment my list of dependencies that are not provided by CentOS 5 is down to libevent, curl, oniguruma and libmemcached. I had to sacrifice the ICU SpoofChecker, but since it isn't used by WordPress this shouldn't be a problem. Additionally I've chosen to use the static library versions of these dependencies, because I compile this stuff in a separate virtual machine and I don't want to mess with rpath issues.
Once you get to the point where you have a working hphp and try to compile WordPress 3.2.1, you will notice that the function SpellChecker::loopback won't compile. Introducing a temporary variable fixes the issue:
$ret = func_get_args();
Now you are at the point where you can compile WordPress, but it won't work: some of the SQL queries will fail, and the best workaround I could come up with was to set
$q['suppress_filters'] = true;
So was this all worth it? Given the current viewership numbers of this blog I wouldn't say so, but it was quite fun. According to ApacheBench, this blog is now capable of serving 50 requests per second instead of 10.
At the end some last remarks about hphp:
- Using the mentioned approach generates huge binaries; a normal WordPress blog needs about 40-50 MB. The problem seems to be that some files, especially the dynamic_*.cpp ones, accumulate references to symbols in other files. This prevents the linker from stripping the unneeded sections, because the compiler by default puts all functions of the same source file into one section. There are compiler flags, namely "-ffunction-sections" and "-fdata-sections" in combination with the linker flag "-Wl,--gc-sections", which can change this behavior, but so far I haven't tried them.
- The upstream hphp has some issues with the source files not being present at runtime, see this commit.
- I personally don’t like the idea to have to execute cmake in the root path of hphp
As the blog name suggests, this blog is about some random thoughts of mine. Given the fact that the entropy of these thoughts wouldn’t justify /dev/random, I found it appropriate to call it “/dev/urandom thoughts”.
Those hoping for blog posts on random numbers, encryption or information theory will probably be disappointed, because so far I'm not planning any posts on these topics. The main topics I had in mind while setting up this blog are:
- KDE, especially kwin, related thoughts
- maybe some physics
- All the rest I haven’t thought about yet.
I hope you enjoy reading!