Yenya's World

Wed, 28 Feb 2007

Security Advice

SlashDot has been running a story referring to Eric Allman's article on handling security bugs. I thought: the author of sendmail teaching us about secure software? WTF?

Apparently I was not alone, but the following comment is definitely among the funniest yet still to-the-point comments I have ever seen on /.:

Getting advice on how to handle security bugs in your software from someone who works on Sendmail

It could be worse; it could be advice on how to write readable code from the person who wrote qmail.

What a shame I don't have moderation points today.

Section: /computers (RSS feed) | Permanent link | 2 writebacks

2 replies for this story:

Milan Zamazal wrote:

The article is about handling security bugs, not about secure software, isn't it? And the sendmail author should have a lot of experience with that (unlike many of us who write insecure software too and nobody cares about that fact:-).

Yenya wrote:

Actually what I found funny was the second part of the comment.


Mon, 26 Feb 2007

Superlinear Algorithms

A while ago I wrote about the Email::Address Perl module for parsing addresses in e-mail headers (such as To:, Cc:, etc.). We use it in a production system, but on Thursday we hit a problem, which I narrowed down to an inefficiency in this module. The mail delivery daemon (which - amongst other things - parses the mail message) seemed to be stuck in an infinite loop.

Later I found out that the message in question contained an overly long To: header - there were more than 300 mail addresses in it. I investigated further, and it seems that the Email::Address module has time complexity far worse than O(n):

# of addresses   parsing time
            60       0.13 s
            65       0.13 s
            70       0.29 s
            75       0.45 s
            80       0.79 s
            85       0.58 s
            90       0.86 s
            95       1.63 s
           100       2.38 s
           105       3.17 s
           110       4.02 s
           115       5.97 s
           120      11.37 s
           125     230.89 s
           130    1045.07 s

So it was definitely impossible for it to parse a header with >300 addresses in a reasonable amount of time. The alternative module, Mail::Address, is unusable on this particular message as well - it crashes perl with SIGSEGV :-(.

As a workaround, I wanted to truncate the header somewhere, but in order to truncate the header exactly between e-mail addresses, I would have to parse the header. Which is what I had been doing all along, and it took too much time. A nice catch-22, isn't it? So I invented a heuristic: I truncate the header after a comma followed by a newline. When this is not enough, I use a sole comma as the separator, and when even this is not enough, I truncate the header at a fixed number of characters. So far it seems to be sufficient.

Section: /computers (RSS feed) | Permanent link | 3 writebacks

3 replies for this story:

Adelton wrote: The scale of the problem?

What was the (repeated) address you've tried this on? I got results that are an order of magnitude better: 100: 0.010125s, 1000: 0.505776s, 2000: 4.809914s even on my aging Celeron. This is with 5.8.8 and Email::Address 1.884.

Yenya wrote: Re: Adelton

I have 1.871. As for the input data, I have used the original To: header from the mail in question, shortening it to several lines as required by the test. I can send you the data, if you are interested.

Adelton wrote:

Yup, send me the example. I just tried my email address, multiplied and joined using ', '. Also, looking at the changelog, there are some performance improvements mentioned, so you might want to give the newer version a try.


Fri, 23 Feb 2007

Gentoo Linux

I use J-Pilot as a desktop PIM application for synchronizing with my Palm. A few days ago I found that it apparently leaks memory (J-Pilot bug entry, Fedora bug entry). The response to the J-Pilot bug entry was that the bug cannot be reproduced on the author's Gentoo system, so it must be a Fedora bug. So I decided to install Gentoo in a virtual machine to test J-Pilot under it.

So far the installation was pretty straightforward: I created a virtual disk image, created a filesystem on it, mounted it, unpacked the stage3 tarball and a Portage snapshot into this filesystem, and for the rest I followed the Gentoo Handbook pretty closely (omitting steps which are not necessary in a chroot or virtual machine environment, such as building my own kernel). In fact, since the Gentoo install runs in a chroot environment, the virtual machine is not necessary when all you want to do is compile and test a single GUI application. So I am compiling the packages in a chroot instead of the virtual machine (it is faster that way).
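For the record, the image-and-chroot part boils down to a handful of commands. This is a sketch from memory following the Handbook of that time; the file names, sizes, and mount points are placeholders for whatever snapshot you downloaded:

```shell
dd if=/dev/zero of=gentoo.img bs=1M seek=10239 count=1   # ~10 GB sparse image
mke2fs -Fj gentoo.img                                    # ext3 on the image file
mount -o loop gentoo.img /mnt/gentoo
tar xjpf stage3-*.tar.bz2 -C /mnt/gentoo                 # unpack stage3
tar xjf portage-*.tar.bz2 -C /mnt/gentoo/usr             # Portage snapshot
cp /etc/resolv.conf /mnt/gentoo/etc/
mount -t proc none /mnt/gentoo/proc
chroot /mnt/gentoo /bin/bash --login                     # then emerge away
```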

We have some Gentoo servers here, so I am already somewhat familiar with the emerge command and USE flags. So far I have not run into any big problems, just a few annoyances (some of which may well be user error, as I am new to Gentoo):

Now I am waiting for the compile to finish. While I am at it, I want to explore the inner workings of their startup scripts (rc-update and friends) and other Gentoo specialities as well. I also wonder how they do SELinux and what their default security policy looks like.

Section: /computers (RSS feed) | Permanent link | 6 writebacks

6 replies for this story:

Honza wrote: Use -X

Hi, I think there is something like default USE, which means that you have to put -X if you do not want X server. At least that's what I would try. Best Honza P.S. Thx for this blog!

Yenya wrote: Re: -X

I thought (and the help file for the USE option also says so) that -X means I do not want X support in applications at all, not just no X server support.

finn wrote:

Have you reported these bugs? I mean the wrong linux headers and non-FHS compliancy?

Yenya wrote: Re: reporting bugs

I think the non-FHS compliance (i.e. portage under /usr) is an architectural decision rather than a bug, and as for the iproute2 problem - maybe I did something wrong myself. When I install another Gentoo, I will try to reproduce it, and then maybe I will file a bug report.

asd wrote:

*gentoo default editor is nano not vi!

Yenya wrote: Re: asd

... which makes Gentoo a non-UNIX system.


Wed, 21 Feb 2007

Device Event Handler

A useless but nice hack of the day: I have explored the udev rules a bit further - with udev, it is even possible to run a script when a device node is created. I wrote two simple scripts. One at home, which loads images from my camera after I plug the camera in and stores them into my image repository. The other at work: around 6:30pm I download the main daily news from the Czech radio station Radiožurnál and encode it to Ogg Vorbis, and when my Palm is plugged in, the script started by udev copies the audio file to the Palm. The rule itself is pretty simple:

$ cat /etc/udev/rules.d/60-palm-news.rules
KERNEL=="sd*1", SYSFS{serial}=="50xxxxxxxxxxxxxxxxxxxx39", \
    SYSFS{product}=="palmOne Handheld", \
    SYSFS{manufacturer}=="palmOne, Inc.", \
    RUN+="/usr/local/sbin/news-to-palm"

The tricky part was not to interchange the double "==" with a single "=" by accident, and to use the "KERNEL" parameter - otherwise the script would be run for every virtual device along the path (the USB device, the virtual SCSI controller of the mass storage device, the whole USB disk, and finally every partition on that disk).

Another tricky part is using device nodes from /dev/disk/by-uuid in the mount(8) command, so that the device path remains the same no matter which USB port I plug my PDA into or what other mass storage devices are currently plugged in.

As a user-friendly bonus, the "news-to-palm" script uses notify-send to send a completion message over D-Bus, to inform me that I can unplug the PDA.
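The script itself is not reproduced here, but a hypothetical skeleton could look roughly like this (the UUID, paths, user name, and notification text are my placeholders; the real script also handles the download-and-encode step earlier in the day):

```shell
#!/bin/sh
# /usr/local/sbin/news-to-palm -- hypothetical skeleton, started by udev
MNT=/mnt/palm
OGG=/var/tmp/news-today.ogg            # produced by the 6:30pm download job
mkdir -p "$MNT"
# the by-uuid node is stable regardless of USB port or other plugged devices
mount /dev/disk/by-uuid/0123-ABCD "$MNT" || exit 1
cp "$OGG" "$MNT/audio/" && sync
umount "$MNT"
# completion message to the (known) desktop session over D-Bus
su kas -c 'DISPLAY=:0 notify-send "News" "Copied, you can unplug the Palm."'
```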

UPDATE 2007/02/22: More details
I forgot to mention some important tips:

Section: /computers/desktops (RSS feed) | Permanent link | 2 writebacks

2 replies for this story:

oozy wrote: d-bus

Nice. I would like to see the news-to-palm script. Especially I'm interested in notification via d-bus. Can you provide a link please? :)

Yenya wrote: Re: d-bus

The notification is simple: su kas -c 'DISPLAY=:0 notify-send "Zpravy" "Kopiruji zpravy do Palma..."' (Czech for "News" / "Copying the news to the Palm...") -- however, this relies on knowing that I am user "kas" and that I am always logged in at display :0. I think it should be possible to send the notification over the system D-Bus as well (as opposed to the session D-Bus), but I currently don't know how to do it.


Tue, 20 Feb 2007

IPMI

The main problem with servers is remote access - you should be able to reboot the server (or even change its NVRAM/BIOS settings) remotely, without actually having to walk to the server room. We use various servers from multiple vendors here, so naturally the means of remote access vary. SGI servers have the L1 controller, HP has iLO. For custom-made servers we use a combination of a PC with a multiport serial card for remote access (yes, good mainboards allow access to the BIOS setup over the serial console, too) and a master switch for power-cycling locked-up systems. Now, with bigger systems with redundant power supplies, there is the problem that a single server would use three or four sockets on the master switch alone. This is what IPMI is for.

IPMI (Intelligent Platform Management Interface) is a standard designed by Intel. It adds a small computer/microcontroller to the main server, and this computer handles things like power-cycling the chassis, reading the sensor values, and even a remote serial console. We have ordered two IPMI boards for two of our Tyan-based servers, so I've got a chance to play with it.

Probably the most interesting thing is how IPMI-over-IP works on some Tyan servers: their IPMI board does not have its own NIC, so it uses one of the NICs on the mainboard. With Broadcom-based NICs they patch the firmware in the network card, so that the card relays frames carrying the IPMI board's MAC address to the IPMI board, completely transparently to the running OS. The OS can use the NIC for its original purpose; the firmware just "steals" some frames. I wonder what security nightmares would be possible should an attacker have a way of reprogramming the NIC's firmware.

The good thing is that IPMI can be used from the computer itself as well as remotely. Under Linux, IPMItool is the key utility, together with OpenIPMI at the kernel level. So you instantly have all the sensors available (without hunting for the driver of the sensor chip), the hardware event log, etc.

Probably the ugliest thing is the remote serial console access. Serial (i.e. stream-oriented by nature) access over UDP? WTF? Also, IPMI apparently maintains some notion of a "session" over UDP, so when an IPMI-over-LAN client crashes (yes, ipmitool has its bugs too), the new client has to deactivate the original session first.

Apart from that, IPMI seems to be quite usable. You cannot ssh to it as you can to iLO, and it does not have the nice features of the SGI L1, but it is still worth its price (the SMDC 3921 board is priced around 100 Euro here, IIRC).
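A few ipmitool invocations I found handy, for illustration (from memory, so check ipmitool(1) for your version; the host name and user are placeholders):

```shell
ipmitool sensor               # all sensor readings, locally via OpenIPMI
ipmitool sel list             # the hardware event log (System Event Log)
ipmitool -I lan -H bmc.example.org -U admin chassis power status
ipmitool -I lan -H bmc.example.org -U admin chassis power cycle
ipmitool -I lanplus -H bmc.example.org -U admin sol activate  # serial console
```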

Section: /computers (RSS feed) | Permanent link | 0 writebacks


Mon, 19 Feb 2007

Virtualization Overhead

As a follow-up to my previous article, I want to sum up the speed penalty introduced by the two virtualization systems, KVM and lguest. The measurements are by no means universally meaningful - they are just the exact data I measured. And yes, it sometimes compares apples to oranges - read on at your own risk :-)

The test systems were the following:

Pyrrha
Dual AMD Opteron 2220 (4 cores total, 2.8GHz), 6GB RAM, 4-way RAID-10 disk, gigabit LAN, Gentoo Linux/AMD64.
Terminus
KVM-based virtual AMD64 machine running on Pyrrha, 10 GB disk image, 512 MB RAM, LAN bridged to Pyrrha's main LAN, Fedora 6/AMD64.
Scylla
Pentium 4 3.0GHz HT, 1 GB RAM, single SATA drive, 100baseTx LAN, Fedora Rawhide/IA32.
Glauke
lguest-based virtual IA32 machine running on Scylla, 10 GB disk image, 512 MB RAM, LAN routed/masqueraded to Scylla's main LAN, Fedora 6/IA32.
Test         Pyrrha   Terminus   KVM overhead   Scylla    Glauke   lguest overhead
bc            6.286      5.876       -6.52 %      8.130     8.240        1.35 %
wget          0.441     10.885     2368.25 %      3.732     3.770        1.02 %
tar unpack   15.118     20.322       34.42 %     27.566    40.701       47.65 %
rm -rf        0.538      0.634       17.84 %      0.477     0.640       34.17 %
compile       6.410     21.929      242.11 %    126.005   184.833       46.69 %

The numbers in the above table are in seconds (so lower is better). I ran each test five times and used the lowest of the five times. I did not bother to reboot between tests or to switch the system daemons off.

Description of the tests

bc
time sh -c 'echo 2^1000000|bc >/dev/null'
A simple test of a CPU-intensive job. Why is Terminus faster than Pyrrha? Maybe clock skew inside the guest? Or the Gentoo-compiled bc being slower than the Fedora-prebuilt one?
wget
time wget -q -O /dev/null ftp://ftp.linux.cz/pub/linux/kernel/v2.6/linux-2.6.20.tar.bz2
Network performance. KVM (having to emulate a real-world NIC) is waaay slower. However, Pyrrha has a gigabit NIC, so its baseline is 10 times off. Still, the raw bandwidth for KVM was ~22 Mbit/s, while lguest filled Scylla's 100 Mbit pipe without trouble. lguest could become even faster in the future if it used a bounce buffer bigger than a single page (which is what it uses now).
tar unpack and rm
time tar xjf /usr/src/linux-2.6.20.tar.bz2 ; time rm -rf linux-2.6.20.tar.bz2
A simple filesystem-related test. Nothing to see here (KVM's overhead is a bit lower than lguest's).
compile
make clean; time make modules > /dev/null
A simple part of a kernel compile. Both the architecture and the kernel config differed between Pyrrha+Terminus and Scylla+Glauke, so do not try to compare absolute times between those two groups. Interestingly enough, KVM's overhead was much higher than lguest's.

From a subjective point of view, lguest feels faster, which was totally unexpected. I am looking forward to further development (especially lguest with AMD64 support). Anyway, James Morris did some more in-depth measurements of lguest network performance.

Section: /computers (RSS feed) | Permanent link | 3 writebacks

3 replies for this story:

Adelton wrote: Paravirt being faster unexpectedly ...

I didn't do any exact measures but having worked with both fullvirt and paravirt Xen guests in the last few months, indeed, paravirt feels faster for similar workloads. My naive internal explanation is that the paravirtualized kernel knows how to be nice to the hypervisor, while the vanilla kernel does not care and the hypervisor has to try harder. The nice thing about Xen is (besides it being in FC6) that you can easily switch from paravirtual to fully virtual guests -- just boot a different kernel from the grub menu.

Adelton wrote: vmware

I don't really know whether you're looking for some particular virtualization solution, but you should definitely check vmware as well -- the server is now free as in beer and having the full computer together with BIOS and PXE boot is just nice.

Yenya wrote: Re: (Adelton)

Yes, when I think about it, it looks obvious that paravirtualization should be faster than full virtualization. As for vmware - from my point of view it is even worse than Xen, being available without source code. Xen has at least some prospect of being included in the vanilla kernel.


Fri, 16 Feb 2007

Virtualization

In the last few days I finally managed to take a look at virtualization systems under Linux. In particular, I have tested two of them: KVM, and Rustyvisor (aka lhype aka lguest).

Firstly, the architectural differences: KVM is a full virtualization system - it requires a pretty new CPU with hardware virtualization support (Intel VT or AMD-V). With this support, it allows the computer to be fully virtualized (yes, I even tried to install The Other OS Which Should Not Be Named under KVM, and it worked). On the other hand, lguest is a pretty minimalistic paravirtualization system, which requires slight modifications to both the host and guest kernels - so it is Linux-under-Linux only.

KVM virtualizes the CPU, but to run a complete guest OS, the whole computer needs to be virtualized. KVM uses a patched version of Qemu, which emulates a PC with an NE2k-compatible NIC, a Cirrus Logic VGA card, an IDE drive (with a plain file in the host OS as the backing store), etc. Qemu itself can work as a full PC (and PowerPC and Sparc, too) emulator, but with KVM it just provides the necessary virtual components to the already-virtualized CPU. Qemu supports many file formats for the disk image (including Xen, User Mode Linux, VMDK, and others). It also has the nice feature that when started with the -nographic option, it connects its terminal to the virtual serial port of the guest system, providing a seamless serial console! As for architectures, KVM+Qemu runs on IA32 and AMD64, while the guest system can be 32-bit or 64-bit. The guest system is single-CPU only. Networking can be done using a shared file (so even an ordinary user can run his own set of virtual machines and connect them together), or, with a TUN/TAP interface, the guest can communicate with the host OS as well.
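For illustration, starting a guest the way described above might look roughly like this (a hedged sketch: the binary name differs between distributions and versions - kvm, qemu-kvm, or a KVM-patched qemu-system-x86_64 - and the image path and interface name are placeholders):

```shell
# -hda: plain file in the host OS used as the guest's IDE drive
# -m:   guest RAM in megabytes
# -net nic -net tap: connect the guest NIC to the host via a TUN/TAP interface
# -nographic: attach qemu's terminal to the guest's virtual serial port
qemu-system-x86_64 -hda /virt/terminus.img -m 512 \
    -net nic -net tap,ifname=tap0 -nographic
```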

lguest is still proof-of-concept code, but it has some nice features: it works on older CPUs as well (AMD64 support is being developed, but for now it is 32-bit only), and it can use memory-to-memory communication between the host and the guest (so it does not have to emulate the NE2k card, for example). Subjectively, lguest was faster and more responsive than KVM. The drawbacks of lguest: its virtual disk is a raw file only, and it does not even support partitioning, so existing Qemu images did not work. It does not "boot" the block device per se; instead it starts an external kernel (in the form of a vmlinux file), which then mounts its root filesystem.

I will measure the performance penalty of both virtualization systems and post the results to my blog later.

Section: /computers (RSS feed) | Permanent link | 4 writebacks

4 replies for this story:

Abraxis wrote: XEN

Why don't you evaluate XEN at all? It seems to me as most mature Linux virtualization technology (even with guest SMP support).

Yenya wrote: Re: XEN

XEN is still far from being included in the vanilla kernel, so it is not feasible for me to use it now. If the time allows, I might do some tests as well, though.

Yang Yang wrote: Cirrus driver crashed on Windows XP pro

I use qemu/kvm and installed windows xp pro on a disk image. My host os is Debian Linux 3.1, kernel version is 2.6.18. The installation is fine. But when I try to start it the blue screen appeared which show that cirrus driver entering a infinite loop. I can only start windows in safe mode. Strangely enough, if I turn off kvm, only start qemu, it works fine. I googled this but found nothing. Is it a bug of kvm or qemu or should I install a new kernel > 2.6.20?

Yenya wrote: Re: Win XP pro

I don't know, but the docs mention that for Win XP, qemu should be run without ACPI support. Maybe that is what you need to run it.


Thu, 01 Feb 2007

Girls Everywhere

It seems that these days every university or faculty[*] needs to have an attractive girl on its home page, presumably to attract applicants :-)

banner FI

We had to follow suit, of course :-) I brought my tripod, flashlight, and camera, and we spent some time taking photos. Tomáš then did an excellent job creating the banner from my photos. While the title page and the page for applicants are nice, it still feels kind of weird to see people I know in person in this banner. Oh, and BTW - it is my laptop they are looking at. To make them smile, I even played an episode of Azumanga Daioh on it :-)

[*] the ESF page needs to be reloaded a few times to actually get an image of a girl.

Section: /world (RSS feed) | Permanent link | 4 writebacks

4 replies for this story:

Vasek Stodulka wrote:

When I saw the banner, I thought: such charming people with such an ugly laptop. :-) You should lend them Vlasta's Mac.

Yenya wrote: Mac

We have _intended_ to use Vlasta's Mac. But Vlasta did not come in time, so we had to use my laptop instead. Blame him :-)

Spes wrote: Mac

Hey, don't blame me! :) I didn't know about this photo session :-p

honza holcapek wrote: wow

or should I say wtf? what is the next big thing? advanced php course? :-)


About:

Yenya's World: Linux and beyond - Yenya's blog.
