Yenya's World

Tue, 16 Dec 2014

Systemd: ENOENT

I maintain a small software project (about 4k LOC) which is a part of the university infrastructure. It is versioned in Git and installed on several computers across the university. Today I wanted to deploy it on a Fedora 20 machine, which of course is running systemd.

First, about my position on systemd: I think most of the things they are trying to achieve are pretty cool, but sometimes the implementation and design choices are a bit questionable. Anyway, I have written two unit files for my software, even using the unitname@.service template ("wildcard") syntax. The units themselves are OK: systemctl start unitname@instance.service works as expected. The crash landing came when I wanted to enable these units to start after reboot:

# systemctl enable unitname@instance.service
Failed to issue method call: No such file or directory

What's wrong with it? The unit can be systemctl start'd anyway, so the unit files should be OK, shouldn't they? After some hair pulling I have discovered that systemd intentionally ignores symlinks in the /usr/lib/systemd/system directory. Moreover, it just opens the file with O_NOFOLLOW and prints whatever errno it gets from the kernel, which is simply misleading. I think my use case - keeping my own unit files in my Git repository and symlinking them into place - is valid, and there is no reason to disallow symlinked unit files.
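One possible workaround - just a sketch, using the same placeholder names as above - is to copy the unit file out of the Git checkout instead of symlinking it (locally added units arguably belong in /etc/systemd/system anyway), and only then enable the instance:

# install -m 644 unitname@.service /etc/systemd/system/
# systemctl daemon-reload
# systemctl enable unitname@instance.service

This of course gives up the nice property of having a single copy tracked in Git, so the copy has to be refreshed on every deployment, e.g. from a Makefile install target.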

Related Fedora bug reports: #1014311, #955379.

Section: /computers (RSS feed) | Permanent link | 0 writebacks

Sat, 13 Dec 2014

CSIRTs Considered Harmful

OK, I am fed up with the spam coming from local CSIRTs - first from CSIRT MU, and recently even from the CESNET CSIRT.

"The Guild of Firefighters had been outlawed by the Patrician the previous year after many complaints. The point was that, if you bought a contract from the Guild, your house would be protected against fire. Unfortunately, the general Ankh-Morpork ethos quickly came to the fore and fire fighters would tend to go to prospective clients’ houses in groups, making loud comments like ‘Very inflammable looking place, this’ and ‘Probably go up like a firework with just one carelessly-dropped match, know what I mean?’"
-- Terry Pratchett: Guards! Guards!

This is the problem with Computer Security Incident Response Teams (CSIRTs): when they actually have a security incident to handle, they work well. However, security incidents - at least the important ones - are not very frequent. So they tend to overestimate the impact of many so-called security problems, and to keep reminding people of their own existence by spamming them, or even demanding replies.

For example, CSIRT MU monitors the network traffic and sends notifications about "suspicious" traffic. The report is an e-mail with a URL where the details can supposedly be found. On that page there is a partial description of the incident, with the complete description available through yet another link. So instead of opening, reading and deleting a single e-mail, one has to read the e-mail, open the included URL, and follow the link on that page. For example, CSIRT MU sends us notifications about computers in our network "scanning" foreign networks, even though it is clearly visible that the "attack" uses one source and one destination address and lasts only a few seconds - which most probably means that somebody ran nmap against their own remote machine. So CSIRT MU sends us their report through their ticket system, and even demands that we respond in time about the cause (each response gets sent back to us twice - once as a group reply, and a second time through their ticket system). After we explain what is probably going on, their response is not a polite "sorry for bothering you with a false positive, we will refine our detection criteria". The response is "OK, I am closing the ticket", and the next day they send us another false positive.

A few days ago we got another "incident report", this time from CESNET CSIRT. They were notifying us about a new HTTPS server in our network with the Poodlebleed vulnerability. OK, we notified the server owner and got the response "we will eventually look at it, but the same content is available over plain HTTP, and it is only a testing server", which is a perfectly valid response. But CESNET CSIRT thinks they should spam us every day until this so-called "problem" gets fixed.

In my opinion, something like a CSIRT with dedicated staff should not exist (except maybe in the largest companies). The security response people should be regular staff doing their own work, expected to drop that work immediately when a security incident emerges and to work on the incident instead. Dedicated staff have too much time on their hands, and tend to look for opportunities to let people know about their existence - the same way the Ankh-Morpork Guild of Firefighters did.

Section: /computers (RSS feed) | Permanent link | 5 writebacks

Wed, 10 Dec 2014

Apache Reload Bug

Yesterday I discovered something that I suspect is a bug in Apache: we use the same config file for many of our systems, and put the system-specific parts inside <IfDefine> blocks.

When Apache started, it worked as expected. However, after a graceful reload, some instances of Apache apparently started interpreting <IfDefine> blocks even though the corresponding define was not passed on their command line. I even verified this by creating a dummy <IfDefine> block containing a non-existent directive: the Apache server started correctly, but died with a syntax error in the config file after a graceful reload.
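To illustrate, the shared configuration looks roughly like this - the define names and the included file are made up for this example, not our real config:

<IfDefine WEBMAIL>
    # interpreted only when httpd is started with -D WEBMAIL
    Include conf.d/webmail.conf
</IfDefine>

<IfDefine NEVER_DEFINED>
    # canary block: must never be interpreted on hosts started without -D NEVER_DEFINED
    ThisDirectiveDoesNotExist on
</IfDefine>

The second block is the dummy one mentioned above: the non-existent directive is harmless as long as the block is skipped, but causes an immediate syntax error once the block gets interpreted.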

Long story short, I upgraded to the latest and greatest version of Apache, and the problem disappeared. Has anybody seen something similar?

Section: /computers (RSS feed) | Permanent link | 0 writebacks

Tue, 01 Jul 2014

Static Transfer Switch

Static Transfer Switches (STSs) are among the most important parts of the power distribution in our datacenter. Some datacenters are designed with redundant power paths in mind (as required, for example, by the Tier 3 specification). The problem with Tier 3 is that it requires all the equipment to have two or more power supplies. Some appliances (for example, Ethernet switches) are much cheaper with a single PSU - an Ethernet switch with two PSUs is usually from the vendor's top line, and is of course priced as such. We have therefore decided to design our datacenter power distribution with single-PSU equipment in mind.

In our experience, the majority of the power outages in our previous datacenter were either planned, or were caused directly by the failure of the very equipment which was supposed to provide higher availability (e.g. the UPSes themselves). So we have designed the new datacenter to be able to bypass a failed piece of equipment while still providing uninterrupted power, even to equipment with a single power supply.

An STS can be viewed as a box with two incoming power lines and one outgoing line. It monitors the incoming power paths and can quickly switch to the alternate path should the currently used path become faulty, thus keeping the outgoing power line up even when one of the incoming power lines fails. The "Static" part of the name means that no mechanical parts (such as relays) are involved in the switching itself; the switching is done by SCRs (thyristors).

Our STSs are Inform InfoSTS units. Their communication protocol and documentation are pretty bad, so I cannot really recommend them, and their proprietary Windows-only management software is even worse. For example, an attempt to set the time fails when the time is before 10:00, because the management software sends the time as H:MM, while the STS itself expects HH:MM even for hours less than 10. I have nevertheless managed to decode the protocol and write my own web-based management application for it (screenshot above).

Probably the most interesting part is that this is the first time I have used SVG inside a web page, with JavaScript modifying it when new data is read. The schematic can be edited in Inkscape, and as long as the object IDs stay unchanged, the application layer still works with it. I plan to connect it to MRTG or Zabbix and make all the numbers clickable, each leading to a graph of the history of that particular variable.
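The JavaScript part is tiny. Here is a minimal sketch of the idea - the element ID and the JSON endpoint are made up for the example, not taken from the real application:

// "sts-output-voltage" is the ID of a <text> element drawn in Inkscape,
// /sts/status.json is a hypothetical endpoint serving the decoded STS data
async function refresh() {
    const status = await (await fetch('/sts/status.json')).json();
    document.getElementById('sts-output-voltage').textContent =
        status.output_voltage + ' V';
}
setInterval(refresh, 5000);   // poll the STS every 5 seconds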

Section: /computers (RSS feed) | Permanent link | 0 writebacks

Mon, 09 Jun 2014

Politically Correct Media Players

Hello, welcome to today's issue of your favourite "Bashing the Questionable Fedora Desktop Decisions" series. Today, we will have a look at the politically correct media players.

In a civilized world, there is no place for insane things like software patents. Unfortunately, there are less free parts of the world, which include the United States of America. So companies originating in the U.S. are forced to make absurd decisions like shipping audio players which cannot actually play most of the audio files out there (which are, unfortunately, stored in the inferior MP3 format), or video players which cannot play almost any video (video can be encoded in a wide variety of formats, almost all of them encumbered by software patents).

For Fedora, the clean solution would be to have a package repository outside U.S. jurisdiction and offer it as a part of Fedora by default. Such a repository already exists at rpmfusion.org, and it provides everything needed to play audio and video in the free parts of the world, but it is not promoted nearly as much as it should be. Instead, Fedora does something different: it ships empty shells of audio and video players, such as Pragha or Totem, which in fact cannot play most audio and video files. The problem is that these applications shamelessly register themselves as the handlers of audio/mp3, video/h264 and similar MIME types. Only after a media file is handed to them do they start to complain that they do not have the appropriate plug-in installed.
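For the record, on a freedesktop.org-compliant desktop the MIME association can at least be inspected and overridden by hand with xdg-mime - a sketch only, the .desktop file name depends on what is actually installed (audio/mpeg being the canonical MIME type for MP3):

$ xdg-mime query default audio/mpeg
$ xdg-mime default vlc.desktop audio/mpeg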

Hey, Fedora desktop maintainers: stop pretending that US-based Fedora desktops can handle MP3 and H.264 files, and admit that your players - inferior, but free of U.S. software-patent encumbrance - cannot handle these files by default. It would be fair to your users. Fedora users: is there anybody who really uses Totem instead of VLC or MPlayer?

Section: /computers/desktops (RSS feed) | Permanent link | 2 writebacks

Tue, 27 May 2014

MPEG Transport Stream

Today I investigated why the MIME type of some files with the .MTS extension is not detected. Such a file starts with the following bytes:

$ od -tx1 file.mts | head -n 1
0000000 00 00 00 00 47 40 00 10 00 00 b0 11 00 00 c1 00

According to the current /usr/share/magic from Fedora 20, this is quite similar to what the following entry expects:

0       belong&0xFF5FFF10       0x47400010
>188    byte                    0x47            MPEG transport stream data

Also, the shared-mime-info package contains something similar:

<match type="big32" value="0x47400010" mask="0xff4000df" offset="0"/>

Note that both files expect the 0x47 sync byte at the very beginning of the file, not after four NULL bytes as in my example. Yet mplayer(1) can play these files, and ffprobe(1) detects them as "mpegts" with an audio and a video stream. Looking into the ffmpeg source, I discovered that it does horrible things in order to detect a file format. For mpegts, for example, it scans the file for a 0x47 byte at an offset divisible by four and then evaluates some other conditions. Each probe function returns a score, and the file format with the greatest score wins. Ugly as hell, but probably needed for handling real-world data files.

So, what should I do next? Should I submit a patch to file(1) and shared-mime-info to also accept the magic number at offset 4? Are we getting to the point where the already complicated language of the /usr/share/magic file is not powerful enough?
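If I do, the patch could be as simple as an additional magic entry anchored at offset 4 - a sketch only, and whether the file(1) maintainers would accept this form is another question. (These .MTS files are apparently the AVCHD/Blu-ray flavour of the transport stream, where each 188-byte packet is prefixed by a 4-byte timestamp, giving 192-byte packets - hence the sync byte at offset 4 and again at offset 4+188=192.)

4       belong&0xFF5FFF10       0x47400010
>192    byte                    0x47            MPEG transport stream data (4-byte packet prefix)

The shared-mime-info <match> element also takes an offset attribute, and if I read the spec correctly it even accepts a range such as offset="0:4", so a single rule might cover both variants there.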

Section: /computers (RSS feed) | Permanent link | 4 writebacks

Wed, 07 May 2014

GMail Spam Filter

Apparently, the GMail spam filter has become too zealous. I have my own domain, and I run my own SMTP server for it. Now it seems Google has decided to reject all mail from my server:

<my.test.gmail.account@gmail.com>: host
    gmail-smtp-in.l.google.com[2a00:1450:4013:c01::1b] said: 550-5.7.1
    [2a01:...my.ipv6.address...] Our system has detected that this
    550-5.7.1 message is likely unsolicited mail. To reduce the amount of spam
    sent 550-5.7.1 to Gmail, this message has been blocked. Please visit
    550-5.7.1 http://support.google.com/mail/bin/answer.py?hl=en&answer=188131
    for 550 5.7.1 more information. o49si12858332eef.38 - gsmtp (in reply to
    end of DATA command)

On the page mentioned in the rejection, they recommend putting "SPAM" in the subject of forwarded mail :-/ in order to trick GMail into accepting it. But this is not forwarded mail at all; it is mail originating on the same host from which the SMTP client is trying to send it to GMail.

So, are we getting to a world where only Google and a few other big players are allowed to run their own SMTP servers? And after that, they will "suddenly" decide to stop talking to each other, as we have seen in the XMPP case with Google Talk. The moral of the story is: don't rely on services you cannot control for your private data and communication. They will drop your incoming mail as supposed spam, and you will not be able to do anything about it.

Update - Wed, 21 May 2014: Workaround Available

Apparently, this is indeed IPv6-related, and the workaround is either to use IPv4 for GMail, or better, to make Postfix fall back to IPv4 after trying IPv6 first. This way, Google gets the penalty of two connections, and will hopefully have some motivation to fix their problem.

The workaround is described here, and more can be read in the postfix-users list archive (another source). The steps are as follows:

Add the following to /etc/postfix/main.cf:

smtp_reply_filter = pcre:/etc/postfix/smtp_reply_filter

Create a file named /etc/postfix/smtp_reply_filter with the following line (it rewrites Google's permanent 5xx rejection into a temporary 4xx one, so that Postfix defers the message and retries it later):

/^5(\d\d )5(.*information. \S+ - gsmtp.*)/ 4${1}4$2

and reload the Postfix configuration using the postfix reload command.
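The pcre map can be tested without sending any mail by feeding postmap a sample reply line (shortened here); if the rule matches, it prints the rewritten 4xx form:

$ postmap -q "550 5.7.1 for more information. o49si12858332eef.38 - gsmtp" pcre:/etc/postfix/smtp_reply_filter
450 4.7.1 for more information. o49si12858332eef.38 - gsmtp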

Section: /computers (RSS feed) | Permanent link | 4 writebacks

Tue, 29 Apr 2014

The Grand C++ Error Explosion Competition

The daily ROTFL: if anybody still considers C++ to be a sane language, look at this: http://tgceec.tumblr.com/.

Section: /computers (RSS feed) | Permanent link | 0 writebacks

Sat, 26 Apr 2014

Datacenter Power

As some of you may know, I am on a long detour from programming and system administration into the area of civil and electrical engineering, building supervision and datacenter design. Hopefully this detour is nearing its end, as our new faculty building, with its datacenter, is almost ready.

I would like to share some photos of our infrastructure. Here are photos from our power distribution room (image labels are in Czech only, sorry):

And here is the image gallery from our UPS room and its service area. We use a dynamic UPS (DUPS), which does not store its backup energy in lead-acid cells but instead in a huge flywheel, which allows it to bridge the short gap between a power outage and the start of the diesel engine:

More to come in the CVT FI blog, available to those who have access credentials to IS MU.

Section: /computers (RSS feed) | Permanent link | 0 writebacks

Fri, 25 Apr 2014

Buzzword Bingo

And the winner of today's Buzzword Bingo is ...

Project Atomic:

"Project Atomic integrates the tools and patterns of container-based application and service deployment with trusted operating system platforms to deliver an end-to-end hosting architecture that's modern, reliable and secure."

There is even the word "cloud" mentioned somewhere on their home page. I wonder what has happened to hackers and computer enthusiasts, that they are able and willing to put such crap on their home pages. Apparently, the translation of the above is something like "we can run Docker applications under SELinux".

Section: /computers (RSS feed) | Permanent link | 3 writebacks

Thu, 19 Dec 2013

Arduino SCX Digital to USB interface

I have an SCX Digital slot car set, and some years ago I bought an interface box for connecting it to a PC over an RS-232 serial port. The PC can then be used as a timer, lap counter and race management system. Now I wanted to make some modifications to the firmware (it uses an AVR ATtiny2313 chip). I discovered that the author no longer sells this version; it has been replaced by a newer one with USB. So I kindly asked the author whether he could provide me with the source code of the firmware for the old version. I got the following reply:

Hi Jan
Sorry, I do not share any of my software.

Well, whatever. It is of course his choice to keep the firmware of the abandoned version to himself. But in the meantime I have gained some experience with electronics and microcontrollers (see my other projects).

Introducing SCXreader, my own SCX-to-PC/USB interface, built with an Arduino Nano. It is fully open, including the source code of the firmware. It costs about US$ 6.50, way less than the current SCX-to-USB SEB interface.

Section: /computers (RSS feed) | Permanent link | 8 writebacks

Wed, 27 Nov 2013

Proprietary Applications

Welcome to the Rant of the Month series, today about proprietary web applications: the Web is more and more becoming a set of isolated proprietary islands, instead of being the deeply interconnected, well, web. Lots of information, and even some of my friends, are disappearing behind proprietary systems.

For example, I would like to get news from @whatifnumbers, preferably via RSS, but apparently that is not possible. Twitter used to have an RSS export, but it was recently disabled. I, of course, have no intention of using a Twitter account (I think I created one a long time ago, but I have never used it).

Other examples are Google+ and Facebook: how do you stay in touch with friends who have an account on only one of these systems (or on none of them, like myself)? I have managed to create an RSS feed of one of my friends' G+ accounts, but the feed of course contains only the public posts.

We are moving from a world where people develop applications which everybody can install and run themselves (blogging systems, mail servers, web galleries, etc.) to a world where there is only a single instance of each important application, with no possibility of running your own copy.

Section: /computers (RSS feed) | Permanent link | 2 writebacks

Mon, 16 Sep 2013

3D Printer

Apparently 3D printers can nowadays be built for a moderate price, and their quality is improving. There is also a project called RepRap developing an open-source 3D printer (including the design of the components, an Arduino-based controller board, firmware, CAD models, and host software).

There are too many variants to choose from, so I was glad to discover the RepRap Workshop, where it is possible to build and configure the Průša i3 3D printer from the RepRap project under the supervision of somebody who has already built several 3D printers and has lots of experience with them. All the parts and electronics were included in the price of the workshop.

My printer prints correctly, but still needs some configuration tweaking. The last image shows parts of this object from Thingiverse, the open repository of 3D objects. I printed it scaled by 0.7, but the other two parts were too brittle and their pins snapped off. I am looking forward to printing more objects, for example LED lens holders for my Bike Lights project.

Section: /computers (RSS feed) | Permanent link | 2 writebacks

Mon, 01 Jul 2013

Transparent Internet

The times when the Internet was considered a transparent network, relaying any kind of Layer 4 traffic as long as it was properly encapsulated in Layer 3 - the Internet Protocol version 4 (and, recently, version 6) - are apparently gone forever.

The network is not even supposed to look inside the Layer 3 payload, yet some core switches apparently handle a particular L7 protocol in a special way. I wonder whether we are now in a state where TCP, UDP and ICMP are cast in stone, with no way of deploying a whole new L4 protocol, or even a substantial modification of the current L4 protocols (does anybody remember the TCP ECN fiasco?).

With NATs and firewalls being an integral part of the Internet, the situation is probably even worse. Not only are L3 and L4 cast in stone, but the application protocols are as well. These days everybody seems to tunnel their data over HTTP, as this is the only protocol that can be expected to pass through this mess of NATs and restrictively configured firewalls.

So let's hold a minute of silence for the end-to-end transparent Internet, which is apparently gone forever.

Section: /computers (RSS feed) | Permanent link | 0 writebacks

Thu, 30 May 2013

GPS Tracking Systems

I use my smartphone in addition to the cyclocomputer in order to record my speed and later compare the speeds at the same place under various conditions. The problem is: what to use for tracking, and what for reviewing and comparing the recorded tracks?

So far I record the tracks using Move! Bike Computer on my Android phone. It is far from ideal, but at least it stores tracks as GPX files which are accessible directly from the flash storage. It uses 1-second intervals, and as a bonus it can display the track on Google Maps. One drawback is that it sometimes does not switch the GPS on, so the GPS has to be switched on manually from the Android top bar menu. Another drawback is that while it can send the GPX files by e-mail to the desktop computer, it does not remember the preferred export format (GPX instead of KML in my case) or the preferred export method (e-mail using K-9 Mail to a predefined address). So sending tracks from my phone for further archiving is not so easy - but at least it can be done. Yet another problem is the start and end of the track: I usually start the app before leaving home, and stop it some minutes or hours after reaching the destination. The recorded tracks then cannot be easily compared, because their durations vary by tens of percent even though the real time of activity is roughly the same. The auto start/stop feature of the cyclocomputer is much more precise - the GPS always reports at least some movement because of its imprecision and noise.

As for the viewer, the situation is even worse. So far the best I have found is Endomondo (and "the best" here does not imply "good" at all). Endomondo can import tracks in the GPX format, display them on top of a Google map, generate speed and elevation profiles, etc. On the other hand, it is skewed way too much towards training and fitness (computing calories, etc.), and has way too many useless social features. It also has its own proprietary Android app, which makes sending data to Endomondo easier, but with this app it is impossible to get your own data back in an open format. Moreover, when importing GPX data with 1-second granularity, Endomondo resamples it to something much coarser (tens of seconds or even minutes), which makes comparing the speed at a given place pretty meaningless.

What do you use for your sports tracking, and how does it meet your data accessibility and openness requirements?

Section: /computers (RSS feed) | Permanent link | 0 writebacks
