Sandy Bridge Woes – LGA1155 P67/H67 and VT-d (I/O Virtualization)

I recently upgraded my home server from an old Athlon X2 to the latest and finest from Intel, namely the new Sandy Bridge processors. These new processors come with a relatively new technology called VT-d (Intel Virtualisation Technology for Directed I/O), also known as an IOMMU. This feature, like the more common VT-x, provides hardware support to improve virtualisation. VT-d does for PCI devices what VT-x does for the CPU, virtualising devices and allowing their DMA to be remapped. Basically, it means you can freely assign PCI-connected devices (like graphics cards or USB host controllers) to your virtual machines. A use case would be to give a VM a real graphics card so it doesn’t have to rely on the slow virtualised graphics adapter.

It’s been a bit of a mystery whether VT-d is a CPU technology or a chipset/motherboard technology, but it’s clear that, like VT-x, both need to support it for it to work. Since the advent of the i7s, various people have been asking for support for this technology, only to find that the main blocker is getting the right motherboard. Some lucky folks got it as far back as the dying days of the Core 2 era. Most i5 and i7 CPUs claim support for VT-d, however P55 and H55 motherboards that supported it were few and far between. The only sure way to find out if your motherboard supports it is to look for the option “Enable I/O Virtualisation” in the Advanced Features section of your BIOS. The situation markedly improved when P57 and H57 came about, with MSI and Asus boards displaying the option. MSI’s H57M-ED65 is one such board; if you look at the downloadable manual, you’ll find the reference. Gigabyte is noticeably quiet on the matter. Armed with this knowledge I went out assuming that newer boards would follow the trend. How wrong I was.

There are special Sandy Bridge CPUs in the form of the 2500K and 2600K. The K CPUs have a higher clock for the integrated graphics, but sacrifice VT-d support. Fortunately this is widely understood and reported, unlike last generation, when confusion between VT, VT-x and VT-d reigned. What nobody clarified was that VT-d is unavailable on pretty much all P67/H67 boards, meaning nobody can actually use VT-d regardless of whether the CPU supports it. In fact, I don’t even know why Intel bothers listing VT-d as a feature when support is so poor. Currently, the only Sandy Bridge (LGA1155) motherboard in existence that supports VT-d is Intel’s DP67BG. I of course didn’t buy that board, and since I’m building a server I required integrated graphics, which P67 doesn’t offer – only H67 does. As far as I can tell, no H67 motherboard currently in existence has VT-d.
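
For what it’s worth, if you do land a board that supports it, a rough way to confirm VT-d is actually active under Linux is to grep the kernel log for DMAR/IOMMU lines (a generic check, not specific to any board):

# dmesg | grep -e DMAR -e IOMMU

If nothing comes back, the IOMMU almost certainly hasn’t been enabled.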

Anyway, this is my rant of the night.

Beginner to GIS and web mapping

Recently I’ve taken an interest in GIS and OpenStreetMap; this blog post serves as a quick guide and a reminder to myself of what I’ve found.

OpenStreetMap, or OSM, is the Wikipedia equivalent for mapping. It already has quite extensive street detail, at least for Australia, aided in part by the use of NearMap and Yahoo imagery. The main detail missing from it is street address numbers, which makes it less useful for address searching and routing. It does have an increasing number of POIs, including restaurants, shopping centres and schools.

One of the first things you’ll find out is that the longitude and latitude of a particular point on earth can differ depending on the spatial reference/datum you’re using. This blog post explains it perfectly. The most popular datum is probably WGS84, the one used by GPSes. In Australia, many maps use GDA94, which is roughly equivalent to WGS84. The difference is that the Australian continental plate moves north-east at around 7 cm per year; in GDA94 coordinates, points in Australia never change, whereas in WGS84 terms every point in Australia is slowly offset by that drift. In 2000, the difference between GDA94 and WGS84 was about 45 cm. Extrapolating, by 2010 it would be around 1 metre, which is still well under the roughly 20 metre error of standard consumer GPSes. For the most part, you can consider WGS84 and GDA94 the same.

There are many more spatial references defined and collected by various groups, the most popular collection being EPSG’s. You can find an extensive list of spatial references at spatialreference.org. Here are some notable references:

  • EPSG:4326 – geographic coordinates on the WGS84 datum (plain lon/lat)
  • EPSG:4283 – geographic coordinates on the GDA94 datum
  • EPSG:3857 (a.k.a. 900913) – spherical Mercator, as used by Google Maps and OSM tile layers

There seem to be two types of coordinate systems – projected and geographic. A geographic system is the actual system, with a datum, used to describe a location in the 3D world, whereas a projection is used to display the 3D world in 2D, converting a sphere or ellipsoid into a flat map. Three popular projections are Plate Carrée, Mercator and UTM. Plate Carrée simply maps lon/lat to X and Y, while Mercator progressively scales the map towards the poles, stretching Greenland north and Antarctica south and theoretically placing the poles at infinity. You can find more projections in ESRI’s ArcGIS documentation.
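
To make the projection idea concrete, here’s a minimal sketch of the spherical Mercator forward projection – the maths behind the stretching described above (the constant is the usual WGS84 semi-major axis):

[js]
// minimal sketch of the spherical Mercator forward projection
var R = 6378137; // WGS84 semi-major axis in metres
function toMercator(lon, lat) {
    var rad = Math.PI / 180;
    return {
        x: R * lon * rad,
        y: R * Math.log(Math.tan(Math.PI / 4 + (lat * rad) / 2))
    };
}

// e.g. Sydney, roughly lon 151.21, lat -33.87
var p = toMercator(151.21, -33.87);
// y grows without bound towards the poles: try toMercator(0, 89.9999)
[/js]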

Once you’ve learnt what these coordinate systems are, you might want to manipulate them. The movable-type.co.uk website has sample code to convert between different spatial references and coordinate systems. The same site also has code to calculate the distance and bearing between two lon/lat points.
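
The distance sample boils down to the haversine formula; here’s a quick sketch of it (my own paraphrase, not their code):

[js]
// haversine great-circle distance between two lon/lat points, in km
function haversineKm(lat1, lon1, lat2, lon2) {
    var R = 6371; // mean earth radius in km
    var rad = Math.PI / 180;
    var dLat = (lat2 - lat1) * rad;
    var dLon = (lon2 - lon1) * rad;
    var a = Math.sin(dLat / 2) * Math.sin(dLat / 2) +
            Math.cos(lat1 * rad) * Math.cos(lat2 * rad) *
            Math.sin(dLon / 2) * Math.sin(dLon / 2);
    return R * 2 * Math.atan2(Math.sqrt(a), Math.sqrt(1 - a));
}

// Sydney to Melbourne, roughly 714 km
haversineKm(-33.87, 151.21, -37.81, 144.96);
[/js]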

There are extensive open source tools available for GIS in the form of the OSGeo collection of tools.

Storage of map objects is available for PostgreSQL (PostGIS) and SQLite (SpatiaLite); both add the OpenGIS standards to the database.

The JavaScript web map control used by OSM is OpenLayers, which supports multiple layer sources, including Google Maps, Yahoo and WMS. Another map display control is MapBuilder, but it seems to have reached its end of life, with the creators recommending OpenLayers. Both are open source.
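
Getting a basic OSM slippy map going with OpenLayers only takes a few lines. This sketch assumes the 2.x-era API and a <div id="map"> on the page:

[js]
// minimal OpenLayers (2.x API) map with an OSM base layer
var map = new OpenLayers.Map("map");
map.addLayer(new OpenLayers.Layer.OSM());

// OSM tiles are in spherical Mercator, so transform the
// lon/lat (EPSG:4326) centre point before using it
map.setCenter(new OpenLayers.LonLat(151.21, -33.87).transform(
    new OpenLayers.Projection("EPSG:4326"),
    map.getProjectionObject()), 10); // 10 = zoom level
[/js]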

That should mostly cover what I’ve learnt thus far.

PHP NTLM integration with Samba

I’ve been sitting on this for ages, but I’m finally using the Christmas break to complete this article about integration between Samba and my PHP NTLM script. The basic idea is that pdbedit -w someuser will emit the NT hash (MD4) of someuser. Using this, we can create a helper utility that verifies NTLMv2 hashes by calling on pdbedit. I’ve named this helper utility verifyntlm. A key requirement for this to work is that the Apache/PHP process runs on the same machine that hosts the Samba user database, which also means this is strictly for Samba, not for Windows servers.

There is one issue though: pdbedit requires root privileges to access the hash, and it would be unreasonable to run Apache/PHP as root. The answer is to setuid verifyntlm to root, so that verifyntlm is elevated to root every time it is executed, by any user. It’s also why verifyntlm is written in C, as setuid doesn’t work for scripts. For this to be secure, verifyntlm must be watertight, or else it could potentially be used for a root privilege escalation exploit. Maximum care was taken when coding it, but as with everything, I can’t guarantee absolute security. The source code is there for all to see though, so feel free to comb through it.

You would only use verifyntlm in conjunction with ntlm.php, as it expects parameters such as the user, challenge, hash etc. which only ntlm.php provides. The program is designed to divulge as little information as possible: it outputs only 1 on success and 0 on failure. This way, in the event that an attacker gains access to the binary, they gain no information that couldn’t already be obtained by brute forcing a login prompt.
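
To give a feel for the interaction, here is a purely hypothetical sketch of the call – the real argument list is whatever ntlm.php and verifyntlm.c agree on, so don’t treat these parameters as the actual interface:

[php]
// hypothetical sketch only: the real parameter order/format is defined
// by ntlm.php and verifyntlm.c, not by this example
$out = shell_exec('/sbin/verifyntlm ' . escapeshellarg($user) . ' '
    . bin2hex($challenge) . ' ' . bin2hex($response));
$authenticated = (trim($out) === '1'); // prints 1 on success, 0 on failure
[/php]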

Prerequisites

verifyntlm.c requires:

  • gcc – To compile the C file
  • openssl (library & headers) – If using Debian/Ubuntu, run apt-get install libssl-dev
  • Samba – Obviously, so we can execute pdbedit

Installation

You may need to modify PDBEDIT_PATH in verifyntlm.c to point to where pdbedit is, if it’s not at /usr/bin/pdbedit.

Log in as root (or add sudo in front) to compile the binary and set the setuid bit:


# gcc verifyntlm.c -lssl -o verifyntlm
# chown root verifyntlm
# chmod u=rwxs,g=x,o=x verifyntlm

Move the binary to a location such as /sbin/:


# mv verifyntlm /sbin

If you put the binary somewhere else, please modify $ntlm_verifyntlmpath in ntlm.php.

Usage

After compiling the binary, drop out of root into a normal user and try running it without parameters. If it prints usage information, you should be set.

On the PHP side, here’s how to use it in your own PHP scripts:

[php]
session_start();
$auth = ntlm_prompt("testwebsite", "testdomain", "mycomputer", "testdomain.local", "mycomputer.local", null, "ntlm_verify_hash_smb");

if ($auth['authenticated']) {
    print "You are authenticated as $auth[username] from $auth[domain]/$auth[workstation]";
}
[/php]

As always, get the source from my php-ntlm github.

Unfortunately, if you’re looking to integrate PHP NTLM with Active Directory, this is not the article you’re looking for. You may be able to achieve it through winbind, another Samba component, but that’s for another time.

MythTV VLC Plugin Now Supports VLC 1.1.5 and MythTV 0.23/0.24

Over the weekend I decided to update the MythTV VLC plugin that I wrote a while ago. The plugin allows you to watch your MythTV recordings in VLC without SMB or any other intermediary – it talks the Myth protocol directly. It’s now updated for VLC 1.1.5 and MythTV 0.23/0.24 and better than ever. It’s still a beta release though, so expect bugs. Some bugs I’ve already noticed are occasional high CPU usage (after seeking) and an occasional crash when stopping playback. Other than that, it’s perfectly usable. This version adds preview pictures for your recordings in the VLC media browser.

Currently it only supports the default storage group. You can’t watch LiveTV yet, but support for it is planned. Only the Windows version is released, but Linux and Mac OS X are possible (it just needs to be recompiled on those platforms).

Download for Windows (0.5)

The source code will be released soon on my github.

PHP NTLM now working with lighttpd/FastCGI

Previously, the PHP NTLM library relied on apache_request_headers, which requires mod_php on Apache. Many setups, including lighttpd, use PHP over FastCGI to execute PHP scripts instead, which meant the NTLM script couldn’t be used – until now. I’ve modified the script to use the HTTP_AUTHORIZATION server variable available to CGI scripts, falling back to apache_request_headers. I’ve also made some minor fixes, including falling back to the hash() function if mhash is unavailable.
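
The gist of the change is something like the following sketch (illustrative only – the exact code is in the repo):

[php]
// illustrative sketch: prefer the CGI-provided server variable,
// fall back to mod_php's apache_request_headers if it exists
function get_ntlm_auth_header() {
    if (isset($_SERVER['HTTP_AUTHORIZATION']))
        return $_SERVER['HTTP_AUTHORIZATION'];
    if (function_exists('apache_request_headers')) {
        $headers = apache_request_headers();
        if (isset($headers['Authorization']))
            return $headers['Authorization'];
    }
    return null;
}
[/php]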

The latest version of the library is available from the php-ntlm github.

Dictionary Model Binder in ASP.NET MVC2 and MVC3

In a decidedly typical turn of events, Microsoft changed the API of BindModel in ASP.NET MVC 2 in a way that breaks DefaultDictionaryBinder. You can no longer enumerate through the ValueProvider; you can only Get a value whose name you already know. I’ve updated the code to work with MVC 2 and also tested it with the new MVC 3 RC.

The code is compatible with ASP.NET MVC 1, 2 and 3. To use it with ASP.NET MVC 1, just define the conditional compilation symbol ASPNETMVC1 and it will enable the MVC 1 code; otherwise it will work with MVC 2 and 3.
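
If you haven’t used it before, hooking the binder up is a one-liner in Global.asax – a sketch, assuming you want it as the application-wide default binder:

[csharp]
// sketch: register DefaultDictionaryBinder as the default model binder
protected void Application_Start()
{
    ModelBinders.Binders.DefaultBinder = new DefaultDictionaryBinder();

    AreaRegistration.RegisterAllAreas();
    RegisterRoutes(RouteTable.Routes);
}
[/csharp]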

The code is now up at github: DefaultDictionaryBinder.cs.

There’s also an example MVC3 project showing the basic functionality of the Dictionary Binder: link

Why is syslog-ng taking up 100% of CPU inside an LXC container

While experimenting with LXC, the Linux container system (which, by the way, is shaping up to be a viable replacement for OpenVZ), I ran into an annoying issue of syslog-ng taking up 100% of CPU time inside the container. Stumped, I tried adding the -d flag to the syslog-ng command line, but it didn’t yield any clues.

Armed with strace and attached to the rogue process, I watched the following spill out of the console again and again.

gettimeofday({1287484365, 501293}, NULL) = 0
lseek(8, 0, SEEK_END)                   = -1 ESPIPE (Illegal seek)
write(8, "Oct 19 19:39:57 login[439"..., 105) = -1 EAGAIN (Resource temporarily unavailable)

The key lines were the lseek and write, both against file descriptor 8. To find out what fd 8 was, all I had to do was ls -al /proc/7411/fd/8 – the culprit was /dev/tty12. Having looked into syslog-ng.conf, I was reminded of the fact that “By default messages are logged to tty12...”. So it seemed tty12 was somehow refusing writes from syslog-ng. Being in LXC, I decided to check out tty12 by doing lxc-console -n container -t 12. To my surprise, syslog-ng instantly unclogged, as the backlog of log messages was released into the console. It looked as if the tty12 buffer had filled up.

Regardless of the reason, the easy fix is to stop syslog-ng logging to tty12, as I’m never going to look at that far-away console. After commenting out the console_all lines, all was fixed. This would probably never have happened if I had used metalog :/
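
For reference, the offending lines in a Gentoo-style syslog-ng.conf look roughly like this once commented out (the destination name and tty may differ on your distro):

#destination console_all { file("/dev/tty12"); };
#log { source(src); destination(console_all); };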

Qemu/KVM sometimes not registering Mouse Clicks when used over VNC

After setting up Qemu/KVM with VNC and fixing cursor positioning issues (with the -usbdevice tablet option), I had an annoying issue of the VNC viewer (TightVNC in this case) sometimes missing mouse clicks. You would quickly click on a button or icon and nothing would happen. If you held the button long enough, the click would eventually register, but I don’t want to hold my button down for a second to make sure every click registers.
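
For reference, the relevant part of my command line looks something like this – the disk, memory and display numbers are placeholders:

# qemu-system-x86_64 -hda disk.img -m 1024 -usbdevice tablet -vnc :1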

After fiddling with the options, I finally found the culprit: the VNC viewer option “Emulate 3-buttons (with 2-button click)”. Turning it off makes my mouse clicks reliable. No idea why, though my guess is the viewer waits after each press to see whether the other button follows, swallowing very short clicks.

Getting the version number of your own Chrome Extension

UPDATE: As pointed out by commenter Andreas, you can now use a simpler way:
chrome.runtime.getManifest().version
The code below is no longer necessary, but is kept for reference.

Following on from yesterday’s post about getting the version number of your own Firefox extension, what if you were now developing a Google Chrome extension and wanted the same thing? Google Chrome’s extension API is much more limited than Firefox’s. There’s no explicit extension-metadata-getting API that I know of. However, we do know that the version information is tucked away in manifest.json. With this knowledge, coupled with a few friendly APIs (XMLHttpRequest & JSON.parse), we can write the equivalent function for Chrome:

[js]
function getVersion(callback) {
    var xmlhttp = new XMLHttpRequest();
    xmlhttp.open('GET', 'manifest.json');
    xmlhttp.onload = function (e) {
        var manifest = JSON.parse(xmlhttp.responseText);
        callback(manifest.version);
    };
    xmlhttp.send(null);
}

// to use
var version;
getVersion(function (ver) { version = ver; });
// version is populated after an indeterminate amount of time
[/js]

As XMLHttpRequest is asynchronous, our method needs a callback to receive the version information. You can also pull whatever other information you want from your manifest.json. So there you go.

Getting your Firefox extension version number

Sometimes you just want to show the version number (or other metadata) of your own Firefox addon as specified in install.rdf – to display on your about page, for example. In Firefox 2 and 3, you can use this function:
[js]
function getVersion(addonID) {
    var extMan = Components.classes["@mozilla.org/extensions/manager;1"].getService(Components.interfaces.nsIExtensionManager);
    var ext = extMan.getItemForID(addonID);
    ext.QueryInterface(Components.interfaces.nsIUpdateItem);
    return ext.version;
}

// usage
var version = getVersion("{my addon id}");
[/js]

Feed in your addon ID and you get your version number back. Inside the method, the variable ext (an nsIUpdateItem) has a lot more metadata properties you can use as well, if you so choose.

However, in the new Firefox 4, the addon APIs have totally changed. The call to get the addon metadata has changed from synchronous to asynchronous, which throws a spanner into the works. There seems to be a tendency to migrate all JavaScript data-access calls to asynchronous ones to avoid freezing the UI. Thus, rewriting getVersion means rewriting all its consumers to expect an asynchronous operation. Here’s the new version, supporting Firefox 2 to 4.
[js]
function getVersion(addonID, callback) {
    var ascope = {};

    // Firefox 2/3: the old synchronous extension manager is available
    if (typeof(Components.classes["@mozilla.org/extensions/manager;1"]) != 'undefined') {
        var extMan = Components.classes["@mozilla.org/extensions/manager;1"].getService(Components.interfaces.nsIExtensionManager);
        var ext = extMan.getItemForID(addonID);
        ext.QueryInterface(Components.interfaces.nsIUpdateItem);
        callback(ext.version);
        return;
    }

    // Firefox 4: the asynchronous AddonManager
    if (typeof(Components.utils) != 'undefined' && typeof(Components.utils.import) != 'undefined') {
        Components.utils.import("resource://gre/modules/AddonManager.jsm", ascope);
    }

    ascope.AddonManager.getAddonByID(addonID, function (addon) { callback(addon.version); });
}

// usage:
var version;
getVersion("{my addon id}", function(ver) { version = ver; });
// you don’t know when version will be populated

[/js]

Generally, in my use cases, I need the version and I don’t want to continue until I have it. Other than rewriting the consumers, the only workaround is to fetch the version number at extension startup and cache it somewhere, so that all subsequent calls can get the cached number synchronously. Such a pain :/
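
In sketch form, the cache-at-startup workaround looks like this:

[js]
// sketch of the cache-at-startup workaround
var cachedVersion = null;

// run once from the extension's startup code
getVersion("{my addon id}", function (ver) { cachedVersion = ver; });

// later consumers read cachedVersion synchronously, accepting that
// it may still be null very early during startup
[/js]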