Archive for the ‘Code’ Category

DRMed TV stream

August 6, 2010

For some time I’ve been living with a friend who now uses Linux on her computer, mostly because she likes the idea of Linux, and because the Ubuntu install breaks less often than the Windows Vista preinstalled on that computer, runs faster and so on, not because she likes coding or digging into the internals to customise things or make obscure things work.  As a result, every so often I’m asked to make this or that work.  This week’s request was to enable her to watch a TV programme she likes, one that is aired on the TVN24 channel.  Since I don’t watch TV, she bought a three-day online subscription to TVN24 after she googled an Ubuntu forums post where someone had success watching TV on Ubuntu.  It turns out the channel can indeed be watched online if you have a Microsoft DRM-enabled player.  It also turns out that mplayer supports Microsoft DRM decoding, using code borrowed from the FreeMe2 open-source DRM decoder.

That is, only if you have the DRM key, specifically the SID, which is a 30-or-so digit number and seems to be unique to the TV channel, or to a group of songs or a movie if you buy them through one of the online services.  It’s as simple as passing -demuxer lavf -lavfdopts cryptokey=<The-SID>.  Apparently the SID doesn’t change very often, so it would probably let you watch the channel beyond the subscription period, but that wasn’t my goal, I just needed an open-source player.  The activation and deactivation of subscriptions is handled using non-cryptographic methods (IOW security through obscurity only).  That means the SID is well hidden in Windows when Windows Media Player downloads the DRM keys, and the method of hiding it seems to be different in each new version.  Windows Media Player also checks that you’re using the latest DRM version every time you download a new DRM license.  I’m not sure if the method of requesting and downloading the licenses also changes, I’d guess it changes less often or is completely standardised.  Unfortunately all the software I was able to find for extracting the SID is based on reading the Windows key storage structures, registry etc. from disk (which change between versions), rather than on interpreting the network communication between the client and the server.  The central place for all this software seems to be the undrm.info website, which has been pretty stagnant since 2008, so none of these programs work with the latest Windows Media Player anymore.
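
For the record, the full invocation then looks something like this (with stream.wmv standing in for whatever encrypted file or stream URL you are playing):

mplayer -demuxer lavf -lavfdopts cryptokey=<The-SID> stream.wmv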

We restored the Windows Vista partition to a bootable state and launched the closed-source player there, but I kept a tcpdump of all the communication that happened between the client and the license server up to the point when it started playing.  The protocol (as of 2008) has been beautifully documented by Beale Screamer, and it seems the dump contains all of the elements mentioned there.  I’d love to implement the algorithms from that documentation to try to calculate the SID, if only I had a week of vacation on a desert island or were retired :)  But if anyone else has time to play with it and wants to achieve undying fame (or an undying subscription to various TV channels), I’ll happily give you the TCP log, a sample encrypted fragment of the wmv stream, links to other places that use MS DRM, and links to existing code and documentation.

Exporting javascript objects

July 16, 2010

gateway.py, the script that is part of my little ofono web-based client which I mentioned before, now lets its clients export their own objects as D-bus objects.  While not very practical, this means a full D-bus service can be written in javascript.  I added this capability mainly because ofono will now start using agent interfaces as part of its D-bus interface, and other daemons, like connman and bluez, already have agents, so you need to be able to export an object to make a UI.

All other types of http requests remain as listed here.  The new (GET) requests are:

  • /path/to/object/export/<n>/ObjectName[;Interface.MethodOrSignalName,<in_signature>[,<out_signature>][;…]] – This looks complex but it just creates a D-bus object with the given name (an arbitrary string, used only internally) and a given list of members on given interfaces.  The member list is ;-delimited, and each member is a signal or a method depending on whether it has an out signature.  Incoming calls with their parameters are sent to the client in replies to the “idle” request, using the same syntax as for signals from remote objects.  In turn, signals from exported objects are emitted using the same syntax as method calls on remote objects.  (See the example after this list.)
  • /path/to/object/Interface.MethodName/return/[<value>[,…]] – Return a (possibly empty) tuple of values to a pending call sent in an “idle” request earlier.
  • /path/to/object/Interface.MethodName/error/<string> – Return an exception / error constant to a pending call sent in an “idle” request earlier.
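
As an example, a hypothetical PIN-entry agent (all of the names and paths below are made up for illustration) could be exported with the first request and, once a pending call to it shows up in a reply to /idle/500, answer it with the second:

  • $ wget 'http://localhost:8000/pinagent01/export/500/MyAgent;Agent.RequestPin,o,s' -O -
  • $ wget 'http://localhost:8000/pinagent01/Agent.RequestPin/return/"1234"' -O -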

Javascript to D-bus, can you hear me

June 1, 2010

(I guess I’d better post something here because it’s been a while.)  So over the last two weekends I made a simple oFono client in javascript, meaning that it’s browser-based or “web-based”.  To do that I needed a way to talk to D-bus over HTTP.  I’ll try to set up a demo instance of the client later, but for now I’ll just mention the HTTP to D-bus gateway.  Even though the whole thing is a hack, maybe the gateway will be useful to someone.  It’s also possible that there are already fifteen similar programs out there, I’ve not really checked.

The idea is rather simple: it’s a 10-kilobyte python script called gateway.py.  You run it in some directory and it starts a primitive web server on a given port using python’s built-in http library, serving the files from the current directory and its subdirectories.  It also understands a couple of fake requests that enable web applications to talk to D-bus.  It connects to the system bus and relays messages to and from D-bus services using the following three types of (GET) requests:

  • /path/to/object/Interface.Method(parameters) – This makes a regular D-bus call to a given method on a given interface of an object.  It’s synchronous and the HTTP response will contain the D-bus response written as JSON.  The D-bus types correspond very neatly to JSON types so the response is easy to use in javascript on the web.
  • /path/to/object/Interface.Signal/subscribe/<n> – This subscribes the application to a given D-bus signal.  Applications identify themselves with a number (<n>); it can be any integer but it should be (reasonably) unique, for example a random number generated when the application loads.
  • /idle/<n> – This just waits for any signal that application <n> is interested in to arrive.  The signal arguments are then sent to the client, again as JSON, in the HTTP response.  This way the browser keeps a socket open to the server and signals are delivered over it.

Here are some example calls to make it clearer, along with their return values:

  • $ wget 'http://localhost:8000/modem01/VoiceCallManager.Dial("5555")' -O -
    '/modem01/voicecall01'
  • $ wget 'http://localhost:8000/modem01/voicecall01/VoiceCall.GetProperties()' -O -
    { 'State': 'active', 'StartTime': '2010-06-01T02:16:34+0200', 'LineIdentification': '5555' }
  • $ wget 'http://localhost:8000/modem01/Modem.PropertyChanged/subscribe/500' -O -
    null

It’s easy enough to write a little javascript class in your code to hide the http stuff away, so you can make plain js calls, get the return values back and have handlers called for the signals.  Also, since ajax requests don’t block waiting for the http response, your application doesn’t become synchronous in any way.

You’ll notice that the interface names are shortened to just the last part of the name.  Since the part before the dot is usually the same as the service name, you can skip it and it’ll be added automatically.  So you can write either /org.ofono.ModemManager or just /ModemManager.

To check out the repository, do:

git clone http://openstreetmap.pl/balrog/webfono.git

It’s python 3 and uses the D-bus and glib bindings, so getting these dependencies installed may be a bit of a challenge at this point.

Wikipedia overlay

October 6, 2009

Last week I set up an overlay for OSM that displays Wikipedia links, completely obstructing the view of the map.  I explained it in more detail in this mailing list posting, but other people have blogged about it so I probably should too.

It’s not like the Google wikipedia layer, because it displays links from OpenStreetMap entities to Wikipedia, not the other way around.  At low zoom levels you’ll only see dots, but if you zoom in to an interesting place there will be roads, rivers, polygon areas etc., all linking to their respective Wikipedia pages.  Only Firefox is supported because I’m not using OpenLayers (but some WebKit-based browsers seem to work some of the time, and so does a commercial browser starting with O).

The goal of this is to get more people using the wikipedia= tag in OSM – if you’ve been making applications with OpenStreetMap data, you’ve surely noticed that people much more readily map features that get visualised somewhere in some way.  It’s also an experiment in a couple of directions: it’s a tiled GeoJSON layer (as opposed to bitmap tiles), which gets us browser caching and seems to be much faster than an area query like the one OpenStreetBrowser uses.  The tiles can be retrieved using JSON-P in addition to xhr.  I’ve also added a kind of “kinetic” zoom – the base map widget is based on Bernhard Zwischenbrugger’s excellent zoom zoom zoom map in place of OpenLayers, which also makes it about 20 times smaller in terms of lines of code.  Zoom beyond the mapnik tile levels is supported too; this may be good accessibility-wise, even though it’s a bad workaround for the default mapnik style rendering names in a pretty small font.

I’ve also set up a http redirect for wikipedia interwiki links and images that saves you one click.  It’s fully described at the OSM forums, but in short: if you only know the German title of a wikipedia page referring to something, you can type http://wp.openstreetmap.pl/de:Bananen and you’ll be redirected to the page about bananas in the language configured in your browser.  http://es.wp.openstreetmap.pl/de:Bananen in turn will send you to the Spanish page about bananas, i.e. http://es.wikipedia.org/wiki/Musa_×_paradisiaca

Morton numbers

August 3, 2009

Long time no posting, but I have excuses (also I’m posting some at openstreetmap user diaries).

So anyway, here’s a cheap trick I came up with, though you might already know it.  If you’re indexing any georeferenced data, such as when doing fun stuff with OpenStreetMap data, you’ve probably wanted to index by location among other things, and location is two- or three-dimensional (without loss of generality assume two, as in GIS).  Obviously you can combine latitude and longitude into one key and index by that, but that’s only good for searching for exact pairs of values.  If your index is for a hash table then you can’t hope for anything more, but if it’s for sorting an array you can do a little better (well, here’s my trick): convert the two numbers to fixed point and interleave their bits to make one number.  This is better because two positions that are close to each other in an array sorted by this number are probably also close to each other on the map.  You could probably use floating point too, if you stuff the exponent in the most significant bits, and get a somewhat similar result.  With fixed point you can then compare only the top couple of bits when searching the array to locate something with a desired accuracy.

Converting to and from the interleaved-bits form is straightforward: you can easily come up with an O(log(number of bits)) procedure (5 steps for 32-bit lat/lon) or use lookup tables, as suggested by the Bit Twiddling Hacks page, where I learnt these are called Morton numbers.  32-bit lat/lon will give you a 64-bit number, and that should be accurate enough for most uses if you map the whole -90 – 90 / -180 – 180 deg range to integers.  Even 20-bit lat/lon (5 bytes for the index) gives you 0.0003 deg accuracy.
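
For what it’s worth, here is a minimal sketch of the mask-and-shift variant (the function names are mine, made up for this post; uint64_t and friends come from <stdint.h>).  One coordinate ends up in the even bits and the other in the odd bits, matching the masks used in the snippets below:

uint64_t spread(uint32_t v)
{
	uint64_t x = v;

	/* Spread the 32 bits of v into the even bit positions of a 64-bit word,
	 * in 5 mask-and-shift steps. */
	x = (x | (x << 16)) & 0x0000ffff0000ffff;
	x = (x | (x << 8))  & 0x00ff00ff00ff00ff;
	x = (x | (x << 4))  & 0x0f0f0f0f0f0f0f0f;
	x = (x | (x << 2))  & 0x3333333333333333;
	x = (x | (x << 1))  & 0x5555555555555555;
	return x;
}

uint64_t morton(uint32_t x, uint32_t y)
{
	/* x goes to the even bits, y to the odd bits. */
	return spread(x) | (spread(y) << 1);
}

Converting back is the same masks applied in the reverse order, with the shifts going right instead of left.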

So what else can you do with this notation?  Obviously you can compare two numbers and use bisection search in arrays or in the different kinds of trees.  You cannot add or subtract them directly (or rather, you won’t get useful results), but you can add/subtract the individual coordinates without converting to normal notation and back; here’s how:

First separate latitude from longitude by masking:

uint64_t x = a & 0x5555555555555555;
uint64_t y = a & 0xaaaaaaaaaaaaaaaa;

Now you can subtract the two numbers directly – the borrows propagate correctly through the unused bits, you just need to mask those bits out of the result:

uint64_t difference(uint64_t a, uint64_t b)
{
	uint64_t ax = a & 0x5555555555555555;
	uint64_t ay = a & 0xaaaaaaaaaaaaaaaa;
	uint64_t bx = b & 0x5555555555555555;
	uint64_t by = b & 0xaaaaaaaaaaaaaaaa;

	return ((ax - bx) & 0x5555555555555555) |
		((ay - by) & 0xaaaaaaaaaaaaaaaa);
}

(you can also use this Masked Merge hack from the page I linked earlier).

The result is signed, in (spread-out) two’s complement, with the two coordinates’ sign bits ending up in the two top bits of the word.

Now something much less obvious is that if you want to calculate the absolute difference, you can call abs() directly on the result of the subtraction and only mask out the unused bits afterwards.  How does this work?  The top bit of (ax - bx) always equals the coordinate’s sign bit, even though ax and bx only use even bits (and the top bit is an odd one), so abs() at least makes the right decision about the sign.  Now, if the number is positive then there’s nothing to do with it.  If it’s negative, abs negates it again (strips the minus).  Conveniently -x equals ~(x - 1) in two’s complement, so let’s see what these two operations do to a negative (ax - bx).  The ~ (bitwise negation) just works, because it inverts all bits, including the ones we’re interested in.  The x - 1 part flips only the bits up to and including the lowest set bit, and you’ll find, although it may be tricky to see, that the lowest set bit of (ax - bx) is always at an even position (and of (ay - by), always at an odd one): it’s the lowest bit at which ax and bx differ, and those only have bits at even positions.  The net effect of the two operations is therefore that everything at or below that bit is kept and everything above it is inverted, which is exactly what negation does to the spread-out coordinate; only the unused bits end up scrambled, and those get masked out at the end.

uint64_t distance(uint64_t a, uint64_t b)
{
	uint64_t ax = a & 0x5555555555555555;
	uint64_t ay = a & 0xaaaaaaaaaaaaaaaa;
	uint64_t bx = b & 0x5555555555555555;
	uint64_t by = b & 0xaaaaaaaaaaaaaaaa;

	return (llabs(ax - bx) & 0x5555555555555555) |
		(llabs(ay - by) & 0xaaaaaaaaaaaaaaaa);
}

Addition requires a little trick for the carries to propagate correctly: just set all the unused bits in one of the two operands (below it’s done in bx and by):

uint64_t sum(uint64_t a, uint64_t b)
{
	uint64_t ax = a & 0x5555555555555555;
	uint64_t ay = a & 0xaaaaaaaaaaaaaaaa;
	uint64_t bx = b | 0xaaaaaaaaaaaaaaaa;
	uint64_t by = b | 0x5555555555555555;

	return ((ax + bx) & 0x5555555555555555) |
		((ay + by) & 0xaaaaaaaaaaaaaaaa);
}
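
As a quick usage sketch (morton() being the interleaving helper from the snippet further up, and position some existing interleaved index value):

	/* Move one grid cell along each axis, staying in the interleaved form. */
	uint64_t neighbour = sum(position, morton(1, 1));

	/* Per-axis absolute differences, also still in interleaved form. */
	uint64_t d = distance(position, neighbour);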

Python stack in GDB

February 9, 2009

I’m sure everyone already knows about this, but it’s such a nice feature I’ll post it anyway.

There’s a set of macros for gdb, described in a comment on this page, that will let you attach to a running python program using gdb and inspect its python call stack and python objects using the familiar gdb interface.  I’m a complete stranger to python and couldn’t figure out how to enable the python debugger, and it would probably have gotten me lost even if I had managed to enable it.  Additionally, I was trying to find out when and why a python program uses a particular syscall, and I’m not sure the python debugger can help with that.  For the record, that python program blocks all signals, so I couldn’t just send it a signal and have it print the stack.
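
In case it saves someone a search: assuming the macros are the ones shipped in CPython’s Misc/gdbinit (which, among others, define a pystack command), a session looks roughly like this:

$ gdb -p <pid-of-the-python-process>
(gdb) source gdbinit
(gdb) pystack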

I’m wondering if you can do the same thing with Java, and who’ll be the first to implement the gdb macros.  I’ve not coded java for years but it makes me want to have a look at it again considering there’s source code for it now (I just wish I had the time). How about swi-prolog?

Practical note: For this to work you will need to rebuild python with debug information in.  If you’re on Gentoo, whose default package manager uses python, and you still have python2.4 installed, you can revive emerge after breaking your python2.5 installation by running it explicitly with python2.4 (python2.4 /usr/bin/emerge blah blah).  To rebuild python with custom options, edit /usr/portage/dev-lang/python/python-2.5.2-r8.ebuild to add --with-pydebug, run ebuild /usr/portage/dev-lang/python/python-2.5.2-r8.ebuild digest unpack, then edit /var/tmp/portage/dev-lang/python-2.5.2-r8/work/Python-2.5.2/Objects/unicodeobject.c to remove the assert on line 372, which seems to be a typo, and finally run ebuild /usr/portage/dev-lang/python/python-2.5.2-r8.ebuild compile install qmerge to let it finish.  You may need to re-emerge some of the packages that have installed into /usr/lib/python2.5/site-packages for your program to work again.

Accelerating in my pocket

June 8, 2008

I started poking at the SMedia Glamo chip in the GTA02 this week.  First I played with the Linux framebuffer driver and later with decoding MPEG in hardware, and now I have some code ready.  I was challenged by messages like this one on the Openmoko lists.  Contrary to the opinion spreading across these messages, we’re not doomed: we still have a graphics accelerator in a phone (which is coolness on its own), and it’s quite a hackable one.

I first had a look at the libglamo code – a small library written some time ago by Chia-I Wu (olv) and Harald Welte (laf0rge) for accessing some of the Glamo’s submodules (engines).  I asked the authors if I could use their code and release it under the GPL and they liked the idea, so I stitched libglamo and mplayer together and added the necessary glue drivers.  This wasn’t all straightforward because mplayer isn’t really prepared for doing decoding in hardware, even though some support was present.  Today I uploaded my mplayer git tree here – see below for what it can and cannot do.  There’s lots more that can be improved but the basic stuff is there and seems to work.  To clone, do this:

cg-clone git://repo.or.cz/mplayer/glamo.git

The Glamo fact sheet claims it can do MPEG-4 and H-263 encoding/decoding at 352×288 at 30fps max, and at 640×480 at 12fps max.  Since it also does all the scaling/rotation in hardware, I hoped I would be able to play a 352×288 video scaled to 640×480 at full frame-rate, but this doesn’t seem to be the case.  The decoding is pretty fast but the scaling takes a while, and rotation adds another bit of overhead.  That said, even when mplayer is not keeping up with the video’s frame-rate it still shows 0.0% CPU usage in top.  There are still many obvious optimisations that can be done (and probably some less obvious ones that I don’t know about, not being much into graphics).  Usage considerations:

  • Pass “-vo glamo” to use the glamo driver. The driver should probably be a VIDIX subdriver in mplayer’s source but that would take much more work as VIDIX is very incomplete now, so glamo is a separate output driver (in particular vidix seems to support only “BES” (backend scaler?) type of hw acceleration, which the Glamo also does, but it does much more too). Like vidix, it requires root access to run (we should move the driver to the kernel once there exists a kernel API for video decoders – or maybe to X).
  • It only supports MPEG-4 videos, so you should recode if you want to watch something on the phone without using much CPU. H-263 would probably only require some trivial changes in the code. For completeness – MPEG-4 is not backwards compatible with MPEG1 or 2, it’s a separate codec. It’s the one used by most digital cameras and it can be converted to/from with Fabrice Bellard’s ffmpeg. A deblocking filter is supported by the Glamo but the driver doesn’t yet support it. For other codecs, “-vo glamo” will try to help in converting the decoded frames from YUV to RGB (untested), which is normally the last step of decoding.
  • The “glamo” driver can take various parameters.  Add “:rotate=90” (or 180 or 270) to rotate – the MPEG engine doesn’t know about xrandr rotation and the two won’t work together.  Add “:nosleep” to avoid sleeping in mplayer – this yields slightly better FPS but takes up all your CPU, spinning.  (See the example invocation after this list.)
  • Supports the “xover” output driver, pass “-vo xover:glamo” to use that (not very useful with a window manager that makes all windows full-screen anyway).
  • Only works with the 2.6.22.5 Openmoko kernels.  There were some changes in the openmoko 2.6.24 patches that disabled access to the MPEG engine, but since we don’t have a bisectable git tree I can’t be bothered to track them down.  UPDATE: A 2.6.24 patch here – note that it can eat your files, no responsibility assumed.  I guess it can also be accounted for in mplayer, will check.  My rant about the lack of change history in git is still valid – while I loved the switch to git, the SVN was being maintained better in this regard.
  • In the mplayer git tree linked above I enabled anonymous unmoderated push access so improvements are welcome and easy to get in.
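
To make the list above concrete, a typical invocation on the phone would look something like this (video.avi standing in for whatever MPEG-4 file you recoded):

mplayer -vo glamo:rotate=90 video.avi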

With respect to the linux framebuffer poking, I wanted to see how much of the text console rendering can be moved to the hardware side, and it seems the hw is not lacking anything (scrolling, filling rectangles, cursor) compared to the other accelerated video cards; the code even already exists in Dodji Seketeli’s Xglamo.  I’m sure sooner or later we’ll have it implemented in the kernel too.  For now I got the framebuffer to use hardware cursor drawing (alas, still with issues).

Bricked! lol

May 28, 2008

Somewhat related to the Phoenix probe landing, I found an amazing bit of information in the Viking mission page on wikipedia (the exams are here again and I’m looking up things on WP and then getting stuck reading completely unrelated stuff and consequently failing exams).  The mission started in 1975, when NASA sent two rockets to Mars carrying four spacecraft, each with an on-board computer based on the RCA 1802 chip (that was a legitimate computer at the time).  All four vessels successfully carried out their missions, but each one failed years later in a different way.  Three of the computers were shut down in ways befitting space travel (physical damage), but the last operating one has this listed as its failure reason: Human error during software update.  Sounds so contemporary.

It’s amazing that a board which left Earth in 1975 could be updated from 100,000,000 km away (some vendors still don’t get it about updates).  Even more amazing is that the discussion of whether (and how) to protect software from the user is still not resolved.  FIC GTA phones are evolving a pattern of writable and read-only memories to become “un-brickable”.  I’m sure that’s partially because it becomes less clear who is the user and who is the developer (like in a NASA mission).  It’s clear that nobody wants their mission to end this way; “a lorry ran over my phone” somehow sounds much better.

OMAP3 resources opened

April 9, 2008

Texas Instruments’ OMAP series of mobile CPUs has for some time had okay Linux support, with parts of the code coming from the community, parts from TI and parts from Nokia, one of the vendors.  This month we’re starting to see the results of TI’s recent efforts to make this support better by opening various technical resources that were previously available only to the vendors.  Yesterday the announcement of their DSP-bridge framework release under the GPL was posted to the linux-omap list, and as of this week you can download the entire TRMs (35MB PDF each) for various OMAP3 CPUs from ti.com, along with various other manuals and example code; this also covers the recently announced 35xx models.

I had the occasion to be at TI’s Rishi Bhattacharya’s talk at BossaConference last month, with a sneak peek at the process of opening OMAP3-related resources that had been ongoing internally for some time.  Apparently more releases are planned, including among other things GPLed sources (and some freeware binaries) of DSP codecs for use on OMAP.  This should also make life a fair bit easier.  Another interesting point was the evaluation board for the new processors, which looks a bit more like a final product than previous evaluation boards.  It’s called the Zoom MDK and it’s sold by a third party.  It includes a modem, an optional battery and a neat case, so it can potentially be used as a (only slightly oversize by today’s standards) phone, and comes equipped with a full Linux SDK.  One of the goals is also to make it affordable enough that individual developers are not excluded (it’s currently only available through a beta programme, but the final price was said to be aiming at below $900).  There’s an effort to have Openmoko running on the thing.  Looking forward to that and to the rest of the releases from TI.

ZoomMDK external view

4: OABI spec

February 10, 2008

Bad news, I’m gonna talk about OABI again.  I just want to write down what I found about it before I forget, so that it gets indexed and a person needing to know something can find it on google.

I was told by a gcc hacker that it was based on the APCS32 ABI, whose specification can be found here.  The specification is however very vague about some parts, and other parts are simply different from OABI, so I’ll point those out and refer to APCS32 in other places, and compare with EABI as well.

Control arrival.  One thing that is not specified at all in either APCS32 or EABI is the program entry point requirements.  These may be system-specific, but the Linux Standard Base has no mention of the ARM entry point either.  The only reference is thus the Linux kernel code; the qemu-arm code is based on it.  The requirements don’t seem to have changed between OABI and EABI, and they’re also pretty much identical to the x86 entry point requirements, which can be found in the SysVr4 docs, modulo some of the tags put on the stack before entry.  They can be found in Linux or qemu and I’m not gonna list them here.

APCS Variants.  The APCS32 document specifies 16 incompatible variants based on four different properties that can each have two possible values.  Linux OABI is the 32-bit case (as opposed to 26-bit), with implicit stack-limit checking (as opposed to explicit checks done in software), with floating-point arguments/return values passed in core registers and on the stack (i.e. FPU registers are not used for that), and it is non-reentrant (except for libraries).

Argument passing.  Registers have the same meanings as in APCS32, with the first four words of the argument list passed in registers and the rest on the stack, with the possibility of a single argument being split between the two.

Floating-point values.  There’s no mention of their encoding in APCS32 but it seems to be the standard IEEE 754 encoding – with a small caveat… Doubles and long doubles have their first 32-bit word swapped with the second word, when compared to EABI or x86 (for example the double 1.0, whose IEEE 754 image is 0x3ff0000000000000, is stored as the 32-bit word 0x3ff00000 followed by 0x00000000, even on a little-endian system).  The same applies to both of the individual doubles inside a double _Complex and inside a long double _Complex.

Return values.  Here we have the same three variants as in APCS32: no return value, return value in register(s), and return value through an implicit pointer passed as arg0.  There is a very tricky difference from APCS32 though: when the second variant is chosen and when the third.  APCS32 recognises something it calls simple types, which it defines as anything that fits in four bytes; anything bigger is returned through a pointer.  In OABI there seems to be a similar idea except that a simple type is defined differently: all C basic types are simple, even if they exceed the word width.  In addition to this, a struct seems to be considered simple as long as it has only a single member whose type is simple (possibly a struct) and not larger than one word.  Arrays are never simple, and unions are simple if all their fields are simple.

The $100 question is: how do you return an object of a simple type wider than one word in registers?  APCS32 allows only r0 to be used for that, but gcc doesn’t mind also using r1, r2 and r3.  So a long long int, double, long double, int _Complex or float _Complex will all be returned in the r0-r1 pair, while double _Complex and long double _Complex get returned in r0-r3.
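
To make the distinction more concrete, here is how I read the rules above for a few declarations (my own reading and made-up names, not verified compiler output):

long long f1(void);		/* basic type wider than a word: still simple, returned in r0-r1 */
double _Complex f2(void);	/* simple, returned in r0-r3 */
struct a { int x; };
struct a f3(void);		/* single simple member, one word: simple, returned in r0 */
struct b { int x, y; };
struct b f4(void);		/* not simple, returned through the implicit pointer passed as arg0 */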

Alignment.  Pointers only have to be word-aligned, and this also applies to the stack pointer on a call.  This is nothing special, but if you want to mix OABI and EABI it becomes a major caveat because EABI requires the stack to be 8-byte aligned on inter-linking-unit calls.  If you forget about it and call an EABI function from an OABI context you will get the strangest and extremely hard to debug results, such as glibc’s sprintf() returning a wrong value, which can be very painful.

Another change that happened at the same time as the OABI to EABI switch in Linux was a switch from setjmp/longjmp based C++ exceptions (the generic, cross-platform way) to a new, faster model (EABI does specify how exceptions should be handled and how stack unwinding works, while APCS doesn’t – this aspect is known as C++ personality across the docs and code).  I am not describing it here.

If any of the above is wrong, please lemme know.