Archive for the ‘OpenMoko’ Category

Accelerating in my pocket

June 8, 2008

I started poking at the SMedia Glamo chip in the GTA02 this week. First I played with the Linux framebuffer driver and later with decoding MPEG in hardware, and now I have some code ready. I was challenged by messages like this on the Openmoko lists. Contrary to the opinion spreading across those messages, we're not doomed and we still have a graphics accelerator in a phone (which is coolness on its own). And it's quite a hackable one.

I first had a look at the libglamo code – a small library written some time ago by Chia-I Wu (olv) and Harald Welte (laf0rge) for accessing some of the Glamo's submodules (engines). I asked the authors if I could use their code and release it under the GPL and they liked the idea, so I stitched together libglamo and mplayer and added the necessary glue drivers. This wasn't entirely straightforward because mplayer isn't really prepared for doing decoding in hardware, even though some support was present. Today I uploaded my mplayer git tree here – see below for what it can and cannot do. There's lots more that can be improved but the basic stuff is there and seems to work. To clone, do this:

cg-clone git://repo.or.cz/mplayer/glamo.git

The Glamo fact sheet claims it can do MPEG-4 and H.263 encoding/decoding at 352×288 at up to 30fps, and at 640×480 at up to 12fps. Since it also does all the scaling/rotation in hardware, I hoped I would be able to play a 352×288 video scaled to 640×480 at full frame-rate, but this doesn't seem to be the case. The decoding is pretty fast but the scaling takes a while and rotation adds another bit of overhead. That said, even when mplayer is not keeping up with the video's frame-rate it still shows 0.0% CPU usage in top. There are still many obvious optimisations that can be done (and some less obvious ones that I don't know about, not being much into graphics). Usage considerations:

  • Pass “-vo glamo” to use the glamo driver. The driver should probably be a VIDIX subdriver in mplayer's source, but that would take much more work as VIDIX is very incomplete right now, so glamo is a separate output driver (in particular VIDIX seems to support only the “BES” (backend scaler?) type of hw acceleration, which the Glamo also does, but it does much more too). Like VIDIX, it requires root access to run (we should move the driver to the kernel once there exists a kernel API for video decoders – or maybe to X).
  • It only supports MPEG-4 videos, so you should recode if you want to watch something on the phone without using much CPU. H.263 would probably only require some trivial changes in the code. For completeness – MPEG-4 is not backwards compatible with MPEG-1 or MPEG-2, it's a separate codec. It's the one used by most digital cameras and it can be converted to/from with Fabrice Bellard's ffmpeg. The Glamo supports a deblocking filter but the driver doesn't use it yet. For other codecs, “-vo glamo” will try to help by converting the decoded frames from YUV to RGB (untested), which is normally the last step of decoding.
  • The “glamo” driver can take various parameters. Add “:rotate=90” to rotate (or 180 or 270) – the MPEG engine doesn’t know about the xrandr rotation and they won’t work together. Add “:nosleep” to avoid sleeping in mplayer – this yields slightly better FPS but takes up all your CPU, spinning.
  • Supports the “xover” output driver, pass “-vo xover:glamo” to use that (not very useful with a window manager that makes all windows full-screen anyway).
  • Only works with the 2.6.22.5 Openmoko kernels. There were some changes in the Openmoko 2.6.24 patches that disabled access to the MPEG engine, but since we don't have a bisectable git tree I can't be bothered. UPDATE: A 2.6.24 patch here – note that it can eat your files, no responsibility assumed. I guess it can also be accounted for in mplayer, will check. My rant about the lack of change history in git is still valid – while I loved the switch to git, the SVN was being maintained better in this regard.
  • In the mplayer git tree linked above I enabled anonymous unmoderated push access so improvements are welcome and easy to get in.

With respect to the Linux framebuffer poking, I wanted to see how much of the text console rendering can be moved to the hardware side, and it seems the hw is not lacking anything (scrolling, filling rectangles, cursor) compared to other accelerated video cards; the code even already exists in Dodji Seketeli's Xglamo. I'm sure sooner or later we'll have it implemented in the kernel too. For now I got the framebuffer to use hardware cursor drawing (alas, still with issues).
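
For the curious, the kernel side of this is just the standard fbdev acceleration hooks. Here is a minimal sketch of how a Glamo-accelerated framebuffer driver would plug into them (the fb_ops fields are the real kernel interface, while the glamofb_* names and their empty bodies are only my illustration, not the actual driver code):

#include <linux/fb.h>
#include <linux/module.h>

/* Hypothetical acceleration hooks -- a sketch, not the real glamofb code. */
static void glamofb_fillrect(struct fb_info *info,
                             const struct fb_fillrect *rect)
{
        /* program the 2D engine to fill rect->width x rect->height
           pixels at (rect->dx, rect->dy) with rect->color */
}

static void glamofb_copyarea(struct fb_info *info,
                             const struct fb_copyarea *area)
{
        /* blit the area from (area->sx, area->sy) to (area->dx, area->dy);
           this is what accelerates console scrolling */
}

static int glamofb_cursor(struct fb_info *info, struct fb_cursor *cursor)
{
        /* upload the cursor image and position to the hardware cursor */
        return 0;
}

static struct fb_ops glamofb_ops = {
        .owner        = THIS_MODULE,
        .fb_fillrect  = glamofb_fillrect,
        .fb_copyarea  = glamofb_copyarea,
        .fb_imageblit = cfb_imageblit,  /* fall back to software image blits */
        .fb_cursor    = glamofb_cursor,
};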

Bricked! lol

May 28, 2008

Somewhat related to the Phoenix probe landing, I found in the Viking mission page on Wikipedia (the exams are here again and I'm looking up things on WP and then getting stuck reading completely unrelated stuff and consequently failing exams) an amazing bit of information. The mission started in 1975 when it sent two NASA rockets to Mars carrying four spacecraft, each with an on-board computer based on the RCA 1802 chip (that was a legitimate computer at that time). All four vessels successfully carried out their missions but each one failed years later in a different way. Three computers were shut down in ways befitting space travel (physical damage), but the last one still operating has this failure reason: Human error during software update. Sounds so contemporary.

It's amazing that a board that left Earth in 1975 could be updated from 100,000,000 km away (some vendors still don't get it about updates). Even more amazing is that the discussion of whether (and how) to protect software from the user is still not resolved. FIC's GTA phones are evolving a pattern of writable and read-only memories to become “un-brickable”. I'm sure that's partially because it becomes less clear who is the user and who is the developer (like in a NASA mission). It's clear that nobody wants their mission to end this way; “a lorry ran over my phone” somehow sounds much better.

Unscientific GPS note

April 28, 2008

Last week I charged the different batteries and took a GTA01 Neo, a GTA02 Neo and a Nokia N810 with me to enable their GPSes on my way home from school. Then I saved the traces they logged and loaded them into JOSM to have a look (GTA01, GTA02, N810 – GPX files converted from NMEA using gpsbabel).

The devices logged routes of 11.28km, 12.12km and 11.07km respectively (while sitting in the same bag the whole time).

All in all I like the GTA01's accuracy the most, although all three sometimes have horrible errors. All three have accuracy near the bottom line of usability for OSM mapping in a city, so if you get a GPS with that in mind, it may be slightly disappointing. All three are quite good at keeping a fix while indoors, but every time there isn't enough real input available they will invent their own rather than admit it (if you had physics experiments at high school and had to prove theories that way, you know how this works), resulting in run-offs into alternative realities – the N810 especially likes to make virtual trips. They all apparently do advanced extrapolation and most of the time get things right, but the GTA01's GPS (the Hammerhead) very notably assumes in all its calculations that the vehicle you move in has a certain inertia, and treats tight turns as errors. I'm on a bike most of the time and can turn very quickly, and it feels as if the firmware was made for a car (SpeedEvil thinks rather a supertanker).

It's surprising how well all three can determine the direction in which they're pointing even when not moving (the GTAs more so). The firmwares sometimes seem to rely on that more than on the actual position data. This results in a funny effect: the errors they make are very consistent even if very big – once the GPS thinks it's on the other side of a river from you (or worse, in the middle), it will stay there as long as you keep going along the river.

I'm curious to see what improvement the Galileo system brings over GPS.

UPDATE: I was curious about the precision with which the altitude is reported, which can't be seen in JOSM. First I found that the $GPGGA sentences on my GTA01 always have 000.0 in the elevation field, but the field before it (normally containing HDOP) has a value that kind of makes sense as an altitude, so I swapped the two fields (the HDOP value should be < 20.0, I believe?). Then I loaded the data into gnuplot to generate this chart:

The horizontal axis is longitude and the vertical axis is elevation in metres above mean sea level. Err, sure? I might have screwed something up but I checked everything twice. Except for the GTA01, which might be reporting a different value completely – but there is some correlation. I'm not sure which one to trust now.
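
For reference, the field swap itself is only a few lines of work. A throwaway filter along these lines (my own sketch, assuming the usual $GPGGA layout with HDOP in field 8 and the altitude in field 9, and ignoring the fact that the NMEA checksum becomes invalid afterwards) is enough to fix up the log before feeding it to gnuplot:

#include <stdio.h>
#include <string.h>

int main(void)
{
    char line[256], *field[32];

    while (fgets(line, sizeof(line), stdin)) {
        if (strncmp(line, "$GPGGA", 6)) {
            fputs(line, stdout);        /* pass other sentences through */
            continue;
        }

        /* split in place on commas, keeping empty fields */
        int n = 0;
        field[n++] = line;
        for (char *p = line; *p && n < 32; p++)
            if (*p == ',') {
                *p = '\0';
                field[n++] = p + 1;
            }

        if (n > 9) {                    /* swap HDOP (8) and altitude (9) */
            char *tmp = field[8];
            field[8] = field[9];
            field[9] = tmp;
        }

        for (int i = 0; i < n; i++)
            printf("%s%s", i ? "," : "", field[i]);
    }
    return 0;
}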

Trip

April 21, 2008

So I went to Brazil last month but had no time to put any pictures online; now I've uploaded them here. I also uploaded some pictures from a trip to Spain that was just before that.

Brazilian cities reminded me a lot of Peru, which was the only other place I had seen in the Americas (this comparison must seem awfully ignorant to anyone who lives somewhere between Brazil and Peru). We spent one week in the Ceará region seeking out the best places for paragliding. One of the spots was the launch pad near the Nossa Senhora Imaculada sanctuary near Quixada, where the “Sol” group (Brazil) took off last year and set the current world record for straight-distance paraglider flight, landing over 460km away. (Obviously this was a different season and incomparable weather conditions.) I made an attempt to adapt my Neo1973 Linux phone to double as a variometer using the altitude data from the built-in GPS. Impressively, the measurements are somewhere on the edge of being accurate enough for that purpose, but the time resolution is way too low (normal variometers use air pressure changes rather than GPS). The speaker is loud enough to emit the familiar beeping of a variometer (so good enough for showing off, even if inaccurate). The GTA02 should be much better with its 3D accelerometers, but I didn't have time to play with it yet.

The second week the group split up and I went to the Bossa ’08 conference, which was in a fantastic setting and from which I brought home a collection of five geeky t-shirts.

OMAP3 resources opened

April 9, 2008

Texas Instruments' OMAP series of mobile CPUs has for some time had okay Linux support, with parts of the code coming from the community, parts from TI and parts from Nokia, one of the vendors. This month we are starting to see the results of TI's recent efforts to improve this support by opening various technical resources that were previously available only to vendors. Yesterday the announcement of their DSP-bridge framework release under the GPL was posted to the linux-omap list, and as of this week you can download the entire TRMs (35MB PDF each) for various OMAP3 CPUs from ti.com. Added to this are various manuals and example code, also covering the recently announced 35xx models.

I had the occasion to be at TI's Rishi Bhattacharya's talk at BossaConference last month, with a sneak peek at the process of opening OMAP3-related resources that had been ongoing internally for some time. Apparently more releases are planned, including among other things some GPLed sources (and some freeware binaries) of DSP codecs for use on OMAP. This should also make life a fair bit easier. One of the interesting points was the evaluation board for the new processors, which looks a bit more like a final product than previously made evaluation boards. It's called the Zoom MDK and it's sold by a third party. It includes a modem, an optional battery and a neat case, so it can potentially be used as a (only slightly oversized by today's standards) phone, and it comes equipped with a full Linux SDK. One of the goals is also to make it more affordable so that individual developers are not excluded (it's currently only available through a beta programme, but the final price was said to be aiming at below $900). There's an effort to get Openmoko running on the thing. Looking forward to that and to the rest of the releases from TI.

ZoomMDK external view

3: Getting gllin to run

January 27, 2008

I was going to make a small trip this weekend but I missed my plane and have to wait until next week. But that means I already have a good excuse for not spending the weekend studying for this week’s exams and I have finally put the time into making gllin behave under Schwartz.

Gllin is a closed-source driver for the Global Locate (now Broadcom?) GPS known as Hammerhead, and it's been said it didn't work when the folks compiled it for ARM EABI (i.e. what is used on most ARMs currently), so they only released the OABI binary (OABI being the ad-hoc ABI that was used on Linux until ARM came up with a standard ABI and hired people to implement it). So the downloadable gllin package comes with an OABI rootfs which will run under chroot if you have OABI support in your kernel. It seemed wrong to me to have a second rootfs on my phone just to run a single program, and it has several other drawbacks.

With the Schwartz loader/linker you can run OABI-compiled programs natively on Linux systems that use different ABIs. This is achieved through the translation of library calls that I mentioned previously. Schwartz is by no means complete, and more than anything it's a proof of concept, but it seems to be usable, and today my Neo1973 had an actual 3D fix and gave me real coordinates as well as satellite time/date and other info. I took my Neo for an excursion to the shopping mall (not so much to show off, but) to make my first GPS trace for OpenStreetMap. It ran quite stably for the whole 2h and I uploaded the trace here. So here's how to use it.

Download the schwartz binary from here or here (minimal version). The sources are in this git tree, but building them is not exactly straightforward. Upload the file to your Neo1973 (or qemu-neo1973). Also upload the gllin binary if you don't have it there already. In the openmoko package the binary is named gllin.real, because gllin is a wrapper script that runs the whole chroot thing. You only need the “.real” binary. You can also safely leave OABI support out of your kernel. Next, make the named pipe for your NMEA data, the same way the openmoko package does. After that we're ready to run gllin and then your favourite GPS software.

 $ mknod /tmp/nmeaNP p
 $ cat /tmp/nmeaNP | gzip >> /home/root/gps.gz &
 $ ./ld4 --depnofail --weakdummies --settargetname --noinit gllin -low 5
 $ ./ld4 --depnofail --weakdummies --settargetname --noinit gllin -periodic 2

You can modify the scripts from the package to do all that. ld4 is quite verbose and will print lots of stuff to the console, which just shows how far it is from completeness. The minimal ld4 differs from the full binary in that the “strace” code is not compiled in. With the full binary, if you append --trick-strace to the command line options you will get a strace-like (but prettier!) log of all functions being called and their parameters. This may potentially be useful for the folks reverse engineering the Hammerhead protocol, but I'm not really sure. In the ld4 output you can see a lot of debugging messages and other output that gllin doesn't normally print. I have not noticed any anomalies when running gllin under Schwartz, but it's entirely possible that the floating-point precision is reduced or something else is broken. gllin is a pretty tough test case for the ABI translation thing for various reasons: all the floating-point arithmetic, heavy usage of memory/files/sockets, C++ libraries, C++ exceptions, real-time constraints and more.

Among the other things Schwartz enables is running gllin without root privileges (chroot normally requires them). Also an interesting thing to do is to compare the strace (the real, traditional strace) output of gllin running under a chroot with an OABI-compiled libc, and the strace output of the same gllin running under Schwartz and using the EABI libc. You'll see two different sequences of syscalls being made, but with pretty much the same end effect.

I probably won't have time to hack on Schwartz further, but improvements from others are welcome. I just wish I had the thing running earlier – ironically I already have a GTA02 on my desk, and the GTA02 has a different GPS chip in it which needs no driver on the OS side. There's very little time left till the mass production and selling of the GTA02 starts and gllin slides into oblivion. (It seems that the TomTom Go uses the same or a similar driver, though.)

2: ABI translation

January 4, 2008

First, why would we want to do that? Most architectures have a single popular ABI accepted by the kernel and supported by binutils; on Linux this is usually the System V R4-defined ABI. This is the case for i386. x86-64 also has a single standard ABI based on the i386 one, but it's not a System V standard because System V doesn't seem to have one for x86-64 yet. The ARM case is different because there is more than one ABI in use, and you can get a mismatch when pairing user-space and kernel images or libraries for a program. The older, unstandardised one is called OABI, and Schwartz can (attempt to) translate between OABI calls issued by an OABI-compiled program and whatever ABI the host uses. This is enabled automatically when an OABI executable is detected; no command line switch is needed.

Why does it seem this hasn't been done before? Because it's non-trivial. Currently people resort to an entire OABI rootfs sitting in a subdirectory of the host rootfs and chrooting into it if they need to run an OABI binary on a system that uses EABI.

Why is it non-trivial and how does Schwartz do it? In a nutshell, if an executable is compiled with a different ABI than the host, we need to translate everything that's being passed between the program and the libraries it uses (this assumes the executable is dynamically linked and issues no syscalls directly – otherwise only the syscalls would have to be translated, but that cannot be done in user-space so we're not concerned with it), and the format of this interaction is precisely what ABIs define. Two types of interaction occur that I know of: through data and through control. Control is always passed to and from libraries in the same way, through jumps a.k.a. branches, and there isn't any room for differences between ABIs, so we'll concentrate on the data. Data is passed on various occasions. I will divide all the data interaction into three parts:

  1. static chunks of data shared between program and library. This means mainly global variables, in terms of a C program. The format of a variable depends on its type and the ABI. The most basic types are always encoded the same way, while data types constructed of sub-elements, like structs, have a layout governed by the ABI. The ABI usually specifies how elements are packed inside an object and there may be important differences between ABIs (a small example follows this list). Fortunately global objects are not usually shared by libraries, and those that are, are almost always of simple types, so we don't perform any translation. In addition it would be very difficult, because we would have to react to every access to such variables, and in some cases it is completely impossible, for example for C union types, because the data has more than one interpretation and we can't tell which interpretation is used in which access.
  2. on program entry. Entry happens only once, when the control is passed to the program at start and is accompanied by some data being passed too (for example the command line arguments). This part is easy because we can have a separate entry for each ABI, and some ABIs just don’t specify any requirements for the entry point (this is the case of OABI and EABI, and the Linux implementation is exactly identical for both of them). So currently there’s only one main() call per architecture in Schwartz.
  3. on function calls. This is responsible for the biggest part of ABI translation in Schwartz. A function call between a program and a library is accompanied by data being passed both ways: from caller to callee in the call arguments, and from callee to caller in the return value. We will see below that a library can be both a callee and a caller, for different functions. Function parameters as well as their return values can be passed differently depending on the ABI. The ABI usually specifies when and which parameter values (or parts of them) are passed in registers (of the CPU or FPU), which are marshalled on the stack, and possibly which are passed as pointers. They can also have different types, ranging from simple to compound, where the packing is important again, as it was in 1.
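
To make the packing differences from point 1 concrete, here is a small standalone illustration (my own, not Schwartz code) of the best-known OABI/EABI mismatch: the old ARM ABI aligns 64-bit members to 4 bytes while EABI requires 8, so the same C struct ends up with a different layout depending on the toolchain.

#include <stdio.h>
#include <stddef.h>

struct sample {
    int tag;            /* 4 bytes */
    long long value;    /* 64-bit member: offset 4 under OABI, 8 under EABI */
};

int main(void)
{
    printf("offsetof(value) = %u, sizeof(struct sample) = %u\n",
           (unsigned)offsetof(struct sample, value),
           (unsigned)sizeof(struct sample));
    /* An OABI toolchain prints 4 and 12, an EABI one 8 and 16.  This is
       the kind of mismatch a translator would have to fix up for shared
       data, and part of why Schwartz leaves global objects alone. */
    return 0;
}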

How does Schwartz handle function calls to different ABIs? We simply make a wrapper for every library function that we suspect may be used, and we resolve function symbols to our wrappers instead of the original functions. Again this is not a generic solution if we want to load arbitrary executables, but in practice it is good enough. If there is an executable that uses symbols we don't have a wrapper for, we can easily add information about the new function and recompile. The information is generated automatically based on system headers and a list of symbol names (and the list is extracted automatically from a list of executables). Such a wrapper accepts parameters in the program's ABI format, adapts them to the library ABI if needed and calls the real function, passing the same parameters but in the library's ABI. The same has to be done with the return value, just in the reverse order.
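
As a rough picture of what one generated wrapper boils down to, here is a hand-written sketch (with made-up names like lib_sum, and glossing over the register-level details the real, generated wrappers have to deal with): accept the arguments the way the foreign-ABI program laid them out, repack them for the host ABI, call the real function and convert the result back if necessary.

/* The foreign (OABI-like) layout, emulated here with a packing attribute,
   versus the host's natural layout; compare the struct example above. */
struct sample_foreign { int tag; long long value; } __attribute__((packed));
struct sample_host    { int tag; long long value; };

/* The real library function, linked normally into the host process;
   'lib_sum' is hypothetical. */
extern long long lib_sum(struct sample_host s);

long long wrap_lib_sum(struct sample_foreign s_foreign)
{
    struct sample_host s;

    s.tag   = s_foreign.tag;            /* repack field by field */
    s.value = s_foreign.value;

    /* A plain 64-bit integer return value needs no conversion; a returned
       struct or function pointer would have to be translated back the
       other way. */
    return lib_sum(s);
}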

But here's the trick: a function pointer is also a data type, so it can be passed as a parameter or a return value from a library function, and we have to handle it very carefully. Example library functions that take a function pointer as a parameter are signal(), qsort() or __libc_start_main() (specified in the Linux Standard Base). An example of a function that returns a function pointer is signal() again. So how do we handle translation of the function pointer data type? We have to generate a wrapper for every value passed that is a function pointer, and since there may be different such values passed in successive calls to the same function, we have to do it dynamically at run-time, for every value separately. Fortunately there's only a finite number of such values, because the only valid values are those that point at functions in the program (plus optionally NULL, which we pass intact), and there is a finite number of functions; they aren't generated dynamically. Now the wrappers will be of two types: those for parameters and those for return values. To see the difference between these two, let's look at what the callee can do with the value it is passed in a parameter, and what a caller can do with a value it gets returned from a call. It can do two things:

  1. It can make a call to the function pointed to by the function pointer. If we're the callee and we got a function pointer in a parameter, we will want to make the call in our ABI, while the function was passed from the caller and so expects parameters in the caller's ABI, so we need translation again. But this time the callee (we) becomes a caller and the target of the call is a function passed from the other ABI, so the translation needs to be in the reverse direction. If we are the library and the caller was the program, we now need a wrapper that translates from the library's ABI to the program's ABI. The converse case is easier: we're now the caller, we called a function and it returned another function pointer. The function pointed at will expect parameters in the callee's ABI, so the translation occurs in the “same direction” as before.
  2. It can remember the value somewhere and the value can later be returned or passed as a parameter back to the other side. Since the function pointer is a value we got in a return or in a parameter, we know that it is already wrapped appropriately by Schwartz. But we are now passing it back to the other side, precisely where it came from. If we follow the logic from 1. we will be unnecessarily wrapping it again (wrapping the wrapper) in a translator of the opposite direction. Schwartz has to notice the double wrapping, “annihilate” the two translators and just pass the original pointer, in order to prevent us DoS'ing ourselves by generating an infinite series of wrappers. To see this better, here's an example of when this happens in a piece of C:
    sighandler_t original_handler;          /* Function pointer */
    ...
    /* Let's setup a handler for SIGUSR1 */
    original_handler = signal(SIGUSR1, &my_sigusr1_handler);
                                            /* External function is being returned,
                                               it is wrapped in an ABI translator,
                                               so that we can safely call it (but
                                               we don't in this example).  */
    ...
    /* Let's restore the original handler */
    signal(SIGUSR1, original_handler);      /* The wrapped external function is
                                               being passed as parameter, normally
                                               it would be wrapped again so that the
                                               callee can safely call it.  But
                                               instead we "unwrap" it and we get the
                                               same effect.  */

The bottom line in 1. is that if we decide to do ABI translation from ABI X to Y, we also have to translate from Y to X occasionally, so they are tied together, and we have to be able to do both things dynamically. In 2. the bottom line is that we also need to cache pointers to untranslated functions. If we add to this the fact that pointers can point to functions which themselves have function pointers as parameters or return types (see man xdr_union(3)), that struct or array elements can be function pointers too, and that there can be a variable number of parameters of unknown types, we get a pretty complex task.
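
The bookkeeping needed for 2. can be pictured as a small cache that remembers every function pointer already wrapped. This is only my sketch of the idea: make_trampoline() is a hypothetical stand-in for the run-time wrapper generation, and a real implementation would also have to track which direction each trampoline translates.

typedef void (*fptr_t)(void);

/* Hypothetical: emits a piece of code that accepts a call in one ABI and
   forwards it to 'target' in the other ABI. */
extern fptr_t make_trampoline(fptr_t target);

struct fp_entry {
    fptr_t original;    /* the pointer as the other side knows it */
    fptr_t trampoline;  /* the wrapper we generated for it */
};

static struct fp_entry cache[256];  /* bounds checking omitted */
static int cache_len;

fptr_t translate_fptr(fptr_t p)
{
    int i;

    if (!p)                             /* NULL passes through intact */
        return NULL;

    for (i = 0; i < cache_len; i++) {
        if (cache[i].trampoline == p)   /* already one of our wrappers: */
            return cache[i].original;   /* "annihilate" instead of wrapping
                                           the wrapper */
        if (cache[i].original == p)     /* seen before: reuse the trampoline */
            return cache[i].trampoline;
    }

    cache[cache_len].original = p;
    cache[cache_len].trampoline = make_trampoline(p);
    return cache[cache_len++].trampoline;
}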

There’s another case of functions like dlsym() that return a-void-pointer-but-we-know-it’s-a-lie, for which we need a totally custom translator, but this is more easily doable.
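
In terms of the sketch above, the custom dlsym() wrapper just pushes its not-really-a-void-pointer result through the same machinery (skipping here the extra headache that dlsym can also legitimately return a data pointer, which would have to be left alone):

#include <dlfcn.h>

typedef void (*fptr_t)(void);
extern fptr_t translate_fptr(fptr_t p);     /* from the previous sketch */

void *wrap_dlsym(void *handle, const char *symbol)
{
    void *p = dlsym(handle, symbol);

    /* Assume the result will be used as a function pointer and wrap it
       before handing it back across the ABI boundary. */
    return (void *)translate_fptr((fptr_t)p);
}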

1: Presenting Schwartz

January 4, 2008

Use the Schwartz, Luke!

It seems everyone needs to code at least one ELF loader of their own, so here's mine. Schwartz is yet another ELF loader and linker that can do a couple of tricks that other linkers can't do (names not included – any similarity is purely coincidental), like ABI translation. I started it when the gllin binary was released to the public in November but never had the time to finish it. It aims to be a generic linker not tied to any architecture or host ABI, but gllin was a good reason to start coding. My next couple of posts will be related to Schwartz as well, so you'd better be interested!

Schwartz doesn't use the ELF interpreter mechanism like the ld-linux linker – it compiles to a normal user-space program that needs no special privilege level. Typically the user just runs the linker (the executable name is ld4), passing as a parameter the name of the executable to load and run. At the moment the supported architectures are x86-64, ARM and i386 (the last one untested).

For that to work we have to use some tricks at every level, starting from the loader part. Because every hack has its limits (that make it what we call a hack), if you take The Schwartz code and try to extend it you may hit one of the limits and see that things stop working. There’s nothing inherently unfixable in it but you may need to come up with a new hack.

  • The loader

Its task is loading the contents of an ELF executable into memory at the right locations, where the ELF will feel especially comfortable. In other words, we construct the memory image of the program out of the image in the executable file. This at first seemed like an easy task, because I had zero experience with ELF executables and my last experience with executables was from MS-DOS times, when all executables were relocatable. So in my endless ignorance I was thinking I'd just reserve a piece of memory, dump the contents there and relocate the code. Obviously this didn't work, because it turns out operating systems stopped using relocatable binaries for normal programs about twenty years ago, when I wasn't paying attention. So to make the program feel at home you have to place the code at the exact addresses it wants.
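
In code, that loading step is little more than walking the program headers and mapping every PT_LOAD segment at exactly the address it was linked for. A simplified sketch (error handling and proper page-size rounding trimmed; this is not the actual ld4 source):

#include <elf.h>
#include <sys/mman.h>
#include <unistd.h>

static void load_segments(int fd, const Elf32_Ehdr *ehdr)
{
    Elf32_Phdr phdr;
    int i;

    for (i = 0; i < ehdr->e_phnum; i++) {
        pread(fd, &phdr, sizeof(phdr),
              ehdr->e_phoff + i * ehdr->e_phentsize);
        if (phdr.p_type != PT_LOAD)
            continue;

        /* The segment is linked for p_vaddr, so it has to land exactly
           there, which is why ld4 itself must live somewhere else. */
        unsigned long page_off = phdr.p_vaddr & (getpagesize() - 1);
        char *addr = mmap((void *)(phdr.p_vaddr - page_off),
                          phdr.p_memsz + page_off,
                          PROT_READ | PROT_WRITE | PROT_EXEC,
                          MAP_PRIVATE | MAP_FIXED | MAP_ANONYMOUS,
                          -1, 0);

        /* Copy the file contents; the rest of p_memsz stays zeroed (.bss). */
        pread(fd, addr + page_off, phdr.p_filesz, phdr.p_offset);
    }
}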

To run fully in user-space we use a linker script that moves our own code to a non-standard location in the memory image, so that the standard location becomes free and we can load the executable there. Such a linker script can pretty much be generated automatically for every platform. Obviously the target executable could also have used a linker script and chosen an address colliding with our non-standard addresses. In this case the dungeon collapses and we don't support such executables. The user has to go and modify the script (which is fairly trivial) to be able to run such an executable. The user can even go further and support only a single executable, linking ld4 with her target program into a single file, if she only wants to take advantage of (say) the ABI translation feature for this one program.

By doing that we have both programs in a single memory space / single process, happily coexisting, and we gain one interesting feature: if we attach a debugger to the process, we will have the symbols from both executables in place. This means we can load the debug info for either of the programs into the debugger, and the debugger will see the symbols in the right places and not get confused. In GDB you can switch the debugged binary at runtime without detaching from the process.

  • Linker

The linker is used only for dynamic executables. It looks at the list of symbols in the external libraries that are used by our target program and resolves each of them by loading the necessary library and finding the symbol. Again we have both programs (ld4 and the target) in a single process, so we can share the libraries instead of loading them twice. I use libdl for external symbols rather than resolving them manually, but there's no reason Schwartz couldn't recursively load the libraries as well. Currently we support only a very small subset of the defined relocation types, but this seems to be more than enough for programs built with binutils (i.e. all programs).
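
The core of that resolution loop looks roughly like this for ARM (again a simplified sketch of the technique, not the real ld4 code): walk the relocation entries from the target's dynamic section, look each symbol up through libdl (or substitute one of our own wrappers) and write the address into the slot the relocation points at.

#include <dlfcn.h>
#include <elf.h>

void resolve_relocs(Elf32_Rel *rel, int count,
                    Elf32_Sym *symtab, const char *strtab)
{
    /* A handle covering ld4 itself and every library it links against. */
    void *handle = dlopen(NULL, RTLD_NOW | RTLD_GLOBAL);
    int i;

    for (i = 0; i < count; i++) {
        Elf32_Sym *sym = &symtab[ELF32_R_SYM(rel[i].r_info)];
        const char *name = strtab + sym->st_name;
        void *addr = dlsym(handle, name);   /* or one of our override wrappers */

        switch (ELF32_R_TYPE(rel[i].r_info)) {
        case R_ARM_JUMP_SLOT:   /* PLT entries for function calls */
        case R_ARM_GLOB_DAT:    /* GOT entries for data symbols */
            *(Elf32_Addr *)rel[i].r_offset = (Elf32_Addr)addr;
            break;
        default:
            /* the few other relocation types ld4 knows about go here */
            break;
        }
    }
}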

Because we control what we resolve every symbol to, we can override the library symbols with our own when we want. This allows us to play different kinds of tricks on the program.

One such trick is strace-like tracing of the calls made by the program to library functions. I've implemented that for most of the <string.h> calls as an example; this functionality is turned on with the --trick-strace switch.

Another feature is a fake chroot done by simply mangling the path strings passed back and forth between the program and the libraries. This is of course not as secure as a real chroot if you allow arbitrary executables, because an executable may use libraries or library functions that we haven't provided a wrapper for, or use syscalls directly. However, it has the advantage that any user can use it, while a normal chroot requires root privileges. This is enabled with --trick-chroot <path>.
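
The mangling is as simple as it sounds; a wrapper for a path-taking call could look something like this (my illustration, not the actual Schwartz wrapper), with every other path-taking function getting the same treatment:

#include <fcntl.h>
#include <limits.h>
#include <stdio.h>

static const char *fake_root = "/home/root/oabi";   /* from --trick-chroot */

int wrap_open(const char *path, int flags, int mode)
{
    char mangled[PATH_MAX];

    if (path && path[0] == '/') {       /* absolute paths get the prefix */
        snprintf(mangled, sizeof(mangled), "%s%s", fake_root, path);
        path = mangled;
    }
    return open(path, flags, mode);     /* the ordinary libc open() */
}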

Yet another trick could be a user-space implementation of a poor man’s debugger, with the capability to set breakpoints, inspect data, etc., but perhaps not watchpoints (at least not easily) and other fancies. I’m not implementing this.

And yet another trick based on overriding library symbols is C++ exception model translation and ABI translation. More about this in the next post. Look out!

I can see you gdb

December 1, 2007

So, as soon as the gllin binary was released for download, I came up with an evil plan – I will for sure blog about it after it is executed. But first (as part of the usual preparation for an evil plan) I needed to find out whether in a normal program under Linux the heap is executable, or rather which sections are executable and writable. While attempting this I made a funny and completely useless observation which I'm going to share with you now. Here's the test program:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static void sayhello(int world_number) {
        int local;
        static int stat;

        printf("Hello World %i! Local variable at %p and static at %p\n",
                        world_number, &local, &stat);
}

int main(int argc, char *argv[], char **envp) {
        void (*say[2])(int i) = { sayhello, malloc(0x1000) };

        memcpy(say[1], say[0], 0x50);
        say[0](0);
        say[1](1);
        return 0;
}

After “make hello” I have the ELF under ./hello and load it into gdb and inspect:

 $ gdb ./hello
GNU gdb 6.6
...
(gdb) break sayhello
Breakpoint 1 at 0x40068f: file hello.c, line 9.
(gdb) run
...
Breakpoint 1, sayhello (world_number=0) at hello.c:9
(gdb) up 
#1  ... in main (argc=1, argv=..., envp=...) at hello.c:17
(gdb) disassemble say[0] (say[0] + 15)
Dump of assembler code from 0x400684 to 0x400693:
0x0000000000400684 <sayhello+0>:        push   %rbp
0x0000000000400685 <sayhello+1>:        mov    %rsp,%rbp
0x0000000000400688 <sayhello+4>:        sub    $0x20,%rsp
0x000000000040068c <sayhello+8>:        mov    %edi,0xffffffffffffffec(%rbp)
0x000000000040068f <sayhello+11>:       lea    0xfffffffffffffffc(%rbp),%rdx
End of assembler dump.
(gdb) disassemble say[1] (say[1] + 15)
Dump of assembler code from 0x602010 to 0x60201f:
0x0000000000602010:     push   %rbp
0x0000000000602011:     mov    %rsp,%rbp
0x0000000000602014:     sub    $0x20,%rsp
0x0000000000602018:     mov    %edi,0xffffffffffffffec(%rbp)
0x000000000060201b:     int3
0x000000000060201c:     lea    0xfffffffffffffffc(%rbp),%edx
End of assembler dump.

You may now ask yourself the same question that I asked myself: WTF? Or I may first explain what is happening above and you may ask the question then. We loaded the program into the debugger. The program was supposed to greet the world once and then call a copy of sayhello that we made with memcpy(). We set a breakpoint at the start of the function and ran the program. When it enters sayhello, it hits the break and we have a chance to look at the copy of the function. We step out of the sayhello frame so that we can access the say array. We disassemble the start of the original function and the start of its copy, and we see that they differ (!). Someone is messing with MY functions?! Or is memcpy() perhaps broken?!

No, it's just gdb. When we set a breakpoint at sayhello it inserted the extra instruction (which I would have maybe recognised if I used x86 asm more often) to get notified at the right moment. We copied the function together with the breakpoint and we hit the original breakpoint. gdb then hid it from our eyes (first disassembly), but it didn't know that we had secretly made a copy (second disassembly), and we now have a pretty little breakpoint of our own.

So what useful thing did we learn? Nothing really. That checksumming a program at runtime may sometimes actually work.

Good news is that memcpy() is fine and the world is safe. Pheww..

QPE 4.3.0 plus QEMU

October 28, 2007

Trolltech GPLed and released its Qtopia Phone Edition 4.3.0 distro a couple of weeks ago, at the same time adding the Neo1973 as a supported device. I had a look at the “Phone” part of the package and, while I was never a fan of Qt, I like a number of things in Qtopia's design, although having tried running Qtopia on my phone, the interface was not terribly nice for a first-time user. On one hand, I like the fact that they came up with a custom input method, avoiding getting into obscure deals over the popular T9 input method, which is patented (deals which they could easily have made). On the other hand, I couldn't comfortably enter message text using this input method. The cool bit is that it's now all open-source, so there's no way back :)

Having for a short time been involved in the development of gsmd in OpenMoko, what I like most in Qtopia, and at the same time envy the most, is that their phone services have a logical design, a fairly complete set of documentation, and probably work. The last part I haven't verified, but even if they don't, the logical design alone would be enough to make me happy, seeing how chaotic the gsmd development process is. Gsmd has no documentation and also suffers from a lack of maintainership, which recently changed status to the presence of a very strange maintainership that makes contributing code very hard and probably leads to less progress than when there was no maintainer. Fortunately there's recently enough work to be done on GSM support in OpenMoko that doesn't involve touching gsmd itself.

Qtopia's phone part is nicely divided into services, each of which supports plugins for adding support for exotic modems. The division is quite fine-grained but not too fine-grained, and there are full tutorials for writing each type of plugin. The code is not so amusing but it's quite complete, with all the standard features implemented, even those not present in any of the supported devices. I was particularly looking in Qtopia for GSM multiplexing code, and it was there, and surprisingly it was written in C (all the rest of Qtopia being C++, making it not directly reusable in other projects), but it was quite ugly and suboptimal, so only useful for comparing results. At the moment Qtopia doesn't do multiplexing when running on the Neo1973; there is probably some reason for this and I suspect it is in the Neo1973 hardware or kernel (the kernel is not part of Qtopia, it comes from OpenMoko).

What I found useful is the development tools that come with QPE, two in particular. The first one is called phonesim and is used for testing the phone services. The second one is atinterface, or the “phone emulator”. The idea of both tools is to simulate a modem which you can talk to using a standard AT command set, but they do it in different ways. Phonesim is strictly a developer tool, segfaults a lot and is supposed to run on the desktop, or wherever you're coding, although it can run anywhere. It simulates a dummy GSM modem; you can run Qtopia or another tool that talks to a modem (QEMU, gsmd, gnokii) and make it connect to phonesim. Sometimes it will work and sometimes it won't, because phonesim understands just a minimal subset of standard AT (and some of the GreenPhone modem's proprietary commands), but it is easily extensible. There's an optional GUI through which you can simulate incoming calls, messages, data packets and more; basically the GUI is the only source of events. Atinterface, on the other hand, runs on Qtopia and takes its events from QPE's phone subsystem. Its purpose is exposing a modem interface to a laptop or other devices so that they can send faxes or make data calls through a GreenPhone. The interface is hardware independent, i.e. the virtual modem presented by atinterface to your laptop will not depend on whether QPE is running on a Neo1973, a GreenPhone or an HTC. It's also more standards compliant than the GSM subset emulated by phonesim, but to use it you need a running QPE and its phone services.

Now what I wanted was tools for easily testing gsmd and/or OpenMoko running in QEMU. Connecting it all together is not exactly simple, so I will explain here how to do it. We want to run gsmd or QEMU, and we want to use phonesim or atinterface as a virtual modem, so that we don't have to use a physical modem (even if you have one), because a physical modem is a lot of hassle (for example the Neo modem constantly runs out of battery). While we're at it I will also show how to use the physical modem of the Neo1973 with a gsmd running on a PC, which is less hassle than testing gsmd on the phone.

We have two parts: a modem (physical or virtual) and a program (gsmd or QEMU). For the communication channel we choose a network socket, because sockets are flexible and already supported in many places. For the modem we have three possibilities: 1. a phonesim virtual modem, 2. an atinterface virtual modem, 3. the Neo1973's physical modem.

1. Phonesim supports sockets out of the box, so we just need to build and run it. I hacked up a phonesim version that can build outside a Qtopia tree and included it in the qemu-neo1973 repo at svn.openmoko.org. To build it you only need to check out a recent qemu-neo1973 and configure it with

$ ./configure --disable-system --disable-user --target-list=arm-softmmu --enable-phonesim && make

The command

$ (cd phonesim; LD_LIBRARY_PATH=lib ./phonesim -gui ../openmoko/neo1973.xml) &

runs phonesim. The -gui switch is optional. The GUI will only appear after a first client connects. Phonesim now listens on localhost port 12345 and is ready to accept clients. The neo1973.xml file defines a modem behavior resembling the Neo1973 modem (TI Calypso).

2. Atinterface is part of Qtopia and requires Qtopia. I will not explain here how to build Qtopia. After you've built and installed it (I assume the default paths) you will need to first run QPE and then atinterface. For QPE to run you need a modem; we can use phonesim. You can use the phonesim build that comes with Qtopia; to do that, run the following commands:

$ bin/phonesim -gui src/tools/phonesim/troll.xml &
$ export QTOPIA_PHONE_DEVICE="sim:localhost"

Next, we’ll need to emulate a framebuffer on which QPE will display and then we can run QPE and atinterface:

$ bin/qvfb &
$ echo [SerialDevices] > etc/default/Trolltech/Phone.conf
$ echo ExternalAccessDevice=/dev/ttyS1:115200 >> etc/default/Trolltech/Phone.conf
$ image/bin/qpe &
$ image/bin/atinterface --test -qws

Ready, now we have atinterface listening on localhost:12350.

Phonesim on an OHand laptop

3. To make the Neo1973 modem accessible to a PC over USB we have several options. The u-boot GSM passthrough support turned out to be unreliable, so we will boot the Neo into Linux, kill gsmd and run netcat:

# killall gsmd
# nc -l -p 5000 < /dev/ttySAC0 > /dev/ttySAC0

Voila, if the USB ethernet is configured (see the OpenMoko wiki), the modem is now listening at 192.168.0.202:5000.

Now we want to connect to our modem from the other side, the gsmd or QEMU programs on the desktop. With QEMU the task is easy because it can connect to a socket directly: just append -serial tcp:localhost:12345 (in the phonesim case; in the other cases tcp:localhost:12350 or tcp:192.168.0.202:5000) and you should see the system running inside QEMU connect to a GSM network and become operational. Remember that if you haven't configured QEMU with --enable-phonesim, it uses a built-in modem emulator based on gnokii (yes, yet another virtual phone), which needs to be disabled.

With gsmd the problem is that it wants a character device to connect to, rather than a socket. We will emulate a character device using a tiny program I uploaded to qemu-neo1973 svn yesterday. It will make a pseudo-terminal pair (pty stands for Pseudo-Terminal. Has anyone wondered, if tty is for Tele-Typewriter, why pty is not Pseudo-Typewriter?) and connect the master to the socket. If you’ve checked out qemu-neo1973 and you’re inside the source directory, do:

$ make pty
gcc-3.3.6 -Wall -O2 -g -fno-strict-aliasing -I. -D_GNU_SOURCE -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE -g  pty.c   -o pty
$ ./pty localhost 12345
/dev/pts/12

The pty program connected to the modem (change the hostname/port pair accordingly) and told us that it created a character device, /dev/pts/12, to which gsmd can now connect (QEMU can also connect to character devices). The device will exist as long as pty is running.

$ /usr/local/sbin/gsmd -p /dev/pts/12 -s 115200 -v ti -m generic

That should be it. Now you can launch a program like openmoko-dialer that uses gsmd and hack away. Phonesim has a nice GUI (alas Qt…) from which you can observe the AT communication with your program.
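
For the curious, the whole pty helper mentioned above boils down to something like the following sketch (my reconstruction of the idea, not the actual pty.c from the tree): allocate a pseudo-terminal, print the slave's name, connect the master side to the given TCP host/port and shuttle bytes both ways.

#define _XOPEN_SOURCE 600
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <fcntl.h>
#include <netdb.h>
#include <sys/socket.h>
#include <sys/select.h>

int main(int argc, char *argv[])
{
    if (argc < 3) {
        fprintf(stderr, "usage: %s host port\n", argv[0]);
        return 1;
    }

    /* allocate the pseudo-terminal pair and advertise the slave name */
    int master = posix_openpt(O_RDWR | O_NOCTTY);
    grantpt(master);
    unlockpt(master);
    printf("%s\n", ptsname(master));            /* e.g. /dev/pts/12 */
    fflush(stdout);

    /* connect the master side to the modem socket, e.g. localhost 12345 */
    struct addrinfo *ai;
    if (getaddrinfo(argv[1], argv[2], NULL, &ai))
        return 1;
    int sock = socket(ai->ai_family, SOCK_STREAM, 0);
    if (connect(sock, ai->ai_addr, ai->ai_addrlen) < 0)
        return 1;

    /* shuttle bytes in both directions until either side closes */
    char buf[4096];
    for (;;) {
        fd_set fds;
        FD_ZERO(&fds);
        FD_SET(master, &fds);
        FD_SET(sock, &fds);
        int max = master > sock ? master : sock;
        if (select(max + 1, &fds, NULL, NULL, NULL) < 0)
            break;
        if (FD_ISSET(master, &fds)) {
            ssize_t n = read(master, buf, sizeof(buf));
            if (n <= 0 || write(sock, buf, n) != n)
                break;
        }
        if (FD_ISSET(sock, &fds)) {
            ssize_t n = read(sock, buf, sizeof(buf));
            if (n <= 0 || write(master, buf, n) != n)
                break;
        }
    }
    return 0;
}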