Custom ISR Prologue in AVR C

I fell into a situation recently where I was relying on pin change (PCINT) interrupts to decode a serial protocol. This meant I needed to detect whether the interrupt was triggered by a rising or falling edge. This particular protocol can have pulses as short as 1µs (8 CPU cycles), so to tell if the pin was rising or falling I need to sample it within 8 cycles of the actual transition. Add the pin synchroniser delay and a non-deterministic ISR entry of up to 7-ish cycles, and by the time the ISR starts you’re usually already too late… won’t stop me trying though!

Now I’ll skip the part where you tell me I should just use INT0 or poll or whatever, and look at the case where you actually want to insert some code before the interrupt service routine (ISR) starts its prologue.
A typical ISR is defined like this:

ISR(PCINT0_vect) {
  //do stuff here
}

When the compiler builds this, it checks which registers your ISR modifies (including modifications made by any functions you call) and pushes them to the stack. A disassembled ISR might look something like

ISR(TIMER0_COMPA_vect) {
0000010B  PUSH R1		Push register on stack 
0000010C  PUSH R0		Push register on stack 
0000010D  IN R0,0x3F		In from I/O location 
0000010E  PUSH R0		Push register on stack 
0000010F  CLR R1		Clear Register 
00000110  PUSH R18		Push register on stack 
00000111  PUSH R19		Push register on stack 
00000112  PUSH R20		Push register on stack 
00000113  PUSH R21		Push register on stack 
00000114  PUSH R22		Push register on stack 
00000115  PUSH R23		Push register on stack 
00000116  PUSH R24		Push register on stack

...actual stuff...

00000150  POP R24		Pop register from stack 
00000151  POP R23		Pop register from stack 
00000152  POP R22		Pop register from stack 
00000153  POP R21		Pop register from stack 
00000154  POP R20		Pop register from stack 
00000155  POP R19		Pop register from stack 
00000156  POP R18		Pop register from stack 
00000157  POP R0		Pop register from stack 
00000158  OUT 0x3F,R0		Out to I/O location 
00000159  POP R0		Pop register from stack 
0000015A  POP R1		Pop register from stack 
0000015B  RETI 		Interrupt return 

This is the ISR burning a bunch of CPU cycles storing the state of whatever you just interrupted (and would otherwise be about to clobber), then restoring it afterwards. This is pretty important, but a typical PUSH takes two clock cycles, and there are a lot of them…

It’s possible to define a ‘naked’ ISR where the compiler doesn’t generate the epi/prologue for you:

ISR(PCINT0_vect, ISR_NAKED) {
  //do stuff here
}

however, now you have to worry about any registers you might be stepping on. Really, you should never use a naked ISR unless you are hand coding assembler.

My decoder is completely interrupt driven, which means my ISR actually does quite a lot, and I would very much like to have it preserve the state of whatever I’m interrupting. However it must sample the pin that changed as the very first thing it does.

Enter the naked top half:

ISR(PCINT0_vect, ISR_NAKED) {
  asm (
    "SBIS %[port], 1\t\n" //Check PINB1 and..
    "RJMP __vector_PCINT0_FALLING\t\n"
    "RETI"
    :: [port] "I"(_SFR_IO_ADDR(PINB)) :
  );
}

and the not so naked bottom half:

ISR(__vector_PCINT0_FALLING) {
  //do stuff
}

The top half here is built by the compiler and installed as the ISR for the PCINT interrupt we care about. Being naked, the only code it runs is the assembly you see here. It samples PINB, checks if the pin is high or low, and then jumps to the bottom half which performs the rest of the ISR. If we jumped, we can rely on the bottom half to RETI for us (which is also why we don’t ‘call’ the bottom half), but if we didn’t jump then we need to clean up the ISR ourselves with a RETI. We could also define another bottom half for the rising edge and replace the RETI with an RJMP to it if we care about both events (sketched below). Finally, the name of the bottom half should start with “__vector_” to stop GCC complaining.
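
For completeness, here’s a minimal sketch of that both-edges variant. The rising-edge bottom half and its name are my own invention, following the same pattern:

ISR(PCINT0_vect, ISR_NAKED) {
  asm (
    "SBIS %[port], 1\n\t"              //skip the next instruction if PINB1 is high
    "RJMP __vector_PCINT0_FALLING\n\t" //pin is low: falling edge
    "RJMP __vector_PCINT0_RISING"      //pin is high: rising edge
    :: [port] "I"(_SFR_IO_ADDR(PINB)) :
  );
}

ISR(__vector_PCINT0_FALLING) {
  //handle the falling edge; this bottom half RETIs for us
}

ISR(__vector_PCINT0_RISING) {
  //handle the rising edge; this bottom half RETIs for us
}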

This works because you can put any name you like in an ISR() definition. GCC will check all of its clobbering and generate the epi/prologue regardless of whether the ISR is attached to an actual interrupt vector. Once it’s created that for us, we can hand code the top half however we like and JMP, knowing that the compiler has taken care of the hard stuff coming later. Though you should be careful not to clobber anything in the top half.

Ubuntu on IGEPv2

Preface

This is a short guide to getting Ubuntu running on the IGEPv2 Rev C from a bootable SD card.  There are a few guides around that I’ve based various sections on, but I couldn’t find a complete howto that was up to date with the current hardware.  Most credit goes to Michael Opdenacker of Free Electrons.

Download some Packages

  • I’m assuming you’re building on Ubuntu.  You’ll need to install a couple of extra packages.
    $ sudo apt-get install ia32-libs git git-core qemu qemu-kvm-extras debootstrap build-essential

Install the poky toolchain

  • First download the IGEP SDK Yocto Toolchain from here.
  • Extract the toolchain to /.
    $ sudo tar jxf igep-sdk-yocto-toolchain-*.tar.bz2 -C /

This should install the cross compiler to /opt/poky. To use the toolchain you need to configure your environment, which you can do with this shortcut:

$ source /opt/poky/1.2/environment-setup-armv7a-vfp-neon-poky-linux-gnueabi

You’ll also need to add these to your environment.

$ export ARCH=arm
$ export CROSS_COMPILE=arm-poky-linux-gnueabi-
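
To check the toolchain is set up, the cross compiler should now be on your PATH (its name follows the CROSS_COMPILE prefix above):

$ arm-poky-linux-gnueabi-gcc --version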

Build the Kernel

  • Clone the Linux OMAP Git repository
    $ mkdir igep
    $ cd igep
    $ git clone git://git.igep.es/pub/scm/linux-omap-2.6.git
    $ cd linux-omap-2.6/
  • Switch to the kernel version you want to build, in this case 2.6.37-6.
    $ git tag
    v2.6.28.10-3
    ...
    v2.6.37-4
    v2.6.37-5
    v2.6.37-6
    $ git checkout -b v2.6.37-6_local v2.6.37-6
  • Run the build with the default configuration.
    $ make igep00x0_defconfig
    $ make -j 4

Create a Root Filesystem

  • Download rootstock from here.
    $ tar zxvf rootstock-0.1.99.3.tar.gz
  • Build the filesystem. You can substitute your own username and password here.
    $ cd rootstock-0.1.99.3
    $ sudo ./rootstock --fqdn igepv2 --login joe --password 123456 --imagesize 2G --seed build-essential,openssh-server --dist lucid

Build IGEP-X-Loader

  • Clone the Git repo
    $ cd ~/igep
    $ git clone git://git.isee.biz/pub/scm/igep-x-loader.git
    $ cd igep-x-loader
    $ git checkout -b release-2.5.0-2_local release-2.5.0-2
  • Build with the default configuration.
    $ make igep00x0_config
    $ make

Format the SD Card

You’ll need a micro SD card at least 2GB in size. If you need more details on how to do this step, you can find them here, though I found the approach below to be a much simpler experience.

Note: Substitute sdc with the actual device node of your SD card. I shouldn’t have to warn you that this will destroy all data on the SD card.

  • Delete all the partitions.
    $ sudo dd if=/dev/zero bs=512 count=1 of=/dev/sdc
  • Run cfdisk to partition the card.
    $ sudo cfdisk /dev/sdc
  • Create the boot partition.
    • type: W95 FAT32 (LBA)
    • size: 100MB is probably plenty
    • Mark it as bootable
  • Create the root partition.
    • type: Linux
    • size: Whatever is left
  • Write the changes to the card and quit cfdisk.
  • Format the partitions.
    $ sudo mkfs.msdos /dev/sdc1
    $ sudo mkfs.ext3 /dev/sdc2
  • Mount the partitions.
    $ sudo mkdir -p /media/boot
    $ sudo mkdir -p /media/rootfs
    $ sudo mount /dev/sdc1 /media/boot
    $ sudo mount /dev/sdc2 /media/rootfs

Copy Everything to the SD Card

  • Copy the kernel and X-Loader.
    $ cd ~/
    $ sudo cp igep/igep-x-loader/MLO /media/boot
    $ sudo cp igep/linux-omap-2.6/arch/arm/boot/zImage /media/boot
    $ sudo cp igep/igep-x-loader/scripts/igep.ini /media/boot
  • Copy the root filesystem.
    $ cd /media/rootfs
    $ sudo tar zxvf ~/igep/rootstock-0.1.99.3/armel-rootfs-201306180112.tgz
  • Install kernel modules.
    $ cd ~/igep/linux-omap-2.6/
    $ make INSTALL_MOD_PATH=/media/rootfs modules_install

Modify the X-Loader config

  • Edit /media/boot/igep.ini.
  • Specify the rootfstype.
    ;  --- Configure MMC boot --- 
    root=/dev/mmcblk0p2 rw rootwait
    ; add this line
    rootfstype=ext3

A Few Modifications to the Root Filesystem

  • Copy tty config.
    $ cd /media/rootfs
    $ sudo cp etc/init/tty1.conf etc/init/ttyO2.conf
  • Edit etc/init/ttyO2.conf and change.
    exec /sbin/getty -8 38400 tty1

    to

    exec /sbin/getty -8 115200 ttyO2
  • Disable ureadahead.
    $ sudo mv etc/init/ureadahead.conf etc/init/ureadahead.disabled
  • Set up the network interface by editing etc/network/interfaces. This is just an example; you’ll probably have your own details to put here.
    auto lo
    iface lo inet loopback
    
    auto eth0
    iface eth0 inet static
            address 192.168.5.1
            netmask 255.255.255.0
            network 192.168.5.0
            broadcast 192.168.5.255
  • Add some more source repositories by editing etc/apt/sources.list.
    deb http://ports.ubuntu.com/ubuntu-ports lucid main universe
    deb http://ports.ubuntu.com/ubuntu-ports lucid-updates main
    deb http://ports.ubuntu.com/ubuntu-ports lucid-security main

Testing the Build

  • Unmount the SD card from your development machine.
    $ cd ~/
    $ sudo umount /media/rootfs
    $ sudo umount /media/boot
  • Insert the card into the IGEPv2 and power it up.

The board should boot and you should see the kernel messages on the debug port. You can connect to the board over ssh using the username and password you used when creating the root filesystem, and the IP address you specified in /etc/network/interfaces.
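
With the example username and address used above, that would be:

$ ssh joe@192.168.5.1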

Prints for Sale

I’ve been taking photos for as long as I can remember, starting out with a $10 film camera that had no need for batteries, and chewing through dozens of disposables during my high school years.  I never took it very seriously; it was mostly just documenting fun times with friends.

That was until I started hunting for a new digital camera a few years ago.  My supervisor takes his photography very seriously and had some things to say about my potential selections.  It then became a struggle between my budget and his insistence on quality, which eventually ended with me purchasing a Canon 600D.  This began a long saga of gear lust and me taking tens of thousands of photos of friends, landscapes, bugs, drunken students and whatever else happened to appear in front of my lens.

It wasn’t long before I began picking up odd jobs for clubs at uni or even local magazines which helped subsidize my rampant purchasing.  I’ve since upgraded most of my gear, moving up to a Canon 6D and even buying some studio lights.

I’ve wanted to set up a website to sell prints for a while now, but just recently (and suddenly) found the motivation to do so.  After a week of hacking together some HTML I’m fairly happy with the look of it.

You can find it over at photography.cgsy.com.au.  There you will find some shots that I consider worthy of showing you, as well as some examples of my professional work.

Sensor Film Review

I’ve recently started noticing some dust specks in the photos from my DSLR which are starting to bug me.  In fact, I’ve never actually seen my camera’s sensor completely clean; they often come preloaded with dust from the factory.

[Image: before cleaning]

I’ve made some attempts at cleaning the sensor (it’s actually the low-pass filter in front of the sensor, but I’m just gonna call the whole thing ‘the sensor’) but haven’t had much luck.  Anyone who has tried poking around in a $1000+ camera with various cleaning implements will know how stressful this process can be.

I’m not going into the details of cleaning a sensor, and I’m going to pretend you already know all the risks and considerations to make when touching the sensor with ‘anything’.  No; today I’m going to talk about this nifty goo I found called Sensor Film.  I struggled to find many reviews of this stuff while I was researching it, but found it intriguing enough to give it a go.  Hopefully my experience here will help others.

Sensor Film is a polymer that you paint onto the sensor with a small soft brush.  Once it has dried, you peel it off and hopefully take all the dust on your sensor with it.  The advantage is that you never put any pressure on the sensor, and never rub anything against it, so the risk of scratching is negligible.  You also have the chance to clean out the mirror chamber with a blower/vacuum while the sensor is protected under the polymer.

The consistency is much like honey and it is very easy to apply to the sensor.  It does have a tendency to draw out long ‘tails’ (just like real honey) that you need to break off before you move the brush near the camera.  If you don’t, you’ll end up leaving little spider webs around your mirror chamber, which is probably a bad thing.  It’s well behaved on the sensor and goes where you tell it to, but it’s still a delicate operation to cover the sensor without getting too close to the edge.  I have a Canon 600D which has a fluoride coated low-pass filter, so naturally I’m using the fluoride variant of Sensor Film.  The film was also nice enough to even itself out after sitting for a minute, so I only needed to make sure there was enough goo and not worry too much about how evenly I spread it.

After the sensor is covered you have to let it dry, which may take up to 3 hours according to the manufacturer.  They also recommend you leave the shutter open the whole time, so make sure you have a fresh battery.  They never actually state how to tell when the film has dried, but I found mine dry to the touch after about an hour.  That’s also about the closest you ever want your finger to get to the sensor :P .

To remove the film, you need to attach a small paper tab.  There is a piece of paper for this included with the Sensor Film;  I don’t know what’s special about it, but if you keep it with the bottle of goo there should be enough for all your cleaning needs.  Cutting the tab is easy, but be sure to make it an appropriate length.  It’s very easy to overestimate the length you need, which will lead you to make a mess of attaching it.  The tab is simply glued to the film with a small amount of Sensor Film.

My first attempt at this tab failed.  The paper delaminated as I tried to pull the film off the sensor.  I suspect it may not have been close enough to the edge of the film, or the strip of paper I used was too thin.  After cutting another strip and a very stressful 30 minute wait for it to dry again, I managed to peel the film off without an issue.  The force required was quite reasonable, and it certainly didn’t feel like there was any risk of damaging the filter.


The results were impressive.  Almost all the specks, including the largest ones were gone.  There were still a few remaining that were not present in the before shot.  These may have landed in the time it took me to attach a lens and close the shutter.  There was one large blob in the corner, which on close inspection was a small piece of lint.  It wouldn’t budge with a gentle breeze so I made the foolish decision to move it with the brush.  This left a comically large smear on the sensor and caused my palm to connect with my face.

After staring at the smear in a new test shot, I decided to start the process again.  This would allow me to maybe clear the rest of those specks, test Sensor Film on a nasty smear and see how repeatable the process was.  With the justification out of the way, I set about painting my sensor again (after swapping batteries of course).

This time I tried for a smoother coating, avoiding or removing air bubbles where I could.  The practice helped and I was much quicker at coating the sensor.  It was fairly warm in the space I was working, so the film dried quickly.  I would’ve finished in under an hour, but I had the same issue with the first paper tab delaminating.


After peeling this one off and taking a test shot all I could think was WTF?!  The sensor was pretty much perfect except for one giant chunk just off center.  There were also a couple of very minor spots that were consistent across all my test shots, and obviously impervious to Sensor Film.  The smear from last time had mercifully vanished without a trace.

The big chunk prompted a third attempt.  This time I decided to lay the film on pretty thick, which is the best way to avoid any voids made by brush strokes or bubbles.  The film doesn’t seem to flow out of the area you apply it to, but it does smooth itself out pretty well.  Getting the edges thick takes a bit of technique: the process is more about gently pushing the film into the areas you want it than brushing it onto the surface.

After peeling this third film off (the tab held first time!) and taking a test shot, I was relieved to finally have a clean sensor.  There was still one stubborn spot, but it was not significant enough to worry about.

[Image: after cleaning]

In conclusion, I’m going to be pretty generous to Sensor Film.  While it takes more practice than advertised, it did a remarkable job cleaning my sensor.  I’m sure the number of applications can be minimised by taking more care and having a little more experience with how the stuff works.  Most other methods of sensor cleaning I’m aware of also require multiple attempts; the big difference with Sensor Film is how stress-free each attempt is.  Even after painting my sensor three times I never felt like I might have done any damage, at least after I successfully peeled the first film off :P .

I would recommend Sensor Film.  It’s just as tedious and frustrating as any other method out there, but it does keep the heart rate to a minimum while cleaning.  When you’re done, the results are as good as you can expect.  Just take your time applying it and make a nice thick film.

And for those who are curious: here is what the world looks like through a layer of Sensor Film.

[Image: a test shot through a layer of Sensor Film]

Goodbye RepRap

So you know that RepRap I spent all my money & time building a few years back?  Well, it’s been sitting in a cupboard doing nothing for a long while now, so I have decided to put it up for adoption.

I had finally managed to get the thing working and even spit out a few parts that resembled the CAD files I gave it.  However, like most 3D print enthusiasts, I quickly realised my printer’s shortcomings.  The biggest was the lack of a heated bed, which is required to print parts bigger than a 15mL shot glass.  Without one, parts warp drastically during the print, usually resulting in a big mess.

Though I tried to build my own heated bed, it only half-solved the problem.  While the prints improved, the extruder was painfully slow and the parts still had a little too much warp to be successful.  It was while I was trying to improve the extruder that it overheated and destroyed itself.  At this point I put it away, opting for much easier, albeit more expensive prints from Shapeways.

My colleagues over at Make Hack Void have been building a few notably newer printers among themselves, as any good hacker space should.  So I have decided to donate my printer to the space in the hope that someone is willing to resurrect it.

Farewell, frustrating contraption, and Godspeed to whoever is brave enough to try and get you running again.


Processing Architectures

So I was listening to a recent episode of The Amp Hour, “An off-the-cuff radio show and podcast for electronics enthusiasts and professionals”, and Chris & Dave got onto the topic of custom logic implemented in an FPGA vs a hard or soft core processor (around 57 minutes into episode 98).  This is a discussion very close to my current work, and I’m probably in a very small minority, so I figure I should speak up.

If you look closely at the avionics I’ve developed, you’ll notice there is only an FPGA, with no processor, handling the functionality of the device.  There is an 8-bit Atmel, but it’s merely a failsafe.  So, to make my position clear: everything in Asity is (will be) implemented in custom logic.

Chris & Dave didn’t go into great depth as it was just a side-note in their show, so I’ll do my best to go through a few alternative architectures.  I’ll also stick with Asity as an example, given my lack of experience elsewhere.  I am just a Software Engineer after all.

The goals here are to retrieve data from several peripheral devices including ADCs, accelerometers, gyroscopes among many others; do some processing to come up with intelligent control, and then output to various actuators such as servos.  When designing such hardware a decision has to be made as to what processor(s) will be in the middle.

[Diagram: a single CPU interfacing with all peripherals and actuators]

The first example is the one I’ll refer to as the traditional approach.  This includes a single CPU that interfaces with all peripherals and actuators, much like you would find in your desktop PC from 5 years ago, or your phone today.  This is the architecture used in the Ardupilot and many other avionics packages.

Modern processors are capable of executing billions of instructions per second.  What can be done with those instructions depends on the instruction set used and the peripherals available to the CPU.

The major limitation with this architecture is that a single CPU core can only attend to one thing at a time.  This means it can only service one interrupt, or perform one navigational calculation, etc. at a time.  In order to keep up with all the data gathering and processing required, a designer must either be very talented or lucky.  Either way, they still need to spend time developing a scheduling algorithm.

In a single core architecture with non-deterministic inputs or algorithms it can be impossible to guarantee everything is serviced in time.  The alternative that is often used is to make sure the CPU is significantly overpowered, which costs extra money and power.

[Diagram: multiple CPU cores]

Next we have the multiple CPU core architecture.  This could either be a multi-core CPU like a modern PC, or several independent CPUs/micro-controllers.  A couple of avionics packs, such as the AttoPilot, make use of the 8-core Parallax Propeller.

This architecture allows tasks and interfaces to be serviced in smaller, independent groups, which simplifies scheduling logic and allows a reduction in clock speed.  It also introduces the extra complexity of managing the communication between each of the cores.  While this improves the situation over the single core architecture, each core is still fundamentally limited by a single execution bottleneck.

[Diagram: complete custom logic]

The final architecture I’ll discuss is complete custom logic.  This is the architecture I’ve used in Asity, and the one that makes the most sense to me as a computer scientist.  I’ve chosen to implement this in a single FPGA, but the architecture can be spread over many devices without significantly altering the software.

In this architecture, each peripheral device is serviced by dedicated silicon that never does anything else.  This allows great flexibility in interrupt timing: samples can be taken at the natural or maximum rate of the device without increasing the burden on computational resources.  Internal logic modules are likewise dedicated to a specific task, such as navigation or attitude calculation, without ever being interrupted by an unrelated task.

In both CPU based architectures a significant portion of development effort is required to schedule tasks and manage communication between them.  In fact, a large portion of Computer Science research is devoted to scheduling algorithms for multitasking on linear, sequential processors.  Truth be told, sequential processors are a very awkward way of making decisions, especially if the decisions aren’t sequential in nature.  They have proven to be useful in extensible software systems like a desktop PC, as long as there is an operating system kernel to manage things.

Any software designer worth their salt is capable of a modular design which will quite naturally map to custom logic blocks.  These blocks can be physically routed within an FPGA fabric allowing data to flow along distinct processing paths which don’t have to deal with unrelated code.

The downside to custom logic is, of course, time and money.  FPGAs are still quite expensive: two orders of magnitude more expensive than a processor suitable for the same task.  There also aren’t as many code libraries available for synthesis as there are for sequential execution, so a lot has to be written from scratch.

A small price to pay for deterministic behaviour.

Outback Challenge Deliverable 2 Submitted

So there have been some sleepless nights recently as the deadline for the Outback Challenge second deliverable passed this afternoon.  I managed to get my report in by the skin of my teeth after some email troubles (still waiting on the confirmation from the organisers :S ).

Each team had to submit a technical report that details the design of their aircraft and their risk management strategies.  We also had to compile a video that demonstrates our on-field setup procedure, takeoff and landing, and how the aircraft handles carrying and dropping the payload.

I’ve compiled a playlist of all the D2 videos I could find on YouTube.  Of the 53 teams that passed the first milestone, I could only find 12.  Some teams may have been using private links, while others may not have used YouTube.  However, I can feel the field shrinking.

My video is included in the playlist above, but if you’re only interested in that one, here it is:

[Embedded video]

The competition certainly feels like it’s heating up!

 

UPDATE:  Just got the confirmation email that my submission was received (phew).

Pulsar 4E Bottle Drop Tests

With the second deliverable for the Outback Challenge quickly approaching, it was about time I discovered if the Pulsar could even carry the all-important bottle of water.

After a successful maiden flight I took the Pulsar home and began hacking away at it with a rotary tool.  The idea was to cut the holes I had planned in the fuselage so the bottle could be attached and the camera could see.  This is pretty much what I achieved, albeit with a few more slips and scratches than I had hoped for.

With some minor cosmetic damage, but functionality otherwise perfect it was time to take it to the field.  The conditions on Saturday were perfect with only a slight wind, no rain and some pretty spectacular looking clouds.

I took a short flight to test some changes I had made to the Pulsar’s controls as well as the epoxy I had just applied to fix a minor wound.  The exponential setting on the elevator made a world of difference and the Pulsar is now a dream to fly.  Once I was comfy, I landed and attached the water bottle.

[Image: Pulsar launch]

The payload is not my proudest moment in terms of design.  I just took a 500mL soft-drink bottle, stuck some fins on it and taped some bubble wrap to the front.  It’s not pretty, and I don’t expect it to survive the drop.  I’ll worry about that if I make it as far as locating Joe.

The bottle also has a plastic pylon strapped to it with some webbing which mates with the mechanism in the fuselage.  This includes a servo with metal arms that locks the pylon in place until commanded to release it.  Both parts were made from sintered plastic and my tests of the mechanism have been successful so far on the ground.

The Pulsar weighs about 2.3kg with all my gear installed, which is about what the designer had in mind.  Adding another 500g also adds a number of concerns.  Will it still be able to climb, or will it just crash into the runway on takeoff?  If it does climb, will the wings stay attached or be ripped off the fuselage?  And finally: if it flies with the bottle attached, what happens when it suddenly drops 18% of its total weight?  There was really only one way to answer all these questions.

Launching the Pulsar with a bottle attached is awkward to say the least.  Because it needs to be exactly below the center of gravity, it sits exactly where I want to hold it.  The first launch was quite hairy as the Pulsar banked hard, directly towards my camera man.  I quickly floored the throttle and was very happy to learn it had enough power to climb and clear him by a significant margin.

It actually managed to climb quite well with the bottle attached.  It couldn’t quite go vertical as it does without the weight, but it still climbs quickly and steeply.  It certainly didn’t fly as easily with all the extra weight, but it remained effective: it could still glide and get up a lot of speed.

With the Pulsar & bottle in the air, and the wings still attached, it was time for the moment of truth.  I counted down to give my camera men warning and flicked the switch to drop the bottle.  It separated cleanly and fell away without interfering, and to my surprise the aircraft barely twitched.

After landing, the bottle was recovered; despite some damage to the fins, it still contained all the precious water and had no cracks or leaks to speak of.  We completed two drops without incident and the bottle itself remains intact.

A very successful day indeed, and given the deadline for the next deliverable was extended two weeks, I’m still on track to receive a ‘go’ from the organisers.  One objective was also to gather enough footage to prove everything I said above.  Thanks to Jan & Uwe, I have a bunch, which I’ve attempted to cut together into a short film for you all.  Enjoy!

[Embedded video]

The Pulsar’s Maiden Flight

So it’s been 18 days short of a year since the Pulsar arrived in a giant box.  Since then, a lot of time has been spent measuring, modelling and generally designing to figure out how to fill it with stuff to ready it for the Outback Challenge.

It took a few attempts at various components to get the fit right.  Measuring its sleek, sexy curves proved to be quite difficult.  The last few days have been spent shifting things around to perfect the balance and programming my recently replaced radio equipment.

So as you can see in the image, the cabin of the Pulsar is pretty cramped.  On the right is the stack of avionics including Asity and the controller for the camera.  In the middle is the servo that holds and releases the water bottle (under all the cables).  Finally on the top left is the receiver used for manual control.  Asity is now capable of controlling the aircraft, so once I’ve tested it thoroughly, that receiver will be replaced with a slave receiver and Asity will take over.

Today was the latest in a few attempts, the first since January, to get the Pulsar off the ground.  I warmed up a bit by taking the Paprika for a spin, as I haven’t actually flown anything other than a simulator for about twelve months.  The wind was a bit on the strong side but wasn’t gusting too badly, so we decided to go ahead with launching the Pulsar.

The launch was a bit hairy as always, but once I pushed the throttle it had no trouble climbing almost vertically.  Once I had some altitude I could take a breath and find a comfortable attitude.  It doesn’t handle anything like the Paprika, as you would expect, so I can’t just apply the throttle and do a quick backflip to escape a low altitude stall.  It didn’t take much trim to get it flying level, but I did leave the elevator controls linear rather than the softer exponential option.  Given the elevator is so huge, exponential would have made it much more controllable.  So I spent the first flight bobbing up and down with overcorrections but generally flying smoothly.

One major issue I had not considered was that the wings look identical from top to bottom.  This often left me guessing which way up the aircraft was, leading to a terrifying accidental barrel roll.  Given its wingspan it doesn’t flip over very quickly, so my usual ‘trial and error’ approach to situational awareness doesn’t leave me with much time to make any mistakes.  I’ll probably be sticking some fluoro-yellow tape to the underside.

The landing was a typical repetitive stall as I continued to overcorrect with the elevator.  Under full flaps this aircraft can drift incredibly slowly, which made the landing look like a slow-motion crash.  Luckily it actually was slow, so it touched down with only a slight bump.  Once I’ve sorted out the controls it should be a very nice aircraft to fly.

The Pulsar wasn’t the only plane to take its first flight today.  My supervisor has recently built a Pilatus Porter, which has also been waiting patiently to get off the ground.  His son Jan took the controls, and it too had a successful maiden voyage.

The day almost went without incident.  I took the Paprika for another run after the Pulsar was done.  At some point during the flight, one of its stabilisers cracked and was left hanging limply by some balsa threads.  Miraculously, or just because of my mad skillz, it made it to the ground in one piece.  If it didn’t have a v-tail, I don’t think the landing would have been nearly as successful.

So the Pulsar flies!  My work now is to mutilate the shiny white fuselage by cutting holes for the camera and payload, which I can’t say I’m looking forward to.  Hopefully I can get it back up to test the bottle drop in time for my April deadline.

DSM2/DSMX Remote Receiver Protocol

I’ve been playing around with my transmitter a lot recently as flying season is starting up again, and the darn thing isn’t working well.  Likewise, the receiver I’ve just installed in the Pulsar refuses to bind to my radio.  So, a big fail all round by Spektrum, which has led me to seek alternative solutions.

[Image: the AR8000 receiver with a remote receiver attached]

I’ve seen a few forum threads discussing the serial protocol used by Spektrum’s remote receivers, and it seems pretty straightforward to collect the data from them, bypassing the main receiver completely.  Unfortunately, the only decent write-up of the details that I could find was a bit out of date and only covered the 7 channel case.  So here is what I’ve learnt from poking this device with an oscilloscope and a logic analyser for a few hours.

I’m using a Spektrum DX8 for all my testing and will focus on the DSMX modulation as that’s what made the most sense to me.

The physical layer is a very simple asynchronous serial stream.  There are three wires: the orange is 3.3V (the receiver draws about 30mA), the black is ground, and the grey is serial data (3.3V logic).

The serial data is standard 115200 baud, 8 data bits, no parity and one stop bit (8N1).  I’ve seen many different reports on what this speed should actually be; my own measurements of the few receivers I have put it around the 110kbps mark.  I’m pretty sure it’s ‘supposed’ to be 115.2k, and you won’t have much trouble if you just use the standard speed.
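
As a minimal sketch, here’s how I’d configure a UART for that stream.  The choice of an ATmega328-class AVR at 16MHz is my own assumption, nothing Spektrum specifies:

#include <avr/io.h>

void uart_init(void) {
  UCSR0A = (1 << U2X0);                   //double-speed mode keeps the baud error down
  UBRR0  = 16;                            //16MHz / (8 * 115200) - 1 ≈ 16, about +2% error
  UCSR0B = (1 << RXEN0);                  //receive only; the grey wire is the data line
  UCSR0C = (1 << UCSZ01) | (1 << UCSZ00); //8 data bits, no parity, 1 stop bit
}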

Now things start to get a little tricky.  My DX8 allows me to select either DSM2 or DSMX modulation, as well as either an 11ms or 22ms frame speed.  This seems to decide the resolution of your data (1024 or 2048 steps) and will vary the packet format slightly.  As far as I can tell, the DX8 always transmits packets 11ms apart, but as my main receiver won’t talk to me I can’t see the variations in the PWM output rate.

The data format is set when binding the receiver, and is stored along with the model data in the transmitter.  You can bind multiple remote receivers, one at a time, as long as you have the same modulation and frame rate selected.  I just use the AR8000 to bind the remote receivers and won’t look into binding without it.

Once powered up, the receiver waits for the radio it is bound to before sending anything: it will not send any data if it’s not receiving anything.  The same applies if the transmitter is turned off; the remote receiver simply stops sending data.

A frame consists of 16 bytes, which includes 7 channels of data.  If you are using more than 7 channels, which is the case for the DX8, then your frames will consist of two 16-byte packets, 11ms apart.

Each data word is two bytes long (16 bits).  The first word is a count of missed frames, which uses at least 10 bits (I didn’t manage to get it to wrap around).  The remaining seven words are channel data for the current stick positions.

The channel data is given in a seemingly random order, although during my testing it remained constant for a given modulation and frame speed even after power cycling and rebinding.

Each 16 bit word contains the channel ID and the channel value, as well as a couple of zeros.  Be aware that the channel ID bits move depending on whether the receiver was bound in a 1024 mode or a 2048 mode.  The channel value is the 10 LSBs in a 1024 mode and the 11 LSBs in a 2048 mode.  I believe DSM2 at 22ms is the only 1024 mode for this transmitter/receiver.

The channel ID is the 3 or 4 bits above the channel value.  I say 3 or 4 as I expect this protocol can handle more than 8 channels, but I have nothing to test that with.  This identifies the channel to which the value should be assigned: 0: Throttle, 1: Aileron, 2: Elevator, 3: Rudder, 4: Gear, 5: Aux1, 6: Aux2, 7: Aux3.
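
Putting that together, here’s a minimal sketch of how I’d pull a word apart.  The MSB-first byte order and the 4 bit ID mask are my assumptions:

#include <stdint.h>

typedef struct {
  uint8_t  packet;  //MSB, appears to identify the packet within the frame
  uint8_t  channel; //0: Throttle, 1: Aileron, 2: Elevator, ...
  uint16_t value;   //raw stick position
} dsm_word_t;

dsm_word_t decode_2048(uint16_t word) {
  dsm_word_t w;
  w.packet  = (word >> 15) & 0x01;
  w.channel = (word >> 11) & 0x0F; //the bits above an 11 bit value
  w.value   = word & 0x07FF;       //11 LSBs in a 2048 mode
  return w;
}

dsm_word_t decode_1024(uint16_t word) {
  dsm_word_t w;
  w.packet  = (word >> 15) & 0x01;
  w.channel = (word >> 10) & 0x0F; //the ID bits sit one lower in a 1024 mode
  w.value   = word & 0x03FF;       //10 LSBs in a 1024 mode
  return w;
}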

For DSMX, each channel will only be included once in each frame, so the spare words in each packet are set to all 1s (0xFFFF).  In DSM2-2048, a few channels were repeated in the second packet.

The MSB of each word appears to identify the packet within each frame.  It is usually 0, except where the word is not used for channel data (as above) or immediately following the frame loss count in the second packet of the frame.

The channel values provide the duration of the pulse used to control the servos.  The following is how I plan to implement my own PWM, and not necessarily how the Spektrum receivers do it.  It could be the same; I just can’t measure it while my receiver is broken.

The DX8 allows servo travel to range from -150% up to 150%.  A typical PWM controller will go from 1000µs up to 2000µs for the full range of the servo, so I’ve decided to go from 750µs up to 2250µs to match my transmitter’s range.  You’ll find most of the scaling and direction of the data is done in the transmitter, and only absolute positions are sent to the receiver.
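
A linear mapping of a 2048-mode value onto that range might look like this; the assumption that the raw value spans the full 0-2047 range over the ±150% travel is mine:

#include <stdint.h>

//sketch: map an 11 bit channel value onto a 750-2250us pulse width
static inline uint16_t value_to_pulse_us(uint16_t value) {
  return (uint16_t)(750 + ((uint32_t)value * 1500UL) / 2047);
}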

There are a few details I didn’t really look into during the last few hours, as I have enough to keep me going.  Hopefully this is helpful to someone else; as always, your mileage may vary.