Controlling Zynq PL Clocks in Linux Userspace

The Xilinx Zynq UltraScale+ devices seem to have this covered, but I struggled to find much info on how to do this with the Zynq 7000 parts.  Here are my notes on both platforms.

With a 4.19 kernel, the Xilinx PL clock enabler (XILINX_FCLK) is the driver you need.  It exposes any device-tree node with compatible = "xlnx,fclk" to userspace through sysfs.  On Zynq this looks something like

# echo 150000000 > /sys/devices/soc0/fclk0/set_rate
# cat /sys/devices/soc0/fclk0/set_rate
142857142  # obviously some PLL rounding to deal with

on ZynqMP

# echo 150000000 > /sys/devices/platform/fclk0/set_rate
# cat /sys/devices/platform/fclk0/set_rate
133333332  # obviously some PLL rounding to deal with
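The rounding comes from the integer dividers between the PLL and the PL clock outputs. A rough sketch for the Zynq 7000 case (assuming a single divider off a 1000 MHz IO PLL; the real FCLKs have two cascaded 6-bit dividers, and the PLL rate is board-specific):

```python
# Achievable PL clock rates are f_pll / d for integer dividers d,
# so a requested rate gets rounded to the closest such value.
PLL_HZ = 1_000_000_000  # assumed IO PLL rate; board-specific

def closest_rate(requested_hz, max_div=63):
    # Pick the divider whose output is closest to the request.
    d = min(range(1, max_div + 1),
            key=lambda d: abs(PLL_HZ // d - requested_hz))
    return PLL_HZ // d

print(closest_rate(150_000_000))  # 1 GHz / 7 = 142857142, as read back above
```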

The ZynqMP dtsi files already have fclk nodes, supplied from zynqmp-clk-ccf.dtsi.  My Zynq dts didn’t (probably because it was branched many years ago…) but they can be added like:

fclk0: fclk0 {
    status = "okay";
    compatible = "xlnx,fclk";
    clocks = <&clkc 15>;
};

The PL clocks on the Zynq are <&clkc 15>, <&clkc 16>, <&clkc 17> and <&clkc 18>.  On the ZynqMP they are <&zynqmp_clk PL0_REF> etc. (if you #include <dt-bindings/clock/xlnx-zynqmp-clk.h>).
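For completeness, the equivalent ZynqMP node would look something like this (a sketch; I’m assuming the zynqmp_clk label and PL0_REF index from the mainline bindings header mentioned above):

```
#include <dt-bindings/clock/xlnx-zynqmp-clk.h>

fclk0: fclk0 {
    status = "okay";
    compatible = "xlnx,fclk";
    clocks = <&zynqmp_clk PL0_REF>;
};
```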

If you don’t want to set clock frequencies from userspace, you can use ‘assigned-clocks’ in any device tree node that seems relevant.

&custom_thing {
    assigned-clocks = <&clkc 15>, <&clkc 16>;
    assigned-clock-rates = <250000000>,
                           <100000000>;
};

Hope this saves someone else some time.

Cormorant Power Consumption

I’ve been a bit slow testing my Cormorant prototypes over the last few months.  I have managed to complete the drivers for the IMU sensors and the data radio, which cover most of the important hardware devices, so I thought now would be a good time to measure the power consumption.  This is not one simple number, however: Cormorant has many peripheral devices with many configurations, each consuming a different amount of power.

I tested several FPGA designs and hardware configurations in order to isolate the power requirements of the individual components.  Where possible, I compare the theoretical numbers with what I actually measured.

It’s possible to operate Cormorant as a functional flight controller with under 100mW, though this would not include any RF communication.

How I measured power consumption

To measure the power consumption of Cormorant in its various operating modes I assembled a high-side current shunt with an amplifier out of whatever dodgy parts I already had on my shelf, as illustrated here:

highside-current-amp
Design of high-side current amplifier

The voltage drop across Rshunt is amplified ~5x, then attenuated ~2x to give an overall gain of 2.5x.  The output of the amplifier is filtered with a cutoff ~300Hz before a variable offset is applied.  The resulting Vout should then be 2.5mV per 1mA through Rshunt.  These components are of questionable quality and temperature sensitivity, so I’ve included the two pots so I can calibrate the amplifier.  In the end I found it accurate to ±2mA.
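The numbers above can be sanity-checked with a little arithmetic (the 1 Ω shunt value here is my inference from the quoted 2.5 mV/mA sensitivity, not a measured value):

```python
# High-side current amplifier: gain stages and resulting sensitivity.
GAIN_AMP = 5.0   # first stage, ~5x
ATTEN = 0.5      # ~2x attenuation
R_SHUNT = 1.0    # ohms -- inferred from the 2.5 mV/mA figure, not measured

overall_gain = GAIN_AMP * ATTEN          # 2.5x overall
mv_per_ma = overall_gain * R_SHUNT * 1.0 # mV of Vout per mA through Rshunt

print(overall_gain, mv_per_ma)  # 2.5 2.5
```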

The three voltage sources were supplied by a Keithley 2231A-30-3 DC Power Supply.  This supply was used as a reference for calibrating the amplifier by shorting the load and setting the current limit.  The three outputs are isolated, so I can stack two on top of each other to get a negative rail.

Disclosure: I am a founder & employee of Liquid Instruments, manufacturer of the Moku:Lab.

Sample Moku:Lab measurement
Sample measurement from the Moku:Lab

Power supplied to the load was measured by a Moku:Lab, which monitored Vcc at the load and the current via Vout, using its Oscilloscope instrument.  The Oscilloscope allows a custom probe multiplier, which I set to 1/2.5 (the inverse gain of the amplifier) so the current channel can be read directly in mA.  Using the maths channel (yellow) to multiply the two inputs, I can then read power directly in mW.

There are many configurations in which Cormorant can be used, involving different peripherals and levels of activity.  I tested a sample of configurations chosen to isolate various components and quantify their power requirements individually.  A design for each configuration was compiled and deployed to six devices; the power consumption of each was measured and an average taken.  In addition to the active peripherals, power consumption is also increased by the dynamic resources in the FPGA; we use the vendor’s tools (Libero) to estimate this increase.  Briefly, the configurations tested were:

FPGA Design | Description                                                                                    | Dynamic Power (mW)
Erased      | The FPGA erased, as it comes from the factory                                                  | –
Empty       | A mostly empty design, just a few default pin drivers                                          | 0.000
CPU 100MHz  | Only the CPU at the given clock frequency; no software loaded                                  | 80.311
CPU 140MHz  | As above, at 140MHz                                                                            | 104.569
IMU         | Configures and samples the IMU components (gyro/accel/compass/baro) in a typical configuration | 3.334
Radio       | Configures and communicates over the data radio, with an optional High Gain Mode (HGM); 119-byte packets transmitted at 15Hz | 7.201
Radio+IMU   | Both radio and IMU running simultaneously                                                      | 9.713

These configurations were tested with and without the breakout board attached (the radio is only available on the breakout board).

There are a number of hardware configuration options enabled by jumpers on the PCB.  Most of these enable level translators, at selectable voltages, to drive the external headers.  These are enabled only on units 1 and 2, leading to a slight increase in power consumption on those two boards.

What are the numbers?

PowerConsumption

The raw data and basic comparison plots of each design/unit combination are supplied in the attached document.  From this data I was able to isolate the power consumption of individual components and activities with simple linear algebra.
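That isolation step amounts to differencing measurements of configurations that differ by a single component.  A minimal sketch with illustrative totals (roughly consistent with the component table below, not the raw data):

```python
# Each configuration's total power is the sum of its components, so
# pairs of configurations differing by one component isolate that
# component by subtraction.  Totals below are illustrative, in mW.
totals = {
    "base":                80.0,   # Cormorant SoM alone
    "base+breakout":       103.0,
    "base+breakout+imu":   115.0,
    "base+breakout+radio": 147.0,  # radio quiescent
}

cormorant = totals["base"]
breakout  = totals["base+breakout"] - totals["base"]
imu       = totals["base+breakout+imu"] - totals["base+breakout"]
radio_q   = totals["base+breakout+radio"] - totals["base+breakout"]

print(cormorant, breakout, imu, radio_q)  # 80.0 23.0 12.0 44.0
```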

Component                | Power (mW)
Cormorant                | 80
Breakout                 | 23
IMU                      | 12
CPU                      | 58 + 1.35/MHz
Radio (Active quiescent) | 44
Radio TX                 | 309
Radio TX HGM             | 449
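The CPU entry is a fitted linear model: a fixed cost plus a per-MHz dynamic cost.  Evaluating it reproduces the measured increases for the two CPU designs to within about 1 mW:

```python
# Fitted CPU power model from the component table above.
def cpu_power_mw(freq_mhz):
    return 58.0 + 1.35 * freq_mhz

print(cpu_power_mw(100))  # ~193 mW (measured increase: 192 mW)
print(cpu_power_mw(140))  # ~247 mW (measured increase: 246 mW)
```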

Comparing these results with the theoretical expectations can be done using the FPGA designs that have simple and deterministic power estimates from Libero.  These are the empty design and the two CPU designs.  Additionally the IMU design has simple power requirements for the peripheral devices and can also be easily compared.

FPGA Design | Power Increase – Estimate (mW) | Power Increase – Measured (mW) | Factor (%)
CPU 100MHz  | 80.331                         | 192                            | 42
CPU 140MHz  | 104.569                        | 246                            | 43
IMU         | 2.106 + 3.334                  | 12                             | 45
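The Factor column is just the ratio of estimated to measured increase (assuming that is how it was computed):

```python
# Factor (%) = estimated increase / measured increase * 100,
# using the rows of the comparison table above.
rows = [
    ("CPU 100MHz", 80.331,        192),
    ("CPU 140MHz", 104.569,       246),
    ("IMU",        2.106 + 3.334, 12),
]

for name, est, meas in rows:
    print(name, round(est / meas * 100))  # 42, 43, 45 respectively
```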

So what does this mean?

The original design of Cormorant was aiming to control an aircraft with under 100mW.  These results show that this is possible, just, with a minimal configuration that doesn’t include any RF communication.  In this configuration we would have inertial sensors and computation capabilities (FPGA only, no CPU).  Of course this doesn’t include the actuators required to control the aircraft.  A more typical configuration would include the breakout board with the radio transmitting and is likely to require 160mW minimum and 470mW during transmit.

The measured power consumption was significantly higher than the theoretical expectation; however, in the cases we can directly compare, the error shows a consistent factor.  This is likely the efficiency of the switch-mode power supplies used in the design.  That efficiency was expected to be ~80% at the given output current, not the ~43% we’re observing.  This will need to be confirmed with further experimentation.

Power consumption during radio transmission was also significantly lower than expected.  In high gain mode, the output power should be just under 27dBm, and the power draw of the amplifier alone should be ~1W.  In experimentation, increasing the signal amplitude into the amplifier does increase power consumption, but I still need to isolate where the missing power went.

Cormorant. A New FPGA Based Autopilot

It has been a few years since I first assembled a prototype flight controller built around an FPGA.  Having been thoroughly distracted by work and thesis writing, it’s about time I updated the design.  So I hereby present Cormorant!

The concept is the same as before: motion sensors, actuator control, communication and of course the FPGA.  All of these have come a long way since my original ProASIC3 design, and CPU/FPGA “SoCs” have become common.  This new design centres around a SmartFusion2 SoC, which pairs the FPGA fabric of an IGLOO2 with a Cortex-M3 CPU.

The FPGA uses flash memory to store the logic configuration, as opposed to the SRAM most other brands use.  This means the configuration persists through power down, static power consumption is reduced, and, most significantly, the logic configuration is not susceptible to ionising radiation (single event effects, SEEs).  Using an FPGA also greatly simplifies meeting real-time and determinism constraints, even in high frequency applications such as motor controllers.

Prototypes arrived a few weeks ago and I’ve been frantically porting code from the previous platform.  So far they’re performing exactly as designed, which is a relief.

Cormorant SoM
Cormorant System-on-Module

The flight controller is implemented on a single board as a System-on-Module.  This contains the processing and memory elements as well as the compass, barometer, accelerometer and gyroscopes.  This should include everything required to autonomously control an aircraft in a tiny 20x34x4mm package.

While the SoM is intended to be a complete system, there are many peripherals and conveniences that are typically required in a UAV system.  These can be provided by a baseboard that mates with the 40 pin header on the SoM depending on the requirements of the application.

Cormorant
SoM mounted on Baseboard

The first baseboard I’ve implemented provides a 900MHz data radio, programming & USB serial interface, SD card, audio output and standard 8 channel servo header.  The baseboard measures 20x67x14mm and with the SoM attached the entire unit weighs 18.5g.

Unlike the previous design, I’ve decided not to include a GPS or airspeed indicator on the central flight controller.  This is because they require routing pneumatic tubes or RF cables through the aircraft to locations more suitable for those devices; it’s much easier to route a couple of data wires instead.

As my main area of focus is solar powered gliders, I’ve tried to minimise the power consumption.  The design was expected to require less than 100mW at 5V, which holds for a minimally functional system.  However the power consumption can increase drastically depending on which peripherals and features are used, such as the CPU and external RAM (neither of which are required for a flight controller).

There are actually quite a number of topics to cover in this design, as well as a whole heap of technical details and specification.  I’ll definitely be posting more on these as I find the motivation.

Custom ISR Prologue in AVR C

I fell into a situation recently where I was relying on pin change (PCINT) interrupts to decode a serial protocol. This meant I needed to detect whether the interrupt was triggered by a rising or falling edge. This particular protocol can have pulses as short as 1µs (8 CPU cycles), so to tell if the pin was rising or falling I need to sample it within 8 cycles of the edge actually happening. Add the pin synchroniser and a non-deterministic ISR entry of up to 7 cycles or so, and by the time the ISR starts you’re usually already too late… won’t stop me trying though!
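The cycle budget works out like this (the synchroniser and entry figures are the rough numbers from above, not datasheet-exact):

```python
# Cycle budget for sampling the pin inside the ISR, at 8 MHz.
F_CPU_HZ = 8_000_000
pulse_s = 1e-6                      # shortest pulse in the protocol

budget = round(pulse_s * F_CPU_HZ)  # cycles before the pin may change again
sync_and_entry = 2 + 7              # assumed ~2-cycle synchroniser + "7 cycles-ish" entry

remaining = budget - sync_and_entry
print(budget, remaining)  # 8 cycles of budget, already ~1 cycle in deficit
```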

Now I’ll skip the part where you tell me I should just use INT0 or poll or whatever and just look at the case where you do actually want to insert some code before the interrupt service routine (ISR) starts its prologue.
A typical ISR is defined like this:

ISR(PCINT0_vect) {
  //do stuff here
}

When the compiler builds this, it checks all registers your ISR modifies (including modifications by any functions you call) and pushes them to the stack. A disassembled ISR might look something like

ISR(TIMER0_COMPA_vect) {
0000010B  PUSH R1		Push register on stack 
0000010C  PUSH R0		Push register on stack 
0000010D  IN R0,0x3F		In from I/O location 
0000010E  PUSH R0		Push register on stack 
0000010F  CLR R1		Clear Register 
00000110  PUSH R18		Push register on stack 
00000111  PUSH R19		Push register on stack 
00000112  PUSH R20		Push register on stack 
00000113  PUSH R21		Push register on stack 
00000114  PUSH R22		Push register on stack 
00000115  PUSH R23		Push register on stack 
00000116  PUSH R24		Push register on stack

...actual stuff...

00000150  POP R24		Pop register from stack 
00000151  POP R23		Pop register from stack 
00000152  POP R22		Pop register from stack 
00000153  POP R21		Pop register from stack 
00000154  POP R20		Pop register from stack 
00000155  POP R19		Pop register from stack 
00000156  POP R18		Pop register from stack 
00000157  POP R0		Pop register from stack 
00000158  OUT 0x3F,R0		Out to I/O location 
00000159  POP R0		Pop register from stack 
0000015A  POP R1		Pop register from stack 
0000015B  RETI 		Interrupt return 

This is the ISR burning a bunch of CPU cycles storing the state of whatever you just interrupted (and was about to clobber), then restoring it afterwards. This is pretty important, but a typical PUSH takes two clock cycles, and there are a lot of them…

It’s possible to define a ‘naked’ ISR, where the compiler doesn’t generate the prologue and epilogue for you:

ISR(PCINT0_vect, ISR_NAKED) {
  //do stuff here
}

however, now you have to worry about any registers you might be stepping on. Really, you should never use a naked ISR unless you are hand coding assembler.

My decoder is completely interrupt driven, which means my ISR actually does quite a lot, and I would very much like to have it preserve the state of whatever I’m interrupting. However it must sample the pin that changed as the very first thing it does.

Enter the naked top half:

ISR(PCINT0_vect, ISR_NAKED) {
  asm (
    "SBIS %[port], 1\t\n" //Check PINB1 and..
    "RJMP __vector_PCINT0_FALLING\t\n"
    "RETI"
    :: [port] "I"(_SFR_IO_ADDR(PINB)) :
  );
}

and the not so naked bottom half:

ISR(__vector_PCINT0_FALLING) {
  //do stuff
}

The top half here is built by the compiler and installed as the ISR for the PCINT interrupt we care about. Being naked, the only code it runs is the assembly you see here. It samples PINB, checks if the pin is high or low and then jumps to the bottom half which performs the rest of the ISR. If we jumped, then we can rely on the bottom half to RETI for us (also why we don’t ‘call’ the bottom half), but if we didn’t jump then we need to clean up the ISR ourselves with a RETI. We could also define another bottom half for the rising edge and replace the RETI with an RJMP to it if we care about both events. Finally, the name of the bottom half should start with “__vector_*” to stop GCC complaining.

This works because you can put any name you like in an ISR() definition. GCC will check all of its clobbering and generate the epi/prologue regardless of whether the ISR is attached to an actual interrupt vector. Once it has created that for us, we can hand code the top half however we like and jump, knowing the compiler has taken care of the hard stuff coming later. Just be careful not to clobber anything in the top half.

Ubuntu on IGEPv2

Preface

This is a short guide to getting Ubuntu running on the IGEPv2 Rev C from a bootable SD card.  There are a few guides around that I’ve based various sections on, but I couldn’t find a complete howto that was up to date with the current hardware.  Most credit goes to Michael Opdenacker of Free Electrons.

Download some Packages

  • I’m assuming you’re building on Ubuntu.  You’ll need to install a couple of extra packages.
    $ sudo apt-get install ia32-libs git git-core qemu qemu-kvm-extras debootstrap build-essential

Install the poky toolchain

  • First download the IGEP SDK Yocto Toolchain from here.
  • Extract the toolchain to /.
    $ sudo tar jxf igep-sdk-yocto-toolchain-*.tar.bz2 -C /

This should install the cross compiler to /opt/poky. To use the toolchain you need to configure your environment, which you can do with this shortcut:

$ source /opt/poky/1.2/environment-setup-armv7a-vfp-neon-poky-linux-gnueabi

You’ll also need to add these to your environment.

$ export ARCH=arm
$ export CROSS_COMPILE=arm-poky-linux-gnueabi-

Build the Kernel

  • Clone the Linux OMAP Git repository
    $ mkdir igep
    $ cd igep
    $ git clone git://git.igep.es/pub/scm/linux-omap-2.6.git
    $ cd linux-omap-2.6/
  • Switch to the kernel version you want to build, in this case 2.6.37-6.
    $ git tag
    v2.6.28.10-3
    ...
    v2.6.37-4
    v2.6.37-5
    v2.6.37-6
    $ git checkout -b v2.6.37-6_local v2.6.37-6
  • Run the build with the default configuration.
    $ make igep00x0_defconfig
    $ make -j 4

Create a Root Filesystem

  • Download rootstock from here.
    $ tar zxvf rootstock-0.1.99.3.tar.gz
  • Build the filesystem. You can substitute your own username and password here.
    $ cd rootstock-0.1.99.3
    $ sudo ./rootstock --fqdn igepv2 --login joe --password 123456 --imagesize 2G --seed build-essential,openssh-server --dist lucid

Build IGEP-X-Loader

  • Clone the Git repo
    $ cd ~/igep
    $ git clone git://git.isee.biz/pub/scm/igep-x-loader.git
    $ cd igep-x-loader
    $ git checkout -b release-2.5.0-2_local release-2.5.0-2
  • Build with the default configuration.
    $ make igep00x0_config
    $ make

Format the SD Card

You’ll need a micro SD card at least 2GB in size. If you need some more details on how to do this step, you can find them here.  I found this to be a much simpler experience.

Note: Substitute sdc with the actual device node of your SD card. I shouldn’t have to warn you that this will destroy all data on the SD card.

  • Delete all the partitions.
    $ sudo dd if=/dev/zero bs=512 count=1 of=/dev/sdc
  • Run cfdisk to format the card.
    $ sudo cfdisk /dev/sdc
  • Create the boot partition.
    • type: W95 FAT32 (LBA)
    • size: 100MB is probably plenty
    • Mark it as bootable
  • Create the root partition.
    • type: Linux
    • size: Whatever is left
  • Write the changes to the card and quit cfdisk.
  • Format the partitions.
    $ sudo mkfs.msdos /dev/sdc1
    $ sudo mkfs.ext3 /dev/sdc2
  • Mount the partitions.
    $ sudo mkdir -p /media/boot
    $ sudo mkdir -p /media/rootfs
    $ sudo mount /dev/sdc1 /media/boot
    $ sudo mount /dev/sdc2 /media/rootfs

Copy Everything to the SD Card

  • Copy the kernel and X-Loader.
    $ cd ~/
    $ sudo cp igep/igep-x-loader/MLO /media/boot
    $ sudo cp igep/linux-omap-2.6/arch/arm/boot/zImage /media/boot
    $ sudo cp igep/igep-x-loader/scripts/igep.ini /media/boot
  • Copy the root filesystem.
    $ cd /media/rootfs
    $ sudo tar zxvf ~/igep/rootstock-0.1.99.3/armel-rootfs-201306180112.tgz
  • Install kernel modules.
    $ cd ~/igep/linux-omap-2.6/
    $ sudo make ARCH=arm INSTALL_MOD_PATH=/media/rootfs modules_install

Modify the X-Loader config

  • Edit /media/boot/igep.ini.
  • Specify the rootfstype.
    ;  --- Configure MMC boot --- 
    root=/dev/mmcblk0p2 rw rootwait
    ; add this line
    rootfstype=ext3

A Few Modifications to the Root Filesystem

  • Copy tty config.
    $ cd /media/rootfs
    $ sudo cp etc/init/tty1.conf etc/init/ttyO2.conf
  • Edit etc/init/ttyO2.conf and change.
    exec /sbin/getty -8 38400 tty1

    to

    exec /sbin/getty -8 115200 ttyO2
  • Disable ureadahead.
    $ sudo mv etc/init/ureadahead.conf etc/init/ureadahead.disabled
  • Setup the network interface by editing etc/network/interfaces. This is just an example; you’ll probably have your own details to put here.
    auto lo
    iface lo inet loopback
    
    auto eth0
    iface eth0 inet static
            address 192.168.5.1
            netmask 255.255.255.0
            network 192.168.5.0
            broadcast 192.168.5.255
  • Add some more source repositories by editing etc/apt/sources.list.
    deb http://ports.ubuntu.com/ubuntu-ports lucid main universe
    deb http://ports.ubuntu.com/ubuntu-ports lucid-updates main
    deb http://ports.ubuntu.com/ubuntu-ports lucid-security main

Testing the Build

  • Unmount the SD card from your development machine.
    $ cd ~/
    $ sudo umount /media/rootfs
    $ sudo umount /media/boot
  • Insert the card into the IGEPv2 and power it up.

The board should boot and you should see the kernel messages on the debug port. You can connect to the board over ssh using the username and password you used when creating the root filesystem, and the IP address you specified in /etc/network/interfaces.

Prints for Sale

I’ve been taking photos for as long as I can remember, starting out with a $10 film camera that had no need for batteries, and chewing through dozens of disposables during my high school years.  I never took it very seriously, it was mostly just documenting fun times with friends.

That was until I started hunting for a new digital camera a few years ago.  My supervisor takes his photography very seriously and had some things to say about my potential selections.  It was then a struggle between my budget and his insistence on quality that eventually ended with me purchasing a Canon 600D.  This began a long saga of gear lust and me taking tens of thousands of photos of friends, landscapes, bugs, drunken students and whatever else happened to appear in front of my lens.

It wasn’t long before I began picking up odd jobs for clubs at uni or even local magazines which helped subsidize my rampant purchasing.  I’ve since upgraded most of my gear, moving up to a Canon 6D and even buying some studio lights.

I’ve wanted to set up a website to sell prints for a while now, but just recently (and suddenly) found the motivation to do so.  After a week of hacking together some html I’m fairly happy with the look of it.

You can find it over at photography.cgsy.com.au.  There you will find some shots that I consider worthy of showing you, as well as some examples of my professional work.

Sensor Film Review

I’ve recently started noticing some dust specks in the photos from my DSLR which are starting to bug me.  In fact I’ve never actually seen my camera’s sensor completely clean as they often come preloaded with dust from the factory.

Before

I’ve made some attempts at cleaning the sensor (it’s actually the low-pass filter in front of the sensor, but I’m just gonna call the whole thing ‘the sensor’) but haven’t had much luck.  Anyone who has tried poking around in a $1000+ camera with various cleaning implements will know how stressful this process can be.

I’m not going into the detail of cleaning a sensor, and I’m going to pretend you already know all the risks and considerations to make when touching the sensor with ‘anything’.  No; today I’m going to talk about this nifty goo I found called Sensor Film.  I struggled to find many reviews of this stuff while I was researching it but found it intriguing enough to give it a go.  Hopefully my experience here will help others.

20121105_164215

Sensor Film is a polymer that you paint onto the sensor with a small soft brush.  Once it has dried, you peel it off and hopefully take all the dust on your sensor with it.  The advantage is that you never put any pressure on the sensor and never rub anything against it, so the risk of scratching is negligible.  You also have the chance to clean out the mirror chamber with a blower/vacuum while the sensor is protected under the polymer.

The consistency is much like honey and it is very easy to apply it to the sensor.  It does have a tendency to draw out long ‘tails’ (just like real honey) that you need to prevent before you move the brush near the camera.  If you don’t, you’ll end up leaving little spider webs around your mirror chamber, which is probably a bad thing.  It’s well behaved on the sensor and goes where you tell it to, but it’s still a delicate operation to cover the sensor without getting too close to the edge.  I have a Canon 600D which has a fluoride coated low-pass filter, so naturally I’m using the fluoride variant of Sensor Film.  The film was also nice enough to even itself out after sitting for a minute, so I only needed to make sure there was enough goo and not worry too much about how evenly I spread it.

After the sensor is covered you have to let it dry, which may take up to 3 hours according to the manufacturer.  They also recommend you leave the shutter open the whole time, so make sure you have a fresh battery.  They never actually state how to tell when the film has dried, but I found mine dry to the touch after about an hour.  That’s also about the closest you ever want your finger to get to the sensor :P.

To remove the film, you need to attach a small paper tab.  There is a piece of paper for this included with the Sensor Film; I don’t know what’s special about it, but if you keep it with the bottle of goo there should be enough for all your cleaning needs.  Cutting the tab is easy, but be sure to make it an appropriate length.  It’s very easy to overestimate the length you need, which will lead you to make a mess of attaching it.  The tab is simply glued to the film with a small amount of Sensor Film.

My first attempt at this tab failed.  The paper de-laminated as I tried to pull the film off the sensor.  I suspect it may not have been close enough to the edge of the film, or the strip of paper I used was too thin.  After cutting another strip and a very stressful 30 minute wait for it to dry again I managed to peel the film off without an issue.  The force required was quite reasonable, and certainly didn’t feel like there was a risk of damaging the filter.

_MG_3016

The results were impressive.  Almost all the specks, including the largest ones were gone.  There were still a few remaining that were not present in the before shot.  These may have landed in the time it took me to attach a lens and close the shutter.  There was one large blob in the corner, which on close inspection was a small piece of lint.  It wouldn’t budge with a gentle breeze so I made the foolish decision to move it with the brush.  This left a comically large smear on the sensor and caused my palm to connect with my face.

After staring at the smear in a new test shot, I decided to start the process again.  This would allow me to maybe clear the rest of those specks, test Sensor Film on a nasty smear and see how repeatable the process was.  With the justification out of the way, I set about painting my sensor again (after swapping batteries of course).

This time I tried for a smoother coating, avoiding or removing air bubbles where I could.  The practice helped and I was much quicker at coating the sensor.  It was fairly warm in the space I was working, so the film dried quickly.  I would’ve finished in under an hour, but I had the same issue with the first paper tab delaminating.

_MG_3018

After peeling this one off and taking a test shot all I could think was WTF?!  The sensor was pretty much perfect except for one giant chunk just off center.  There were also a couple of very minor spots that were consistent across all my test shots, and obviously impervious to Sensor Film.  The smear from last time had mercifully vanished without a trace.

20121105_212028

The big chunk prompted a third attempt.  This time I decided to lay the film on pretty thick, which is the best way to avoid any voids left by brush strokes or bubbles.  The film doesn’t seem to flow out of the area you apply it to, but does smooth itself out pretty well.  Getting the edges thick takes a bit of technique; the process is more about gently pushing the film into the areas you want it than brushing it onto the surface.

After peeling this third film off (first time!) and taking a test shot I was relieved to finally have a clean sensor.  There was still one stubborn spot, but it was not significant enough to worry about.

After

In conclusion I’m going to be pretty generous to Sensor Film.  While it takes more practice than advertised it did a remarkable job cleaning my sensor.  I’m sure the number of applications can be minimised by taking more care and having a little more experience with how the stuff works.  Most other methods of sensor cleaning I’m aware of also require multiple attempts; the big difference with Sensor Film is how stress free each attempt is.  Even after painting my sensor three times I never felt like I may have done any damage, at least after I successfully peeled the first film off :P.

I would recommend Sensor Film.  It’s just as tedious and frustrating as any other method out there, but it does keep the heart rate to a minimum while cleaning.  When you’re done, the results are as good as you can expect.  Just take your time applying it and make a nice thick film.

And for those who are curious: here is what the world looks like through a layer of Sensor Film.

_MG_3015

Goodbye RepRap

So you know that RepRap I spent all my money & time building a few years back?  Well it’s been sitting in a cupboard doing nothing for a long while now so I have decided to put it up for adoption.

_MG_2843

I had finally managed to get the thing working and even spit out a few parts that resembled the CAD files I gave it.  However, like most 3D print enthusiasts, I quickly realised my printer’s shortcomings.  The biggest was the lack of a heated bed, which is required to print parts bigger than a 15ml shot glass.  Without it, these parts warp drastically during the print, usually resulting in a big mess.

Though I tried to build my own heated bed, it only half-solved the problem.  While the prints improved, the extruder was painfully slow and the parts still had a little too much warp to be successful.  It was while I was trying to improve the extruder that it overheated and destroyed itself.  At this point I put it away, opting for much easier, albeit more expensive prints from Shapeways.

My colleagues over at Make Hack Void have been building a few notably newer printers among themselves, as any good hacker space should.  So I have decided to donate my printer to the space in the hope that someone is willing to resurrect it.

Farewell, frustrating contraption, and Godspeed to whoever is brave enough to try to get you running again.

_MG_2825

Processing Architectures

So I was listening to a recent episode of The Amp Hour; “An off-the-cuff radio show and podcast for electronics enthusiasts and professionals“, and Chris & Dave got onto the topic of custom logic implemented in an FPGA vs a hard or soft core processor (around 57 minutes into episode 98).  This is a discussion very close to my current work and I’m probably in a very small minority so I figure I should speak up.

If you look closely at the avionics I’ve developed, you’ll notice there is only an FPGA, with no processor to handle the functionality of the device.  There is an 8-bit Atmel, but it’s merely a failsafe.  So to make my position clear: everything in Asity is (or will be) implemented in custom logic.

Chris & Dave didn’t go into great depth as it was just a side-note in their show so I’ll do my best to go through a few alternative architectures.  I’ll also stick with Asity as an example given my lack of experience.  I am just a Software Engineer after all.

The goals here are to retrieve data from several peripheral devices including ADCs, accelerometers, gyroscopes among many others; do some processing to come up with intelligent control, and then output to various actuators such as servos.  When designing such hardware a decision has to be made as to what processor(s) will be in the middle.

CPU-FPGA-Interfaces1

The first example is the one I’ll refer to as the traditional approach.  This includes a single CPU that interfaces with all peripherals and actuators, much like you would find in your desktop PC from 5 years ago, or your phone today.  This is the architecture used in the Ardupilot and many other avionics packages.

Modern processors are capable of executing billions of instructions per second.  What can be done with those instructions depends on the instruction set used and the peripherals available to the CPU.

The major limitation of this architecture is that a single CPU core can only attend to one thing at a time: it can only service one interrupt, or perform one navigational calculation, and so on.  To keep up with all the data gathering and processing required, a designer must be either very talented or very lucky.  Either way, they still need to spend time developing a scheduling algorithm.
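To make the bottleneck concrete, here’s a minimal sketch (in Python, with invented task names — real firmware would obviously look nothing like this) of the single-core main loop: every task is serviced strictly in turn, so the worst-case latency of any one task is the sum of everything scheduled ahead of it.

```python
# Minimal sketch of a single-core main loop: hypothetical task names,
# no real hardware access.  One core, one sequence, one thing at a time.

def run_cycle(tasks):
    """Service each task exactly once, strictly in sequence, and
    return the order in which they ran."""
    order = []
    for name, task in tasks:
        task()  # while this runs, nothing else gets serviced
        order.append(name)
    return order

tasks = [
    ("read_adc",    lambda: None),
    ("read_imu",    lambda: None),
    ("update_nav",  lambda: None),
    ("drive_servo", lambda: None),
]

print(run_cycle(tasks))  # the fixed service order, every cycle
```

If `update_nav` blows its time budget, `drive_servo` is simply late — hence the need for a carefully tuned schedule or an overpowered CPU.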

In a single-core architecture with non-deterministic inputs or algorithms, it can be impossible to guarantee that everything is serviced in time.  The common alternative is to make sure the CPU is significantly overpowered, which costs extra money and power.

[Diagram: CPU–FPGA interfaces, multi-core architecture]

Next we have the multiple-CPU-core architecture.  This could be either a multi-core CPU like a modern PC, or several independent CPUs/micro-controllers.  A couple of avionics packages, such as the AttoPilot, make use of the 8-core Parallax Propeller.

This architecture allows tasks and interfaces to be serviced in smaller, independent groups, which simplifies the scheduling logic and allows a reduction in clock speed.  It also introduces the extra complexity of managing the communication between the cores.  So while this improves on the single-core architecture, each core is still fundamentally limited by a single execution bottleneck.
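A rough Python sketch of that communication overhead (the core roles and names here are mine, not from any real avionics code): one thread stands in for a core owning the sensors, another for a core owning the control law, and the queue between them is exactly the extra machinery the multi-core split introduces.

```python
import queue
import threading

def sensor_core(out_q):
    """'Core 1': owns the sensors; pushes samples to the control core."""
    for sample in (1, 2, 3):          # stand-in sensor samples
        out_q.put(sample)
    out_q.put(None)                   # sentinel: no more samples

def control_core(in_q, results):
    """'Core 2': owns the control law; consumes samples as they arrive."""
    while True:
        sample = in_q.get()
        if sample is None:
            break
        results.append(sample * 10)   # stand-in control calculation

q = queue.Queue()
results = []
producer = threading.Thread(target=sensor_core, args=(q,))
consumer = threading.Thread(target=control_core, args=(q, results))
producer.start(); consumer.start()
producer.join(); consumer.join()
print(results)  # [10, 20, 30]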

[Diagram: CPU–FPGA interfaces, custom-logic architecture]

The final architecture I’ll discuss is complete custom logic.  This is the architecture I’ve used in Asity, and the one that makes the most sense to me as a computer scientist.  I’ve chosen to implement it in a single FPGA, but the architecture can be spread over many devices without significantly altering the software.

In this architecture, each peripheral device is serviced by dedicated silicon that never does anything else.  This allows great flexibility in interrupt timing: samples can be taken at the natural or maximum rate of the device without increasing the burden on computational resources.  Internal logic modules are likewise dedicated to specific tasks, such as navigation and attitude calculations, and are never interrupted by unrelated work.
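That behaviour can be loosely simulated in Python (the block names and sample rates below are invented for illustration): each “block” is an independent counter stepped on every clock tick, so one peripheral’s sample rate never steals cycles from another.

```python
def make_sampler(divider):
    """A dedicated block that samples its peripheral every `divider` ticks,
    independent of whatever any other block is doing."""
    count = 0
    def tick(t):
        nonlocal count
        if t % divider == 0:
            count += 1
        return count
    return tick

adc_block = make_sampler(1)   # samples on every clock tick
imu_block = make_sampler(4)   # samples on every 4th tick

# Step both blocks through the same 8 clock ticks.
for t in range(8):
    adc_count = adc_block(t)
    imu_count = imu_block(t)

print(adc_count, imu_count)  # 8 2
```

In an FPGA the “stepping” is free — every block really does advance on every clock edge — which is the whole point of the architecture.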

In both CPU-based architectures a significant portion of the development effort goes into scheduling tasks and managing the communication between them.  In fact, a large portion of Computer Science research is spent on scheduling algorithms for multitasking on linear, sequential processors.  Truth be told, sequential processors are a very awkward way of making decisions, especially when the decisions aren’t sequential in nature.  They have proven useful in extensible software systems like a desktop PC, as long as there is an operating system kernel to manage things.

Any software designer worth their salt is capable of a modular design, which maps quite naturally to custom logic blocks.  These blocks can be physically routed within an FPGA fabric, allowing data to flow along distinct processing paths that never have to deal with unrelated code.

The downside to custom logic is, of course, time and money.  FPGAs are still quite expensive: two orders of magnitude more expensive than a processor suitable for the same task.  There also aren’t as many code libraries available for synthesis as there are for sequential execution, so a lot has to be written from scratch.

A small price to pay for deterministic behaviour.

Outback Challenge Deliverable 2 Submitted

So there have been some sleepless nights recently as the deadline for the Outback Challenge second deliverable passed this afternoon.  I managed to get my report in by the skin of my teeth after some email troubles (still waiting on the confirmation from the organisers :S ).

Each team had to submit a technical report detailing the design of their aircraft and their risk-management strategies.  We also had to compile a video demonstrating our on-field setup procedure, takeoff and landing, and how the aircraft handles carrying and dropping the payload.

I’ve compiled a playlist of all the D2 videos I could find on YouTube.  Of the 53 teams that passed the first milestone, I could only find 12.  Some teams may have been using private links, while others may not have used YouTube.  Still, I can feel the field shrinking.

My video is included in the playlist above, but if you’re only interested in that one, here it is:

[There is a video that cannot be displayed in this feed. Visit the blog entry to see the video.]

The competition certainly feels like it’s heating up!


UPDATE:  Just got the confirmation email that my submission was received (phew).