Tinkering with Raspberry (and other things)

Pi-Hicle part 2 – Programming Movement And Display Path On Screen


In part one everything was about getting that legacy touch screen to work. Now it’s time to re-live my childhood: I am going to add the logic that moves my Pi-Hicle around.

In case you didn’t read the first part, here’s a short video demonstrating the GUI and screen output. The path can be programmed and will be displayed on screen with triangles showing the Pi-Hicle’s heading.

The original Big Trak was able to hold 16 instructions in its memory. Sixteen! With the Raspberry Pi as a brain this number is significantly higher, although that is not really needed. Programming was done with a touch pad on which one could select a direction (forward, back, left, right), wait and fire. Every command was followed by one or two digits, telling the vehicle how many units of its own length to move. The numbers after the “left” and “right” instructions were used to program a turn of xx degrees. To make things easy for us children, the angle was scaled according to an analogue clock: 15 meant 90 degrees, 30 was 180 etc.

From the image it is clear that “Right-45” would have exactly the same effect as “Left-15”, although the vehicle would rotate in the opposite direction.
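The clock-face scaling boils down to one line of arithmetic: 60 clock units make a full turn, so one unit is 6 degrees. A minimal sketch (the function name is mine, for illustration only):

```python
# The Big Trak's turn argument works like the minute marks on a clock
# face: 60 units correspond to 360 degrees, so one unit is 6 degrees.
def clock_units_to_degrees(units):
    return units * 6
```

With this, 15 gives 90 degrees and 30 gives 180, exactly as on the original keypad.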

So writing the logic for my Pi-Hicle is fairly easy and breaks down to:

  • There are 4 movement commands: LEFT, RIGHT, FORWARD, BACK
  • There are two other commands: HOLD, FIRE
  • Every command is followed by one or two digits
  • After LEFT or RIGHT the digit translates to an angle of rotation
  • After FORWARD or BACK the digit is an arbitrary unit of movement
  • After HOLD or FIRE the digits mean an amount of time (seconds, 1/10th of a second…)

And this makes saving a programmed path even easier: every step of the program is a letter, followed by one or two digits. We can use a string to hold that information and put all the strings in a “command list”.
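As a quick illustration, such a command list might look like this (the single-letter codes here are made up for the example; the real ones come from the _MOVE_MAP lookup table described below):

```python
# A hypothetical programmed path: forward 5 units, turn right 15 clock
# units (90 degrees), forward 2, fire for 1 time unit.
cmd_list = ["F5", "R15", "F2", "S1"]

# Each entry splits cleanly into a command letter and a numeric argument:
for cmd in cmd_list:
    letter, value = cmd[0], int(cmd[1:])
```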

The keypad revisited

Here’s the keypad image again, with all the assigned “keys”. The direction, Hold and “Engage” buttons, together with the number pad, make up the movement commands. Only these key presses will be saved to the command path.

The other buttons can be divided into groups that have different effects while the Pi-Hicle software is running:

  • CLR and CLS affect the programmed path directly
  • CHK and SIM display (sort of) the path
  • GO will “drive” the path
  • OUT and BCK are commands that will end programming mode and go to other levels of functionality.

The source code for path programming

Let’s take a look at the programming loop for the Pi-Hicle. The while loop runs as long as progDone is False. The other variables that are used:

  • cmdStarted: indicates that a path command button was pressed
  • cmdNumcount: counts the digits that are pressed after a path command was started (a maximum of 2 digits is allowed)
  • _shortBuffer: a temporary buffer holding the command that is currently being entered (allows checking without modifying the real path)
  • _cmdBuffer: a list holding all entered and positively checked commands

The source code for this is, like the Big Trak itself, very simple (and would look much cooler and nerdier in assembly language, I reckon).

def programMode(self):
    progDone = False
    cmdStarted = False
    cmdNumcount = 0
    while not progDone:
        # check user inputs and interpret them
        status = self._screen.getTouch('status')
        if status == 2:
            # release detected, get coordinates
            #print self._screen.getTouch('coord')
            event = self._prgPad.getEvent(self._screen.getTouch('coord'))
            if event == 'OUT':
                # done here, return
                progDone = True
            elif event == 'CLR':
                # clear memory
                progDone = False
                cmdStarted = False
                cmdNumcount = 0
                del self._cmdBuffer[:]
                self._shortBuffer = ""
                del self._path[:]
            elif event == 'CHK':
                # testing is only possible if not in cmdStarted mode
                if (cmdStarted==True and len(self._shortBuffer)<2):
                    print "Error: Finish last command before testing"
                elif len(self._shortBuffer)>1:
                    #first add command to buffer
                    self._cmdBuffer.append(self._shortBuffer)
                    cmdStarted = False
                    cmdNumcount = 0
                    self._shortBuffer = ""
                # show last command if there is anything in buffer
                if len(self._cmdBuffer)>0:
                    print "Last command: %s"%self._cmdBuffer[-1]
            elif event == 'CLS':
                # clear last command
                # if programming has started, just clear the shortBuffer
                if cmdStarted is True:
                    self._shortBuffer = ""
                    cmdStarted = False
                    cmdNumcount = 0
                elif len(self._cmdBuffer)>0:
                    # pop last command
                    self._cmdBuffer.pop()
            elif event == 'SIM':
                if (cmdStarted==True and len(self._shortBuffer)<2):
                    print "Error: Finish last command before testing"
                elif len(self._shortBuffer)>1:
                    #first add command to buffer
                    self._cmdBuffer.append(self._shortBuffer)
                    cmdStarted = False
                    cmdNumcount = 0
                    self._shortBuffer = ""
                if len(self._cmdBuffer)==0:
                    print "Error: No commands in memory"
                    #print "Sound returned %02x"%ord(self._screen.playSound("e1beep.wav",0x00))
                else:
                    self.drawRoute()
            # number typed?
            if ( event in self._NUM_MAP ):
                if (cmdStarted is not True):
                    print "Error. Type Command first!"
                elif ( cmdNumcount==2 ):
                    print "Error. Only two numbers allowed after command"
                    # assume the command is now finished
                    self._shortBuffer = ""
                    cmdStarted = False
                    cmdNumcount = 0
                elif ( cmdNumcount<2 and cmdStarted is True ):
                    #append number to shortBuffer
                    self._shortBuffer += event
                    cmdNumcount +=1
                    # if this is the 2nd number the command is finished,
                    # so write it to the command buffer
                    if cmdNumcount == 2:
                        self._cmdBuffer.append(self._shortBuffer)
                        self._shortBuffer = ""
                        cmdStarted = False
                        cmdNumcount = 0
            if ( event in self._MOVE_MAP ):
                if (cmdNumcount == 0 and cmdStarted == True):
                    # there needs to be a number after each command,
                    # so this is an error; discard the incomplete command
                    print "Error: Previous command needs a number."
                    self._shortBuffer = ""
                # this starts a new command so check if the previous one
                # is written to buffer
                if len(self._shortBuffer) > 0:
                    self._cmdBuffer.append(self._shortBuffer)
                    self._shortBuffer = ""
                cmdStarted = True
                self._shortBuffer = self._MOVE_MAP[event]
                cmdNumcount = 0

After programming of the Pi-Hicle is finished, the _cmdBuffer holds all commands as single entries of the form “Xnn”, where X is the short form of the command and nn is a number.

This list makes it easy to have commands like “Clear Last Step” etc, as these are simple list operations like append() and pop(). Furthermore the list can easily be iterated for parsing and calculations.
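Those list operations are exactly what the branches above do; reduced to the bare essentials, assuming the same “Xnn” strings:

```python
cmd_buffer = []

# a finished command is written to the buffer with append():
cmd_buffer.append("F5")
cmd_buffer.append("R15")

# "Clear Last Step" (CLS) is just a pop(), guarded against an empty buffer:
if cmd_buffer:
    cmd_buffer.pop()
```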

You may have noticed that in the “SIM” branch there is a call to drawRoute(). This function will draw the programmed route on the touch screen. The route will be scaled (of course) but everything else, like rotating by n degrees etc., will be calculated and displayed correctly. And that is where mathematics starts.

Simulating the path on screen (be warned: math content)

To display the path on the screen, some calculations are needed. As the touch screen itself has no GUI layer or window server that could handle the projection of the programmed path onto a viewport (that is, a display or window) of given dimensions, we need to do the calculations ourselves.

What’s the problem? Well, take a look at the picture below. In mathematical coordinate systems, “forward” means moving along the x-axis. When the path is displayed on the screen, “forward” is drawn upwards. And as if that weren’t enough, the pixel coordinate system has its y-value increasing downwards. So to translate from the mathematical system to the “real world” we need to rotate the path by 90 degrees and then change the sign of the y-coordinate.



The next problem is the rotation commands. In the mathematical world, rotating by a positive angle means a counter-clockwise rotation; in the “real world” we expect a clockwise rotation. The rotation commands for the vehicle refer to the last direction it moved in, i.e. its “heading”. So if we rotate 90 degrees right, then drive forward and then rotate again, this time by 45 degrees, in the coordinate system (both mathematical and real-world) this amounts to a rotation by -(90+45) degrees (everything parallel to the x-axis is 0 degrees).

Therefore we need to keep track of

  • The coordinates of the last point the vehicle was at
  • The heading at that point (the direction the vehicle was facing)

Anything else? Oh yes. The screen has only a limited number of pixels in each direction, so eventually all coordinates need to be scaled down (or up) to fit on the screen. This is done by calculating the boundary of the path, that is, the rectangle the path fits into (in our mathematical coordinate system).

The path in the mathematical coordinate system

So I am going with a two-part approach: first, every point of the path and the heading (that is, the angle between the vehicle’s forward direction and the x-axis) is calculated in the mathematical coordinate system. The command “F5” would yield 5 single points in the path. All these points are saved in the form (x component, y component, heading). If the “Shoot Laser” command is issued at a certain point, I set the heading to -999; for “Wait” it is -888.


In a more mathematical way: I am calculating the direction vector for each command’s angle. This direction vector is simply calculated as follows (attention: there is a negative sign in the sin() term so that positive angles mean clockwise rotation; as cos() is symmetric, no sign change is needed there):

\vec{v}=    \begin{pmatrix}    \cos(\omega)\\-\sin(\omega)    \end{pmatrix}

This heading is stored in a list called vector, so that vector[0] is the x-element and vector[1] the y-element. Moving forward means adding these elements to the current point; moving backwards means subtracting them. So moving in a given direction for an amount of “val” units looks like this in the source code:

for step in range(0,val):
    x = point[0] + sign*vector[0]
    y = point[1] + sign*vector[1]
    point = (x,y, oldHeading)

I am simply appending all calculated points to the path. This is the mathematically correct representation of the programmed path, so it could be used for all kinds of calculations. As a last step, the bounding rectangle of this path is calculated, so I have the maximum and minimum values of the x- and y-coordinates for scaling.
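To make the fragment above runnable on its own, here is a sketch that combines the direction vector, the point loop and the bounding-rectangle calculation (the function names and the border parameter are mine; the real calcPathBoundary may differ):

```python
import math

def expand_move(point, heading_deg, val, sign):
    """Expand one FORWARD (sign=+1) or BACK (sign=-1) command over val
    units into single path points of the form (x, y, heading)."""
    # negative sign in the sin() term: positive angles rotate clockwise
    vector = (math.cos(math.radians(heading_deg)),
              -math.sin(math.radians(heading_deg)))
    points = []
    x, y = point[0], point[1]
    for step in range(val):
        x += sign * vector[0]
        y += sign * vector[1]
        points.append((x, y, heading_deg))
    return points

def calc_path_boundary(path, border=0):
    """Return (xmin, ymin, xmax, ymax) of a path of (x, y, heading)
    tuples, enlarged by border units on every side."""
    xs = [p[0] for p in path]
    ys = [p[1] for p in path]
    return (min(xs) - border, min(ys) - border,
            max(xs) + border, max(ys) + border)

# "F5" from the origin with heading 0 (facing along the x-axis):
path = expand_move((0.0, 0.0), 0, 5, +1)
boundary = calc_path_boundary(path)
```

With heading 0 the five points march along the x-axis; after a 90-degree right turn the same command would move them towards negative y, matching the clockwise convention above.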

The path in the screen coordinate system

Displaying the path on a display means rotating it and changing the sign for the y-coordinate. Moreover, as the display size is limited, we need to scale the complete path in a way that it fits onto the screen.

Sticking with the vector notation, a rotation about the origin is generally calculated as

p^{'}=    \begin{pmatrix}    x^{'} \\ y^{'}    \end{pmatrix}    =    \begin{pmatrix}    \cos(\omega)& -\sin(\omega)\\    \sin(\omega)& \cos(\omega)    \end{pmatrix}    \cdot    \begin{pmatrix}    x \\ y    \end{pmatrix}

This can be written as two separate equations for the new x- and y-coordinate (easy to use in a program):

x^{'}= x \cdot \cos(\omega) - y \cdot \sin(\omega)\\    y^{'}= x \cdot \sin(\omega) + y \cdot \cos(\omega)

This looks complicated, but we are lucky that cos(90°)=0 and sin(90°)=1. Putting this into the equation and changing the sign of the y-coordinate gives a result that is as easy as:

x^{'}= -y\\    y^{'}= -x

So practically we are just swapping the coordinates and changing both signs. Yes, I could have told you earlier, but I like math…
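The shortcut is easy to verify numerically: rotate a point by 90 degrees with the full equations, flip the y-sign, and compare with the direct (-y, -x) form (function name is mine):

```python
import math

def rotate_then_flip(x, y, omega_deg=90):
    """Rotate (x, y) about the origin by omega_deg, then mirror the
    y-coordinate (pixel y grows downwards)."""
    w = math.radians(omega_deg)
    xr = x * math.cos(w) - y * math.sin(w)
    yr = x * math.sin(w) + y * math.cos(w)
    return (xr, -yr)

# For 90 degrees this collapses to simply (-y, -x):
full = rotate_then_flip(3.0, 2.0)
short = (-2.0, -3.0)
```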

Finally the path needs to be scaled. This is accomplished by dividing the screen resolution in each dimension by the width (or height) of the bounding rectangle. Then we simply re-calculate our points and write them to a path list. One last thing: these new coordinates are still calculated with the origin at (0,0). So to finally draw them on screen, a “starting point” is calculated and added to every point in the path. The starting point is simply the scaled value of the smallest corner of the bounding rectangle.

In Python all this looks like this:

def createDisplayPath(self):
    dispPath = []
    # first rotate and mirror
    for point in self._path:
        x = (-1.0)*(point[1])
        y = (-1.0)*(point[0])
        p = (x,y, point[2])
        dispPath.append(p)
    # now calculate boundary in display coordinate system with a border
    # of 7 pixels
    dispBoundary = self.calcPathBoundary(dispPath, 7)

    # get dimension of attached screen
    res = self._prgPad.getDimensions()
    # calculate scale factor
    xscale = (res[0])/(dispBoundary[2]-dispBoundary[0])    # xres/(xmax-xmin)
    yscale = (res[1])/(dispBoundary[3]-dispBoundary[1])    # yres/(ymax-ymin)

    # if scale bigger 7 pixels reduce - too ugly if too big
    if xscale > 7:
        xscale = 7
    if yscale > 7:
        yscale = 7
    ## calculate new origin to fit all points nicely in screen
    ystart = abs(yscale*dispBoundary[1])
    xstart = abs(xscale*dispBoundary[0])

    # now create path with display coordinates.
    resultPath = []
    for point in dispPath:
        x = xstart + xscale*point[0]
        y = ystart + yscale*point[1]
        p = (x,y,point[2])
        resultPath.append(p)
    dispPath = resultPath
    return dispPath

This path can then be drawn directly on screen. Or you use the points in the path and draw markers around them to show the Pi-Hicle’s path. I chose to draw a red circle as the starting point and green triangles as movement indicators (showing the heading of the vehicle).
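The triangle markers can be computed directly from the stored heading: take a small template triangle and rotate its corners around the path point. A minimal sketch (the function name, corner angles and size are mine; the actual drawing code may orient things differently):

```python
import math

def triangle_marker(cx, cy, heading_deg, size=5.0):
    """Return the three (x, y) corners of a small triangle centred on
    (cx, cy) that points in the direction heading_deg
    (0 = along the positive x-axis)."""
    corners = []
    # the tip, plus two base corners 140 degrees to either side of it
    for offset in (0.0, 140.0, -140.0):
        a = math.radians(heading_deg + offset)
        corners.append((cx + size * math.cos(a),
                        cy + size * math.sin(a)))
    return corners

marker = triangle_marker(100.0, 50.0, 0)
```

The three corners can then be handed to whatever polygon-drawing routine the screen offers.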

What next?

Well, transferring the programmed path to a real vehicle is next. I need to do some prototyping and testing as at the moment I am using an old “Lego Mindstorms 1.5” as a test vehicle and that needs some work.

UPDATE 2014-01-10: Part three is online and it’s all about the internals of the Big Trak vehicle.