## Saturday, 30 November 2013

### Braitenbot #1: Getting Around

In 1984 Valentino Braitenberg, cyberneticist, neuro-anatomist and musician, published a landmark series of thought experiments. His book, Vehicles: Experiments in Synthetic Psychology, explores the principles of intelligence by defining a series of successively more complicated creatures.

Braitenberg Vehicle #1 has just one sensor and one motor. By 'motor', Braitenberg means anything that can provide a motive force, not just electric motors. The sensor situated at the front can be any kind of sensor but for argument's sake is a light sensor that outputs a continuously variable signal in proportion to the amount of light falling upon it. This signal is conveyed from the sensor to the motor by the wire or nerve fibre indicated, causing the motor to vary continuously in its output. The brighter the light, the faster it drives the motor.

This vehicle might seem ridiculously simple, but nature employs something very close to this design in the humble E. coli bacterium. Not just a source of food poisoning, but one of nature's own vehicles. From http://www.cellsalive.com/animabug.htm:

A motile E.coli propels itself from place to place by rotating its flagella. To move forward, the flagella rotate counterclockwise and the organism ‘swims’.  But when flagellar rotation abruptly changes to clockwise, the bacterium "tumbles" in place and seems incapable of going anywhere. Then the bacterium begins swimming again in some new, random direction.

Swimming is more frequent as the bacterium approaches a chemo-attractant (food). Tumbling, hence direction change, is more frequent as the bacterium moves away from the chemoattractant. It is a complex combination of swimming and tumbling that keeps them in areas of higher food concentrations.

While thought experiments can be enlightening, there's nothing better than a real experiment. Braitenberg Vehicle #1 can be easily realized in robotic form (as many have done before, see here, here, here, here, here, here, and here).

### BraitenBrushBot

The simplest and most direct realization of Vehicle #1 is based on the humble BrushBot. The unidirectional brushbot, comprising a battery and motor but no sensors, might be considered Vehicle #0 on the Braitenberg scale. However, the sensorimotor loop is considered fundamental to life: according to the principles of computational neuroethology, the nervous system, body and environment combine to form a tightly coupled dynamical system.

The BrushBot brings out an important aspect of Vehicle #1 faced by simple motiles, and even exploited by the humble E.coli. "Once you let friction come into the picture, other amazing things might happen. As the vehicle pushes forward against frictional forces, it will deviate from its course. In the long run it will be seen to move in a complicated trajectory, curving one way or the other without apparent good reason. If it is very small, its motion will be quite erratic, similar to 'Brownian motion', only with a certain drive added." Exactly this kind of complex trajectory can be seen in the YouTube video of the BraitenBrushBot in action here.

The BrushBot design is augmented with a single light sensor. The sensor is constructed from a photoresistor (GL5528) and an NPN transistor (BC548B) to amplify the signal. When illuminated, the resistance of the photoresistor drops, turning the transistor on and allowing current to flow through the motor.

### DFRobot

For the second implementation of Vehicle #1, I'm using the DFRobot 2WD Mobile Platform (see wiki and assembly guide). This is a great little platform that includes motors and wheels, with lots of fixing holes for additional sensors. As a control board, I'm using the DFRobot Arduino compatible Romeo V2-All in one Controller (R3) (see wiki). This is like a regular Arduino Leonardo but comes with two built-in 2-way (forward and reverse) DC motor drivers. These Braitenbots formed the basis of the DigiMakers 'Robots Are GO!' workshop held at At-Bristol on November 16th 2013.

The robot shown has two light sensors and two motors, which will come in handy for later Vehicles, but are a bit redundant in Vehicle #1. For now, please just pretend we have one of each. The left and right motors are wired into the control board as Motor 1 (M1) and Motor 2 (M2) respectively.

#### Sensors

The robot has two light sensors mounted to the left and right (it's also fitted with an Infra-Red distance sensor which we'll return to in future). The light sensors shown are of home-brew construction using cheap components. Each light sensor contains a photo-resistor and a resistor forming a voltage divider mounted in a terminal block. The output signals are voltages that can be read by the Arduino's analogue inputs.
The left light and right light sensors are plugged into the Arduino analogue inputs 4 and 5, respectively.
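As a rough sketch of how each divider behaves (assuming the photoresistor forms the upper leg, with the fixed resistor between the output and ground - the actual wiring may differ):

$$V_{out} = V_{cc}\,\frac{R_{fixed}}{R_{LDR} + R_{fixed}}$$

Brighter light lowers the photoresistor's resistance $R_{LDR}$, so $V_{out}$ rises towards $V_{cc}$; wiring the two legs the other way round simply inverts the relationship.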

#### Installation

The DFRobot controller is Arduino based so you need to install the Arduino IDE from here (If needed, I've put more instructions here). To simplify the code, I've created a library that can be downloaded from here. The library defines a new class of Vehicle through which we can access the buttons, sensors, and motors straightforwardly. To create this library I followed the advice at 'Writing a Library for Arduino'.

The library should be unzipped into your Arduino libraries folder:

• Windows: C:\Program Files (x86)\Arduino\libraries
• Mac: Documents/Arduino/Libraries

Check that the library contains the relevant C++ (vehicle.cpp) and header (vehicle.h) files. To enter the code below, create a new sketch and then:

Sketch > Import Library > Vehicle

This should add the #include <Vehicle.h> step automatically. If it doesn't then review the steps above.

If you're using Windows 7 you may need to upgrade the driver software:

`Open your control panel > Device Manager > Other Devices > Arduino Leonardo > Update Driver Software... > Browse My Computer for Driver Software (Search in arduino-1.0.5\drivers)`

#### The Code for Vehicle #1

The library is used to create an object, v, representing a Vehicle.

The Arduino code is organized into a setup() function which is called just once at the start, followed by a loop() function which is called repeatedly. The setup is used to configure the Vehicle, in this case to use the Romeo Revision 3 input/output pin assignments which differ from earlier releases of the Romeo controller.

To prevent the robot from running off the table the moment you upload this sketch, it can't move until you press any one of the buttons, which toggles the state of 'go' (the exclamation mark in the code means 'not'). To pretend we've got just one sensor, we take the average of the two.

The values returned by leftSensor() and rightSensor() lie in the range 0 to 1. To simulate the single light sensor of Vehicle #1, the two sensor values are averaged. When this value is assigned to the motors, a value of 0 means 'stop' and a value of 1 means 'full speed ahead'. To simulate the single motor of Vehicle #1, the same value is assigned to the left and right arguments of motors(left,right). Note that the nerve fibre of the Braitenberg design is represented by the variable m, which conveys the signal from the sensor(s) to the motor(s).

At the end we insert a short delay of 10 milliseconds (1000ms = 1 second). This is really only there to slow things down in case we insert debugging print statements.

```
#include <Vehicle.h>

Vehicle v;
boolean go = false;

void setup() {
  v.r3(); // Romeo V2-All in one Controller (R3)
}

void loop() {
  // any button toggles between stopped and going
  if (v.buttonPressed()) go = !go;

  float left = v.leftSensor();
  float right = v.rightSensor();

  // average the two sensors to simulate a single sensor
  float m = (left + right) / 2;

  // drive both motors with the same value to simulate a single motor
  if (go) v.motors(m, m);
  else v.motors(0, 0);

  delay(10);
}
```

#### Setup

1. Start the Arduino IDE
2. Input and save the above program
3. Select: Tools > Board > Arduino Leonardo
4. Verify the program (The tick in the Arduino IDE)
5. Connect the robot with the USB cable
6. Select: Tools > Serial Port > COM*  (PC) or /dev/tty.usbmodem*** (Mac)
7. Upload the program to the robot (The button to the right of the tick)
8. Disconnect the robot
9. Switch the robot on and place it on the floor
10. Press a button to start the robot

#### Experiments

• Press any button to start/stop the robot.
• What happens if you cover its eyes?
• Shine a torch into its eyes. Does it speed up or slow down?

#### Parts List:

These parts aren't exactly what you can see in the photo, but are the nearest equivalent available.

Alternative to the home-brew light sensors:

 Analog Grayscale Sensor V2 £2.75

"Imagine, now, what you would think if you saw such a vehicle swimming around in a pond. It is restless, you would say, and does not like warm water. But it is quite stupid, since it is not able to turn back to the nice cold spot it overshot in its restlessness. Anyway, you would say, it is ALIVE, since you have never seen a particle of dead matter move around quite like that."
Valentino Braitenberg.

## Sunday, 27 October 2013

### Pi Eye

Vision is probably the most important sense we have, so computer vision is key to creating robots that can successfully navigate the world. Robots don't yet understand the visual world around them, but we can devise special purpose vision systems with limited aims. The aim of the system described here is to program a Raspberry Pi to detect and track a coloured blob, actually a blue lego Mindstorms ball. It should be able to distinguish this from another object of a different colour, say, a red lego Mindstorms ball.

This simple computer vision application uses a cheapo USB camera, rather than the official Pi cam, mainly for compatibility with available computer vision libraries. I demonstrated this setup at Digi-Makers, Bristol on 28th September 2013 as an example to encourage people to think about low-level programming (Introduction to ARM Assembler). The software seen here is written in the C programming language and benefits from the efficiency of this language to run the application in real time. Any slower, and a moving ball would quickly move out of the robot's field of vision and that critical goal would be conceded.

The camera returns an image represented as an array of red, green, and blue (RGB) pixels. Cameras and displays are built this way because the human eye has detectors - rods and cones - that respond more or less to the frequencies of these primary colours. So far, so biologically inspired. But once the colour information reaches the brain, these colour channels are fused into a single value representing the hue, which is closer to our perception of colour. The areas of the brain that process hue (V4) are called globs. Cells within a glob respond to the same hue regardless of intensity, and cells in different globs correspond to different hues; together they form a chromotopic map. So it is that globs detect blobs.

The main steps of this process are as follows:
1. Map the RGB colour space to HSI (Hue, Saturation, Intensity).
2. Glob the image by the target colour (blue), binarizing the image.
3. Blob detection using Run Length Encoding (RLE).

The library I'm using has its roots in OpenCV, an open-source library of computer vision algorithms. OpenCV is at the cutting edge of research, so a simpler, friendlier wrapper called SimpleCV has been created. Now, I'll be honest, I've not had a lot of luck yet with either OpenCV or SimpleCV on the Pi. Luckily a cut-down version has been made available by Cambridge University. The download instructions for the Pi are repeated below.

```
wget http://www.cl.cam.ac.uk/projects/raspberrypi/tutorials/robot/resources/imgproc.zip
unzip imgproc.zip
cd library
sudo make install
cd ..
```

If you've bought the cam and installed the imgproc library, you can download, compile, and run vision.c with the following. The -O flag sets the compiler optimization level (2 is typical), -o names the output file (vision), and -l pulls in the required library code (imgproc).

```
wget http://battle-bot.googlecode.com/files/vision.c
gcc -O2 -o vision vision.c -limgproc
./vision
```

Select the terminal window and use CTRL-C or CTRL-\ to stop the program. To view and edit the vision.c program I recommend using geany as (unlike nano) it's mouse-aware, and it adds line numbers so you can figure out where the errors are in your code.

```
sudo apt-get install geany
geany
```

### Map

The first step is to map the RGB colour space onto the HSI (Hue, Saturation, Intensity) colour space. They contain the same information, but HSI is easier to work with for computer vision. The vision software accepts a single parameter that, viewed as a binary number, enables or disables various functions. If the input is 0 then all functions are disabled and we can view the RGB input from the camera prior to processing.
```
./vision 0
```

Hue fuses the separate red, green, and blue channels into a single number that represents the colour information. Hue represents the angular position on a colour circle so that one full turn returns you to the same colour. If you laid an equilateral triangle on the circle with its vertices touching the perimeter, then those three points represent pure red, green and blue. If red is at 0° then green is at 120° and blue is at 240°. Every pixel in the image is stored as 3 bytes, so the hue has to fit into a single byte and must be an integer in the range 0-255. If hue represents an angle how do we store a number in the range 0-359° (radians in the range 0-τ, that's tau or 2*π, are no good because they're not integer)? The answer is to use binary degrees (also known as binary radians or brads) which have 256° in a circle. A semi-circle is 128° and a right-angle is 64°.

The following C code calculates the hue given the r, g, b colour values for a given pixel. This method approximates the colour circle with a Preucil Hexagon. Starting at 0° on the far right and going anticlockwise, the vertices of the hexagon represent the colours red, yellow, green, cyan, blue, and magenta. Between each vertex, the hue is estimated along a linear gradient (just like a gradient fill), which is more than accurate enough for our purposes. Each conditional below represents a separate hexagonal face.
```
// calculate hue as an angle in ordinary degrees
// (the float casts guard against integer division if r, g, b are integers)
float h = 0;

// grey (r == g == b): hue is undefined, leave it at 0
if (r == g && g == b) h = 0;

// red-yellow
else if (r >= g && g >= b) h = 60 * ((g - b) / (float)(r - b));

// yellow-green
else if (g > r && r >= b) h = 60 * (2 - (r - b) / (float)(g - b));

// green-cyan
else if (g >= b && b > r) h = 60 * (2 + (b - r) / (float)(g - r));

// cyan-blue
else if (b > g && g > r) h = 60 * (4 - (g - r) / (float)(b - r));

// blue-magenta
else if (b > r && r >= g) h = 60 * (4 + (r - g) / (float)(b - g));

// magenta-red
else if (r >= b && b > g) h = 60 * (6 - (b - g) / (float)(r - g));

// convert to binary degrees (256 per full circle)
char hue = h / 360.0f * 256;
```

The saturation represents the colourfulness of a pixel relative to its brightness, while the brightness, or intensity, is the average of the three RGB channels and represents a simple grey-scaling of the image (a single number between 0 and 255). This application doesn't use saturation or intensity directly, so these aren't discussed further here. The HSI conversion replaces the RGB values with HSI values. Viewing the result looks like a false colour image. Note that the HSI map is enabled by bit 2 (a parameter value of 4).
```
./vision 4
```

### Glob

We want to attune our robot vision to the primary colour blue. Being sensitive to a narrow range of hues, a glob is a colour filter. To be more realistic, we'd need globs attuned to the full range of hues, but that's not necessary for our simple setup. Our glob is sensitive to a range of hues 60° (or 42 binary degrees) either side of pure blue. If it matches, then the intensity is set to 255; otherwise it is set to 0. The effect is to binarize the image.
```
// calculate hue distance from the target hue
int dist = abs(hue - targetHue);

// distance exceeds range?
char binarized = dist > range ? 0 : 255;
```

Now if we compute the HSI map (4) and apply a blue glob filter (R=1,G=2,B=3) we should see a black & white image of the blue ball (4+3).
``./vision 7``
Look out for reflections of the same hue. In the image the ball is sitting on a white surface that reflects some of the blue, distorting the  boundary of the base.

### Blob

The next stage is a simple segmentation of the image, partitioning it into segments that represent the objects of interest. The binarization helped because the pixels that 'belong' to the blue ball should now be in the foreground. Run Length Encoding is a technique developed for image compression. With RLE, most famously used in JPEGs, runs of data of the same value are compressed into a simple count. Instead of looking for a complex two-dimensional blob, this algorithm simply looks for the longest strip of 'on' pixels (sketched in code at the end of this section). This works particularly well for balls which, being widest in the middle, allow the RLE algorithm to seek out the run across the diameter. This algorithm has the nice quality that it can be performed in a single pass, making it particularly suitable for implementation in hardware.
The longest run-length is visualized as a cross-hair; the horizontal line is aligned with the longest run-length, and the vertical line cuts through its centre. Invoke the vision code with the parameter 3 (blue) + 4 (HSI) + 8 (RLE) = 15.
``./vision 15``
For best results, the lighting should be as diffuse and even as possible, and the background should be plain and provide good contrast. Like most robotic challenges, it is often simpler to engineer the environment than to over-complicate the software. Shadows, highlights and reflections are the scourge of computer vision systems everywhere.
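Although the real implementation lives in vision.c and is written in C, the longest-run search itself can be sketched in a few lines of Python (purely illustrative; this is not the code from vision.c):

```
def longest_run(rows):
    """Return (row, start, length) of the longest horizontal run of 'on' pixels."""
    best = (0, 0, 0)
    for y, row in enumerate(rows):
        x = 0
        while x < len(row):
            if row[x]:                           # start of a run of foreground pixels
                start = x
                while x < len(row) and row[x]:
                    x += 1
                if x - start > best[2]:
                    best = (y, start, x - start)
            else:
                x += 1
    return best

# a tiny binarized 'image': the widest run crosses the ball on row 2
image = [[0, 0, 0, 0, 0, 0, 0, 0],
         [0, 0, 1, 1, 1, 0, 0, 0],
         [0, 1, 1, 1, 1, 1, 0, 0],
         [0, 0, 1, 1, 1, 0, 0, 0]]
print(longest_run(image))                        # -> (2, 1, 5)
```

The centre of that run (and hence of the ball) is then just start + length/2, which is where the vertical line of the cross-hair is drawn.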

Bradski, Kaehler, Learning OpenCV Computer Vision with the OpenCV Library, O'Reilly, 2008.
Demaagd, Oliver, Oostendorp, Scott, Practical Computer Vision with SimpleCV, O'Reilly, 2012.
Kernighan, Ritchie, The C Programming Language, 1978.
Trein, Schwarzbacher, Hoppe, FPGA Implementation of a Single Pass Real-Time Blob Analysis Using Run Length Encoding, MPC-Workshop 2008

## Sunday, 29 September 2013

### The Birth of Blitz

In all honesty I don't know when I drew this. Judging by the technology in evidence, I'd guess this was sometime in 1979, possibly 1980, at 4.30am - coding time. That's my Nascom 1 - the first British microcomputer - with a couple of Gemini expansion boards behind it (a buffer board and ROM expansion card), which were normally housed in the large box under the desk. The Nascom had an RF modulator which allowed it to generate a display on the 12" TV set, or through RS232 the ASR-33 teletype could be used for input/output or just as a printer. The Science of Cambridge MK14 is hanging precariously from the desk by a ribbon cable connecting it to the Burroughs terminal. I recognise the circuit diagram and ring-bound manual on the floor as the Nascom's and the smaller A5 manual as the MK14's. Note the pictures of the Commodore Pet on the wall, and the sinuous printout reads 'SACC', the Sunbury Amateur Computer Club. The guy at the keyboard is my best friend Simon Taylor who, as a budding entrepreneur, managed to sell Blitz and Super-Blitz to Commodore for the Vic-20 and Commodore-64 from this very bedroom. Simon did the coding and I produced the primitive graphics on a bit of old graph paper. Looking back over my career, this 33-year-old program has quite possibly been the most successful piece of software I've ever written, with ports to Android and the FIGnition providing it with an independent, autonomous existence of its own. Not just software but a meme.

## Sunday, 25 August 2013

### Ten Billion Times as Stupid

The human brain is said to contain 86 billion neurons, give or take 8 billion or so. This rather puts the 10 artificial neurons controlling my 1982 MicroGrasp robot into perspective. With ten orders of magnitude separating man from robot, this makes my MicroGrasp ten billion times as stupid as me.

Anyone who has seen Penfield's Cortical Homunculus will know that the human cortex is largely concerned with creating detailed maps of the world around it and the body containing it (see @Bristol's homunculus below). These are topographic maps where each neuron (or neuronal cluster) comes to represent a point on a map, and neurons that are physically close to each other map to points that are nearby each other.

The basic MicroGrasp is extremely limited in terms of sensors. All it has are the angle sensors on each axis that provide feedback to the servo-controller. This is analogous to our own sense of proprioception which you can think of as our sixth sense - after the senses of sight, hearing, taste, touch, smell (but before equilibrioception, thermoception, nociception, chronoception). The Natural History Museum in London has a grotesque sensory homunculus that shows the relative proportions of the cortex associated with different sensory parts of the body. The key thing about these somatosensory systems is that they are self-organizing, developing only in response and proportion to the information available, rather than according to any prior plan. The 10 artificial neurons of the MicroGrasp behave similarly, organizing themselves into a robo-homunculus.

If all the wrinkly bits could be ironed flat, the human cortex would be equivalent to a sheet about half a metre square and about the thickness of orange peel. How does this predominantly two-dimensional structure come to represent the complex three-dimensional world that it inhabits? It's possible to demonstrate how this trick is pulled off using Teuvo Kohonen's Self-Organizing (or Kohonen) Maps. I'm using software called NeuroLab to create an artificial neural network containing 50 neurons. This uses numpy to do the hard sums and matplotlib to generate the pretty pictures.
 One dimensional toroidal map (after 500 epochs)

The one dimensional toroidal map demonstrates how a one-dimensional artificial neural network (red) maps onto a two-dimensional figure in the unit plane. The trick is that this low dimensional neural network tries to form itself into a space filling curve.

The 500 green data points are uniformly distributed within a barycentric triangle (a handy device invented by August Möbius - of Möbius strip fame). During each epoch the network is trained on the full sample set of 500 points. The neurons are connected in a chain so that each neuron connects with two neighbours forming a closed loop (a 1-torus). Real neurons in a topographic map extend similar lateral connections towards their peers. These serve to excite nearby neighbours enabling them to act as an ensemble, while connections reaching further out are designed to fend off the competition by suppressing electrical activity. The diagram also shows the error rate, defined as the average distance from each datum to the nearest neuron, which falls over time, showing that the quality of the mapping is improving.

Life for the self-organized neuron is much like life as we know it. They have to compete for resources, and when new data arrives only the closest neuron can win. However, these neurons are an affable bunch so they'll share their prize with their nearest and dearest. They do this through cooperation; the winning neuron is able to pull its nearest neighbours towards the rich new source of information. We were able to demonstrate this tangibly at the Young Rewired State Festival of Code held at Knowle West Media Centre on August 7th 2013, where the MicroGrasp and its artificial brain were demonstrated. The roles of 8 neurons were performed by a number of willing volunteers, and the lateral neural connections were represented by a ball of wool stretched out between them. Given the same linear network and following the same competitive/cooperative rules as the robot we were able to train this woolly brain to map out the shape of a half-circle (or Pi) marked out (over time) by post-it data-points stuck to the floor.

This learning behaviour can be described more precisely as follows. This equation governs the way the neurons are updated over time and comes straight from Kohonen's excellent book, Self-Organization and Associative Memory.

$$m_i(t_{k+1}) = \begin{cases} m_i(t_k) + \alpha(t_k)\left[x(t_k) - m_i(t_k)\right] & \text{for } i \in N_c \\ m_i(t_k) & \text{otherwise} \end{cases}$$

The vector $m$ is the memory of the system, holding the coordinates of the neuron in the space to be mapped. As this vector captures the higher dimensionality of the space to be mapped, it represents the extraordinary dendritic fan-out of a neuron - a branching root system that feeds off local information. Each $m_i$ is a separate neuron, and the index $c$ identifies the winner of the competition for datum $x$, with $N_c$ outlining the privileged neighbourhood of neurons surrounding it who share in the spoils of victory. The rate of learning can be controlled in two ways: firstly, a weighting function $\alpha$ decreases gradually so that over time the mapping of a neuron becomes more fixed. This weighting is applied to the difference between the datum and the neuron's existing mapping, governing how much it will change as a result. Secondly, the size of the neighbourhood decreases over time, shifting the focus from coarse to fine detail.
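The update rule translates almost directly into numpy. The sketch below is purely illustrative - the function and variable names are mine, not NeuroLab's - but it follows the equation: only the winner and its neighbours move towards the datum.

```
import numpy as np

def kohonen_step(m, x, neighbourhood, alpha):
    """One update of the Kohonen rule above.

    m             -- (n_neurons, n_dims) array of neuron positions (the 'memory')
    x             -- (n_dims,) the new datum
    neighbourhood -- the indices N_c around (and including) the winning neuron
    alpha         -- the learning rate, decreased gradually over time
    """
    for i in neighbourhood:
        m[i] += alpha * (x - m[i])   # m_i(t+1) = m_i(t) + alpha(t) [x(t) - m_i(t)]
    return m                         # every other neuron is left unchanged

# usage: the winner c is the nearest neuron; its chain neighbours share the update
m = np.random.rand(10, 3)            # ten neurons mapping a 3D space
x = np.random.rand(3)                # a random training datum
c = int(np.argmin(np.linalg.norm(m - x, axis=1)))
m = kohonen_step(m, x, [(c - 1) % 10, c, (c + 1) % 10], alpha=0.1)
```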

### MicroGrasp Simulator

Back to the robot and its 10 neurons. The code is in Python and runs on the Raspberry Pi using RPi.GPIO to control the MicroGrasp. For development it also runs in simulation on a Mac. The MicroGrasp simulator implements the same GPIO interface so that the code can be ported across to the Pi with minimal changes. The output of the simulator, below, plots the position of the arm as a stick-robot (green) using matplotlib.
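One minimal way to pull off this trick - a sketch of the idea rather than the project's actual simulator code - is to fall back to a stand-in object whenever the real RPi.GPIO module can't be imported:

```
# On the Pi this uses the real RPi.GPIO; on the Mac an illustrative stand-in
# with the same interface simply logs what would have happened.
try:
    import RPi.GPIO as GPIO
except ImportError:
    class _FakeGPIO(object):
        BCM, OUT = 'BCM', 'OUT'
        def setmode(self, mode): pass
        def setup(self, pin, mode): pass
        def output(self, pin, value):
            print("pin %s -> %s" % (pin, value))
        def cleanup(self): pass
    GPIO = _FakeGPIO()

# the robot code then makes identical calls in both environments
GPIO.setmode(GPIO.BCM)
GPIO.setup(18, GPIO.OUT)
GPIO.output(18, True)
```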

 MicroGrasp simulation at epoch 0
The robot behaviour simply involves reading the position encoded by each neuron in a continuous loop, moving the arm to each position in turn. Learning is incremental and interleaved with arm movement so that the time spent waiting for the arm to physically move is gainfully occupied by learning from a sample of 100 random arm configurations that are physically possible. Looking at the MicroGrasp simulation at epoch 0, before any learning has occurred, we can see the initial randomized mapping. The simulated robot is shown in plan view and side elevation, showing the 10 robot configurations represented by the neural net. The red line indicates the path traced out in space by the robot's end-effector (gripper), and we can see that the robot starts out moving quite haphazardly as it lurches from one random position to another.

 MicroGrasp simulation after 10K epochs
Now take a look at the MicroGrasp simulation after 10,000 epochs where the picture is very different. The network has organized itself so that connected neurons map to nearby configurations in physical (3D) space. Furthermore, the neurons have spread themselves across the entirety of the mapped space as they fight to maximize the area that they control.

The robot now moves with a more fluid motion and this is beautifully illustrated by the almost flower-like path traced out by the robot gripper. Because of the greater number of degrees of freedom, the learning curve is dominated by the step change as the neighbourhood radius drops decrementally. There are effectively two learning phases, one where the neurons disentangle themselves and spread out across the space to be represented, followed by a fine tuning phase where they optimise their chosen position.

### What of the Future?

An artificial neural network running a mere ten neurons yields interesting results in near real-time on a Raspberry Pi. Coincidentally, the same (British) technology that powers the Pi, the ARM processor, can be found in the SpiNNaker neural supercomputer currently under construction at Manchester University. The full machine, with over 1 million ARM cores, is expected to be ready by the end of 2013 and will be capable of simulating one billion neurons, or about 1% of the human brain's 86 billion neurons. A robot using that would be only 100 times as stupid as us - about as smart as a cat. Perhaps these robots aren't so dumb after all?

### Software

This project uses the PyDev development environment for Eclipse on the Mac (before porting over to the Pi) and has quite a few dependencies including numpy, scipy, matplotlib, pygame, and neurolab.

• Install Python 2.7 64-bit on the Mac. You may need to add the Python binary folder to the path in your ~/.bash_profile: PATH=/System/Library/Frameworks/Python.framework/Versions/2.7/bin:\$PATH
• Install Xcode, see http://guide.macports.org/chunked/installing.xcode.html. Installation is only the first step: run the Xcode app (in Applications) to complete the installation.
• Then: Xcode > Open Developer Tool > More Developer Tools... Select and download the latest "Auxiliary Tools for Xcode" and the latest "Command Line Tools".
• Install the latest standard version of Eclipse then Help > Install New Software > Work with: pydev > select PyDev and install
• Install numpy, scipy, matplotlib. These all need to be compiled together, use the instructions found at http://fonnesbeck.github.com/ScipySuperpack/
• Under Eclipse > Preferences > PyDev > Interpreter - Python, each of the listed interpreters has its own defined PYTHONPATH, from which we can remove any existing extras and to which we add the new eggs found in ~/Library/Python/2.7/site-packages.
• Install pygame: http://www.pygame.org/install.html
• Install MacPorts (http://www.macports.org/install.php), run the package installer, then sudo port selfupdate
• Run sudo port notes python27 and follow the instructions; e.g. use sudo port search pyobjc to find relevant ports, then prepend the Python version, such as py27-
• sudo port install py-opengl (depends on the xcode command line tools)
• sudo port install py27-pyobjc
• sudo port install py27-numeric
• sudo port install py27-game
The code for this project may be downloaded here as a zipped Eclipse workspace.

## Saturday, 27 July 2013

### The Four Laws of Brushbotics*:

1. A brushbot may use only real brush heads (e.g. a toothbrush).
2. A brushbot may use only a single brush head.
3. A brushbot should be easily reconfigurable.
4. A brushbot must not run software (i.e. BEAM robotics).

Seen as educational tools, these laws encourage the use of familiarity, playability, and tangibility to enhance learning.

In accordance with these laws, my second brushbot design sports improved reconfigurability, using lego to mount and connect the various components. Of course it also uses the directional brush design. This allows the roboteer to explore the placement of the battery and motor, and different orientations of the motor.

*With apologies to Isaac Asimov and his Three Laws of Robotics

### Suggested experiments:

1. Try the motor at the front and at the back. Which is faster?
2. Try the motor facing forwards and backwards, instead of to the side. Can you make it turn?

### Construction

The brushbot comprises three basic parts that need to be constructed by an adult, or under adult supervision: The brush, the motor, and the battery with interconnecting wires.

### The brush

The brush mounting is a single value toothbrush head cut to length and reformed under hot water to provide directional travel. The length of the part is the length of a lego plate 1x6.

1. Cut the toothbrush to the length of the lego plate 1x6.
2. Clamp the toothbrush head so that the bristles are facing backwards, immerse in boiling hot water and allow to cool.
3. Lightly sand the top of the brush-head then use epoxy glue to stick the lego plate 1x6 to the top of the toothbrush (I've found that super-glue doesn't bond as well to the lego ABS plastic).

### The motor

I discovered that the iPhone 3 vibration motor fits neatly inside the hole in many lego technic bricks. The brick neatly holds the motor terminals in position, enabling header pins to be glued to the side of the brick and soldered to the terminals.

1. Remove extraneous snap on/rubber casing from the motor (if present). This should fit neatly into the lego hole.
2. Place a small spot of epoxy glue on the side of the motor, insert into the hole in the lego brick orienting the terminals so that they face sideways.
3. Snip off a pair of header pins and glue to the back of the lego brick such that the pins are in contact with the motor terminals.
4. Solder the motor terminals to the header pins.

### The battery and connectors

To provide a bit more oomph I decided to use two CR2032 3v batteries in parallel. The battery holders I obtained are able to be glued back to back. I've found that female crimp pins make good connections with header pins. I've avoided using male crimp pins here as I find them very brittle and after a few bends they fracture and break. The terminals are quite close together and have a tendency to short, the heat-shrink provides useful additional insulation.
1. Glue two CR2032 battery holders back to back with their terminals touching.
2. Solder a single header pin to each battery terminal for the connector. This also forms a common electrical connection between opposing battery terminals.
3. Glue a single lego plate 1x2 on the bottom of the battery holder. Ensure this is mounted centrally so that it doesn't obstruct insertion of the batteries.
4. Make up the red (positive) and blue (negative) connectors. Cut two wires to about 12cm long.
5. Attach four female crimp pins to the ends of the wire.
6. Slide a short length of heat-shrink over the crimp pins and use a hair-dryer to shrink it on.

### Component list

| Part | Price |
| --- | --- |
| Value toothbrush | 9p each |
| lego plate 1x6 | 99p |
| Lego Technic Brick 1x2 with hole | 99p |
| iPhone 3 vibration motor 3v | 99p |
| 36 way single row header, right angle, 0.1 inch pitch | 25p (less than a penny a pair) |
| 36 way single row header, 0.1 inch pitch | 20p |
| CR2032 battery holder | 59p each in packs of 10 |
| CR2032 3v battery | 12.5p each in packs of 10 |
| lego plate 1x2 (part no. 3023) | 99p for 10 |
| Female crimp pins for 0.1" housings | 0.24p for four in packs of 100 |
| Stranded hook-up wire | 24p per metre |
| Heat shrink 3mm | 76p per metre |
| **Approximate cost per brushbot** | **£4.60** |

### Tools and materials required

1. Soldering iron and solder
2. Epoxy resin glue
3. Crimping tool
4. Hair dryer

## Tuesday, 16 July 2013

### Further Outlook

Further Outlook by W. Grey Walter

I wanted to read this book out of interest in its author, William Grey Walter: wartime developer of the classic RADAR scanning display; electroencephalography (EEG) pioneer and discoverer of the contingent negative variation (CNV) or 'readiness potential' (which many claim calls free will into question); but perhaps best known as the inventor of the first autonomous robots, built in Bristol in 1948. 'Further Outlook' centres on the enigmatic Paula, "Finite in substance but infinite in scope", and the research team she accretes around her to develop fusion-powered aircraft. However, as is the case with so many scientists turned author, the characters are thinly sketched vehicles for the pet theories of the author, as was picked up on by 'The Eugenics Review':

"A novel of this kind necessarily has to be judged twice over, first for its literary merit, and secondly for the author's scientific knowledge and foresight… Those who have read The 'Living Brain'… will know that his mind works rapidly and sometimes in several directions at once. His exuberant, concentrated writing demands that the reader should either apprehend rapidly or stop, re-read and think over each paragraph. This makes for heavy going."
P. R. C., The Eugenics Review, April 1957.

As the three men, Simon, Wing, and Punch trace their phototropic orbits around the bright flame of Paula, their relationships are sadly shallow and unconvincing, "Men and women can never be really happy together until we have worked out the algebra of love." Yet even here, there are fascinating echoes of the new cybernetics and the "endless beauty to be found in the intricate algebras and geometries of personal relations." Walter's grand hypothesis is that freedom and autonomy arise out of the innumerable combinations of simpler, mechanistic components, "So why shouldn't we learn to enjoy one another more by finding out how to sketch in the lines of force that unite and divide us and the factors in our brains and glands that make one person different from everybody else for someone?"

'The Curve of the Snowflake' of the American title turns up repeatedly. This fractal (although the term wasn't introduced until 1975, by Benoit Mandelbrot) is a 3-dimensional version of the Koch Snowflake: "You can ... go on arithmetically adding to the sides without end. But the area which it contains remains finite." The foursome define the vertices of a love-tetrahedron, "What about the snowflake image - wasn't that of some importance too? … an extension of a triangle - Simon and Punch and I being a plane triangle of - infinite extension." Or is the scientific enterprise itself the snowflake? "Science, moreover, has room for all who can find their way to it - the rim of it is a snowflake curve."

Wing is a transparent proxy for Walter himself, "My own consulting room was well provided also with essential equipment, my transistorised electrocardiograph and electroencephalograph." "Wing's interminable recordings from the brain and his childish little models seemed quite harmless" and the apparent history of the future pays homage to "the humble creeping automata of Bristol." Wing goes on "to define Mentality as the rate of change of behaviour, and this puts Mentality in the class of rational abstractions such as Velocity and Gravity." Utterly thought-provoking.

He imagines that over the course of a century, language is changed and streamlined by the 'Calculus of Semantic Probability', enabling humans to "get along without word-magic." To our ears this language would "seem terribly harsh, inhuman, mechanical - almost like the love letters the old electronic computors used to churn out to amuse distinguished visitors." This is surely a reference to Christopher Strachey's 'Loveletters' program of 1952 that ran on the Manchester University computer, the first stored-program electronic computer system.

I'll close with a quote from a review with a much more gung-ho attitude. Walter's passion for gliding was picked up on by a contemporary reviewer in 'Sailplane & Gliding':

'Those gliding folk who are susceptible to science fiction will be interested in "Further Outlook." Its real time period extends only into the next decade or so, but there is an enigmatical treatment of a period covering the next century. Windsock watchers are kept on familiar ground by the style of titling of each of the five parts: Part One is "A Depression Is Passing (1940-1958)", while Part Four is "Occluded Front (2056-1964)" (sic). … As the author is Dr. Grey Walter of the Bristol Gliding Club, it is most refreshing to find also the true soaring pilot's attitude faithfully revealed. Thus a more conventional portrayal of our sport has a harassed Simon Gloster going off on a sailflying week-end, to seek relaxation among the Scottish hills. Even so, he there encounters the most remarkable travelling device that has yet been thought up, to this writer's knowledge, in the realms of fact or fiction. Here is science fiction from the upper flights of fancy. The reader is recommended to check his de-icing equipment - and take oxygen aboard. It's worth it!'
Geoffrey Bell
Sailplane & Gliding, April 1957.

## Thursday, 20 June 2013

### The OWI-535 (Maplin) Robot Arm

Don't be put off by the box, "ROBOTIC ARM with USB PC INTERFACE"; the OWI will work with the Raspberry Pi. The OWI robot is great value for an entry level robot, and a fantastic introduction to robotics for kids under adult guidance. It has the advantage that, using only a USB connection, it avoids the complexity and hazards of connecting to the GPIO port. The OWI reminds me of an old Airfix kit, with all the parts still attached to their injection moulded sprue.

I may be a bit of a perfectionist, but to get the parts off the sprue I use side-cutters and then a small file to file down the remaining sprue nubbins. Some of the parts are very small, and I'd recommend assembling the gear-boxes on a clear flat surface so you can find the parts when you inevitably drop them. It's also very easy to over-tighten some of the long shafted screws in the gripper which have particularly short threads cutting into soft plastic - stop when the thread has gone out of sight - they're never going to reach a point when they tighten up.

The OWI has no built in sensors, so the software can't detect the position of the robot, nor when a given axis has reached the end of its travel. In its basic form, the robot supports only remote-controlled, tele-operation. The incorporation of sensor feedback and autonomous behaviour is left as an exercise for the reader. I took my OWI-535 along to the recent Raspberry Pi Bootcamp organised by the British Computer Society at the @Bristol hands-on science centre, on June 15th (2013).

A few initial steps on the Pi. Open a terminal window and input the following commands.

It's always good to do an update first:
\$ sudo apt-get update

If you don't have pip installed already:
\$ sudo apt-get install python-pip

We're going to control the robot from Python using the PyUSB library:
\$ sudo pip install pyusb

Once you've assembled the robot and plugged it into the USB socket on the Pi, you can use the 'list USB' command to see if it is visible to the Pi.
\$ lsusb

The robot should appear with a vendor ID of 1267 and a product ID of 0 (see the screenshot below). If it isn't there, try cycling the power to the robot (turning it off and on again). The first thing the code has to do is identify the robot via its vendor and product IDs.

```
import usb.core
import usb.util

# find the arm by its USB vendor and product IDs
dev = usb.core.find(idVendor=0x1267, idProduct=0)

if dev is None:
    raise ValueError('Robot arm not found')
```

The OWI-535 has 5 motors and a nifty little LED on the gripper. The protocol is a 3 byte code, with two bits per motor (unchanged = 00, open/down/clockwise = 01, close/up/counter-clockwise = 10). The first byte controls the gripper, wrist, elbow, and shoulder running from the least significant to the most significant bits. The least significant pair of bits in byte 2 control the base. Finally, the least significant bit of byte 3 controls the LED. For example, the following command would drive all motors open/down/clockwise and switch the LED on.

```
# all motors set to 01 (open/down/clockwise), base 01 (clockwise), LED on
cmd = [0b01010101, 0b01, 1]
dev.ctrl_transfer(0x40, 6, 0x100, 0, cmd, 1000)
```
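To build other commands, a small helper makes the bit-packing explicit. This is a hypothetical convenience function of my own - not part of PyUSB or the OWI kit - and it reuses the dev handle found above:

```
# Pack per-motor settings into the 3-byte command described above.
# For each motor: 0 = unchanged, 1 = open/down/clockwise, 2 = close/up/counter-clockwise.
def owi_command(gripper=0, wrist=0, elbow=0, shoulder=0, base=0, led=0):
    byte1 = gripper | (wrist << 2) | (elbow << 4) | (shoulder << 6)
    byte2 = base
    byte3 = led
    return [byte1, byte2, byte3]

# e.g. close the gripper and raise the shoulder with the LED lit
dev.ctrl_transfer(0x40, 6, 0x100, 0, owi_command(gripper=2, shoulder=2, led=1), 1000)
```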

As the robot is hard enough to control by itself, I don't want to make it any harder by providing a text-only interface. The de facto way to build Graphical User Interfaces (GUIs) in Python is to use the Tkinter library. This provides handy graphical widgets including buttons, sliders, and checkboxes which can be used to set these values.
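A minimal sketch of such a GUI might look like this (Python 2 style Tkinter import, to match the Python 2.7 used elsewhere in these posts; the base-only buttons and widget layout are purely illustrative):

```
import Tkinter as tk          # Python 2 style import
import usb.core

# find the arm as before (check for None as above in real code)
dev = usb.core.find(idVendor=0x1267, idProduct=0)

def send(cmd):
    dev.ctrl_transfer(0x40, 6, 0x100, 0, cmd, 1000)

root = tk.Tk()
root.title("OWI-535 control")

led = tk.IntVar()
tk.Checkbutton(root, text="LED", variable=led).pack()

# base motor: 01 = clockwise, 10 = counter-clockwise, 00 = unchanged
tk.Button(root, text="Base clockwise",
          command=lambda: send([0, 0b01, led.get()])).pack()
tk.Button(root, text="Base counter-clockwise",
          command=lambda: send([0, 0b10, led.get()])).pack()
tk.Button(root, text="Stop",
          command=lambda: send([0, 0, led.get()])).pack()

root.mainloop()
```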