Mastering Mobility: Modeling an Omni-Directional Robot in Simscape: Part 1

Introduction

Every robot navigates with different levels of control and agility. If a mobile robot can directly control all of its available degrees of freedom, its movement is described as holonomic [1]. When the number of controllable degrees of freedom is less than the available degrees of freedom, its movement is described as non-holonomic. A good example of this is a car. Even though a car has three available degrees of freedom – its position in the XY plane and its orientation – it can only control its movement through two inputs: acceleration and steering angle. It is easy to imagine the limitation this imposes on the paths a car can follow.

A car's pose can be described by its position (x, y) and orientation θ. However, it can only move by adjusting two control variables – acceleration a and steering angle Φ.
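
To make this concrete, here is the kinematic bicycle model often used for cars – a minimal sketch, where the wheelbase L and the model itself are standard assumptions, not taken from this post. With speed v (the integral of the acceleration control a), heading θ and steering angle Φ:

  dx/dt = v·cos(θ)
  dy/dt = v·sin(θ)
  dθ/dt = (v/L)·tan(Φ)

The first two equations encode the rolling constraint dx/dt·sin(θ) − dy/dt·cos(θ) = 0: the car can never translate sideways, which is exactly the non-holonomic limitation described above.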

A standard wheel has only one degree of freedom, i.e. it can only rotate about its center. This rotation causes the wheel to exert a force on the road along its tangent, and friction with the road exerts a reaction force in the opposite direction. The weight of the robot presses the wheel down onto the road. As long as the force exerted by the wheel is less than the maximum frictional force, the wheel rolls without slipping. If the force exerted by the wheel exceeds the available friction, the wheel loses traction and spins in place.

Forces acting on a rolling wheel.
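
Put compactly (using standard friction notation that does not appear in the original text): if N is the normal force due to the robot's weight and μ the coefficient of friction, the wheel rolls without slipping as long as

  F_wheel ≤ μ·N

and once F_wheel exceeds this limit, the contact patch slips and the wheel spins without gripping the road.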

We can add another degree of freedom to this standard wheel by placing a small wheel between the wheel and the road, oriented so that it rolls in a direction perpendicular to the rolling direction of the larger wheel. Now the wheel can move along the primary direction of motion when a rotational torque is applied, or along the secondary direction when a lateral force is applied along the rotational axis of the larger wheel. Since this smaller wheel is not actively driven, we can call it a passive wheel or roller.

Passive roller added to a standard wheel adds an additional degree of freedom.

While this does add an additional degree of freedom, the rolling motion of the larger wheel becomes uneven due to the cylindrical shape of the passive roller. It is also easy to see that more such passive rollers must be added along the circumference so that the extra degree of freedom is available at any given rotation of the larger wheel. Finally, we can change the shape of the passive rollers so that their profile exactly matches the circumference of the larger wheel, giving them a barrel shape.

Barrel-shaped passive rollers offset by 60°. The main wheel has been modified to provide pivot points where the passive rollers can be attached.

Adding the barrel-shaped rollers helps maintain the overall shape of the wheel, but there are still gaps between the rollers where the wheel cannot roll freely. One solution is to make the rollers as small as possible so that the gaps between them become negligible. Another clever solution is to simply sandwich two of these assemblies together, rotated by a fixed angle relative to each other. The rendering shows a wheel configuration with six passive rollers; the choice of a particular configuration largely depends on cost and application area.

3D Rendering of the final assembly of the wheel.

To finish off this section, let us answer the obvious question – how does this wheel make the robot omni-directional? The wheel we arrived at has two degrees of freedom. Mounting three such wheels on a robot chassis lets us control all three planar degrees of freedom of the robot: its position in the XY plane and its orientation. To understand how such an assembly would move, let us consider the following scenarios:

Different movement scenarios. Arrows with different colors indicate different rotation speeds.

Let us begin by discussing the easiest scenario, (e). In this case, we apply an equal rotation to all three wheels in the same direction, causing the entire robot assembly to rotate in place. If the rotation amounts are different, the robot will move in a spiral, as seen in (f).

In scenarios (a) & (b), we apply equal and opposite rotations to two wheels. This pushes or pulls against the third wheel, which is unpowered. Thanks to its passive rollers, this wheel rolls freely in the direction the assembly is pushing or pulling – in this case, along its own main axis of rotation.

In scenarios (c) & (d), different combinations of rotations at each wheel cause the entire assembly to move in the lateral directions. In fact, it can be shown that by carefully choosing the rotation speed and direction of each wheel, we can move the entire assembly in any direction without changing its original orientation, making it an omni-directional robot.
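
Although we are saving the mathematics for the next part, a small sketch makes this claim concrete. Below is a minimal MATLAB sketch of the standard inverse kinematics for a three-wheel omni drive; the mounting angles, chassis radius R, wheel radius r, and the requested velocity are illustrative assumptions, not values from this robot.

  % Wheels mounted at 120-degree intervals, at distance R from the center
  R = 0.15;                          % chassis radius in meters (assumed)
  r = 0.04;                          % wheel radius in meters (assumed)
  alpha = deg2rad([90 210 330]);     % wheel mounting angles (assumed)

  % Desired body velocity: vx, vy (m/s) and yaw rate w (rad/s)
  v = [0.2; 0; 0];                   % e.g. slide along +x without rotating

  % Each row maps the body velocity to one wheel's rim speed
  J = [-sin(alpha(1)) cos(alpha(1)) R;
       -sin(alpha(2)) cos(alpha(2)) R;
       -sin(alpha(3)) cos(alpha(3)) R];

  wheelSpeeds = (J * v) / r;         % wheel angular velocities in rad/s

Setting v = [0; 0; w] recovers scenario (e): all three wheels receive the same speed and the robot spins in place.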

3D Rendering of the final assembly of the omni-directional robot.

Now we have the basic idea in place about holonomic drives and omni-directional robots. We have seen how omni-directional wheels behave given different inputs, and how these wheels allow the robot to move in any direction. And we have done so with barely a line of mathematics. But as we move on to the exciting step of modeling the assembly and its behavior in Simscape, we will dive into the math that we can no longer avoid.

References

  1. M. West and H. Asada, “Design of a holonomic omnidirectional vehicle,” Proceedings of the 1992 IEEE International Conference on Robotics and Automation, Nice, France, 1992, pp. 97-103 vol. 1, doi: 10.1109/ROBOT.1992.220328.

Light Sensor with ATtiny85 and a photo-resistor

The assembled circuit being tested with an oscilloscope.

As part of a larger project, I need a light sensor that can provide a digital pulse whenever the photo-resistor output drops (meaning something has obstructed the light source it is exposed to).

A very basic circuit for that is shown below:

Circuit schematic.

 

Here a photo-resistor forms the bottom half of a voltage divider. The center tap of the divider is fed into an RC circuit to debounce the photo-resistor's output, and this debounced signal is fed into an analog input pin of the ATtiny85 microcontroller. The microcontroller reads the value of this analog input pin every few milliseconds and determines whether the signal has changed. When the signal drops below 3 V, it sets the digital output pin PB5 to LOW, turning off the LED. When the signal returns above 3 V, it sets the output to HIGH, turning the LED back on.
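
For reference, here is the arithmetic behind the circuit; the component values are illustrative assumptions, since the post does not list them. With the photo-resistor R_p forming the bottom half of the divider and a fixed resistor R_f on top, the center-tap voltage is

  V_out = V_cc · R_p / (R_f + R_p)

and it moves as the light falling on R_p changes its resistance. The RC circuit then smooths V_out with time constant τ = R·C; for example, R = 10 kΩ and C = 10 µF (assumed values) give τ = 0.1 s, so spikes much shorter than that are filtered out before the ATtiny85 samples the pin.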

You can try out the circuit by clicking on “Start Simulation” and then selecting the photo-resistor to change its value. You will observe the LED turn ON and OFF as you play with the photo-resistor values.

https://circuits.io/circuits/849806-iot-light-sensor/embed#breadboard

I will soon update the post with more details about how the RC circuit helps us solve debouncing issues with sensors like photo-resistors.


Math Art!

L-shaped membrane function plotted in MATLAB

(Image Credit: http://blogs.mathworks.com/community/2013/06/20/paul-prints-the-l-shaped-membrane/)

L-shaped membrane plot converted into a grayscale image

Grayscale Image converted into a 3D mesh using OpenSCAD

3D mesh converted into stacks using Autodesk 123D Make

Stacks printed on card stock paper and cut using a precision knife

Assembled stack model


3D Scanning with Raspberry Pi and MATLAB

 

Raspberry Pi opens up a lot of possibilities for do-it-yourself projects. It's affordable and full of potential for implementing challenging projects. Having spent several years tinkering with my 3D printer, building my own 3D scanner to complete the 3D workflow was an exciting idea. Using MATLAB and the Raspberry Pi hardware support package for development made the experiment quick and easy, at least from the software perspective.

In this project, I decided to use one of the most basic scanning techniques – focusing more on getting the entire mechanism to work with off-the-shelf components than on getting the best possible results. The Raspberry Pi serves as the main controller board for the setup: capturing images with the Pi Camera, controlling the line LASER diode, and providing control signals to the EasyDriver (stepper motor driver). I used MATLAB and the Raspberry Pi hardware support package to implement the algorithm and deploy it to the Raspberry Pi. This reduced the time required to set up the controller board and allowed me to focus on getting the mathematics behind the scanning algorithm correct.


Figure 1: 3D Point Cloud generated using the scanner


Figure 2: 3D Point Cloud converted into a mesh object using MeshLab and NetFabb Basic


Basic Theory


Figure 3: Image of a 3D object

An image of a 3D object is the projection of the object onto a two-dimensional plane. It is trivial to extract the X and Y coordinates of any point on the object, since they lie within the image plane. However, the depth of the point with respect to the center of the object is lost in the projection. To retrieve this information, we need some special help. Thankfully, this is not as difficult as it sounds.


Figure 4: A simple triangulation setup consisting of a Line LASER diode projecting a line on the 3D object

The image above shows a very simple triangulation setup using a camera and a line LASER. The LASER diode is positioned such that its beam forms a triangle with the view direction of the camera. As you can see from the image, the LASER line projects onto the object and intersects the view direction of the camera exactly at the axis of rotation of the object. The angle at the intersection of the LASER line and the view direction (we will call it THETA) provides our first tool for extracting depth-related information from the image captured by the camera.


Figure 5: Z-coordinate can be determined by simple trigonometric calculations

Let us assume that the Y coordinate corresponds to each row of pixels in the image and maps to the actual Y coordinate through some scaling factor. At each Y coordinate, three points form a right-angled triangle: the point on the surface of the object where the LASER line lands, the point where the LASER line would have intersected the view direction of the camera (had no object blocked its path), and the foot of the perpendicular dropped from the surface point onto the view direction. The length of that perpendicular gives us the distance of the surface point from the view direction and, up to a scaling factor, can be taken as our X coordinate. The other short side of the triangle gives us the depth of the surface point with respect to the axis of rotation, again up to a scaling factor. As you can see, the calculations are basic trigonometric operations.
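
Writing this out, with THETA the angle defined above, x the length of the perpendicular from the surface point to the view direction, and z the depth of the surface point measured along the view direction from the axis of rotation:

  tan(THETA) = x / z,   so   z = x / tan(THETA)

Both x and z still carry the pixel-to-world scaling factors mentioned above.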

Now, let us use this information to build our basic scanning algorithm. To extract the 3D information of each point on the surface of the object, we first need to determine the points at which we can extract it. This can be done very easily by capturing two images – one without the LASER on and one with the LASER on. Since everything else in the camera's view remains the same, the difference of the two images gives us all the points that lie on the LASER line projected onto the object. By converting the difference image into a binary image, we can remove most of the extraneous information, marking all points on the LASER line in white and the remaining pixels black. We can further narrow down our region of interest by making assumptions about the rectangular area that covers the entire object in the image.


Figure 6: Taking difference of two images of the object – one with the LASER line projected and one without, helps us extract the points that are projected onto the object.
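
A minimal MATLAB sketch of this extraction step could look like the following; the file names and the threshold value are illustrative assumptions, not taken from the downloadable project code.

  % One frame with the LASER off and one with it on (assumed file names)
  imgOff = rgb2gray(imread('laser_off.png'));
  imgOn  = rgb2gray(imread('laser_on.png'));

  % Pixels that changed between the two frames belong to the LASER line
  diffImg = imabsdiff(imgOn, imgOff);
  bw = diffImg > 25;                 % fixed binarization threshold (assumed)

  % For each image row (Y), take the mean column of lit pixels as the
  % LASER line location (X) in that row
  [rows, cols] = find(bw);
  xAtY = accumarray(rows, cols, [size(bw,1) 1], @mean);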

Now we have all the components needed to extract the 3D coordinates of every point on the surface of the object. One part still needs to be taken care of: the points extracted from the images all lie in the same image plane and are not yet oriented correctly in 3D space. To fix this, we need to rotate each extracted 3D point by some amount about the axis of rotation. By carefully keeping track of the rotation of the object, we can easily determine the angle by which the points need to be rotated after each rotation step. The final algorithm looks like the flowchart below.


Figure 7: Complete flowchart for steps required to scan a complete 3D object
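
The rotation step in the flowchart boils down to multiplying each extracted point by a rotation matrix about the vertical axis. A minimal sketch, with illustrative variable names and values:

  degPerStep = 1.8;                % turntable step size in degrees (assumed)
  stepIndex  = 10;                 % steps taken so far (illustrative)
  pts = rand(5, 3);                % stand-in for the N-by-3 [x y z] points
                                   % produced by the extraction step

  theta = deg2rad(stepIndex * degPerStep);   % accumulated turntable angle
  Rz = [cos(theta) -sin(theta) 0;
        sin(theta)  cos(theta) 0;
        0            0         1];
  rotatedPts = (Rz * pts')';       % rotate all points about the Z axis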

 

Prerequisites

For this project, you will need the following hardware and software tools:

  1. Software:
    1. MATLAB R2015b (or later) and the Hardware Support Package for Raspberry Pi
    2. Image Processing Toolbox
    3. Camera Calibration Toolbox
    4. Point Cloud Toolbox
    5. MeshLab (tool for generating meshes from point clouds)
  2. Hardware:
    1. Raspberry Pi
    2. Pi Camera
    3. Line LASER diode
    4. EasyDriver Stepper Motor driver module
    5. Stepper Motor
    6. Optional
      1. Raspberry Pi prototyping plate
    7. 3D Printed parts for the turn-table
    8. MakerBeam kit for the chassis
    9. Soldering Iron
    10. Ethernet Cable for connecting the Raspberry Pi to the laptop
    11. Power Supply – 5 V, at least 4 A

 

Downloadable Code and Models

You can download the code from:

http://www.mathworks.com/matlabcentral/fileexchange/56861-raspberrypi-+-matlab-based-3d-scanner

You can download the printable parts from:

http://www.thingiverse.com/thing:1622779

 

Tasks

Task 1: Preparing the Chassis

I have used the MakerBeam starter kit to build my chassis for the project. It is easy to use and provides a sturdy base for the camera, LASER diode and the turntable.

You will need the following in order to complete the chassis:

  1. Two 300mm beams
  2. Two 200mm beams
  3. Three 100mm beams
  4. Two 60mm beams
  5. 90-degree brackets
  6. 60-degree bracket for the LASER diode

Use the 60-degree bracket to angle the LASER diode towards the center of the stepper motor placed between the 300mm beams at the other end. Once you connect the camera, you will have to ensure that the center of the stepper motor exactly coincides with the center of the image captured by the camera. With this, the triangulation setup will be complete.

The turn-table can be printed using the models linked in the download section. The base fits directly on top of the stepper motor. The bearing slides into the base, and the turntable plate drops into place on top of the bearing, connecting to the shaft coupling attached to the stepper motor shaft.

The complete setup should look like this:

The assembled chassis with camera, LASER diode and turntable.

Task 2: Preparing the hardware circuits

I have used the Raspberry Pi Rev B board for this project. I am also assuming that the Pi Camera is connected to the camera port on the board. The circuit diagram assumes the pin layouts match this board.

Circuit diagram for the stepper motor and LASER control.

There are two main parts to the hardware setup:

  1. Stepper motor control
  2. LASER switch control

Stepper Motor Control

For the stepper motor control circuit, we are using the EasyDriver board (link). This board takes away all the pain of having to build a voltage-regulated power supply that can deliver consistent, sufficient current to run a stepper motor, along with the PWM control signals required to run it. With this board, all we need to do is connect a DC power supply with a high-enough current rating (4 Amps is sufficient), connect the control lines to IO pins on the Raspberry Pi, connect the stepper motor to the motor output, and you are ready to go! It really is that simple. Thanks Brian!!!

And did I mention that the EasyDriver also provides a regulated 5v output that can drive other circuits? Well, yes it does! So we will be powering the Raspberry Pi and the LASER diode with this regulated supply! Double thanks Brian!!!

The EasyDriver requires the following inputs:

  1. ENABLE – When this control signal is low (0v), the motor output is enabled and whatever signals are applied to the other control lines get propagated to the stepper motor. We connect this pin to one of the IO pins on the Raspberry Pi (Pin 24). We have to remember to set this pin to logical 0 whenever we want to enable the stepper motor and to logical 1 whenever we want to disable it.
  2. MS1 and MS2 – These two control signals control the micro-stepping mode of the stepper motor. The stepper motor usually takes about 200 steps to complete one full rotation – a step size of 1.8 degrees. Micro-stepping allows you to break this into smaller steps – 1/2, 1/4, or 1/8. Essentially, this reduces the step size to 0.9, 0.45, or 0.225 degrees respectively, allowing finer control of the rotation. We connect MS1 to Pin 25 and MS2 to Pin 23 on the Pi. Setting the pins to (0,0) tells the driver to run the motor without any micro-stepping. Setting the pins to (1,1) tells the driver to run the motor with the smallest step size – 1/8th. The values in between are left to you to decipher.
  3. STEP – This control line drives the stepper motor. When we apply a pulse to this line (000011110000), the driver moves the stepper motor by one step when the line transitions from 1 to 0 (also called the falling edge of the pulse). We connect this line to Pin 18 of the Pi. We will write a small MATLAB function to send a pulse on this IO pin – a sketch follows this list, with more in the software setup.
  4. DIR – This control line decides whether the stepper motor rotates in the clockwise or counter-clockwise direction. We connect this line to Pin 17 of the Pi. Setting this line to logical 0 makes the stepper rotate in the clockwise direction and logical 1 makes it rotate in the counter-clockwise direction.
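
Putting the pin assignments above together, here is a minimal MATLAB sketch of the stepper control using the Raspberry Pi hardware support package; the step count is illustrative, and the actual project code is linked in the download section.

  mypi = raspi();                           % connect to the Raspberry Pi

  configurePin(mypi, 24, 'DigitalOutput');  % ENABLE
  configurePin(mypi, 25, 'DigitalOutput');  % MS1
  configurePin(mypi, 23, 'DigitalOutput');  % MS2
  configurePin(mypi, 18, 'DigitalOutput');  % STEP
  configurePin(mypi, 17, 'DigitalOutput');  % DIR

  writeDigitalPin(mypi, 25, 0);             % (0,0) = no micro-stepping
  writeDigitalPin(mypi, 23, 0);
  writeDigitalPin(mypi, 17, 0);             % logical 0 = clockwise
  writeDigitalPin(mypi, 24, 0);             % logical 0 = driver enabled

  for k = 1:200                             % 200 full steps = one rotation
      writeDigitalPin(mypi, 18, 1);
      writeDigitalPin(mypi, 18, 0);         % falling edge moves one step
  end

  writeDigitalPin(mypi, 24, 1);             % disable the driver when done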

 

LASER Switch Control

For the scanning setup, we need a mechanism to switch the LASER diode on and off as required. We accomplish this using a very simple transistor switch circuit: an NPN transistor (TIP31) in the common-emitter configuration with a voltage-divider bias (Wikipedia). In our circuit, R1 is the 120-ohm resistor and R2 is the 10 kOhm resistor. Pin 22 of the Pi is connected to the outer lead of R1. When we set this pin to logical 1, it is equivalent to tying R1 to Vcc. By ensuring that the voltage across the base and emitter is higher than the forward bias voltage, we allow current to flow through the collector, thereby activating the LASER diode. When we set this pin to logical 0, it is equivalent to tying R1 to ground, pulling the base-emitter voltage to zero, switching off the collector current and deactivating the LASER diode.

The LASER diode has two leads – power and ground. The power lead is connected to the +5v supply coming from the EasyDriver and the ground lead is connected to the collector of the NPN transistor.

Note: It is important to tie the ground of the +5v supply from the EasyDriver and the ground of the Raspberry Pi together to ensure the control signals coming from the Pi have a common ground.
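
From MATLAB, switching the LASER then comes down to driving Pin 22 – a minimal sketch, reusing the mypi connection from the stepper section; the delay is illustrative:

  configurePin(mypi, 22, 'DigitalOutput');
  writeDigitalPin(mypi, 22, 1);   % base driven high, LASER on
  pause(0.5);                     % keep it on for half a second
  writeDigitalPin(mypi, 22, 0);   % base pulled low, LASER off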

I would recommend that you test the entire circuit on a breadboard before assembling it on top of the prototyping shield so that there is no need for any costly reworks!

 

Task 3: Getting the Software together

For the software, we need to do two things:

  1. Prepare the Raspberry Pi to communicate with MATLAB using the hardware support package. You can get the details about that here:
    http://www.mathworks.com/help/supportpkg/raspberrypiio/functionlist.html
  2. Once you have set up the SD card and plugged it into the Raspberry Pi, you need to open the Rasp3DScanner project in the MATLAB GUIDE interface. This is a simple GUI for the scanner. Make sure that you have changed the current directory to point to the folder containing the scanner code you downloaded from MATLAB File Exchange.

Code Walkthrough

You should see the following files in the Rasp3DScanner archive:

File listing of the Rasp3DScanner archive.

Rasp3DScanner.fig is the main GUI file and can be launched from the MATLAB console by typing “guide” and selecting the Rasp3DScanner project from the browse field. Rasp3DScanner.m contains all the code for the application and implements the scanning algorithm. There are also basic utility functions that are self-explanatory in the sense that they control the functioning of the LASER, the stepper motor and the camera.

The cameraParams.mat file contains calibration data from my setup. You should regenerate the cameraParams.mat file for your setup by following the steps here:

http://www.mathworks.com/help/vision/ug/single-camera-calibrator-app.html


Studio Ghibli… a love affair

I don't quite remember what made me watch my first Studio Ghibli film. But I do know for certain that it started a love affair with Hayao Miyazaki's work that will last a lifetime and hopefully will be passed down to my children as well!

So, rather than just watching his creations, I have decided that I am going to make each and every one of his characters as scaled models for my yet-to-be-born children. Now, that is a fantastic legacy to leave behind, eh?

To begin that quest, I decided to make a model of Jiji, the adorable talking cat from Kiki's Delivery Service.

jiji_painted

And yes, if you want to make one for yourself, head to

http://www.thingiverse.com/thing:948676

and grab the 3D model, and print away!

I painted the eyes with enamel paint. It might be a good idea to seal the surface with some putty before painting.

Next character that I am going to attempt is Totoro! Will keep posting as I work on that.


Nerfors are online!!!

Recently, I got my hands on four dirt-cheap radio modules – the nRF24L01s, or the Nerfors as I like to call them. After watching them rot on my table for a while (unfortunately, they don't rot away like fruit does), I finally decided to do something about them. Not to mention a dear old friend who never misses a chance to say, “Dude, just f@#king finish something!”

Nerfors: NRF24L01 modules attached to Arduinos

So here is what I am going to do. I am going to bring these Nerfors online. Then I am going to get three of these Nerfors (the leaf node devices) to communicate with my Raspberry Pi (the gateway node device). Then I am going to get my Raspberry Pi to host a web server that will allow me to talk to each of the leaf nodes and receive status updates from them over an HTML5 page. Then I am going to give some work to the leaf nodes (rather than just letting them send “oinks”). I have a couple of temperature probes lying around. I have a couple of servos lying around. Plenty of work for the nodes.

But the real work will be to control the brightness of an LED lamp. I will talk about this little project of mine soon. Like any other unfinished project of mine, it is waiting for the trainman to come and pick it up. Some of them have been waiting for a long long time.

Here are some technical details about the Nerfors.

nRF24L01 to Arduino connection diagram (breadboard view).

The connections are as follows:

  • GND – Arduino GND pin
  • VCC – Arduino 3.3V pin
  • CE – Arduino digital pin 9
  • CSN – Arduino digital pin 10
  • SCK – Arduino digital pin 13
  • MOSI – Arduino digital pin 11
  • MISO – Arduino digital pin 12

You will notice there is a capacitor connected across the VCC and GND pins of the radio module. This capacitor allows the radio module to pull current more efficiently from the Arduino. This is necessary when the module is transmitting, as transmission produces very short current spikes that the Arduino's supply cannot handle very well. The capacitor stores charge and discharges it whenever the module needs more current.

Here is the video of two nerfors talking to each other:
