A while ago I bought a version of the Kyosho Blizzard, which features the new iReceiver, a unit that lets you control standard servos and ESCs with your tablet or mobile phone. There’s also a camera fixed to the Blizzard’s fuselage for streaming video. I didn’t have high expectations about the video quality, but it’s the range that is really poor. For control, the range is what you would expect from a wi-fi device, about 50 m. The application, however, stops streaming video as soon as it detects even slight packet loss, which generally happens after 20 m. Since mobile networks are nearly ubiquitous, the thought arose: how about tunneling the traffic over a 3G connection?
Some device is needed that acts as a VPN server and forwards the traffic to the iReceiver. I used a Beagleboard-xM for this, since it has multiple USB ports. Two are needed in this case: one for the wi-fi adapter and one for the 3G modem. Basically any Linux-running device would work; performance plays no role here. These instructions are for the Ubuntu distro, but others would work too with some modifications. Trying this out also does not require modifying your Blizzard in any way, except that you need to strap the parts to it somehow (there’s a nice flat space at the rear end of the model body).
Setting up the 3G connection
Once you have your board booting and have ssh access to it, the first task is to get the board connected to the Internet. For this, a 3G modem is needed. I used a Huawei E230, since it seems to work with Ubuntu out of the box. Install wvdial, which does the “calling” to initiate the connection:
apt-get install wvdial
Edit /etc/wvdial.conf and put the following lines in it. You might need to change these depending on your carrier; Google for the proper settings. Note that you also need a subscription that gives you a real, publicly accessible IP address. In Finland the only operator currently offering this is DNA. There may also be limitations on which ports can be accessed. I use default ports in this post to keep it simple (NB: DNA also blocks most ports, so pay attention when setting up the VPN). The SIM PIN is also disabled for the sake of simplicity.
[Dialer dna]
Modem = /dev/ttyUSB0
Init = AT+CGDCONT=1,"IP","internet"
Phone = *99#
Stupid Mode = 1
Username = " "
Password = " "
Test that the connection works by starting it with “wvdial dna”.
Setting up dynamic host name
Since your operator issues a dynamic IP address, which may change on each connection, we need to set up a dynamic host name for connecting to your device. First, create an account with one of the dynamic DNS services; I used DynDNS. Then install ddclient, which takes care of updating your IP address:
apt-get install ddclient
The install process asks you about your selected service and user name, so you are all set after that.
Connecting to iReceiver
First we need to find out your iReceiver’s name. They should all be unique, so multiple iReceivers can be in use at the same time. Install the wireless tools:
apt-get install wireless-tools
Power on your iReceiver and scan for its name with “iwlist wlan0 scan”.
You should see something like this
Cell 05 - Address: 6C:71:D9:79:09:F8
    Channel:8
    Frequency:2.447 GHz (Channel 8)
    Quality=56/100  Signal level=56/100
    Encryption key:on
    ESSID:"iReceiver7909F8"
Copy the ESSID and insert following lines to your /etc/network/interfaces file
auto wlan0
iface wlan0 inet dhcp
    wpa-ssid "iReceiver7909F8"
    wpa-psk "12345678"
Replace the ssid with your ESSID. The password is always 12345678. Start the interface (ifup wlan0) and check that it gets an IP address from the 192.168.15.* range.
Setting up OpenVPN server
The VPN server is the essential part here. The Kyosho iReceiver application on your phone expects to be connected to the receiver via wi-fi and expects the receiver to be reachable at a static IP address. With a VPN we can create a tunnel that transports the data over the Internet and at the same time makes your phone believe that the iReceiver’s address is a local address to which data can be sent directly. Generating the various keys is the most laborious task and has been explained much better elsewhere, so go to https://help.ubuntu.com/community/OpenVPN and do the certificate and key generation for both server and client. Then write the following to /etc/openvpn/server.conf:
# Change this if ISP blocks the default port
port 1194
proto udp
topology subnet
# Client address range
server 192.168.14.0 255.255.255.0
# Tell the client that 192.168.15.* network can be reached through VPN
push "route 192.168.15.0 255.255.255.0"
dev tun0
# Use correct key and certificate file names here
ca ca.crt
cert myservername.crt
key myservername.key
dh dh2048.pem
ifconfig-pool-persist ipp.txt
keepalive 10 600
comp-lzo
persist-key
persist-tun
verb 3
mute 20
status openvpn-status.log
client-config-dir ccd
Start the server with
service openvpn start
Setting up IP masquerading
Even though the client can now send packets to the iReceiver, not everything works yet. The iReceiver’s address is 192.168.15.2 and it issues client addresses from .3 onward. The iReceiver happily accepts steering packets from any address, and you could drive the Blizzard around, but the video would not work. The reason is that the iReceiver ignores the video start packet unless it comes from the 192.168.15.* subnet, while our VPN client gets an address from the 192.168.14.* subnet. To make this work, we use iptables to set up IP masquerading so that every packet going out through the wlan0 device appears to originate from the 192.168.15.* subnet:
iptables -t nat -A POSTROUTING -o wlan0 -j MASQUERADE
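Note that masquerading only matters if the kernel actually forwards packets between tun0 and wlan0, which is off by default on most images (my board may have had it enabled already). If steering packets never reach the iReceiver, enable forwarding by putting this line in /etc/sysctl.conf and applying it with “sysctl -p”:

net.ipv4.ip_forward=1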
Setting up VPN Client
On your Android device, install an OpenVPN client. In the client application there’s no way to enter the settings manually, so we need to create a settings file. Create a client.ovpn file with the following contents. Copy and paste the keys and certificates you generated previously, and replace the hostname with the one you created earlier.
client
dev tun
proto udp
remote yourdyndnshostnamehere 1194
tun-mtu 1454
nobind
persist-tun
comp-lzo
verb 3
mute 20
auth-nocache
<ca>
contents of .ca file here
</ca>
<cert>
contents of .crt file here
</cert>
<key>
client key here
</key>
Trying it out
After all the steps, trying it out is just a matter of starting the VPN connection and launching the iReceiver app. Drop a comment if there’s a bug in the instructions or some step needs clarifying! The same idea should work with any other wlan-controlled model too, regardless of the brand.
As a good friend of mine, Ville Kotimäki (who is also an excellent photographer, by the way), had just finished his PhD, figuring out a present to give was in order. I got the idea to machine a really solid piece out of aluminum and engrave it with text. I also wanted it to play something, as that seems to be a recurring theme for me.
The initial idea was to build a fire alarm siren inside an aluminum container, which would be welded shut and would have only an ON button, with no means to turn it off. The idea was quickly scrapped because I would have to listen to it too. The second iteration was to make the container read out the thesis like an audiobook.
Fortunately I managed to recruit my friend Heikki to do all the laborious tasks. We started by downloading the thesis PDF. The PDF was converted to text, and Heikki did the most annoying part of the project by cleaning the text files of badly converted items like equations, picture captions and tables. In the meanwhile I tried a few different speech synthesis programs. I would have liked to use some open source software, but most of them sounded like Stephen Hawking on a bad day.
The commercial Nuance voices were far superior to everything else I tried. Nobody seemed to have a license for the product itself, but OS X has a nifty feature: its text-to-speech support is by Nuance, and from the system preferences menu you can even download additional Nuance voices. Heikki proceeded to write a script that splits the text file into single pages. These pages are then converted to speech with the “say” command in OS X, and the resulting AIFF files are converted to WAV files suitable for the wav library we used on the Arduino.
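Heikki’s actual script isn’t reproduced here, but the page-splitting part can be sketched roughly like this (the page size and naming are my assumptions, and the OS X “say” invocation is left as a comment since it only runs on a Mac):

```python
# Rough sketch: split a thesis text file into page-sized chunks,
# breaking at word boundaries. Page length is an assumption, not a
# value from Heikki's actual script.

def split_pages(text, chars_per_page=1800):
    """Split text into page-sized chunks without breaking words."""
    words = text.split()
    pages, current, length = [], [], 0
    for word in words:
        if length + len(word) + 1 > chars_per_page and current:
            pages.append(" ".join(current))
            current, length = [], 0
        current.append(word)
        length += len(word) + 1
    if current:
        pages.append(" ".join(current))
    return pages

# Usage on OS X (hypothetical voice name):
#   for n, page in enumerate(split_pages(open("thesis.txt").read()), 1):
#       subprocess.run(["say", "-v", "Alex", "-o", "%d-1.aiff" % n, page])
```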
The container itself was first drafted on a piece of paper. The measurements were dictated by the speaker we used (diameter 66 mm) and the fact that the speaker, a 9 V battery and a power switch sit on top of each other inside. My brother machined the container shape from solid aluminum on a manual lathe from the drawing. The engraving, the machining of the legs and the drilling of the bottom to let the sound out were designed with Alphacam software and machined on a Haas UMC-750 5-axis machining center at my company, G-Tronic.
I thought the engraving would be an easy job with Alphacam, but it turned out that the post processor, which generates code for the machining center from the CAD drawing, had a few nasty bugs that occasionally sent the tool through the work piece. Eventually I had to resort to the help of the professional machinists at the company, but (or perhaps because of that) the finished product looks great! The black effect on the text is achieved by coloring it over with a black Sharpie and wiping it with a tissue dipped in acetone.
The schematic for the device is really simple. The SD card is connected in parallel with the ISP interface to an ATMega328p. Since SD cards operate at 3.3 V, an LM328 regulator drops the voltage from the 9 V battery to 3.3 V. A BC547 transistor acts as an extremely simple audio amplifier, switching the 9 V to the speaker as commanded by the processor’s PWM output. Not an audiophile solution, but it works surprisingly well in this case. Since Heikki wanted to learn some electronics, I just drew the schematic in Eagle and left him the job of figuring out the parts arrangement on the stripboard and soldering the parts together.
We had great trouble with the first version. We tried to use an LD1117V33 fixed voltage regulator. When measured, it output a solid 3.3 V, but the ATMega just would not start to execute the code, even though the thing worked without problems when powered from a PSU. The only explanation I can think of is that the regulator starts to oscillate, but when we checked the output with an oscilloscope, nothing obvious was visible. In the end we exchanged the regulator for the trusty old LM328 and the problems went away. The circuit draws about 170 mA, which translates to roughly 3 h of usage from a 9 V battery, but we figured that no sane person wants to listen to this for more than a couple of minutes at a time. Note that the connector SV2 pinout is not the same as the SD card pinout! For an example of how to connect the SD card to an AVR, see here.
The firmware itself is quite straightforward; the only difficulty was the user interface, since there is only a power button and we did not want the device to start from the beginning of the book each time. We also wanted a couple of different voices to read the book. On the SD card, the audio files are saved with the file name pattern <page number>-<voice number>.wav. When the device is turned on, it picks one of the three voices at random. Each time it finishes playing one page (one file), it saves the next page number to EEPROM memory, and on the next power-up it starts playing from the saved page. If the device is turned off before it finishes playing a page, the counter resets to the beginning of the book. There’s even a nifty page-turn sound saved in every other file.
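The resume behavior described above can be sketched like this (a Python model of the firmware logic, not the actual Arduino code; the class and method names are mine):

```python
import random

# Model of the firmware's resume logic: files are "<page>-<voice>.wav",
# a voice is picked at random on power-up, and the next page number is
# persisted (in the real device, to EEPROM) after each page finishes.
class ThesisPlayer:
    def __init__(self, total_pages, voices=(1, 2, 3)):
        self.total_pages = total_pages
        self.voices = voices
        self.saved_page = 1          # stands in for the EEPROM counter

    def power_on(self):
        self.voice = random.choice(self.voices)
        self.current_page = self.saved_page

    def play_next_page(self):
        """Return the file name to play, then persist the next page."""
        filename = "%d-%d.wav" % (self.current_page, self.voice)
        self.current_page += 1
        if self.current_page > self.total_pages:
            self.current_page = 1    # wrap around at the end of the book
        self.saved_page = self.current_page
        return filename

    def power_off_mid_page(self):
        """Turning off before a page finishes resets to the start."""
        self.saved_page = 1
```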
When we initially tested the setup on an Arduino UNO board, we used the TMRPCM library. It worked well, but on the final hardware we used the internal 8 MHz oscillator instead of a 16 MHz crystal and found out that the library does not support clock speeds other than 16 MHz. We changed to the superior SimpleSDAudio library, which enabled us to use a 31.25 kHz sampling rate at 8 MHz. The library also seemed to have a smaller compiled size, but that was irrelevant to us since we are only using 10k of the 32k code space.
- Source, scripts and schematics are available at: https://github.com/JanneMantyharju/thesis-grenade
I was really happy with the finished device. One of Ville’s friends dubbed the device the “Thesis Grenade”, which is quite accurate by the looks of it =) The video below is a little bit repetitive; I tried to demonstrate the different voices used, but the random generator decided to use the same voice many times in a row.
I needed a tool that can play back a recorded serial stream “in real time”, so that the playback perfectly reflects the stream that was originally sent. Rather than building my own, the OpenLog project seemed to be a perfect starting point. The board itself is very small, about the size of the micro-SD socket on its bottom side. By default the device records serial data received on a standard FTDI-pinout serial port. The device is Arduino UNO compatible, and the code can be compiled and uploaded with the Arduino IDE. The device is configured by editing a CONFIG.TXT file on the SD card; if the file is not present on the card, the device creates one with default settings at startup.
Example of config file:
As you can see, I added the timestamp option at the end. If this option is enabled, each received byte in the log file is prefixed with a 4-byte unsigned long timestamp. The timestamp is in little endian order and in milliseconds; an unsigned long provides about 49.7 days (2^32 ms) before the counter rolls over. The value can be decoded for example in Python with the struct.unpack function:
buf = f.read(4)
(timestamp,) = struct.unpack("<L", buf)
char = f.read(1)
The logged files will be five times the size of untimestamped ones (four extra bytes per data byte), but with large SD cards this does not really make a difference. I quickly tested the performance, and the device still seemed to be able to log without data drops at 115200 bps.
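The real-time playback idea can be sketched like this (a simplified Python version of my own; the serial-port write and sleep are injected so only the timing logic is shown):

```python
import struct

def iter_timestamped(data):
    """Yield (timestamp_ms, byte) pairs from a timestamped capture.
    Each record is a 4-byte little-endian millisecond counter
    followed by one data byte."""
    offset = 0
    while offset + 5 <= len(data):
        (ts,) = struct.unpack_from("<L", data, offset)
        yield ts, data[offset + 4:offset + 5]
        offset += 5

def replay(data, write, sleep):
    """Replay bytes with the original inter-byte delays.
    'write' and 'sleep' are injected so this can drive e.g. pyserial."""
    last = None
    for ts, byte in iter_timestamped(data):
        if last is not None and ts > last:
            sleep((ts - last) / 1000.0)  # delay in seconds
        write(byte)
        last = ts
```

With pyserial this would be called roughly as replay(open("LOG.TXT", "rb").read(), port.write, time.sleep).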
The Python script that plays back the stream in real time can be found in the examples folder.
I’m using a Samsung NX1000 for aerial photography. The camera has a nifty feature of using a smartphone as a remote viewfinder and shutter release, but unfortunately the good idea is watered down by a buggy and limited program, and the feature freezes the whole camera all too often. Fortunately there is a simple way of triggering the shutter via the camera’s USB port: the trick is to have a 68K resistor between the ID and GND pins. After this, the USB data lines can be used to trigger the camera’s focus and shutter.
Tip: if you have spare micro-USB cables, it’s easy to source a connector from the cable that came with the camera. Just squeeze the black plastic around the connector with pliers and the plastic casing will crack open, exposing the connector. The connector on the cable has a small PCB, which makes it easy to solder the required resistor in place. If you use an SMD resistor, the casing can even be reassembled.
The cable described above works fine for manual use, but to use the camera for aerial photography some interface to the RC receiver is needed. Fortunately this is easily achieved with a small Arduino program, which reads the PWM value from a receiver servo port and then pulls either the shutter or the focus line low based on the channel value. The compiled code size is under two kilobytes, so it’s possible to use a small and inexpensive microcontroller like the ATtiny2313.
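The channel-decoding logic is simple enough to sketch here (a Python model of what the microcontroller does; the exact thresholds are my assumptions for illustration, the real values are in the repository sources):

```python
# An RC servo pulse is nominally 1000-2000 microseconds wide.
# Below the focus threshold nothing happens, the middle band pulls
# the focus line low, and the top band pulls the shutter line low.
# Threshold values here are assumptions, not the firmware's constants.

IDLE, FOCUS, SHUTTER = "idle", "focus", "shutter"

def decode_pulse(width_us, focus_threshold=1300, shutter_threshold=1700):
    """Map a servo pulse width (microseconds) to a camera action."""
    if width_us >= shutter_threshold:
        return SHUTTER
    if width_us >= focus_threshold:
        return FOCUS
    return IDLE
```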
The schematics and sources can be found at: https://github.com/JanneMantyharju/nxshutter
Compatible with (at least) the following models: NX20, NX210, NX1000, NX1100 and NX2000.
As with other hardware in this blog, if you need one and don’t want to make one yourself, drop me a message and we’ll see what we can do!
Update: If you don’t want to make your own, I’m selling these ready made. Just drop me a message.
A while ago Farnell emailed me and offered one (inexpensive) product as a sample in exchange for mentioning it on this blog. I browsed for a while for an interesting part and settled on the Microchip MRF24WB0MA/RM WiFi module (order code 1823142). This module is quite inexpensive and is used in products like the WiShield, and thus has good Arduino support.
I wanted to upgrade my electricity meter to communicate over WLAN to get rid of the XBee receiver at the back of my server. After some prototyping I ended up using the RN-XV module from Roving Networks. Since my application did not have to do any fancy network stuff, the RN-XV was a perfect match. It has the same footprint as the XBee module I was already using, so the hardware required no changes. The module supports WPA2 security and can remember its settings. Communication via HTTP requests is incredibly easy: I set up the module to generate an HTTP request to my home server’s address each time the AtMega output measurement data.
Unfortunately some things are just too good to be true. The module soon proved to be quite unreliable: after about 14 hours of operation it would lose the connection to the access point forever. I tried to solve the issue with technical support. By default, if the module loses the AP it never tries to reconnect unless the linkmon parameter is specified, which to me is a quite braindead default. Even with linkmon the RN-XV did not work for long with my Cisco AP. I tried everything, including rebooting the module every hour, adding commands to force a reconnect, and even hard-rebooting the module, but without much success. Eventually I changed the RN-XV to connect to a different AP. It still loses the connection every now and then, but it is able to reconnect after a random time. Still, in every 24 hours the module is unavailable for a total of about 3 hours due to lost connections. The RN-XV firmware version I have is 2.36.
In the end, the only changes I made were to modify the server backend to accept HTTP requests and to change the code running on the AtMega to output measurement data periodically instead of listening for requests from the XBee.
The RN-XV was configured with the following commands:
set ip dhcp 1 # get ip from dhcp
set wlan auth 4 # use wpa2-psk encryption
set wlan phrase password # set network password
set wlan ssid network # set the name of accesspoint to connect
set wlan linkmon 5 # After 5 tries declare connection to AP lost and try to reconnect again.
set ip proto 18 # turn on HTTP mode (0x10) + TCP mode (0x2)
set ip flags 0x6 # close tcp connection if accesspoint is lost
set ip host ip # server ip address
set ip remote 8080 # server port
set com remote GET$/? # GET string to be sent to server. Any data from uart will be concatenated to this string
set uart mode 2
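On the server side, handling these requests is trivial: the module simply appends whatever the AtMega prints to the “GET$/?” string, so the backend only needs to pull the measurement out of the query. A minimal sketch of that parsing (my own illustration, not the actual backend from the repository; the query format is an assumption):

```python
from urllib.parse import urlparse

def parse_measurement(request_line):
    """Extract the data the AtMega appended to 'GET$/?'.
    Assumes a request line like 'GET /?<data>' or
    'GET /?<data> HTTP/1.0' ($ is sent as a space by the RN-XV)."""
    parts = request_line.split()
    path = parts[1] if len(parts) > 1 else parts[0]
    return urlparse(path).query
```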
All in all, for simple projects I can really recommend the RN-XV module over the MRF24 for its simplicity, though definitely not for its reliability. Both modules cost about the same, but apart from Sparkfun I don’t know who else has them in stock. Farnell could start selling them, since ordering from Sparkfun can get expensive if you don’t live in the States.
Sources can be found from the repository: https://github.com/JanneMantyharju/electricity-monitor
In my previous post I described building an adapter that converts APM telemetry data to be shown on the Hitec Aurora’s screen. Since then the APM2 hardware has been released, and some code changes made the previous version unusable. I updated the hardware and software to also support APM2. I couldn’t get the SPI transfer to work on APM2, but fortunately it has a spare serial port to use. The good thing is that now only three wires need to be connected to the APM2 and nothing needs to be soldered.
APM1 is still supported; wiring instructions are in the previous post. When used with APM1, USE_SPI must be defined in main.cpp.
This version also includes support for showing GPS coordinates. Previously I thought it was not a really useful feature, but I changed my mind after searching for a crashed plane in a wintery forest in the light of the setting sun for a long time; coordinates would have been useful after all. Time and date are still unsupported, since the APM internal date format is yet to be decided.
Sources are uploaded to github: https://github.com/JanneMantyharju/apm-hss-emulator
Update 5.1.2013 – Error in schematic corrected. Added a FET to power the video transmitter after GPS lock has been acquired.
Update 7.1.2013 – Kind of a long shot, but if somebody wants to lend me a Spektrum or Graupner system, I could modify the device to support different telemetry protocols.