Saturday, 14 April 2018

Eroadarlanda / Elways / Electric charging road in Sweden

What is it?

The world's first electric charging road "opens"** in Sweden, near Arlanda airport. It can "charge"*** "electric" cars while they are driving.
 
It is a massive press release that appears to have hit a lot of mainstream media in the last couple of days (12 April 2018).


"Stretch of road outside Stockholm transfers energy from two tracks of rail in the road, recharging the batteries of electric cars and trucks"

They certainly seem to have an excellent PR agency.

Can it do what it says?

 Mostly, no.

They have a truck which can apparently run from this Scalextric-type road. It is not clear whether the truck is electric: most of the video I can find shows a diesel truck with a dummy electrical load. They also have a test car (which is, I suspect, petrol-powered; at least it sounds like it in the video) which carries a 17 kW dummy load powered from the track. It is not clear whether the car is road-legal with the dummy load fitted (I can only find video of the car being used on a test track).

Could it possibly work, ever?

Maybe, yes. If it can actually deliver 17 kW to a moving vehicle, that is approximately enough to run a car at motorway speed and charge at a modest rate simultaneously.
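As a very rough back-of-envelope check (the consumption and speed figures below are my assumptions, not anything Elways have published): an efficient EV uses something like 15 kWh per 100 km at motorway speed, i.e. roughly 15 kW of continuous power at 100 km/h, which would leave only a couple of kW spare for charging.

# Back-of-envelope only; the consumption and speed figures are assumptions.
consumption_kwh_per_100km = 15   # assumed motorway consumption of an efficient EV
speed_kmh = 100                  # assumed cruising speed
track_power_kw = 17              # the figure quoted for the demo load

driving_power_kw = consumption_kwh_per_100km * speed_kmh / 100
spare_for_charging_kw = track_power_kw - driving_power_kw
print(driving_power_kw, spare_for_charging_kw)   # 15.0 kW to drive, ~2 kW spare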

They made various claims about its safety, robustness etc., which seem at least vaguely plausible. They have obviously done some testing in various conditions; there is video of the truck driving in snow.

So what's the problem?

No normal electric car could ever charge from the track while driving. The car's electrical system would have to undergo major modifications, and that's ignoring the major software changes that would also be needed.

That's not to say EVs can't recharge at all while driving - they already do, via regenerative braking - but that energy comes from the car itself, not from an external supply.

The electrical system in an electric car is fairly complex and needs to handle a lot of power in several directions. Most electric cars can charge at up to 50 kW, and many will discharge even faster under some conditions. Moreover, they can switch between the two in a very short time while driving.

EVs usually have an onboard charger for charging from AC (Level 1 and 2) sources. This is part of the power electronics and can't operate at the same time as the drive.

Rapid charging (Level 3) is done by an external charger, which communicates with the car in a rather complex way, and provides DC at the correct voltage / current for the specific charging requirements of the vehicle. The external charger is a big heavy box which would be prohibitively heavy to put on a light vehicle. They usually take an industrial 3-phase supply.

I don't think it's plausible to use a level 3 charger in the track:
  • There aren't enough pins for a level 3 charger, which needs at least an AC supply AND a DC supply.
  • There would not be time on a 50 m section (or even a 200 m section) of track for the charger to negotiate the correct voltage, current etc. for the specific vehicle
  • The large number of complex level 3 charger boxes would be prohibitively expensive.

There is a lot of difference between delivering some power to the vehicle (which has been demonstrated) and charging an EV traction battery (which has not).

But those things could be solved?

Yes, in principle, but no car maker is going to want to do that. Modern vehicles are not designed for a single market; it's not cost-effective. Cars designed now (without Elways support) are going to continue to be made for 20 years without a major redesign of their drive train.

It's not remotely viable to redesign the electrics of an EV for a feature which is available only in one country, as the weight / performance / space penalty would exist in the design everywhere else.

Why else am I sceptical?

Most of the videos that I've seen seem to be on the test track (not on the airport highway). I saw one 12-second clip which appears to show a slightly less hacky pickup (maybe on an electric truck) that looks like it might be powering the real truck. But otherwise, all the media seem to be from the test track, or with dummy loads.

The Guardian's funky video clearly shows a diesel truck with the test load running some bright lights - this is presumably on the test track, because that rig won't be road-legal.

The PR is certainly very strong. There is a lot of marketing wank that makes very big promises, but is very light on details.

There is basically no detail whatsoever about the vehicles they have to use on this "open" road. I understood from the various press releases that there is one truck which will be regularly using the road, which is a 2 km section. Again, it's not clear whether the truck actually takes useful traction power from the road (whether charging or just driving), or just drives a dummy load.

* I wish they would spend more money on developing the tech instead of making mockup videos and writing PR stuff
 

 Anything positive?

Yes, a few things:
  • The tech for getting the track and pickup working seems well developed; they showed it working back in 2013.
  • It has been demonstrated working in bad weather.
  • If we forget about charging electric cars and just consider an extra power system for (ICE powered) commercial vehicles, it starts looking a lot more sensible.
  • Heavier, slower vehicles pay a lower weight and aerodynamics penalty for having the tech installed.
  • Delivering 17 kW to an electric motor on an otherwise-diesel truck would save it a great deal of fuel; it would also extend the range of an electric one (though not by much if you only have 2 km of track). See the rough numbers sketched below.
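To put a very rough number on that last point (every figure below is an assumption of mine, not something Elways have published):

# Back-of-envelope sketch; all figures here are assumptions.
electric_assist_kw = 17          # power delivered from the track
diesel_kwh_per_litre = 10        # approximate energy content of diesel fuel
engine_efficiency = 0.40         # optimistic truck engine efficiency at cruise

litres_saved_per_hour = electric_assist_kw / (diesel_kwh_per_litre * engine_efficiency)
print(round(litres_saved_per_hour, 1))   # roughly 4 litres of diesel per hour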

** opens to the single vehicle it's designed to work with. But other traffic can still use the road, so I suppose that's open?
***  well, not actually charge. But deliver power to. Perhaps?

Sunday, 8 April 2018

Strange RC terminology

Here are some terms used in RC models

"Electronic speed contoller" aka ESC

What does it do?

It converts RC servo pulses (only the pulse width matters: 1500 µs = centre, approx. 1000 µs = one extreme, 2000 µs = the other) into a speed, often with reverse, so that it can drive anywhere between full-speed reverse, stopped and full forward. These usually control brushed (DC) motors, but versions for "brushless" motors are available too (which often only drive in one direction, as they are intended for RC aircraft).
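As a rough sketch of the mapping (the exact end points, dead band and arming behaviour vary between ESCs; this just shows the conventional 1000-2000 µs range in a few lines of Python):

# Rough sketch only: map an RC pulse width (in microseconds) to a signed
# throttle value, -1.0 (full reverse) .. 0.0 (stopped) .. +1.0 (full forward).
# Real ESCs add calibration, a dead band and arming logic on top of this.
def pulse_to_throttle(pulse_us, centre=1500, span=500):
    throttle = (pulse_us - centre) / span
    return max(-1.0, min(1.0, throttle))   # clamp to the valid range

print(pulse_to_throttle(1500))   # 0.0  -> stopped
print(pulse_to_throttle(2000))   # 1.0  -> full forward
print(pulse_to_throttle(1000))   # -1.0 -> full reverse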

 
Why is it called "electronic" speed controller? Surely that's redundant?

No. Originally, RC models used either a petrol engine (in which case the engine is the speed controller, and a servo operates its throttle), or a second battery with a primitive electro-mechanical speed controller - essentially a variable resistor with a servo attached.

So the "electronic" means that it's fully solid state - no moving parts.

"Battery eliminator circuit" aka BEC

WTF? It is apparently the RC term for a voltage regulator. Just a voltage regulator.

So why is it called a BEC?

Because originally models used two batteries: one for the radio receiver and servos, another for the main drive. The traction battery was much bigger and generally rechargeable (NiCd chemistry was common before lithium cells were good). The radio battery sometimes used primary cells, or a different form of rechargeable cell, e.g. standard AA cells.

The BEC essentially eliminates the secondary (RC receiver) battery by regulating the voltage from the traction battery down to a stable voltage for the receiver (the receiver will probably want 5 V, but traction batteries are often 7 V or higher).

The two-battery scheme is probably somewhat obsolete now, especially on flying models (a BEC is generally a lot lighter than a second battery).


Saturday, 17 March 2018

How to make a cheap ripoff PS3 controller work as a Bluetooth joystick under Linux

The device was sold as a PS3 controller. I don't have a PS3, I wanted it to control a robot (obviously).

The device in question identifies over USB as "Gasia Co.,Ltd PS(R) Gamepad"

BUT the software support is terrible. I did some of the following things, and it eventually worked.

Get qtsixa from here: https://github.com/supertypo/qtsixa.git - there are other branches available; this one has hacks for cheapo devices.

* Do not bother trying to pair it with sixpair utility - the cheapo PS3 controller rip-off does not remember the paired device.
* Instead, use sixpair to see what device it expects to be paired with, and change the host's bluetooth address accordingly.

I'm using a Raspberry Pi, so I hacked this file:

 /usr/bin/btuart

This file is specific to the Raspberry Pi and is run on startup. Normally it uses the Raspberry Pi's serial number to generate a Bluetooth address somehow.

So I just hacked it to use the address 00:12:34:56:78:9b instead, which is the one the controller expects. Then reboot.

THEN - install "sixad" and run "sixad --start"

Monday, 22 August 2016

Deterministic json output from python

How do we get deterministic JSON output from Python?

JSON objects are not inherently ordered, their properties can be written in any order without making any difference to their meaning.

Unfortunately, people often use text tools to check the output, and sometimes we want to generate deterministic JSON.

Consider this bash/Python mess:

for ((i=0; i != 10; i++ )); do python3 -c 'import json; print(json.dumps({"a":420, "b":99, "c":100}))'; done

{"a": 420, "c": 100, "b": 99}
{"a": 420, "c": 100, "b": 99}
{"c": 100, "a": 420, "b": 99}
{"b": 99, "a": 420, "c": 100}
{"c": 100, "b": 99, "a": 420}
{"c": 100, "b": 99, "a": 420}
{"c": 100, "a": 420, "b": 99}
{"c": 100, "a": 420, "b": 99}
{"b": 99, "c": 100, "a": 420}
{"c": 100, "a": 420, "b": 99}

What's happening here?

Python's dict hashing uses a random seed to prevent hash-collision attacks, which means that each run of the program produces a different hashing pattern, and the dictionary keys come out in a different order. This is good because the order isn't predictable to an attacker, but it means that an otherwise deterministic program generates different output each time it's run.

Why would this be a problem?
  • You can't compare JSON files using "diff" any more, because these differences always appear
  • Continuous integration - different output triggers false alerts
  • Humans may find the JSON easier to read when the keys are in a predictable order.

How do we fix it?

Use collections.OrderedDict. But be sure that you don't initialise it from a regular dict.

Correct:

od = collections.OrderedDict([("a",420), ("b", 999), ("c", 888)] )
od = collections.OrderedDict(); od["a"] = 420; od["b"] = 999; od["c"] = 888


Incorrect (creates a normal dict first):

od = collections.OrderedDict({"a":420, "b": 999, "c": 888})
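Putting it together (the keys and values here are just examples):

import collections
import json

od = collections.OrderedDict([("a", 420), ("b", 999), ("c", 888)])
print(json.dumps(od))   # always {"a": 420, "b": 999, "c": 888}, run after run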

 

Tuesday, 26 April 2016

How to use "cron" to run periodic scheduled jobs

"cron" is the Unix (Linux, etc) scheduler which runs regularly scheduled jobs. This post is not meant to be the "man" page (it has one of those already), but ideas how to use "cron" in a robust way.

Setting up cron jobs

There are at least *three* ways of configuring cron jobs on a modern Linux system; technically these are extensions, but they're so quasi-standard that they're even (possibly) available on FreeBSD :)

  • Per-user "crontab" file. This can be edited using crontab -e, or replaced wholesale with crontab <file>. If you are installing system-level software, you probably don't want to use this. Each user can have only one crontab file.
  • System-wide "crontab" file, usually /etc/crontab. This is usually managed by the distribution / package manager, and you probably don't want to change this; there is only one.
  • Per-package "crontab" files - usually kept in /etc/cron.d. There are multiple files, usually one per package (or per app). This is the best solution if you're distributing software as a package, on multiple machines, and want the installation to be robust and repeatable.

The system-wide crontab formats give the option of running cron jobs as any OS user (who must exist, obviously!).

More tips on configuration:

  • A few environment variables can be set at the top of the file, typically things like PATH, SHELL and, importantly, MAILTO.
  • If you don't set MAILTO, then the stdout and stderr of each job will be sent by email to the user who owns the cron job. This is seldom what you want nowadays (see the example below).
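A per-package file in /etc/cron.d might look something like this (the user name and script path are invented for illustration; note the extra "user" field that the system-wide formats require):

# /etc/cron.d/mypackage  (hypothetical example)
MAILTO=""
PATH=/usr/local/bin:/usr/bin:/bin
# m  h  dom mon dow user    command
17   *  *   *   *   appuser /usr/local/bin/mypackage-hourly-job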

When to schedule jobs

Don't schedule daily jobs between midnight and 2am. Cron jobs are scheduled in local time, and the sysadmin might not have configured the machine for UTC, so there is a possibility that some cron jobs are repeated or missed during time-zone or daylight-saving changes.

In general, I don't like to schedule daily jobs at all, unless they aren't very critical.

For important stuff, it's probably better to have an hourly job which just checks the (UTC) hour, so that it avoids the time-zone dependency.
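A minimal sketch of that idea in Python (the chosen hour, and what the job actually does, are just placeholders):

#!/usr/bin/env python3
# Run hourly from cron; only do the real work at 03:00 UTC, so the
# schedule no longer depends on the machine's local time zone.
import sys
from datetime import datetime, timezone

if datetime.now(timezone.utc).hour != 3:
    sys.exit(0)   # not our hour - nothing to do

# ... do the actual daily work here ...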

A feature of "cron" which is little-known, but available on most systems, is the "@reboot" jobs, which are run shortly after the system boots. Such jobs are useful to perform cleanup work that otherwise might not get done at all (e.g. a 6am job, when the user seldom has the machine powered on at 6am).

Anacron

Some (i.e. most Linux) systems have a small tool called "anacron", which runs jobs hourly, daily, weekly etc., with no particularly fixed schedule (because they run in sequence with other jobs, which may take some time).

This could be used as an alternative to "cron"; however, it has more limitations and, in particular, runs everything as root.

Multiple instances

"cron" mostly does not care about multiple instances, and will execute more than one copy of your job. This is almost never desirable, so if there is any probability whatsoever of this happening, you should prevent multiple instances.

The "flock" shell program might be handy (recipes are in the man page), or creating a file and exclusively locking it (e.g. in Python, C or your favourite language).

If you have a slow job (say, a backup, or something which relies on the network) and a second instance starts, there is a good chance that the second instance will slow down the first, then a third instance starts, and so on, until the whole system goes down under too many instances of the same cron job.
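A minimal locking sketch in Python, assuming Linux and a writable lock-file path (the path and the job body are made up):

#!/usr/bin/env python3
import fcntl
import sys

lock_file = open("/var/lock/myjob.lock", "w")
try:
    # Take an exclusive, non-blocking lock; this fails if another instance holds it.
    fcntl.flock(lock_file, fcntl.LOCK_EX | fcntl.LOCK_NB)
except BlockingIOError:
    sys.exit(0)   # another instance is still running - quietly give up

# ... do the slow work here; the lock is released when the process exits ...

The shell equivalent with the "flock" program is roughly flock -n /var/lock/myjob.lock /usr/local/bin/myjob, placed directly in the crontab entry.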

Stampeding herds

If your software will be installed on many machines sharing infrastructure (e.g. a network, a server, a VM host), then it may be useful to try to avoid a stampeding herd.

"cron" has the unfortunately property that it usually runs jobs at the exact same moment (usually to the same second), if identically configured on several machines. If your cron job depends on something, it can cause problems or failures.

The obvious solution is a "random sleep": sleeping a random amount of time (usually only seconds or minutes) before doing any work which requires infrastructure access. In some cases, though, a badly placed random sleep can increase resource usage; imagine this sequence:
  • Start up the program
  • Load lots of huge libraries
  • Connect to the database server
  • Random sleep (0...300 seconds)
  • Perform the work, which takes maybe 5 seconds
  • Exit
Here the random sleep is doing more harm than good, because the program sits on memory and a database connection while doing nothing. Be sure to place the "random sleep" before taking any significant resources!
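In other words, something like this (a sketch only; the 0-300 second range is just an example):

import random
import time

time.sleep(random.uniform(0, 300))   # spread the herd out *before* taking resources

# only now import the heavy libraries, connect to the database, etc.
# ... then perform the few seconds of real work ...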

Another option that I've seen occasionally is to generate the "crontab" at install time, with a random value for the minute field.

Error handling

One of the nice things about running as a "cron" job is that "log & exit" is often sufficient error handling. This is particularly true for an hourly job, where the next hour is an ideal time to retry whatever failed.

Some types of errors (e.g. network problems) might just go away if we try again 1 hour later. Other types (e.g. out of disc space) might need someone to fix them.

The stdout / stderr from a cron job is often lost, so it is usually important to log to a file or to the system log.
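A minimal "log & exit" sketch (the log path and the job body are invented for illustration):

#!/usr/bin/env python3
import logging
import sys

logging.basicConfig(filename="/var/log/myjob.log", level=logging.INFO,
                    format="%(asctime)s %(levelname)s %(message)s")

def do_the_work():
    pass   # placeholder for the real work (network access, etc.)

try:
    do_the_work()
except OSError as exc:   # e.g. network trouble or a full disc
    logging.error("job failed: %s", exc)
    sys.exit(1)          # give up; the next hourly run will retry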

Alternatives

Maybe you don't need "cron" at all?

Historically, many popular web applications have just done periodic clean-up work in response to user requests - either on randomly chosen requests, or on particular user activity.

Some third-party services exist simply to "hit" a PHP script on a regular basis (such apps are usually written in PHP, which I'm not necessarily advocating).


Writing a permanently-running "daemon" process has some advantages over "cron", although it does incur more initial work, more "boiler plate" code. If work is required very frequently (say, several times per hour) it might be more convenient or have better performance to run it in a daemon.


Friday, 15 April 2016

How to correctly make "latest" symlinks

"Latest symlink"

A "latest" symlink, is a symbolic link (on Linux, Unix etc) which links to the "latest" version of a file.

Suppose we have a file which takes some effort to create, and which is regenerated periodically or in response to some stimulus (e.g. user activity). We want to maintain a "latest version" symlink for it.

Ideally the properties should be
  • latest symlink always points at the latest version (duuh!)
  • latest symlink always exists
  • latest symlink never points at a partially completed, broken, missing or otherwise bad file
Sometimes people do this in a way which won't work.

How to create a symlink

Dead easy, right? Just call the "symlink" function. 

 int symlink(const char *oldpath, const char *newpath);

 DESCRIPTION
       symlink()  creates  a  symbolic  link  named newpath which contains the
       string oldpath.


But if we call "symlink" and the newpath param points to an already existing file (including a symlink) then it will return EEXIST.

The wrong (obvious) way

import os

try:
  os.unlink(dest)          # remove the old link, if it exists
except OSError:
  pass
# race window: between here and the symlink() call, "dest" does not exist
os.symlink(src, dest)

The correct (not so obvious) way

templink = dest + '.temp'
os.symlink(src, templink)    # create the new link under a temporary name
os.rename(templink, dest)    # atomically replace the old "latest" link

Why?

Because we want to avoid a race condition in which the destination symbolic link does not exist. Renaming (within one filesystem) is atomic and instantly replaces the existing link with the new one; no other program can ever see a missing file.
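Wrapped up as a small reusable function (a sketch only: the temporary-name convention is my own, and it assumes nothing else creates files with that suffix):

import os

def update_latest_symlink(target, linkname):
    # Make "linkname" point at "target", with no moment in time
    # where the link is missing or half-made.
    templink = linkname + '.temp'
    try:
        os.symlink(target, templink)
    except FileExistsError:
        # leftover temp link from an earlier crashed run - remove it and retry
        os.unlink(templink)
        os.symlink(target, templink)
    os.replace(templink, linkname)   # atomic rename, replacing the old link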

Other wrong ways

Some possibly common, but wrong (or even wronger) ways to do this
  • Just create the "latest" file directly with our "do lots of work" process. This is really bad, because during file creation another process can see a partially completed file. If your code looks like "write file header; do lots of work to create the file body; write file footer", then there is a really good chance that another process sees an incomplete file.
  • Create a different file, then copy it over the "latest" name (file copying isn't atomic).

Tuesday, 15 September 2015

Headless web browsers: PhantomJS and SlimerJS

What is headless web browsing?


It's using a web-browser-like application to do automated fetching and analysis of web pages, without a human user present. This is different from simply fetching HTML content via HTTP; headless web browsers typically also load images, process JavaScript and CSS, and lay out the page content (albeit invisibly).

The developer can then use scripting (usually Javascript) to examine the page as it is laid out in memory, as if in a "real" web browser, to look at the style of text, etc.

We could even use OCR to look for text within images shown in the page.

Why?

  • To analyse the content of pages more effectively. Lots of pages nowadays contain a huge amount of "boilerplate" uninteresting text, often in HTML elements without semantic meaning (e.g. DIV). Only by using CSS (and sometimes JavaScript) can a computer see the page as a human would.
  • Generation of screenshots
  • Getting metadata which is dynamically written by scripts, such as JavaScript-created links.
  • Automated testing of web applications

What tools are available?

Several. Traditionally some users have hacked together their own solutions, either using a web-browser extension or by embedding a web browser engine (often Webkit) in a C++ program.

Here I'm looking at PhantomJS and SlimerJS.

PhantomJS and SlimerJS essentially perform the same task - to run developer-specified Javascript code in the context of an automated web browser, without using a real web browser.

PhantomJS is based on Webkit; SlimerJS is based on Mozilla / Firefox.

PhantomJS

Two versions of PhantomJS are available - the 1.9 series and the 2.0 series. The main difference is that the 2.0 series uses a more recent version of Webkit.

Unfortunately, the last time I tested them, neither was very good for browsing lots of real web pages "in the wild". NB: this may have been fixed by the time you read this - test it yourself!

  • Lots of memory usage
  • Slow
  • Prone to crashing; diagnosing crashes is very difficult
  • v1.9 has an out-of-date Webkit which has less feature support
  • v2.0 seems to leak memory very badly.

So PhantomJS is probably OK for some automated-testing scenarios, particularly if you have a "single-page application", or only a small number of pages to test.

But accessing large numbers of "real" web pages quickly breaks it, and it's not easy to fix.

Essentially the problem is that Webkit is now an abandoned fork (Apple and Google have both forked off from it) and bugs don't get fixed upstream. PhantomJS does not usually apply bugfixes to Webkit itself.

PhantomJS is a C++ executable that includes most of Webkit inside its binary. This makes it almost completely standalone, but it means that compiling it is VERY time-consuming, particularly on limited hardware. For example, on a Raspberry Pi I was able to run PhantomJS, but building it there would take days (a more powerful system is really required). On a modern x86 system compiling is much quicker, but can still take an hour; the link step uses several GB of memory (not really a problem on a server, but be careful if building in a memory-limited VM).

Linux binaries are also available from the web site, which is handy :)

SlimerJS

SlimerJS is a completely different beast from PhantomJS. It is not a C++ binary and doesn't attempt to embed the engine directly in its own application. Instead, it uses an obscure feature of Firefox to run an alternative "user interface application" which provides an environment which is almost identical to PhantomJS.

This has benefits and drawbacks

  • It is not completely headless. It doesn't require user input, but it won't work without an X server on Linux (this is easily fixed using Xvfb). Under Windows, visible windows may be shown unless it runs on an alternate desktop, or as a service.
  • The web browser used is really identical to the Firefox version you're using - all the same features are available.
  • If you update Firefox, SlimerJS updates too (pro: good for security; con: it might break)

SlimerJS is under moderately active development, but has a much smaller user community than PhantomJS.

  • Performance of SlimerJS (using Firefox 40) seems MUCH BETTER than PhantomJS in general
  • Stability seems much better too (although I have had a few crashes)
  • The same APIs are supported, but the documentation is mostly worse (example: the filesystem objects are barely documented)

Wrap up

So there you are - headless web browsing IS a niche application, but it is very useful in its place. I like SlimerJS because its overall design approach seems to work better in the general case.

It would be interesting to have a SlimerJS / PhantomJS-type application which uses Google Chrome as its web browser. I imagine one will appear, if it does not already exist.