Correction: November Newsletter
In the November newsletter, the link to the WL Toys WL144010 was incorrect.
Use this link to get to the correct car. If you ordered the wrong car, I suggest you cancel your order. Apologies for this mistake.
It is a great (and interesting) time to be in AI, and Donkey Car is no exception. A lot of updates have landed this month, including a new car chassis, Donkey Car GPT, 5.x updates, Donkey UI updates, and more. We are better at releasing software than sending newsletters.
We have added support for a new RC car, the WL Toys WL144010. Given that many of the previously supported cars are hard to get, this should make it much easier to build a Donkey Car. The WLtoys car is available via Amazon (quicker) or Banggood (less expensive). It is worth noting that this is our first brushless-motor car, which makes it slightly harder to collect training data at low speed than the other cars; after testing it, though, we believe this will not be an issue once you get used to it.
Try out the new Donkey Car GPT. It includes much of the community content and the manuals for Donkey Car, and it does a pretty good job of solving Donkey Car problems for users.
Donkey Car version 5.x is currently being tested as our newest release candidate. In functionality it matches the current main branch, but it will allow for a much easier installation through pip. In the future, a simple pip install donkeycar will do the job for everyone who does not want to read or change the Donkey Car source code immediately, but just wants to drive around. Donkey Car 5.x is based on Python 3.9 and TensorFlow 2.9, a steep and overdue upgrade from the current 3.7 / 2.2 versions.
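For those who just want to drive, the workflow should then reduce to something like this sketch (the createcar scaffolding command comes from the existing docs and is assumed to carry over unchanged to 5.x):

```
pip install donkeycar              # once 5.x lands on PyPI
donkey createcar --path ~/mycar    # scaffold a car application
python ~/mycar/manage.py drive     # start driving
```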
The Raspberry Pi Foundation has officially released Bookworm, the newest Debian release: https://www.raspberrypi.com/news/bookworm-the-new-version-of-raspberry-pi-os/.
Bookworm ships with Python 3.11. We have tested Donkey Car on that OS using TensorFlow 2.12 and found it working without problems, so hopefully we will have an upgrade soon.
Given that it is that time of year: any Donkey Car store order over $50 placed before Tuesday will come with a Donkey Car Trucker hat. No need to do anything special; just place the order without the hat, and it will be automatically included.
The Donkey UI is currently undergoing an upgrade, with a somewhat more modern design and changes to certain features, including user options to select the number of pilots that can be run in parallel and a user-defined screen layout. In addition, multiple config parameters will be editable in the training screen. Here are some pictures:
Thank you all; we will try to do a better job of getting newsletters out.
It is the first new major release in 9 months. 14 committers across 30 features! Thanks to everybody involved.
Changelog here. Most notably, there are many quality-of-life features in this release, especially developed by @Ezward, many training improvements by @DocGarbanzo, and many improvements in OLED support and RC control by @zlite, which are relevant for the next announcement!
Since the beginning of Donkey Car, we have gotten two consistent pieces of feedback:
“Let me use the RC controller that is included in the box with the RC car I purchased.”
“Can I skip the servo driver and just use the GPIO on the Raspberry Pi?”
Over the years, many folks have hacked together their own solutions to these two problems, but we did not have an opinionated approach that made it easy for new users to get started.
We are happy to announce that we have built a new Donkey Car hat. It enables users to skip the servo driver, and it also brings several other interesting features. In particular:
CPU fan - the Raspberry Pi 4 needs a fan or a large heatsink
OLED screen - tells you critical state information without using a Linux shell
Odometry connection - an easy and opinionated way of connecting odometry
Throttle and steering servo control - eliminates the servo driver
Throttle and steering RC input - use a 2- or 3-channel RC controller
One note: this board only supports the Raspberry Pi, since the NVIDIA Jetson Nano does not support PiGPIO. If you have a Nano, please continue to use the servo driver and Bluetooth controllers. Special thanks to Chris Anderson for designing the board and iterating on it. It can be purchased at the Donkey Car store (here) and, in Asia, at the Robocar Store (here).
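For the curious, this is roughly what driving the servos straight from the Pi's GPIO looks like with the pigpio library. A minimal sketch: the pin numbers are assumptions for illustration, not the hat's actual pinout.

```python
# Servo-style PWM over the Pi's GPIO with pigpio (pigpiod must be running).
import pigpio

STEERING_PIN = 13   # hypothetical BCM pin for the steering servo
THROTTLE_PIN = 18   # hypothetical BCM pin for the ESC

pi = pigpio.pi()    # connect to the local pigpiod daemon

# Standard RC PWM: 1000-2000 microsecond pulses, 1500 is center/neutral.
pi.set_servo_pulsewidth(STEERING_PIN, 1500)  # center the steering
pi.set_servo_pulsewidth(THROTTLE_PIN, 1500)  # arm the ESC at neutral

# ...a drive loop would map -1..1 commands onto the 1000-2000 range...

pi.set_servo_pulsewidth(STEERING_PIN, 0)     # 0 = stop sending pulses
pi.set_servo_pulsewidth(THROTTLE_PIN, 0)
pi.stop()
```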
Most folks use two batteries in their Donkey Car: one that powers the ESC and servo, and a second that powers the Raspberry Pi or Jetson Nano. Most people use the Anker Astro E1, a tried and tested battery that provides several hours of power, but there was never a good place to mount it… until now. Since we added the new Donkey Car hat, there is now a place to screw a battery mount to the Donkey Car. You can print the new mount here; you will also need zip ties.
On April 2, there will be an in-person indoor race at the Robot Block Party at Circuit Launch. Competitors register here; spectators register here.
We also have a virtual race in our online simulator on April 9th. Register for that here.
Finally, the outdoor racing season is getting ready to kick off, which allows the use of GPS and other outdoor localization technologies. We’ve experimented with a figure-8 track in the Circuit Launch parking lot, which was fun.
We’ve also tried racing at Warm Springs Raceway, an RC racing track in Fremont, CA. That was super fun, and we hope to be able to do an event there later this year.
In April we released version 4.2, which comes with the following features (you can already try them in the dev branch):
The Donkey Gym simulator, which is both super useful for training at home and what we use for our virtual races, has been significantly improved. Along with new tracks, it now supports virtual lidar.
Speaking of which, lidar is now supported on real Donkey Cars, too. RPLidar and YDLidar (in beta) 2-D lidars now have parts in Donkey Car, so you can do effective obstacle avoidance, SLAM, driving in the dark, or a Donkey F1/10. A minimal sketch of the part interface follows the contributor list below.
Instructions (with pictures!) on how to set up the car to drive with the RC controller that usually ships with any car - this provides the ‘classic’ RC driving feel.
A new Donkey UI app:
You can edit your tub using the app; this replaces donkey tubclean
There is also support for training the model
You can run and compare two pilots
On OSX/Linux you can transfer data between the car and the PC
Support for L298 motor controller in the car app
Support for both simple and quadrature encoders to add odometry for learning and driving
Enabling storage of position, gyro, acceleration, velocity in the tub when using the Donkey Gym!
MQTT logging
donkey findcar now working on RPi 4
Contributors: zlite, showsep93, sctse999, Maximellerbach, Meir Tseilin, fengye, Heavy02011, BillyCheung10botics, EricWiener, DocGarbanzo.
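To make the lidar item above concrete, here is a rough sketch of what a Donkey "part" looks like. The sensor class and channel name are made up for illustration; the run_threaded/update/shutdown contract and the Vehicle wiring follow the standard car template.

```python
# A hypothetical threaded sensor part, standing in for the real lidar parts.
import time
import random

class FakeRangeFinder:
    def __init__(self):
        self.distance = 0.0
        self.on = True

    def update(self):
        # runs in its own thread while the vehicle loop is on
        while self.on:
            self.distance = random.uniform(0.2, 5.0)  # fake reading in meters
            time.sleep(0.05)

    def run_threaded(self):
        # called by the vehicle loop at its drive frequency
        return self.distance

    def shutdown(self):
        self.on = False

# Wiring it into a car app, following the standard manage.py template:
# import donkeycar as dk
# V = dk.vehicle.Vehicle()
# V.add(FakeRangeFinder(), outputs=['range/distance'], threaded=True)
# V.start(rate_hz=20)
```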
While we continue our monthly virtual races (the next one is this Saturday, April 24th), we’re now finally able to resume in-person racing quarterly at the newly-renovated Circuit Launch in Oakland. The first event will be on Sat, May 22.
The usual indoor races will now be joined by outdoor races, which allow the use of GPS and other external positioning systems. Our guest speaker will be the Amazon AWS DeepRacer team, which has just open-sourced its code and will be demoing it. As before, it’s free and Brazilian BBQ will be provided. It will be so good to see everyone again! Competitors sign up here; spectators sign up here.
We are going to sunset the Discourse forum server. It has gotten very little usage and just confuses new users, although we’ll leave it up for legacy and Google search reasons. Our Discord server is where the action is for real-time discussion and questions, so please join if you haven’t already.
For forum-like technical questions and static user content that can be found in search, we’re shifting to Stack Exchange using the “donkeycar” tag.
Safe, in-person racing returns to Oakland on December 5. It will be outdoors and limited to 50 mask-wearing people. Unlike most DIYRobocars races, this one will allow GPS! Sign up here.
For those of you interested in an online race, sign up here for the December 19th virtual race.
It feels like just yesterday that Donkey Car 4.0 was released; nonetheless, 4.1 is coming soon. With the new maintainers, we expect to do quarterly releases of Donkey Car. Dirk Prange will be the release captain.
PyTorch and fast.ai support - If you have been putting off learning PyTorch and fast.ai, now is the time to get started.
Auto Encoder - More below
Lots of other fixes…
More on the AutoEncoder
The driving idea behind the auto encoder in Donkey Car is the decoupling of the visual system from the ‘motor neurons’ of the car, called the controller. As it turns out, the visual system, represented by the CNN layers, is far more complex than the controller. Moreover, the visual system is not necessarily dependent on a specific track; it is a component that we will train on a much bigger set of data and then re-use on new tracks, providing efficient transfer learning for the controller part only. This is very similar to learning motor skills in biological brains: in order to learn how to catch a ball or ride a bicycle, we do not have to learn to see again - that is a skill we have already acquired at that stage. We just learn how to activate the muscles in the right order at the right time, which is a much simpler task.
Cutting the standard Donkey linear model after the CNN layers and adding a single dense layer gives us an image encoder which compresses the high-dimensional image data into a much lower-dimensional latent space: our 120x160x3 frames are mapped to a 128-dimensional vector. The auto encoder then attaches a mirrored deconvolutional network to the back, which inflates the latent vector back to the input image size. A good introduction with Keras code can be found here: https://blog.keras.io/building-autoencoders-in-keras.html.
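To make that concrete, here is a minimal Keras sketch of such an encoder/decoder pair. The layer sizes and filter counts are illustrative assumptions, not Donkey Car's exact architecture.

```python
from tensorflow.keras import layers, models

inp = layers.Input(shape=(120, 160, 3))

# encoder: convolutions down to a 128-dimensional latent vector
x = layers.Conv2D(24, 3, strides=2, padding='same', activation='relu')(inp)
x = layers.Conv2D(32, 3, strides=2, padding='same', activation='relu')(x)
x = layers.Conv2D(64, 3, strides=2, padding='same', activation='relu')(x)  # -> 15x20x64
x = layers.Flatten()(x)
latent = layers.Dense(128, activation='relu', name='latent')(x)

# decoder: mirror of the encoder, inflating the latent vector back to image size
x = layers.Dense(15 * 20 * 64, activation='relu')(latent)
x = layers.Reshape((15, 20, 64))(x)
x = layers.Conv2DTranspose(32, 3, strides=2, padding='same', activation='relu')(x)
x = layers.Conv2DTranspose(24, 3, strides=2, padding='same', activation='relu')(x)
out = layers.Conv2DTranspose(3, 3, strides=2, padding='same', activation='sigmoid')(x)

autoencoder = models.Model(inp, out)
autoencoder.compile(optimizer='adam', loss='mse')  # pixel reconstruction loss
```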
The auto encoder can be trained completely unsupervised by minimizing the difference between output and input images during training. However, with this approach we will not get a latent representation of the image information that is most suitable for detecting features relevant for driving, like lane edges. Hence our training loss function also contains steering and throttle terms.
From the pre-trained auto encoder we subsequently use only the encoder part in our model, where the latent vector is fed directly into the controller. This is a small feed-forward network that is about two orders of magnitude (i.e. a factor of 100) smaller than the decoder. Training is performed only on the controller, which makes it very fast.
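Schematically, the transfer step then looks like this (again a sketch, reusing the names from the block above):

```python
# Freeze the pre-trained encoder and train only a small feed-forward
# controller on top of the 128-dimensional latent vector.
encoder = models.Model(autoencoder.input,
                       autoencoder.get_layer('latent').output)
encoder.trainable = False  # the visual system stays fixed on a new track

h = layers.Dense(64, activation='relu')(encoder.output)
h = layers.Dense(64, activation='relu')(h)
steering = layers.Dense(1, activation='tanh', name='steering')(h)
throttle = layers.Dense(1, activation='tanh', name='throttle')(h)

pilot = models.Model(encoder.input, [steering, throttle])
pilot.compile(optimizer='adam', loss='mse')
# only the dense layers above are updated, which is why training is so fast
```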
Things to develop further:
1) Training might be so fast that we can possibly train while we are driving - not necessarily on the car, but likely on the PC in the background.
2) We can explore RL on the physical car, not only in simulation.
3) With the new augmentation we can train a de-noising auto encoder (see the Keras example linked above). Instead of feeding the original images in training, we feed noisy images into the auto encoder, but in the loss function we compare the reproduced images against the original ones. In particular, changes in light levels, which are scalar multiplications of the input images, can be efficiently factored out. Currently, for a trained linear model, such an image transformation creates a change in output, i.e. the car over- or understeers and drives faster or slower with changes in light level. Auto encoders are very good at de-noising: if you play with the Keras tutorial above, you can create very blurred images that you will find hard to recognize, and the network will generally do a better job than the human eye.
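As a small illustration of the de-noising idea in point 3, assuming float images in [0, 1] (the x_train name and brightness range are made up for the example):

```python
import numpy as np

def add_light_noise(images, low=0.6, high=1.4):
    # scalar brightness multiplier per image, mimicking light-level changes
    scale = np.random.uniform(low, high, size=(len(images), 1, 1, 1))
    return np.clip(images * scale, 0.0, 1.0)

# train on corrupted inputs but score against the clean originals:
# autoencoder.fit(add_light_noise(x_train), x_train,
#                 epochs=10, batch_size=64)
```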