Colorful Map Posters

I’ve expanded my coloring page maps shop, and am now producing wall art. I’ve basically spent all my evenings for the past month developing and producing these designs. And this is just the start! As of now, I have produced maps for 10 cities in the US, and each city has from 3 to 7 different variations. These are all digital downloads, meaning that buyers will get a file, and will then print and frame it on their own.

I have designed 11 colorways – all are stylish and modern, ranging from neutral to more vibrant – and I pick and choose which colorways look best for that particular city. Some cities have more water and fewer roads, some have less water and more roads, etc. I have 33 more US cities to cover, and then I’ll expand and go international.

This time around I did not make my own mockups. I bought the mockups.

Here’s my shop! Coloring Maps. (I might change the name)

Twitter Bot that Uses Ruby to Read a Single Line From a Text File

Now that you’ve read the post title and are hooked, LET’S GO!

Following up on the last post about all/some of the Twitter bots I’ve made, here’s another I made yesterday. And I’m going to go into detail about making it.

If you read the previous post, you might have realized that these bots are simple. These bots are not sophisticated AI trying to sow discord and sway elections. They’re posting simple, goofy stuff. Today’s bot is called @badassnames, and it posts cool names that some friends and I have thought up. And it:

  • Is a script written in Ruby
  • Reads a line from a text file, and tweets that line
  • Next time it runs, it reads the next line
  • Sits on a Raspberry Pi
  • The Pi has a crontab that runs the script twice a day
  • Uses the chatterbot ruby gem to tap into the Twitter API and post


Starting from the beginning: Ruby is a programming language. And it’s fairly easy to understand even if you don’t know much about programming. If you want to learn ruby, start with The Odin Project. Install RVM to manage ruby and its “gems”, then install ruby.

Here is the script for this bot.

#!/usr/bin/env ruby

require 'rubygems'
require 'chatterbot/dsl'

names = "/home/pi/Documents/Bots/badassnames/badassnames.txt"
counter = "/home/pi/Documents/Bots/badassnames/counter.txt"

line_num = {|f| f.readline}
line_num = line_num.to_i

File.foreach(names).drop(line_num).take(1).each { |line|
  tweet_text = line
  tweet "#{tweet_text}"

line_num = line_num.to_i + 1

File.write(counter, line_num)

And here’s a little breakdown.

The names = "/home/pi/Documents/Bots/badassnames/badassnames.txt" line is explicitly pointing to a txt file. The text file is very simple. Each line in it has a first and last name. It’s just a long list of names. Like:

Broxton Terradome
Tab Chamberlain
Griff Manifold

Change the path of that to whatever folder your script and text files are in.

Next we have another text file: counter = "/home/pi/Documents/Bots/badassnames/counter.txt"

This one is a counter that tells the script which line of the names file to read. First you’ll create this text file, and put 0 in it. And this line in the script line_num = line_num.to_i + 1 increments the value by 1 each time the script is run (to_i turns the string into an integer; anything read from a file comes in as a string, which is why the conversion is needed).

line_num = {|f| f.readline}

Is saying to read the counter.txt file, and assign the value on its first line (the only line in the file that has any content) to the variable line_num.

Now for the magic. The Tweet de résistance (?).

File.foreach(names).drop(line_num).take(1).each { |line|
  tweet_text = line
  tweet "#{tweet_text}"

This is saying to read the text file that has the names, ignore (drop) the first n lines (whatever the counter tells it), and then read (take) a single line. tweet is a chatterbot function that does the magical Twitter API work. I assigned the contents of the line to a variable called tweet_text, just because.
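To see the drop/take pattern on its own, here’s a tiny standalone demo (using a made-up in-memory array instead of the real names file):

```ruby
# Standalone demo of the drop/take pattern, with a made-up array
# standing in for the names file.
names_list = ["Broxton Terradome", "Tab Chamberlain", "Griff Manifold"]

line_num = 1  # pretend the counter file currently says 1
picked = names_list.drop(line_num).take(1).first
# drop(1) skips the first name; take(1) grabs just "Tab Chamberlain"
```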

So, to reiterate: each time the script runs, it reads the number in the counter file, tweets the line at that position in the names file, and then adds 1 to the counter and writes it back. It’s a nice simple script for incrementally reading a text file.


Ok, maybe I won’t go into super detail here.

  • If you have a Raspberry Pi (this is useful because it’s a computer that is always on, and thus can run your script whenever), put your files onto it.
  • I use scp to do that. Like scp myfile.txt pi@
  • Install ruby on it
  • Then type crontab -e
  • cron is basically a scheduler. You can read up on the formatting.
  • My cron here looks like 26 9,16 * * * /bin/bash -l -c '/home/pi/Documents/Bots/badassnames/badassnames.rb'
  • So it runs twice a day, at 9:26am and 4:26pm.
  • Why am I using bash to run a ruby script? Well, it works.

Oh and you’ll need a Twitter Developer account

Can’t forget that:

  • Start a twitter account.
  • Use a gmail/yahoo/hotmail account or they’ll mysteriously insta-ban you because their spam sniffing is a total pile of crap.
  • Go to and apply for an account and create an “app” and get your credentials.
  • Put them in a .yml file that looks like
:consumer_key: blah
:consumer_secret: bleh
:access_token: bloh
:access_token_secret: bleh

In Conclusion

I don’t know who the audience here is. You’re either looking for a ruby script that does this. Or you’re curious to learn that twitter bots can be super basic and just for fun. If you have an idea, you can spit one out pretty quickly! In this case, I have to come up with a lot of cool names in order to keep this running. But, I don’t mind spending a couple minutes a day thinking of cool things. Do you mind that?

Possible use case for this script: Write a story, one line at a time, and tweet out the story on a regular schedule. I’m sure there are lots of services that do this. But you can do it yourself!

Twitter Bots I’ve Made

I’ve made a few twitter bots over the years. Some were good ideas, and some were bad ideas. I used them as opportunities to learn some python and ruby. I like bots! They take a day or two to set up, and then you just let them go.

Twitter is pretty strict about bots. When I create new ones, the account always gets banned 2 to 3 times BEFORE the bot has even tweeted once. Crazy, huh? Most of my bots are totally benign, but a few were annoying and rightfully banned (or disabled by me).

I think I have like 12 twitter accounts right now. This is from someone who spends at most 2 minutes on Twitter per day. I can’t stand the site! But it’s a good medium for futzing around with bots.


I processed all the collisions in Los Angeles County for a year, and then this bot tweeted out some details of each crash in real time (one year later). “Processing” involved converting the various codes in the collision reports into plain language that can be strung together into sentences.

I might modify it in the future, so it tweets a little less often. It tweeted like 20k times in one year! I thought this was a cool idea. It was a lot of work (at the time) to set up, and thus I didn’t continue it for subsequent years. I’m sure I could implement it in a much simpler way next time.

I used python for this. Code is here on Github. I hosted it on a raspberry pi at my house.


I modified this python twitterbot script for this one. Previously we were using IFTTT to check Pinball Map’s RSS feed and tweet new feed items. But then IFTTT unexpectedly imposed a 25-per-day limit on this service, with no option to pay and upgrade (hello!). So this replaces IFTTT. It’s working great! It’s hosted on a friend’s server.


This one was a bad idea. I wanted to make a grammar bot. And I made the mistake of venturing into the world of partisan politics. This bot corrects people when they use the phrase “Democrat Party” instead of “Democratic Party.” The developer access for this bot was blocked a couple times. Each time I took that opportunity to better understand Twitter’s rules (such as, “bots can’t reply to people that don’t follow you”). Plus, the rules changed over time. This bot did adhere to all the rules, but was banned anyway. For the best!

I even made it so that the bot replies if people reply to it. But, I made it so the replies attempt to defuse any animosity (with things like, “you’re right!”).

This bot used the ruby chatterbot and tracery gems. Tracery is a json format for randomizing and stringing together phrases. With it, you can create many variants of sentences. For example, you can start off your phrase with “segment_01” which contains “hi”, “hello”, “yo”. It will randomly choose one of those, and then “segment_02” can have options for the next part of the phrase (“how are you?” “what is up?” “how’s it going” etc). And then each time the script runs, it will generate a fairly unique phrase! It’s fun to play with.

This was hosted at my home on a raspberry pi. It ran periodically with a cron job.


This was another bad idea. I am an urban planner who specializes in safety and transportation, and it stands out to me when newspapers (and individuals) refer to crashes as “accidents.” There is a “crash not collision” movement that attempts to educate people and change the standards that journalists have.

However, after a couple of days I realized that this bot was in very bad taste, because some of the tweets were from victims of drunk driving collisions. And it was very bad for the bot to reply to them.

This also used ruby, chatterbot, and tracery, and was hosted on my pi.


This one is still going strong (since November 2018)! This bot tweets once a day at 8:15am PST. It tells you whether Jesus has returned to Earth. So basically, it tweets some variety of, “he’s not back yet!”

It uses ruby, chatterbot, tracery. With Tracery, it has a fair amount of variety to what it says each day. I occasionally add to it.

I think this is a pretty good bot. It’s simple. It’s not harmful.


This is another bot for Pinball Map. I wrote this myself in Ruby, as a way to learn more stuff. It tweets a random Pinball Map location every couple of hours. Each tweet includes some details, like the type of business, and a (partial) list of machines.

It uses the Pinball Map API to get the total number of location IDs, then it picks a random one, and then queries that ID for some details. I think I did a good job with this one! There isn’t much of a target audience for it, though.

Coloring Maps

I started an Etsy shop called Coloring Maps. I’m selling digital downloads of street network maps that I’ve designed for coloring.

I also took photos to use as mockup templates.
I colored this one in digitally, just as an example.

This project was a good opportunity to learn: how to process and manage big OpenStreetMap datasets; how to create fun vector graphics for things like water, parks, beaches, etc; and how to create a workflow so I can churn these maps out pretty quickly. I’m trying to have it so most of the shop items are “packs” of multiple maps. I’m adding a few new packs a week.

The maps are made in QGIS (free software), using OSM data (free data). I tried other data methods (using local street network datasets), but that data was unreliable and it was a lot of work to process.

I created an instagram account for the project.

Next I may add some printable posters of cities.

Los Angeles County Parcel Map by House Size

I make a lot of maps at work. I made this one for fun, because the dataset – LA Assessor Parcel 2015 Tax Roll – has some cool fields in it. It’s a map of parcels containing single family homes in Los Angeles County, symbolized by the size of the main building on each one.

It was an easy map to make! And it contains over a million polygons!

Parcel map by house size

I posted this on reddit last week, and it got some attention. People like pretty-looking maps.

The Fart Earth Society

I printed a bunch of stickers with the word FART emerging from the word EARTH. And since that might not seem deep, I then founded The Fart Earth Society, and wrote about why this graphic has a deeper meaning. If you want the sticker (it’s bumper sticker size), it’s $5! Visit the site for details, and some creative writing.

fart earth sticker

To Live and Ride in LA: Do Bike Lanes Make Angelenos Safer?

I recently completed my Master’s degree in Urban & Regional Planning from UCLA. Now I’m sharing my capstone project. This is a research project I completed in my final year, with the Los Angeles County Bicycle Coalition (LACBC) as my client. The LACBC used some of the findings in their own report, the 2015 Bike Count Report. That link contains a nice synopsis of key findings, plus a link to download the whole report. The report is very digestible, with some great graphics. It shows the results of the 2015 bike count (I did the statistical analysis), as well as the results of my research on the safety impacts of newly-added bikeways in Los Angeles.

The two key findings are that bicycling has decreased since the last count (though it’s increased on bikeways), and that the new bikeways have made bicyclists safer.

I’ll admit that it was stressful to have to reveal to the LACBC that ridership was down. After all, one of their goals is to promote bicycling. But another one of their goals is improving safety (see Operation Firefly, for one). And so I believe the safety findings are important (and positive!).

My analysis measures changes in bicyclist-involved collisions as a function of ridership. This methodology, to be frank, makes it an exception among active transportation safety impact studies. Most seem to look at raw crash numbers, without accounting for ridership. A quick explanation for why this matters: if the number of crashes doubles after a bike lane is added, but ridership has quadrupled, then the rate of crashes has decreased (a simple analysis of raw crash numbers would state that crashes increased, even though safety per bicyclist had actually improved).
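Spelling out that arithmetic with made-up numbers:

```ruby
# Hypothetical counts for the example above: crashes double while
# ridership quadruples after a bikeway is installed.
crashes_before,   crashes_after   = 10, 20
ridership_before, ridership_after = 100, 400

rate_before = crashes_before.to_f / ridership_before  # 0.10 crashes per rider
rate_after  = crashes_after.to_f / ridership_after    # 0.05 crashes per rider
# Raw crashes doubled, but the rate per bicyclist was cut in half.
```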

The bicyclist counts are conducted at discrete locations throughout Los Angeles. The counts take place every other year, and many sites are repeated each time. I identified all of the new bike lanes and sharrows in Los Angeles, and then narrowed them down to the ones with before/after counts. I found 17 sites that fit these criteria. I also included 18 control sites. The study finds that the rate of crashes per bicyclist declined by 43% after the installation of bikeways (there was no real difference between road diets, squeeze bike lanes, and sharrows). At the control sites, ridership levels remained constant, while the number of crashes increased by 22%.

Read the entire report!

To Live and Ride in LA: Do Bike Lanes Make Angelenos Safer?

The report shows the detailed results for each study site. I’d love to hear feedback (not only comments about the results, but feedback on the methodology).


Los Angeles County Crashbot

Over winter break I created a twitter bot called @lacrashbot. It automatically tweets every injury-collision that occurred in Los Angeles County in 2014, and it does so exactly when each one happened. It’s depressing!

I did it using python. And I put the code on github. I thought I’d share how I did this (so this post is basically just the readme file). This is a tutorial for creating a twitter bot that reads a csv and tweets content based on a timestamp.

LA Crashbot

LA Crashbot is a Twitter Bot that posts crashes as they occurred. The idea is to help people better understand the frequency at which people crash. The data used is two years old, so the crashes are tweeted in delayed “real time.” See it in action here.

Basic overview

  • Uses publicly available data.
  • Decodes the values in the data, so that sentences can be constructed.
  • Constructs narratives of each crash (that are under 140 characters).
  • Uses a Raspberry Pi to schedule and execute each tweet.
  • All the scripts are python. And you also need spreadsheet software, such as LibreOffice Calc or Excel.

Note that I’m an amateur pythonist. I used this project as a way to learn it. So do not use this project if you want to learn best practices.

If anyone wants to use these scripts as a starter kit for their own bots, feel free (the decoding and narrative scripts can be useful for other goals, as well). As long as you have the data (which is not included here), you can create something similar for your region, or for just pedestrian-involved crashes, or for just crashes that resulted in death, etc.


  • Python
  • These python modules: tweepy, time, csv, os, pandas, numpy (I think that’s all of them)
  • A computer that will stay on all the time


  • Get some data. For Los Angeles County I got raw SWITRS data from the CHP website.

  • Use the script to filter those collisions to suit your desires. In my case, I filtered them to only include injury collisions. This reduced the number of collisions from 49,023 to 25,498 (in LA County, 2014).

  • In the new csv, clean up the dates. Right now, the timestamps look like this: time = 0000, date = 0101. We want them to look like: time = 00:00, date = Jan 01. So first you have to split the month from the day. In LibreOffice Calc, use Text to Columns. Then use the script to convert the month numbers to spelled-out (three-letter) months; then insert that into your working csv. Then merge that field with the day field, to produce “Jan 01”. Next, format the time field to be four digits, with a colon in the middle. Use 00\:00 as the custom format option.
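For what it’s worth, the same cleanup can be done in a few lines of code instead of spreadsheet steps. A sketch (not what the project actually uses):

```ruby
# Reformat raw stamps of the form described above:
# date "0101" -> "Jan 01", time "0000" -> "00:00".
MONTHS = %w[Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec]

def clean_date(mmdd)
  "#{MONTHS[mmdd[0, 2].to_i - 1]} #{mmdd[2, 2]}"
end

def clean_time(hhmm)
  "#{hhmm[0, 2]}:#{hhmm[2, 2]}"
end
```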

  • Decode the data. The values in the data are more machine-readable than human-readable. For example, collision severity is 1, 2, or 3. By decoding those, we convert the numbers into strings, such as “injured,” “severely injured,” “killed.” In the decoding script, I’m choosing which fields I’m (probably) going to use, and decoding them in ways that will simplify the narration process. That script also carries over some fields that didn’t need decoding. And it doesn’t carry over a whole bunch of fields that we don’t care about.
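A miniature version of the decoding idea (written in Ruby for illustration; the mapping shown is made up, not the real SWITRS coding):

```ruby
# Decode machine-readable severity codes into strings that can be
# dropped into a sentence. The mapping here is illustrative only.
SEVERITY_LABELS = {
  "1" => "killed",
  "2" => "severely injured",
  "3" => "injured"
}

def decode_severity(code)
  # Fall back to a generic phrase for codes we didn't map.
  SEVERITY_LABELS.fetch(code, "involved in a collision")
end
```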

  • Create narratives from the decoded data. The narrative script has LOTS of if/else statements in it. I put it together after much trial and error. It will probably give you a headache to look at. And it will probably annoy people who actually know python. But it works. When you create your own narratives you’ll discover quirks in the data. This is where you iron out those quirks the best you can. I feel that I didn’t do a great job with the bicyclist section. I was hesitant to publish it as is. But ultimately I did so because I concluded that it’s better than how the data originally was. Neither was totally accurate; but this is more accurate.

  • Use the Date and Time fields to schedule the tweets. The scheduling script creates another script (not included here). Make sure to read the comments that tell you the header to add to the generated script after it’s been created. When the scheduling script is run, it schedules the tweets for the entire span of the data. It does so using the at command. When a tweet is scheduled to occur, the at command runs the tweeting script, which checks LACinjury2014_Narrate.csv (or whatever file your narrative script generated) for any narrative that occurs this very day and minute. Then it tweets!

  • Make sure to put your Twitter app credentials in the tweeting script (there’s a tutorial on setting that up). I left those fields blank here. In the future, I should move those credentials to a separate file that is imported into the tweeting script, and is ignored by git.

  • Test beforehand! Just keep your twitter account private at first; make a csv with like five crashes; set their dates for five minutes from now; run the scheduling script, then the script it generates; then wait five minutes to see if they post. Note that twitter doesn’t always like it when your tweets have the exact same content as your other tweets. That’s why I included the timestamps in the narratives. Also, do some tests by just running the tweeting script directly (because it’s easier to see errors when you run it yourself). Make sure you only run it when there’s a tweet with a timestamp of this very minute. Also, make sure to test this on the computer where you’ll be storing this stuff. In my case, I was ssh’ing into my Raspberry Pi.