Tyler's Green Chile Stew Recipe

Posted October 19, 2017

It's soup season, and people always ask me for this recipe.

So here it is:

Ingredients

  • 8 cups vegetable stock*
  • 4 large potatoes, diced
  • 3 cans of pinto beans
  • 1 can of corn kernels
  • Hatch green chile, chopped **
  • Half an onion
  • 1 tbsp of garlic
  • Olive oil

Directions

  1. Chop the onions and garlic and sauté them in olive oil in the bottom of a large stock pot.
  2. When the onions have turned translucent, pour the stock into the pot.
  3. Add the potatoes, beans, and corn and bring the pot to a boil.
  4. Once the pot is boiling, reduce the heat to low.
  5. Add the green chile.
  6. Let the pot simmer, stirring every few minutes, until the potatoes are soft and the stew has thickened, usually 30-60 minutes. ***
  7. Serve in a bowl with cheese on top and a warm tortilla on the side for dipping.
  8. Enjoy!

Notes

* I prefer bouillon cubes to premade stock because they're cheaper and easier to store. Most bouillon cubes make 2 cups of stock, so this recipe takes about four.

** The amount of green chile is left as an exercise to the reader. I suggest starting small and adding more until you have enough heat.

*** It's easy to burn the bottom of the pot if left unattended for too long. Make sure you stir!

Copilot: Coming Full Circle

Posted July 01, 2015

Exactly 10 years ago, Copilot was officially announced to the public. A little over a year ago, I acquired it from Fog Creek as a part of their corporate restructuring, driven mostly by the need to spin Trello out into its own company.

I've kept it pretty quiet so far, mostly because I was spending a lot of my time doing everything necessary to push Copilot out of the nest without adversely affecting users. This included migrating it to new servers, splitting the Copilot credit cards into their own vault, separating the billing systems, and carving Copilot data out of the databases.

The other reason I've kept it quiet is that I was a little embarrassed about the state of Copilot. It had been quite a while since any major updates were done to it, and its age was starting to show. The next task was to rewrite the client applications to be faster, more stable, and easier to use.

It's been a long road, but with the new client applications feeling pretty solid, today I'm announcing their soft launch, along with a public acknowledgement of Copilot's new ownership.

I wasn't actually aiming to hit the decade mark since Copilot's first official announcement. That was more of a happy coincidence than anything. But realizing it's been 10 years since that summer has made me reflect on the path I've taken to get here.

It's certainly been a winding road, but I'm happy to be back working on the project that set my career on this trajectory.

Magic: How to launch a product so no one will ever use it again.

Posted February 23, 2015

Yesterday, I came across a new text-based meta-service called Magic. Magic is supposed to be the ultimate on-demand service. According to its homepage, you can ask Magic for anything you can order online, and they'll take care of it for you.

It's actually a pretty simple idea. They hired some people to respond to the text messages that come in, decide which service is most appropriate for the request, and place the order. In terms of technology, it's also very simple: likely just a Twilio number with an admin panel on the backend, linked to Stripe's payment gateway.
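
Purely as speculation, here's a minimal sketch of what that kind of backend could look like as a Flask app. The route, the reply text, and the in-memory inbox are my own inventions, not anything Magic has published; only the From/Body form fields and the TwiML reply come from how Twilio inbound-SMS webhooks actually work.

# Hypothetical sketch of a Magic-style backend (not their actual code).
# Twilio POSTs each inbound text here; we queue it for a human operator
# and send back an acknowledgement as TwiML.
from flask import Flask, request

app = Flask(__name__)
inbox = []  # stand-in for the database an admin panel would read from

@app.route("/sms", methods=["POST"])
def incoming_sms():
    # Twilio delivers inbound messages as form data with From and Body fields
    inbox.append({"from": request.form["From"], "body": request.form["Body"]})
    # A human operator picks the message up and places the order by hand
    reply = "<Response><Message>Got it! We're on it.</Message></Response>"
    return reply, 200, {"Content-Type": "text/xml"}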

Last night I was over at Jeff's place watching the eventual Best Picture winner, Birdman. We decided to order dinner, and since I had signed up for Magic, I thought it'd be fun to try it, even if it was a little pricier.

At 6:00pm I sent Magic a message asking for a burrito and a burrito bowl from Chipotle. Then we waited.

19 minutes later, I got a reply with a link to put in my credit card information. At 6:21pm I replied that I was done putting in my credit card and at 6:23pm I sent them the address to send the order to. Then we waited.

31 minutes later, at 6:54pm, Magic replied, asking for my name. I replied within a minute. Then we waited.

27 minutes later, at 7:21pm, Magic finally replied with a price. For one veggie burrito and one veggie burrito bowl, delivered, they wanted $35.

Just in case you think that might be a typo, I'll spell it out. Thirty-five dollars.

An hour and twenty one minutes after first contact, we were offered delivery of our $13 meal for $35. Given most delivery times, we wouldn't have seen our food until 8:00pm. Had I gone through Postmates (which Magic likely used for this order) directly, I would have had my order for only $26 (still pricey for delivery) in only the time required for someone to pick it up and deliver it.

Magic? Not really. Surprisingly Slow And Doubly Expensive Delivery Service. That's more like it.



32 Is The New 20

Posted December 09, 2013

Recently, a dear friend turned 30. She's my longest-tenured friend, so I gave her a call to wish her well at the dawn of her fourth decade.

Like many, she was somewhat distressed by the prospect of turning 30. (In retrospect, opening the conversation with "Hey old lady!" was probably not the best choice.) Since I had passed that same milestone only a few months before, she asked me how I got through it.

"I realized that 30 is a totally arbitrary number and that it is just another birthday."

I could tell she recognized the logic of the statement, but the sentiment rang hollow. Sure, it's an arbitrary number, but it's an arbitrary number that ends in zero. I decided to try another tack.

"That's just because we use base-10 for our numbers. If we used base-16, you would still have two years until you turn 20!"

She was intrigued by this prospect. But being one of the 99% of people who do not regularly think in other bases, she needed a quick refresher. "What's base-16?"

"It's when you don't go into double digits until 16, so in base-16, one-zero equals 16."

"Oh, right. But what do you do when you get above 9?"

"Well, most commonly you use letters. In base-16, which is also called hexadecimal, you use letters. So it goes 8, 9, A, B, C, D, E, F, 10."

"So I just turned 1E?"

"Yeah, exactly!"

"I like that! I'm going to tell everyone that I'm not turning 30, I'm just turning 1E!"

I'm not sure the detour into base-16 helped assuage her fears about the interminable march of time, but for a moment a little math distracted her from her worries.

Microcaching for a Faster Site

Posted May 21, 2013

My website, this site, is not fast. But, because of this little trick I'm about to show you, you probably think it is.

It's not particularly slow, either, at least not when there's not too much load on it. ab reports that the median request time for the homepage is about 60ms when there's only one request per second coming in. But if traffic starts picking up, it starts slowing down. With 2 req/sec, the median jumps to 90ms per request, a 50% increase. At 5 req/sec, it slows to 225ms per request. Do some quick math (at 225ms apiece, 5 requests per second already adds up to more than a second of server time every second) and you'll see we'll soon have a problem.

Let's take a quick look under the hood. The website is a heavily modified version of an early iteration of Simple. It is written in Python using Flask and SQLAlchemy, talking to a PostgreSQL database. This is all being run by uWSGI in Emperor mode and served by Nginx.

Each of these levels could be a source of slowness. We could profile the Python code to figure out where we're spending our time. Is Jinja2 being slow about autoescaping the HTML? Maybe. Perhaps it's in the database layer. SQLAlchemy might be generating some bad queries, so we should log those. And, of course, we need to make sure that PostgreSQL is tuned properly so we're getting the most out of its caching. Then there's uWSGI; should we allocate 2, 4, or 8 processes to the site?
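
(As an example of that query logging, and assuming a stock SQLAlchemy setup: passing echo=True to create_engine makes the engine print every SQL statement it runs. The connection string here is just a placeholder, not this site's real one.)

# Log every query SQLAlchemy emits so slow or redundant ones stand out
from sqlalchemy import create_engine

engine = create_engine("postgresql://localhost/blog", echo=True)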

But you know what? That's difficult, tedious work, and it's easy to make things worse in the process.


Optimization is hard! Let's go shopping.

What if we could just speed the whole thing up all at once?

It turns out that, for this type of site, where all users see the same version of the content (as opposed to a web app, where each user has their own version of the site), microcaching is an ideal solution.

Microcaching is the practice of caching the entire contents of a web page for a very short amount of time. In our case, this will be just one second.

By doing this, we ensure that when the site is under any sort of load, the vast majority of visitors are getting a copy of the site served as static content from the cache, which Nginx is very good at. In fact, because of the way the caching is set up, the only time a user would wait for the "slow" site would be if they were the first person to hit the site in over a second. But, we know that the "slow" site is pretty fast when it's under such light load.

The following is a slightly modified version of my Nginx config file for tghw.com, which shows how to do this:

# Set cache dir
proxy_cache_path /var/cache/nginx levels=1:2 
                 keys_zone=microcache:5m max_size=1000m;

# Actual server
server {
    listen 80;
    server_name a.tghw.com;
    # ...the rest of your normal server config...
}

# Virtualhost/server configuration
server {
    listen   80;
    server_name  tghw.com;

    # Define cached location (may not be whole site)
    location / {
        # Setup var defaults
        set $no_cache "";
        # If non GET/HEAD, don't cache & mark user as uncacheable for 2 seconds via cookie
        if ($request_method !~ ^(GET|HEAD)$) {
            set $no_cache "1";
        }
        # Drop no cache cookie if need be
        # (for some reason, add_header fails if included in prior if-block)
        if ($no_cache = "1") {
            add_header Set-Cookie "_mcnc=1; Max-Age=2; Path=/";
            add_header X-Microcachable "0";
        }
        # Bypass cache if no-cache cookie is set
        if ($http_cookie ~* "_mcnc") {
            set $no_cache "1";
        }
        # Bypass cache if flag is set
        proxy_no_cache $no_cache;
        proxy_cache_bypass $no_cache;
        # Point nginx to the real app/web server
        proxy_pass http://a.tghw.com/;
        # Set cache zone
        proxy_cache microcache;
        # Set cache key to include identifying components
        proxy_cache_key $scheme$host$request_method$request_uri;
        # Only cache valid HTTP 200 responses for 1 second
        proxy_cache_valid 200 1s;
        # Serve from cache if currently refreshing
        proxy_cache_use_stale updating;
        # Send appropriate headers through
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        # Set files larger than 1M to stream rather than cache
        proxy_max_temp_file_size 1M;
    }
}

(Most of this code was originally derived from Fenn Bailey. Unfortunately, it seems his site has gone down with the Posterous shutdown.)

So what's going on here? Let's start with the top. I've set up the Flask site to respond to a.tghw.com. That's the actual site that we'll be caching. Note that the subdomain is not required. You could just as easily use another port, like 8080.

Next, for tghw.com, we first check to see if there's any reason we shouldn't use the cache. This includes making a request other than HEAD or GET, or having a certain cookie set (which the admin page does for me). If that's the case, we set a cookie for the next 2 seconds that says not to use the cache, and we skip the cache for this request. (You want this to be longer than the caching time so your next GET request will grab a fresh copy.)

If we are using the cache for this request, then we defer to Nginx's usual proxy_pass. We tell it that all successful requests (HTTP 200) should be cached for 1 second. The choice of 1 second is pretty arbitrary; it could be longer, but since I know the app itself performs well at 1 request per second, there wouldn't be much benefit to making it longer. We also set proxy_cache_use_stale to serve from the cache while Nginx is still updating it, meaning that users won't actually see a slower response while we go to the actual site.

So how does this do compared to the stock site? Well...it blows it out of the water.

Command used: ab -k -n 50000 -c [1|5|10|25|50|100] -t 10 http://[a.]tghw.com/

        a.tghw.com (uncached)         tghw.com (microcached)
  -c    req/sec    med resp (ms)      req/sec    med resp (ms)
   1    15         64                 5,952      0
   5    32         151                17,283     0
  10    31         312                19,991     0
  25    33         751                19,916     1
  50    30         1,589              17,397     3
 100    32         2,984              16,717     5

While the Flask site can reliably serve up to about 30 requests per second, its response times balloon as concurrency rises. The microcached site, on the other hand, serves almost 20,000 requests per second at its peak. More importantly, the response times stay in the single-digit milliseconds, making the site feel nice and fast, regardless of load.

So there's an easy way to speed up your blog without having to make any changes to the application code.