Walkthrough Wednesdays

Rate Limiting with Spark API


To protect the Spark environment, the API enforces rate limits on the different resources available for use, such as /messages and /rooms.  These limits vary depending on the call being made, how resource-intensive that call is on the servers, and how many servers are actively in rotation, and they are subject to change as needed to protect all use cases from failing when a large batch of requests arrives.  So what should you do if you want to make API calls very frequently, but don't want to hit errors and lose requests by exceeding the rate limit?  The answer is to use the Retry-After header that is returned with an HTTP 429 error.  The Retry-After header specifies the number of seconds you need to wait before sending API requests again.
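As a minimal sketch of that mechanism (the status code and header values here are illustrative, not a live Spark call), honoring Retry-After boils down to:

```python
def retry_delay(status_code, headers, default=60):
    """Return how many seconds to wait before retrying, or 0 if no wait is needed."""
    if status_code != 429:
        return 0
    # Retry-After is expressed in seconds; fall back to a default if it is missing
    # or malformed (the default of 60 is an assumption, not a Spark-documented value).
    try:
        return int(headers.get('Retry-After', default))
    except ValueError:
        return default

# A 429 response carrying "Retry-After: 300" means: pause for 5 minutes.
print(retry_delay(429, {'Retry-After': '300'}))  # 300
print(retry_delay(200, {}))                      # 0
```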

The app that follows pulls the list of rooms available to the user (using a single user's or bot's bearer token) as fast as possible.  Normally, it's recommended to put some kind of wait time between continuous looping requests to avoid hitting the rate limit entirely.  However, since that rate limit might be reached anyway depending on resource availability, it's important to handle the likely possibility that you'll catch a 429 here and there.

Basically, we make the call continuously and look for an error.  If the error is a status code 429, we check the wait time relayed to us in the Retry-After header and sleep for the time specified before starting again.  If there's an error but it isn't a 429, we break out of the loop because something else is wrong: maybe you used a bad bearer token, or maybe there's a general error with the format of the request.
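That control flow can be sketched on its own, independent of urllib2 (here `send_request` is a hypothetical stand-in returning a status code and headers; the real app below makes the actual HTTP call):

```python
import time

def fetch_with_retry(send_request, max_attempts=5):
    # send_request is a hypothetical callable returning (status_code, headers).
    status, headers = 0, {}
    for _ in range(max_attempts):
        status, headers = send_request()
        if status == 429:
            # Sleep for the server-specified interval, then try again.
            time.sleep(int(headers.get('Retry-After', 1)))
            continue
        # Success, or a non-429 error (bad token, malformed request, ...):
        # either way, stop looping and return the status.
        return status
    return status
```

For example, a callable that first returns 429 with `Retry-After: 0` and then 200 would make `fetch_with_retry` sleep briefly and then return 200.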

Since this code can run forever if implemented properly, you'll need to press Ctrl+C in the Terminal to kill it – make sure you don't let it run indefinitely, or you could be flagged depending on how many requests are made and for how long. The initial portion of the application makes the call to the rooms resource to get the rooms over and over, whereas the latter section, beginning with the line:

except urllib2.HTTPError as e:

defines what to do if a 429 error is encountered: print out the content of the error, and then extract the value of the Retry-After header for use in the sleep. Here's the full app (it's short, so we'll include the whole thing):

import urllib2
import json
import time

# Replace with your own user or bot bearer token.
bearer = 'REPLACE_WITH_YOUR_BEARER_TOKEN'

def sendSparkGET(url):
    request = urllib2.Request(url,
                              headers={"Accept": "application/json",
                                       "Content-Type": "application/json"})
    request.add_header("Authorization", "Bearer " + bearer)
    response = urllib2.urlopen(request)
    return response

while True:
    try:
        result = sendSparkGET('https://api.ciscospark.com/v1/rooms')
        print result.code, time.time(), result.headers['Trackingid']
    except urllib2.HTTPError as e:
        if e.code == 429:
            print 'code', e.code
            print 'headers', e.headers
            print 'Sleeping for', e.headers['Retry-After'], 'seconds'
            sleep_time = int(e.headers['Retry-After'])
            # Sleep in 10-second chunks so we can print a countdown.
            while sleep_time > 10:
                time.sleep(10)
                sleep_time -= 10
                print 'Asleep for', sleep_time, 'more seconds'
            time.sleep(sleep_time)
        else:
            # Any non-429 error means something else is wrong, so stop.
            print e, e.code
            break


It's important to note that it's possible to send a request after receiving a 429 without getting an error; many servers are involved in handling a request, so hitting the rate limit of one does not mean you've hit the rate limit of all.  However, you're probably close to failing on every server, so it's best to wait for the time specified by Retry-After upon detecting the very first 429.
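One way to sketch that advice (an illustrative helper, not part of any Spark SDK) is a small gate that records, after the first 429, the earliest moment it is safe to send again, so every caller pauses rather than probing other servers:

```python
import time

class RateLimitGate:
    """Track the earliest moment it is safe to send again.

    After any 429, record now + Retry-After; callers invoke wait() before
    sending, so the very first 429 pauses all traffic, not just one retry.
    """
    def __init__(self):
        self.ready_at = 0.0

    def note_429(self, retry_after_seconds):
        # Keep the latest deadline if several 429s arrive.
        self.ready_at = max(self.ready_at, time.time() + retry_after_seconds)

    def wait(self):
        delay = self.ready_at - time.time()
        if delay > 0:
            time.sleep(delay)
```

A caller would run `gate.wait()` before each request and `gate.note_429(seconds)` whenever a 429 comes back.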

The full code can be found on our GitHub.

As always, if you have any questions, please contact us 24/7/365 - we're happy to help!

Taylor Hanson, Customer Support Engineer II
