Python & aiohttp: Sending multiple requests concurrently

Updated: August 26, 2023 By: Khue

aiohttp is a library for making asynchronous HTTP requests in Python. Asynchronous here means that your program doesn't have to wait for one request to finish before starting the next; while one response is still on its way, other requests can already be in flight. This can significantly improve the performance and efficiency of your program, especially when you need to make many requests to different web URLs.

In this tutorial, I will show you how to use aiohttp to send multiple GET requests concurrently and process the responses (POST, PUT, DELETE, HEAD, and OPTIONS requests work much the same way).

The Steps

1. Import the required modules: aiohttp for making HTTP requests, asyncio for managing the event loop, and pprint for printing the results in a nice format.

import asyncio
import aiohttp
from pprint import pprint

2. Define a list of URLs that you want to send requests to. For example, you can use the Sling Academy public API to fetch sample data and get the results in JSON format:

urls = [
    "https://api.slingacademy.com/v1/sample-data/products",
    "https://api.slingacademy.com/v1/sample-data/users",
    "https://api.slingacademy.com/v1/sample-data/photos",
    "https://api.slingacademy.com/v1/sample-data/blog-posts"
]

3. Define an asynchronous function that takes a URL and a session object as parameters and returns a small dictionary summarizing the response (the URL, the status code, and the length of the JSON data). The session object is used to create and manage the HTTP connections. The async with statement ensures that the response is released properly once you are done with it (the session itself is created and closed in the next step):

async def fetch(url, session):
    # ssl=False disables SSL certificate verification for this request
    async with session.get(url, ssl=False) as response:
        status = response.status
        data = await response.json()
        data_length = len(data)
        return {"url": url, "status": status, "data_length": data_length}

4. Define another asynchronous function that takes a list of URLs as a parameter and creates a list of tasks that call the fetch function for each URL. You can use the asyncio.gather function to run all the tasks concurrently and return a list of results when they are done.

async def fetch_all(urls):
    async with aiohttp.ClientSession() as session:
        tasks = [fetch(url, session) for url in urls]
        results = await asyncio.gather(*tasks)
        return results
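
One thing the steps above don't cover is error handling. By default, if any fetch() call raises (for example, due to a connection error), asyncio.gather() propagates that exception to the caller. The small variant below, which assumes the same fetch() function and imports as above (fetch_all_safe is just an illustrative name), passes return_exceptions=True so that failures come back as exception objects inside the results list instead:

async def fetch_all_safe(urls):
    async with aiohttp.ClientSession() as session:
        tasks = [fetch(url, session) for url in urls]
        # exceptions are returned as items in the results list instead of being raised
        return await asyncio.gather(*tasks, return_exceptions=True)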

5. Use asyncio.run() to call the fetch_all() function with the list of URLs, then print the results with pprint().

# run the async functions
results = asyncio.run(fetch_all(urls))

# print the results
pprint(results)

The Complete Example

Here’s the full code that combines all of the steps you’ve seen above:

# SlingAcademy.com
# This example uses Python 3.11.4

# import the modules
import asyncio
import aiohttp
from pprint import pprint

# list of urls to request
urls = [
    "https://api.slingacademy.com/v1/sample-data/products",
    "https://api.slingacademy.com/v1/sample-data/users",
    "https://api.slingacademy.com/v1/sample-data/photos",
    "https://api.slingacademy.com/v1/sample-data/blog-posts",
]


# async function to make a single request
async def fetch(url, session):
    # ssl=False disables SSL certificate verification for this request
    async with session.get(url, ssl=False) as response:
        status = response.status
        data = await response.json()
        data_length = len(data)
        return {"url": url, "status": status, "data_length": data_length}


# async function to make multiple requests
async def fetch_all(urls):
    async with aiohttp.ClientSession() as session:
        tasks = [fetch(url, session) for url in urls]
        results = await asyncio.gather(*tasks)
        return results


# run the async functions
results = asyncio.run(fetch_all(urls))

# print the results
pprint(results)

Output:

[{'data_length': 6,
  'status': 200,
  'url': 'https://api.slingacademy.com/v1/sample-data/products'},
 {'data_length': 7,
  'status': 200,
  'url': 'https://api.slingacademy.com/v1/sample-data/users'},
 {'data_length': 6,
  'status': 200,
  'url': 'https://api.slingacademy.com/v1/sample-data/photos'},
 {'data_length': 6,
  'status': 200,
  'url': 'https://api.slingacademy.com/v1/sample-data/blog-posts'}]
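
The complete example only sends GET requests, but, as mentioned at the beginning, the other HTTP methods look almost identical: you call session.post(), session.put(), and so on, instead of session.get(). Below is a small, hedged sketch of a POST request; the URL and payload are made-up placeholders, not a real Sling Academy endpoint:

# a hedged sketch of a POST request with aiohttp
# the URL and payload are placeholders for illustration only
import asyncio
import aiohttp


async def create_item():
    payload = {"name": "demo", "price": 9.99}  # example JSON body
    async with aiohttp.ClientSession() as session:
        # json=payload serializes the dict and sets the Content-Type header to application/json
        async with session.post("https://example.com/api/items", json=payload) as response:
            print(response.status)
            print(await response.json())


# asyncio.run(create_item())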

Limiting the number of concurrent requests helps you avoid overloading the server or network, stay within API rate limits or quotas, and make better use of the client's resources and bandwidth. Let's see how to do that in the final section of this article.

Limiting the Number of Simultaneous Connections

When you use ClientSession, aiohttp automatically limits the number of simultaneous connections to 100 and queues the rest. This means you can connect to a hundred different servers (not pages) concurrently. However, you can specify your own limit (100 may be more than a typical application needs) by creating a TCPConnector instance and passing it to ClientSession. For example, you can modify the fetch_all() function as shown below to allow at most 2 simultaneous connections:

async def fetch_all(urls):
    # create a connector with a limit of 2
    my_connector = aiohttp.TCPConnector(limit=2)
    async with aiohttp.ClientSession(connector=my_connector) as session:
        tasks = [fetch(url, session) for url in urls]
        results = await asyncio.gather(*tasks)
        return results
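
Note that TCPConnector also accepts a limit_per_host argument (for example, aiohttp.TCPConnector(limit_per_host=2)) if you want to cap the number of simultaneous connections to a single host rather than the total across all hosts.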

An alternative solution is to use the asyncio.Semaphore class. Here’s a detailed article (with some practical examples) about this class: Python asyncio.Semaphore class (with examples).
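
For completeness, here is a minimal sketch of the semaphore approach, reusing the fetch() function and urls list defined earlier in this article; fetch_all_limited() and max_concurrent are illustrative names, not part of aiohttp:

async def fetch_all_limited(urls, max_concurrent=2):
    # allow at most max_concurrent fetches to run at the same time
    semaphore = asyncio.Semaphore(max_concurrent)

    async def fetch_limited(url, session):
        async with semaphore:
            return await fetch(url, session)

    async with aiohttp.ClientSession() as session:
        tasks = [fetch_limited(url, session) for url in urls]
        return await asyncio.gather(*tasks)


# results = asyncio.run(fetch_all_limited(urls, max_concurrent=2))
# pprint(results)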