
Python asyncio: How to download a list of files in parallel

Last updated: February 12, 2024

Overview

Whether you’re a developer working on a high-load system, a data scientist who needs to download large datasets, or simply someone looking to speed up your code, Python’s asyncio library is an invaluable tool for I/O-bound, high-level structured network tasks, and downloading a list of files is a perfect use case for it.

This tutorial will guide you through the process of using asyncio along with aiohttp to download a list of files in parallel. We’ll start with the basics and progressively move into more advanced concepts, providing code examples at each step.

Getting Started

Before diving into the code, it’s essential to understand the core concepts behind asyncio and how asynchronous programming works in Python. Asyncio is an asynchronous I/O framework that uses coroutines and an event loop to run code in a non-blocking manner, letting many tasks make progress concurrently on a single thread. This is particularly useful for I/O-bound tasks, such as downloading files from the internet, where most of the time is spent waiting on the network rather than computing.

To begin, you’ll need to install the necessary libraries. Run the following command in your terminal:

pip install aiohttp
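If you’re new to asyncio itself, the following minimal illustration (separate from the download example) shows the two moving pieces: coroutines defined with async def, and the event loop started by asyncio.run. Because both coroutines sleep without blocking the loop, the script finishes in about two seconds rather than three:

import asyncio

async def say_after(delay, message):
    await asyncio.sleep(delay)  # non-blocking sleep: the event loop can run other tasks
    print(message)

async def main():
    # Both coroutines run concurrently, so total time is ~2s, not 1s + 2s
    await asyncio.gather(say_after(1, 'first'), say_after(2, 'second'))

asyncio.run(main())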

Basic Parallel Downloads

Let’s start with a simple example. The following code will download three files in parallel:

import asyncio
import aiohttp

async def download_file(session, url):
    async with session.get(url) as response:
        # Derive a local filename from the last path segment of the URL
        filename = url.split('/')[-1]
        with open(filename, 'wb') as f:
            # Stream the body in 1 KB chunks instead of loading it all into memory
            while True:
                chunk = await response.content.read(1024)
                if not chunk:
                    break
                f.write(chunk)
        print(f"Downloaded {filename}")

async def main():
    urls = ['http://example.com/file1', 'http://example.com/file2', 'http://example.com/file3']
    async with aiohttp.ClientSession() as session:
        # One task per URL; gather waits for all of them to finish
        tasks = [asyncio.create_task(download_file(session, url)) for url in urls]
        await asyncio.gather(*tasks)

asyncio.run(main())

This code starts an event loop with asyncio.run and creates a task for each file download. asyncio.gather then awaits all of the tasks, which the event loop runs concurrently, so the three downloads proceed at the same time instead of one after another.

A key thing to note here is the use of async with for resource management, which ensures that the session and each response are closed properly once the tasks are completed.

Advanced Usage

While the previous example demonstrates the basic premise of parallel downloads, a real-world application often demands more sophistication. This might include error handling, rate limiting, or working with large sets of URLs.

Error Handling

To handle errors gracefully, modify the download_file function to include a try-except block:

async def download_file(session, url):
    try:
        async with session.get(url) as response:
            response.raise_for_status()  # turn HTTP errors (4xx/5xx) into exceptions
            filename = url.split('/')[-1]
            with open(filename, 'wb') as f:
                # Stream the body in chunks, equivalent to the read loop above
                async for chunk in response.content.iter_chunked(1024):
                    f.write(chunk)
            print(f"Downloaded {filename}")
    except aiohttp.ClientError as e:
        print(f"Failed to download {url}, error: {e}")

Rate Limiting

To prevent overwhelming the server with too many concurrent requests, you can implement rate limiting using asyncio's Semaphore:

async def download_file(session, url, semaphore):
    async with semaphore:  # wait here if 10 downloads are already in flight
        ...  # same download code as before

async def main():
    urls = ['http://example.com/file1', 'http://example.com/file2', 'http://example.com/file3']
    semaphore = asyncio.Semaphore(10)  # max 10 concurrent requests
    async with aiohttp.ClientSession() as session:
        # Each task now also receives the semaphore as an argument
        tasks = [asyncio.create_task(download_file(session, url, semaphore)) for url in urls]
        await asyncio.gather(*tasks)
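As a side note, aiohttp itself can also cap concurrency at the connection-pool level through the TCPConnector’s limit parameter, so you get a similar effect without threading a semaphore through your code. A minimal sketch, using the two-argument download_file from the basic example:

async def main():
    connector = aiohttp.TCPConnector(limit=10)  # the pool opens at most 10 connections
    async with aiohttp.ClientSession(connector=connector) as session:
        tasks = [asyncio.create_task(download_file(session, url)) for url in urls]
        await asyncio.gather(*tasks)  # extra tasks simply wait for a free connection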

Working with Large Sets of URLs

When dealing with a large list of URLs, creating a task for every URL at once can exhaust memory and file descriptors. Splitting the list into chunks and downloading one chunk at a time keeps resource usage bounded, while each chunk is still downloaded concurrently. Here’s how you might implement this:

async def main():
    urls = [...]  # A large list of URLs
    chunk_size = 20
    async with aiohttp.ClientSession() as session:  # reuse one session across all chunks
        for i in range(0, len(urls), chunk_size):
            chunk = urls[i:i + chunk_size]
            tasks = [asyncio.create_task(download_file(session, url)) for url in chunk]
            await asyncio.gather(*tasks)  # finish this chunk before starting the next
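For very large lists, another pattern worth knowing is a fixed pool of worker tasks fed from an asyncio.Queue. This is an alternative sketch (not required by the chunking approach above, and again assuming the two-argument download_file) whose advantage is that one slow file never holds up an entire chunk:

async def worker(session, queue):
    while True:
        url = await queue.get()
        try:
            await download_file(session, url)
        finally:
            queue.task_done()  # mark the URL as processed even on failure

async def main():
    urls = [...]  # A large list of URLs
    queue = asyncio.Queue()
    for url in urls:
        queue.put_nowait(url)
    async with aiohttp.ClientSession() as session:
        workers = [asyncio.create_task(worker(session, queue)) for _ in range(20)]
        await queue.join()  # block until every queued URL has been processed
        for w in workers:   # the workers loop forever, so cancel them explicitly
            w.cancel()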

Conclusion

Using asyncio and aiohttp, Python programmers can download files in parallel, significantly reducing the overall execution time of I/O-bound tasks. This tutorial has walked through techniques from basic to advanced, from a simple gather call to error handling, rate limiting, and chunked processing, equipping you to apply them in your own projects.
