Environment
Python 3.11, PyCharm, Windows 10
Executing HTTP requests concurrently in Python can significantly improve efficiency, especially when fetching data from multiple APIs. There are several ways to run requests concurrently; the most common are the concurrent.futures module (for a synchronous programming model) and asyncio (for an asynchronous one).
Method 1: a thread pool with concurrent.futures
import requests
from concurrent.futures import ThreadPoolExecutor, as_completed
# List of URLs to request
urls = ["http://www.example.com/api1", "http://www.example.com/api2", "http://www.example.com/api3"]
def fetch_url(url):
    """Send a GET request."""
    response = requests.get(url, timeout=10)  # a timeout keeps a hung request from blocking a worker forever
    return url, response.text[:100]  # return the URL and the first 100 characters of the response
def main():
    with ThreadPoolExecutor(max_workers=5) as executor:  # tune max_workers to fit your workload
        futures = {executor.submit(fetch_url, url) for url in urls}
        for future in as_completed(futures):  # yields each future as soon as it finishes
            url, content = future.result()
            print(f"URL: {url}, Content: {content}")
if __name__ == "__main__":
    main()
Method 2: asynchronous requests with asyncio and aiohttp
import asyncio
import aiohttp
from typing import List
async def fetch(session, url):
    async with session.get(url) as response:
        # await the text first, then slice; slicing the coroutine before awaiting raises a TypeError
        text = await response.text()
        return url, text[:100]
async def main(urls: List[str]):
    async with aiohttp.ClientSession() as session:
        tasks = [fetch(session, url) for url in urls]
        results = await asyncio.gather(*tasks)
        for url, content in results:
            print(f"URL: {url}, Content: {content}")
if __name__ == "__main__":
    urls = ["http://www.example.com/api1", "http://www.example.com/api2", "http://www.example.com/api3"]
    asyncio.run(main(urls))
Because of the Global Interpreter Lock (GIL), Python does not offer true multithreading the way some other languages do; it is sometimes called "pseudo-multithreading". For CPU-bound tasks, Python threads will not improve throughput, and the overhead of managing them can even make the program slower than a single-threaded version. For heavy network I/O such as HTTP requests, however, threads are a huge win, because the GIL is released while a thread is blocked waiting on the network.
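To make that concrete, here is a minimal sketch that uses time.sleep as a stand-in for network I/O (sleep releases the GIL just as a blocked socket read does). The function name simulated_io and the 0.2-second delay are illustrative choices, not part of any real API:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def simulated_io(_):
    # time.sleep releases the GIL, just like waiting on a socket
    time.sleep(0.2)
    return 1

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=5) as executor:
    results = list(executor.map(simulated_io, range(5)))
elapsed = time.perf_counter() - start
# Five 0.2-second waits overlap, so the total wall time stays
# near 0.2s rather than the 1s a serial loop would need
print(f"{sum(results)} tasks finished in {elapsed:.2f}s")
```

If simulated_io did pure computation instead of sleeping, the five tasks would contend for the GIL and the thread pool would give no speedup.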
The difference between the two: if you only occasionally need concurrent requests, a thread pool is simpler to use; if you need sustained, large-scale concurrency, asynchronous I/O is more efficient.
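At large scale, firing off every request at once can overwhelm the server or exhaust local sockets, so it is common to cap concurrency with asyncio.Semaphore. Below is a minimal sketch of that pattern; asyncio.sleep stands in for the aiohttp call from Method 2, and the fetch_limited name and limit of 2 are illustrative assumptions:

```python
import asyncio

async def fetch_limited(sem, url):
    # The semaphore caps how many coroutines pass this point at once
    async with sem:
        await asyncio.sleep(0.1)  # stand-in for an aiohttp request
        return url

async def main(urls, limit=2):
    sem = asyncio.Semaphore(limit)  # at most `limit` requests in flight
    # gather preserves the order of the awaitables it is given
    return await asyncio.gather(*(fetch_limited(sem, u) for u in urls))

urls = [f"http://www.example.com/api{i}" for i in range(1, 7)]
results = asyncio.run(main(urls))
print(results)
```

In a real program you would create the semaphore alongside the ClientSession and wrap each session.get call in `async with sem:`.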