Contents
- The python threadpool module
- Advantages
- Basic implementation of a thread pool
- A python thread pool usage example
The python threadpool module
While studying Python processes and threads today, I stumbled upon the threadpool module.
The module is very easy to use, provided you understand how a thread pool works.
When a system handles tasks, it creates and destroys an object (here, a thread) for every request. With a large number of concurrent tasks, the traditional one-thread-per-request approach causes so much creation and destruction of resources that server efficiency drops.
This is where a thread pool comes in. Thread pooling is a good solution to both the overhead of creating and destroying threads and the problem of exhausting system resources.
Advantages
(1) The number of threads is controlled. By creating a fixed number of worker threads in advance and capping that number, the memory consumed by thread objects stays bounded.
(2) Lower system overhead and resource consumption. Because threads are reused across many requests, the cost of creating and destroying them is amortized over those requests; limiting the number of threads also reduces the virtual machine's garbage-collection work.
(3) Faster system response. Since the threads already exist, a request can be handled as soon as it arrives, eliminating the latency of thread creation, and several requests can be processed concurrently.
Basic implementation of a thread pool
(1) Thread pool manager: creates and maintains the pool, resizes it as needed, and watches for thread leaks.
(2) Worker thread: a thread that executes tasks in a loop; it sits in a Wait state when there is no work and is woken up when a new task arrives.
(3) Task queue: a buffer that temporarily holds pending tasks and also serves as the monitor object for the concurrent threads.
(4) Task interface: the interface every task must implement; worker threads run tasks through it.
When the thread pool manager is constructed, it first initializes the task queue (a Queue); at run time, tasks are added to this queue through an add-task method.
It then creates and starts a number of worker threads and keeps them in a thread list. While running, the manager can grow or shrink the number of worker threads as needed.
A worker thread first locks the task queue, to guarantee correct concurrent access by multiple threads. If the queue holds pending tasks, the worker takes one and releases the lock so that other threads can access and process the queue.
After fetching a task, the worker runs it through the task interface. When the task queue is empty, the worker joins the queue's list of waiting threads and sits in a Wait state, consuming almost no CPU.
As soon as a new task arrives, the task queue object's notify method wakes one worker from the waiting list to handle it.
This cooperative scheme avoids the cost of repeatedly creating and destroying threads while still processing tasks concurrently, which improves system responsiveness.
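The cooperation described above can be sketched with the standard library alone. The following is a minimal illustration (Python 3, not the threadpool module itself), where queue.Queue plays the role of the task queue and handles the locking and wait/notify internally; the pool size and task count are arbitrary:
import queue
import threading

task_queue = queue.Queue()              # the task queue; also acts as the monitor object

def worker():
    while True:
        func, args = task_queue.get()   # blocks (Wait state) until a task arrives
        try:
            func(*args)                 # the "task interface" here is just a plain callable
        finally:
            task_queue.task_done()      # mark the task as processed

# pre-create a fixed number of worker threads (the pool)
for _ in range(4):
    threading.Thread(target=worker, daemon=True).start()

# submit tasks; an idle worker is woken up for each one
for i in range(10):
    task_queue.put((print, ("task", i)))

task_queue.join()                       # wait until every submitted task has been processed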
In short:
Instead of starting a new thread for every concurrent task, the tasks are handed to a thread pool. As long as the pool has an idle thread, a task is assigned to one and executed.
pool = ThreadPool(poolsize)
requests = makeRequests(some_callable, list_of_args, callback)
[pool.putRequest(req) for req in requests]
pool.wait()
- The first line creates a thread pool holding poolsize worker threads.
- The second line calls makeRequests to build the requests. some_callable is the function to run in multiple threads, list_of_args holds its arguments, and callback is an optional result callback, which defaults to None.
- The third line puts the requests into the thread pool.
- The last line waits until all the threads have finished their work; a complete runnable sketch follows below.
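Putting those four calls together, a complete example might look like this (a sketch assuming the third-party threadpool package is installed; the work function and argument list are made up for illustration):
import threadpool

def work(x):
    # the function executed by the pool; here it just computes a square
    return x * x

pool = threadpool.ThreadPool(4)                      # a pool of 4 worker threads
requests = threadpool.makeRequests(work, range(10))  # one WorkRequest per argument
for req in requests:
    pool.putRequest(req)                             # queue the requests
pool.wait()                                          # block until every request is done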
If you look through the source code, you will find that it is actually quite simple.
import sys
import threading
import Queue
import traceback

# exceptions
class NoResultsPending(Exception):
    """All work requests have been processed."""
    pass

class NoWorkersAvailable(Exception):
    """No worker threads available to process remaining requests."""
    pass

# internal module helper functions
def _handle_thread_exception(request, exc_info):
    """Default exception handler callback function.
    This just prints the exception info via ``traceback.print_exception``.
    """
    traceback.print_exception(*exc_info)

# utility functions
# makeRequests creates several work requests; ``callback`` handles each result,
# ``exc_callback`` handles any exception raised while processing a request
def makeRequests(callable_, args_list, callback=None,
                 exc_callback=_handle_thread_exception):
    """Create several work requests for same callable with different arguments.
    Convenience function for creating several work requests for the same
    callable where each invocation of the callable receives different values
    for its arguments.
    ``args_list`` contains the parameters for each invocation of callable.
    Each item in ``args_list`` should be either a 2-item tuple of the list of
    positional arguments and a dictionary of keyword arguments or a single,
    non-tuple argument.
    See docstring for ``WorkRequest`` for info on ``callback`` and
    ``exc_callback``.
    """
    requests = []
    for item in args_list:
        if isinstance(item, tuple):
            requests.append(
                WorkRequest(callable_, item[0], item[1], callback=callback,
                            exc_callback=exc_callback)
            )
        else:
            requests.append(
                WorkRequest(callable_, [item], None, callback=callback,
                            exc_callback=exc_callback)
            )
    return requests

# classes
class WorkerThread(threading.Thread):  # worker thread
    """Background thread connected to the requests/results queues.
    A worker thread sits in the background and picks up work requests from
    one queue and puts the results in another until it is dismissed.
    """
    def __init__(self, requests_queue, results_queue, poll_timeout=5, **kwds):
        """Set up thread in daemonic mode and start it immediately.
        ``requests_queue`` and ``results_queue`` are instances of
        ``Queue.Queue`` passed by the ``ThreadPool`` class when it creates a new
        worker thread.
        """
        threading.Thread.__init__(self, **kwds)
        self.setDaemon(1)
        self._requests_queue = requests_queue  # task queue
        self._results_queue = results_queue    # result queue
        self._poll_timeout = poll_timeout
        self._dismissed = threading.Event()
        self.start()

    def run(self):
        """Repeatedly process the job queue until told to exit."""
        while True:
            if self._dismissed.isSet():  # the dismissed flag is set, so this thread should exit
                # we are dismissed, break out of loop
                break
            # get next work request. If we don't get a new request from the
            # queue after self._poll_timeout seconds, we jump to the start of
            # the while loop again, to give the thread a chance to exit.
            try:
                # fetch the next pending request; block=True, with a timeout
                request = self._requests_queue.get(True, self._poll_timeout)
            except Queue.Empty:
                continue
            else:
                # check again: the thread may have been dismissed while it was
                # waiting for a request
                if self._dismissed.isSet():
                    # we are dismissed, put back request in queue and exit loop
                    self._requests_queue.put(request)
                    break
                try:
                    result = request.callable(*request.args, **request.kwds)
                    self._results_queue.put((request, result))
                except:
                    request.exception = True
                    self._results_queue.put((request, sys.exc_info()))

    def dismiss(self):
        """Sets a flag to tell the thread to exit when done with current job."""
        self._dismissed.set()

class WorkRequest:  # a single work request
    """A request to execute a callable for putting in the request queue later.
    See the module function ``makeRequests`` for the common case
    where you want to build several ``WorkRequest`` objects for the same
    callable but with different arguments for each call.
    """
    def __init__(self, callable_, args=None, kwds=None, requestID=None,
                 callback=None, exc_callback=_handle_thread_exception):
        """Create a work request for a callable and attach callbacks.
        A work request consists of a callable to be executed by a
        worker thread, a list of positional arguments, a dictionary
        of keyword arguments.
        A ``callback`` function can be specified, that is called when the
        results of the request are picked up from the result queue. It must
        accept two anonymous arguments, the ``WorkRequest`` object and the
        results of the callable, in that order. If you want to pass additional
        information to the callback, just stick it on the request object.
        You can also give custom callback for when an exception occurs with
        the ``exc_callback`` keyword parameter. It should also accept two
        anonymous arguments, the ``WorkRequest`` and a tuple with the exception
        details as returned by ``sys.exc_info()``. The default implementation
        of this callback just prints the exception info via
        ``traceback.print_exception``. If you want no exception handler
        callback, just pass in ``None``.
        ``requestID``, if given, must be hashable since it is used by
        ``ThreadPool`` object to store the results of that work request in a
        dictionary. It defaults to the return value of ``id(self)``.
        """
        if requestID is None:
            self.requestID = id(self)  # id() returns the object's identity (its memory address in CPython)
        else:
            try:
                self.requestID = hash(requestID)  # hash the user-supplied id
            except TypeError:
                raise TypeError("requestID must be hashable.")
        self.exception = False
        self.callback = callback
        self.exc_callback = exc_callback
        self.callable = callable_
        self.args = args or []
        self.kwds = kwds or {}

    def __str__(self):
        return "<WorkRequest id=%s args=%r kwargs=%r exception=%s>" % \
            (self.requestID, self.args, self.kwds, self.exception)

class ThreadPool:  # thread pool manager
    """A thread pool, distributing work requests and collecting results.
    See the module docstring for more information.
    """
    def __init__(self, num_workers, q_size=0, resq_size=0, poll_timeout=5):
        """Set up the thread pool and start num_workers worker threads.
        ``num_workers`` is the number of worker threads to start initially.
        If ``q_size > 0`` the size of the work *request queue* is limited and
        the thread pool blocks when the queue is full and it tries to put
        more work requests in it (see ``putRequest`` method), unless you also
        use a positive ``timeout`` value for ``putRequest``.
        If ``resq_size > 0`` the size of the *results queue* is limited and the
        worker threads will block when the queue is full and they try to put
        new results in it.
        .. warning:
            If you set both ``q_size`` and ``resq_size`` to ``!= 0`` there is
            the possibility of a deadlock, when the results queue is not pulled
            regularly and too many jobs are put in the work requests queue.
            To prevent this, always set ``timeout > 0`` when calling
            ``ThreadPool.putRequest()`` and catch ``Queue.Full`` exceptions.
        """
        self._requests_queue = Queue.Queue(q_size)    # task queue
        self._results_queue = Queue.Queue(resq_size)  # result queue
        self.workers = []            # active worker threads
        self.dismissedWorkers = []   # dismissed worker threads
        self.workRequests = {}       # a dict mapping request ids to requests
        self.createWorkers(num_workers, poll_timeout)

    def createWorkers(self, num_workers, poll_timeout=5):
        """Add num_workers worker threads to the pool.
        ``poll_timeout`` sets the interval in seconds (int or float) for how
        often threads should check whether they are dismissed, while waiting for
        requests.
        """
        for i in range(num_workers):
            self.workers.append(WorkerThread(self._requests_queue,
                self._results_queue, poll_timeout=poll_timeout))

    def dismissWorkers(self, num_workers, do_join=False):
        """Tell num_workers worker threads to quit after their current task."""
        dismiss_list = []
        for i in range(min(num_workers, len(self.workers))):
            worker = self.workers.pop()
            worker.dismiss()
            dismiss_list.append(worker)
        if do_join:
            for worker in dismiss_list:
                worker.join()
        else:
            self.dismissedWorkers.extend(dismiss_list)

    def joinAllDismissedWorkers(self):
        """Perform Thread.join() on all worker threads that have been dismissed.
        """
        for worker in self.dismissedWorkers:
            worker.join()
        self.dismissedWorkers = []

    def putRequest(self, request, block=True, timeout=None):
        """Put work request into work queue and save its id for later."""
        assert isinstance(request, WorkRequest)
        # don't reuse old work requests
        assert not getattr(request, 'exception', None)
        self._requests_queue.put(request, block, timeout)
        self.workRequests[request.requestID] = request  # one id maps to one request

    def poll(self, block=False):  # collect finished results
        """Process any new results in the queue."""
        while True:
            # still results pending?
            if not self.workRequests:  # no requests outstanding
                raise NoResultsPending
            # are there still workers to process remaining requests?
            elif block and not self.workers:  # no worker threads left
                raise NoWorkersAvailable
            try:
                # get back next results
                request, result = self._results_queue.get(block=block)
                # has an exception occurred?
                if request.exception and request.exc_callback:
                    request.exc_callback(request, result)
                # hand results to callback, if any
                if request.callback and not \
                       (request.exception and request.exc_callback):
                    request.callback(request, result)
                del self.workRequests[request.requestID]
            except Queue.Empty:
                break

    def wait(self):
        """Wait for results, blocking until all have arrived."""
        while True:
            try:
                self.poll(True)
            except NoResultsPending:
                break
There are three classes: ThreadPool, WorkRequest and WorkerThread.
Step by step: first we create a ThreadPool instance, the pool manager, which spawns a number of WorkerThread workers according to its arguments. Then makeRequests builds a WorkRequest for each set of arguments, and putRequest places each request into the pool's task queue. A WorkerThread takes a request from the queue, runs its callable, and puts the outcome into the results queue. If a callback was supplied, it is then invoked on the result.
Note: each element of the results queue is a tuple (request, result), so every result stays paired with the request that produced it.
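That pairing is exactly what the callbacks receive. A short sketch of callback and exc_callback usage (again assuming the threadpool package; the function and argument values are made up for illustration):
import threadpool

def work(x):
    if x == 3:
        raise ValueError("boom")  # force one request down the exception path
    return x * x

def on_result(request, result):
    # invoked by poll()/wait() with the matching (request, result) pair
    print("request %s finished with result %r" % (request.requestID, result))

def on_error(request, exc_info):
    # exc_info is the tuple returned by sys.exc_info() inside the worker thread
    print("request %s failed: %s" % (request.requestID, exc_info[1]))

pool = threadpool.ThreadPool(2)
for req in threadpool.makeRequests(work, range(5), on_result, on_error):
    pool.putRequest(req)
pool.wait()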
A python thread pool usage example
import os
import threading
import time
from threading import Thread
from concurrent.futures import ThreadPoolExecutor

# the numeric values below (pool size, sleep time, loop range) are illustrative
threadPool = ThreadPoolExecutor(max_workers=4, thread_name_prefix="test_")

def test(v, v2):
    print(threading.current_thread().name, v, v2)
    time.sleep(1)

if __name__ == '__main__':
    for i in range(0, 10):
        threadPool.submit(test, i, i + 1)
    threadPool.shutdown(wait=True)
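submit() also returns a Future, so results can be collected as tasks finish. A small sketch of that pattern (the square function and the numbers are made up for illustration):
from concurrent.futures import ThreadPoolExecutor, as_completed

def square(x):
    return x * x

with ThreadPoolExecutor(max_workers=4) as pool:             # the with-block shuts the pool down for us
    futures = [pool.submit(square, i) for i in range(10)]   # one Future per submitted task
    for fut in as_completed(futures):                       # yields each Future as it completes
        print(fut.result())                                 # re-raises any exception from the task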