The problem
Except for a few cases, the way process information is retrieved varies depending on the OS. Sometimes it requires reading a file in the /proc filesystem (Linux), other times it requires using C (Windows, BSD, OSX, SunOS), but every time it's done differently. psutil abstracts this complexity by providing a nice high-level interface so that you, say, call Process.name() without worrying about what happens behind the curtains or what OS you're on.

Internally, it is not rare that multiple pieces of process information (e.g. name(), ppid(), uids(), create_time()) can be fetched by using the same routine. For example, on Linux we read /proc/PID/stat to get the process name, terminal, CPU times, creation time, status and parent PID, but only one value is returned and the others are discarded. On Linux the code below reads /proc/PID/stat 6 times:
>>> import psutil
>>> p = psutil.Process()
>>> p.name()
>>> p.cpu_times()
>>> p.create_time()
>>> p.ppid()
>>> p.status()
>>> p.terminal()

Another example is BSD. In order to get the process name, memory, CPU times and other metrics, a single sysctl() call is necessary, but again, because of how psutil worked so far, that same sysctl() call was executed every time (see here, here, and so on): one piece of information was returned (say name()) and the rest was discarded. Not anymore.
Do it in one shot
The approach described above is clearly not efficient, especially considering that applications similar to top, htop, ps or glances usually collect more than one piece of information per process.
psutil 5.0.0 introduces a new oneshot() context manager. When used, the internal routine is executed once (in the example below, on the name() call) and the other values are cached. Subsequent calls sharing the same internal routine (read /proc/PID/stat, call sysctl() or whatever) will return the cached value.
With psutil 5.0.0 the code above can be rewritten like this, and on Linux it will run 2.4 times faster:
>>> import psutil
>>> p = psutil.Process()
>>> with p.oneshot():
...     p.name()
...     p.cpu_times()
...     p.create_time()
...     p.ppid()
...     p.status()
...     p.terminal()
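If you want to verify the speedup on your own (Linux) box, a quick and dirty timing like the following will do. This is just a sketch, not the actual benchmark script that ships with psutil, and the numbers will vary by machine:

import time
import psutil

p = psutil.Process()

def collect():
    # the 6 methods from the example above, all backed by
    # the same /proc/PID/stat read on Linux
    p.name()
    p.cpu_times()
    p.create_time()
    p.ppid()
    p.status()
    p.terminal()

t = time.time()
for _ in range(1000):
    collect()
print("without oneshot: %.3f secs" % (time.time() - t))

t = time.time()
for _ in range(1000):
    with p.oneshot():
        collect()
print("with oneshot:    %.3f secs" % (time.time() - t))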
Implementation
One great thing about psutil's design is its abstraction. It is divided into 3 "layers". The first layer is represented by the main Process class (Python), which is what dictates the end-user, high-level API. The second layer is the OS-specific Python module, which is a thin wrapper on top of the OS-specific C extension module (the third layer). Because things were organized this way (modularly) the refactoring was reasonably smooth. In order to do this I first refactored those C functions collecting multiple info and grouped them in a single function (e.g. see the BSD implementation). Then I wrote a decorator which enables the cache only when requested (when entering the context manager) and decorated the "grouped functions" with it. The whole thing is enabled on request by the highest-level oneshot() context manager, which is the only thing exposed to the end user. Here's the decorator:

import functools

def memoize_when_activated(fun):
    """A memoize decorator which is disabled by default. It can be
    activated and deactivated on request.
    """
    @functools.wraps(fun)
    def wrapper(self):
        if not wrapper.cache_activated:
            return fun(self)
        else:
            try:
                ret = cache[fun]
            except KeyError:
                ret = cache[fun] = fun(self)
            return ret

    def cache_activate():
        """Activate cache."""
        wrapper.cache_activated = True

    def cache_deactivate():
        """Deactivate and clear cache."""
        wrapper.cache_activated = False
        cache.clear()

    cache = {}
    wrapper.cache_activated = False
    wrapper.cache_activate = cache_activate
    wrapper.cache_deactivate = cache_deactivate
    return wrapper

In order to measure the various speedups I finally wrote a benchmark script (well, 2 actually) and kept tuning until I was sure the various changes made psutil actually faster. The benchmark scripts calculate the speedup you can get if you call all the "grouped" methods together (best case scenario).
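To give an idea of how the pieces fit together, here's a minimal, self-contained sketch of how a "grouped" method decorated with memoize_when_activated (defined above) can be driven by a oneshot() context manager. The Process class and _fetch_all routine here are hypothetical stand-ins, not psutil's actual code:

import contextlib

class Process:
    def __init__(self, pid):
        self.pid = pid

    @memoize_when_activated  # the decorator defined above
    def _fetch_all(self):
        # stand-in for an expensive routine (e.g. reading
        # /proc/PID/stat) which returns several fields at once
        print("expensive call")
        return {"name": "python", "ppid": 1}

    def name(self):
        return self._fetch_all()["name"]

    def ppid(self):
        return self._fetch_all()["ppid"]

    @contextlib.contextmanager
    def oneshot(self):
        # activate the cache on enter; deactivate and clear it on exit
        self._fetch_all.cache_activate()
        try:
            yield
        finally:
            self._fetch_all.cache_deactivate()

p = Process(123)
with p.oneshot():
    p.name()   # "expensive call" is printed once...
    p.ppid()   # ...this one is served from the cache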
Linux: +2.56x speedup
The Linux implementation is the only pure-Python one, as (almost) all process info is gathered by reading files in the /proc filesystem. /proc files typically contain different pieces of information about the process, and /proc/PID/stat and /proc/PID/status are the perfect examples. That's why on Linux we aggregate them in 3 groups. The relevant part of the Linux implementation can be seen here.
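To illustrate the idea, here's a simplified sketch (not psutil's actual parser) of reading /proc/PID/stat once and extracting several fields from that single read:

import os

def parse_stat_file(pid):
    with open("/proc/%s/stat" % pid, "rb") as f:
        data = f.read()
    # the process name is enclosed in parentheses and may itself
    # contain spaces, so split around the last ')'
    rpar = data.rfind(b')')
    name = data[data.find(b'(') + 1:rpar]
    fields = data[rpar + 2:].split()
    return {
        "name": name.decode(),
        "status": fields[0].decode(),  # state letter, e.g. 'R', 'S'
        "ppid": int(fields[1]),
        "ttynr": int(fields[4]),
        "utime": int(fields[11]),
        "stime": int(fields[12]),
    }

print(parse_stat_file(os.getpid()))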
Windows: from +1.9x to +6.5x speedup

Windows is an interesting one. In normal circumstances, if we're querying a process owned by our user, we group together only the process' num_threads(), num_ctx_switches() and num_handles(), getting a +1.9x speedup if we access those methods in one shot. Windows is peculiar though, because certain methods use a dual implementation: a "fast method" is attempted first but, if the process is owned by another user, it fails with AccessDenied. In that case psutil falls back on using a second, slower method (see here for example).
The second method is slower because it iterates over all PIDs but, unlike the "plain" Windows APIs, it can be used to get multiple pieces of information in one shot: number of threads, context switches, handles, CPU times, create time and IO counters. That is why querying processes owned by other users results in an impressive +6.5x speedup.
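The fallback pattern looks roughly like this. This is a minimal sketch with hypothetical stand-in functions, not psutil's real Windows routines:

class AccessDenied(Exception):
    pass

def fast_num_handles(pid):
    # stand-in for the per-process Windows API call, which fails
    # when the process is owned by another user
    raise AccessDenied

def slow_proc_info(pid):
    # stand-in for the slower routine which iterates over all PIDs
    # but returns many fields at once
    return {"num_handles": 42, "num_threads": 7}

def num_handles(pid):
    try:
        return fast_num_handles(pid)
    except AccessDenied:
        return slow_proc_info(pid)["num_handles"]

print(num_handles(123))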
OSX: +1.92x speedup
On OSX we can get 2 groups of information. With the sysctl() syscall we get the process parent PID, uids, gids, terminal, create time and name. With the proc_info() syscall we get CPU times (for PIDs owned by another user), memory metrics and ctx switches. Not bad.
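In practice that means something like the following runs two internal routines instead of six. The method calls are real psutil APIs; the mapping of methods to groups in the comments follows the description above:

>>> import psutil
>>> p = psutil.Process()
>>> with p.oneshot():
...     p.name()               # sysctl() group: executed once, then cached
...     p.ppid()
...     p.terminal()
...     p.cpu_times()          # proc_info() group: executed once, then cached
...     p.memory_info()
...     p.num_ctx_switches()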