The profiling package is an interactive continuous Python profiler. It is inspired by the Unity 3D profiler. This package provides these features:
- Profiling statistics keep the frame stack.
- An interactive TUI profiling statistics viewer.
- Provides both statistical and deterministic profiling.
- Utilities for remote profiling.
- Thread- or greenlet-aware CPU timer.
- Supports Python 2.7, 3.3, 3.4 and 3.5.
- Currently supports only Linux.
[![Build Status](https://img.shields.io/travis/what-studio/profiling.svg)](https://travis-ci.org/what-studio/profiling) [![Coverage Status](https://img.shields.io/coveralls/what-studio/profiling.svg)](https://coveralls.io/r/what-studio/profiling)
Install the latest release via PyPI:
```sh
$ pip install profiling
```
To profile a single program, simply run the `profiling` command:

```sh
$ profiling your-program.py
```
Then an interactive viewer will be executed.
If your program uses greenlets, choose the `greenlet` timer:

```sh
$ profiling --timer=greenlet your-program.py
```
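For example, a greenlet-based script could be profiled that way. The file name and contents below are purely illustrative (a hypothetical gevent script, not part of this package):

```python
# greenlet-program.py -- hypothetical gevent-based script to profile with
# `profiling --timer=greenlet greenlet-program.py`.
import gevent


def worker(n):
    total = 0
    for i in range(n):
        total += i * i
        if i % 1000 == 0:
            gevent.sleep(0)  # yield cooperatively to the other greenlets
    return total


# Spawn a few greenlets and wait for all of them to finish.
jobs = [gevent.spawn(worker, 100000) for _ in range(4)]
gevent.joinall(jobs)
```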
With the `--dump` option, it saves the profiling result to a file. You can browse the saved result by using the `view` subcommand:

```sh
$ profiling --dump=your-program.prf your-program.py
$ profiling view your-program.prf
```
If your script reads `sys.argv`, append your arguments after `--`. It isolates your arguments from the `profiling` command:

```sh
$ profiling your-program.py -- --your-flag --your-param=42
```
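For instance, a hypothetical `your-program.py` that inspects its own arguments should then see only what follows `--` (this snippet is illustrative, not part of the package):

```python
# your-program.py -- hypothetical script that reads its own arguments.
import sys

# Run as: profiling your-program.py -- --your-flag --your-param=42
# The options before `--` belong to the profiling command; this script
# should receive roughly: ['your-program.py', '--your-flag', '--your-param=42']
print(sys.argv)
```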
If your program has a long lifetime, like a web server, a profiling result at the end of the program is not helpful enough. You probably need a continuous profiler. It can be achieved by the `live-profile` subcommand:

```sh
$ profiling live-profile webserver.py
```
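Here, `webserver.py` stands for any long-running program. A minimal stand-in using only the standard library (Python 3, purely illustrative) might look like this:

```python
# webserver.py -- minimal long-running program to try live profiling against.
from http.server import BaseHTTPRequestHandler, HTTPServer


class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Burn a little CPU per request so the profiler has something to show.
        payload = str(sum(i * i for i in range(100000))).encode()
        self.send_response(200)
        self.send_header('Content-Type', 'text/plain')
        self.end_headers()
        self.wfile.write(payload)


if __name__ == '__main__':
    HTTPServer(('127.0.0.1', 8000), Handler).serve_forever()
```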
There's also a live-profiling server. The server doesn't profile the program at ordinary times, but when a client connects, it starts profiling and reports the results to all connected clients.
Start a profiling server by the `remote-profile` subcommand:

```sh
$ profiling remote-profile webserver.py --bind 127.0.0.1:8912
```
Then run a client for the server by the `view` subcommand:

```sh
$ profiling view 127.0.0.1:8912
```
`TracingProfiler`, the default profiler, implements a deterministic profiler for a deep call graph. Of course, it has heavy overhead. The overhead can pollute your profiling result or slow your application down.

`SamplingProfiler` implements a statistical profiler instead. Like other statistical profilers, it has only very cheap overhead. When you profile, you can choose it with the `-S` option:

```sh
$ profiling live-profile -S webserver.py
                         ^^
```
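For intuition only: a statistical profiler periodically samples the running call stacks instead of tracing every call, which is why its overhead stays small. The toy sketch below illustrates that idea with `sys._current_frames`; it is a conceptual sketch, not how `SamplingProfiler` is actually implemented:

```python
# Toy stack sampler: illustrates the idea behind statistical profiling.
import collections
import sys
import threading
import time


def sample(counts, interval=0.01, duration=1.0):
    end = time.time() + duration
    while time.time() < end:
        # Record the currently executing function of every thread.
        for frame in sys._current_frames().values():
            code = frame.f_code
            counts[(code.co_filename, code.co_name)] += 1
        time.sleep(interval)


counts = collections.Counter()
sampler = threading.Thread(target=sample, args=(counts,))
sampler.start()
sum(i * i for i in range(10 ** 7))  # some work to be sampled
sampler.join()
print(counts.most_common(5))
```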
## Timeit then Profiling
Do you use `timeit` to check the performance of your code?

```sh
$ python -m timeit -s 'from trueskill import *' 'rate_1vs1(Rating(), Rating())'
1000 loops, best of 3: 722 usec per loop
```
If you want to profile the checked code, simply use the `timeit` subcommand:

```sh
$ profiling timeit -s 'from trueskill import *' 'rate_1vs1(Rating(), Rating())'
            ^^^^^^
```
## Profiling from Code
You can also profile your program by using `TracingProfiler` directly:

```python
from profiling.tracing import TracingProfiler

# profile your program.
profiler = TracingProfiler()
profiler.start()
...  # run your program.
profiler.stop()

# or using a context manager.
with profiler:
    ...  # run your program.

# view and interact with the result.
profiler.run_viewer()
```
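If you want the statistical profiler from code instead, `profiling.sampling.SamplingProfiler` should be usable the same way; the sketch below assumes it shares the `start()`/`stop()`/`run_viewer()` interface shown above:

```python
from profiling.sampling import SamplingProfiler

# Assumption: SamplingProfiler can be driven like TracingProfiler above.
profiler = SamplingProfiler()
with profiler:
    ...  # run your program.
profiler.run_viewer()
```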
## Viewer Key Bindings
- q - Quit.
- space - Pause/Resume.
- `\` - Toggle layout between NESTED and FLAT.
- ↑ and ↓ - Navigate frames.
- → - Expand the frame.
- ← - Fold the frame.
- > - Go to the hotspot.
- esc - Defocus.
- [ and ] - Change sorting column.
## Columns

- `FUNCTION` - Shows either:
  1. The function name with the code location (e.g. `my_func (my_code.py:42)`), or
  2. Only the location, without a line number (e.g. `my_code.py`, `my_module`).

In the tracing profiler:

- `CALLS` - Total call count of the function.
- `OWN` (Exclusive Time) - Total time spent in the function, excluding sub-calls.
- `OWN` per call - Exclusive time per call.
- `OWN` percentage - Exclusive time as a share of the total spent time.
- `DEEP` (Inclusive Time) - Total time spent in the function, including sub-calls.
- `DEEP` per call - Inclusive time per call.
- `DEEP` percentage - Inclusive time as a share of the total spent time.

In the sampling profiler:

- `OWN` (Exclusive Samples) - Number of samples collected during the direct execution of the function.
- `OWN` percentage - Exclusive samples as a share of the total samples.
- `DEEP` (Inclusive Samples) - Number of samples collected during the execution of the function.
- `DEEP` percentage - Inclusive samples as a share of the total samples.