Allow a Python exception to be raised without throwing for improved performance #1853

Open · wants to merge 4 commits into base: master
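For orientation before the diff: a minimal, hedged sketch (not part of this PR; function and module names are invented for illustration) contrasting the existing throw-based pattern with the return-null pattern this change would allow:

```cpp
#include <pybind11/pybind11.h>

namespace py = pybind11;

// Established pattern: set the Python error, then throw py::error_already_set,
// which pybind11 translates back into the pending Python exception.
py::object checked_get_throwing(const py::dict &d, const py::str &key) {
    if (!d.contains(key)) {
        PyErr_SetString(PyExc_KeyError, "key not found");
        throw py::error_already_set();  // pays for a C++ throw/catch on every failure
    }
    return d[key];
}

// Pattern enabled by this PR: set the Python error and return a null
// py::object; the dispatcher keeps the pending error instead of emitting its
// generic "Unable to convert function return value" TypeError.
py::object checked_get_returning(const py::dict &d, const py::str &key) {
    if (!d.contains(key)) {
        PyErr_SetString(PyExc_KeyError, "key not found");
        return py::object();  // null object + pending error, no C++ throw
    }
    return d[key];
}

PYBIND11_MODULE(raise_sketch, m) {
    m.def("checked_get_throwing", &checked_get_throwing);
    m.def("checked_get_returning", &checked_get_returning);
}
```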
18 changes: 13 additions & 5 deletions include/pybind11/pybind11.h
@@ -971,11 +971,19 @@ class cpp_function : public function {
return nullptr;
}
if (!result) {
-    std::string msg = "Unable to convert function return value to a "
-                      "Python type! The signature was\n\t";
-    msg += it->signature;
-    append_note_if_missing_header_is_suspected(msg);
-    PyErr_SetString(PyExc_TypeError, msg.c_str());
+    /* Allow the user to raise a Python exception directly using
+       PyErr_SetString() together with returning a null py::object, as
+       that may be significantly faster than throwing a C++ exception
+       in critical code paths. In that case we arrive here with
+       non-null PyErr_Occurred(), so keep that exception instead of
+       overwriting it with another. */
+    if (!PyErr_Occurred()) {
Collaborator:

I need to look at this more carefully and think about it.
First impression:

  • We're building a surprising bypass here for the sake of performance.
  • It employs a side effect: the wrapped function manipulating the global Python error state.
    We're definitely setting ourselves up for quicksand experiences: people think they are on firm ground, but a bug elsewhere invalidates the Python error state, and it only manifests through this bypass as an off-topic error that could take hours of debugging to understand. (I've been in similar situations many times.)
    Is the benefit of this optimization so big that we're OK accepting that danger ("at scale" it is pretty much certain that people will run into it)?

rwgk (Collaborator) on Aug 9, 2021:

Maybe I should add: if performance really matters, why do you have this code path in your loop?
Of course it's nice if it's fast.
But there is an opportunity cost to optimizations like this PR: things get more complicated and fragile.
Now we're in the realm of philosophy:

  • Keep pybind11 lean and straightforward so that it is as good as it can be at wrapping with ease.
  • Tell users: if you have a performance-critical loop, put your data into a C++ array (which you can easily wrap) and write your performance-critical loop in C++, taking Python completely out of that loop, literally.

So there are clear "performance really doesn't matter" and "performance really does matter" regimes where everybody is in agreement and happy, but there is a gray area in the middle where it's not quite important enough for users to bite the bullet and code up the C++ array they'd need.
What's better globally over a long period of time for all users in aggregate?

  • Having N bypasses like this in pybind11 to get more of that gray area?
  • A leaner and less fragile pybind11?

rwgk (Collaborator) on Aug 9, 2021:

Tagging #2760 and @jblespiau who is maybe more on the N bypasses side of the spectrum than I am.

Collaborator:

@rwgk This would also allow better error messages from type casters.

Collaborator:

Drive-by response:

This would also allow better error messages from type casters.

That would change my cost-benefit equation (previously I was only seeing "performance" as the benefit).

A PR that generates better error messages, based on this PR, may be convincing.
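To make that concrete, here is a hedged sketch (illustrative names only, not code from this PR) of a to-Python caster whose cast() reports the specific reason for a failure; with the change proposed here, that message would reach the caller instead of the generic "Unable to convert function return value to a Python type!" error:

```cpp
#include <pybind11/pybind11.h>

// Hypothetical user type, for illustration only.
struct Temperature { double kelvin; };

namespace pybind11 { namespace detail {
template <> struct type_caster<Temperature> {
    PYBIND11_TYPE_CASTER(Temperature, _("Temperature"));

    // Python -> C++: accept float/int-like values.
    bool load(handle src, bool) {
        if (!PyFloat_Check(src.ptr()) && !PyLong_Check(src.ptr()))
            return false;
        value.kelvin = PyFloat_AsDouble(src.ptr());
        return !PyErr_Occurred();
    }

    // C++ -> Python: on failure, set a specific error and return a null handle.
    static handle cast(const Temperature &t, return_value_policy, handle) {
        if (t.kelvin < 0) {
            PyErr_SetString(PyExc_ValueError, "Temperature below absolute zero");
            return handle();  // null result + pending error
        }
        return PyFloat_FromDouble(t.kelvin);
    }
};
}} // namespace pybind11::detail
```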

jblespiau (Contributor) on Jan 7, 2022:

Tagging #2760 and @jblespiau who is maybe more on the N bypasses side of the spectrum than I am.

The bypass I was suggesting in the past was mostly about not performing a dynamic look-up every time we do a C++-to-Python or Python-to-C++ conversion. I don't really consider this a "bypass", but more of a cache: if we know the C++ type "A" maps to a given Python type, there is no need to do the look-up in the hash map every single time to retrieve the same object again and again. It's even more obvious when you convert an std::vector<A>: you will do as many look-ups as there are items in the vector. I was suggesting to add the keyword "static" on the look-up line, and you get a huge speed-up. The only possible issue is if people unregister and re-register new types on the same symbol, but I don't even think that's possible (you cannot unregister a symbol, as far as I know).
Another difference with the current CL is that I was suggesting to improve the normal path, not an exceptional one.
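
A minimal sketch of the caching idea described above (a hypothetical helper built on an internal pybind11 lookup, not the actual change being proposed):

```cpp
#include <pybind11/pybind11.h>

namespace py = pybind11;

// Memoize the registered-type lookup for T in a function-local static, assuming
// that a binding, once registered, is never replaced (and that the GIL is held).
// The hash-map lookup then happens once per T instead of on every conversion.
template <typename T>
py::handle cached_python_type() {
    static const py::handle type =
        py::detail::get_type_handle(typeid(T), /*throw_if_missing=*/true);
    return type;
}
```

The function-local static is exactly the trade-off being described: a large win on hot conversion paths, at the cost of assuming types are never re-registered on the same symbol.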

Specifically for the current PR, I would probably need to think more, but I am thinking that:

  • Exceptions are known to be slow in Python. Exceptions should be exceptional, and not be the standard way to return something.
  • Most of the time, after an exception, we crash.
  • Given they are expected to be exceptional, why do we want to optimize this path in the first place?

So I am wondering (a) why we would try to optimize something that should not happen in normal circumstances, and (b) why, according to the benchmark, it has any impact on the non-raising path: if no exception is raised, it shouldn't change the runtime, right?

+        std::string msg = "Unable to convert function return value to "
+                          "a Python type! The signature was\n\t";
+        msg += it->signature;
+        append_note_if_missing_header_is_suspected(msg);
+        PyErr_SetString(PyExc_TypeError, msg.c_str());
+    }
return nullptr;
}
if (overloads->is_constructor && !self_value_and_holder.holder_constructed()) {
8 changes: 8 additions & 0 deletions tests/test_exceptions.cpp
@@ -228,6 +228,14 @@ TEST_SUBMODULE(exceptions, m) {
throw py::error_already_set();
});

m.def("raise_without_throw", [](bool return_null, bool set_error) {
if (set_error)
PyErr_SetString(PyExc_FutureWarning, "this is a robbery!");
if (return_null)
return py::object{};
return py::cast(5);
}, py::arg("return_null"), py::arg("set_error"));

m.def("python_call_in_destructor", [](const py::dict &d) {
bool retval = false;
try {
28 changes: 27 additions & 1 deletion tests/test_exceptions.py
@@ -3,7 +3,7 @@

import pytest

-import env  # noqa: F401
+import env
import pybind11_cross_module_tests as cm
from pybind11_tests import exceptions as m

@@ -24,6 +24,32 @@ def test_error_already_set(msg):
assert msg(excinfo.value) == "foo"


+def test_raise_without_throw(msg):
+    # Not setting an error and returning a non-null object
+    assert m.raise_without_throw(return_null=False, set_error=False) == 5
+
+    # Setting an error and returning a null object is an allowed alternative to
+    # throwing a C++ exception
+    with pytest.raises(FutureWarning) as excinfo:
+        m.raise_without_throw(return_null=True, set_error=True)
+    assert msg(excinfo.value) == "this is a robbery!"
+
+    # Setting an error and returning a non-null object is a Python system error
+    if not env.PY2:
+        with pytest.raises(SystemError) as excinfo:
+            m.raise_without_throw(return_null=False, set_error=True)
+
+    # Returning a null object without error being set is not allowed either, as
+    # that's also the case when function return value can't be converted to a
+    # Python type
+    with pytest.raises(TypeError) as excinfo:
+        m.raise_without_throw(return_null=True, set_error=False)
+    assert msg(excinfo.value) == (
+        "Unable to convert function return value to a Python type! The "
+        "signature was\n\t(return_null: bool, set_error: bool) -> object"
+    )
+
+
@pytest.mark.skipif("env.PY2")
def test_raise_from(msg):
with pytest.raises(ValueError) as excinfo: