
database/sql: allow driver to entirely override Scan #67546

jackc opened this issue May 21, 2024 · 13 comments · May be fixed by #67648

@jackc

jackc commented May 21, 2024

Proposal Details

I am the creator of the https://github.com/jackc/pgx/ PostgreSQL driver.

pgx implements a very flexible and convenient type system for its native interface. Most types "just work" without needing to add any pgx or database/sql interfaces. Slices like []int64 can be used directly as query arguments and scan targets. A map[string]string can be used as a PostgreSQL json or hstore value even though the encoding formats are different. This can only work because the encoding and decoding logic has access to connection specific information (e.g. registered Codecs for each type), the PostgreSQL type (OID), the value format (text or binary), and the Go type.
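
For context, here is a minimal sketch of the kind of native pgx usage being described, assuming pgx v5; the connection string and the widgets table are placeholders:

ctx := context.Background()
conn, err := pgx.Connect(ctx, "postgres:https://localhost:5432/example") // placeholder DSN
if err != nil {
	// handle error
}
defer conn.Close(ctx)

// A plain Go slice works directly as a scan target.
var ids []int64
err = conn.QueryRow(ctx, "select array[1, 2, 3]::bigint[]").Scan(&ids)

// A map[string]string can be passed directly for a json (or hstore) column.
attrs := map[string]string{"color": "red"}
_, err = conn.Exec(ctx, "insert into widgets (attrs) values ($1)", attrs)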

I've been trying, with very limited success, to expose this functionality and the relevant types through the database/sql interface. The problem is that an implementer of the sql.Scanner or driver.Valuer interfaces does not have any way of accessing the connection information, the PostgreSQL value type, or the value format.

For query arguments, this limitation can be worked around by implementing NamedValueChecker and simply passing all values through unmodified. So at this point, all of pgx's advanced type handling is available for query arguments.
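
A minimal sketch of that pass-through, assuming a driver connection type named Conn:

// CheckNamedValue implements driver.NamedValueChecker. Returning nil for every
// argument accepts the original Go value unmodified, so the driver's own
// encoding logic sees it instead of a pre-converted driver.Value.
func (c *Conn) CheckNamedValue(*driver.NamedValue) error {
	return nil
}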

However, there does not seem to be an equivalent for Scan. I propose adding a driver.RowsColumnScanner interface.

type RowsColumnScanner interface {
	ScanColumn(i int, dest any) error
}

If this interface is implemented, it could completely replace the existing database/sql scan logic. Drivers could implement arbitrary type support, and it could significantly improve performance. ErrSkip could be returned to fall back to the existing database/sql logic.
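
A hypothetical driver-side implementation using the signature above; decodeTextArray stands in for whatever decoding machinery the driver already has:

func (r *Rows) ScanColumn(i int, dest any) error {
	switch d := dest.(type) {
	case *[]string:
		// Decode the PostgreSQL text[] value in column i directly into d,
		// using the connection's registered codecs.
		return r.decodeTextArray(i, d)
	default:
		// Not a type this driver handles specially; let database/sql take over.
		return driver.ErrSkip
	}
}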

See #22544 for a related proposal. However, I prefer this proposal, as the scanning interface is on the row rather than on the connection or driver. This allows the decoding logic to know what the underlying database type and value format are.

As far as I know, this would be an entirely backwards-compatible change. The only subtlety I am aware of is that a RowsColumnScanner would have to support ScanColumn being called multiple times on the same column, to preserve support for calling rows.Scan multiple times on a single row.

@ianlancetaylor
Contributor

CC @kardianos

@kardianos
Contributor

Given the design of database/sql, I see no issue with this interface.
@jackc Do you want to submit the Go CL? If not, I could send it up.

jackc added a commit to jackc/go that referenced this issue May 25, 2024
Implementing RowsColumnScanner allows the driver
to completely control how values are scanned.

Fixes golang#67546
@jackc jackc linked a pull request May 25, 2024 that will close this issue
@gopherbot
Contributor

Change https://go.dev/cl/588435 mentions this issue: database/sql: allow drivers to override Scan behavior

@jackc
Author

jackc commented May 25, 2024

@kardianos I implemented the change in #67648. I also made a PR for pgx that implements this interface (jackc/pgx#2029).

With both of those PRs applied, queries like the following work in pgx through database/sql:

var names []string
err := db.QueryRow("select array['John', 'Jane']::text[]").Scan(&names)

I observed a few things while making the change:

It's possible for a driver to inadvertently change how some of the scanning edge cases work. For example, database/sql defines the behavior for scanning a time into a string as formatting the time with time.RFC3339Nano. However, when scanning to a string, pgx simply copies any text-formatted value directly. So if a timestamptz was received from PostgreSQL in the text format, scanning it into a string would behave differently. This gives significantly better performance than parsing the string into a time.Time and then formatting it, but it's not strictly in spec.
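
To make the difference concrete (the PostgreSQL text value shown is illustrative):

t := time.Date(2024, 5, 25, 12, 34, 56, 789000000, time.UTC)

// What database/sql's documented conversion produces when scanning a time into a string:
s1 := t.Format(time.RFC3339Nano) // "2024-05-25T12:34:56.789Z"

// What pgx would copy through unchanged from PostgreSQL's text format:
s2 := "2024-05-25 12:34:56.789+00"

fmt.Println(s1, s2)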

There is a fair amount of work involved in creating the slice of driver.Value for Next(). If all columns are handled with RowsColumnScanner, then that work is unnecessary. A driver could "cheat" and stub out the work. I'm not sure whether that is a good idea, but I suspect it could provide a performance boost.
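
A hypothetical sketch of such a stub; fetchNextRaw is a made-up helper standing in for however the driver advances its own raw result buffer:

// Next advances to the next row but skips building driver.Value representations,
// because this driver knows every column will be read through ScanColumn.
func (r *Rows) Next(dest []driver.Value) error {
	if !r.fetchNextRaw() {
		return io.EOF
	}
	// dest is intentionally left untouched; ScanColumn decodes from the raw
	// row buffer on demand.
	return nil
}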

@rsc rsc changed the title proposal: database/sql: Allow driver to entirely override Scan proposal: database/sql: allow driver to entirely override Scan Jun 12, 2024
@rsc
Contributor

rsc commented Jun 12, 2024

This proposal has been added to the active column of the proposals project
and will now be reviewed at the weekly proposal review meetings.
— rsc for the proposal review group

@kardianos
Contributor

@jackc This is pending the proposal review group, but here is a reply to your observations:

@kardianos I implemented the change in #67648. I also made a PR for pgx that implements this interface (jackc/pgx#2029).

With both of those PRs applied, queries like the following work in pgx through database/sql:

var names []string
err := db.QueryRow("select array['John', 'Jane']::text[]").Scan(&names)

Nice. Being able to scan arrays directly is useful.

I observed a few things while making the change:

It's possible for a driver to inadvertently change how some of the scanning edge cases work. For example, database/sql defines the behavior for scanning a time into a string as formatting the time with time.RFC3339Nano. However, when scanning to a string, pgx simply copies any text-formatted value directly. So if a timestamptz was received from PostgreSQL in the text format, scanning it into a string would behave differently. This gives significantly better performance than parsing the string into a time.Time and then formatting it, but it's not strictly in spec.

I'm personally okay with that.

There is a fair amount of work involved in creating the slice of driver.Value for Next(). If all columns are handled with RowsColumnScanner, then that work is unnecessary. A driver could "cheat" and stub out the work. I'm not sure whether that is a good idea, but I suspect it could provide a performance boost.

So long as the new driver interface can return ErrSkip, and because Next is called first, the driver value array has to be created per returned data set; I don't see a reasonable way to work around that. You could work around it, but the API change would be much larger.

I'm not terribly interested in single-row benchmarks on a local machine; in most real situations, if you need more performance, you will coalesce many single-row queries into one many-row query. For example, yesterday I was annoyed by a slowly performing function, inspected it, and found the cause to be a single-row query in a loop. I pulled it out, allocated an array, encoded a JSON parameter, and did a single query at the end, and the operation went from multiple seconds to a single millisecond. I'm not concerned about saving a single allocation in Next per result set.

If the driver "knows" it will call the new interface on Scan, then on Next, it can just ignore that work (stub it out, "cheat"), which I think is fine, but that would be up to the driver to do.
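
A sketch of the coalescing pattern described above, with a made-up items table and a JSON-encoded list of keys:

payload, err := json.Marshal(ids) // ids collected up front instead of queried one at a time
if err != nil {
	// handle error
}
rows, err := db.Query(
	`select id, name
	   from items
	  where id in (select value::bigint from jsonb_array_elements_text($1::jsonb))`,
	string(payload),
)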

@jackc
Author

jackc commented Jun 15, 2024

@kardianos

I'm not concerned about saving a single allocation in Next per result set.

It could potentially remove significantly more overhead than just the allocation of a []driver.Value. The real win is supporting the PostgreSQL binary format for types that database/sql doesn't support. pgx uses the binary format for integers because their driver.Value representation is int64. But many useful types like uuid, arrays, and ranges do not have a driver.Value representation. In these cases, pgx uses the text format, as it assumes that the caller either wants a string or is using a sql.Scanner that expects a string. These types tend to be far more expensive to parse in the text format than in the binary format.

For example, I created a quick benchmark of a query that selects 1,000 rows, each of which has a single 10-element integer array. This is through pgx directly, not database/sql, but the relative difference should be similar.

BenchmarkSelectRowsArray/1000_rows/text-16          	    1143	   1075135 ns/op	 1401515 B/op	   32014 allocs/op
BenchmarkSelectRowsArray/1000_rows/binary-16        	    2964	    381924 ns/op	   72817 B/op	    3024 allocs/op

If pgx knew that it would never have to produce a valid driver.Value, it could use the binary format in more cases.

If the driver "knows" it will call the new interface on Scan, then on Next, it can just ignore that work (stub it out, "cheat"), which I think is fine, but that would be up to the driver to do.

Sounds good.

@rsc
Contributor

rsc commented Jun 24, 2024

Can someone post the proposal in the form of a complete definition and doc comment of driver.RowsColumnScanner? I looked at the CL but the doc comment there is quite minimal. It seems like more needs to be said.

@jackc
Author

jackc commented Jun 25, 2024

I just pushed a commit that expands on the docs for driver.RowsColumnScanner. But I'm not really sure what else to say beyond what's there and my original post above. In short, this allows a driver to completely override the scanning of query result values.

@kardianos
Contributor

Proposal: allow the specific driver, not database/sql, to control scan

Forcing all results to go through database/sql to scan comes at a cost:

  1. Data must first be pulled into []any from the driver.
  2. The client then scans the data from database/sql using generic logic for base types.
    This works well for simple, common values. However, arrays or more complex custom values must be marshaled to a string first and then rely on the client setting up some type of converter each time.

If the driver were able to control the actual scan process, both costs could be removed: types could be scanned directly from the driver into the client's values.

The specific changes are, in database/sql/driver, add a new optional interface:

// RowsColumnScanner may be implemented by [Rows]. It allows the driver to completely
// take responsibility for how values are scanned and replace the normal [database/sql]
// scanning path. This allows drivers to directly support types that do not implement
// [database/sql.Scanner].
type RowsColumnScanner interface {
	Rows

	// ScanColumn copies the column in the current row into the value pointed at by
	// dest. It returns [ErrSkip] to fall back to the normal [database/sql] scanning path.
	ScanColumn(dest any, index int) error
}

Then in database/sql, in Rows.Scan, call the driver's ScanColumn rather than convertAssignRows when the interface is supported. If ScanColumn returns driver.ErrSkip, fall back to convertAssignRows as before.
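
A rough sketch of that dispatch, not the actual patch; rowsi, lastcols, and convertAssignRows are internal database/sql details named here only for illustration:

func (rs *Rows) scanColumn(dest any, index int) error {
	if rcs, ok := rs.rowsi.(driver.RowsColumnScanner); ok {
		err := rcs.ScanColumn(dest, index)
		if err != driver.ErrSkip {
			return err // nil on success, or the driver's own error
		}
		// The driver opted out for this column; fall through to the generic path.
	}
	return convertAssignRows(dest, rs.lastcols[index], rs)
}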


Drawbacks

While this proposal makes it easy for a driver to support additional types without going through intermediate types or secondary encoding, it doesn't support getting a row cursor from a field value. One solution to that would be to change the definition of ScanColumn to ScanColumn(index int, dest any, rows *Rows) error (this would prevent the interface from being declared in the driver package) and to export convertAssignRows, which has been requested in the past (though I'm not immediately finding references for those requests). If this were done, it would be possible to skip the driver.ErrSkip opt-out mechanism entirely, as the driver could call sql.ConvertAssignRows.

@rsc
Contributor

rsc commented Jun 27, 2024

Have all remaining concerns about this proposal been addressed?

The proposal is:

// RowsColumnScanner may be implemented by [Rows]. It allows the driver to completely
// take responsibility for how values are scanned and replace the normal [database/sql]
// scanning path. This allows drivers to directly support types that do not implement
// [database/sql.Scanner].
type RowsColumnScanner interface {
	Rows

	// ScanColumn copies the column in the current row into the value pointed at by
	// dest. It returns [ErrSkip] to fall back to the normal [database/sql] scanning path.
	ScanColumn(dest any, index int) error
}

@rsc
Contributor

rsc commented Jul 25, 2024

Based on the discussion above, this proposal seems like a likely accept.
— rsc for the proposal review group

The proposal is:

// RowsColumnScanner may be implemented by [Rows]. It allows the driver to completely
// take responsibility for how values are scanned and replace the normal [database/sql]
// scanning path. This allows drivers to directly support types that do not implement
// [database/sql.Scanner].
type RowsColumnScanner interface {
	Rows

	// ScanColumn copies the column in the current row into the value pointed at by
	// dest. It returns [ErrSkip] to fall back to the normal [database/sql] scanning path.
	ScanColumn(dest any, index int) error
}

@rsc
Contributor

rsc commented Jul 31, 2024

No change in consensus, so accepted. 🎉
This issue now tracks the work of implementing the proposal.
— rsc for the proposal review group

The proposal is:

// RowsColumnScanner may be implemented by [Rows]. It allows the driver to completely
// take responsibility for how values are scanned and replace the normal [database/sql]
// scanning path. This allows drivers to directly support types that do not implement
// [database/sql.Scanner].
type RowsColumnScanner interface {
	Rows

	// ScanColumn copies the column in the current row into the value pointed at by
	// dest. It returns [ErrSkip] to fall back to the normal [database/sql] scanning path.
	ScanColumn(dest any, index int) error
}

@rsc rsc changed the title proposal: database/sql: allow driver to entirely override Scan database/sql: allow driver to entirely override Scan Jul 31, 2024
@rsc rsc modified the milestones: Proposal, Backlog Jul 31, 2024