Statements
- Executing statements
- Debugging statements
- Logging statements
- Raw statements
- Examples
- Prepared Statements Caching
Statements can be executed with the following methods:
- Query(db qrm.Queryable, dest interface{}) error - executes the statement over database connection/transaction db and stores the query result in destination dest. Destination can be either a pointer to a struct or a pointer to a slice. If the destination is a pointer to a struct and the query result set is empty, the method returns qrm.ErrNoRows.
- QueryContext(context context.Context, db qrm.Queryable, dest interface{}) error - executes the statement with a context over database connection/transaction db and stores the query result in destination dest. Destination can be either a pointer to a struct or a pointer to a slice. If the destination is a pointer to a struct and the query result set is empty, the method returns qrm.ErrNoRows.
- Exec(db qrm.Executable) (sql.Result, error) - executes the statement over database connection/transaction db and returns sql.Result.
- ExecContext(context context.Context, db qrm.Executable) (sql.Result, error) - executes the statement with a context over database connection/transaction db and returns sql.Result.
- Rows(ctx context.Context, db qrm.Queryable) (*Rows, error) - executes the statement over database connection/transaction db and returns the result set rows.
Each execution method first creates a parameterized SQL query with a list of arguments and then initiates the appropriate call on the database connection db.
Exec and ExecContext are just thin wrappers around the corresponding call on db qrm.Executable.
Query and QueryContext are Query Result Mapping (QRM) methods. They execute the statement over db qrm.Queryable with its Query or QueryContext call, while grouping each row of the result set into the destination.
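For illustration, a minimal sketch of the difference between the two groups of methods (reusing the Actor and Link types from the examples below):

// Query/QueryContext execute the statement and map the result set into dest:
var actors []model.Actor
err := SELECT(Actor.AllColumns).FROM(Actor).Query(db, &actors)

// Exec/ExecContext execute the statement and return only sql.Result:
res, err := Link.INSERT(Link.URL, Link.Name).VALUES("http://www.google.com", "Google").Exec(db)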
The database connection db qrm.Queryable and db qrm.Executable can be of any type that implements the following interfaces:
type Queryable interface {
	QueryContext(ctx context.Context, query string, args ...interface{}) (*sql.Rows, error)
}

type Executable interface {
	ExecContext(ctx context.Context, query string, args ...interface{}) (sql.Result, error)
}
These include, but are not limited to:
- sql.DB
- sql.Tx
- sql.Conn
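For example, since *sql.Tx satisfies both interfaces, a statement can be executed inside a transaction (a minimal sketch, error handling omitted):

tx, err := db.Begin()
defer tx.Rollback()

var actors []model.Actor
err = stmt.Query(tx, &actors) // tx is accepted wherever qrm.Queryable/qrm.Executable is expected

err = tx.Commit()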
The SQL generated from a statement can be inspected with:
- Sql() (query string, args []interface{}) - retrieves the parameterized SQL query with the list of arguments.
- DebugSql() (query string) - retrieves a debug query where every parameterized placeholder is replaced with its argument's textual representation.
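For example (a minimal sketch using one of the SELECT statements shown below):

stmt := SELECT(Actor.AllColumns).FROM(Actor).WHERE(Actor.ActorID.EQ(Int(2)))

query, args := stmt.Sql()   // parameterized query text and its argument list
debugSQL := stmt.DebugSql() // query text with placeholders replaced by argument literals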
Statement execution information can be accessed automatically, for logging or other purposes, by providing a new global query logger function:
postgres.SetQueryLogger(func(ctx context.Context, queryInfo postgres.QueryInfo) {
	sql, args := queryInfo.Statement.Sql()
	fmt.Printf("- SQL: %s Args: %v \n", sql, args)
	fmt.Printf("- Debug SQL: %s \n", queryInfo.Statement.DebugSql())

	// Depending on how the statement is executed, RowsProcessed is:
	//   - the number of rows returned for Query() and QueryContext()
	//   - RowsAffected() for Exec() and ExecContext()
	//   - always 0 for Rows()
	fmt.Printf("- Rows processed: %d\n", queryInfo.RowsProcessed)
	fmt.Printf("- Duration %s\n", queryInfo.Duration.String())
	fmt.Printf("- Execution error: %v\n", queryInfo.Err)

	callerFile, callerLine, callerFunction := queryInfo.Caller()
	fmt.Printf("- Caller file: %s, line: %d, function: %s\n", callerFile, callerLine, callerFunction)
})
It is possible to write raw SQL queries without using the generated SQL builder types. With raw SQL queries, all the benefits of type safety and code completion are lost, and the programmer has to be careful to name statement projections correctly, otherwise QRM will not be able to scan the query result. Every projection has to be aliased in the "destination type name.field name" format.
stmt := RawStatement(`
SELECT actor.actor_id AS "actor.actor_id",
actor.first_name AS "actor.first_name",
actor.last_name AS "actor.last_name",
actor.last_update AS "actor.last_update"
FROM dvds.actor
WHERE actor.actor_id IN (actorID1, #actorID2, $actorID3) AND ((actorID1 / #actorID2) <> (#actorID2 * $actorID3))
ORDER BY actor.actor_id`,
RawArgs{
"actorID1": int64(1),
"#actorID2": int64(2),
"$actorID3": int64(3),
},
)
var actors []model.Actor
err := stmt.Query(db, &actors)
RawArgs contains named arguments for the placeholders in the raw statement query. The named arguments do not have to follow any particular naming convention; they just have to exactly match the named placeholders in the raw query. It is recommended NOT to use ($1, $2, ...) for PostgreSQL queries.
QueryContext can be used to map a statement result into a single object or into a slice destination:
- Single object destination
stmt := SELECT(
Actor.AllColumns,
).FROM(
Actor,
).WHERE(
Actor.ActorID.EQ(Int(2)),
)
actor := model.Actor{}
err := stmt.QueryContext(ctx, db, &actor)
- Slice destination
stmt := SELECT(
Actor.AllColumns,
).FROM(
Actor,
).WHERE(
Actor.ActorID.GT(Int(20)),
)
actors := []model.Actor{}
err := stmt.QueryContext(ctx, db, &actors)
ExecContext can be used to execute a statement that does not need result mapping, for example an insert:
linkData := model.Link{
ID: 1000,
URL: "https://www.duckduckgo.com",
Name: "Duck Duck go",
}
stmt := Link.
INSERT().
MODEL(linkData)
res, err := stmt.ExecContext(ctx, db)
...
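The returned sql.Result can then be inspected with the standard database/sql methods, for example:

rowsAffected, err := res.RowsAffected() // number of rows inserted (1 for the single model above)
fmt.Println("rows affected:", rowsAffected)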
Rows can be used to iterate over the query result row by row:
stmt := SELECT(
Inventory.AllColumns,
Film.AllColumns,
Store.AllColumns,
).FROM(
Inventory.
INNER_JOIN(Film, Film.FilmID.EQ(Inventory.FilmID)).
INNER_JOIN(Store, Store.StoreID.EQ(Inventory.StoreID)),
).ORDER_BY(
Inventory.InventoryID.ASC(),
)
rows, err := stmt.Rows(ctx, db)
defer rows.Close()
for rows.Next() {
var inventory struct {
model.Inventory
Film model.Film
Store model.Store
}
err = rows.Scan(&inventory)
...
}
err = rows.Close()
...
err = rows.Err()
...
The Rows Scan method relies on reflection to map each result row into the destination provided. For a very large result set, reflection can introduce additional latency into processing. To avoid reflection, the developer can access the underlying sql.Rows object and scan directly into destination fields:
for rows.Next() {
var inventory model.Inventory
err = rows.Rows.Scan(&inventory.InventoryID, &inventory.FilmID, &inventory.StoreID, &inventory.LastUpdate)
...
}
The standard Go SQL library prepares a statement for each query before execution. This means that for each Query or Exec operation there are two database calls: one to prepare the statement and one to execute it. With prepared statement caching, each distinct query is prepared only once; subsequent executions need just a single execute call, which roughly halves the number of database calls.
To enable prepared statement caching, developers only need to update the database initialization code.
For example:
var db *sql.DB
db, err := sql.Open(driverName, connectionString)
would be replaced with:
sqlDB, err := sql.Open(driverName, connectionString)
var db *stmtcache.DB
db = stmtcache.New(sqlDB)
stmtcache.DB is a wrapper around sql.DB that intercepts all database queries. For a new query, the wrapper first creates a prepared statement, caches it, and then executes the query. Any subsequent execution of the same query reuses the cached prepared statement.
Warning
Do not use prepared statement caching with raw expressions (or statements) that have hard-coded parameters. Not only will prepared statements not be reused, but this practice may also lead to memory leaks, as a new prepared statement is created and cached for every distinct parameter value.
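For example, a sketch of the problematic pattern versus the parameterized alternative (actorID stands for an application variable):

// Problematic: the value is embedded into the SQL text, so every distinct
// actorID produces a different query string and a new cached prepared statement.
stmt := RawStatement(
	fmt.Sprintf(`SELECT actor.actor_id AS "actor.actor_id" FROM dvds.actor WHERE actor.actor_id = %d`, actorID),
)

// Better: the SQL text is constant and the value is passed as a named argument,
// so a single cached prepared statement is reused for every value.
stmt = RawStatement(
	`SELECT actor.actor_id AS "actor.actor_id" FROM dvds.actor WHERE actor.actor_id = #actorID`,
	RawArgs{"#actorID": actorID},
)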
Since stmtcache.DB shares the same interface as sql.DB, it can be used interchangeably in any context where sql.DB is used. For example:
var db *stmtcache.DB
...
err := stmt.Query(db, &dest)
res, err := stmt.Exec(db)
tx, err := db.Begin() // prepared statement caching is enabled on transactions created from *stmtcache.DB as well.
To bypass statement caching, developers can access the underlying sql.DB directly:
var db *stmtcache.DB
...
err := stmt.Query(db.DB, &dest)
Prepared statement caching should be considered in the following scenarios:
- Geographical Distance: If the application server and database are distant, such as in different AWS regions, caching prepared statements can save a round trip to the database.
- Large Queries: For complex or lengthy queries, caching ensures the database uses a prepared statement each time, reducing the need to re-parse the query.
- Repetitive Queries: When querying the database in a loop (e.g., during data migration), caching prepared statements can improve performance by avoiding repeated preparation.
However, if the application and database server are close (e.g., using SQLite) and the queries are simple, using a prepared statement cache may add unnecessary latency, as each database call would require a mutex lock/unlock and map access.
Additionally, if a connection pooler is used (such as PgBouncer), prepared statement caching should be avoided depending on the pooler mode.
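As an illustration of the repetitive-query scenario above, a minimal sketch (actorIDs stands for an assumed []int64 of IDs to fetch; driverName and connectionString as before):

sqlDB, err := sql.Open(driverName, connectionString)
db := stmtcache.New(sqlDB)

for _, actorID := range actorIDs {
	stmt := SELECT(Actor.AllColumns).FROM(Actor).WHERE(Actor.ActorID.EQ(Int(actorID)))

	var actor model.Actor
	// The generated SQL text is identical on every iteration, so the
	// statement is prepared once and then reused from the cache.
	err = stmt.Query(db, &actor)
}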