The cursor
class¶
- class cursor¶
Allows Python code to execute PostgreSQL commands in a database session. Cursors are created by the connection.cursor() method: they are bound to the connection for their entire lifetime and all the commands are executed in the context of the database session wrapped by the connection.

Cursors created from the same connection are not isolated, i.e., any changes done to the database by a cursor are immediately visible to the other cursors. Cursors created from different connections may or may not be isolated, depending on the connections' isolation level. See also the rollback() and commit() methods.

Cursors are not thread safe: a multithread application can create many cursors from the same connection and should use each cursor from a single thread. See Thread and process safety for details.

Cursors can be used as context managers: leaving the context will close the cursor.

```
with conn.cursor() as curs:
    curs.execute(SQL)

# the cursor is now closed
```
- description¶

Read-only attribute describing the result of a query. It is a sequence of Column instances, each one describing one result column in order. The attribute is None for operations that do not return rows or if the cursor has not had an operation invoked via the execute*() methods yet.

For compatibility with the DB-API, every object can be unpacked as a 7-item sequence: the attributes returned this way are the following. For further details and other attributes available check the Column documentation.

- name: the name of the column returned.
- type_code: the PostgreSQL OID of the column.
- display_size: the actual length of the column in bytes.
- internal_size: the size in bytes of the column associated to this column on the server.
- precision: total number of significant digits in columns of type NUMERIC. None for other types.
- scale: count of decimal digits in the fractional part in columns of type NUMERIC. None for other types.
- null_ok: always None as not easy to retrieve from the libpq.

Changed in version 2.4: if possible, columns descriptions are named tuples instead of regular tuples.

Changed in version 2.8: columns descriptions are instances of Column, exposing extra attributes.
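For illustration, here is a small helper (a sketch, not part of psycopg2) that pulls the column names out of description. It relies only on the DB-API guarantee that the name is the first element of each 7-item entry, so it works with plain tuples and with Column named tuples alike:

```python
def column_names(cursor):
    """Return the result column names from cursor.description.

    description is None before any execute*() call and after
    statements that return no rows, so an empty list is returned
    in that case. Indexing with [0] works both for plain 7-item
    tuples and for psycopg2's Column named tuples, whose first
    field is `name`.
    """
    if cursor.description is None:
        return []
    return [col[0] for col in cursor.description]
```

After running `SELECT id, num, data FROM test` this would return `['id', 'num', 'data']`.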
- close()¶

Close the cursor now (rather than whenever del is executed). The cursor will be unusable from this point forward; an InterfaceError will be raised if any operation is attempted with the cursor.

Changed in version 2.5: if the cursor is used in a with statement, the method is automatically called at the end of the with block.
- closed¶

Read-only boolean attribute: specifies if the cursor is closed (True) or not (False).

DB API extension

The closed attribute is a Psycopg extension to the DB API 2.0.

New in version 2.0.7.
- connection¶

Read-only attribute returning a reference to the connection object on which the cursor was created.
- name¶

Read-only attribute containing the name of the cursor if it was created as a named cursor by connection.cursor(), or None if it is a client side cursor. See Server side cursors.

DB API extension

The name attribute is a Psycopg extension to the DB API 2.0.
- scrollable¶

Read/write attribute: specifies if a named cursor is declared SCROLL, hence is capable of scrolling backwards (using scroll()). If True, the cursor can be scrolled backwards; if False it is never scrollable. If None (default) the cursor scroll option is not specified, usually but not always meaning no backward scroll (see the DECLARE notes).

Note: set the value before calling execute() or use the connection.cursor() scrollable parameter, otherwise the value will have no effect.

New in version 2.5.

DB API extension

The scrollable attribute is a Psycopg extension to the DB API 2.0.
- withhold¶

Read/write attribute: specifies if a named cursor's lifetime should extend outside of the current transaction, i.e., it is possible to fetch from the cursor even after a connection.commit() (but not after a connection.rollback()). See Server side cursors.

Note: set the value before calling execute() or use the connection.cursor() withhold parameter, otherwise the value will have no effect.

New in version 2.4.3.

DB API extension

The withhold attribute is a Psycopg extension to the DB API 2.0.
Commands execution methods

- execute(query, vars=None)¶

Execute a database operation (query or command).

Parameters may be provided as a sequence or mapping and will be bound to variables in the operation. Variables are specified either with positional (%s) or named (%(name)s) placeholders. See Passing parameters to SQL queries.

The method returns None. If a query was executed, the returned values can be retrieved using fetch*() methods.
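As a sketch of the named-placeholder style (the table `test` and its columns follow the examples used throughout this page):

```python
def insert_row(cur, num, data):
    """Insert one row using named placeholders: each %(key)s in the
    query is bound to the matching key of the mapping passed as vars."""
    cur.execute(
        "INSERT INTO test (num, data) VALUES (%(num)s, %(data)s)",
        {"num": num, "data": data},
    )
```

Named placeholders are convenient when the same value appears several times in a query, since the mapping key can simply be repeated.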
- executemany(query, vars_list)¶

Execute a database operation (query or command) against all parameter tuples or mappings found in the sequence vars_list.

The function is mostly useful for commands that update the database: any result set returned by the query is discarded.

Parameters are bound to the query using the same rules described in the execute() method.

```
>>> nums = ((1,), (5,), (10,))
>>> cur.executemany("INSERT INTO test (num) VALUES (%s)", nums)

>>> tuples = ((123, "foo"), (42, "bar"), (23, "baz"))
>>> cur.executemany("INSERT INTO test (num, data) VALUES (%s, %s)", tuples)
```

Warning: in its current implementation this method is not faster than executing execute() in a loop. For better performance you can use the functions described in Fast execution helpers.
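When vars_list is a large iterable (for instance a generator), a hypothetical helper like the following can feed executemany() in bounded chunks, so the whole parameter list never has to sit in memory at once. Note the chunking does not change the per-statement overhead mentioned in the warning above:

```python
def executemany_chunked(cur, query, vars_iter, chunk_size=1000):
    """Run executemany() repeatedly over fixed-size slices of vars_iter."""
    batch = []
    for params in vars_iter:
        batch.append(params)
        if len(batch) >= chunk_size:
            cur.executemany(query, batch)
            batch = []
    if batch:  # flush the final partial chunk
        cur.executemany(query, batch)
```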
- callproc(procname[, parameters])¶

Call a stored database procedure with the given name. The sequence of parameters must contain one entry for each argument that the procedure expects. Overloaded procedures are supported. Named parameters can be used by supplying the parameters as a dictionary.

This function is, at present, not DBAPI-compliant. The return value is supposed to consist of the sequence of parameters with modified output and input/output parameters. In future versions, the DBAPI-compliant return value may be implemented, but for now the function returns None.

The procedure may provide a result set as output. This is then made available through the standard fetch*() methods.

Changed in version 2.7: added support for named arguments.

Note: callproc() can only be used with PostgreSQL functions, not with the procedures introduced in PostgreSQL 11, which require the CALL statement to run. Please use a normal execute() to run them.
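A minimal sketch of running a PostgreSQL 11+ procedure with execute(), as the note suggests. The procedure name is interpolated naively here; for untrusted or dynamic names, real code should build the statement with psycopg2.sql.Identifier instead:

```python
def call_procedure(cur, proc_name, args=()):
    """Run CALL proc_name(%s, ...) for a PostgreSQL 11+ procedure,
    which callproc() cannot invoke. proc_name must be a trusted
    identifier: it is placed in the statement as-is."""
    placeholders = ", ".join(["%s"] * len(args))
    cur.execute("CALL {}({})".format(proc_name, placeholders), args)
```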
- mogrify(operation[, parameters])¶

Return a query string after arguments binding. The string returned is exactly the one that would be sent to the database running the execute() method or similar.

The returned string is always a bytes string.

```
>>> cur.mogrify("INSERT INTO test (num, data) VALUES (%s, %s)", (42, 'bar'))
"INSERT INTO test (num, data) VALUES (42, E'bar')"
```

DB API extension

The mogrify() method is a Psycopg extension to the DB API 2.0.
- setinputsizes(sizes)¶
This method is exposed in compliance with the DB API 2.0. It currently does nothing but it is safe to call it.
Results retrieval methods

The following methods are used to read data from the database after an execute() call.

Note: cursor objects are iterable, so, instead of calling fetchone() explicitly in a loop, the object itself can be used:

```
>>> cur.execute("SELECT * FROM test;")
>>> for record in cur:
...     print(record)
...
(1, 100, "abc'def")
(2, None, 'dada')
(3, 42, 'bar')
```

Changed in version 2.4: iterating over a named cursor fetches itersize records at a time from the backend. Previously only one record was fetched per roundtrip, resulting in a large overhead.

- fetchone()¶
Fetch the next row of a query result set, returning a single tuple, or None when no more data is available:

```
>>> cur.execute("SELECT * FROM test WHERE id = %s", (3,))
>>> cur.fetchone()
(3, 42, 'bar')
```

A ProgrammingError is raised if the previous call to execute*() did not produce any result set or no call was issued yet.
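The fetchone()-until-None pattern can be sketched as a generator, equivalent to iterating the cursor directly as described above:

```python
def iter_rows(cur):
    """Yield rows one at a time until fetchone() signals
    exhaustion of the result set by returning None."""
    while True:
        row = cur.fetchone()
        if row is None:
            return
        yield row
```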
- fetchmany([size=cursor.arraysize])¶

Fetch the next set of rows of a query result, returning a list of tuples. An empty list is returned when no more rows are available.

The number of rows to fetch per call is specified by the parameter. If it is not given, the cursor's arraysize determines the number of rows to be fetched. The method should try to fetch as many rows as indicated by the size parameter. If this is not possible due to the specified number of rows not being available, fewer rows may be returned:

```
>>> cur.execute("SELECT * FROM test;")
>>> cur.fetchmany(2)
[(1, 100, "abc'def"), (2, None, 'dada')]
>>> cur.fetchmany(2)
[(3, 42, 'bar')]
>>> cur.fetchmany(2)
[]
```

A ProgrammingError is raised if the previous call to execute*() did not produce any result set or no call was issued yet.

Note: there are performance considerations involved with the size parameter. For optimal performance, it is usually best to use the arraysize attribute. If the size parameter is used, then it is best for it to retain the same value from one fetchmany() call to the next.
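The empty-list sentinel makes fetchmany() convenient for batch processing; a sketch of the usual loop:

```python
def iter_batches(cur, size=500):
    """Yield non-empty lists of rows, stopping at the empty list
    fetchmany() returns once the result set is exhausted."""
    while True:
        rows = cur.fetchmany(size)
        if not rows:
            return
        yield rows
```

Each yielded batch can then be processed (or written out) as a unit, keeping at most `size` rows in memory at a time on the client side.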
- fetchall()¶

Fetch all (remaining) rows of a query result, returning them as a list of tuples. An empty list is returned if there are no more records to fetch.

```
>>> cur.execute("SELECT * FROM test;")
>>> cur.fetchall()
[(1, 100, "abc'def"), (2, None, 'dada'), (3, 42, 'bar')]
```

A ProgrammingError is raised if the previous call to execute*() did not produce any result set or no call was issued yet.
- scroll(value[, mode='relative'])¶

Scroll the cursor in the result set to a new position according to mode.

If mode is relative (default), value is taken as an offset to the current position in the result set; if set to absolute, value states an absolute target position.

If the scroll operation would leave the result set, a ProgrammingError is raised and the cursor position is not changed.

Note: according to the DB API 2.0, the exception raised for a cursor out of bound should have been IndexError. The best option is probably to catch both exceptions in your code:

```
try:
    cur.scroll(1000 * 1000)
except (ProgrammingError, IndexError) as exc:
    deal_with_it(exc)
```

The method can be used both for client-side cursors and server-side cursors. Server-side cursors can usually scroll backwards only if declared scrollable. Moving out-of-bound in a server-side cursor doesn't result in an exception, if the backend doesn't raise any (Postgres doesn't tell us in a reliable way if we went out of bound).
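Relative scrolling allows, for instance, peeking at upcoming rows without consuming them (a sketch; on a server-side cursor this needs the cursor to be declared scrollable):

```python
def peek(cur, n):
    """Fetch the next n rows, then scroll back by the number of rows
    actually read so the cursor position is left unchanged."""
    rows = cur.fetchmany(n)
    if rows:
        cur.scroll(-len(rows))  # relative move, the default mode
    return rows
```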
- arraysize¶

This read/write attribute specifies the number of rows to fetch at a time with fetchmany(). It defaults to 1, meaning to fetch a single row at a time.
- itersize¶
Read/write attribute specifying the number of rows to fetch from the backend at each network roundtrip during iteration on a named cursor. The default is 2000.
New in version 2.4.
DB API extension
The itersize attribute is a Psycopg extension to the DB API 2.0.
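A sketch of streaming a large result through a named cursor with a custom itersize (the cursor name used here is arbitrary):

```python
def stream_query(conn, query, batch=5000):
    """Iterate a server-side (named) cursor; itersize controls how
    many rows each network roundtrip transfers during iteration."""
    cur = conn.cursor(name="stream_cur")  # hypothetical cursor name
    cur.itersize = batch
    cur.execute(query)
    try:
        for row in cur:
            yield row
    finally:
        cur.close()
```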
- rowcount¶

This read-only attribute specifies the number of rows that the last execute*() produced (for DQL statements like SELECT) or affected (for DML statements like UPDATE or INSERT).

The attribute is -1 in case no execute*() has been performed on the cursor or if the row count of the last operation cannot be determined by the interface.

Note: the DB API 2.0 interface reserves the right to redefine the latter case to have the object return None instead of -1 in future versions of the specification.
- rownumber¶

This read-only attribute provides the current 0-based index of the cursor in the result set, or None if the index cannot be determined.

The index can be seen as the index of the cursor in a sequence (the result set). The next fetch operation will fetch the row indexed by rownumber in that sequence.
- lastrowid¶

This read-only attribute provides the OID of the last row inserted by the cursor. If the table wasn't created with OID support or the last operation is not a single record insert, the attribute is set to None.

Note: PostgreSQL currently advises against creating OIDs on tables, and the default for CREATE TABLE is to not support them. The INSERT ... RETURNING syntax available from PostgreSQL 8.3 allows more flexibility.
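The RETURNING alternative mentioned in the note, sketched against the `test` table used in this page's examples (a serial `id` primary key column is assumed):

```python
def insert_returning_id(cur, num, data):
    """Insert a row and return its primary key via RETURNING,
    instead of relying on OIDs and lastrowid."""
    cur.execute(
        "INSERT INTO test (num, data) VALUES (%s, %s) RETURNING id",
        (num, data),
    )
    # RETURNING produces a one-row result set holding the new id.
    return cur.fetchone()[0]
```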
- query¶

Read-only attribute containing the body of the last query sent to the backend (including bound arguments) as a bytes string. None if no query has been executed yet:

```
>>> cur.execute("INSERT INTO test (num, data) VALUES (%s, %s)", (42, 'bar'))
>>> cur.query
"INSERT INTO test (num, data) VALUES (42, E'bar')"
```

DB API extension

The query attribute is a Psycopg extension to the DB API 2.0.
- statusmessage¶

Read-only attribute containing the message returned by the last command:

```
>>> cur.execute("INSERT INTO test (num, data) VALUES (%s, %s)", (42, 'bar'))
>>> cur.statusmessage
'INSERT 0 1'
```

DB API extension

The statusmessage attribute is a Psycopg extension to the DB API 2.0.
- cast(oid, s)¶

Convert a value from the PostgreSQL string representation to a Python object.

Use the most specific of the typecasters registered by register_type().

New in version 2.4.

DB API extension

The cast() method is a Psycopg extension to the DB API 2.0.
- tzinfo_factory¶

The time zone factory used to handle data types such as TIMESTAMP WITH TIME ZONE. It should be a tzinfo object. Default is datetime.timezone.

Changed in version 2.9: previously the default factory was psycopg2.tz.FixedOffsetTimezone.
- nextset()¶

This method is not supported (PostgreSQL does not have multiple data sets) and will raise a NotSupportedError exception.
- setoutputsize(size[, column])¶
This method is exposed in compliance with the DB API 2.0. It currently does nothing but it is safe to call it.
COPY-related methods

Efficiently copy data from file-like objects to the database and back. See Using COPY TO and COPY FROM for an overview.

DB API extension

The COPY command is a PostgreSQL extension to the SQL standard. As such, its support is a Psycopg extension to the DB API 2.0.

- copy_from(file, table, sep='\\t', null='\\\\N', size=8192, columns=None)¶

Read data from the file-like object file, appending it to the table named table.

- Parameters:
  - file – file-like object to read data from. It must have both read() and readline() methods.
  - table – name of the table to copy data into.
  - sep – columns separator expected in the file. Defaults to a tab.
  - null – textual representation of NULL in the file. The default is the two characters string \N.
  - size – size of the buffer used to read from the file.
  - columns – iterable with the names of the columns to import. The length and types should match the content of the file to read. If not specified, it is assumed that the entire table matches the file structure.

Example:

```
>>> f = StringIO("42\tfoo\n74\tbar\n")
>>> cur.copy_from(f, 'test', columns=('num', 'data'))
>>> cur.execute("select * from test where id > 5;")
>>> cur.fetchall()
[(6, 42, 'foo'), (7, 74, 'bar')]
```

Changed in version 2.0.6: added the columns parameter.

Changed in version 2.4: data read from files implementing the io.TextIOBase interface are encoded in the connection encoding when sent to the backend.

Changed in version 2.9: the table and fields names are now quoted. If you need to specify a schema-qualified table please use copy_expert().
- copy_to(file, table, sep='\\t', null='\\\\N', columns=None)¶

Write the content of the table named table to the file-like object file. See Using COPY TO and COPY FROM for an overview.

- Parameters:
  - file – file-like object to write data into. It must have a write() method.
  - table – name of the table to copy data from.
  - sep – columns separator expected in the file. Defaults to a tab.
  - null – textual representation of NULL in the file. The default is the two characters string \N.
  - columns – iterable with the names of the columns to export. If not specified, export all the columns.

Example:

```
>>> cur.copy_to(sys.stdout, 'test', sep="|")
1|100|abc'def
2|\N|dada
...
```

Changed in version 2.0.6: added the columns parameter.

Changed in version 2.4: data sent to files implementing the io.TextIOBase interface are decoded in the connection encoding when read from the backend.

Changed in version 2.9: the table and fields names are now quoted. If you need to specify a schema-qualified table please use copy_expert().
- copy_expert(sql, file, size=8192)¶

Submit a user-composed COPY statement. The method is useful to handle all the parameters that PostgreSQL makes available (see the COPY command documentation).

- Parameters:
  - sql – the COPY statement to execute.
  - file – a file-like object to read or write (according to sql).
  - size – size of the read buffer to be used in COPY FROM.

The sql statement should be in the form COPY table TO STDOUT to export table to the file object passed as argument, or COPY table FROM STDIN to import the content of the file object into table. If you need to compose a COPY statement dynamically (because table, fields, or query parameters are in Python variables) you may use the objects provided by the psycopg2.sql module.

file must be a readable file-like object (as required by copy_from()) for the sql statement COPY ... FROM STDIN, or a writable one (as required by copy_to()) for COPY ... TO STDOUT.

Example:

```
>>> cur.copy_expert("COPY test TO STDOUT WITH CSV HEADER", sys.stdout)
id,num,data
1,100,abc'def
2,,dada
...
```

New in version 2.0.6.

Changed in version 2.4: files implementing the io.TextIOBase interface are dealt with using Unicode data instead of bytes.
Interoperation with other C API modules

- pgresult_ptr¶

Return the cursor's internal PGresult* as an integer. Useful to pass the libpq raw result structure to C functions, e.g. via ctypes:

```
>>> import ctypes
>>> import ctypes.util
>>> libpq = ctypes.pydll.LoadLibrary(ctypes.util.find_library('pq'))
>>> libpq.PQcmdStatus.argtypes = [ctypes.c_void_p]
>>> libpq.PQcmdStatus.restype = ctypes.c_char_p
>>> curs.execute("select 'x'")
>>> libpq.PQcmdStatus(curs.pgresult_ptr)
b'SELECT 1'
```

New in version 2.8.