Before a SQL panel (or the application) is closed, SQL Workbench/J will check if the current connection has any uncommitted changes (e.g. an INSERT without a COMMIT).
This is done by checking the pg_locks system view. The information in this view might not always be 100% correct and can report open transactions even though there are none.
Checking for uncommitted changes can be controlled through the connection profile.
WbImport can make use of PostgreSQL's COPY API to send client-side files to the server. The SQL statement COPY from stdin does not work when executed using the JDBC driver, but WbImport can use the COPY API when the parameter -usePgCopy is specified.
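A minimal sketch of such an import (the file name, table name and delimiter are only placeholders, the remaining parameters are regular WbImport parameters):
WbImport -usePgCopy
         -type=text
         -file=/tmp/person.txt
         -table=person
         -delimiter=',';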
If username, password or both are empty in a connection profile, SQL Workbench/J will try to use the information stored in the password file or the environment variables (PGPASSWORD, PGUSER) the same way as libpq uses them.
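For reference, a libpq password file (~/.pgpass on Linux/macOS, %APPDATA%\postgresql\pgpass.conf on Windows) contains one line per connection in the following format; the second line shows purely hypothetical values:
hostname:port:database:username:password
localhost:5432:mydb:arthur:secretpassword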
PostgreSQL marks a complete transaction as failed if only a single statement fails. In such a case the transaction cannot be committed, e.g. consider the following script:
INSERT INTO person (id, firstname, lastname) VALUES (1, 'Arthur', 'Dent');
INSERT INTO person (id, firstname, lastname) VALUES (2, 'Zaphod', 'Beeblebrox');
INSERT INTO person (id, firstname, lastname) VALUES (2, 'Ford', 'Prefect');
COMMIT;
As the ID column is the primary key, the third insert will fail with a unique key violation. Because the whole transaction is now marked as failed, PostgreSQL will not let you commit and thus persist the first two INSERTs.
This problem can only be solved by using a SAVEPOINT before and after each statement. In case a statement fails, the transaction can be rolled back to the state before the statement and the remainder of the script can be executed.
Doing this manually is quite tedious, so you can tell SQL Workbench/J to do this automatically for you by setting the properties:
workbench.db.postgresql.ddl.usesavepoint=true
workbench.db.postgresql.sql.usesavepoint=true
in the file workbench.settings. If this is enabled, SQL Workbench/J will set a SAVEPOINT before running each statement and will release the savepoint after the statement. If the statement fails, a rollback to the savepoint will be issued that marks the transaction as "clean" again. So in the above example (with sql.usesavepoint set to true), the last statement would be rolled back automatically but the first two INSERTs can be committed (this also requires that the "Ignore errors" option is enabled).
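For illustration, the automatic handling is roughly equivalent to wrapping each statement as follows; the savepoint name is internal to SQL Workbench/J and only shown here as an example:
SAVEPOINT wb_stmt;
INSERT INTO person (id, firstname, lastname) VALUES (2, 'Ford', 'Prefect');
-- if the statement succeeds:
RELEASE SAVEPOINT wb_stmt;
-- if the statement fails:
ROLLBACK TO SAVEPOINT wb_stmt;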
If you want to use the modes update/insert or insert/update for WbImport, you should also add the property:
workbench.db.postgresql.import.usesavepoint=true
to enable the usage of savepoints during imports. This setting also affects the WbCopy command.
This is not necessary when using the mode upsert or insertIgnore with Postgres 9.5 or later.
You can also use the parameter -useSavepoint for the WbImport and WbCopy commands to control the use of savepoints for each import.
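For example, to run an import in insert/update mode with savepoints enabled (file and table names are placeholders; -mode=insert,update corresponds to the insert/update mode described above):
WbImport -file=/tmp/person.txt
         -type=text
         -table=person
         -mode=insert,update
         -useSavepoint=true;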
Note: Using savepoints can slow down the import substantially.
Postgres has a very strict transaction concept which means that even a simple SELECT statement starts a transaction. This has some implications on concurrency; the most obvious one is that tables which are "used" in a transaction (because a query has retrieved some values) cannot be modified using DDL statements (ALTER TABLE). Connections to the server that do this have the status idle in transaction as opposed to just "idle".
There are two ways to prevent this: end the transaction manually by running a rollback or commit when the query is finished, or let SQL Workbench/J do this automatically. SQL Workbench/J can be configured to use the second approach by setting the configuration property workbench.db.postgresql.transaction.readonly.end to one of the following values: never, rollback or commit.
The feature is disabled if the value never is configured. The other two values control how the transaction is ended: either by running a rollback or a commit.
The statement to end the transaction will only be sent to the server if the current transaction has not modified anything in the database. Once a real change has been made by running a DML or DDL statement, nothing will be sent automatically.
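For example, to automatically end read-only transactions with a rollback, add the following line to workbench.settings:
workbench.db.postgresql.transaction.readonly.end=rollback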
Before a SQL panel (or the application) is closed, SQL Workbench/J will check if the current connection has any uncommitted changes (e.g. an INSERT without a COMMIT).
This is done by checking the V$TRANSACTION system view.
Note: By default a regular user does not have the SELECT privilege on V$TRANSACTION, please grant the privilege before enabling this feature.
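The privilege can be granted by a DBA on the underlying view V_$TRANSACTION (the user name is only a placeholder):
GRANT SELECT ON sys.v_$transaction TO arthur;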
Checking for uncommitted changes can be controlled through the connection profile.
SQL Workbench/J supports a mode similar to the "autotrace" mode in SQL*Plus. The command to turn on autotrace is the same as in SQL*Plus and supports the same options. For details see the description of the SET command.
The current user needs to have the PLUSTRACE role in order to be able to see statement statistics (which is required by SQL*Plus as well). The PLUSTRACE role grants the SELECT privilege on the system views V$SESSTAT, V$STATNAME and V$MYSTAT. The role is not required for the traceonly explain option.
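If the PLUSTRACE role does not exist, it is typically created by running the plustrce.sql script shipped with SQL*Plus; it can then be granted to a user (the user name is only a placeholder):
GRANT PLUSTRACE TO arthur;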
As an extension to the Oracle syntax, SQL Workbench/J supports the keyword realplan as a substitute for explain. In that case the execution plan is also displayed, but not by using EXPLAIN PLAN: instead the actual execution plan is retrieved via dbms_xplan.display_cursor(). In order to use that package, the SQL to be executed will be changed by SQL Workbench/J. It will prepend a unique identifier so that the SQL can be found again in Oracle's system views, and it will add the gather_plan_statistics hint to the statement in order to get more detailed statistics in the execution plan.
In order to see the "real" execution plan, use set autotrace traceonly realplan
instead
of set autotrace traceonly explain
.
When using statistics together with explain or realplan, SQL Workbench/J will have to retrieve the generated SQL_ID in order to get the execution plan using dbms_xplan.display_cursor(). To use that function the SQL_ID is required, which is retrieved from V$SQL using a unique comment that is added to the SQL statement before it is sent to the database. Querying V$SQL based on the column SQL_TEXT is quite an expensive operation and might create unwanted latch contention on the server. If you want to avoid that overhead, do not use the statistics option when also displaying the execution plan.
Show statistics without retrieving the actual data:
set autotrace traceonly statistics
Retrieve the data and show statistics:
set autotrace on statistics
Display the statistics and the execution plan but do not retrieve the data:
set autotrace traceonly explain statistics
Display the statistics and the actual execution plan but do not retrieve the data:
set autotrace traceonly realplan statistics
SQL Workbench/J supports most of the parameters and options that the SHOW command from SQL*Plus does.
SHOW option | Description |
---|---|
ERRORS | Displays errors from the last PL/SQL compilation. |
PARAMETERS | Displays configuration parameters. Unlike SQL*Plus you can supply multiple parameters separated with a comma. As with SQL*Plus, you need the SELECT privilege on V_$PARAMETER to use this option. |
SGA | Displays memory information. As with SQL*Plus, you need the SELECT privilege on V_$SGA to use this option. |
SGAINFO | Displays extended memory information not available in SQL*Plus. |
RECYCLEBIN | Shows the content of the recyclebin. |
USER | Shows the current user. |
AUTOCOMMIT | Shows the state of the autocommit property. |
LOGSOURCE | Displays the location of the archive logs. |
EDITION | Shows the edition of the current database. |
CON_ID | Displays the id of the current container database (only for 12c). |
PDBS | Displays the list of pluggable databases (only for 12c). |
SQL Workbench/J uses the information returned by the JDBC driver to re-create the source of database objects (tables, views, ...). The source generated this way will not always match the source generated by Oracle.
The use of DBMS_METADATA for object source retrieval is controlled by configuration properties. The property workbench.db.oracle.use.dbmsmeta can be used to control the use for all object types. When set to true, the source for all objects will be retrieved using DBMS_METADATA.
The use of DBMS_METADATA can also be controlled for a specific object type by appending the type name to the property name workbench.db.oracle.use.dbmsmeta. The following types can be configured:
- workbench.db.oracle.use.dbmsmeta.table
- workbench.db.oracle.use.dbmsmeta.mview (for MATERIALIZED VIEWs)
- workbench.db.oracle.use.dbmsmeta.index
- workbench.db.oracle.use.dbmsmeta.view
- workbench.db.oracle.use.dbmsmeta.sequence
- workbench.db.oracle.use.dbmsmeta.synonym
- workbench.db.oracle.use.dbmsmeta.procedure (includes packages)
- workbench.db.oracle.use.dbmsmeta.trigger
- workbench.db.oracle.use.dbmsmeta.constraint (for FK and PK constraints)

The value of a specific object type overrides the global setting.
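For example, to retrieve the source of triggers and constraints through DBMS_METADATA but keep the JDBC based source generation for all other object types, a configuration like the following could be used in workbench.settings:
workbench.db.oracle.use.dbmsmeta=false
workbench.db.oracle.use.dbmsmeta.trigger=true
workbench.db.oracle.use.dbmsmeta.constraint=true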