The WbExport command exports the contents of the database into external files, e.g. plain text ("CSV") or XML.
The WbExport command can be used like any other SQL command (such as UPDATE or INSERT). This includes the usage in scripts that are run in batch mode.
The WbExport command exports either the result of the next SQL statement (which has to produce a result set) or the content of the table(s) specified with the -sourceTable parameter.
The data is directly written to the output file and not loaded into memory. The export file(s)
can be compressed ("zipped") on the fly. WbImport can
import the zipped (text or XML) files directly without the need to unzip them.
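For example (a minimal sketch; the table person and its columns are only assumptions for illustration), both usages look like this:

-- Export the result of the following query (the SELECT must produce a result set)
WbExport -type=text -file='c:/temp/person_active.txt' -delimiter=',' -header=true;
SELECT id, firstname, lastname FROM person WHERE active = 1;

-- Export the complete table directly, without a separate SELECT
WbExport -type=text -file='c:/temp/person.txt' -delimiter=',' -header=true -sourceTable=person;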
If you want to save the data that is currently displayed in the result area into an external file, please use the Save Data as feature. You can also use the Database Explorer to export multiple tables.
You can also export the result of a SELECT statement by selecting the statement in the editor and then choosing the corresponding menu item to export the query result.
When exporting data into a Text or XML file, the content of BLOB columns is written into separate files, one file for each column of each row. Text files that are created this way can most probably only be imported using SQL Workbench/J, as the main file will contain the filename of the BLOB data file instead of the actual BLOB data. The only other application that I know of that can handle this type of import is Oracle's SQL*Loader utility. If you run the text export together with the parameter -formatFile=oracle, a control file will be created that contains the appropriate definitions to read the BLOB data from the external files.
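As a minimal sketch (the table person_files and its BLOB column are only assumptions), such an export together with the SQL*Loader control file could look like this:

-- Writes one file per BLOB value and creates a SQL*Loader control file for the text file
WbExport -type=text -file='c:/temp/person_files.txt' -delimiter=',' -header=true -formatFile=oracle -sourceTable=person_files;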
Note: Oracle's BFILE, PostgreSQL's large object and SQL Server's filestream types are not real BLOB datatypes (from a JDBC point of view) and are currently not exported by WbExport. Only columns that are reported as BLOB, BINARY, VARBINARY or LONGVARBINARY in the column "JDBC type" in the DbExplorer will be exported correctly into a separate file.
WbExport is designed to directly write the rows that are retrieved from the database to the export file without buffering them in memory (except for the XLS and XLSX formats).
Some JDBC drivers (e.g. PostgreSQL, jTDS and the Microsoft driver) read the full result obtained from the database into memory. In that case, exporting large results might still require a lot of memory. Please refer to the chapter Common problems for details on how to configure the individual drivers if this happens to you.
If you need to export data for Microsoft Excel, additional libraries are required to write the native Excel formats (xls and the new xlsx introduced with Office 2007). Exporting the "SpreadsheetML" format introduced with Office 2003 does not require additional libraries.
SQL Workbench/J supports three different Excel file formats:
Value for -type parameter | Description
---|---
xlsm | This is the plain XML ("SpreadsheetML") format introduced with Office 2003. This format is always available and does not need any additional libraries. Files with this format should be saved with the extension xml (otherwise Office is not able to open them properly).
xls | This is the old binary format used by Excel 97 up to 2003. To export this format, additional libraries are required. Files with this format should be saved with the extension xls.
xlsx | This is the "new" XML format (OfficeOpen XML) introduced with Office 2007. To create this file format, additional libraries are required. If those libraries are not available, this format will not be listed in the export dialog ("Save data as..."). Files with this format should be saved with the extension xlsx.
For a comparison of the different Microsoft Office XML formats please refer to: http://en.wikipedia.org/wiki/Microsoft_Office_XML_formats
You can download all required POI libraries as a single archive from the SQL Workbench/J home page:
http://www.sql-workbench.net/poi-add-on3.zip. After downloading the archive, unzip it
into the directory where sqlworkbench.jar
is located.
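Once the libraries are in place, the xlsx format can be used like any other export type. A minimal sketch (file and table names are only examples):

-- Requires the POI add-on libraries described above
WbExport -type=xlsx -file='c:/temp/person.xlsx' -header=true -autoColWidth=true -sourceTable=person;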
Note: to write the file formats XLS and XLSX the entire file needs to be built in memory. When exporting results with a large number of rows this will require a substantial amount of memory.
When you use the WbExport command together with a SELECT query, the "Max. Rows" setting will be ignored for the export.
Parameter | Description
---|---
-type |
Possible values:
Defines the type of the output file. For export types that generate UPDATE statements, the table needs to have key columns defined, or you have to define the key columns manually using the -keyColumns parameter.
This parameter supports auto-completion. | |||
-file |
The output file to which the exported data is written.
This parameter is ignored if | |||
-createDir |
If this parameter is set to true , SQL Workbench/J will create any
needed directories when creating the output file.
| |||
-sourceTable |
Defines a list of tables to be exported. If this
switch is used,
If you want to export tables from a different user or schema you can use a schema name combined with a wildcard, e.g. -sourceTable=other_schema.*. This parameter supports auto-completion. |
-schema |
Define the schema in which the table(s) specified with This parameter supports auto-completion. | |||
-types |
Selects the object types to be exported. By default only TABLEs are exported. If you want to export the content of VIEWs or SYNONYMs as well, you have to specify all types with this parameter.
This parameter supports auto-completion. | |||
-excludeTables |
The tables listed in this parameter will not be exported. This can be used when all but a few tables should be exported from a schema. First all tables specified through -sourceTable are evaluated; the tables listed in -excludeTables are then removed from that list.
This parameter supports auto-completion. | |||
-sourceTablePrefix |
Define a common prefix for all tables listed with
When this parameter is specified, the generated statement for exporting the table selects from the table name with the prefix prepended.
This can be used when exporting views on tables, when for each table e.g. a view with a certain prefix
exists (e.g. table This parameter can not be used to select tables from a specific schema. The prefix will be prepended to the table's name. | |||
-outputDir |
When using the -sourceTable switch
with multiple tables, this parameter is mandatory and defines
the directory where the generated files should be stored.
| |||
-continueOnError | When exporting more than one table, this parameter controls whether the whole export will be terminated if an error occurs during export of one of the tables. | |||
-encoding |
Defines the encoding in which the file should be
written. Common encodings are ISO-8859-1, ISO-8859-15, UTF-8 (or UTF8).
To get a list of available encodings, execute WbExport with the parameter -showEncodings.
This parameter supports auto-completion and if it is invoked for this parameter, it will show a list of encodings
defined through the configuration property | |||
-showEncodings | Displays the encodings supported by your Java version and operating system. If this parameter is present, all other parameters are ignored. | |||
-lineEnding |
Possible values are:
Defines the line ending to be used for XML or text files.
The default line ending used depends on the platform where SQL Workbench/J is running. This parameter supports auto-completion. | |||
-header |
Possible values: If this parameter is set to true, the header (i.e. the column names) are placed into the first line of output file. The default is to not create a header line. You can define the default value for this parameter in the file workbench.settings. This parameter is valid for text and spreadsheet (OpenDocument, Excel) exports. | |||
-compress |
Selects whether the output file should be compressed
and put into a ZIP archive. An archive will be created with the name of the specified output file
but with the extension
When exporting multiple tables using the | |||
-tableWhere |
Defines an additional | |||
-clobAsFile |
Possible values: For SQL, XML and Text export this controls how the contents of CLOB fields are exported. Usually the CLOB content is put directly into the output file When generating SQL scripts with WbExport this can be a problem as not all DBMS can cope with long character literals (e.g. Oracle has a limit of 4000 bytes). When this parameter is set to true, SQL Workbench/J will create one file for each CLOB column value. This is the same behaviour as with BLOB columns.
Text files that are created with this parameter set to true, will
contain the filename of the generated output file instead of the
actual column value. When importing such a file using
All CLOB files are written using the encoding specified with the -encoding parameter. | |||
-lobIdCols |
When exporting CLOB or BLOB columns as external files, the filename with the
LOB content is generated using the row and column number for the currently
exported LOB column (e.g. data_r15_c4.data). If you prefer to have the value
of a unique column combination as part of the file name, you can specify
those columns using the | |||
-lobsPerDirectory |
When exporting CLOB or BLOB columns as external files, the generated files
can be distributed over several directories to avoid an excessive number of
files in a single directory. The parameter The directories will be created if needed, but if the directories already exist (e.g. because of a previous export) their contents will not be deleted! | |||
-extensionColumn |
When exporting CLOB or BLOB columns as external files, the extension of the generated filenames can be defined based on a column of the result set. If the exported table contains more than one type of BLOBs (e.g. JPEG, GIF, PDF) and your table stores the information to define the extension based on the contents, this can be used to re-generate proper filenames. This parameter only makes sense if exactly one BLOB column of a table is exported. | |||
-filenameColumn |
When exporting CLOB or BLOB columns as external files, the complete filename can be taken from a column of the result set (instead of dynamically creating a new file name based on the row and column numbers). This parameter only makes sense if exactly one BLOB column of a table is exported. | |||
-append |
Possible values: Controls whether results are appended to an existing file, or overwrite an existing file. This parameter is only supported for text, SQL, XLS and XLSX export types. When used with XLS or XLSX exports, a new worksheet will be created. | |||
-dateFormat | The date format to be used when writing date columns into the output file. This parameter is ignored for SQL exports. | |||
-timestampFormat | The format to be used when writing datetime (or timestamp) columns into the output file. This parameter is ignored for SQL exports. | |||
-blobType |
Possible values: This parameter controls how BLOB data will be put into the generated SQL statements. By default no conversion will be done, so the actual value that is written to the output file depends on the JDBC driver's implementation of the Blob interface. It is only valid for Text, SQL and XML exports, although not all parameter values make sense for all export types.
The type
The type
The types
Two additional SQL literal formats are available that can be used together with PostgreSQL:
When using
The parameter value
The parameter value
The parameter value This parameter supports auto-completion. | |||
-replaceExpression -replaceWith |
Using these parameters, arbitrary text can be replaced during the export. The search and replace is done on the "raw" data retrieved from the database before the values are converted to the corresponding output format. In particular this means replacing is done before any character escaping takes place. Because the search and replace is done before the data is converted to the output format, it can be used for all export types (text, xml, Excel, ...).
Only character columns ( | |||
-trimCharData |
Possible values:
If this parameter is set to true, values from | |||
-showProgress |
Valid values:
Controls the update frequency in the status bar (when running in GUI mode). The default is that every 10th row is reported. To disable the display of the progress, specify a value of 0 (zero) or the value false. |
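As an illustration of combining several of the general parameters, the following sketch (schema and table names are only placeholders) exports every table of a schema except two log tables into one text file per table:

-- Wildcards select all tables of the schema; the listed tables are removed from that list
WbExport -type=text -outputDir='c:/temp/export' -header=true -delimiter=';' -sourceTable=my_schema.* -excludeTables=audit_log,error_log -continueOnError=true;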
Parameter | Description
---|---
-delimiter | The given string sequence will be placed between two columns. The default is a tab character (-delimiter=\t). | |||
-rowNumberColumn |
If this parameter is specified with a value, the value defines the name
of an additional column that will contain the row number. The row number will always be
exported as the first column. If the text file is not created with
a header (-header=false ) a value must still be provided to enable
the creation of the additional column.
| |||
-quoteChar |
The character (or sequence of characters) to be used
to enclose text (character) data if the delimiter is
contained in the data. By default quoting is disabled until a quote character
is defined. To set the double quote as the quote character
you have to enclose it in single quotes: | |||
-quoteCharEscaping |
Possible values: Defines how quote characters that appear in the actual data are written to the output file. If no quote character has been defined using the -quoteChar switch, this option is ignored.
If
If This parameter supports auto-completion. | |||
-quoteAlways |
Possible values:
If quoting is enabled (via -quoteChar), all character data will be quoted regardless of whether the delimiter occurs in the value. NULL values will not be quoted even if this parameter is set to true. This is useful to distinguish between NULL values and empty strings. | |||
-decimal |
The decimal symbol to be used for numbers. The default is a dot, e.g. the number Pi would be written as 3.14159. When using -decimal=',' the number Pi would be written as 3,14159.
| |||
-maxDigits |
Defines a maximum number of decimal digits. If this parameter is not specified decimal values are exported according to the global formatting settings. Specifying a value of 0 (zero) results in exporting as many digits as available. | |||
-fixedDigits |
Defines a fixed number of decimal digits. If this parameter is not specified decimal values are exported
according to the
If this parameter is specified, all decimal values are exported with
the defined number of digits. If
This parameter is ignored if | |||
-escapeText |
This parameter controls the escaping of non-printable or non-ASCII characters. Valid options are
This will write a "short-hand" representation of control characters (e.g. If character escaping is enabled, then the quote character will be escaped inside quoted values and the delimiter will be escaped inside non-quoted values. The delimiter could also be escaped inside a quoted value if the delimiter falls into the selected escape range (e.g. a tab character).
To import a text file with escaped values using WbImport,
the This parameter supports auto-completion. | |||
-nullString |
Defines the string value that should be written into the output file for a | |||
-formatFile |
Possible values: This parameter controls the creation of a control file for the bulk load utilities of some DBMS.
You can specify more than one format (separated by a comma). In that case one control file for each format will be created.
This parameter supports auto-completion. |
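A sketch that combines several of the text-specific parameters (file and table names are only examples):

-- Always quote character data, write (NULL) for NULL values and use a comma as decimal separator
WbExport -type=text -file='c:/temp/orders.txt' -delimiter=';' -header=true -quoteChar='"' -quoteAlways=true -nullString='(NULL)' -decimal=',' -dateFormat='yyyy-MM-dd' -sourceTable=orders;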
Parameter | Description |
---|---|
-table |
The given tablename will be put into the <table> tag as an attribute.
|
-decimal | The decimal symbol to be used for numbers. The default is a dot (e.g. 3.14152) |
-useCDATA |
Possible values:
Normally all data written into the xml file will
be written with escaped XML characters (e.g. < will be written as &lt;). If you don't want that escaping, set -useCDATA=true.
With
With
|
-xsltParameter |
A list of parameters (key/value pairs) that should be passed to the XSLT processor. When using e.g. the wbreport2liquibase.xslt
stylesheet, the value of the author attribute can be set using -xsltParameter="authorName=42" . This parameter
can be provided multiple times for multiple parameters, e.g. when using wbreport2pg.xslt : -xsltParameter="makeLowerCase=42" -xsltParameter="useJdbcTypes=true"
|
-stylesheet | The name of the XSLT stylesheet that should be used to transform the SQL Workbench/J specific XML file into a different format. If -stylesheet is specified, -xsltoutput has to be specified as well. |
-xsltOutput |
This parameter defines the output file for the XSLT transformation specified through the
-styleSheet parameter |
-verboseXML |
Possible values: This parameter controls the tags that are used in the XML file and minor formatting features. The default is -verboseXML=true and this will generate more readable tags and formatting. However the overhead imposed by this is quite high. Using -verboseXML=false uses shorter tag names (not longer then two characters) and does put more information in one line. This output is harder to read for a human but is smaller in size which could be important for exports with large result sets. |
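As an example of the XML-specific parameters, the following sketch exports a table to XML and transforms the result with an XSLT stylesheet (the stylesheet name my_transform.xslt is only a placeholder):

-- -xsltOutput is required when -stylesheet is specified
WbExport -type=xml -file='c:/temp/person.xml' -stylesheet=my_transform.xslt -xsltOutput='c:/temp/person_transformed.txt' -sourceTable=person;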
Parameter | Description |
---|---|
-table | Defines the tablename to be used for the UPDATE or INSERT statements. This parameter is required if the SELECT statement has multiple tables in the FROM list. |
-charfunc |
If this parameter is given, any non-printable character in a text/character column will be replaced with a call to the given function with the ASCII value as the parameter. If -charfunc=chr is given (e.g. for Oracle syntax), a CR (=13) inside a character column will be replaced with a concatenated call to chr(13). This setting will affect ASCII values from 0 to 31. |
-concat |
If the parameter -charfunc is used
SQL Workbench/J will concatenate the individual pieces using
the ANSI SQL operator for string concatenation. In case
your DBMS does not support the ANSI standard (e.g. MS ACCESS)
you can specify the operator to be used: -concat=+
defines the plus sign as the concatenation operator. |
-sqlDateLiterals |
Possible values: This parameter controls the generation of date or timestamp literals. By default literals that are specific for the current DBMS are created. You can also choose to create literals that comply with the JDBC specification or ANSI SQL literals for dates and timestamps.
You can define the default literal format to be used for the WbExport command in the options dialog. This parameter supports auto-completion. |
-commitEvery |
A numeric value which defines the number of statements after which a COMMIT is written to the generated script. -commitEvery=100 will create a COMMIT after every 100th statement. If this is not specified, one COMMIT will be added at the end of the script. |
-createTable |
Possible values:
If this parameter is set to true, the necessary Note that this will only create the table including its primary key. This will not create other constraints (such as foreign key or unique constraints) nor will it create indexes on the target table. |
-useSchema |
Possible values:
If this parameter is set to |
-keyColumns |
A comma separated list of column names that occur in the table or result set that should be used as the key columns for the generated UPDATE statements. If the table does not have key columns, or the source SELECT statement uses a join over several tables, or you do not want to use the key columns defined in the database, this parameter can be used to define the key columns to be used for the UPDATE statements. It overrides any key columns defined on the base table of the SELECT statement. |
-includeAutoIncColumns |
Possible values: Default value: defined by global option
With this parameter you can override the global option to include identity and auto-increment columns in generated INSERT statements. |
-includeReadOnlyColumns |
Possible values: Default value:
By default, columns that are marked as read-only by the JDBC driver or are defined as a computed column are not part of generated SQL statements. By setting this parameter to true, those columns will be included in the generated statements. |
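Two minimal sketches for SQL exports (file, table and key column names are only placeholders):

-- Generate INSERT statements, including a CREATE TABLE statement for the target table
WbExport -type=sqlinsert -file='c:/temp/person_insert.sql' -createTable=true -sourceTable=person;

-- Generate UPDATE statements, defining the key column explicitly
WbExport -type=sqlupdate -file='c:/temp/person_update.sql' -keyColumns=id -sourceTable=person;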
Parameter | Description
---|---
-title | The name to be used for the worksheet | |||
-infoSheet |
Possible values: Default value: If set to true, a second worksheet will be created that contains the generating SQL of the export. For ods exports, additional export information is available in the document properties. | |||
-fixedHeader |
Possible values: Default value: If set to true, the header row will be "frozen" in the Worksheet so that it will not scroll out of view. | |||
-autoFilter |
Possible values: Default value: If set to true, the "auto-filter" feature for the column headers will be turned on. | |||
-autoColWidth |
Possible values: Default value: If set to true, the width of the columns is adjusted to the width of the content. | |||
-targetSheet -targetSheetName |
Possible values: any valid index or name for a worksheet in an existing Excel file This parameter is only available for XLS and XLSX exports When using this parameter, the data will be written into an existing file and worksheet without changing the formatting in the spreadsheet. No formatting is applied as it is assumed that the target worksheet is properly set up.
If this parameter is used, the target file specified with the
If
These parameters support auto-completion if the | |||
-offset |
Possible values: either a column/row combination or a cell reference in Excel format ("D3") This parameter is only available for XLS and XLSX exports When this parameter is specified the data is written starting at the specified location. No data will be written above or to the left of the specified cell.
The values can be given as a numeric row column combination, e.g. |
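A sketch that writes a query result into an existing workbook, assuming a file report.xlsx with a worksheet named Data already exists (file, sheet and query are placeholders):

-- Write the result starting at cell B2 of the existing worksheet "Data"
WbExport -type=xlsx -file='c:/temp/report.xlsx' -targetSheetName='Data' -offset=B2;
SELECT * FROM sales_summary;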
Parameter | Description |
---|---|
-createFullHTML |
Possible values: Default value: If this is set to true, a full HTML page (including <html>, <body> tags) will be created. |
-escapeHTML |
Possible values: Default value: If this is set to true, values inside the data will be escaped (e.g. the < sign will be written as &lt;) so that they are rendered properly in an HTML page. If your data contains HTML tags that should be written as real tags to the output, this parameter must be false. |
-title | The title for the HTML page (put into the <title> tag of the generated output) |
-preDataHtml |
With this parameter you can specify an HTML chunk that will be added before the export data is written to the output file. This can be used e.g. to create a heading for the data. The value will be written to the output file "as is". Any escaping of the HTML must be provided in the parameter value. |
-postDataHtml |
With this parameter you can specify an HTML chunk that will be added after the data has been written to the output file. |
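A sketch for an HTML export that adds a heading before and a footer after the data (the HTML snippets, file and table names are only examples):

-- The pre/post HTML is written "as is" around the generated table
WbExport -type=html -file='c:/temp/person.html' -createFullHTML=true -title='Person list' -preDataHtml='<h1>Person list</h1>' -postDataHtml='<p>Generated by WbExport</p>' -sourceTable=person;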
Parameter | Description |
---|---|
-nullString |
Defines the string value that should be written into the output file for a |
The WbExport
command supports compressing of the generated output files.
This includes the "main" export file as well as any associated LOB files.
When using WbImport
you can import
the data stored in the archives without unpacking them. Simply specify the archive name
with the -file
parameter. SQL Workbench/J will detect that the input file
is an archive and will extract the information "on the fly". Assume the following export
command:
WbExport -type=text -file=/home/data/person.txt -compress=true -sourceTable=person;
This command will create the file /home/data/person.zip
that will contain the
specified person.txt
. To import this export into the table employee, you can
use the following command:
WbImport -type=text -file=/home/data/person.zip -table=employee;
Assuming the PERSON
table had a BLOB column (e.g. a picture of the person),
the WbExport
command would have created an additional file called
person_blobs.zip
that would contain all BLOB data. The WbImport
command will automatically read the BLOB data from that archive.
WbExport -type=text -file='c:/data/data.txt' -delimiter='|' -decimal=',' -sourcetable=data_table;
Will create a text file with the data from data_table
.
Each column will be separated with the character |. Each fractional number
will be written with a comma as the decimal separator.
WbExport -type=text -outputDir='c:/data' -delimiter=';' -header=true -sourcetable=table_1, table_2, table_3, table_4;
This will export each specified table into a text file in the specified directory. The files are named "table_1.txt"
,
"table_2.txt"
and so on. To export all tables of a schema, the -sourceTable
parameter supports wildcards:
WbExport -type=text -outputDir='c:/data' -delimiter=';' -header=true -sourcetable=my_schema.*;
Limiting the export data when using a table based export, can be done using the -tableWhere
argument.
This requires that the specified WHERE
condition is valid for all tables, e.g. when every table has a
column called MODIFIED_DATE
WbExport -type=text -outputDir='c:/data' -delimiter=';' -header=true -tableWhere="WHERE modified_date > DATE '2009-04-02'" -sourcetable=table_1, table_2, table_3, table_4;
This will add the specified WHERE clause to each SELECT, so that only rows are exported that were changed after April 2nd, 2009.
To export the result of a query (rather than a table) using a specific date format for date columns:
WbExport -type=text -file='c:/data/data.txt' -delimiter=',' -decimal=',' -dateFormat='yyyy-MM-dd'; SELECT * FROM data_table;
To export all tables from the current connection into tab-separated files and compress the files, you can use the following statement:
WbExport -type=text -outputDir=c:/data/export -compress=true -sourcetable=*;
This will create one zip file for each table containing the exported data as a text file. If a table contains BLOB columns, the blob data will be written into a separate zip file.
The files created by the above statement can be imported into another database using the following command:
WbImport -type=text -sourceDir=c:/data/export -extension=zip -checkDependencies=true;
To generate a file that contains INSERT
statements that can be
executed on the target system, the following command can be used:
WbExport -type=sqlinsert -file='c:/data/newtable.sql' -table=newtable; SELECT * FROM table1, table2 WHERE table1.column1 = table2.column1;
will create a SQL script that contains statements like
INSERT INTO newtable (...) VALUES (...);
and the list of columns will be all columns that are defined by the SELECT statement.
If the parameter -table is omitted, the creation of SQL INSERT statements is only possible if the SELECT is based on a single table (or view).
Note: to extract the contents of CLOB columns you have to specify the parameter -clobAsFile.
When exporting tables that contain BLOB columns, one file for each blob column and row
will be created. By default the generated filenames will contain the row and column number
to make the names unique. You can however control the creation of filenames when exporting
LOB columns using several different approaches. If a unique name is stored within the table
you can use the -filenameColumn
parameter to generate the filenames based on the contents of that column:
WbExport -file='c:/temp/blob_table.txt' -type=text -delimiter=',' -filenameColumn=file_name;
Will create the file blob_table.txt
and for each blob a file
where the name is retrieved from the column BLOB_TABLE.FILE_NAME
.
Note that if the filename column is not unique, blob files will be overwritten without
an error message.
You can also base the export on a SELECT statement and then generate the filename using several columns:
WbExport -file='c:/temp/blob_table.txt' -type=text -delimiter=',' -filenameColumn=fname; SELECT blob_column, 'data_'||id_column||'_'||some_name||'.'||type_column as fname FROM blob_table;
This example assumes that the following columns are part of the table blob_table: id_column, some_name and type_column. The filenames for the blob of each row will be taken from the computed column fname. To be able to reference the column in the WbExport command you must give it an alias.
This approach assumes that only a single blob column is exported. When exporting multiple blob columns from a single table, it's only possible to create unique filenames using the row and column number (the default behaviour).
When writing the export data, values in character columns can be replaced using regular expressions.
WbExport -file='/path/to/export.txt' -type=text -replaceExpression='(\n|\r\n)' -replaceWith='*' -sourceTable=export_table;
This will replace each newline (either Windows' CR/LF or Unix LF) with the character *.
The value for -replaceExpression defines a regular expression. In the example above, multiple new lines will be replaced with multiple * characters. To replace consecutive new lines with a single * character, use the regular expression -replaceExpression='(\n|\r\n)+'. (Note the + sign after the brackets.)