Конференция "Базы" » Выгрузить таблицу в table.dbf. [PostgresSQL]
 
  • Ламот © (05.12.08 11:15) [0]
    Good day! )
    I couldn't find an answer on the net quickly, so I'll try asking here: I need to export one table from a Postgres database into a dbf file. Could you please tell me how?! ))

    Thanks in advance.
  • Jeer © (05.12.08 11:33) [1]
    EMS manager ?
  • Ламот © (05.12.08 11:46) [2]
    Jeer ©   (05.12.08 11:33) [1]
    EMS manager ?

    - Maybe, I don't know. ))
    I'll try to find it and read up on what it is...

    There is a working database (a machine running FreeBSD, the database is on Postgres). I need to hand over some data for import into a third-party database. I put together a small query that gathers the required data and dumps it into a temporary table. Now I've unexpectedly hit a problem: I don't know how to export that table to dbf so I can hand it over... )))

    Searching the web is made harder by the fact that, firstly, I'm not a programmer, and secondly, my English is rather poor.

    I suspect there should be a standard command with some not-too-arcane switches that would let me dump a table from the database into dbf format... Or am I misreading the situation and asking for the impossible??

    If I'm mistaken, please explain how it's done.
  • Ega23 © (05.12.08 11:51) [3]

    Name
    COPY — copy data between a file and a table

    Synopsis
    COPY tablename [ ( column [, ...] ) ]
       FROM { 'filename' | STDIN }
       [ [ WITH ]
             [ BINARY ]
             [ OIDS ]
             [ DELIMITER [ AS ] 'delimiter' ]
             [ NULL [ AS ] 'null string' ]
             [ CSV [ HEADER ]
                   [ QUOTE [ AS ] 'quote' ]
                   [ ESCAPE [ AS ] 'escape' ]
                    [ FORCE NOT NULL column [, ...] ] ] ]

    COPY { tablename [ ( column [, ...] ) ] | ( query ) }
       TO { 'filename' | STDOUT }
       [ [ WITH ]
             [ BINARY ]
             [ HEADER ]
             [ OIDS ]
             [ DELIMITER [ AS ] 'delimiter' ]
             [ NULL [ AS ] 'null string' ]
             [ CSV [ HEADER ]
                   [ QUOTE [ AS ] 'quote' ]
                   [ ESCAPE [ AS ] 'escape' ]
                    [ FORCE QUOTE column [, ...] ] ] ]
    Description
    COPY moves data between PostgreSQL tables and standard file-system files. COPY TO copies the contents of a table to a file, while COPY FROM copies data from a file to a table (appending the data to whatever is in the table already). COPY TO can also copy the results of a SELECT query.

    If a list of columns is specified, COPY will only copy the data in the specified columns to or from the file. If there are any columns in the table that are not in the column list, COPY FROM will insert the default values for those columns.

    COPY with a file name instructs the PostgreSQL server to directly read from or write to a file. The file must be accessible to the server and the name must be specified from the viewpoint of the server. When STDIN or STDOUT is specified, data is transmitted via the connection between the client and the server.
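
    For example, a minimal sketch of the two variants (mytable and the path here are placeholders):

    -- 'filename': the file is opened by the server process on the server machine
    COPY mytable TO '/tmp/mytable.txt';

    -- STDOUT: the rows are streamed over the connection to the client application
    COPY mytable TO STDOUT;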

    Parameters
    tablename
    The name (optionally schema-qualified) of an existing table.

    column
    An optional list of columns to be copied. If no column list is specified, all columns of the table will be copied.

    query
    A SELECT or VALUES command whose results are to be copied. Note that parentheses are required around the query.

    filename
    The absolute path name of the input or output file. Windows users might need to use an E'' string and double any backslashes used as path separators.

    STDIN
    Specifies that input comes from the client application.

    STDOUT
    Specifies that output goes to the client application.

    BINARY
    Causes all data to be stored or read in binary format rather than as text. You cannot specify the DELIMITER, NULL, or CSV options in binary mode.

    OIDS
    Specifies copying the OID for each row. (An error is raised if OIDS is specified for a table that does not have OIDs, or in the case of copying a query.)


  • Ega23 © (05.12.08 11:51) [4]

    delimiter
    The single ASCII character that separates columns within each row (line) of the file. The default is a tab character in text mode, a comma in CSV mode.

    null string
    The string that represents a null value. The default is \N (backslash-N) in text mode, and an empty value with no quotes in CSV mode. You might prefer an empty string even in text mode for cases where you don't want to distinguish nulls from empty strings.

    Note
    When using COPY FROM, any data item that matches this string will be stored as a null value, so you should make sure that you use the same string as you used with COPY TO.

    CSV
    Selects Comma Separated Value (CSV) mode.

    HEADER
    Specifies that the file contains a header line with the names of each column in the file. On output, the first line contains the column names from the table, and on input, the first line is ignored.

    quote
    Specifies the ASCII quotation character in CSV mode. The default is double-quote.

    escape
    Specifies the ASCII character that should appear before a QUOTE data character value in CSV mode. The default is the QUOTE value (usually double-quote).

    FORCE QUOTE
    In CSV COPY TO mode, forces quoting to be used for all non-NULL values in each specified column. NULL output is never quoted.

    FORCE NOT NULL
    In CSV COPY FROM mode, process each specified column as though it were quoted and hence not a NULL value. For the default null string in CSV mode (''), this causes missing values to be input as zero-length strings.

    Outputs
    On successful completion, a COPY command returns a command tag of the form

    COPY count
    The count is the number of rows copied.

    Notes
    COPY can only be used with plain tables, not with views. However, you can write COPY (SELECT * FROM viewname) TO ....
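
    A one-line sketch of that workaround (viewname and the path are placeholders):

    COPY (SELECT * FROM viewname) TO '/tmp/view_dump.csv' WITH CSV;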

    The BINARY key word causes all data to be stored/read as binary format rather than as text. It is somewhat faster than the normal text mode, but a binary-format file is less portable across machine architectures and PostgreSQL versions.

    You must have select privilege on the table whose values are read by COPY TO, and insert privilege on the table into which values are inserted by COPY FROM.

    Files named in a COPY command are read or written directly by the server, not by the client application. Therefore, they must reside on or be accessible to the database server machine, not the client. They must be accessible to and readable or writable by the PostgreSQL user (the user ID the server runs as), not the client. COPY naming a file is only allowed to database superusers, since it allows reading or writing any file that the server has privileges to access.

    Do not confuse COPY with the psql instruction \copy. \copy invokes COPY FROM STDIN or COPY TO STDOUT, and then fetches/stores the data in a file accessible to the psql client. Thus, file accessibility and access rights depend on the client rather than the server when \copy is used.
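
    A minimal sketch of the difference (mytable and the file names are placeholders):

    -- server-side COPY: needs superuser, the file appears on the server machine
    COPY mytable TO '/tmp/server_side.csv' WITH CSV HEADER;

    -- psql \copy: no superuser needed, the file appears on the client machine
    \copy mytable TO 'client_side.csv' CSV HEADER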

    It is recommended that the file name used in COPY always be specified as an absolute path. This is enforced by the server in the case of COPY TO, but for COPY FROM you do have the option of reading from a file specified by a relative path. The path will be interpreted relative to the working directory of the server process (normally the cluster's data directory), not the client's working directory.

    COPY FROM will invoke any triggers and check constraints on the destination table. However, it will not invoke rules.

    COPY input and output is affected by DateStyle. To ensure portability to other PostgreSQL installations that might use non-default DateStyle settings, DateStyle should be set to ISO before using COPY TO.
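
    A short sketch of that advice (mytable is a placeholder):

    SET DateStyle TO ISO;
    COPY mytable TO STDOUT;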

    Input data is interpreted according to the current client encoding, and output data is encoded in the current client encoding, even if the data does not pass through the client but is read from or written to a file.

    COPY stops operation at the first error. This should not lead to problems in the event of a COPY TO, but the target table will already have received earlier rows in a COPY FROM. These rows will not be visible or accessible, but they still occupy disk space. This might amount to a considerable amount of wasted disk space if the failure happened well into a large copy operation. You might wish to invoke VACUUM to recover the wasted space.


  • Ega23 © (05.12.08 11:52) [5]

    File Formats
    Text Format
    When COPY is used without the BINARY or CSV options, the data read or written is a text file with one line per table row. Columns in a row are separated by the delimiter character. The column values themselves are strings generated by the output function, or acceptable to the input function, of each attribute's data type. The specified null string is used in place of columns that are null. COPY FROM will raise an error if any line of the input file contains more or fewer columns than are expected. If OIDS is specified, the OID is read or written as the first column, preceding the user data columns.

    End of data can be represented by a single line containing just backslash-period (\.). An end-of-data marker is not necessary when reading from a file, since the end of file serves perfectly well; it is needed only when copying data to or from client applications using pre-3.0 client protocol.

    Backslash characters (\) can be used in the COPY data to quote data characters that might otherwise be taken as row or column delimiters. In particular, the following characters must be preceded by a backslash if they appear as part of a column value: backslash itself, newline, carriage return, and the current delimiter character.

    The specified null string is sent by COPY TO without adding any backslashes; conversely, COPY FROM matches the input against the null string before removing backslashes. Therefore, a null string such as \N cannot be confused with the actual data value \N (which would be represented as \\N).

    The following special backslash sequences are recognized by COPY FROM:

    Sequence Represents
    \b Backspace (ASCII 8)
    \f Form feed (ASCII 12)
    \n Newline (ASCII 10)
    \r Carriage return (ASCII 13)
    \t Tab (ASCII 9)
    \v Vertical tab (ASCII 11)
    \digits  Backslash followed by one to three octal digits specifies the character with that numeric code
    \xdigits  Backslash x followed by one or two hex digits specifies the character with that numeric code

    Presently, COPY TO will never emit an octal or hex-digits backslash sequence, but it does use the other sequences listed above for those control characters.

    Any other backslashed character that is not mentioned in the above table will be taken to represent itself. However, beware of adding backslashes unnecessarily, since that might accidentally produce a string matching the end-of-data marker (\.) or the null string (\N by default). These strings will be recognized before any other backslash processing is done.

    It is strongly recommended that applications generating COPY data convert data newlines and carriage returns to the \n and \r sequences respectively. At present it is possible to represent a data carriage return by a backslash and carriage return, and to represent a data newline by a backslash and newline. However, these representations might not be accepted in future releases. They are also highly vulnerable to corruption if the COPY file is transferred across different machines (for example, from Unix to Windows or vice versa).

    COPY TO will terminate each row with a Unix-style newline (“\n”). Servers running on Microsoft Windows instead output carriage return/newline (“\r\n”), but only for COPY to a server file; for consistency across platforms, COPY TO STDOUT always sends “\n” regardless of server platform. COPY FROM can handle lines ending with newlines, carriage returns, or carriage return/newlines. To reduce the risk of error due to un-backslashed newlines or carriage returns that were meant as data, COPY FROM will complain if the line endings in the input are not all alike.

    CSV Format
    This format is used for importing and exporting the Comma Separated Value (CSV) file format used by many other programs, such as spreadsheets. Instead of the escaping used by PostgreSQL's standard text mode, it produces and recognizes the common CSV escaping mechanism.

    The values in each record are separated by the DELIMITER character. If the value contains the delimiter character, the QUOTE character, the NULL string, a carriage return, or line feed character, then the whole value is prefixed and suffixed by the QUOTE character, and any occurrence within the value of a QUOTE character or the ESCAPE character is preceded by the escape character. You can also use FORCE QUOTE to force quotes when outputting non-NULL values in specific columns.

    The CSV format has no standard way to distinguish a NULL value from an empty string. PostgreSQL's COPY handles this by quoting. A NULL is output as the NULL string and is not quoted, while a data value matching the NULL string is quoted. Therefore, using the default settings, a NULL is written as an unquoted empty string, while an empty string is written with double quotes (""). Reading values follows similar rules. You can use FORCE NOT NULL to prevent NULL input comparisons for specific columns.
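
    A small sketch of that behaviour with the default CSV settings (table t and its rows are hypothetical):

    -- suppose t holds the rows ('a', NULL) and ('b', '')
    COPY t TO STDOUT WITH CSV;
    -- output: the NULL becomes an unquoted empty field, the empty string is quoted
    -- a,
    -- b,""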

    Because backslash is not a special character in the CSV format, \., the end-of-data marker, could also appear as a data value. To avoid any misinterpretation, a \. data value appearing as a lone entry on a line is automatically quoted on output, and on input, if quoted, is not interpreted as the end-of-data marker. If you are loading a file created by another application that has a single unquoted column and might have a value of \., you might need to quote that value in the input file.

    Note
    In CSV mode, all characters are significant. A quoted value surrounded by white space, or any characters other than DELIMITER, will include those characters. This can cause errors if you import data from a system that pads CSV lines with white space out to some fixed width. If such a situation arises you might need to preprocess the CSV file to remove the trailing white space, before importing the data into PostgreSQL.

    Note
    CSV mode will both recognize and produce CSV files with quoted values containing embedded carriage returns and line feeds. Thus the files are not strictly one line per table row like text-mode files.

    Note
    Many programs produce strange and occasionally perverse CSV files, so the file format is more a convention than a standard. Thus you might encounter some files that cannot be imported using this mechanism, and COPY might produce files that other programs cannot process.

    Binary Format
    The file format used for COPY BINARY changed in PostgreSQL 7.4. The new format consists of a file header, zero or more tuples containing the row data, and a file trailer. Headers and data are now in network byte order.

    File Header
    The file header consists of 15 bytes of fixed fields, followed by a variable-length header extension area. The fixed fields are:


  • Ega23 © (05.12.08 11:52) [6]

    Signature
    11-byte sequence PGCOPY\n\377\r\n\0 — note that the zero byte is a required part of the signature. (The signature is designed to allow easy identification of files that have been munged by a non-8-bit-clean transfer. This signature will be changed by end-of-line-translation filters, dropped zero bytes, dropped high bits, or parity changes.)

    Flags field
    32-bit integer bit mask to denote important aspects of the file format. Bits are numbered from 0 (LSB) to 31 (MSB). Note that this field is stored in network byte order (most significant byte first), as are all the integer fields used in the file format. Bits 16-31 are reserved to denote critical file format issues; a reader should abort if it finds an unexpected bit set in this range. Bits 0-15 are reserved to signal backwards-compatible format issues; a reader should simply ignore any unexpected bits set in this range. Currently only one flag bit is defined, and the rest must be zero:

    Bit 16
    if 1, OIDs are included in the data; if 0, not

    Header extension area length
    32-bit integer, length in bytes of remainder of header, not including self. Currently, this is zero, and the first tuple follows immediately. Future changes to the format might allow additional data to be present in the header. A reader should silently skip over any header extension data it does not know what to do with.

    The header extension area is envisioned to contain a sequence of self-identifying chunks. The flags field is not intended to tell readers what is in the extension area. Specific design of header extension contents is left for a later release.

    This design allows for both backwards-compatible header additions (add header extension chunks, or set low-order flag bits) and non-backwards-compatible changes (set high-order flag bits to signal such changes, and add supporting data to the extension area if needed).

    Tuples
    Each tuple begins with a 16-bit integer count of the number of fields in the tuple. (Presently, all tuples in a table will have the same count, but that might not always be true.) Then, repeated for each field in the tuple, there is a 32-bit length word followed by that many bytes of field data. (The length word does not include itself, and can be zero.) As a special case, -1 indicates a NULL field value. No value bytes follow in the NULL case.

    There is no alignment padding or any other extra data between fields.

    Presently, all data values in a COPY BINARY file are assumed to be in binary format (format code one). It is anticipated that a future extension might add a header field that allows per-column format codes to be specified.

    To determine the appropriate binary format for the actual tuple data you should consult the PostgreSQL source, in particular the *send and *recv functions for each column's data type (typically these functions are found in the src/backend/utils/adt/ directory of the source distribution).

    If OIDs are included in the file, the OID field immediately follows the field-count word. It is a normal field except that it's not included in the field-count. In particular it has a length word — this will allow handling of 4-byte vs. 8-byte OIDs without too much pain, and will allow OIDs to be shown as null if that ever proves desirable.

    File Trailer
    The file trailer consists of a 16-bit integer word containing -1. This is easily distinguished from a tuple's field-count word.

    A reader should report an error if a field-count word is neither -1 nor the expected number of columns. This provides an extra check against somehow getting out of sync with the data.

    Examples
    The following example copies a table to the client using the vertical bar (|) as the field delimiter:

    COPY country TO STDOUT WITH DELIMITER '|';

    To copy data from a file into the country table:

    COPY country FROM '/usr1/proj/bray/sql/country_data';

    To copy into a file just the countries whose names start with 'A':

    COPY (SELECT * FROM country WHERE country_name LIKE 'A%') TO '/usr1/proj/bray/sql/a_list_countries.copy';

    Here is a sample of data suitable for copying into a table from STDIN:

    AF      AFGHANISTAN
    AL      ALBANIA
    DZ      ALGERIA
    ZM      ZAMBIA
    ZW      ZIMBABWE
    Note that the white space on each line is actually a tab character.

    The following is the same data, output in binary format. The data is shown after filtering through the Unix utility od -c. The table has three columns; the first has type char(2), the second has type text, and the third has type integer. All the rows have a null value in the third column.

    0000000   P   G   C   O   P   Y  \n 377  \r  \n  \0  \0  \0  \0  \0  \0
    0000020  \0  \0  \0  \0 003  \0  \0  \0 002   A   F  \0  \0  \0 013   A
    0000040   F   G   H   A   N   I   S   T   A   N 377 377 377 377  \0 003
    0000060  \0  \0  \0 002   A   L  \0  \0  \0 007   A   L   B   A   N   I
    0000100   A 377 377 377 377  \0 003  \0  \0  \0 002   D   Z  \0  \0  \0
    0000120 007   A   L   G   E   R   I   A 377 377 377 377  \0 003  \0  \0
    0000140  \0 002   Z   M  \0  \0  \0 006   Z   A   M   B   I   A 377 377
    0000160 377 377  \0 003  \0  \0  \0 002   Z   W  \0  \0  \0  \b   Z   I
    0000200   M   B   A   B   W   E 377 377 377 377 377 377

    Compatibility
    There is no COPY statement in the SQL standard.

    The following syntax was used before PostgreSQL version 7.3 and is still supported:

    COPY [ BINARY ] tablename [ WITH OIDS ]
       FROM { 'filename' | STDIN }
       [ [USING] DELIMITERS 'delimiter' ]
       [ WITH NULL AS 'null string' ]

    COPY [ BINARY ] tablename [ WITH OIDS ]
       TO { 'filename' | STDOUT }
       [ [USING] DELIMITERS 'delimiter' ]
       [ WITH NULL AS 'null string' ]

  • Anatoly Podgoretsky © (05.12.08 12:03) [7]
    But he did ask for "not-too-arcane switches"
  • Ega23 © (05.12.08 12:05) [8]

    > But he did ask for "not-too-arcane switches"


    Right, I'll just drop everything and go build a ready-made solution for the author...
  • Ламот © (05.12.08 12:31) [9]
    to Ega23 © [3-6]
    — Do I understand correctly that there is no standard procedure for exporting to dbf format (like in IBExpert)? And that to export the data into that format I'll first have to study the format itself...?

    Anatoly Podgoretsky ©   (05.12.08 12:03) [7]
    But he did ask for "not-too-arcane switches"
    — Yes, I did! ))
    Judge for yourself: in IBExpert you can export data to dbf with two or three mouse clicks, without thinking at all about what the header, the structure and the rest of that file should look like... I, for one, don't know any of that.
    The data needs to be exported once, handed over, and then forgotten like a bad dream. If the database were on IB, I'd have forgotten about it already; as it is, I'll have to sink a pile of time into this. I repeat: I'm not a programmer, and my English is... In short, it would have been nice if such a command with such (ready-made) switches existed, but oh well... ))

    Okay, either I'm a complete fool who can't see the obvious (which is entirely possible), or the task turned out to be not as simple as it seemed at first (which happens even more often).

    to Ega23 © [8] — I didn't ask you to build it, I asked you to point it out if it exists... so thanks for what there is! ))

    Thanks everyone for your attention, I'll have to take the long way round... ))))
  • Ламот © (05.12.08 12:36) [10]
    Although... it looks like Jeer © in [1] wrote about exactly what I need... thank you!!! )))
  • Anatoly Podgoretsky © (05.12.08 13:02) [11]
    EMS manager is a separate, general-purpose manager; for databases like these it's just the thing, because that isn't the only arcane bit in there.
  • Ega23 © (05.12.08 13:06) [12]

    > Judge for yourself: in IBExpert you can export data to dbf with two or
    > three mouse clicks, without thinking at all about what the header,
    > the structure and the rest of that file should look like...
    > I, for one, don't know any of that.


    Well then, you judge for yourself too: that is not a feature of the DBMS, it's a feature of a particular program (IBExpert) that happens to know how to do it in its guts.
    For Postgres there is surely something besides PGAdmin III that lets you work with the database interactively (I personally tried Maestro, but it didn't much appeal to me). There may even be a free one, the way IBExpert is free for the ex-USSR.
    But I'll say it once more: this is not a DBMS feature. At the level of the PlPgSQL dialect I only know COPY. It runs quite fast, by the way.
    In your particular case I would write something of my own: a separate connection to the database, fetching the table into a recordset (possibly fetching it in "chunks" if the data volume is large), stuffing all of it into some TDataSet descendant that has a dbf export function, and saving to disk.
  • AndreyV © (05.12.08 14:03) [13]
    Then maybe the people you're handing the file to would be fine with the CSV format, out of what Ega23 suggested; they can load it into DBF on their end.
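
    For example, a hedged sketch along the lines of the COPY reference quoted above (temp_export and the paths are placeholders for the real temporary table and target file):

    -- as a superuser, write the file on the server machine:
    COPY temp_export TO '/tmp/temp_export.csv' WITH CSV HEADER;

    -- or, without superuser rights, from psql on the client:
    \copy temp_export TO 'temp_export.csv' CSV HEADER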
  • Ламот © (05.12.08 14:55) [14]
    Anatoly Podgoretsky © [11]
    EMS manager is a separate, general-purpose manager; for databases like these it's just the thing, because that isn't the only arcane bit in there.
    — Yep, and thanks once more to Jeer ©; without him I wouldn't have learned any time soon that such managers exist at all and that they're something separate and self-contained! ))

    Ega23 © [12]
    Well then, you judge for yourself too: that is not a feature of the DBMS, it's a feature of a particular program (IBExpert) that happens to know how to do it in its guts.
    For Postgres there is surely something besides PGAdmin III that lets you work with the database interactively (I personally tried Maestro, but it didn't much appeal to me). There may even be a free one, the way IBExpert is free for the ex-USSR.
    But I'll say it once more: this is not a DBMS feature. At the level of the PlPgSQL dialect I only know COPY. It runs quite fast, by the way.

    — Got it, sorry; I'm not a professional, so I may be mixing up concepts and entities... ))

    In your particular case I would write something of my own: a separate connection to the database, fetching the table into a recordset (possibly fetching it in "chunks" if the data volume is large), stuffing all of it into some TDataSet descendant that has a dbf export function, and saving to disk.
    — Most likely that's what I would have done, if I'd needed it for myself and had spare time in reserve. As it is, at work I was handed a task that isn't part of my job and "asked" to get it done as fast as possible... ))

    In short, I downloaded SQL Manager 2007 for PostgreSQL, installed it / exported the data / uninstalled it, it's a 30-day trial anyway... ))

    Thanks to everyone for your attention; the question is settled.
  • Anatoly Podgoretsky © (05.12.08 15:08) [15]
    > Ламот  (05.12.2008 14:55:14)  [14]

    Not settled at all. How am I supposed to take

    "Most likely that's what I would have done" (written with a feminine verb ending)
    "I wouldn't have learned any time soon" (written with a masculine one)

    So which one are we to believe?
  • Ламот © (05.12.08 15:33) [16]
    Anatoly Podgoretsky © [15]
    Ламот  (05.12.2008 14:55:14)  [14]
    Not settled at all. How am I supposed to take

    "Most likely that's what I would have done" (written with a feminine verb ending)
    "I wouldn't have learned any time soon" (written with a masculine one)

    So which one are we to believe?

    — As one of the greats said: "trust no one". ))
    As for the substance: it's a typo, I was in a big hurry... I'm ashamed... honest!! )))))
  • Anatoly Podgoretsky © (05.12.08 16:19) [17]
    > Ламот  (05.12.2008 15:33:16)  [16]

    So what's the answer, then?
  • KilkennyCat © (05.12.08 16:42) [18]

    > Ega23 ©

    Aren't you tired of typing? That's a lotta letters! =O
  • Anatoly Podgoretsky © (05.12.08 16:47) [19]

    > Aren't you tired of typing? That's a lotta letters! =O

    He's sly, he didn't type it!
 
Конференция "Базы" » Выгрузить таблицу в table.dbf. [PostgresSQL]
Есть новые Нет новых   [134477   +39][b:0][p:0.009]