Oracle® Database Administrator's Guide 11g Release 2 (11.2) Part Number E25494-02 |
This section describes guidelines to follow when managing tables. Following these guidelines can make the management of your tables easier and can improve performance when creating the table, as well as when loading, updating, and querying the table data.
Usually, the application developer is responsible for designing the elements of an application, including the tables. Database administrators are responsible for establishing the attributes of the underlying tablespace that will hold the application tables. Either the DBA or the applications developer, or both working jointly, can be responsible for the actual creation of the tables, depending upon the practices for a site.
Working with the application developer, consider the following guidelines when designing tables:
Use descriptive names for tables, columns, indexes, and clusters.
Be consistent in abbreviations and in the use of singular and plural forms of table names and columns.
Document the meaning of each table and its columns with the COMMENT command.
Normalize each table.
Select the appropriate data type for each column.
Consider whether your applications would benefit from adding one or more virtual columns to some tables.
Define columns that allow nulls last, to conserve storage space.
Cluster tables whenever appropriate, to conserve storage space and optimize performance of SQL statements.
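For example, the COMMENT command from the guidelines above stores a description in the data dictionary (the table and column names here are illustrative):

```sql
-- Record the purpose of a table and one of its columns
COMMENT ON TABLE hr.part_time_employees IS
  'Hourly staff who are not on the monthly payroll';
COMMENT ON COLUMN hr.part_time_employees.hourly_rate IS
  'Pay rate in US dollars per hour';
```

Comments can be viewed in the *_TAB_COMMENTS and *_COL_COMMENTS data dictionary views.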
Before creating a table, you should also determine whether to use integrity constraints. Integrity constraints can be defined on the columns of a table to enforce the business rules of your database automatically.
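As a sketch, business rules can be declared inline as integrity constraints when the table is created (the names and rules shown are illustrative):

```sql
CREATE TABLE hr.part_time_employees (
  empno       NUMBER(8)    CONSTRAINT pte_pk PRIMARY KEY,
  name        VARCHAR2(30) NOT NULL,
  hourly_rate NUMBER(7,2)  CONSTRAINT pte_rate_ck CHECK (hourly_rate > 0)
);
```

The database then rejects any row that violates these rules, with no application code required.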
Here are the types of tables that you can create:
Type of Table | Description
---|---
Ordinary (heap-organized) table | This is the basic, general purpose type of table and the primary subject of this chapter. Its data is stored as an unordered collection (heap).
Clustered table | A clustered table is a table that is part of a cluster. A cluster is a group of tables that share the same data blocks because they share common columns and are often used together. Clusters and clustered tables are discussed in Chapter 22, "Managing Clusters".
Index-organized table | Unlike an ordinary (heap-organized) table, data for an index-organized table is stored in a B-tree index structure in a primary key sorted manner. Besides storing the primary key column values of an index-organized table row, each index entry in the B-tree stores the nonkey column values as well. Index-organized tables are discussed in "Managing Index-Organized Tables".
Partitioned table | Partitioned tables enable your data to be broken down into smaller, more manageable pieces called partitions, or even subpartitions. Each partition can have separate physical attributes, such as compression enabled or disabled, type of compression, physical storage settings, and tablespace, thus providing a structure that can be better tuned for availability and performance. In addition, each partition can be managed individually, which can simplify and reduce the time required for backup and administration. Partitioned tables are discussed in Oracle Database VLDB and Partitioning Guide.
It is advisable to specify the TABLESPACE clause in a CREATE TABLE statement to identify the tablespace that is to store the new table. For partitioned tables, you can optionally identify the tablespace that is to store each partition. Ensure that you have the appropriate privileges and quota on any tablespaces that you use. If you do not specify a tablespace in a CREATE TABLE statement, the table is created in your default tablespace.
When specifying the tablespace to contain a new table, ensure that you understand the implications of your selection. By properly specifying a tablespace during the creation of each table, you can increase the performance of the database system and decrease the time needed for database administration.
The following situations illustrate how not specifying a tablespace, or specifying an inappropriate one, can affect performance:
If users' objects are created in the SYSTEM tablespace, the performance of the database can suffer, since both data dictionary objects and user objects must contend for the same datafiles. Users' objects should not be stored in the SYSTEM tablespace. To avoid this, ensure that all users are assigned default tablespaces when they are created in the database.
If application-associated tables are arbitrarily stored in various tablespaces, the time necessary to complete administrative operations (such as backup and recovery) for the data of that application can be increased.
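A minimal sketch of naming the tablespace explicitly (assuming a tablespace named users exists and you have quota on it):

```sql
CREATE TABLE hr.part_time_employees (
  empno NUMBER(8),
  name  VARCHAR2(30)
) TABLESPACE users;
```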
You can use parallel execution when creating tables using a subquery (AS SELECT) in the CREATE TABLE statement. Because multiple processes work together to create the table, performance of the table creation operation is improved.
Parallelizing table creation is discussed in the section "Parallelizing Table Creation".
To create a table most efficiently, use the NOLOGGING clause in the CREATE TABLE ... AS SELECT statement. The NOLOGGING clause causes minimal redo information to be generated during the table creation. This has the following benefits:
Space is saved in the redo log files.
The time it takes to create the table is decreased.
Performance improves for parallel creation of large tables.
The NOLOGGING clause also specifies that subsequent direct loads using SQL*Loader and direct load INSERT operations are not logged. Subsequent DML statements (UPDATE, DELETE, and conventional path insert) are unaffected by the NOLOGGING attribute of the table and generate redo.
If you cannot afford to lose the table after you have created it (for example, you will no longer have access to the data used to create the table) you should take a backup immediately after the table is created. In some situations, such as for tables that are created for temporary use, this precaution may not be necessary.
In general, the relative performance improvement of specifying NOLOGGING is greater for larger tables than for smaller tables. For small tables, NOLOGGING has little effect on the time it takes to create a table. However, for larger tables the performance improvement can be significant, especially when also parallelizing the table creation.
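Combining the two techniques, a sketch of a parallel, minimally logged table creation might look like this (the table names and degree of parallelism are illustrative):

```sql
CREATE TABLE sales_history_2009
  PARALLEL 4     -- multiple processes build the table
  NOLOGGING      -- generate minimal redo during creation
  AS SELECT * FROM sales
     WHERE time_id >= DATE '2009-01-01';
```

Because the operation generates minimal redo, take a backup of the affected tablespace afterward if the data cannot be reloaded from its source.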
As your database grows in size, consider using table compression. Compression saves disk space, reduces memory use in the database buffer cache, and can significantly speed query execution during reads. Compression has a cost in CPU overhead for data loading and DML. However, this cost might be offset by reduced I/O requirements.
Table compression is completely transparent to applications. It is useful in decision support systems (DSS), online transaction processing (OLTP) systems, and archival systems.
You can specify compression for a tablespace, a table, or a partition. If specified at the tablespace level, then all tables created in that tablespace are compressed by default.
Compression can occur while data is being inserted, updated, or bulk loaded into a table. Operations that permit compression include:
Single-row or array inserts and updates
The following direct-path INSERT methods:
Direct path SQL*Loader
CREATE TABLE ... AS SELECT statements
Parallel INSERT statements
INSERT statements with an APPEND or APPEND_VALUES hint
Oracle Database supports several methods of table compression. They are summarized in Table 20-1.
Table 20-1 Table Compression Methods
Table Compression Method | Compression Level | CPU Overhead | Applications | Notes
---|---|---|---|---
Basic compression | High | Minimal | DSS | None.
OLTP compression | High | Minimal | OLTP, DSS | None.
Warehouse compression (Hybrid Columnar Compression) | Higher | Higher | DSS | The compression level and CPU overhead depend on the compression level specified (LOW or HIGH).
Online archival compression (Hybrid Columnar Compression) | Highest | Highest | Archiving | The compression level and CPU overhead depend on the compression level specified (LOW or HIGH).
Basic table compression compresses data inserted by direct path load only and supports limited data types and SQL operations. OLTP table compression is intended for OLTP applications and compresses data manipulated by any SQL operation.
Warehouse compression and online archival compression achieve the highest compression levels because they use Hybrid Columnar Compression technology. Hybrid Columnar Compression technology uses a modified form of columnar storage instead of row-major storage. This enables the database to store similar data together, which improves the effectiveness of compression algorithms. Because Hybrid Columnar Compression requires high CPU overhead for DML, use it only for data that is updated infrequently.
The higher compression levels of Hybrid Columnar Compression are achieved only with data that is direct-path inserted. Conventional inserts and updates are supported, but result in a less compressed format and a lower compression level.
Table 20-2 lists characteristics of each table compression method.
Table 20-2 Table Compression Characteristics
Table Compression Method | CREATE/ALTER TABLE Syntax | Direct-Path INSERT | Notes
---|---|---|---
Basic compression | COMPRESS or COMPRESS BASIC | Rows are compressed with basic compression. | Rows inserted without using direct-path insert and updated rows are uncompressed.
OLTP compression | COMPRESS FOR OLTP | Rows are compressed with OLTP compression. | Rows inserted without using direct-path insert and updated rows are compressed using OLTP compression.
Warehouse compression (Hybrid Columnar Compression) | COMPRESS FOR QUERY LOW or COMPRESS FOR QUERY HIGH | Rows are compressed with warehouse compression. | This compression method can result in high CPU overhead. Rows inserted without using direct-path insert and updated rows go to a block with a less compressed format and have a lower compression level.
Online archival compression (Hybrid Columnar Compression) | COMPRESS FOR ARCHIVE LOW or COMPRESS FOR ARCHIVE HIGH | Rows are compressed with online archival compression. | This compression method can result in high CPU overhead. Rows inserted without using direct-path insert and updated rows go to a block with a less compressed format and have a lower compression level.
You specify table compression with the COMPRESS clause of the CREATE TABLE statement. You can enable compression for an existing table by using these clauses in an ALTER TABLE statement. In this case, only data that is inserted or updated after compression is enabled is compressed. Similarly, you can disable table compression for an existing compressed table with the ALTER TABLE ... NOCOMPRESS statement. In this case, all data that was already compressed remains compressed, and new data is inserted uncompressed.
The COMPRESS FOR QUERY HIGH option is the default data warehouse compression mode. It provides good compression and performance when using Hybrid Columnar Compression on Exadata storage. The COMPRESS FOR QUERY LOW option should be used in environments where load performance is critical. It loads faster than data compressed with the COMPRESS FOR QUERY HIGH option.
The COMPRESS FOR ARCHIVE LOW option is the default online archival compression mode. It provides a high compression level and is ideal for infrequently accessed data. The COMPRESS FOR ARCHIVE HIGH option should be used for data that is rarely accessed.
A compression advisor, provided by the DBMS_COMPRESSION package, helps you determine the expected compression level for a particular table with a particular compression method.
Note:
Hybrid Columnar Compression is dependent on the underlying storage system. See Oracle Database Licensing Information for more information.

See Also:
Oracle Database Concepts for an overview of table compression
The following examples are related to table compression:
Example 20-1, "Creating a Table with OLTP Table Compression"
Example 20-2, "Creating a Table with Basic Table Compression"
Example 20-3, "Using Direct-Path Insert to Insert Rows Into a Table"
Example 20-4, "Creating a Table with Warehouse Compression"
Example 20-5, "Creating a Table with Online Archival Compression"
Example 20-1 Creating a Table with OLTP Table Compression
The following example enables OLTP table compression on the table orders:
CREATE TABLE orders ... COMPRESS FOR OLTP;
Data for the orders table is compressed during both direct-path INSERT and conventional DML.
Example 20-2 Creating a Table with Basic Table Compression
The following statements, which are equivalent, enable basic table compression on the sales_history table, which is a fact table in a data warehouse:
```
CREATE TABLE sales_history ... COMPRESS BASIC;

CREATE TABLE sales_history ... COMPRESS;
```
Frequent queries are run against this table, but no DML is expected.
Example 20-3 Using Direct-Path Insert to Insert Rows Into a Table
This example demonstrates using the APPEND hint to insert rows into the sales_history table using direct-path INSERT.
```
INSERT /*+ APPEND */ INTO sales_history SELECT * FROM sales WHERE cust_id=8890;
COMMIT;
```
Example 20-4 Creating a Table with Warehouse Compression
This example enables Hybrid Columnar Compression on the table sales_history:
CREATE TABLE sales_history ... COMPRESS FOR QUERY;
The table is created with the default COMPRESS FOR QUERY HIGH option. This option provides a higher level of compression than basic or OLTP compression. It works well when frequent queries are run against this table and no DML is expected.
Example 20-5 Creating a Table with Online Archival Compression
The following example enables Hybrid Columnar Compression on the table sales_history:
CREATE TABLE sales_history ... COMPRESS FOR ARCHIVE;
The table is created with the default COMPRESS FOR ARCHIVE LOW option. This option provides the highest level of compression and works well for infrequently accessed data.
A table can have both compressed and uncompressed partitions, and different partitions can use different compression methods. If the compression settings for a table and one of its partitions do not match, then the partition setting has precedence for the partition.
To change the compression method for a partition, do one of the following:
To change the compression method for new data only, use ALTER TABLE ... MODIFY PARTITION ... COMPRESS ...
To change the compression method for both new and existing data, use either ALTER TABLE ... MOVE PARTITION ... COMPRESS ... or online table redefinition.
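As a sketch of both approaches (the table and partition names are illustrative):

```sql
-- New data only: future bulk loads into the partition use OLTP compression
ALTER TABLE sales MODIFY PARTITION q1_2010 COMPRESS FOR OLTP;

-- New and existing data: rewrites the partition segment
ALTER TABLE sales MOVE PARTITION q1_2010 COMPRESS FOR OLTP;
```

Note that moving a partition marks its local index partitions unusable, so plan to rebuild them afterward.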
In the *_TABLES data dictionary views, compressed tables have ENABLED in the COMPRESSION column. For partitioned tables, this column is null, and the COMPRESSION column of the *_TAB_PARTITIONS views indicates the partitions that are compressed. In addition, the COMPRESS_FOR column indicates the compression method in use for the table or partition.
```
SQL> SELECT table_name, compression, compress_for FROM user_tables;

TABLE_NAME       COMPRESSION  COMPRESS_FOR
---------------- ------------ -----------------
T1               DISABLED
T2               ENABLED      BASIC
T3               ENABLED      OLTP
T4               ENABLED      QUERY HIGH
T5               ENABLED      ARCHIVE LOW
```
```
SQL> SELECT table_name, partition_name, compression, compress_for
     FROM user_tab_partitions;

TABLE_NAME  PARTITION_NAME   COMPRESSION  COMPRESS_FOR
----------- ---------------- ------------ ------------------------------
SALES       Q4_2004          ENABLED      ARCHIVE HIGH
...
SALES       Q3_2008          ENABLED      QUERY HIGH
SALES       Q4_2008          ENABLED      QUERY HIGH
SALES       Q1_2009          ENABLED      OLTP
SALES       Q2_2009          ENABLED      OLTP
```
When Hybrid Columnar Compression tables are updated, the rows change to a lower level of compression, such as from warehouse compression (QUERY HIGH) to OLTP compression or no compression. To determine the compression level of a row, use the GET_COMPRESSION_TYPE function in the DBMS_COMPRESSION package.
For example, the following query returns the compression type for a row in the hr.employees table:
```
SELECT DECODE(DBMS_COMPRESSION.GET_COMPRESSION_TYPE(
                 ownname => 'HR',
                 tabname => 'EMPLOYEES',
                 row_id  => 'AAAVEIAAGAAAABTAAD'),
        1,  'No Compression',
        2,  'Basic or OLTP Compression',
        4,  'Hybrid Columnar Compression for Query High',
        8,  'Hybrid Columnar Compression for Query Low',
        16, 'Hybrid Columnar Compression for Archive High',
        32, 'Hybrid Columnar Compression for Archive Low',
        'Unknown Compression Type') compression_type
  FROM DUAL;
```
By sampling the table rows, you can determine the percentage of rows that are no longer at the higher compression level. You can use an ALTER TABLE ... MOVE or ALTER TABLE ... MOVE PARTITION statement to specify a higher compression level. For example, if 10 percent of the rows are no longer at the highest compression level, then you might alter the table or move the partition to specify a higher compression level.
See Also:
Oracle Database PL/SQL Packages and Types Reference for additional information about GET_COMPRESSION_TYPE
You can change the compression level for a partition, table, or tablespace. For example, suppose a company uses warehouse compression for its sales data, but sales data older than six months is rarely accessed. If the sales data is stored in a table that is partitioned based on the age of the data, then the compression level for the older data can be changed to online archival compression to free disk space.
If a table is partitioned, then the DBMS_REDEFINITION package can change the compression level of the table. This package performs online redefinition of a table by creating a temporary copy of the table that holds the table data while it is being redefined. The table being redefined remains available for queries and DML statements during the redefinition. The amount of free space required for online table redefinition depends on the relative compression levels of the existing table and the new table. Ensure that you have enough disk space on your system before using the DBMS_REDEFINITION package.
If a table is not partitioned, then you can use the ALTER TABLE ... MOVE ... COMPRESS FOR ... statement to change the compression level. The ALTER TABLE ... MOVE statement does not permit DML statements against the table while the command is running.
To change the compression level for a partition, use the ALTER TABLE ... MODIFY PARTITION statement. To change the compression level for a tablespace, use the ALTER TABLESPACE statement.
See Also:
"Moving a Table to a New Segment or Tablespace" for additional information about the ALTER TABLE command
Oracle Database PL/SQL Packages and Types Reference for additional information about the DBMS_REDEFINITION package
The following restrictions apply when adding columns to compressed tables:
Basic compression—You cannot specify a default value for an added column.
OLTP compression—If a default value is specified for an added column, then the column must be NOT NULL. Added nullable columns with default values are not supported.
The following restrictions apply when dropping columns in compressed tables:
Basic compression—Dropping a column is not supported.
OLTP compression—DROP COLUMN is supported, but internally the database sets the column UNUSED to avoid long-running decompression and recompression operations.
Hybrid Columnar Compression tables can be imported using the impdp command of the Data Pump Import utility. By default, the impdp command preserves the table properties, and the imported table is a Hybrid Columnar Compression table. On tablespaces that do not support Hybrid Columnar Compression, the impdp command fails with an error. The tables can also be exported using the expdp command.
You can import the Hybrid Columnar Compression table as an uncompressed table by using the TRANSFORM=SEGMENT_ATTRIBUTES:n option of the impdp command.
An uncompressed or OLTP-compressed table can be converted to Hybrid Columnar Compression format during import. To convert a non-Hybrid Columnar Compression table to a Hybrid Columnar Compression table, do the following:
Specify default compression for the tablespace using the ALTER TABLESPACE ... DEFAULT COMPRESS statement.
Override the SEGMENT_ATTRIBUTES option of the imported table during import.
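As a hedged sketch of the two steps (the tablespace, directory object, and dump file names are illustrative; verify the TRANSFORM syntax against the Data Pump documentation for your release):

```sql
-- Step 1: make Hybrid Columnar Compression the default for the target tablespace
ALTER TABLESPACE hcc_data DEFAULT COMPRESS FOR QUERY HIGH;
```

```
# Step 2: import, discarding the stored segment attributes so the table
# inherits the tablespace default compression
impdp hr TABLES=sales_history DIRECTORY=dp_dir DUMPFILE=sales.dmp \
  TRANSFORM=SEGMENT_ATTRIBUTES:n
```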
See Also:
Oracle Database Utilities for additional information about the Data Pump Import utility
Oracle Database SQL Language Reference for additional information about the ALTER TABLESPACE command
There may be times when a Hybrid Columnar Compression table must be restored from a backup. The table can be restored to a system that supports Hybrid Columnar Compression, or to a system that does not support Hybrid Columnar Compression. When restoring a table with Hybrid Columnar Compression to a system that supports Hybrid Columnar Compression, restore the file using Oracle Recovery Manager (RMAN) as usual.
When a Hybrid Columnar Compression table is restored to a system that does not support Hybrid Columnar Compression, you must convert the table from Hybrid Columnar Compression to OLTP compression or an uncompressed format. To restore the table, do the following:
Ensure that there is sufficient storage in the environment to hold the data in uncompressed or OLTP-compressed format.
Use RMAN to restore the Hybrid Columnar Compression tablespace.
Complete one of the following actions to convert the table from Hybrid Columnar Compression to OLTP compression or an uncompressed format:
Use the following statement to change the data compression from Hybrid Columnar Compression to COMPRESS FOR OLTP:

```
ALTER TABLE table_name MOVE COMPRESS FOR OLTP;
```
Use the following statement to change the data compression from Hybrid Columnar Compression to NOCOMPRESS:

```
ALTER TABLE table_name MOVE NOCOMPRESS;
```
Use the following statement to change each partition to NOCOMPRESS:

```
ALTER TABLE table_name MOVE PARTITION partition_name NOCOMPRESS;
```
Change each partition separately.
Use the following statement to move the data to NOCOMPRESS in parallel:

```
ALTER TABLE table_name MOVE NOCOMPRESS PARALLEL;
```
See Also:
Oracle Database Backup and Recovery User's Guide for additional information about RMAN
Oracle Database SQL Language Reference for additional information about the ALTER TABLE command
The following are notes and restrictions related to compressed tables:
Online segment shrink is not supported for compressed tables.
The table compression methods described in this section do not apply to SecureFiles large objects (LOBs). SecureFiles LOBs have their own compression methods. See Oracle Database SecureFiles and Large Objects Developer's Guide for more information.
Compression technology uses CPU. Ensure that you have enough available CPU to handle the additional load.
Tables created with basic compression have the PCT_FREE parameter automatically set to 0 unless you specify otherwise.
If you use conventional DML on a table compressed with basic compression or Hybrid Columnar Compression, then all inserted and updated rows are stored uncompressed or in a less compressed format. To "pack" the compressed table so that these rows are compressed, use an ALTER TABLE MOVE statement. This operation takes an exclusive lock on the table, and therefore prevents any updates and loads until it completes. If this is not acceptable, then you can use online table redefinition.
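For example, a table loaded with basic compression and then subjected to conventional DML could be repacked like this (the table and index names are illustrative):

```sql
-- Rewrite the segment so rows inserted or updated by conventional DML
-- are stored compressed again
ALTER TABLE sales_history MOVE COMPRESS;

-- Moving the table marks its indexes UNUSABLE; rebuild them afterward
ALTER INDEX sales_history_pk REBUILD;
```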
See Also:
Oracle Database SQL Language Reference for more details on the CREATE TABLE ... COMPRESS, ALTER TABLE ... COMPRESS, and ALTER TABLE ... MOVE statements, including restrictions
Oracle Database VLDB and Partitioning Guide for more information on table partitioning
You can encrypt individual table columns that contain sensitive data. Examples of sensitive data include social security numbers, credit card numbers, and medical records. Column encryption is transparent to your applications, with some restrictions.
Although encryption is not meant to solve all security problems, it does protect your data from users who try to circumvent the security features of the database and access database files directly through the operating system file system.
Column encryption uses the transparent data encryption feature of Oracle Database, which requires that you create an Oracle wallet to store the master encryption key for the database. The wallet must be open before you can create a table with encrypted columns and before you can store or retrieve encrypted data. When you open the wallet, it is available to all sessions, and it remains open until you explicitly close it or until the database is shut down.
Transparent data encryption supports industry-standard encryption algorithms, including the following Advanced Encryption Standard (AES) and Triple Data Encryption Standard (3DES) algorithms:
AES256
AES192
AES128
3DES168
You choose the algorithm to use when you create the table. All encrypted columns in the table use the same algorithm. The default is AES192. The encryption key length is implied by the algorithm name. For example, the AES128 algorithm uses 128-bit keys.
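As a sketch, assuming the Oracle wallet is already created and open (the table and column names are illustrative):

```sql
CREATE TABLE hr.patients (
  patient_id NUMBER(10),
  name       VARCHAR2(80),
  ssn        VARCHAR2(11) ENCRYPT USING 'AES256'  -- encrypted column
);
```

Omitting USING 'AES256' would encrypt the column with the default AES192 algorithm.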
If you plan on encrypting many columns in one or more tables, you may want to consider encrypting an entire tablespace instead and storing these tables in that tablespace. Tablespace encryption, which also uses the transparent data encryption feature but encrypts at the physical block level, can perform better than encrypting many columns. Another reason to encrypt at the tablespace level is to address the following limitations of column encryption:
If the COMPATIBLE initialization parameter is set to 10.2.0, which is the minimum setting to enable transparent data encryption, data from encrypted columns that is involved in a sort or hash join and that must be written to a temporary tablespace is written in clear text, and thus exposed to attacks. You must set COMPATIBLE to 11.1.0 or higher to ensure that encrypted data written to a temporary tablespace remains encrypted. Note that as long as COMPATIBLE is set to 10.2.0 or higher, data from encrypted columns remains encrypted when written to the undo tablespace or the redo log.
Certain data types, such as object data types, are not supported for column encryption.
You cannot use the transportable tablespace feature for a tablespace that includes tables with encrypted columns.
Other restrictions apply, as detailed in Oracle Database Advanced Security Administrator's Guide.
See Also:
Oracle Database Advanced Security Administrator's Guide for more information about transparent data encryption and for instructions for creating and opening wallets
Oracle Database SQL Language Reference for information about the CREATE TABLE statement
Oracle Real Application Clusters Administration and Deployment Guide for information on using an Oracle wallet in an Oracle Real Application Clusters environment
Beginning with Oracle Database 11g Release 2, when you create heap-organized tables in a locally managed tablespace, the database defers table segment creation until the first row is inserted.
In addition, segment creation is deferred for any LOB columns of the table, any indexes created implicitly as part of table creation, and any indexes subsequently explicitly created on the table.
Note:
In Release 11.2.0.1, deferred segment creation is not supported for partitioned tables. This restriction is removed in release 11.2.0.2 and later.

The advantages of this space allocation method are the following:
It saves a significant amount of disk space in applications that create hundreds or thousands of tables upon installation, many of which might never be populated.
It reduces application installation time.
There is a small performance penalty when the first row is inserted, because the new segment must be created at that time.
To enable deferred segment creation, compatibility must be set to '11.2.0' or higher.
The new clauses for the CREATE TABLE statement are:
SEGMENT CREATION DEFERRED
SEGMENT CREATION IMMEDIATE
These clauses override the default setting of the DEFERRED_SEGMENT_CREATION initialization parameter, TRUE, which defers segment creation. To disable deferred segment creation, set this parameter to FALSE.
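For example, assuming compatibility is at least 11.2.0 (the table name is illustrative):

```sql
-- The per-table clause overrides the DEFERRED_SEGMENT_CREATION parameter
CREATE TABLE hr.audit_stub (
  id NUMBER
) SEGMENT CREATION DEFERRED;

-- Disable deferred segment creation instance-wide
ALTER SYSTEM SET DEFERRED_SEGMENT_CREATION = FALSE;
```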
Note that when you create a table with deferred segment creation, the new table appears in the *_TABLES views, but no entry for it appears in the *_SEGMENTS views until you insert the first row.
You can verify deferred segment creation by viewing the SEGMENT_CREATED column in the *_TABLES, *_INDEXES, and *_LOBS views for nonpartitioned tables, and in the *_TAB_PARTITIONS, *_IND_PARTITIONS, and *_LOB_PARTITIONS views for partitioned tables.
Note:
With this new allocation method, it is essential that you do proper capacity planning so that the database has enough disk space to handle segment creation when tables are populated. See "Capacity Planning for Database Objects".

The following example creates two tables to demonstrate deferred segment creation. The first table uses the SEGMENT CREATION DEFERRED clause. No segments are created for it initially. The second table uses the SEGMENT CREATION IMMEDIATE clause and, therefore, segments are created for it immediately.
```
CREATE TABLE part_time_employees (
    empno       NUMBER(8),
    name        VARCHAR2(30),
    hourly_rate NUMBER (7,2)
)
SEGMENT CREATION DEFERRED;

CREATE TABLE hourly_employees (
    empno       NUMBER(8),
    name        VARCHAR2(30),
    hourly_rate NUMBER (7,2)
)
SEGMENT CREATION IMMEDIATE
PARTITION BY RANGE(empno)
(PARTITION empno_to_100 VALUES LESS THAN (100),
 PARTITION empno_to_200 VALUES LESS THAN (200));
```
The following query against USER_SEGMENTS returns two rows for HOURLY_EMPLOYEES, one for each partition, but returns no rows for PART_TIME_EMPLOYEES because segment creation for that table was deferred.
```
SELECT segment_name, partition_name FROM user_segments;

SEGMENT_NAME         PARTITION_NAME
-------------------- ------------------------------
HOURLY_EMPLOYEES     EMPNO_TO_100
HOURLY_EMPLOYEES     EMPNO_TO_200
```
The USER_TABLES view shows that PART_TIME_EMPLOYEES has no segments:
```
SELECT table_name, segment_created FROM user_tables;

TABLE_NAME                     SEGMENT_CREATED
------------------------------ ----------------------------------------
PART_TIME_EMPLOYEES            NO
HOURLY_EMPLOYEES               N/A
```
For the HOURLY_EMPLOYEES table, which is partitioned, the SEGMENT_CREATED column is N/A because the USER_TABLES view does not provide that information for partitioned tables. It is available from the USER_TAB_PARTITIONS view, shown below.
```
SELECT table_name, segment_created, partition_name
  FROM user_tab_partitions;

TABLE_NAME           SEGMENT_CREATED      PARTITION_NAME
-------------------- -------------------- ------------------------------
HOURLY_EMPLOYEES     YES                  EMPNO_TO_100
HOURLY_EMPLOYEES     YES                  EMPNO_TO_200
```
The following statements add employees to these tables.
```
INSERT INTO hourly_employees VALUES (99, 'FRose', 20.00);
INSERT INTO hourly_employees VALUES (150, 'LRose', 25.00);
INSERT INTO part_time_employees VALUES (50, 'KReilly', 10.00);
```
Repeating the same SELECT statements as before shows that PART_TIME_EMPLOYEES now has a segment, due to the insertion of row data. HOURLY_EMPLOYEES remains as before.
```
SELECT segment_name, partition_name FROM user_segments;

SEGMENT_NAME         PARTITION_NAME
-------------------- ------------------------------
PART_TIME_EMPLOYEES
HOURLY_EMPLOYEES     EMPNO_TO_100
HOURLY_EMPLOYEES     EMPNO_TO_200
```
```
SELECT table_name, segment_created FROM user_tables;

TABLE_NAME           SEGMENT_CREATED
-------------------- --------------------
PART_TIME_EMPLOYEES  YES
HOURLY_EMPLOYEES     N/A
```
The USER_TAB_PARTITIONS view does not change.
See Also:
Oracle Database SQL Language Reference for notes and restrictions on deferred segment creation

Beginning with Oracle Database 11g release 2 (11.2.0.2), the DBMS_SPACE_ADMIN package includes the MATERIALIZE_DEFERRED_SEGMENTS() procedure, which enables you to materialize segments for tables, table partitions, and dependent objects created with deferred segment creation enabled.
You can add segments as needed, rather than starting with more than you need and using database resources unnecessarily.
The following example materializes segments for the EMPLOYEES table in the HR schema.
```
BEGIN
  DBMS_SPACE_ADMIN.MATERIALIZE_DEFERRED_SEGMENTS(
    schema_name => 'HR',
    table_name  => 'EMPLOYEES');
END;
/
```
See Also:
Oracle Database PL/SQL Packages and Types Reference for details about this procedure

Estimate the sizes of tables before creating them. Preferably, do this as part of database planning. Knowing the sizes, and uses, for database tables is an important part of database planning.
You can use the combined estimated size of tables, along with estimates for indexes, undo space, and redo log files, to determine the amount of disk space that is required to hold an intended database. From these estimates, you can make correct hardware purchases.
You can use the estimated size and growth rate of an individual table to better determine the attributes of a tablespace and its underlying datafiles that are best suited for the table. This can enable you to more easily manage the table disk space and improve I/O performance of applications that use the table.
See Also:
"Capacity Planning for Database Objects"

Here are some restrictions that may affect your table planning and usage:
Tables containing object types cannot be imported into a pre-Oracle8 database.
You cannot merge an exported table into a preexisting table having the same name in a different schema.
You cannot move types and extent tables to a different schema when the original data still exists in the database.
Oracle Database has a limit on the total number of columns that a table can have (or attributes that an object type can have). See Oracle Database Reference for this limit.
Further, when you create a table that contains user-defined type data, the database maps columns of user-defined type to relational columns for storing the user-defined type data. This creates additional, "hidden" relational columns that are not visible in a DESCRIBE table statement and are not returned by a SELECT * statement. Therefore, when you create an object table, or a relational table with columns of REF, varray, nested table, or object type, be aware that the total number of columns that the database actually creates for the table can be more than those you specify.
See Also:
Oracle Database Object-Relational Developer's Guide for more information about user-defined types