This section discusses the steps required for the installation of DBFS.
The dbfs_client can be used as a direct RDBMS client using the DBFS Command Interface only on Linux, Linux.X64, Solaris, Solaris64, AIX, HPUX, and Windows platforms. The dbfs_client host must have the Oracle client libraries installed.
The dbfs_client can be used as a mount client only on Linux and Linux.X64 platforms, and the following are also required:
The dbfs_client host must have the kernel-devel package installed to configure and build FUSE.
The dbfs_client host must have the FUSE Linux package installed.
A group named fuse must be created, and the user name that is running dbfs_client must be a member of the fuse group.
Download the kernel-devel package from your Linux distributor that matches your Linux release.
Download the FUSE 2.7.4 package from http://fuse.sourceforge.net/.
Install the kernel-devel package. For example:
# rpm -i kernel-devel-2.6.18-8.el5.i686.rpm
Determine the kernel directory. The kernel directory is usually /usr/src/kernels/`uname -r`-`uname -p`.
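For example, after the kernel-devel package from the previous step has been installed, you can confirm the directory with a quick shell check; the output shown here is only illustrative:

# ls -d /usr/src/kernels/`uname -r`-`uname -p`
/usr/src/kernels/2.6.18-8.el5-i686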
Install FUSE.
$ tar -xzvf fuse-2.7.4.tar.gz
$ cd [fuse_src_dir]
$ ./configure --prefix=/usr --with-kernel=[your kernel dir]
$ make
$ sudo su
# make install
# /sbin/depmod
# /sbin/modprobe fuse
# chmod 666 /dev/fuse
# echo "/sbin/modprobe fuse" >> /etc/rc.modules
The DBFS installation home contains the SQL (.plb extension) scripts for the content store, and the dbfs_client executable.
At minimum, database users must have the following privileges to create a file system: CONNECT, CREATE SESSION, RESOURCE, CREATE TABLE, and CREATE PROCEDURE, as well as DBFS_ROLE.
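For example, a DBA could grant these privileges to a hypothetical user dbfs_user as follows; the user name is a placeholder, and DBFS_ROLE is assumed to already exist in the database:

GRANT CONNECT, RESOURCE, CREATE SESSION, CREATE TABLE, CREATE PROCEDURE TO dbfs_user;
GRANT DBFS_ROLE TO dbfs_user;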
Create a file system by running dbfs_create_filesystem.sql while logged in as a user with DBFS administrator privileges.
$ sqlplus @$ORACLE_HOME/rdbms/admin/dbfs_create_filesystem.sql tablespace_name file_system_name
The following example creates a file system called staging_area in the tablespace dbfs_tbspc. The tablespace has been previously created.
$ sqlplus @$ORACLE_HOME/rdbms/admin/dbfs_create_filesystem.sql dbfs_tbspc staging_area
dbfs_create_filesystem.sql creates a partitioned file system. Partitioning is the best performing and most scalable way to create a file system in DBFS. Partitioning creates multiple physical segments in the database, and files are distributed randomly in these partitions.
Space cannot be shared between partitions, so it is possible for one partition to run out of space even when other partitions have space. This is usually not an issue if the file system size is big compared to the size of the individual files. However, if file sizes are a big percentage of the file system size, it may result in the ENOSPC error even if the file system is not full.
Another implication of partitioning is that a "rename" operation can require the file to be rewritten. This can be expensive if the file is big.
dbfs_create_filesystem_advanced.sql can be used to create a non-partitioned file system.
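For example, a non-partitioned file system in the same tablespace might be created with a call along the following lines. The positional option keywords shown after the file system name are an assumption; verify them against the header of dbfs_create_filesystem_advanced.sql before use:

$ sqlplus @$ORACLE_HOME/rdbms/admin/dbfs_create_filesystem_advanced.sql dbfs_tbspc staging_area nocompress nodeduplicate noencrypt non-partition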
Drop the file system by running dbfs_drop_filesystem.sql.
$ sqlplus @$ORACLE_HOME/rdbms/admin/dbfs_drop_filesystem.sql file_system_name
This section discusses how to mount DBFS. It applies only to Linux.
Run the dbfs_client program to mount the DBFS store. Ensure that LD_LIBRARY_PATH has the correct path to the Oracle client libraries before calling this program. The dbfs_client program does not return until the file system is unmounted.
For the most secure method of specifying the password, see "Using Oracle Wallet with DBFS Client".
The dbfs_client command has the following syntax:
dbfs_client db_user@db_server [-o option_1 -o option_2 ...] mount_point
where the mandatory parameters are:
db_user is the name of the database user who owns the DBFS content store filesystem(s).
db_server is a valid connect string to the Oracle Database server, such as hrdb_host:1521/hrservice.
mount_point is the path where the Database File System is mounted. Note that all file systems owned by the database user are seen at the mount point.
and the options are:
direct_io bypasses the Linux page cache, and provides improved performance for large files. Programs in the file system cannot be executed with this option. Oracle recommends this option when DBFS is used as an ETL staging area.
wallet runs the DBFS client in the background. The wallet must be configured to get its credentials.
failover fails over the DBFS client to the surviving database instance with no data loss. Expect some performance cost on writes, especially for small files.
allow_root allows the root user to access the filesystem. This option requires setting the user_allow_other parameter in the /etc/fuse.conf configuration file.
allow_other allows other users to access the filesystem. This option requires setting the user_allow_other parameter in the /etc/fuse.conf configuration file.
rw mounts the filesystem as read-write; this is the default setting.
ro mounts the filesystem as read-only; files cannot be modified.
trace_level=n sets the trace level. Trace levels are: DEBUG, INFO, WARNING, ERROR, and CRITICAL (the default setting).
trace_file=file_name | 'syslog' specifies the trace log file.
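As a sketch, several of these options can be combined in a single invocation; the connect string and mount point below are placeholders:

$ dbfs_client ETLUser@DBConnectString -o allow_other -o direct_io /mnt/dbfs
password: xxxxxxx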
Prior to mounting a file system, you must create an Oracle Wallet and assign credentials for the appropriate DBFS database user; see "Using Oracle Wallet with DBFS Client".
Example 6-1 Mounting a File System
Log in as the root user.
Add a new library path.
# echo "/usr/local/lib" >> /etc/ld.so.conf.d/usr_local_lib.conf
Change directory to /usr/local/lib, and create the following symbolic links to the libclntsh.so.11.1 and libnnz11.so libraries.
# cd /usr/local/lib
# ln -s $ORACLE_HOME/lib/libclntsh.so.11.1
# ln -s $ORACLE_HOME/lib/libnnz11.so
Locate libfuse.so, and create a symbolic link to this library.
# locate libfuse.so
determined_path/libfuse.so
# ln -s determined_path/libfuse.so
Run ldconfig to create the links and cache for the new symbolic links.
# ldconfig
Create a symbolic link to dbfs_client in /sbin as mount.dbfs.
# ln -s $ORACLE_HOME/bin/dbfs_client /sbin/mount.dbfs
Log in as the admin user. (Oracle recommends that you do not perform the next step as the root user.)
Mount the DBFS store.
% dbfs_client @/dbfsdb -o wallet,rw,user,direct_io /mnt/dbfs
[Optional] To test if the previous step was successful, list the dbfs directory.
# ls /mnt/dbfs
Example 6-2 Mounting a File System with Password at Command Prompt
To mount a file system using dbfs_client by entering the password on the command prompt:
$ dbfs_client ETLUser@DBConnectString /mnt/dbfs
password: xxxxxxx
Example 6-3 Mounting a File System with Password Read from a File
The following example mounts a file system and frees the terminal. It reads the password from a file:
$ nohup dbfs_client ETLUser@DBConnectString /mnt/dbfs < passwordfile.f &
$ ls -l /mnt/dbfs
drwxrwxrwx 10 root root 0 Feb 9 17:28 staging_area
Run fusermount to unmount file systems.
$ fusermount -u <mount point>
DBFS supports most file system operations with the exception of ioctl, locking, asynchronous I/O through libaio, O_DIRECT file opens, hard links, pipes, and other special file modes. Memory-mapped files are supported except in shared-writable mode. For performance reasons, DBFS does not update the file access time every time file data or its attributes are read.
You cannot run programs from a DBFS-mounted file system if the direct_io option is specified.
File systems are commonly configured using the fstab utility in Linux. To mount DBFS through /etc/fstab, you must use Oracle Wallet for authentication. Run the following operations as the root user.
Log in as the root user.
Change the user and group of dbfs_client to user root and group fuse.
# chown root.fuse $ORACLE_HOME/bin/dbfs_client
Set the setuid bit on dbfs_client, and restrict execute privileges to the user and group only.
# chmod u+rwxs,g+rx-w,o-rwx dbfs_client
Create a symbolic link to dbfs_client in /sbin as "mount.dbfs".
$ ln -s $ORACLE_HOME/bin/dbfs_client /sbin/mount.dbfs
Create a new Linux group called "fuse".
Add the Linux user that is running the DBFS client to the fuse group.
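For example, on most Linux distributions the group can be created and the user added with commands such as the following; dbfs_user is a placeholder for the operating system user that runs dbfs_client:

# groupadd fuse
# usermod -a -G fuse dbfs_user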
Add the following line to /etc/fstab:
/sbin/mount.dbfs#db_user@db_server mount_point fuse rw,user,noauto 0 0
For example:
/sbin/mount.dbfs#/@DBConnectString /mnt/dbfs fuse rw,user,noauto 0 0
The Linux user can mount the DBFS file system using the standard Linux mount command. For example:
$ mount /mnt/dbfs
Note that FUSE does not currently support automount.
The DBFS command interface allows files to be easily copied in and out of a DBFS file system from any host on the network. The command interface does not require mounting the file system, and has somewhat better performance than the mounting interface because it bypasses the user mode file system overhead, but it is not transparent to applications.
All DBFS content store paths must be preceded by "dbfs:". For example: dbfs:/staging_area/file1. All database path names specified must be absolute paths. To run DBFS commands, specify --command to the DBFS client.
dbfs_client db_user@db_server --command command [switches] [arguments]
where:
command is the executable command, such as ls, cp, mkdir, or rm.
switches are specific for each command.
arguments are file names or directory names, and are specific for each command.
Note that dbfs_client returns a nonzero value in case of failure.
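This makes it easy to check for errors in scripts. For example, the exit status of the previous command can be inspected from the shell; the connect string and path are placeholders:

$ dbfs_client ETLUser@DBConnectString --command ls dbfs:/staging_area
$ echo $?
0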
The mkdir command creates a new directory.
dbfs_client db_user@db_server --command mkdir directory_name
where directory_name is the name of the directory created. For example:
$ dbfs_client ETLUser@DBConnectString --command mkdir dbfs:/staging_area/dir1
The ls command lists the contents of a directory.
dbfs_client db_user@db_server --command ls [switches] target
where target is the listed directory, and switches is any combination of the following:
-a shows all files, including '.' and '..'.
-l shows the long listing format: name of each file, the file type, permissions, and size.
-R lists subdirectories, recursively.
For example:
$ dbfs_client ETLUser@DBConnectString --command ls dbfs:/staging_area/dir1
$ dbfs_client ETLUser@DBConnectString --command ls -l -a -R dbfs:/staging_area/dir1
The cp command copies files or directories from the source location to the destination location. It also supports recursive copy of directories.
dbfs_client db_user@db_server --command cp [switches] source destination
where source is the source location, destination is the destination location, and switches is either -R or -r, the options to recursively copy all source contents into the destination directory.
The following example copies the contents of the local directory 01-01-10-dump recursively into a directory in DBFS:
$ dbfs_client ETLUser@DBConnectString --command cp -R 01-01-10-dump dbfs:/staging_area/
The following example copies the file hello.txt from DBFS to a local file Hi.txt:
$ dbfs_client ETLUser@DBConnectString --command cp dbfs:/staging_area/hello.txt Hi.txt
The rm command deletes a file or directory. It also supports recursive delete of directories.
dbfs_client db_user@db_server --command rm [switches] target
where target is the file or directory to delete, and switches is either -R or -r, the options to recursively delete all contents. For example:
$ dbfs_client ETLUser@DBConnectString --command rm dbfs:/staging_area/srcdir/hello.txt
$ dbfs_client ETLUser@DBConnectString --command rm -R dbfs:/staging_area/dir1
This section describes the DBFS administration tools.
An Oracle Wallet allows the DBFS client to mount a DBFS store without the user having to enter a password. Refer to Oracle Database Advanced Security Administrator's Guide for more information about creation and management of wallets. The "/@" syntax means to use the wallet.
Create a directory for the wallet. For example:
mkdir $ORACLE_HOME/oracle/wallet
Create an auto-login wallet.
mkstore -wrl $ORACLE_HOME/oracle/wallet -create
Add the wallet location in the client's sqlnet.ora file:
vi $TNS_ADMIN/sqlnet.ora

WALLET_LOCATION =
  (SOURCE =
    (METHOD = FILE)
    (METHOD_DATA =
      (DIRECTORY = $ORACLE_HOME/oracle/wallet)
    )
  )
Add the following parameter in the client's sqlnet.ora file:
vi $TNS_ADMIN/sqlnet.ora

SQLNET.WALLET_OVERRIDE = TRUE
Create credentials:
mkstore -wrl wallet_location -createCredential db_connect_string username password
For example:
mkstore -wrl $ORACLE_HOME/oracle/wallet -createCredential DBConnectString scott tiger
Add the connection alias to your tnsnames.ora file.
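For example, an entry for the DBConnectString alias used below might look like the following; the host, port, and service name are placeholders:

DBConnectString =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = hrdb_host)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = hrservice))
  )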
Now you can use dbfs_client with Oracle Wallet.
For example:
$ dbfs_client -o wallet /@DBConnectString /mnt/dbfs
The database manages the security in DBFS, not the operating system security model. Access to a database file system requires a database login as a database user with privileges on the tables that underlie the file system. Access to the file system may be granted to users by the database administrator; this implies that different database users may have different READ or UPDATE privileges to the file system, as determined by the database administrator. The database administrator, of course, has access to all files stored in the DBFS file system.
On the client machine, access to a DBFS mount point is limited to the operating system user that mounts the file system. This, however, does not limit the number of users who can access the DBFS, because many users may separately mount the same DBFS file system.
Linux performs operating system file-level permission checking when a DBFS file system is mounted. DBFS does not perform this check when you use the command interface or the PL/SQL interface directly; instead, it performs only database privilege checking.
DBFS operates under a security model where all filesystems created by a user are private to that user, by default. Oracle recommends maintaining this model. Because operating system users and RDBMS users are different, it is possible to allow multiple operating system users to mount a single DBFS filesystem. These mounts may potentially have different mount options and permissions. For example, user1 may mount the DBFS as READ ONLY, and user2 may mount it as READ WRITE. However, the RDBMS views both users as having the same privileges because they would be accessing the filesystem as the same RDBMS user.
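As a sketch, two operating system users might mount the same store with different options while connecting as the same RDBMS user; the mount points are illustrative and a wallet is assumed to be configured for each user:

user1$ dbfs_client /@DBConnectString -o wallet,ro /home/user1/dbfs
user2$ dbfs_client /@DBConnectString -o wallet,rw /home/user2/dbfs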
The operating system user who mounts the file system may allow root access to the file system by specifying the allow_root option. This option requires that the /etc/fuse.conf file contain the user_allow_other field, as demonstrated in Example 6-4.
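A minimal /etc/fuse.conf that enables this contains a single line:

# /etc/fuse.conf
user_allow_other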
Some circumstances may require that multiple RDBMS users access the same filesystem. For example, the RDBMS user that owns the filesystem may be a privileged user, and sharing its user credentials may pose a security risk. To mitigate this, DBFS allows multiple RDBMS users to share a subset of the filesystem state.
While DBFS registrations and mounts made through the DBFS content API are private to each user, the underlying filesystem and the tables on which they rely may be shared across users. After this is done, the individual filesystems may be independently mounted and used by different RDBMS users, either through SQL/PLSQL, or through dbfs_client APIs.
In the following example, user user1 is able to modify the filesystem, and user user2 can see these changes. Here, user1 is the RDBMS user that creates a filesystem, and user2 is the RDBMS user that eventually uses dbfs_client to mount and access the filesystem. Both user1 and user2 must have the DBFS_ROLE privilege.
To establish DBFS access sharing across multiple RDBMS users:
Connect as the user who creates the filesystem.
sys@tank as sysdba> connect user1
Connected.
Create the filesystem user1_FS, register the store, and mount it as user1_mnt.
user1@tank> exec dbms_dbfs_sfs.createFilesystem('user1_FS');
user1@tank> exec dbms_dbfs_content.registerStore('user1_FS', 'posix', 'DBMS_DBFS_SFS');
user1@tank> exec dbms_dbfs_content.mountStore('user1_FS', 'user1_mnt');
user1@tank> commit;
[Optional] You may check that the previous step has completed successfully by viewing all mounts.
user1@tank> select * from table(dbms_dbfs_content.listMounts);

STORE_NAME           |  STORE_ID|PROVIDER_NAME
---------------------|----------|------------------------------------------
PROVIDER_PKG         |PROVIDER_ID|PROVIDER_VERSION     |STORE_FEATURES
---------------------|-----------|---------------------|--------------------
STORE_GUID
----------
STORE_MOUNT
----------------------------------------------------------------------------
CREATED
----------------------------------------------------------------------------
MOUNT_PROPERTIES(PROPNAME, PROPVALUE, TYPECODE)
----------------------------------------------------------------------------
user1_FS             | 1362968596|posix
"DBMS_DBFS_SFS"      | 3350646887|0.5.0                | 12714135
 141867344
user1_mnt
01-FEB-10 09.44.25.357858 PM
DBMS_DBFS_CONTENT_PROPERTIES_T(
  DBMS_DBFS_CONTENT_PROPERTY_T('principal', (null), 9),
  DBMS_DBFS_CONTENT_PROPERTY_T('owner', (null), 9),
  DBMS_DBFS_CONTENT_PROPERTY_T('acl', (null), 9),
  DBMS_DBFS_CONTENT_PROPERTY_T('asof', (null), 187),
  DBMS_DBFS_CONTENT_PROPERTY_T('read_only', '0', 2))
[Optional] Connect as the user who will use dbfs_client.
user1@tank> connect user2
Connected.
[Optional] Note that user2 cannot see user1's DBFS state, as user2 has no mounts.
user2@tank> select * from table(dbms_dbfs_content.listMounts);
While connected as user1, export filesystem user1_FS for access to any user with the DBFS_ROLE privilege.
user1@tank> exec dbms_dbfs_sfs.exportFilesystem('user1_FS');
user1@tank> commit;
Connect as the user who will use dbfs_client.
user1@tank> connect user2
Connected.
As user2, view all available mounts.
user2@tank> select * from table(dbms_dbfs_content.listMounts);

SCHEMA_NAME                |TABLE_NAME                 |PTABLE_NAME
---------------------------|---------------------------|-------------------
VERSION#
--------------------------------
CREATED
---------------------------------------------------------------------------
FORMATTED
---------------------------------------------------------------------------
PROPERTIES(PROPNAME, PROPVALUE, TYPECODE)
---------------------------------------------------------------------------
user1                      |SFS$_FST_11                |SFS$_FSTP_11
0.5.0
01-FEB-10 09.43.53.497856 PM
01-FEB-10 09.43.53.497856 PM
(null)
As user2, register and mount the store, but do not re-create the user1_FS filesystem.
user2@tank> exec dbms_dbfs_sfs.registerFilesystem('user2_FS', 'user1', 'SFS$_FST_11');
user2@tank> exec dbms_dbfs_content.registerStore('user2_FS', 'posix', 'DBMS_DBFS_SFS');
user2@tank> exec dbms_dbfs_content.mountStore('user2_FS', 'user2_mnt');
user2@tank> commit;
[Optional] As user2, you may check that the previous step has completed successfully by viewing all mounts.
user2@tank> select * from table(dbms_dbfs_content.listMounts);

STORE_NAME           |  STORE_ID|PROVIDER_NAME
---------------------|----------|------------------------------------------
PROVIDER_PKG         |PROVIDER_ID|PROVIDER_VERSION     |STORE_FEATURES
---------------------|-----------|---------------------|--------------------
STORE_GUID
----------
STORE_MOUNT
----------------------------------------------------------------------------
CREATED
----------------------------------------------------------------------------
MOUNT_PROPERTIES(PROPNAME, PROPVALUE, TYPECODE)
----------------------------------------------------------------------------
user2_FS             | 1362968596|posix
"DBMS_DBFS_SFS"      | 3350646887|0.5.0                | 12714135
 141867344
user2_mnt
01-FEB-10 09.46.16.013046 PM
DBMS_DBFS_CONTENT_PROPERTIES_T(
  DBMS_DBFS_CONTENT_PROPERTY_T('principal', (null), 9),
  DBMS_DBFS_CONTENT_PROPERTY_T('owner', (null), 9),
  DBMS_DBFS_CONTENT_PROPERTY_T('acl', (null), 9),
  DBMS_DBFS_CONTENT_PROPERTY_T('asof', (null), 187),
  DBMS_DBFS_CONTENT_PROPERTY_T('read_only', '0', 2))
[Optional] List pathnames for user2 and user1. Note that another mount, user2_mnt, for store user2_FS, is available for user2. However, the underlying filesystem data is the same for user2 as for user1.
user2@tank> select pathname from dbfs_content;

PATHNAME
-------------------------------------------------------------------------------
/user2_mnt
/user2_mnt/.sfs/tools
/user2_mnt/.sfs/snapshots
/user2_mnt/.sfs/content
/user2_mnt/.sfs/attributes
/user2_mnt/.sfs/RECYCLE
/user2_mnt/.sfs

user2@tank> connect user1
Connected.

user1@tank> select pathname from dbfs_content;

PATHNAME
---------------------
/user1_mnt
/user1_mnt/.sfs/tools
/user1_mnt/.sfs/snapshots
/user1_mnt/.sfs/content
/user1_mnt/.sfs/attributes
/user1_mnt/.sfs/RECYCLE
/user1_mnt/.sfs
In filesystem user1_FS, user1 creates file xxx.
user1@tank> var ret number;
user1@tank> var data blob;
user1@tank> exec :ret := dbms_fuse.fs_create('/user1_mnt/xxx', content => :data);
user1@tank> select :ret from dual;
:RET
----------
0
[Optional] Write to file xxx, created in the previous step.
user1@tank> var buf varchar2(100);
user1@tank> exec :buf := 'hello world';
user1@tank> exec dbms_lob.writeappend(:data, length(:buf), utl_raw.cast_to_raw(:buf));
user1@tank> commit;
[Optional] Show that file xxx exists and contains the appended data.
user1@tank> select pathname, utl_raw.cast_to_varchar2(filedata)
from dbfs_content where filedata is not null;
PATHNAME
-------------------------------------------------------------------------------
UTL_RAW.CAST_TO_VARCHAR2(FILEDATA)
-------------------------------------------------------------------------------
/user1_mnt/xxx
hello world
User user2 sees the same file in their own DBFS-specific pathname and mount prefix.
user1@tank> connect user2
Connected.
user2@tank> select pathname, utl_raw.cast_to_varchar2(filedata) from
dbfs_content where filedata is not null;
PATHNAME
-------------------------------------------------------------------------------
UTL_RAW.CAST_TO_VARCHAR2(FILEDATA)
-------------------------------------------------------------------------------
/user2_mnt/xxx
hello world
After the export and register pairing completes, both users behave as equals with regard to their usage of the underlying tables. The exportFilesystem() procedure manages the necessary grants for access to the same data, which is shared between schemas. After user1 calls exportFilesystem(), the filesystem access may be granted to any user with DBFS_ROLE. Note that a different role can be specified.
Subsequently, user2 may create a new DBFS filesystem that shares the same underlying storage as the user1_FS filesystem, by invoking the dbms_dbfs_sfs.registerFilesystem(), dbms_dbfs_content.registerStore(), and dbms_dbfs_content.mountStore() procedure calls.
When multiple RDBMS users share a filesystem, they must ensure that all RDBMS users unregister their interest in the filesystem before the owner (here, user1) drops the filesystem.
Oracle does not recommend running DBFS as the root user.
The dbfs_client supports multiple levels of tracing to help diagnose problems. It can output traces either to a file or to /var/log/messages using the syslog daemon on Linux. When tracing to a file, it keeps two trace files on disk; dbfs_client rotates the trace files automatically and limits disk usage to 20 MB.
By default, tracing is turned off except for critical messages, which are always logged to /var/log/messages.
If dbfs_client is not able to connect to the Oracle Database, enable tracing using the trace_level and trace_file options. Tracing prints additional messages to the log file for easier debugging.
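For example, a mount command that writes a trace file might look like the following sketch; the numeric trace level shown is illustrative, so map it to the level you need from the trace_level list above:

$ dbfs_client ETLUser@DBConnectString -o trace_level=1 -o trace_file=/tmp/dbfs_client.trc /mnt/dbfs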
DBFS uses Oracle Database for storing files. Sometimes Oracle server issues are propagated to dbfs_client as errors. If there is a dbfs_client error, check the Oracle server logs to see whether that is the root cause.
In cases of failure of one database instance in an Oracle RAC cluster, dbfs_client can fail over to one of the other existing database instances. For dbfs_client failover to work correctly, you must modify the Oracle database service and specify failover parameters, as demonstrated in Example 6-5.
Example 6-5 Enabling DBFS Client Failover Events
exec DBMS_SERVICE.MODIFY_SERVICE(service_name => 'service_name',
            aq_ha_notifications => true,
            failover_method => 'BASIC',
            failover_type => 'SELECT',
            failover_retries => 180,
            failover_delay => 1);
To ensure no data loss during failover of the DBFS connection after a failure of the back-end Oracle database instance, specify the -o failover mount option, as demonstrated in Example 6-6. In this case, cached writes may be lost if the client loses the connection. However, back-end failover to other Oracle RAC instances or standby databases does not cause lost writes.
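A mount command with failover enabled might look like the following sketch, assuming a wallet and connect string have already been configured:

$ dbfs_client /@DBConnectString -o wallet,failover /mnt/dbfs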
It is possible to have multiple copies of dbfs_client accessing the same shared file system. The sharing and caching semantics are similar to NFS. Like NFS, the default mode caches writes on the client and flushes them after a timeout, or when the user closes the file. Also like NFS, writes to a file are only visible to clients that open the file after the writer closed the file. This behavior is commonly referred to as close-to-open cache consistency.
To bypass client-side write caching, specify O_SYNC when the file is opened. Writes in the cache can be forced to disk by calling fsync.
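For example, from the shell you can force synchronous writes to a DBFS-mounted file with dd's oflag=sync flag, which opens the output file with O_SYNC; the paths shown are illustrative:

$ dd if=/tmp/source.dat of=/mnt/dbfs/staging_area/target.dat bs=1M oflag=sync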
There are two alternatives for backing up DBFS. In the first one, you back up the tables that underlie the file system at the database level. Alternatively, use a file system backup utility, such as Oracle Secure Backup, through a mount point.
An advantage of backing up the tables at the database level is that the files in the file system are always consistent with the relational data in the database. A full restore and recover of the database also fully restores and recovers the file system with no data loss. During a point-in-time recovery of the database, the files are recovered to the specified time. As usual with database backup, modifications that occur during the backup do not affect the consistency of a restore. The entire restored file system is always consistent with respect to a specified time stamp.
The advantage of backing up the file system using a file system backup utility is that individual files can be restored from backup more easily. Any changes made to the restored files after the last backup are lost.
You must specify the allow_root mount option if backups are scheduled using the Oracle Secure Backup Administrative Server.
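For example, a mount intended for such backups might be started as follows; this is a sketch that assumes a configured wallet and the user_allow_other setting in /etc/fuse.conf:

$ dbfs_client /@DBConnectString -o wallet,allow_root /mnt/dbfs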
Like any shared file system, the performance of DBFS for small files lags the performance of a local file system. Each file data or metadata operation in DBFS must go through the FUSE user mode file system, and then be forwarded across the network to the database. Therefore, each operation that is not cached on the client takes a few milliseconds to run in DBFS.
For operations that involve an input/output (IO) to disk, the time delay overhead is masked by the wait for the disk IO. Naturally, larger IOs have a lower percentage overhead than smaller IOs. The network overhead is more noticeable for operations that do not issue a disk IO.
When you compare the operations on a few small files with a local file system, the overhead is not noticeable, but operations that affect thousands of small files incur a much more noticeable overhead. For example, listing a single directory or looking at a single file produce near instantaneous response, while searching across a directory tree with many thousands of files results in a larger relative overhead.
DBFS offers the advanced features available with SecureFiles LOBs: compression, deduplication, encryption, and partitioning. For example, DBFS can be configured as a compressed file system with partitioning. At the time of creating a file system, you must specify the set of enabled features for the file system. See Chapter 4, "Using Oracle SecureFiles LOBs" for more information about the features of SecureFiles LOBs.