
Oracle Course (2)

  1. Oracle Architecture Components
    1. Describe the Oracle Server Architecture and its main components.
    2. Oracle is basically made up of:

      1. An instance: Which is basically a set of memory structures and background processes that cache the most frequently used data and serve as an exchange point between the users (clients) and the datafiles that hold the information on disk.
      2. Datafiles: These datafiles are located on hard disks and contain all the data stored in Oracle: control files, tablespaces, redo-log files, etc.
      3. Oracle software: This contains the Oracle kernel, which acts much like a sub-operating system on top of the host system, because it coordinates the memory structures and processes (the instance) and the file updates (the datafiles) on the hard drives.
    3. List the Structures involved in connecting a user to an Oracle Instance
    4. The Oracle server has a set of memory structures and operating system processes called "the instance" which control the actions that occur between the end user and the database. Its main components are:


      1. SGA
        1. Database Buffer Cache(depends on DB_BLOCK_SIZE & DB_BLOCK_BUFFERS parameters)
        2. Redo Log Buffer (depends on LOG_BUFFER, monitored via V$SYSSTAT)
        3. Shared Pool (depends on SHARED_POOL_SIZE)
          1. Library Cache
            1. Shared SQL Areas (Contains the parse tree and execution plan for a single SQL statement)
            2. Private SQL Areas (Contain data such as bind information and runtime buffers. Each session that issues a SQL statement has a private SQL area; many private SQL areas can be associated with the same shared SQL area. If a session is connected via a dedicated server, private SQL areas are in the user's PGA; if the session is connected via the multi-threaded server, the private area is kept in the SGA.)
              1. Persistent (Contains bind information that persists across executions, code for datatype conversion, and other state information. In contrast to the runtime area, the persistent area remains allocated after the statement completes; open cursors that will not be reused should be closed to free the persistent area and to minimize the amount of memory required by the application.)
              2. Runtime (Contains information used while the SQL statement is being executed. It is allocated as the first step of an execute request, and the memory is released when the statement finishes.)
          2. Dictionary Cache (Also known as the row cache, because it holds data as rows instead of buffers)
          3. Control Structures
        4. Large Pool (Optional,used for large memory allocations)
      2. PGA: A PGA (Process Global Area) is a memory region containing data and control information for a single process. One PGA is allocated for each server process.
        1. Stack Space: A PGA always contains stack space, which is memory allocated to hold a session's variables, arrays, and other information.
        2. Session Information: If the instance is in single-server mode, the PGA also contains information about the user's session, such as private SQL areas. If the instance runs the multi-threaded server, this session information is not in the PGA, but is instead allocated in the SGA. The PGA's initial size is fixed and OS specific; however, the parameters OPEN_LINKS and DB_FILES affect the sizes of PGAs.
        3. Sort Areas: Use memory from the PGA of the Oracle server process that performs the sort on behalf of the user process. However, a part of the sort area (up to SORT_AREA_RETAINED_SIZE) exists in the runtime area of the process's private SQL area. The sort area can grow up to the limit SORT_AREA_SIZE.
      3. Software Code Areas : Portions of memory used to store Oracle code
      4. Background Processes:
        1. PMON (Process Monitor: performs process recovery when a user process fails; it is responsible for cleaning up the database buffer cache and freeing resources that the user process was using)
        2. SMON (System Monitor: performs crash recovery, if necessary, at instance startup; it is also responsible for coalescing contiguous free space within tablespaces and cleaning up temporary segments that are no longer in use)
        3. DBWR (Writes the modified (dirty) buffers in the database buffer cache to disk)
        4. LGWR (Writes the redo log buffer contents to a redo log file on disk. It writes the buffer to disk when: a user process commits a transaction, every three seconds, when the redo-log buffer is one-third full, and when a DBWn process writes modified buffers to disk)
        5. ARCH: Copies online redo log files to a designated storage device once they become full or when the ALTER SYSTEM SWITCH LOGFILE command forces a log switch.
        6. CKPT (When a checkpoint occurs, Oracle must update the headers of all datafiles to record the details of the checkpoint.)
        7. RECO : Used with distributed database configurations to automatically resolve failed distributed transactions
        8. SNPn (Job Queue/Snapshot Processes): In a distributed database configuration, up to 36 job queue processes can automatically refresh table snapshots.
        9. LCKn : In Oracle Parallel Server, a lock process provides inter-instance locking
        10. QMN ( Queue Monitor) (Optional for Oracle Advanced Queuing which monitors the message queues)
        11. Snnn (Shared server processes: serve multiple client requests in the multi-threaded server configuration)
        12. Dnnn (Dispatcher processes: support the multi-threaded configuration by allowing user processes to share a limited number of server processes)
        13. Pnnn (Parallel query processes)
    1. List the stages in processing: queries, DML statements, COMMITs.
    2. Queries: The server session hashes the SQL statement passed to it and compares that hash number with the hash numbers of statements already saved in the shared SQL area. If an exact duplicate of the statement is found in the shared pool, the parsed form of the statement and the execution plan that are already stored are used. If a match is not found in the shared pool, the server session parses the statement.

      Next, the server checks whether the data blocks necessary to complete the query are already stored in the database buffer cache. If the blocks are not in the buffer, the server reads the necessary blocks from the data files and copies them into the cache. Only then does it return the information.

      DML Statements: The server performs the same steps up to the point where the data is copied into the database buffer cache. Before returning the information (as is the case with a query), the blocks in memory (the cache) are modified accordingly. Once modified in memory, these blocks are marked dirty and placed on the dirty list. Redo log information for the transaction is also generated and placed in the redo-log buffer. From this point, any of the following can occur:

      1. The user commits. This signals LGWR to flush the redo-log buffer to the online redo-log file.
      2. The redo information that was generated causes the redo-log buffer to become one-third full. This triggers a redo-log buffer flush by LGWR.
      3. The number of dirty blocks reaches a threshold length. This triggers DBWR to flush all the dirty blocks in the database buffer cache to the data files, which in turn also causes LGWR to flush the redo-log buffer to the online redo-log files.
      4. A database checkpoint occurs. This triggers the database buffer cache (via DBWR) as well as the redo-log buffer (via LGWR) to flush.
      5. The number of available free buffers in the buffer cache drops below the threshold value. This also causes the database buffer cache to flush.
      6. An unrecoverable error occurs. This forces the transaction to be terminated and rolled back, and an error is reported back to the server session.

      NOTE: The transaction is never recorded as successful until the redo-log buffer is successfully written to the online redo-log files. This demonstrates the importance of the LGWR and ARCH processes.

      1. Getting Started with the Oracle Server.
        1. Identify the Features of the Universal Installer
        2. Install and Deinstall Products. Upgrade Oracle version

        3. Setup operating system and password file authentication
        4. Via the operating system: When setting up a user to be authenticated via the operating system, the user name defined in the operating system will differ from the user name in Oracle by a prefix defined in the init.ora file, named OS_AUTHENT_PREFIX; this prefix defaults to OPS$. Example:

          If the UNIX system has a user named alfonso and there is an Oracle account named OPS$ALFONSO, then this user will be able to log in automatically to SQL*Plus once authenticated by the OS.

          The user can be defined in various ways inside Oracle:

            1. Allowing access through another OS account: This method allows the account to be accessed from other OS user accounts as well:
            2. create user OPS$alfonso identified by ALFONSO;
            3. Authentication only possible through the OS user account:
              1. create user OPS$alfonso identified externally;
              2. create user OPS$alfonso identified by values 'no way';
            4. NOTE: If a user is connecting via remote access and the connection is nonsecure (non-encrypted), then the parameter REMOTE_OS_AUTHENT must be set to TRUE in the init.ora (the default is FALSE).
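
          With the OPS$ account in place, the OS-authenticated user can connect without supplying credentials, for example from the UNIX shell:

              $ sqlplus /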

          Via Password File: When the user cannot be authenticated via the operating system, a password file is created. To generate a password file it is necessary to use the ORAPWD utility.

            1. Create the password file with the command :
            2.      
              ORAPWD FILE=filename PASSWORD=password ENTRIES=max_users

              The PASSWORD parameter specifies the password that must be used in order to log in as a DBA to the database. The ENTRIES parameter cannot be modified afterward, so it is recommended to set it to a high value.

            3. Set the parameter REMOTE_LOGIN_PASSWORDFILE in the init.ora file. This parameter has three values: NONE, SHARED, and EXCLUSIVE.
            4. The NONE value causes Oracle to behave as if the password file did not exist (this is the default).

              The SHARED value enables the password file to be used by multiple databases. However, the only users recognized by a SHARED password file are SYS and INTERNAL; thus you cannot add users to a SHARED password file.

              The EXCLUSIVE value is, of course, used for a single database.

            5. Once this is done and the database has been restarted, you connect internal, then grant either SYSOPER or SYSDBA to your users. SYSDBA gives the user DBA authority; SYSOPER lets the user perform database operations support activities. The view V$PWFILE_USERS holds information on which users have been granted these privileges.
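
              A sketch of this sequence from Server Manager (alfonso is a placeholder user):

                connect internal
                grant SYSDBA to alfonso;
                select * from v$pwfile_users;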

          The use of a password file does not prevent OS-authenticated users from connecting if they meet the criteria for OS authentication.

        5. List the main components of the Oracle Enterprise Manager and their uses
        6. Oracle Enterprise Manager provides a framework for an enterprise-wide distributed system management solution. Enterprise Manager is open and extensible; Tcl is used to submit commands to remote operating systems and databases for execution. The implementation of Tcl used by Enterprise Manager is known as OraTcl, and includes extensions that enable the functions you need to fully manage an Oracle database environment:

          1. Start and Stop Oracle databases
          2. Execute SQL
          3. Access operating system resources and functions
          4. Access Simple Network Management Protocol (SNMP) Management Information Base (MIB) variables describing Oracle databases.
          5. Access non-Oracle SNMP-enabled services and devices.

          NOTE on SNMP: Originally SNMP was used to communicate with network devices, but it is now used to communicate with applications as well. Application developers now integrate applications within the console using Object Linking and Embedding (OLE) or calls to one of the published APIs specific to Oracle Enterprise Manager.

          When information moves over the network between the console and the intelligent agents, it can be secured with Oracle Server Network Services using the Net8 Advanced Networking Option (ANO). This level of security makes it possible to administer remote databases over the Internet.

          In addition to messages between the communications daemon and the intelligent agents, the application components such as the database administration tools communicate directly with the remote databases by using SQL over Net8. Tasks performed by the application components directly on remote databases in real time do not use the communication daemon. The communication daemon uses either native TCP/IP or Net8 TNS (Transparent Network Substrate) connections to communicate with remote intelligent agents to perform core console functions such as job scheduling and event management.

          The intelligent agent autonomously executes and manages remote activities. After the communication daemon instructs a remote agent to execute a particular script at a given time, the script executes independently of the console. If the console is unavailable when it is time to execute the remote job, the agent manages the execution and buffers up to 500 returned messages. The agent passes this information back to the console when the console becomes available.

          Getting started: After installing OEM, it is recommended that a repository be generated, as this provides an easier way to log in to the console and store preferred credentials between sessions. This is even more beneficial when there are several systems administrators; each one should have their own repository so he or she can store his or her information. The Administration Tools provided in OEM can be accessed in four ways.

          1. Directly from the Windows Taskbar:
          2. From the Enterprise Manager console application launch palette. Before using the console, however, a repository must be created. This process is done automatically. The first time the console starts with a given UserID, Repository Manager describes the sub-components that are necessary to start the console. When you respond OK at the prompt, Repository Manager creates a repository for itself, "Enterprise Manager", and "Software Manager", and it starts the "Discover New Services Wizard". If there are nodes on the network that already have the Oracle Intelligent Agent configured, this wizard communicates with the remote nodes to populate the navigation tree on the console. NOTE: Because a repository is built automatically during the first login, you must be careful to avoid creating a repository for a system user, as the repository would be generated in the SYSTEM tablespace.
          3. From within the console on the menu bar by selecting Tools, Application. Enterprise Manager is now ready to be used, but there are no databases or listeners known to Enterprise Manager unless the "Discover New Services Wizard" took advantage of remote intelligent agents to automatically discover and define them.
          4. After defining the listeners and databases, the Database Administration Tools are accessible by the fourth method: right-clicking on one of the newly added databases in the upper-left navigator window.

          The FOUR primary CONSOLE functions

          1. Navigator: Provides a tree structure representing all databases, servers, Net8 listeners, Web servers, and groups of these components. This is the most widely used part of the Enterprise Manager console; it behaves much like Internet Explorer. The menus enable the creation or deletion of database objects. Some object changes are possible directly from the Navigator using "Quick Edit", whereas more complex changes might require accessing one of the Database Administration Tools that operate on the selected object by selecting Related Tools.
          2. Group: (formerly Map) Provides a way to geographically track the status of systems and drill down into systems to examine listeners and databases; in conjunction with event functionality it provides system status at a glance. It uses bitmap files (*.bmp) to present organizational or geographical maps; used in conjunction with remote intelligent agents, Group can indicate the status of each object on the map with a small signal flag. The event management component enables the DBA to define the conditions or thresholds necessary to trigger changes in the status indicated by the signal flags on the maps.
          3. Job Scheduler: Is also dependent on the availability of remote intelligent agents. In case a job runs to resolve or prevent a fault (in Event Management), the Job Scheduler handles the task. In addition, the Job Scheduler can launch, monitor, and report completion of a series of activities on remote systems. It provides a way for DBAs to automate tasks at specific times or based on events in the OS environment. Scheduling is especially valuable in complex environments with many systems. Jobs can perform any tasks requiring OS commands or SQL. The remote agent handles the actual execution independent of the Enterprise Manager console.
          4. Event Management: Is dependent on the availability of remote intelligent agents. It communicates with remote intelligent agents to track activities and conditions on remote systems. Using this capability, Enterprise Manager sends mail, pages people, or runs a job to correct a fault or capture diagnostic information for later analysis. The Event Management System (EMS) is the Enterprise Manager component that manages the health and welfare of remote databases and the services that make them available to applications. Unlike Job, which triggers tasks on a specific schedule, EMS works with remote intelligent agents to constantly monitor for unusual conditions and then manage the events according to specifications defined by the DBA. Like Job, EMS uses the communication daemon on the local client and remote intelligent agents on each system to monitor and respond to events. The intelligent agents actually monitor the system resources, so events are managed 24 hours a day without the direct participation of the Enterprise Manager console. A variety of predefined events come ready to configure and register.

          The following Database Administration Tools are available:

            1. Instance Manager: All the configuration information stored in the init.ora file is accessible through the Instance Manager. The DBA can start and shut down database instances, mount and open databases, and toggle archive logging on and off.
            2. Schema Manager: Administers clusters, constraints, database links, functions, indexes, packages, partitions, privileges, procedures, queues, refresh groups, sequences, snapshot logs, synonyms, tables, triggers, views, and the relationships these objects have to each other.
              Schema Manager is useful for building objects such as temporary tables outside the scope of the production database design; however, it is no replacement for database design tools such as Oracle Designer, Computer Systems Advisers' Silverrun, Logic Works' ERwin, or Powersoft's PowerDesigner, and using Schema Manager in one of those environments is just as counterproductive as using native DDL.
            3. SQL Worksheet: The Database Administration Tools' counterpart to Server Manager and SQL*Plus. Targeted at DBAs, it provides a way to submit SQL, PL/SQL, DBA commands, and SQL scripts.
            4. Security Manager: The DBA can easily create, alter, and drop security objects and associate privileges, roles, quotas, and profiles with users and other roles.
            5. Storage Manager: The DBA can create, alter, and drop storage. It adds rollback segments and data files to tablespaces, takes tablespaces online and offline, and takes them into and out of read-only mode. It can also shrink rollback segments.
            6. Backup Manager: Provides two subsystems for managing recovery and backup for Oracle8i: Operating System Backup and Oracle8i Recovery Manager. OS backup does not provide a comprehensive solution for 8i (it is Oracle7 oriented). Oracle8i Recovery Manager is a comprehensive recovery and backup solution including features such as point-in-time database recovery, scripting, and control file recovery.
            7. Data Manager: Assists DBAs in performing imports, exports, and table loads. This tool provides an easy-to-use interface that generates parameter files for SQL*Loader and the import/export utilities.
            8. Software Manager: Provides a method for database administrators to distribute Oracle software and related configuration information across a network of computers by using an easy-to-use central client interface working with remote intelligent agents.
            9. Intermedia Text Manager
            10. Replication Manager
          The Diagnostic Pack

          The Enterprise Manager Diagnostic Pack is a value-added component of Enterprise Manager that provides tools for performance monitoring and tracking, session monitoring, trace management, and lock management. The Diagnostic Pack provides the tools necessary to capture, store, and analyze performance data to meet this objective.

          Performance Manager is the centerpiece of the Diagnostic Pack. This tool provides real-time performance monitoring and the capability to record performance characteristics for later analysis. It has built-in charts and tables for the collection and analysis of contention, global database statistics, I/O, workload levels, memory, and parallel server information.

          Trace Manager is a comprehensive tool for collecting data for performance management, capacity planning, and workload management. By using the provided API, Trace Manager can even collect information from third-party and custom applications. Job (the Enterprise Manager component) must be configured before using Trace Manager. The tool also uses its own repository and an Oracle Expert repository, which it builds the first time it is accessed by a user who hasn't used Expert or Trace.

          Top Sessions: Enables database administrators to monitor database sessions and kill them if necessary due to inactivity or over-utilization of resources. This tool monitors all sessions or only the most active, based on consumption of specified resources such as memory or I/O.

          Capacity Planner: The Capacity Planner of the Diagnostic Pack is used to collect and store database and operating system statistics in a repository that can be analyzed later.

          Lock Manager: A utility for monitoring database locks and troubleshooting problem blocking and waiting conditions.

          The prerequisites for this utility are the existence of the views V$LOCK, V$SESSION, and DBA_OBJECTS.

          The Tuning Pack

          This utility includes:

          Tablespace Manager : Provides detailed information on storage and utilization in tablespaces, rollback segments, and data files.

          Oracle Expert: Expert is the implementation of a performance management methodology. Whereas the other diagnostic tools provide information for database administrators to analyze, Expert applies a set of rules to the data to develop conclusions and make recommendations to improve performance.

          SQL Analyze: A tool that enables you to performance-tune any troublesome SQL statement.

          OTHER ENTERPRISE VALUE ADDED PRODUCTS
          1. Replication Manager : Is useful for environments that make heavy use of Oracle8i replication features.
          2. Oracle Fail Safe: Manages high availability environments implemented on Windows NT clusters.
          3. Oracle Biometrics: Works with specialized hardware to administer fingerprint identification used with Advanced Networking Option.
          4. Change Management Pack: Helps the DBA manage object definition changes across multiple databases.
          5. Management Pack for Oracle Applications: Simplifies the job of managing the multiple-tier Oracle Applications environment.
          6. Standard Management Pack: This pack is a group of general utilities to help the DBA troubleshoot problems,track changes, and tune database indexes.

      2. Managing the Oracle Instance
        1. Create the Parameter File
        2. The default parameter file (init.ora) is located in the $ORACLE_HOME/dbs directory; the default naming for this file is initSID.ora, where SID is the system identifier for the database. At the very least this file should contain the following three parameters: CONTROL_FILES, DB_BLOCK_BUFFERS, and DB_NAME.
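
          A minimal initSID.ora might look like this (names, paths, and values are illustrative):

            db_name          = OSMOSIS
            control_files    = (/u01/oradata/OSMOSIS/control01.ctl)
            db_block_buffers = 1000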

        3. Startup the Instance and open the database
        4. The instance is named based on the ORACLE_SID environment variable of the OS. The stage at which the instance is created is called nomount; it allocates the SGA and starts the background processes.


          # svrmgrl
          > connect internal as sysdba
          > startup nomount pfile=/u01/app/admin/init(SID).ora

          If the path to pfile is not specified, the command will look in the directory $ORACLE_HOME/dbs/ for a file named init(SID).ora, where SID is the ORACLE_SID environment variable.

          The next stage in the process is the mount stage; in this stage the control file(s) is read and accessible, and queries and modifications to the data stored in the control file can be made.

          alter database mount

          In the final stage, the database is opened; in this stage the database files referenced by the control files are locked for exclusive use by the instance.

          alter database open

          NOTE: The alter database mount step can be skipped if the command startup mount is issued instead of startup nomount.

        5. Close the Database and Shutdown the Instance
        6. The keyword shutdown is issued from svrmgrl in order to shut down the database and instance.

          The command shutdown abort closes down the instance immediately; Oracle will need to perform crash recovery prior to opening the database again with the command alter database open. shutdown abort is used, for example, when deadlocks in the database application prevent Oracle from completing a shutdown immediate.
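
          The shutdown modes, from least to most abrupt, are issued from svrmgrl as follows:

            shutdown normal     (waits for all users to disconnect)
            shutdown immediate  (rolls back active transactions and disconnects users)
            shutdown abort      (kills the instance; crash recovery runs on the next startup)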

        7. Get and set parameter values
        8. Changes made to parameter values in the init.ora file do not take effect until the database is shut down and restarted. Current values can be queried from the V$PARAMETER view. There is an exception: some parameters can be changed with the ALTER SYSTEM command while the instance is up; however, unless init.ora is also edited, such changes will not persist across the next startup.
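
          For example, LICENSE_MAX_SESSIONS (discussed below) is one such dynamic parameter; the value shown is illustrative:

            SELECT name, value FROM v$parameter WHERE name = 'license_max_sessions';
            ALTER SYSTEM SET LICENSE_MAX_SESSIONS = 256;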

        9. Manage Sessions
        10. You can set a limit on the number of concurrent sessions that can connect to a database. To set the maximum number of concurrent sessions for an instance, set the parameter LICENSE_MAX_SESSIONS; once this limit is reached, only users who have the RESTRICTED SESSION system privilege can connect to the instance. This allows DBAs to kill unneeded sessions, allowing other sessions to connect.

          In addition to this parameter, you can set a warning limit on the number of concurrent sessions with the parameter LICENSE_SESSIONS_WARNING. Once this limit is reached, additional users can continue to connect up to the maximum limit, but Oracle sends a warning for each connecting user.

          A session marked to be terminated is displayed in the V$SESSION view. To kill a session, after querying the V$SESSION columns SID and SERIAL#, you issue the following command (assuming SID=7, SERIAL#=15, and of course a STATUS of INACTIVE):

           ALTER  SYSTEM KILL SESSION '7,15';

          After the command is issued, the STATUS column is changed to KILLED and the SERVER column is changed to PSEUDO.

          You can also see the current limits of all the license settings, the current number of sessions, and the maximum number of concurrent sessions for the instance by querying the V$LICENSE data dictionary view.

          V$LOCK shows which sessions hold locks on which resources. Net8 adds support for session multiplexing.
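
          The lookup that precedes the kill might look like this (SCOTT is a placeholder user):

            SELECT sid, serial#, status, server
            FROM   v$session
            WHERE  username = 'SCOTT';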

        11. Monitor the ALERT file and the trace file.
        12. The ALERT file, typically named SIDalrt.log, contains: all internal errors (ORA-600), block corruption errors (ORA-1578), and deadlock errors; administrative operations, such as CREATE/ALTER/DROP DATABASE/TABLESPACE/ROLLBACK SEGMENT SQL statements and STARTUP, SHUTDOWN, and ARCHIVE LOG; several messages and errors relating to the functions of shared server and dispatcher processes; errors occurring during the automatic refresh of a snapshot; and the values of all the initialization parameters at the time the database and instance start. It is stored in the location specified by BACKGROUND_DUMP_DEST. Critical functions are always logged here, as well as database startup and shutdown messages.

          Background processes also create their own trace files where problems and failures are logged. Background processes log to the directory specified by the parameter BACKGROUND_DUMP_DEST, and the trace files for server processes are written to USER_DUMP_DEST. Both kinds of files are named processname_number.trc.

          To enable SQL tracing, issue the command ALTER SESSION SET SQL_TRACE=TRUE, or set the parameter SQL_TRACE=true in init.ora.


          1. Creating the Database
            1. Prepare the Operating System
            2. On a UNIX system the majority of the required environment variables are stored in a file named oraenv or coraenv; on a Windows machine they are stored in the registry. The main environment variables to be aware of are:

              DISPLAY: The machine name or IP address, X server, and screen being used by your workstation to connect to the system where the software will be installed. If you are not sure what the X server and screen settings should be, use 0 (zero) for both: DISPLAY=workstation_name:0.0

              ORACLE_HOME: Set according to OFA (Oracle Flexible Architecture)

              ORACLE_SID: System ID for the Database ( Important for specifying which database to mount)

              ORACLE_BASE: Set according to OFA

              ORA_NLS33: Required to be set if installing a database with a storage character set other than US7ASCII. To be set to $ORACLE_HOME/ocommon/nls/admin/data

              NLS_LANG: Required if installing or creating a database that uses a character set other than US7ASCII (recommended Mexican Spanish character set: WE8DEC, NLS_VALUE=esm)

              LD_LIBRARY_PATH : To be updated with the location of the Oracle Libraries $ORACLE_HOME/lib

              CLASSPATH : Location for Java Classes to load.

              On UNIX, a group called dba is created for the DBAs. An account orainstall is used to install the Oracle software; this account must be a member of the dba group.
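
              A typical UNIX (Bourne/Korn shell) session might set these variables as follows; all paths are illustrative:

                export ORACLE_BASE=/u01/app/oracle
                export ORACLE_HOME=$ORACLE_BASE/product/8.1.7
                export ORACLE_SID=OSMOSIS
                export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$LD_LIBRARY_PATH
                export ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data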

            3. Prepare the parameter file :
            4. The minimum parameters required to create the database are CONTROL_FILES, DB_BLOCK_SIZE, and DB_NAME; other recommended parameters include USER_DUMP_DEST, BACKGROUND_DUMP_DEST, NLS_DATE_FORMAT, etc.

            5. Create the Database :
            6. The create database command creates:

              The system datafiles and tablespace, control files, initial redo log groups, the system rollback segment, and the internal database tables necessary for the operation of the Oracle database. (This command is issued after the database is put in the nomount stage.)

              It is very important to correctly size the datafile specified with the create database command, as this datafile will be assigned to the SYSTEM Tablespace. Allocate more space to the database if the database will make heavy usage of stored packages, procedures, functions and triggers. Set the MAX parameters high as well. Example :

              CREATE DATABASE "ODBC"
              maxdatafiles 254
              maxinstances 8
              maxlogfiles 32
              character set US7ASCII
              national character set US7ASCII
              DATAFILE '/u01/app/oracle/oradata/ODBC/system01.dbf' SIZE 175M
              logfile '/u01/app/oracle/oradata/ODBC/redo01.log' SIZE 500K,
              '/u01/app/oracle/oradata/ODBC/redo02.log' SIZE 500K;

              After the command is run the database opens automatically. At this point it is a working, although very bare database.

          2. Create Data Dictionary Views and the Standard Packages
            1. Construct the Data Dictionary views
            2. The data dictionary views are created by the catalog.sql script located at $ORACLE_HOME/rdbms/admin/

            3. Query the Data Dictionary
            4. The data dictionary is composed of:

              1. Internal RDBMS (X$) tables: These tables are used by the Oracle RDBMS to keep track of internal database information. Most of them are not designed to be used directly by DBAs or users.
              2. Data dictionary tables: Hold information for tables, indexes, constraints, and all other database constructs. They are owned by SYS and are easily identifiable by a trailing $ in their names (tab$, seg$, con$, etc.). They are generated by the script SQL.BSQ.
              3. Dynamic performance views: These are the mainstay of the DBA. The V$ views are actually public synonyms for the V$ views owned by SYS.
              4. Data dictionary views: Views created on the X$ and data dictionary tables. They are divided into three categories: DBA_, ALL_, and USER_.
            5. Prepare the PL/SQL environment using the administrative scripts.
            6. The script catproc.sql is used to install the procedural option (the PL/SQL objects) and its supporting database structures.
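
              The scripts are typically run from Server Manager connected as a privileged user, for example:

                connect internal
                @$ORACLE_HOME/rdbms/admin/catalog.sql
                @$ORACLE_HOME/rdbms/admin/catproc.sql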

            7. Administer stored procedures and packages
            8. When a subprogram is created via the CREATE OR REPLACE command, it is stored in the database. The subprogram is stored in compiled form, known as p-code. The p-code has all the references in the subprogram evaluated, and the source code is translated into a form that is easily readable by the PL/SQL engine. When the subprogram is called, the p-code is read from disk, if necessary, and executed. P-code is analogous to the object code generated by other 3GL compilers. Since the p-code has the object references in the subprogram already evaluated (this is the early binding property mentioned in the PL/SQL characteristics), executing the p-code is a comparatively inexpensive operation.

              Information about the subprogram is accessible through various dictionary views:

              1. USER_OBJECTS: Contains information on all the subprograms (and all other objects), including when the object was created and last modified, the type of the object (function, procedure, table, sequence, etc.), and the validity of the object.
              2. USER_SOURCE: Contains the original source code for the object.
              3. USER_ERRORS: Contains information about compile errors.
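
              For example (MY_PROC is a placeholder object name):

                SELECT object_name, object_type, status, last_ddl_time
                FROM   user_objects
                WHERE  object_type IN ('PROCEDURE', 'FUNCTION', 'PACKAGE');

                SELECT text FROM user_source WHERE name = 'MY_PROC' ORDER BY line;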

              A package is essentially a named declarative section. Anything that can go into the declarative part of a block can go into a package. This includes procedures, functions, cursors, types, and variables. A package is composed of two parts: a header (or specification) and a body. They are both separate dictionary objects in the database.

              The header contains all the elements that make up the package: procedure specifications, function specifications, variable declarations, type definitions, exception declarations, and cursor declarations. However, it does not contain the code for any of them.

              The package body cannot be compiled unless the package header has compiled successfully. It contains the code for the forward subprogram declarations in the package header. Objects in the header that are not forward declarations (like an exception) can be referenced in the package body without being redeclared. The package body is optional in case the header does not contain any procedures or functions (only variable declarations, cursors, types, etc.).

              The package body and header are also stored in USER_OBJECTS, where OBJECT_TYPE is PACKAGE and PACKAGE BODY, respectively.
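
              A minimal sketch of a header/body pair (emp_pkg and the emp table are illustrative):

                CREATE OR REPLACE PACKAGE emp_pkg AS
                  e_no_emp EXCEPTION;                                  -- declaration only, no code
                  FUNCTION get_salary(p_empno NUMBER) RETURN NUMBER;   -- forward declaration
                END emp_pkg;
                /
                CREATE OR REPLACE PACKAGE BODY emp_pkg AS
                  FUNCTION get_salary(p_empno NUMBER) RETURN NUMBER IS
                    v_sal NUMBER;
                  BEGIN
                    SELECT sal INTO v_sal FROM emp WHERE empno = p_empno;
                    RETURN v_sal;
                  END get_salary;
                END emp_pkg;
                /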

              NOTE: To find dependencies among objects, the views DBA_DEPENDENCIES, USER_DEPENDENCIES, and ALL_DEPENDENCIES are provided.

            9. List the types of database event triggers
            10. A trigger is executed implicitly whenever the triggering event happens, and the trigger does not accept arguments. Triggers are used to:

              1. Maintain complex integrity constraints not possible through declarative constraints enabled on the table.
              2. Audit information in a table by recording the changes made and who made them.
              3. Automatically signal other programs that action needs to take place when changes are made to a table.

              The general syntax is:


              CREATE [OR REPLACE] TRIGGER trigger_name
              [BEFORE | AFTER] triggering_event ON table_reference
              [FOR EACH ROW [WHEN trigger condition]]
              trigger_body;

              Where triggering_event specifies when the trigger fires, table_reference is the table for which the trigger is defined, and trigger_body is the main code for the trigger.

              The required components of a trigger are the trigger_name, triggering_event, and the body; the WHEN clause is optional. The namespace for trigger names is different from that of other subprograms. Subprograms (procedures, functions, tables, packages, etc.) share the same namespace within a schema, which means that a procedure cannot have the same name as another existing object in the schema (be it a table, package, etc.). Triggers, however, can have the same name as a table or procedure, since they are in a separate namespace.

              TYPES OF TRIGGERS

              The triggering event determines the type of the trigger. Triggers can be defined for INSERT, UPDATE, or DELETE operations. They can be fired before or after the operation, and they can fire on row or statement operations. The values for statement, timing, and level determine the type of the trigger; there are a total of 12 possible types (3 statements x 2 timings x 2 levels).

              Category   Values                  Comments
              Statement  INSERT, UPDATE, DELETE  Defines which kind of DML statement causes the trigger to fire
              Timing     BEFORE or AFTER         Defines whether the trigger fires before or after the statement is executed
              Level      Row or Statement        A row-level trigger fires once for each row affected by the triggering statement; a statement-level trigger fires once, either before or after the statement. A row-level trigger is identified by the FOR EACH ROW clause in the trigger definition
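
              For example, a row-level auditing trigger might look like this (audit_emp and the emp_audit table are illustrative):

                CREATE OR REPLACE TRIGGER audit_emp
                AFTER UPDATE ON emp
                FOR EACH ROW
                BEGIN
                  -- record the old and new salary, who changed it, and when
                  INSERT INTO emp_audit (empno, old_sal, new_sal, changed_by, changed_on)
                  VALUES (:OLD.empno, :OLD.sal, :NEW.sal, USER, SYSDATE);
                END;
                /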

              You can drop or enable/disable the trigger with the following commands:

              drop trigger trigger_name;
              alter trigger trigger_name [ENABLE|DISABLE];


              1. Maintaining the Control Files
                1. Explain the uses of the control file
                2. A control file contains entries that specify the physical structure of the database.

                3. List the Contents of the control file
                4. It contains information on which data files and redo log files belong to the database, what character set the data is stored in, the status and revision of each datafile in the database, and other critical information. The control file is in binary format and cannot be edited manually; if the control file(s) for the database is corrupted, then the data within the database cannot be accessed.

                5. Multiplex the Controlfile
                  1. Once the database is shut down, copy the control files via the OS to the desired (mirrored) locations.
                  2. Modify the init.ora parameter CONTROL_FILES to indicate where the new mirrored (multiplexed) control files can be accessed.
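
                  The resulting init.ora entry might look like this (paths are illustrative):

                    CONTROL_FILES = (/u01/oradata/OSMOSIS/control01.ctl,
                                     /u02/oradata/OSMOSIS/control02.ctl)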
                6. Obtain Control File Information
                7. In order to see plain-text information from the control file it is necessary to dump its contents to a trace file, since the control file is in binary format. The command is:

                  ALTER DATABASE BACKUP CONTROLFILE TO TRACE

                  This creates a user trace file in USER_DUMP_DEST.

                  The view V$CONTROLFILE contains information on what controlfiles Oracle is currently reading and writing to.

                  The V$DATABASE view indicates whether the control file is current or a backup, when the control file was created, and the last time stamp. The V$LOG view contains information from the control file.
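
                  For example:

                    SELECT name FROM v$controlfile;
                    SELECT name, created, controlfile_type FROM v$database;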

              2. Maintaining the Redo Log Files
                1. Explain the use of the Online Redo Log Files
                2. These redo log files store all the change information for the database and are used by Oracle during database recovery. They are made up of at least two groups of redo-log files and are written to in a circular fashion. Each log group can consist of multiple members; each member is an exact mirror of the others, and the entries are written in parallel. By using multiple members per group you safeguard against database failure resulting from lost redo logs: as long as one member of the group is accessible, the database continues to function.

                3. Obtain log and archive information
                4. The V$LOG view holds information on the online redo log groups, the number of members per group, and which logs have been archived. The V$LOGFILE view displays file names and status information for the redo log group members.

                  In SQL*Plus the command ARCHIVE LOG LIST displays information on archiving.

                5. Control Log Switches and checkpoints
                6. Checkpoints help reduce the amount of time needed to perform instance recovery. Checkpoints cause DBWn to write all the blocks that have been modified since the last checkpoint to the datafiles and to update the datafile headers and control files to record the checkpoint. Checkpoints occur automatically when an online redo log file fills; the LOG_CHECKPOINT_INTERVAL parameter for the database instance may be used to set more frequent checkpoints.

                  The parameter LOG_CHECKPOINT_TIMEOUT specifies an upper bound on the time a buffer can be dirty in the cache before DBWn must write it to disk. If you set LOG_CHECKPOINT_TIMEOUT to 60, then no buffer remains dirty in the cache for more than 60 seconds. The default value for LOG_CHECKPOINT_TIMEOUT is 1800.

                  LOG_CHECKPOINT_INTERVAL causes the checkpoint to be triggered when a number of OS blocks (not Oracle blocks!) have been written to the redo log. Oracle limits the maximum value to 90% of the smallest log to ensure that the checkpoint advances far enough to eliminate "log wrap". Log wrap occurs when Oracle fills the last available redo log file and cannot write to any other log file because the checkpoint has not advanced far enough. By ensuring that the checkpoint never gets too far from the end of the log, Oracle never has to wait for the checkpoint to advance before it can switch logs.

                  The command ALTER SYSTEM SWITCH LOGFILE forces a log switch

                  The parameter LOG_ARCHIVE_START specifies whether archiving is enabled: if set to TRUE, archiving is enabled; if set to FALSE, no archiving takes place. As with other system parameters, archiving can also be enabled on the fly with the command ALTER SYSTEM ARCHIVE LOG START.

                7. Multiplex and maintain online redo log files
                8. This multiplexing is specified while the database is mounted and is done through the alter database command.


                  alter database
                  >add logfile group 5
                  > ('/u02/app/oradata/OSMOSIS/log_1c.dbf',
                  > '/u05/app/oradata/OSMOSIS/log_2c.dbf') size 5M;

                  To add a new log file to an existing group:


                  alter database
                  >add logfile member '/u05/app/oradata/OSMOSIS/log_3c.dbf'
                  >to group 3;

                  When adding a file to an existing redo log group there is no need to specify the size, because the size has already been set for the group; all the redo log members in a group must be the same size.

                  Mirroring the archived redo log files: you can specify up to five archive log destinations with the parameter LOG_ARCHIVE_DEST_n. To enable or disable these destinations for archived redo log files, use the parameter LOG_ARCHIVE_DEST_STATE_n (values are DEFER and ENABLE (the default)). Example:

                  LOG_ARCHIVE_DEST_1 = 'LOCATION =/u02/oradata/archive'
                  LOG_ARCHIVE_DEST_2 = 'SERVICE = OSMOSIS2'

                  The SERVICE specifier is used when maintaining a standby database; use the SERVICE keyword to specify a valid net service name from the tnsnames.ora file.

                9. Plan Online Redo Log Files:
                10. Maintain at least three online redo-log groups with two members each, and place each member of a group on a different disk and controller. The size of the redo log files directly influences checkpoint performance. The smaller the size of the smallest log, the more aggressively Oracle writes dirty buffers to disk to ensure that the position of the checkpoint has advanced before the current log completely fills. Oracle enforces this behavior by ensuring that the number of redo blocks between the checkpoint and the most recent redo record is less than 90% of the size of the smallest log.

                  If the redo logs are small compared to the number of changes made against the database, Oracle must switch logs frequently. If the value of LOG_CHECKPOINT_INTERVAL is less than 90% of the size of the smallest log, this parameter will have the most influence over checkpointing behavior.

                  Although you specify the number and sizes of online redo log files at database creation, the ALTER DATABASE ADD LOGFILE command can be used to add a redo log file and specify its size, and the DROP LOGFILE clause can be used to drop a redo log.

                  The size of the redo log appears in the LOG_FILE_SIZE_REDO_BLKS column of the V$INSTANCE_RECOVERY view. This value shows how the size of the smallest online redo log is affecting checkpointing. By increasing or decreasing the size of your online redo logs, you indirectly influence the frequency of checkpoint writes.
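
                  For example:

                    SELECT log_file_size_redo_blks FROM v$instance_recovery;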

                  Checkpoints affect both instance recovery time and run-time performance.

                  Frequent checkpoints can reduce instance recovery time in the event of an instance failure. If checkpoints are relatively frequent, then relatively few changes to the database are made between checkpoints, and in that case relatively few changes must be rolled forward for instance recovery.

                  Checkpoints can momentarily reduce run-time performance, because checkpoints cause DBWn processes to perform I/O. However, the overhead associated with checkpoints is usually small and affects performance only while Oracle performs the checkpoint.

                  Choosing Checkpoint Frequency

                  Choose a checkpoint frequency based on your performance concerns. If you are more concerned with efficient run-time performance than recovery time, choose a lower checkpoint frequency. If you are more concerned with having fast instance recovery than with achieving optimal run-time performance, choose a higher checkpoint frequency.

                  Because checkpoints are necessary for redo log maintenance, you cannot eliminate checkpoints entirely. However, you can reduce checkpoint frequency to a minimum by:

                  1. Setting LOG_CHECKPOINT_INTERVAL in init.ora to be larger than the size of your largest redo log file.
                  2. Setting LOG_CHECKPOINT_TIMEOUT to zero, which eliminates time-based checkpoints. In addition to setting these parameters, also consider the size of the log files, as small log files can increase checkpoint activity and reduce performance. Oracle checkpoints current read blocks; by contrast, sort blocks and consistent read blocks are not checkpointed.
                11. Troubleshoot common redo log file problems
                  1. If LGWR cannot access the next group at a log switch because the group needs to be archived, database operation temporarily halts until the group becomes available; this remedies itself normally once the group is archived.
                  2. All members of the next group are inaccessible to LGWR at a log switch because of disk failures. Oracle returns an error and the database instance shuts down. You may need to perform media recovery on the database for the loss of an online redo log file.
                  3. All members of the next group are inaccessible and the database checkpoint has moved beyond the lost redo log. Media recovery is not necessary because Oracle has already saved the data recorded in the redo log to the datafiles; simply drop the inaccessible redo log group.
                  4. You want to drop an unarchived redo log while in ARCHIVELOG mode. Issue ALTER DATABASE CLEAR UNARCHIVED LOGFILE to disable archiving of the log so it can be dropped.
                  5. All the members of a group become inaccessible to LGWR while it is writing to them. Oracle returns an error and the database instance immediately shuts down. You may need to perform media recovery. If the media containing the log is not actually lost, or if the drive for the log was inadvertently turned off, media recovery may not be needed; in this case, you only need to turn the drive back on and let Oracle perform instance recovery.
                12. Analyze online and archived redo logs.
                13. LogMiner is a utility that is used to read information contained in online or archived redo logs based on various selection criteria. You can select information from the V$LOGMNR_CONTENTS view, which enables you to:

                  Track changes to a specific table, track changes made by a specific user, map data access patterns, view the SQL syntax for undoing or redoing a specific change made against the database, and use archived data for tuning and capacity planning.

                  LogMiner has a fully relational SQL interface that provides direct access to a complete historical view of a database, without forcing the restoration of the archived redo log files.

                  LogMiner is especially useful for identifying and undoing logical corruption. LogMiner processes redo log files, translating their contents into SQL statements that represent the logical operations performed on the database. The V$LOGMNR_CONTENTS view then lists the reconstructed SQL statements that represent the original operations (SQL_REDO column) and the corresponding SQL statements to undo the operations (SQL_UNDO column). Apply the SQL_UNDO statements to roll back the original changes to the database. The V$LOGMNR_CONTENTS view is used to:

                  1. Determine when a logical corruption to the database may have begun, pinpointing the time or SCN to which you need to perform incomplete recovery.
                  2. Track changes to a specific table
                  3. Track changes made by a specific user
                  4. Map data access patterns
                  5. Use archived data for tuning and capacity planning

                  LogMiner runs in an Oracle instance with the database either mounted or unmounted. LogMiner uses a dictionary file, which is a special file that indicates the database that created it as well as the time the file was created. The dictionary file is not required, but is recommended. Without a dictionary file, the equivalent SQL statements will use Oracle internal object IDs for the object names and present column values as hex data. For example, instead of the SQL statement INSERT INTO emp(name,salary) VALUES ('John Doe', 50000);, LogMiner will display:


                  insert into object#2581(col#1, col#2) values ( hextoraw('4a6f686e20446f65'),hextoraw('c306'));

                  Create a dictionary file by mounting a database and then extracting dictionary information into an external file. You must create the dictionary file from the same database that generated the log files you want to analyze. Once created, you can use the dictionary file to analyze redo logs.

                  To create the dictionary file:

                  1. Specify the parameter UTL_FILE_DIR in the init.ora file; if you do not set this parameter, the procedure will fail. Example: UTL_FILE_DIR = /u01/oradata/logs
                  2. In SQL*Plus, mount and open the database whose files you want to analyze.
                  3. Execute the PL/SQL procedure DBMS_LOGMNR_D.BUILD. Specify both a file name for the dictionary and a directory pathname for the file. This procedure creates the dictionary file, which you should use to analyze log files.
                  EXECUTE dbms_logmnr_d.build(
                  dictionary_filename => 'dictionary.ora',
                  dictionary_location => '/u01/oradata/logs');

                  Once you have created a dictionary file, you can begin analyzing redo logs. The first step is to specify the log files that you want to analyze using the ADD_LOGFILE procedure. Use the following constants: NEW to create a new list, ADDFILE to add redo logs to the list, and REMOVEFILE to remove redo logs from the list.

                  To use LogMiner:

                  1. With the database either mounted or unmounted, create a list of logs by specifying the NEW option when executing the DBMS_LOGMNR.ADD_LOGFILE procedure.
                  2. execute dbms_logmnr.add_logfile(
                    LogFileName => '/oracle/logs/log1.f',
                    Options => dbms_logmnr.NEW);

                    The options to add more logs or remove them are (respectively):

                    execute dbms_logmnr.add_logfile(
                    LogFileName => '/oracle/logs/log2.f',
                    Options => dbms_logmnr.ADDFILE);
                    execute dbms_logmnr.add_logfile(
                    LogFileName => '/oracle/logs/log2.f',
                    Options => dbms_logmnr.REMOVEFILE);

                  The following options are used to narrow the range of your search at start time:

                  Option        Specifies
                  StartScn      The beginning of an SCN range
                  EndScn        The termination of an SCN range
                  StartTime     The beginning of a time interval
                  EndTime       The ending of a time interval
                  DictFileName  The name of the dictionary file

                  For example:

                  execute dbms_logmnr.start_logmnr(
                  DictFileName => '/u01/oradata/dictionary.ora',
                  StartScn => 100,
                  EndScn => 150);

                  Once LogMiner is started, you can make use of the following data dictionary views:

                  View                 Information About
                  V$LOGMNR_DICTIONARY  The dictionary file in use
                  V$LOGMNR_PARAMETERS  Current parameter settings for LogMiner
                  V$LOGMNR_FILES       Which redo log files are being analyzed
                  V$LOGMNR_CONTENTS    The contents of the redo log files being analyzed
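
                  A typical query against the contents view might look like this (the SCOTT filter is a placeholder):

                    SELECT sql_redo, sql_undo
                    FROM   v$logmnr_contents
                    WHERE  username = 'SCOTT';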

              3. Managing Tablespaces and Data Files
                1. Describe the logical structure of the Database.
                2. A TABLESPACE is a logical division of a database. Each database has at least one tablespace (SYSTEM). These logical divisions are normally used to group user applications together, to ease maintenance, and to improve the performance of the database. A tablespace can belong to only one database.

                  Each tablespace is constituted of one or more files (datafiles) on a disk. A datafile can belong to one and only one tablespace. Creating new tablespaces requires creating new datafiles.

                3. Distinguish the different types of temporary segments.
                4. When a temporary segment is created, it uses the default storage parameters for the tablespace. The temporary segment extends itself as necessary and drops itself when the operation completes or encounters an error. The temporary segment is, by its nature, normally fragmented. A good sizing for the initial and next extents is 1/20 to 1/50 the size of the tablespace; you should also choose a pctincrease of 0, as this will generate extents of the same size. When these segments are dropped, the next temporary segment to be formed will be able to reuse the dropped extents.

                  The first sort to use the temporary tablespace allocates a temporary segment in the tablespace; when the query completes, the space used by the temporary segment is available for use by other queries. This allows the sorting operation to avoid the costs of allocating and releasing space for temporary segments. To dedicate a tablespace to temporary segments, you use the temporary argument in create tablespace or alter tablespace:

                  alter tablespace TEMP temporary;

                  To return a tablespace to permanent storage:


                  alter tablespace TEMP permanent;

                  The CONTENTS column in DBA_TABLESPACES shows the status of each tablespace as either 'TEMPORARY' or 'PERMANENT'; permanent tablespaces hold the Table, Index and Rollback segment types.

                  CREATE TEMPORARY TABLESPACE tbs_1 TEMPFILE 'file_1.f'
                  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 16M;
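
                  Following the sizing advice above, a dictionary-managed equivalent could look like this (the path and sizes are illustrative assumptions only):

                  create tablespace TEMP_DICT
                  datafile '/u01/oradata/temp01.dbf' size 100M   --assumed path and size
                  default storage (initial 2M next 2M pctincrease 0)   --pctincrease 0 keeps extents equal-sized
                  temporary;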
                5. Create Tablespaces
                6. create tablespace DATA
                  datafile '/u01/app/oradata/OSMOSIS/data01.dbf' size 200M
                  autoextend ON
                  next 10M
                  maxsize 250M;

                  These parameters specify the size and location of the datafile. The autoextend argument means that when the datafile is full, it will extend itself by 10M (the next argument) until 250M are reached (the maxsize argument).

                  It is also possible to add a datafile to an existing tablespace and to enable the autoextend feature on it:

                  alter tablespace DATA
                  add datafile '/u10/app/oradata/OSMOSIS/data02.dbf' size 50M
                  autoextend ON
                  maxsize 300M;

                  NOTE: The view DBA_DATA_FILES holds information on the mapping between datafiles and tablespaces.
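
                  As a quick sketch of that mapping:

                  select tablespace_name, file_name, bytes   --one row per datafile
                  from dba_data_files
                  order by tablespace_name;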

                7. Change the size of tablespaces
                8.  alter database datafile '/u02/oradata/OSMOSIS/data02.dbf'
                  resize 80M;
                9. Allocate space for temporary segments
                10. alter tablespace TEMP add tempfile '/u01/oradata/tmp01.dbf' size 100M;   --a size clause is required for a new tempfile; 100M is illustrative
                11. Change the status of your Tablespaces
                12. alter tablespace DATA offline
                  alter tablespace DATA online
                  alter tablespace TEMP permanent
                  alter tablespace DATA_TEMP temporary
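
                  To verify the result, the status and contents of every tablespace can be read from DBA_TABLESPACES:

                  select tablespace_name, status, contents   --ONLINE/OFFLINE and PERMANENT/TEMPORARY
                  from dba_tablespaces;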
                13. Change the storage settings of Tablespaces. (Done directly through the datafile)
                14. alter database
                  datafile '/u02/app/oradata/OSMOSIS/data01.dbf'
                  autoextend ON
                  maxsize 300M;
                15. Relocate Tablespaces.
                16. This can be done either through the alter database or alter tablespace command.

                  With alter database

                      1. Shutdown the database.
                      2. Move the file with an OS command (mv)
                      3. startup mount the database.
                      4. Issue the command
                      5.  alter database rename file
                        '/u01/oradata/OSMOSIS/data01.dbf' to --Old location of datafile
                        '/u04/oradata/OSMOSIS/data01.dbf'; --New location of datafile
                      6. Open the database: alter database open

                  With alter tablespace

                      1. Take the tablespace offline: alter tablespace DATA offline
                      2. Move the datafile via the OS (mv)
                      3. Issue the command
                      4.  alter tablespace DATA rename datafile 
                        '/u01/oradata/OSMOSIS/data01.dbf' to --Old location of datafile
                        '/u04/oradata/OSMOSIS/data01.dbf'; --New location of datafile
                      5. Bring the tablespace back online: alter tablespace DATA online

              4. Storage Structure and Relationships
                1. Describe the logical structure of the database
                2. Each time an object is generated (Table or Index), the object is placed in a "Tablespace", which will be either the one specified at object creation or the default tablespace of the user generating the object. At this point a "segment" is generated that will be the placeholder for the object's data. The space assigned to this segment is not released until it is manually shrunk or truncated.

                  Each segment is divided into "extents", which are contiguous "oracle blocks". Once these extents can no longer hold new information, the segment will acquire another "extent". This process continues until there is no more free space in the "datafile" ("Tablespace") or until the maximum number of extents per segment is reached. If a segment is composed of several extents, there is no guarantee that these extents are contiguous.
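
                  This growth can be watched per segment; as a sketch (the owner SCOTT is only an assumed example):

                  select segment_name, extent_id, bytes   --one row per allocated extent
                  from dba_extents
                  where owner = 'SCOTT'
                  order by segment_name, extent_id;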

                3. List the Segment Types and their Uses
                  1. TABLE: Stores the rows of data associated with tables or clusters. Unless very large, a properly sized table will have only 1 extent. Once a data segment acquires an extent, it keeps that extent until the segment is either dropped or truncated.
                  2. INDEX : Holds the data of the indexes created on tables. They are subject to the same space problems as Table segments. Their parameters can be changed via alter index .... rebuild
                  3. ROLLBACK: Rollback segments are involved in every transaction in the database. They allow the database to maintain read consistency between multiple transactions. Ideal rollback segments have multiple evenly sized extents that add up to the optimal size. Each extent should be large enough to handle all the data from a single transaction. Rollback segments can dynamically shrink back to a specified size, or they can be manually shrunk; the optimal parameter, set at creation, controls this behavior.
                  4. TEMPORARY: These store temporary data during sort operations (such as large queries, index creations, and unions).
                  5. PARTITION: Similar to table segments
                  6. CLUSTER: Similar to table segments
                4. List the key words that control block space usage
                5. When a segment is created (create table, create index, etc.), the size of the segment, and thus its block space usage, is determined by various parameters (a combined sketch follows the list below):

                  1. initial : Determines the size of the first "extent".
                  2. next : Determines the size of the "extents" allocated after the initial extent is full.
                  3. pctincrease : Factor by which the "next" extent sizes grow geometrically.
                  4. maxextents : Maximum number of extents for the segment.
                  5. minextents : Minimum number of extents for the segment.
                  6. pctfree : Reserves a percentage of space in each block that will only be used for updates to rows already in the block.
                  7. pctused : Determines when a used block is re-added to the list of blocks into which rows can be inserted. E.g. if a table has a pctfree of 20 and a pctused of 50: when rows are inserted, Oracle keeps 20% of each block free. Once the 80% mark is reached, even if you then begin to delete rows, Oracle will not insert into that block again until its used space falls below 50%. By default, pctused is 40.

                  If these parameters are not specified at segment creation, the database uses the default parameters, which are visible in the views DBA_TABLESPACES and USER_TABLESPACES.
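
                  As a combined sketch of these keywords (the table name, columns and values are illustrative assumptions only):

                  create table emp_history (empno number, change_date date)
                  pctfree 20   --keep 20% of each block free for updates
                  pctused 50   --blocks accept inserts again only below 50% usage
                  storage (initial 1M next 1M pctincrease 0
                           minextents 1 maxextents 50)
                  tablespace DATA;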

                6. Obtain information about storage structures from the data dictionary.
                  1. DBA_TABLESPACES : Lists the parameters of all tablespaces
                  2. USER_TABLESPACES : Lists the tablespaces accessible to the current user
                  3. DBA_TABLES :Self-Explanatory
                  4. DBA_INDEXES :Self-Explanatory
                  5. DBA_CLUSTERS :Self-Explanatory
                  6. DBA_SEGMENTS :Self-Explanatory
                  7. DBA_SEQUENCES :Self-Explanatory
                  8. DBA_OBJECTS :Self-Explanatory
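
                  For instance, a sketch listing each segment with its type and extent count (the owner SCOTT is an assumed example):

                  select segment_name, segment_type, tablespace_name, extents
                  from dba_segments
                  where owner = 'SCOTT';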
                7. List the criteria for separating segments
                8. The data segments are separated on the basis of I/O access patterns, number of updates, size of the rows in each segment, and other factors. As a good rule of thumb, TABLE and INDEX extents should be placed in different tablespaces because their I/O is concurrent. Rollback segments, due to their dynamic nature, are stored in a separate tablespace to avoid fragmentation. A short sketch of this rule follows.
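
                  A minimal sketch of that rule of thumb (the tablespace INDX and all object names are assumptions):

                  create table emp (empno number, ename varchar2(30))
                  tablespace DATA;   --data segment in the DATA tablespace

                  create index emp_idx on emp(empno)
                  tablespace INDX;   --index segment on a separate tablespace/disk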

