Oracle RAC Administration



1. Understanding Oracle RAC Administration

Oracle Real Application Clusters (RAC) allows multiple database instances, running on separate servers (nodes), to access a single, shared database. This configuration provides unparalleled high availability and scalability, making it ideal for enterprise-level applications. 

Key Benefits of Oracle RAC:

  • High Availability: In case of a node or instance failure, other active instances immediately take over the workload, ensuring continuous database operations with minimal downtime.
  • Scalability: You can add more nodes (servers) to the cluster as your workload grows, distributing the load across multiple instances and increasing overall processing capacity without application changes.
  • Workload Management: RAC enables efficient distribution of client connections and workloads across instances using features like Services, Load Balancing, and Connection Failover.
  • Flexibility: It supports rolling upgrades and patching, allowing maintenance on individual nodes without bringing down the entire database.

This guide focuses on the post-installation administration of an Oracle RAC environment, covering:

  • The core architectural components and how they interact.
  • Essential command-line tools for managing the cluster, instances, and storage.
  • SQL queries for monitoring performance and health.
  • Practical lab steps to perform common administrative tasks.

2. Architecture and Flow: The Pillars of Oracle RAC

Understanding the underlying architecture is paramount for effective RAC administration. Oracle RAC is built upon a foundation of Oracle Clusterware (now part of Oracle Grid Infrastructure) and relies heavily on shared storage and a high-speed interconnect.

 

2.1. Core Components

1. Oracle Clusterware (Grid Infrastructure): This is the foundation of any Oracle RAC installation. It provides the clustering capabilities that allow multiple servers to act as a single system.

Cluster Ready Services (CRS): Manages the various cluster resources (databases, instances, services, VIPs, ASM). It ensures that resources are always running on their designated nodes or fail over to other nodes if a failure occurs.

Cluster Synchronization Services (CSS): Manages node membership within the cluster. It detects node failures and notifies other components.

Event Management (EVM): Publishes events that occur in the cluster, such as resource state changes, node status, etc.

Oracle Cluster Registry (OCR): A vital shared file that stores the configuration information for the cluster and all cluster-managed resources (e.g., database names, instance names, service definitions, VIPs). It's typically stored on shared storage (preferably ASM) and multiplexed for redundancy.

Voting Disk: Also a shared file, it's used by CSS to determine node membership and ensure cluster integrity (quorum). It prevents "split-brain" scenarios where nodes lose communication but continue operating independently, potentially corrupting data. Like the OCR, it's stored on shared storage (preferably ASM) and multiplexed.

Grid Plug and Play (GPnP): Simplifies the configuration and management of nodes within the cluster.

 

2. Shared Storage: All Oracle RAC instances in the cluster access the *same* set of datafiles, control files, and redo logs. This shared storage is critical.

Oracle Automatic Storage Management (ASM): The recommended and most common solution for RAC storage. ASM is a portable, high-performance, and robust volume manager and file system optimized for Oracle database files. It handles disk striping, mirroring, and rebalancing automatically.

Third-Party Storage Solutions: Network File System (NFS), Storage Area Network (SAN), or other cluster file systems can also be used, but ASM simplifies management and offers superior integration with Oracle databases.

 

3. Interconnect: This is a private, high-speed, low-latency network exclusively used for communication *between* the RAC nodes.

Purpose: Primarily used by Oracle's Global Cache Service (GCS) for Cache Fusion operations (transferring data blocks directly between instance caches) and Global Enqueue Service (GES) for global lock management.

Requirement: Redundancy (multiple network paths) and high bandwidth are crucial for optimal RAC performance.

 

4. Public Network: Used for client connections to the database and for external administration tools. Each node has at least one public IP address.

 

5. Virtual IP Addresses (VIPs): Each node in a RAC cluster has an associated VIP. When a node fails, its VIP automatically fails over to another active node in the cluster. This allows clients to quickly detect and reconnect to the database without waiting for TCP/IP timeouts, greatly enhancing application availability.

 

6. Single Client Access Name (SCAN): A single network name (and associated set of IP addresses) that provides a stable connection point for clients regardless of which nodes are up or down. SCAN simplifies client configuration and offers transparent load balancing and connection failover. It resolves to three IP addresses by default, managed by Clusterware.
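Because clients address the cluster through SCAN, a tnsnames.ora entry needs only the SCAN name, never individual node addresses. A minimal sketch of such an entry (the alias, hostname, and service name below are placeholders, not values from this guide):

```
APPDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = myscan.example.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = app_service)
    )
  )
```

If a node is added or removed, this entry does not change; the SCAN listeners route each connection to a surviving instance.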

 

7. Oracle Instances: Each node runs an independent Oracle instance, consisting of its own System Global Area (SGA), Background Processes (PMON, SMON, DBWn, LGWR, etc.), and private server processes. Crucially, all these instances *share* the same set of database files on the shared storage.

 

2.2. Key RAC Concepts

  • Cache Fusion: The cornerstone of RAC scalability and performance. Instead of writing a modified data block to disk and then having another instance read it from disk (which is slow), Cache Fusion directly transfers the data block from the cache (SGA) of one instance to the cache of another instance over the high-speed interconnect. This eliminates disk I/O for cache coherency.

Global Cache Service (GCS): Manages block access and ensures data coherency across all instances in the cluster. It tracks the location and state of every data block. 

Global Enqueue Service (GES): Manages global resources (locks) across the cluster, ensuring that multiple instances don't modify the same data concurrently in a conflicting way.

  • Services: An Oracle Service is an abstraction layer for an application workload. Instead of connecting to a specific instance, applications connect to a service. Services can be configured to prefer certain instances, fail over to others, and distribute workload. This is fundamental for managing high availability and workload balancing in RAC.
  • Client Connection Failover:

Transparent Application Failover (TAF): Allows an application to automatically reconnect to another active instance in the cluster if its current connection fails, often transparently to the end-user.

Fast Application Notification (FAN): A more advanced and proactive notification mechanism. When an instance or service goes down (or comes up), FAN events are published, allowing client connection pools to react immediately by removing stale connections and creating new ones to available instances, significantly reducing outage time.
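TAF can be declared on the service itself (via srvctl) or in the client connect descriptor. A hedged sketch of a client-side FAILOVER_MODE clause (all names are placeholders; tune RETRIES/DELAY to your environment):

```
APPDB_TAF =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = myscan.example.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = app_service)
      (FAILOVER_MODE =
        (TYPE = SELECT)(METHOD = BASIC)(RETRIES = 20)(DELAY = 5)
      )
    )
  )
```

TYPE=SELECT allows in-flight queries to resume on the surviving instance; METHOD=BASIC defers the backup connection until failover actually occurs.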

 

2.3. Data Flow Example

1. Client Connection: A client connects to the database using the SCAN listener. The SCAN listener directs the connection to an appropriate instance listener on one of the nodes, which then establishes a session with an instance. 

2. Data Access (Local): If the requested data block is already in the local instance's SGA (buffer cache), it's served immediately. 

3. Data Access (Remote - Cache Fusion): If the requested data block is not in the local instance's SGA but is in the SGA of another instance (e.g., modified by that instance), the requesting instance will contact the GCS. The GCS will direct the block to be transferred directly from the remote instance's SGA to the requesting instance's SGA over the interconnect. 

4. Data Access (Disk): If the block is not in any instance's SGA, it's read from the shared storage into the local instance's SGA. 

5. Failure Scenario: If Node 1 fails, its VIP fails over to Node 2, and the SCAN listener stops directing new connections to Node 1. Existing client connections using TAF/FAN automatically reconnect to available instances (e.g., on Node 2). Resources (databases, services) previously running on Node 1 are restarted on surviving nodes by CRS.

---
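The lookup order in steps 2–4 (local cache, then a remote cache via Cache Fusion, then disk) can be sketched as a toy model. This is plain illustrative Python, not Oracle internals; every name here is ours:

```python
# Toy model of the RAC block-lookup order:
# local cache -> remote cache (Cache Fusion via GCS) -> shared disk.

def fetch_block(block_id, local_cache, remote_caches, disk):
    if block_id in local_cache:                       # step 2: local hit
        return local_cache[block_id], "local cache"
    for cache in remote_caches:                       # step 3: GCS locates the holder
        if block_id in cache:
            local_cache[block_id] = cache[block_id]   # interconnect transfer
            return cache[block_id], "cache fusion"
    local_cache[block_id] = disk[block_id]            # step 4: physical read
    return disk[block_id], "disk"

node1 = {"blk1": "data1"}
node2 = {"blk2": "data2"}
disk = {"blk1": "data1", "blk2": "data2", "blk3": "data3"}

print(fetch_block("blk1", node1, [node2], disk)[1])   # local cache
print(fetch_block("blk2", node1, [node2], disk)[1])   # cache fusion
print(fetch_block("blk3", node1, [node2], disk)[1])   # disk
```

Note that after the "cache fusion" and "disk" fetches the block is now cached locally, mirroring why repeated access to the same block gets cheaper.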

 

3. Scripts: Essential Commands for RAC Administration

This section provides critical commands for daily RAC administration using crsctl, srvctl, asmcmd, and SQL. All commands should be executed as the grid user for Clusterware and ASM tasks, and the oracle user for database-specific tasks, unless otherwise specified.

 

3.1. Oracle Clusterware Management (crsctl)

 

crsctl is the primary command-line tool for managing Oracle Clusterware resources and the cluster itself.

 

  • Check Cluster Status:

crsctl stat res -t # Tabular format of all resources
    crsctl status cluster -all # Status of all nodes in the cluster

  • Check Individual Resource Status (e.g., SCAN Listener, Database VIP):

crsctl stat res ora.scan1.listener -t
    crsctl stat res ora.racnode1.vip -t
crsctl stat res -t | grep -i <pattern> # e.g., grep -i dbname

  • Stop/Start Cluster (entire stack on a node):

# As root user
    crsctl stop crs # Stops the entire cluster stack on the local node
    crsctl start crs # Starts the entire cluster stack on the local node

  • Stop/Start Clusterware Managed Resources (e.g., database, listener, ASM instance):

# Stop a specific database resource (all instances); srvctl is usually preferred for this
    crsctl stop resource ora.<db_name>.db

# Start a specific database resource (all instances)
    crsctl start resource ora.<db_name>.db

# Stop/Start a specific listener (e.g., SCAN Listener)
    crsctl stop resource ora.scan1.listener
    crsctl start resource ora.scan1.listener

# Stop/Start the ASM instance on a specific node (e.g., +ASM1)
    crsctl stop resource ora.asm -n <node_name>
    crsctl start resource ora.asm -n <node_name>

  • Enable/Disable Resources (for auto-start on boot):

crsctl modify resource ora.<db_name>.db -attr "AUTO_START=always" -unsupported # Enable
    crsctl modify resource ora.<db_name>.db -attr "AUTO_START=never" -unsupported # Disable

  • Check OCR/Voting Disk Locations:

ocrcheck
    crsctl query css votedisk

  • Verify Cluster Health (cluvfy, execute as the grid user):

cluvfy stage -post crsinst -all # Post-installation check for Clusterware
    cluvfy comp nodecon -n all -i eth0 # Check network connectivity between nodes
    cluvfy comp ocr # Check OCR integrity
    cluvfy comp health # General health check

  • Check Interconnect Configuration (oifcfg, execute as the grid user):

oifcfg getif # Shows private and public network interfaces

 

3.2. Oracle Database and Services Management (srvctl)

 

srvctl is used to manage Oracle databases, instances, listeners, and services within the RAC environment.

 

  • Check Database Status:

srvctl status database -d <db_name>

  • Check Instance Status:

srvctl status instance -d <db_name> -i <instance_name>

  • Check All Services for a Database:

srvctl status service -d <db_name>

  • Start/Stop Database:

srvctl start database -d <db_name>
    srvctl stop database -d <db_name> -o immediate # -o normal, -o abort

  • Start/Stop Specific Instance:

srvctl start instance -d <db_name> -i <instance_name>
    srvctl stop instance -d <db_name> -i <instance_name> -o immediate

  • Start/Stop a Specific Service:

srvctl start service -d <db_name> -s <service_name>
    srvctl stop service -d <db_name> -s <service_name>

  • Relocate a Service (manual failover):

srvctl relocate service -d <db_name> -s <service_name> -i <current_instance> -t <target_instance>

  • Add a New Service:

srvctl add service -d <db_name> -s <service_name> -r <preferred_instances> -a <available_instances> -P BASIC -e SESSION -m BASIC -w 10 -j SHORT
    # -r: preferred instances (comma-separated)
    # -a: available instances (comma-separated)
    # -P: TAF policy (NONE, BASIC, or PRECONNECT)
    # -e: Failover Type (NONE, SESSION, SELECT, TRANSACTION)
    # -m: Failover Method (NONE, BASIC, PRECONNECT)
    # -w: Failover delay between reconnect attempts (seconds)
    # -j: Connection load balancing goal (SHORT or LONG)
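Because this flag list is easy to get wrong, it can help to assemble the command programmatically before handing it to subprocess (as in the Python section later). A small hypothetical helper; the function name and the fixed TAF defaults are ours, not Oracle's:

```python
def build_add_service_cmd(db, service, preferred, available):
    """Assemble an 'srvctl add service' command as an argv list (hypothetical helper)."""
    return ["srvctl", "add", "service",
            "-d", db, "-s", service,
            "-r", ",".join(preferred),   # preferred instances
            "-a", ",".join(available),   # available instances
            "-P", "BASIC", "-e", "SESSION", "-m", "BASIC"]

cmd = build_add_service_cmd("orcl", "app_service", ["orcl1"], ["orcl2"])
print(" ".join(cmd))
```

Building an argv list (rather than a shell string) also avoids quoting bugs when the values come from configuration files.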

  • Modify an Existing Service:

srvctl modify service -d <db_name> -s <service_name> -i <preferred_instances> -a <available_instances>

  • Remove a Service:

srvctl remove service -d <db_name> -s <service_name>

  • Check VIP/Listeners:

srvctl status vip -n <node_name>
    srvctl status listener -l LISTENER # Node listener
    srvctl status scan_listener # SCAN listeners

 

3.3. ASM Management (asmcmd)

 

asmcmd is the command-line utility for managing Oracle Automatic Storage Management (ASM).

 

  • List Disk Groups and Usage:

asmcmd lsdg

  • Navigate ASM Directories (like a file system):

asmcmd ls
    asmcmd cd +DATA/DB_NAME/DATAFILE/
    asmcmd ls -l

  • Check Disk Status within a Disk Group:

asmcmd lsdsk -k # Lists all disks known to ASM

  • Add a Disk to a Disk Group (Example: Assuming '/dev/newdisk' is provisioned). Note that disks are added and dropped with SQL against the ASM instance (connect with sqlplus / as sysasm), not with an asmcmd subcommand:

ALTER DISKGROUP DATA ADD DISK '/dev/newdisk' NAME DISK_NEW;

  • Drop a Disk from a Disk Group:

ALTER DISKGROUP DATA DROP DISK DISK_TO_DROP;

  • Initiate/Monitor Rebalance Operation:

# Rebalance starts automatically when disks are added/dropped.
    # To manually rebalance or change power (SQL, against the ASM instance):
    ALTER DISKGROUP DATA REBALANCE POWER 5; -- 0 stops the rebalance; higher values rebalance faster

  • View ASM Alert Log:

# There is no asmcmd subcommand for the alert log; tail it directly, e.g.:
    tail -f $ORACLE_BASE/diag/asm/+asm/+ASM1/trace/alert_+ASM1.log
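The lsdg output shown earlier is whitespace-separated text, which is easy to post-process when scripting capacity checks. A sketch that parses a sample listing into dictionaries; the sample values are made up, and real output may carry extra columns in some versions:

```python
# Sample 'asmcmd lsdg' output (made-up numbers; column set may vary by version).
LSDG_SAMPLE = """State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N         512   4096  1048576     20480    15360                0           15360              0             N  DATA/
MOUNTED  EXTERN  N         512   4096  1048576     10240     8192                0            8192              0             N  FRA/
"""

def parse_lsdg(text):
    """Turn lsdg's header + rows into a list of dicts keyed by column name."""
    lines = [l for l in text.splitlines() if l.strip()]
    headers = lines[0].split()
    return [dict(zip(headers, l.split())) for l in lines[1:]]

for dg in parse_lsdg(LSDG_SAMPLE):
    pct_free = 100 * int(dg["Free_MB"]) / int(dg["Total_MB"])
    print(f"{dg['Name']:6s} {dg['State']:8s} free={pct_free:.0f}%")
```

A parser like this is a convenient building block for a "warn when any disk group is below N% free" cron job.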

 

3.4. SQL for Monitoring and Diagnostics (Connect as SYSDBA)

 

These queries provide insight into RAC specific behavior and performance.

 

  • Check All Instances in the Cluster:

SELECT * FROM GV$INSTANCE;

  • Monitor Cache Fusion Statistics:

SELECT inst_id, name, value FROM GV$SYSSTAT WHERE name LIKE '%global cache%' OR name LIKE 'gc%';
    SELECT inst_id, owner, object_name, value FROM GV$SEGMENT_STATISTICS WHERE statistic_name = 'gc cr blocks received' ORDER BY value DESC;
    -- High values indicate frequent inter-instance block transfers.
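To put these counters in context, one rough derived metric is the share of logical reads that involved an interconnect transfer. A sketch of the arithmetic with made-up sample numbers (statistic names follow the GV$SYSSTAT convention, but treat the thresholds as illustrative):

```python
# Made-up sample values for two GV$SYSSTAT-style counters.
stats = {
    "gc cr blocks received": 12000,
    "gc current blocks received": 8000,
    "session logical reads": 1_000_000,
}

# Blocks that arrived over the interconnect, as a share of all logical reads.
remote = stats["gc cr blocks received"] + stats["gc current blocks received"]
pct = 100 * remote / stats["session logical reads"]
print(f"{pct:.1f}% of logical reads involved an interconnect transfer")  # 2.0%
```

A rising value of this ratio over time suggests growing inter-instance contention for the same working set.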

  • Identify Active Sessions Across All Instances:

SELECT inst_id, sid, serial#, username, program, status, machine FROM GV$SESSION WHERE username IS NOT NULL;

  • Check Service Status and Mapping to Instances:

SELECT inst_id, name, network_name, goal, clb_goal FROM GV$ACTIVE_SERVICES;

  • Monitor Global Resource Activity (Locks/Enqueues):

SELECT inst_id, name, value FROM GV$GES_STATISTICS;
    SELECT inst_id, eq_type, total_req#, total_wait#, cum_wait_time FROM GV$ENQUEUE_STAT;

  • Identify Global Contention:

-- GV$ACTIVE_SESSION_HISTORY can also be filtered on event LIKE 'gc%' for historical samples.
    SELECT inst_id, event, total_waits, time_waited FROM GV$SYSTEM_EVENT WHERE event LIKE 'gc%' ORDER BY time_waited DESC;

  • Check ASM Disk Group Usage:

SELECT name, state, type, total_mb, free_mb, required_mirror_free_mb, usable_file_mb FROM V$ASM_DISKGROUP;

  • Check Redo Log Archiving Status (Global):

SELECT inst_id, dest_name, status, archiver FROM GV$ARCHIVE_DEST WHERE status <> 'INACTIVE';

 

3.5. Python for Automation (Example)

 

Python can be used to automate repetitive RAC administration tasks by wrapping subprocess calls to crsctl, srvctl, asmcmd, or sqlplus.

import subprocess
import os

def run_command(command, user='grid'):
    """Helper to run shell commands with the environment of a specific user."""
    # Ensure environment variables are set for the grid/oracle user.
    # A real script would handle environment setup (and sudo) more robustly.
    env = os.environ.copy()
    if user == 'grid':
        env['ORACLE_HOME'] = '/u01/app/19.0.0/grid'  # Adjust to your Grid home
        env['PATH'] = f"{env['ORACLE_HOME']}/bin:{env['PATH']}"
        # Some crsctl operations must run as root; in production, use sudo.
    elif user == 'oracle':
        env['ORACLE_HOME'] = '/u01/app/oracle/product/19.0.0/dbhome_1'  # Adjust to your DB home
        env['PATH'] = f"{env['ORACLE_HOME']}/bin:{env['PATH']}"
        env['ORACLE_SID'] = 'yourdb_1'  # Example SID for database operations

    print(f"Running command as {user}: {' '.join(command)}")
    try:
        result = subprocess.run(command, capture_output=True, text=True, check=True, env=env)
        print(result.stdout)
        if result.stderr:
            print(f"STDERR: {result.stderr}")
        return result.stdout
    except subprocess.CalledProcessError as e:
        print(f"Command failed with error code {e.returncode}")
        print(f"STDOUT: {e.stdout}")
        print(f"STDERR: {e.stderr}")
        return None

def get_crs_resource_status(resource_name):
    """Gets the status of a specific CRS resource."""
    cmd = ['crsctl', 'stat', 'res', resource_name, '-t']
    return run_command(cmd, user='grid')

def start_db_service(db_name, service_name):
    """Starts a specific database service."""
    cmd = ['srvctl', 'start', 'service', '-d', db_name, '-s', service_name]
    return run_command(cmd, user='oracle')  # srvctl is typically run as the oracle user

def get_asm_diskgroup_usage():
    """Gets ASM disk group usage."""
    cmd = ['asmcmd', 'lsdg']
    return run_command(cmd, user='grid')

if __name__ == "__main__":
    print("\n--- CRS Resource Status ---")
    get_crs_resource_status("ora.scan1.listener")
    get_crs_resource_status("ora.yourdb.db")  # Replace yourdb with your actual DB name

    print("\n--- ASM Diskgroup Usage ---")
    get_asm_diskgroup_usage()

    # Example: Start a service (uncomment and replace with actual names)
    # print("\n--- Starting a Database Service ---")
    # start_db_service("yourdb", "your_app_service")
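The script above prints raw tool output; for making decisions in code it is easier to parse the key=value form printed by `crsctl stat res <resource>` (without -t). A sketch against a sample of that output (the sample text is approximated, not captured from a live cluster):

```python
# Approximate key=value output of: crsctl stat res ora.scan1.listener
RES_SAMPLE = """NAME=ora.scan1.listener
TYPE=ora.scan_listener.type
TARGET=ONLINE
STATE=ONLINE on racnode1
"""

def parse_stat_res(text):
    """Parse crsctl's key=value resource listing into a dict."""
    out = {}
    for line in text.splitlines():
        if "=" in line:
            key, value = line.split("=", 1)
            out[key] = value
    return out

res = parse_stat_res(RES_SAMPLE)
print(res["STATE"])          # e.g. "ONLINE on racnode1"
print(res["STATE"].startswith("ONLINE"))
```

A monitoring wrapper can then alert whenever STATE does not start with ONLINE, or when STATE and TARGET disagree.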

---



4. Lab Steps: Practical RAC Administration

These lab steps assume you have access to a working Oracle RAC environment (e.g., a two-node RAC setup on virtual machines). You'll typically perform tasks as the grid user (for Clusterware/ASM) or the oracle user (for database).

Environment Setup (prerequisite, not covered in detail here):

  • 2 (or more) Linux VMs configured for Oracle Grid Infrastructure and RAC Database.
  • Shared storage configured (e.g., ASM with several disk groups: DATA, FRA).
  • Oracle Grid Infrastructure and Oracle Database software installed and configured.
  • Users: grid (owning the Grid Infrastructure home), oracle (owning the Database home), root.

---

Lab 1: Initial Environment Health Check

 

Goal: Verify the overall health and status of your RAC cluster, its components, and the database.

 

1. Log in to Node 1 (as the grid user):

ssh grid@racnode1
    . oraenv # Select your Grid Infrastructure Home (e.g., /u01/app/19.0.0/grid)

2. Check Clusterware Status:

crsctl stat res -t # This provides a summary of all cluster resources
    crsctl status cluster -all # Shows the status of all nodes in the cluster

*Expected Output:* All resources should be ONLINE on their respective nodes. All nodes should be Online.

3. Check ASM Disk Group Status:

asmcmd lsdg

*Expected Output:* All configured disk groups (e.g., DATA, FRA) should be MOUNTED and show their Total_MB and Free_MB.

4. Log in to Node 1 (as the oracle user):

ssh oracle@racnode1
    . oraenv # Select your Database Home (e.g., /u01/app/oracle/product/19.0.0/dbhome_1)

5. Check Database and Instance Status:

srvctl status database -d <db_name> # Replace <db_name> with your actual database name (e.g., ORCL)
    srvctl status instance -d <db_name> -i <db_name>1
    srvctl status instance -d <db_name> -i <db_name>2

*Expected Output:* The database and both instances should be running.

6. Check Services Status:

srvctl status service -d <db_name>

*Expected Output:* All defined services should be running on their preferred instances.

---

 

Lab 2: Managing Database Instances

 

Goal: Practice stopping and starting individual instances and observing their impact.

 

1. Connect to Node 1 (as the oracle user, Database Home set):

ssh oracle@racnode1
    . oraenv

2. Stop Instance 2:

srvctl stop instance -d <db_name> -i <db_name>2 -o immediate

*Observation:* Note the output indicating the instance is stopped.

3. Verify Instance Status:

srvctl status instance -d <db_name> -i <db_name>1
    srvctl status instance -d <db_name> -i <db_name>2
    srvctl status database -d <db_name>

*Expected Output:* <db_name>1 running, <db_name>2 stopped. The database status should still be running (with one instance down).

4. Connect to the Database (via SQL*Plus from Node 1):

sqlplus sys/<password>@<scan_name>:1521/<service_name> as sysdba
    -- Example: sqlplus sys/oracle@myscan.example.com:1521/myservice as sysdba

*Observation:* You should connect successfully to <db_name>1.

5. Identify Current Instance:

SELECT instance_name, host_name FROM V$INSTANCE;

*Expected Output:* Shows <db_name>1 on racnode1.

6. Start Instance 2:

srvctl start instance -d <db_name> -i <db_name>2

7. Verify Instance Status again:

srvctl status instance -d <db_name> -i <db_name>1
    srvctl status instance -d <db_name> -i <db_name>2

*Expected Output:* Both instances running.

---

 

Lab 3: Managing Services and Workload Relocation

 

Goal: Understand how to manage services, relocate them, and observe workload distribution.

 

1. Connect to Node 1 (as the oracle user, Database Home set):

ssh oracle@racnode1
    . oraenv

2. Create a New Service:

srvctl add service -d <db_name> -s APP_SERVICE -r <db_name>1 -a <db_name>2 -P BASIC -e SESSION -m BASIC -w 10 -j SHORT
    srvctl start service -d <db_name> -s APP_SERVICE
    srvctl status service -d <db_name> -s APP_SERVICE

*Observation:* APP_SERVICE should be running on <db_name>1 (the preferred instance).

3. Connect Clients to the New Service: 

* Open two separate terminal windows (or use a simple Python/Java client).

* From each, connect to the database using the new service:

sqlplus appuser/appuser@<scan_name>:1521/APP_SERVICE
        -- Example: sqlplus appuser/appuser@myscan.example.com:1521/APP_SERVICE

* Inside each sqlplus session, confirm the instance:

SELECT instance_name FROM V$INSTANCE;

*Expected:* Both should connect to <db_name>1 initially.

4. Relocate the Service (Manual Failover):

srvctl relocate service -d <db_name> -s APP_SERVICE -i <db_name>1 -t <db_name>2

*Observation:* The service should now be running on <db_name>2.

5. Check Client Connections (in the sqlplus sessions):

* Clients connected *before* the relocation might still be on <db_name>1 until their session ends or TAF kicks in (if configured). New connections should go to <db_name>2.

* From a *new* SQL*Plus session, connect to APP_SERVICE and verify it connects to <db_name>2.

6. Simulate Node Failure (for the preferred instance):

* Keep your APP_SERVICE client connections active.

* Log in to Node 1 (as the root user):

ssh root@racnode1
        crsctl stop crs # This will stop all Clusterware services on racnode1

Quickly observe your APP_SERVICE client sessions:

* If TAF/FAN is configured, they should automatically reconnect to <db_name>2. You might see a brief pause.

* If not, they will hang or error out.

* From Node 2 (as the oracle user):

srvctl status service -d <db_name> -s APP_SERVICE

*Expected Output:* APP_SERVICE should now be running on <db_name>2.

7. Bring Node 1 Back Online:

* Log in to Node 1 (as the root user):

crsctl start crs

Verify on Node 1 (as the grid user):

crsctl stat res -t

*Expected Output:* Node 1 and its resources should come back online. Note that APP_SERVICE does not fail back to <db_name>1 automatically by default; use srvctl relocate service again if you want it back on its preferred instance.

---

 

Lab 4: ASM Disk Group Management (Conceptual/Simulated)

 

Goal: Understand how to check ASM disk groups and prepare for adding/dropping disks. (Note: Actual disk manipulation requires careful planning and available physical/virtual disks).

 

1. Log in to Node 1 (as the grid user):

ssh grid@racnode1
    . oraenv # Set Grid Home

2. List Disk Groups and Usage:

asmcmd lsdg

3. List Disks within a Disk Group:

asmcmd ls -lt +DATA/

This lists the contents of the DATA disk group. You can also cd into it.

4. Identify ASM Disks:

asmcmd lsdsk -k

*Observation:* This lists all physical/logical disks known to ASM and their corresponding disk group.

5. Simulate Adding a Disk (conceptual steps):

* Provision a new disk: In a VM environment, you'd add a new virtual disk to both RAC nodes.

* Discover the disk: Ensure the operating system sees the new disk (e.g., fdisk -l, lsblk).

* Mark the disk for ASM: Use the oracleasm utility (if using ASMLib) or udev rules.

* Add the disk in ASM (SQL against the ASM instance, connected as sysasm):

ALTER DISKGROUP DATA ADD DISK '/dev/newdisk' NAME NEW_DISK_0003; -- Replace /dev/newdisk

*Observation:* ASM will automatically start a rebalance operation to distribute data across the new disk. You can monitor this in V$ASM_OPERATION or V$ASM_DISKGROUP.

---
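Rebalance progress can be read from V$ASM_OPERATION, whose SOFAR, EST_WORK, and EST_MINUTES columns track allocation units processed versus estimated. A sketch computing percent done from one such row; the numbers are made up:

```python
# A V$ASM_OPERATION-style row (column names mirror the real view; values are samples).
op = {
    "OPERATION": "REBAL",
    "STATE": "RUN",
    "SOFAR": 7500,        # allocation units done so far
    "EST_WORK": 10000,    # estimated total allocation units
    "EST_MINUTES": 4,     # ASM's own time-remaining estimate
}

pct = 100 * op["SOFAR"] / op["EST_WORK"]
print(f"{op['OPERATION']} {pct:.0f}% done, ~{op['EST_MINUTES']} min remaining")
```

When the rebalance completes, the row disappears from V$ASM_OPERATION, so "no rows" is the normal steady state.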

 

Lab 5: Monitoring and Diagnostics

 

Goal: Use SQL and shell commands to gather diagnostic information and monitor RAC-specific events.

 

1. Connect to Node 1 (as the oracle user, Database Home set):

ssh oracle@racnode1
    . oraenv
    sqlplus / as sysdba

2. Monitor Global Cache Statistics:

SELECT inst_id, name, value FROM GV$SYSSTAT WHERE name LIKE '%global cache%';

*Analysis:* Look for global cache gets, global cache cr blocks served, and global cache current blocks served. High values indicate significant inter-instance communication.

3. Identify Sessions and Their Current Instance:

SELECT inst_id, sid, serial#, username, program, status, machine FROM GV$SESSION WHERE username IS NOT NULL;

*Analysis:* See how sessions are distributed across inst_id 1 and 2.

4. Check for Global Contention Events:

SELECT inst_id, event, total_waits, time_waited FROM GV$SYSTEM_EVENT WHERE event LIKE 'gc%' AND total_waits > 0 ORDER BY time_waited DESC;

*Analysis:* Focus on gc cr block busy and gc current block busy. High wait times for these events can indicate hot blocks or application design issues leading to contention.
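TIME_WAITED in GV$SYSTEM_EVENT is reported in centiseconds, so the average wait per event in milliseconds is time_waited * 10 / total_waits. A quick sketch of that arithmetic with made-up numbers:

```python
# Sample GV$SYSTEM_EVENT-style rows (made-up values; time_waited is in centiseconds).
rows = [
    {"event": "gc cr block busy", "total_waits": 5000, "time_waited": 2500},
    {"event": "gc current block busy", "total_waits": 2000, "time_waited": 400},
]

for r in rows:
    avg_ms = r["time_waited"] * 10 / r["total_waits"]   # cs -> ms per wait
    print(f"{r['event']}: avg {avg_ms:.1f} ms")
```

Average wait time per event is often more telling than raw totals: a huge total_waits with sub-millisecond averages is usually benign, while a modest count with multi-millisecond averages points at contention.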

5. Review Alert Logs (for both instances): 

* On Node 1:

tail -f $ORACLE_BASE/diag/rdbms/<db_name>/<db_name>1/trace/alert_<db_name>1.log

* On Node 2:

tail -f $ORACLE_BASE/diag/rdbms/<db_name>/<db_name>2/trace/alert_<db_name>2.log

*Analysis:* Look for ORA- errors, instance startups/shutdowns, resource failures, and background process issues.

6. Review Clusterware Logs (as the grid user, on any node):

ssh grid@racnode1
    . oraenv # Grid Home
    tail -f $ORACLE_BASE/diag/crs/<hostname>/crs/trace/crsd.trc # CRS Daemon trace
    tail -f $ORACLE_BASE/diag/crs/<hostname>/crs/trace/ocssd.trc # CSS Daemon trace

*Analysis:* Crucial for diagnosing node evictions, resource start/stop failures, and interconnect issues.

---

This comprehensive guide should provide a strong foundation for managing Oracle RAC environments. Remember that hands-on practice is key to mastering these concepts and tools. Happy administering!

 


