Troubleshooting Network Issues in Oracle RAC 19c – Step-by-Step Guide
Oracle Real Application Clusters (RAC) 19c provides high availability and scalability, but network issues can impact node communication, cluster stability, and performance. Here’s a step-by-step approach to troubleshooting network problems in an Oracle RAC environment.

🛠️ Step 1: Verify the Network Configuration
Check if the Public, Private (Interconnect), and VIP addresses are correctly configured.

🔹 List all network interfaces on each node:

ifconfig -a # Linux

ip a show # Alternative Linux command

SELECT inst_id, instance_name, host_name FROM gv$instance;
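
From inside the database, GV$CLUSTER_INTERCONNECTS shows which interface and IP address each instance is actually using for the interconnect, which is a quick cross-check against the oifcfg configuration:

SELECT inst_id, name, ip_address, is_public, source
FROM gv$cluster_interconnects
ORDER BY inst_id;
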
🔹 Check SCAN (Single Client Access Name) settings:

srvctl config scan
srvctl config scan_listener

🔹 Ensure VIP addresses fail over properly:

srvctl status nodeapps
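
To drill into a single node's VIP (the node name below is a placeholder), srvctl can query and describe it directly:

srvctl status vip -node racnode1     # replace racnode1 with your node name
srvctl config vip -node racnode1     # shows the VIP name, address, and network for that node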

🛠️ Step 2: Check Cluster Interconnect Status
The private network (interconnect) should have low latency and no packet loss.

🔹 Verify private network settings:

oifcfg getif
crsctl stat res -t | grep -i interconnect
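
The Cluster Verification Utility can also validate node-to-node connectivity over both the public and private networks; a minimal sketch, assuming cluvfy (shipped in the Grid home) is on the PATH and that -n all resolves to every cluster node:

cluvfy comp nodecon -n all -verbose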

🔹 Test connectivity between nodes:

ping <remote_private_ip>        # test reachability over the interconnect
traceroute <remote_private_ip>  # should show a single hop on the private network

🔹 Check for packet loss:

ifstat -i eth1 # Check interconnect interface
netstat -i # Verify dropped packets
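
To quantify loss per peer, a short burst of pings to each remote interconnect address works well; the IP addresses below are placeholders for your private network:

# 100 pings, 0.2s apart, per remote private IP; the summary line reports the packet-loss percentage
for ip in 192.168.10.2 192.168.10.3; do
  echo "=== $ip ==="
  ping -c 100 -i 0.2 $ip | tail -2
done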

🛠️ Step 3: Validate SCAN and Listener Configuration
A misconfigured SCAN listener can cause connection failures.

🔹 Check SCAN listener status:

srvctl status scan_listener

🔹 Validate SCAN DNS resolution:

nslookup <scan_name>
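
The SCAN name should resolve to (typically) three addresses returned in round-robin order, so repeating the lookup makes the rotation visible; rac-scan.example.com below is a placeholder for your SCAN name:

for i in 1 2 3; do
  nslookup rac-scan.example.com | grep "Address:" | tail -3   # expect the same three IPs in rotating order
done
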
🔹 Manually test listener connectivity:

tnsping <scan_name>
lsnrctl status LISTENER_SCAN1    # run on the node currently hosting the SCAN listener

🔹 Restart SCAN listener (if needed):

srvctl stop scan_listener
srvctl start scan_listener
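
If only one of the SCAN listeners is misbehaving, it can also be relocated to another node instead of restarting them all; the scan number and node name below are placeholders:

srvctl relocate scan_listener -scannumber 1 -node racnode2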

🛠️ Step 4: Check CRS and GI Logs for Errors
🔹 View CRS logs for network-related failures:

grep -i "network" /u01/app/grid/diag/crs/<hostname>/crs/trace/crsd.trc

🔹 Check the Grid Infrastructure listener alert log:

cat /u01/app/grid/diag/tnslsnr/<hostname>/listener/alert/log.xml
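
Missed network heartbeats between nodes are reported by the CSS daemon; its trace file lives in the same trace directory as crsd.trc (paths and hostname below follow the same placeholders as above), and crsctl shows the configured misscount, i.e. how many seconds of missed heartbeats are tolerated before eviction:

grep -i "heartbeat" /u01/app/grid/diag/crs/<hostname>/crs/trace/ocssd.trc | tail -20
crsctl get css misscount    # default is 30 seconds on Linux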

🛠️ Step 5: Resolve Network Latency Issues
🔹 Check for Jumbo Frames support (recommended for RAC interconnects):

ifconfig eth1 mtu 9000    # sets the interconnect MTU to 9000; requires root, matching settings on all nodes, and switch support
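
Before relying on an MTU of 9000, confirm that jumbo frames actually pass end-to-end; 8972 bytes of ICMP payload plus 28 bytes of headers equals 9000, and the remote IP below is a placeholder:

ip link show eth1 | grep mtu            # confirm the local interface MTU
ping -M do -s 8972 -c 5 192.168.10.2    # fails with "message too long" if any hop does not support jumbo frames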

🔹 Ensure CPU and memory are not overloaded:
top
vmstat 5

🔹 Test TCP performance between nodes:

iperf -s                                   # start the iperf server on the remote node first
iperf -c <remote_private_ip> -i 1 -t 10    # then run the TCP client from the local node
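
Because the interconnect is sensitive to packet loss, a UDP run is also worth doing, since it reports loss and jitter directly; the address and bandwidth below are placeholders:

iperf -s -u                                           # UDP server on the remote node
iperf -u -c <remote_private_ip> -b 500M -i 1 -t 10    # UDP client; the report includes loss % and jitter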

💡 Key Takeaways
✔ Always validate SCAN, VIP, and interconnect settings first.
✔ Use ping, traceroute, netstat, and iperf to diagnose connectivity issues.
✔ Ensure correct DNS resolution for SCAN addresses.
✔ Monitor CRS logs for network-related errors.
✔ Optimize interconnect performance using Jumbo Frames and low-latency interfaces.
