A while ago, installing Oracle Clusterware 10.2.0.1 on RHEL5 or OEL5 ran into quite a few problems. The main reason is that when Oracle RAC 10.2.0.1 was released, RHEL5 did not yet exist; the current RedHat release at the time was RHEL4. The same problems also occur on SUSE Linux SLES10.
The problems appear at the very start of the installation and when running root.sh on the last node. There are three main issues:
Issue#1: To install 10gR2, you must first install the base release, which is 10.2.0.1. Because these OS versions are newer than the installer, you should use the following command to invoke it:
$ runInstaller -ignoreSysPrereqs // This will bypass the OS check //
Issue#2: At end of root.sh on the last node vipca will fail to run with the following error:
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps
/home/oracle/crs/oracle/product/10/crs/jdk/jre//bin/java: error while loading
shared libraries: libpthread.so.0: cannot open shared object file:
No such file or directory
Issue#3: After working around Issue#2 above, vipca will fail to run with the following error if the VIP IP’s are in a non-routable range [10.x.x.x, 172.(16-31).x.x or 192.168.x.x]:
# vipca
Error 0(Native: listNetInterfaces:[3])
[Error 0(Native: listNetInterfaces:[3])]
原因是這樣的:
These releases of the Linux kernel fix an old bug in Linux threading that Oracle worked around using LD_ASSUME_KERNEL settings in both vipca and srvctl. That workaround is no longer valid on OEL5, RHEL5, or SLES10, hence the failures.
Issue#1 is easy to resolve: simply skip the OS prerequisite check when running runInstaller, as shown above.
The workaround for Issue#2 is:
To workaround Issue#2 above, edit vipca (in the CRS bin directory on all nodes) to undo the setting of LD_ASSUME_KERNEL. After the IF statement around line 120 add an unset command to ensure LD_ASSUME_KERNEL is not set as follows:
if [ "$arch" = "i686" -o "$arch" = "ia64" -o "$arch" = "x86_64" ]
then
  LD_ASSUME_KERNEL=2.4.19
  export LD_ASSUME_KERNEL
fi
unset LD_ASSUME_KERNEL   <<<== Line to be added
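The edit has to be made in the CRS bin directory on every node. As a hypothetical sketch of how the insertion could be scripted, the following operates on a stand-in snippet that mimics the relevant lines of vipca (not the real script, whose path and surrounding content will differ):

```shell
# Create a stand-in snippet mimicking the LD_ASSUME_KERNEL block in vipca.
cat > vipca_snippet.sh <<'EOF'
if [ "$arch" = "i686" -o "$arch" = "ia64" -o "$arch" = "x86_64" ]
then
  LD_ASSUME_KERNEL=2.4.19
  export LD_ASSUME_KERNEL
fi
EOF

# Insert "unset LD_ASSUME_KERNEL" right after the closing "fi"
# (GNU sed syntax; the real vipca may contain more than one "fi",
# so on the real file the edit should be checked by hand).
sed -i '/^fi$/a unset LD_ASSUME_KERNEL' vipca_snippet.sh

cat vipca_snippet.sh
```

After the edit, the snippet ends with the unset line, so any LD_ASSUME_KERNEL exported inside the if-block is cleared before java is invoked.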
The workaround for Issue#3:
To work around Issue#3 (vipca failing on non-routable VIP IP ranges, whether run manually or during root.sh): if you still have the OUI window open, click OK and it will create the "oifcfg" information; cluvfy will then fail because vipca did not complete successfully. In that case, run vipca manually as described below, then return to the installer and cluvfy will succeed. Otherwise, you can configure the interfaces for RAC manually as root using the oifcfg command, as in the following example (run from any node):
/bin # ./oifcfg setif -global eth0/192.168.1.0:public
/bin # ./oifcfg setif -global eth1/10.10.10.0:cluster_interconnect
/bin # ./oifcfg getif
eth0 192.168.1.0 global public
eth1 10.10.10.0 global cluster_interconnect
Then run vipca manually to add the nodeapps resources.
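Since the note above says the LD_ASSUME_KERNEL workaround lives in both vipca and srvctl, it is worth checking that neither file still exports the variable without unsetting it. A hedged sketch of such a check follows; the crs_bin directory and its two files are stand-ins created for illustration, not a live $ORA_CRS_HOME/bin:

```shell
# Stand-in files mimicking the two affected CRS scripts:
# srvctl still has the old workaround, vipca has already been fixed.
mkdir -p crs_bin
printf 'LD_ASSUME_KERNEL=2.4.19\nexport LD_ASSUME_KERNEL\n' > crs_bin/srvctl
printf 'export LD_ASSUME_KERNEL\nunset LD_ASSUME_KERNEL\n'  > crs_bin/vipca

# Report any file that exports LD_ASSUME_KERNEL but never unsets it.
for f in crs_bin/vipca crs_bin/srvctl; do
  if grep -q 'export LD_ASSUME_KERNEL' "$f" && \
     ! grep -q 'unset LD_ASSUME_KERNEL' "$f"; then
    echo "$f still needs: unset LD_ASSUME_KERNEL"
  fi
done
```

On a real cluster the same loop would be pointed at the vipca and srvctl files in the CRS bin directory on each node.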
The details are recorded in Oracle note 414163.1.