10g RAC configuration on Solaris 10 [message #412907] Mon, 13 July 2009 06:30
kudur_kv
Messages: 75
Registered: February 2005
Member
Hi,

Can you kindly provide me with any links to documentation for configuring Oracle 10g RAC on Solaris 10?

TIA.
KV
Re: 10g RAC configuration on Solaris 10 [message #412910 is a reply to message #412907] Mon, 13 July 2009 06:42
Kamran Agayev
Messages: 145
Registered: February 2009
Location: Azerbaijan, Baku
Senior Member

This result is from a simple Google search:
http://www.oracle.com/technology/pub/articles/osullivan-rac.html
Re: 10g RAC configuration on Solaris 10 [message #412960 is a reply to message #412907] Mon, 13 July 2009 11:25
ebrian
Messages: 2794
Registered: April 2006
Senior Member
Oracle® Database Oracle Clusterware and Oracle Real Application Clusters Installation Guide 10g Release 2 (10.2) for Solaris Operating System
Re: 10g RAC configuration on Solaris 10 [message #412998 is a reply to message #412907] Mon, 13 July 2009 23:16
trantuananh24hg
Messages: 744
Registered: January 2007
Location: Ha Noi, Viet Nam
Senior Member
I'm planning to install Oracle RAC 10g on Solaris 10, with ASM.
Would you like to discuss these tasks and how to implement them?


Thank you!
Re: 10g RAC configuration on Solaris 10 [message #413004 is a reply to message #412998] Mon, 13 July 2009 23:48
babuknb
Messages: 1736
Registered: December 2005
Location: NJ
Senior Member

Quote:
I'm planning to install Oracle RAC 10g on Solaris 10, with ASM.
Would you like to discuss these tasks and how to implement them?


Yes, always welcome. As per ebrian's post, go through that link; then we can discuss.

Thanks
Re: 10g RAC configuration on Solaris 10 [message #413008 is a reply to message #413004] Mon, 13 July 2009 23:59
trantuananh24hg
Messages: 744
Registered: January 2007
Location: Ha Noi, Viet Nam
Senior Member
This is the first time I've actually installed RAC on server-class hardware (SPARC), and I wonder about this statement:

Note that OCFS is not required for 10g RAC. In fact, I never use OCFS for RAC systems

I do not want to use Solaris Cluster; I just want Oracle Clusterware & ASM. Could you explain why ASM and raw devices are the favorites for RAC? Are they faster than a file system? I remember Oracle saying something like "ASM means you no longer have to worry about I/O tuning". Is that right?

Going back to the article Build Your Own Oracle RAC Cluster on Solaris 10 and iSCSI, I saw that the author used Oracle Clusterware & ASM. A first question:

- Are CRS (the OCR) & the voting disk managed by ASM?


Thank you!


Re: 10g RAC configuration on Solaris 10 [message #413019 is a reply to message #413008] Tue, 14 July 2009 00:38
babuknb
Messages: 1736
Registered: December 2005
Location: NJ
Senior Member

Quote:
Raw files are unformatted disk partitions that can be used as one large file. Raw files have the benefit of no file system overhead, because they are unformatted partitions. Windows supports raw files, similar to UNIX. Using raw files for database or log files can have a slight performance gain. Windows 2003 has a disk manager (diskmgmt.msc) to manage all volumes. Windows 2003 also includes command line utilities (diskpart.exe) to manage volumes including raw. Oracle recommends that you use Windows volume mount points for addressing raw volumes.


Ref: http://download.oracle.com/docs/cd/B19306_01/win.102/b14305/architec.htm#i1005793


>>I remember Oracle saying something like "ASM means you no longer have to worry about I/O tuning". Is that right?

Yes, you're correct.


>>Are CRS (the OCR) & the voting disk managed by ASM?

No. The OCR & voting disk go on shared cluster storage (OCFS or raw devices under CRS), not in ASM.

ASM only manages your ASM disk groups.

Thanks

Re: 10g RAC configuration on Solaris 10 [message #413041 is a reply to message #413019] Tue, 14 July 2009 01:44
trantuananh24hg
Messages: 744
Registered: January 2007
Location: Ha Noi, Viet Nam
Senior Member
Thank you for your reply!

I just think I should write a brief step-by-step RAC installation on Solaris 10, with ASM. Could you discuss the parts I need with me?

Thank you!
Re: 10g RAC configuration on Solaris 10 [message #413078 is a reply to message #413041] Tue, 14 July 2009 05:32
babuknb
Messages: 1736
Registered: December 2005
Location: NJ
Senior Member


>>with ASM. Could you discuss the parts I need with me?

Please check the installation steps in the link ebrian posted. In case you require any help, you are always welcome here.

Thanks
Re: 10g RAC configuration on Solaris 10 [message #413089 is a reply to message #413008] Tue, 14 July 2009 06:03
ebrian
Messages: 2794
Registered: April 2006
Senior Member
trantuananh24hg wrote on Tue, 14 July 2009 00:59

- Are CRS (the OCR) & the voting disk managed by ASM?

The important distinction is that CRS (the OCR) & the voting disk can NOT be managed by ASM in 10g. The OCR and voting disk are supported on NFS via a certified NAS device, on shared raw partitions, or on a CFS.
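If you go the shared raw partition route, the devices also need the right ownership and permissions before the Clusterware installer will accept them. A minimal sketch, with hypothetical slice names standing in for your real shared devices (confirm the exact modes against the 10gR2 Clusterware install guide):

# run as root on each node; c3t1d0s4/s5 are placeholder slices
chown root:oinstall /dev/rdsk/c3t1d0s4     # OCR device
chmod 640 /dev/rdsk/c3t1d0s4
chown oracle:oinstall /dev/rdsk/c3t1d0s5   # voting disk device
chmod 660 /dev/rdsk/c3t1d0s5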
Re: 10g RAC configuration on Solaris 10 [message #413121 is a reply to message #412907] Tue, 14 July 2009 08:20
kudur_kv
Messages: 75
Registered: February 2005
Member
I think the reason is that the cluster services must be up and running before the ASM instance can come up. If you place the OCR and the voting files inside ASM, the cluster services might not come up correctly.

I hope I read that right!

Regards,
KV.

And thank you all for the links.
I hope this discussion gives me more tips when I do the implementation at the end of the month.

Thanks again.
Re: 10g RAC configuration on Solaris 10 [message #413123 is a reply to message #413121] Tue, 14 July 2009 08:36
babuknb
Messages: 1736
Registered: December 2005
Location: NJ
Senior Member

>>If you place the OCR and the voting files inside ASM, the cluster services might not come up correctly.

There is no way to store your OCR & voting disk in an ASM disk group.

Babu
Re: 10g RAC configuration on Solaris 10 [message #413230 is a reply to message #413123] Tue, 14 July 2009 22:06
trantuananh24hg
Messages: 744
Registered: January 2007
Location: Ha Noi, Viet Nam
Senior Member
Following the outline of the article
Build Your Own Oracle RAC Cluster on Solaris 10 and iSCSI

Quote:

On Solaris, the UDP parameters are udp_recv_hiwat and udp_xmit_hiwat. The default values for these parameters on Solaris 10 are 57344 bytes. Oracle recommends that you set these parameters to at least 65536 bytes.

To see what these parameters are currently set to, enter the following commands:

# ndd /dev/udp udp_xmit_hiwat
57344
# ndd /dev/udp udp_recv_hiwat
57344

To set the values of these parameters to 65536 bytes in current memory, enter the following commands:

# ndd -set /dev/udp udp_xmit_hiwat 65536
# ndd -set /dev/udp udp_recv_hiwat 65536

Now we want these parameters to be set to these values when the system boots. The official Oracle documentation is incorrect when it states that when you set the values of these parameters in the /etc/system file, they are set on boot. These values in /etc/system will have no effect for Solaris 10. Please see Bug 5237047 for more information.



and

Quote:

Setting Kernel Parameters

In Solaris 10, there is a new way of setting kernel parameters. The old Solaris 8 and 9 way of setting kernel parameters by editing the /etc/system file is deprecated. A new method of setting kernel parameters exists in Solaris 10 using the resource control facility and this method does not require the system to be re-booted for the change to take effect.

Let's start by creating a new resource project.

# projadd oracle

Kernel parameters are merely attributes of a resource project so new kernel parameter values can be established by modifying the attributes of a project. First we need to make sure that the oracle user we created earlier knows to use the new oracle project for its resource limits. This is accomplished by editing the /etc/user_attr file to look like this:

#
# Copyright (c) 2003 by Sun Microsystems, Inc. All rights reserved.
#
# /etc/user_attr
#
# user attributes. see user_attr(4)
#
#pragma ident "@(#)user_attr 1.1 03/07/09 SMI"
#
adm::::profiles=Log Management
lp::::profiles=Printer Management
root::::auths=solaris.*,solaris.grant;profiles=Web Console Management,All;lock_after_retries=no
oracle::::project=oracle
..........................



Do I really not need to configure the kernel lines in /etc/system, such as:

set shmsys:shminfo_shmmax=12884901888
set shmsys:shminfo_shmmin=1
set shmsys:shminfo_shmseg=10
set shmsys:shminfo_shmmni=100
set semsys:seminfo_semmni=800
set semsys:seminfo_semmsl=256
set semsys:seminfo_semmns=204800
set noexec_user_stack=1
.....


Once before, I tried setting the kernel parameters with the projmod command, without the above entries in /etc/system (on Solaris 10, but for a single-instance database), and Oracle still reported problems during the installer's prerequisite checks. Was I wrong?

Thank you!


Re: 10g RAC configuration on Solaris 10 [message #413337 is a reply to message #413230] Wed, 15 July 2009 04:52
babuknb
Messages: 1736
Registered: December 2005
Location: NJ
Senior Member

Quote:
Now we want these parameters to be set to these values when the system boots. The official Oracle documentation is incorrect when it states that when you set the values of these parameters in the /etc/system file, they are set on boot. These values in /etc/system will have no effect for Solaris 10. Please see Bug 5237047 for more information.


That's the reason you add an init script under /etc/init.d to re-apply them at boot (a sketch follows below).

>>Do I really not need to configure the kernel lines in /etc/system, such as

Correct. On Solaris 10 those /etc/system entries are not needed; use resource controls instead.
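For the UDP high-water marks above, a minimal sketch of such a boot script (the udp_highwater name and the S99 run order are assumptions; adjust to your own standards):

#!/bin/sh
# /etc/init.d/udp_highwater: re-applies the UDP high-water marks at boot,
# since /etc/system entries for these have no effect on Solaris 10 (Bug 5237047)
case "$1" in
start)
        /usr/sbin/ndd -set /dev/udp udp_xmit_hiwat 65536
        /usr/sbin/ndd -set /dev/udp udp_recv_hiwat 65536
        ;;
*)
        echo "Usage: $0 start"
        exit 1
        ;;
esac
exit 0

Then link it into the boot sequence:

# ln -s /etc/init.d/udp_highwater /etc/rc2.d/S99udp_highwater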
Re: 10g RAC configuration on Solaris 10 [message #413339 is a reply to message #413230] Wed, 15 July 2009 05:16
ebrian
Messages: 2794
Registered: April 2006
Senior Member
Solaris 10 now uses the resource control utility (prctl) to configure/set the kernel parameters.
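For example, a minimal sketch building on the article's oracle project (the 4G shared-memory value is only an illustration; size it to your planned SGA):

# raise the shared-memory ceiling for the oracle project; takes effect
# without a reboot
projmod -sK "project.max-shm-memory=(priv,4G,deny)" oracle

# verify the active limit for the project
prctl -n project.max-shm-memory -i project oracle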
Re: 10g RAC configuration on Solaris 10 [message #414096 is a reply to message #413339] Mon, 20 July 2009 06:44
kudur_kv
Messages: 75
Registered: February 2005
Member
In continuation of the original topic, I am trying to configure RAC on Solaris. Due to constraints I am working this out on VMware. As of now I have created 2 nodes that are able to talk to each other. Before I can install Oracle on these individual nodes, I am taking the help of a system admin to install and configure Sun Cluster 3.2 on the VMware machines.

The question I have is: how can I add common storage on VMware that will be equally accessible to all nodes?

Any clues, please?

TIA
KV
Re: 10g RAC configuration on Solaris 10 [message #414109 is a reply to message #414096] Mon, 20 July 2009 07:10
babuknb
Messages: 1736
Registered: December 2005
Location: NJ
Senior Member


Good question.

As per my understanding (I have done a RAC setup using VMware):

1/ Configure the guest OS in VMware (the OS hard disk should be SCSI 0:0)
2/ Install the Solaris OS
3/ Install the necessary OS packages
4/ Create the oracle user & groups
5/ Create an environment file for oracle (ORACLE_HOME, ORACLE_BASE, etc.)
6/ Configure your kernel parameters
7/ Configure the other Oracle-related parameters

Storage:

1/ Shut down your guest OS
2/ Add new hard disks (they should be independent-persistent and on a separate bus: SCSI 1:0, etc.)
3/ Start your guest OS
4/ Format the newly added disks (raw, OCFS2, etc., as your setup requires)

Now shut down your guest OS (say Sun1) and copy & paste it to create the second node (say Sun2); the shared-disk settings are sketched below.

This is the setup for VMware; check the Oracle documentation to install RAC.

Thanks
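For the shared disks themselves, a minimal sketch of the .vmx entries commonly used in RAC-on-VMware setups (the shared1.vmdk file name is hypothetical, and options vary by VMware release, so verify against your version):

# pre-allocated disk on its own SCSI bus, added to BOTH nodes' .vmx files
scsi1.present = "TRUE"
scsi1.sharedBus = "virtual"
scsi1:0.present = "TRUE"
scsi1:0.fileName = "shared1.vmdk"
scsi1:0.mode = "independent-persistent"
disk.locking = "FALSE"
diskLib.dataCacheMaxSize = "0"

Both nodes point at the same .vmdk file; disk.locking = "FALSE" is what allows the second VM to open it.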
Re: 10g RAC configuration on Solaris 10 [message #414152 is a reply to message #412907] Mon, 20 July 2009 09:52
kudur_kv
Messages: 75
Registered: February 2005
Member
Hi Babu,

Thank you for the tip. Looks like I created the clone of the guest OS a little too soon!

I will try this approach tomorrow and let you know.

Cheers!
KV
Re: 10g RAC configuration on Solaris 10 [message #414306 is a reply to message #414152] Tue, 21 July 2009 08:38
kudur_kv
Messages: 75
Registered: February 2005
Member
I added a new disk to the Solaris VM and cloned it.
The cloned machine has created another virtual disk, but I am not sure how that will get shared. I will try to continue with the Oracle Clusterware installation.
Re: 10g RAC configuration on Solaris 10 [message #414313 is a reply to message #414306] Tue, 21 July 2009 09:18
babuknb
Messages: 1736
Registered: December 2005
Location: NJ
Senior Member

>>The cloned machine has created another virtual disk, but I am not sure how that will get shared. I will try to continue with the Oracle Clusterware installation

Check the attached screenshot; it should give you some idea about your VM nodes & shared storage.

PS: This information relates to Oracle RAC configuration using VMware, so it is an Oracle-related discussion.


Re: 10g RAC configuration on Solaris 10 [message #417300 is a reply to message #412907] Fri, 07 August 2009 01:44
trantuananh24hg
Messages: 744
Registered: January 2007
Location: Ha Noi, Viet Nam
Senior Member
Dear all,
I've got a problem, shown in the following output:
$ ./runcluvfy.sh stage -post hwos -n mbfdb01,mbfdb02 -verbose

Performing post-checks for hardware and operating system setup

Checking node reachability...

Check: Node reachability from node "mbfdb01"
  Destination Node                      Reachable?
  ------------------------------------  ------------------------
  mbfdb01                               yes
  mbfdb02                               yes
Result: Node reachability check passed from node "mbfdb01".


Checking user equivalence...

Check: User equivalence for user "oracle"
  Node Name                             Comment
  ------------------------------------  ------------------------
  mbfdb02                               failed
  mbfdb01                               failed
Result: User equivalence check failed for user "oracle".

ERROR:
User equivalence unavailable on all the nodes.
Verification cannot proceed.


Post-check for hardware and operating system setup was unsuccessful on all the nodes.
$


On both nodes, I've configured SSH so that each node can connect to the other without a password. You can see:

At Node 1:
login as: oracle
Using keyboard-interactive authentication.
Password:
Last login: Fri Aug  7 13:22:55 2009 from 10.252.20.110
Sun Microsystems Inc.   SunOS 5.10      Generic January 2005
$ cat /etc/hosts
#
# Internet host table
#
::1     localhost
127.0.0.1       localhost
10.252.20.71    mbfdb01 mbfdb01.neo.com.vn      loghost
10.252.20.76    mbfdb02 mbfdb02.neo.com.vn
10.252.20.72    mbfdb01-test-bge0
10.252.20.73    mbfdb01-test-nxge0
$ ssh oracle@mbfdb02
Last login: Fri Aug  7 13:24:57 2009 from mbfdb01
Sun Microsystems Inc.   SunOS 5.10      Generic January 2005
$ exit
Connection to mbfdb02 closed.
$ ssh oracle@mbfdb02.neo.com.vn
Last login: Fri Aug  7 13:30:12 2009 from mbfdb01
Sun Microsystems Inc.   SunOS 5.10      Generic January 2005
$ ssh oracle@10.252.20.76
Last login: Fri Aug  7 13:30:32 2009 from mbfdb01
Sun Microsystems Inc.   SunOS 5.10      Generic January 2005
$ exit
Connection to 10.252.20.76 closed.
$ ssh mbfdb02 "date;hostname"
Fri Aug  7 13:31:28 ICT 2009
mbfdb02
$ ls


Image of Node 1
http://i95.photobucket.com/albums/l130/trantuananh24hg/Node_1.jpg

At Node 2:
login as: oracle
Using keyboard-interactive authentication.
Password:
Last login: Fri Aug  7 13:31:28 2009 from mbfdb02
Sun Microsystems Inc.   SunOS 5.10      Generic January 2005
$ cat /etc/hosts
#
# Internet host table
#
#::1    localhost
127.0.0.1       localhost
10.252.20.76    mbfdb02  mbfdb02.neo.com.vn     loghost
10.252.20.71    mbfdb01  mbfdb01.neo.com.vn
10.252.20.77    mbfdb02-test-bge0
10.252.20.78    mbfdb02-test-nxge0
$ ssh oracle@mbfdb01
Last login: Fri Aug  7 13:28:44 2009 from 10.252.20.110
Sun Microsystems Inc.   SunOS 5.10      Generic January 2005
$ exit
Connection to mbfdb01 closed.
$ ssh oracle@mbfdb01.neo.com.vn
Last login: Fri Aug  7 13:33:23 2009 from mbfdb02
Sun Microsystems Inc.   SunOS 5.10      Generic January 2005
$ exit
Connection to mbfdb01.neo.com.vn closed.
$ ssh oracle@10.252.20.71
Last login: Fri Aug  7 13:33:41 2009 from mbfdb02
Sun Microsystems Inc.   SunOS 5.10      Generic January 2005
$ exit
Connection to 10.252.20.71 closed.
$ ssh mbfdb01 "date;hostname"
Fri Aug  7 13:34:09 ICT 2009
mbfdb01
$


Image of Node 2
http://i95.photobucket.com/albums/l130/trantuananh24hg/Node_2.jpg


Could you help me resolve this problem?

Thank you very much!
Re: 10g RAC configuration on Solaris 10 [message #417307 is a reply to message #412907] Fri, 07 August 2009 02:20
trantuananh24hg
Messages: 744
Registered: January 2007
Location: Ha Noi, Viet Nam
Senior Member
Dear all,

I've fixed the problem above. It had a simple cause: there was no link to ssh in /usr/local/bin, where cluvfy expects to find it.

The fix, on all 2 (or more) nodes:
# mkdir -p /usr/local/bin
# ln -s -f /usr/bin/ssh /usr/local/bin/ssh


And now I've rechecked with the cluvfy utility:
login as: root
Using keyboard-interactive authentication.
Password:
Last login: Fri Aug  7 11:28:31 2009 from 10.252.20.110
Sun Microsystems Inc.   SunOS 5.10      Generic January 2005
You have mail.
Sourcing //.profile-EIS.....
root@mbfdb01 # mkdir -p /usr/local/bin
root@mbfdb01 # ln -s -f /usr/bin/ssh /usr/local/bin/ssh
root@mbfdb01 # su - oracle
Sun Microsystems Inc.   SunOS 5.10      Generic January 2005
$ export SRVM_TRACE=true
$ cd 10gR2_RAC/Cluster/cluvfy
$ ./runcluvfy.sh comp nodecon -n mbfdb01,mbfdb02 -verbose

Verifying node connectivity

Verification of node connectivity was unsuccessful on all the nodes.
$ ./runcluvfy.sh stage -pre crsinst -n mbfdb01,mbfdb02 -verbose

Performing pre-checks for cluster services setup

Checking node reachability...

Check: Node reachability from node "mbfdb01"
  Destination Node                      Reachable?
  ------------------------------------  ------------------------
  mbfdb01                               yes
  mbfdb02                               yes
Result: Node reachability check passed from node "mbfdb01".


Checking user equivalence...

Check: User equivalence for user "oracle"
  Node Name                             Comment
  ------------------------------------  ------------------------
  mbfdb02                               passed
  mbfdb01                               passed
Result: User equivalence check passed for user "oracle".

Pre-check for cluster services setup was unsuccessful on all the nodes.
$ ./runcluvfy.sh stage -post hwos -n mbfdb01,mbfdb02 -verbose

Performing post-checks for hardware and operating system setup

Checking node reachability...

Check: Node reachability from node "mbfdb01"
  Destination Node                      Reachable?
  ------------------------------------  ------------------------
  mbfdb01                               yes
  mbfdb02                               yes
Result: Node reachability check passed from node "mbfdb01".


Checking user equivalence...

Check: User equivalence for user "oracle"
  Node Name                             Comment
  ------------------------------------  ------------------------
  mbfdb02                               passed
  mbfdb01                               passed
Result: User equivalence check passed for user "oracle".

Post-check for hardware and operating system setup was unsuccessful on all the nodes.
$



Re: 10g RAC configuration on Solaris 10 [message #417310 is a reply to message #417307] Fri, 07 August 2009 02:29
babuknb
Messages: 1736
Registered: December 2005
Location: NJ
Senior Member


Great. Thanks for your feedback.
Re: 10g RAC configuration on Solaris 10 [message #417323 is a reply to message #412907] Fri, 07 August 2009 04:00
trantuananh24hg
Messages: 744
Registered: January 2007
Location: Ha Noi, Viet Nam
Senior Member
Dear all,

When I use the cluvfy utility to verify the configuration, I get the following:


$ ./runcluvfy.sh comp sys -n mbfdb01,mbfdb02 -p crs -verbose

Verifying system requirement

Verification of system requirement was unsuccessful on all the nodes.
$ ./runcluvfy.sh stage -post hwos -n mbfdb01,mbfdb02 -verbose

Performing post-checks for hardware and operating system setup

Checking node reachability...

Check: Node reachability from node "mbfdb01"
  Destination Node                      Reachable?
  ------------------------------------  ------------------------
  mbfdb01                               yes
  mbfdb02                               yes
Result: Node reachability check passed from node "mbfdb01".


Checking user equivalence...

Check: User equivalence for user "oracle"
  Node Name                             Comment
  ------------------------------------  ------------------------
  mbfdb02                               passed
  mbfdb01                               passed
Result: User equivalence check passed for user "oracle".

Post-check for hardware and operating system setup was unsuccessful on all the nodes.
$ ./runcluvfy.sh stage -pre crsinst -n mbfdb01,mbfdb02

Performing pre-checks for cluster services setup

Checking node reachability...
Node reachability check passed from node "mbfdb01".


Checking user equivalence...
User equivalence check passed for user "oracle".

Pre-check for cluster services setup was unsuccessful on all the nodes.

$ ./runcluvfy.sh comp nodecon -n mbfdb01,mbfdb02 -verbose

Verifying node connectivity

Verification of node connectivity was unsuccessful on all the nodes.



It did not verify successfully. The Oracle documentation notes:
Quote:

The CVU Oracle Clusterware pre-installation stage check verifies the following:

* Node Reachability: All of the specified nodes are reachable from the local node.
* User Equivalence: Required user equivalence exists on all of the specified nodes.
* Node Connectivity: Connectivity exists between all the specified nodes through the public and private network interconnections, and at least one subnet exists that connects each node and contains public network interfaces that are suitable for use as virtual IPs (VIPs).
* Administrative Privileges: The oracle user has proper administrative privileges to install Oracle Clusterware on the specified nodes.
* Shared Storage Accessibility: If specified, the OCR device and voting disk are shared across all the specified nodes.
* System Requirements: All system requirements are met for installing Oracle Clusterware software, including kernel version, kernel parameters, memory, swap directory space, temporary directory space, and required users and groups.
* Kernel Packages: All required operating system software packages are installed.
* Node Applications: The virtual IP (VIP), Oracle Notification Service (ONS) and Global Service Daemon (GSD) node applications are functioning on each node.



I'm weak on networking, so I'm posting here in the hope that you can help me.
This is my network and the /etc/hosts file I wrote:

Node 1:
# ifconfig -a
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000
bge0: flags=209040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER,CoS> mtu 1500 index 2
        inet 10.252.20.72 netmask ffffff00 broadcast 10.252.20.255
        groupname ipmp-mbfdb01
        ether 0:21:28:1a:66:5e
bge0:1: flags=201000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,CoS> mtu 1500 index 2
        inet 10.252.20.71 netmask ffffff00 broadcast 10.252.20.255
nxge0: flags=239040803<UP,BROADCAST,MULTICAST,DEPRECATED,IPv4,NOFAILOVER,FAILED,STANDBY,CoS> mtu 1500 index 3
        inet 10.252.20.73 netmask ffffff00 broadcast 10.252.20.255
        groupname ipmp-mbfdb01
        ether 0:21:28:38:38:c6
sppp0: flags=10010008d1<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST,IPv4,FIXEDMTU> mtu 1500 index 4
        inet 10.252.20.2 --> 10.252.20.1 netmask ff000000
        ether 0

# cat /etc/hosts
#
# Internet host table
#
::1     localhost
127.0.0.1       localhost
10.252.20.71    mbfdb01 mbfdb01.neo.com.vn      loghost
10.252.20.76    mbfdb02 mbfdb02.neo.com.vn
10.252.20.72    mbfdb01-priv
10.252.20.73    mbfdb01-test-nxge0
10.252.20.74    mbfdb01-vip
10.252.20.79    mbfdb02-vip
10.252.20.77    mbfdb02-priv
#


I did not understand how to set up the private and public networks.
Could you help me?

Thank you very much!

Original summary of the network information:
Quote:

No  Item                                    Value
1   Server Type                             Sun SPARC Enterprise M4000
2   Host Name                                mbfdb01
3   Hostname IP Address                      10.252.20.71
4   Netmask                                  255.255.255.0
5   Default Gateway                          10.252.20.254
6   1st NIC                                  bge0
7   2nd NIC                                  nxge0
8   Test address 1 [IPMP]                    10.252.20.72
9   Test address 2 [IPMP]                    10.252.20.73
10  IPMP Group                               ipmp-mbfdb01
11  ORACLE VIP IP address                    10.252.20.74
12  Admin IP Address (XSCF) (port0/port1)    10.252.20.75





Re: 10g RAC configuration on Solaris 10 [message #417333 is a reply to message #412907] Fri, 07 August 2009 05:39
trantuananh24hg
Messages: 744
Registered: January 2007
Location: Ha Noi, Viet Nam
Senior Member
I recreated the network following this guide:
Quote:

· Public ping test
- Pinging Node1 from Node1 should return Node1's public IP address
- Pinging Node2 from Node1 should return Node2's public IP address
- Pinging Node1 from Node2 should return Node1's public IP address
- Pinging Node2 from Node2 should return Node2's public IP address

· Private ping test
- Pinging Node1 private from Node1 should return Node1's private IP address
- Pinging Node2 private from Node1 should return Node2's private IP address
- Pinging Node1 private from Node2 should return Node1's private IP address
- Pinging Node2 private from Node2 should return Node2's private IP address

· VIP ping test
Pinging the VIP addresses at this point should fail. VIPs will be activated at the end of the Oracle Clusterware install.



On Node 1:
# cat /etc/hosts
#
# Internet host table
#
::1     localhost
# Public IPs
127.0.0.1       localhost
10.252.20.71    mbfdb01 mbfdb01.neo.com.vn      loghost
10.252.20.76    mbfdb02 mbfdb02.neo.com.vn

# Private IPs
10.252.20.73    mbfdb01-priv
10.252.20.78    mbfdb02-priv

# VIPs
10.252.20.74    mbfdb01-vip
10.252.20.79    mbfdb02-vip

# Test IP-nxge0
10.252.20.72    mbfdb01-test

# ping mbfdb01-priv
mbfdb01-priv is alive
# ping mbfdb02-priv
mbfdb02-priv is alive
# ping mbfdb02
mbfdb02 is alive
# ping mbfdb02.neo.com.vn
mbfdb02.neo.com.vn is alive


On Node 2:
root@mbfdb02 # cat /etc/hosts
#
# Internet host table
#
#::1    localhost
# Public IPs
127.0.0.1       localhost
10.252.20.71    mbfdb01 mbfdb01.neo.com.vn
10.252.20.76    mbfdb02 mbfdb02.neo.com.vn      loghost

# Private IPs
10.252.20.73    mbfdb01-priv
10.252.20.78    mbfdb02-priv

# VIPs
10.252.20.74    mbfdb01-vip
10.252.20.79    mbfdb02-vip

# Test IP-nxge0
10.252.20.77    mbfdb02-test

root@mbfdb02 # ping mbfdb01.neo.com.vn
mbfdb01.neo.com.vn is alive
root@mbfdb02 # ping mbfdb01-priv
mbfdb01-priv is alive
root@mbfdb02 # ping mbfdb01
mbfdb01 is alive
root@mbfdb02 #



But when I check with cluvfy:
# exit
$ pwd
/oracle/app/10gR2_RAC/Cluster/cluvfy
$ ./runcluvfy.sh comp nodecon -n mbfdb01,mbfdb02 -verbose

Verifying node connectivity

Verification of node connectivity was unsuccessful on all the nodes.
$



Was I wrong?
Re: 10g RAC configuration on Solaris 10 [message #417429 is a reply to message #417333] Sat, 08 August 2009 07:01
babuknb
Messages: 1736
Registered: December 2005
Location: NJ
Senior Member


Try

./runcluvfy.sh stage -pre crsinst -n mbfdb01,mbfdb02  -verbose
Re: 10g RAC configuration on Solaris 10 [message #417497 is a reply to message #412907] Sun, 09 August 2009 22:39
trantuananh24hg
Messages: 744
Registered: January 2007
Location: Ha Noi, Viet Nam
Senior Member
Dear Gent,
I'm not at the RAC machines now, but I've executed this command:

./runcluvfy.sh stage -pre crsinst -n mbfdb01,mbfdb02  -verbose


It reported the node verification as unsuccessful.

I don't have much experience with situations like this. Could you describe, or guess at, what might have caused the problem?

Thank you very much!
Re: 10g RAC configuration on Solaris 10 [message #417500 is a reply to message #412907] Sun, 09 August 2009 23:01
BlackSwan
Messages: 26766
Registered: January 2009
Location: SoCal
Senior Member
I am guessing & I could be 100% off base.
Is ssh access functioning in all possible directions, even to "localhost"?

By this I mean between all possible "host names" & even ssh to myself/same host/same VIP


Re: 10g RAC configuration on Solaris 10 [message #417503 is a reply to message #412907] Sun, 09 August 2009 23:20
trantuananh24hg
Messages: 744
Registered: January 2007
Location: Ha Noi, Viet Nam
Senior Member
Dear BlackSwan!

I don't fully understand your words, but I guess you mean I must check:

1 - Must SSH be configured to connect without any password?
2 - Must SSH be able to connect over the local/priv/vip addresses correctly?

Could you clarify for me?

Thank you!


Re: 10g RAC configuration on Solaris 10 [message #417504 is a reply to message #412907] Sun, 09 August 2009 23:28
BlackSwan
Messages: 26766
Registered: January 2009
Location: SoCal
Senior Member
>1 - Must SSH be configured to connect without any password?
Yes, see below

>2 - Must SSH be able to connect over the local/priv/vip addresses correctly?
local1 ->local2
local1 ->local1
local2 ->local1
local2 ->local2

same for priv
same for vip
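A quick sketch of such a matrix check, run as oracle on each node (the host names follow this thread; add the VIP names only after Clusterware activates them):

#!/bin/sh
# each ssh must print the date with no password or host-key prompt
for host in mbfdb01 mbfdb02 mbfdb01-priv mbfdb02-priv; do
        echo "checking $host"
        ssh -o BatchMode=yes $host date || echo "user equivalence FAILED for $host"
done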
Re: 10g RAC configuration on Solaris 10 [message #417505 is a reply to message #412907] Sun, 09 August 2009 23:32
trantuananh24hg
Messages: 744
Registered: January 2007
Location: Ha Noi, Viet Nam
Senior Member
Dear BlackSwan!

I will reply as soon as possible.
(Because I am not at the RAC machines now.)

Thank you for your reply!
Re: 10g RAC configuration on Solaris 10 [message #417640 is a reply to message #417505] Mon, 10 August 2009 12:42
babuknb
Messages: 1736
Registered: December 2005
Location: NJ
Senior Member


Sorry to say, but if you do not post the output of the command above, we cannot help you.

This error/warning should be easy to diagnose from that output, but I do not understand why you are not posting it. I hope you know the OraFAQ rules.

Thanks
Re: 10g RAC configuration on Solaris 10 [message #417668 is a reply to message #417640] Mon, 10 August 2009 20:35
trantuananh24hg
Messages: 744
Registered: January 2007
Location: Ha Noi, Viet Nam
Senior Member
gentlebabu wrote on Tue, 11 August 2009 00:42

Sorry to say, but if you do not post the output of the command above, we cannot help you.

This error/warning should be easy to diagnose from that output, but I do not understand why you are not posting it. I hope you know the OraFAQ rules.

Thanks


Dear Gent,

As I said, I am not at the RAC machines; I am at another location, so I cannot connect to the servers through LAN/VPN/...

Of course, I will post the result of your command as soon as I get back. I hope you can help me resolve this problem as well.

Thank you!
Re: 10g RAC configuration on Solaris 10 [message #417792 is a reply to message #417668] Tue, 11 August 2009 09:49
babuknb
Messages: 1736
Registered: December 2005
Location: NJ
Senior Member


>>Of course, I will post the result of your command as soon as I get back

>>I hope you can help me resolve this problem as well

Okay. We will.
Re: 10g RAC configuration on Solaris 10 [message #418078 is a reply to message #417429] Wed, 12 August 2009 21:00
trantuananh24hg
Messages: 744
Registered: January 2007
Location: Ha Noi, Viet Nam
Senior Member
gentlebabu wrote on Sat, 08 August 2009 19:01

Try

./runcluvfy.sh stage -pre crsinst -n mbfdb01,mbfdb02  -verbose



Dear Gent,

I've executed the command as follows:
root@mbfdb01 # su - oracle
Sun Microsystems Inc.   SunOS 5.10      Generic January 2005
$ pwd
/oracle/app
$ cd 10gR2_RAC/Cluster/cluvfy
$ ./runcluvfy.sh stage -pre crsinst -n mbfdb01,mbfdb02  -verbose

Performing pre-checks for cluster services setup

Checking node reachability...

Check: Node reachability from node "mbfdb01"
  Destination Node                      Reachable?
  ------------------------------------  ------------------------
  mbfdb01                               yes
  mbfdb02                               yes
Result: Node reachability check passed from node "mbfdb01".


Checking user equivalence...

Check: User equivalence for user "oracle"
  Node Name                             Comment
  ------------------------------------  ------------------------
  mbfdb02                               passed
  mbfdb01                               passed
Result: User equivalence check passed for user "oracle".

Pre-check for cluster services setup was unsuccessful on all the nodes.
$



I hope you can help me soon!

Thank you!
Re: 10g RAC configuration on Solaris 10 [message #418124 is a reply to message #412907] Thu, 13 August 2009 01:45
trantuananh24hg
Messages: 744
Registered: January 2007
Location: Ha Noi, Viet Nam
Senior Member
Here is a summary of what I did:

SSH & oracle user (Passed)
Oracle user (2 nodes)
$ id -a
uid=175(oracle) gid=116(oinstall) groups=116(oinstall),115(dba)
$


Host file (2 nodes)
$ cat /etc/hosts
#
# Internet host table
#
::1     localhost
# Public IPs
127.0.0.1       localhost
10.252.20.71    mbfdb01 mbfdb01.neo.com.vn      loghost
10.252.20.76    mbfdb02 mbfdb02.neo.com.vn

# Private IPs
10.252.20.73    mbfdb01-priv
10.252.20.78    mbfdb02-priv

# VIPs
10.252.20.74    mbfdb01-vip
10.252.20.79    mbfdb02-vip

# Test IP-nxge0
10.252.20.72    mbfdb01-test

$ exit
root@mbfdb01 # ifconfig -a
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000
bge0: flags=201000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,CoS> mtu 1500 index 2
        inet 10.252.20.73 netmask ffffff00 broadcast 10.252.20.255
        ether 0:21:28:1a:66:5e
bge0:1: flags=201000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,CoS> mtu 1500 index 2
        inet 10.252.20.71 netmask ffffff00 broadcast 10.252.20.255
nxge0: flags=201000802<BROADCAST,MULTICAST,IPv4,CoS> mtu 1500 index 3
        inet 0.0.0.0 netmask 0
        ether 0:21:28:38:38:c6
sppp0: flags=10010008d1<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST,IPv4,FIXEDMTU> mtu 1500 index 4
        inet 10.252.20.2 --> 10.252.20.1 netmask ff000000
        ether 0
root@mbfdb01 #


SSH connectivity
At Node 1:
$ ssh oracle@mbfdb02
Last login: Thu Aug 13 13:16:22 2009 from mbfdb01
Sun Microsystems Inc.   SunOS 5.10      Generic January 2005
$ exit
Connection to mbfdb02 closed.
$ ssh oracle@mbfdb02.neo.com.vn
Last login: Thu Aug 13 13:19:24 2009 from mbfdb01
Sun Microsystems Inc.   SunOS 5.10      Generic January 2005
$ exit
Connection to mbfdb02.neo.com.vn closed.
$ ssh oracle@mbfdb02-priv
Last login: Thu Aug 13 13:19:38 2009 from mbfdb01
Sun Microsystems Inc.   SunOS 5.10      Generic January 2005
$ exit
Connection to mbfdb02-priv closed.
$ ssh mbfdb02 "date;hostname"
Thu Aug 13 13:20:29 ICT 2009
mbfdb02
$ ssh mbfdb02.neo.com.vn "date;hostname"
Thu Aug 13 13:20:44 ICT 2009
mbfdb02
$ ssh mbfdb02-priv "date;hostname"
Thu Aug 13 13:20:55 ICT 2009
mbfdb02
$



At Node 2:
$ ssh oracle@mbfdb01
Last login: Thu Aug 13 13:16:15 2009 from mbfdb01
Sun Microsystems Inc.   SunOS 5.10      Generic January 2005
$ exit
Connection to mbfdb01 closed.
$ ssh oracle@mbfdb01.neo.com.vn
Last login: Thu Aug 13 13:21:49 2009 from mbfdb02
Sun Microsystems Inc.   SunOS 5.10      Generic January 2005
$ exit
Connection to mbfdb01.neo.com.vn closed.
$ ssh oracle@mbfdb01-priv
Last login: Thu Aug 13 13:21:57 2009 from mbfdb02
Sun Microsystems Inc.   SunOS 5.10      Generic January 2005
$ exit
Connection to mbfdb01-priv closed.
$ ssh mbfdb01 "date;hostname"
Thu Aug 13 13:22:13 ICT 2009
mbfdb01
$ ssh mbfdb01.neo.com.vn "date;hostname"
Thu Aug 13 13:22:26 ICT 2009
mbfdb01
$ exit
Connection to mbfdb01-priv closed.
$ ssh mbfdb01-priv "date;hostname"
Thu Aug 13 13:22:43 ICT 2009
mbfdb01
$


Cluster verification: cluvfy (failed)


Check hardware OS:
$ ./runcluvfy.sh stage -post hwos -n mbfdb01,mbfdb02 -verbose

Performing post-checks for hardware and operating system setup

Checking node reachability...

Check: Node reachability from node "mbfdb01"
  Destination Node                      Reachable?
  ------------------------------------  ------------------------
  mbfdb01                               yes
  mbfdb02                               yes
Result: Node reachability check passed from node "mbfdb01".


Checking user equivalence...

Check: User equivalence for user "oracle"
  Node Name                             Comment
  ------------------------------------  ------------------------
  mbfdb02                               passed
  mbfdb01                               passed
Result: User equivalence check passed for user "oracle".

Post-check for hardware and operating system setup was unsuccessful on all the nodes.



Check shared storage (2 nodes)
$ ./runcluvfy.sh comp ssa -n mbfdb01,mbfdb02

Verifying shared storage accessibility

Verification of shared storage accessibility was unsuccessful on all the nodes.

$ ./runcluvfy.sh comp ssa -n mbfdb01,mbfdb02 -s /dev/dsk/c3t600A0B80002AFEF600001C4A4A497F8Ad0s0 -verbose

Verifying shared storage accessibility

Verification of shared storage accessibility was unsuccessful on all the nodes

$ ./runcluvfy.sh comp ssa -n mbfdb01,mbfdb02 -s /dev/dsk/c3t600A0B80002AFEF600001C4B4A498CECd0s0 -verbose

Verifying shared storage accessibility

Verification of shared storage accessibility was unsuccessful on all the nodes.
$


Check node connectivity (2 nodes)
$ ./runcluvfy.sh comp nodecon -n mbfdb01,mbfdb02 -verbose

Verifying node connectivity

Verification of node connectivity was unsuccessful on all the nodes.
$



Check pre-crs_installation (2 nodes)
$ ./runcluvfy.sh stage -pre crsinst -n mbfdb01,mbfdb02 -verbose

Performing pre-checks for cluster services setup

Checking node reachability...

Check: Node reachability from node "mbfdb01"
  Destination Node                      Reachable?
  ------------------------------------  ------------------------
  mbfdb01                               yes
  mbfdb02                               yes
Result: Node reachability check passed from node "mbfdb01".


Checking user equivalence...

Check: User equivalence for user "oracle"
  Node Name                             Comment
  ------------------------------------  ------------------------
  mbfdb02                               passed
  mbfdb01                               passed
Result: User equivalence check passed for user "oracle".

Pre-check for cluster services setup was unsuccessful on all the nodes.



Internal disk & shared storage (RAID 1) information (2 nodes)
At Node 1
$ hostname
mbfdb01
$ df -k
Filesystem            kbytes    used   avail capacity  Mounted on
/dev/md/dsk/d100     20655025 6770856 13677619    34%    /
/devices                   0       0       0     0%    /devices
ctfs                       0       0       0     0%    /system/contract
proc                       0       0       0     0%    /proc
mnttab                     0       0       0     0%    /etc/mnttab
swap                 94219368    1688 94217680     1%    /etc/svc/volatile
objfs                      0       0       0     0%    /system/object
sharefs                    0       0       0     0%    /etc/dfs/sharetab
fd                         0       0       0     0%    /dev/fd
swap                 94217768      88 94217680     1%    /tmp
swap                 94217752      72 94217680     1%    /var/run
/dev/dsk/c3t600A0B80002AFF0200001A474A498CE9d0s0
                     495674704   65553 490652404     1%    /mbfdata
/dev/dsk/c3t600A0B80002AFEF600001C494A497E1Ad0s0
                     82611933   65553 81720261     1%    /mbfbacku
/dev/dsk/c3t600A0B80002AFEF600001C4A4A497F8Ad0s0
                     402735694   65553 398642785     1%    /mbfcrs
/dev/md/dsk/d130     54298766  955157 52800622     2%    /oracle
/dev/dsk/c3t600A0B80002AFEF600001C4B4A498CECd0s0
                     3077135    3089 3012504     1%    /ocr_voti
$


At Node 2:
$ hostname
mbfdb02
$ df -k
Filesystem            kbytes    used   avail capacity  Mounted on
/dev/md/dsk/d200     20655025 4699124 15749351    23%    /
/devices                   0       0       0     0%    /devices
ctfs                       0       0       0     0%    /system/contract
proc                       0       0       0     0%    /proc
mnttab                     0       0       0     0%    /etc/mnttab
swap                 95027496    1648 95025848     1%    /etc/svc/volatile
objfs                      0       0       0     0%    /system/object
sharefs                    0       0       0     0%    /etc/dfs/sharetab
fd                         0       0       0     0%    /dev/fd
swap                 95025888      40 95025848     1%    /tmp
swap                 95025920      72 95025848     1%    /var/run
/dev/dsk/c3t600A0B80002AFEF600001C494A497E1Ad0s0
                     82611933   65553 81720261     1%    /mbfbacku
/dev/dsk/c3t600A0B80002AFEF600001C4A4A497F8Ad0s0
                     402735694   65553 398642785     1%    /mbfcrs
/dev/dsk/c3t600A0B80002AFEF600001C4B4A498CECd0s0
                     3077135    3089 3012504     1%    /ocr_voti
/dev/dsk/c3t600A0B80002AFF0200001A474A498CE9d0s0
                     495674704   65553 490652404     1%    /mbfdata
/dev/md/dsk/d230     54298766 4833208 48922571     9%    /oracle
/vol/dev/dsk/c0t3d0/sol_10_509_sparc
                     2621420 2621420       0   100%    /cdrom/sol_10_509_sparc
$


Iostat (2 nodes)
At Node 1
$ iostat -En /dev/dsk/c3t600A0B80002AFEF600001C4B4A498CECd0s0
$ iostat -En /dev/dsk/c3t600A0B80002AFEF600001C4A4A497F8Ad0s0
$ iostat -En /dev/dsk/c3t600A0B80002AFEF600001C494A497E1Ad0s0
$ iostat -En /dev/dsk/c3t600A0B80002AFF0200001A474A498CE9d0s0



At Node 2
$ iostat -En /dev/dsk/c3t600A0B80002AFEF600001C4B4A498CECd0s0
$ iostat -En /dev/dsk/c3t600A0B80002AFEF600001C4A4A497F8Ad0s0
$ iostat -En /dev/dsk/c3t600A0B80002AFEF600001C494A497E1Ad0s0
$ iostat -En /dev/dsk/c3t600A0B80002AFF0200001A474A498CE9d0s0




Was I wrong? Could you clarify things further for me?

Thank you very much!


Re: 10g RAC configuration on Solaris 10 [message #418197 is a reply to message #418124] Thu, 13 August 2009 06:45
babuknb
Messages: 1736
Registered: December 2005
Location: NJ
Senior Member

Are you using VMware? If not, what kind of storage cluster are you using?


Re: 10g RAC configuration on Solaris 10 [message #418238 is a reply to message #418124] Thu, 13 August 2009 09:39
Mahesh Rajendran
Messages: 10707
Registered: March 2002
Location: oracleDocoVille
Senior Member
Account Moderator
Did you try ssa with one node at a time?
Re: 10g RAC configuration on Solaris 10 [message #418292 is a reply to message #412907] Thu, 13 August 2009 21:01
trantuananh24hg
Messages: 744
Registered: January 2007
Location: Ha Noi, Viet Nam
Senior Member
@gent: No, these are the actual RAC machines: M4000 SPARC, with 1 TB configured as RAID 1 for shared storage.
@Mahesh: I've tried comp ssa with one node at a time, but it still failed.
I plan to put the OCR & voting disk on a cluster file system, and the database files on ASM (or a file system, depending on my benchmark results).


$ ./runcluvfy.sh comp ssa -n mbfdb01 -s /dev/rdsk/c3t600A0B80002AFEF600001C4B4A498CECd0s0

Verifying shared storage accessibility

Verification of shared storage accessibility was unsuccessful on all the nodes.
$ ./runcluvfy.sh comp ssa -n mbfdb01 -s /dev/rdsk/c3t600A0B80002AFEF600001C4A4A497F8Ad0s0

Verifying shared storage accessibility

Verification of shared storage accessibility was unsuccessful on all the nodes.

$ ./runcluvfy.sh comp ssa -n mbfdb01 -s /dev/dsk/c3t600A0B80002AFEF600001C4B4A498CECd0s0 -verbose

Verifying shared storage accessibility

Verification of shared storage accessibility was unsuccessful on all the nodes.

Re: 10g RAC configuration on Solaris 10 [message #418393 is a reply to message #418292] Fri, 14 August 2009 09:55
babuknb
Messages: 1736
Registered: December 2005
Location: NJ
Senior Member


Okay,

Instead of checking at the Oracle cluster level, first of all you need to configure and check the operating-system cluster.

For example, verify at the OS level that the cluster is working, e.g. with an ACTIVE <=> PASSIVE failover test.

Thanks