Oracle Database & Real Application Clusters 10g on SGI Altix 350 Servers


This page provides detailed information about implementing Oracle Database Server 10g on SGI Altix 350 servers, optionally with the Real Application Clusters option. This combination can be a competitive alternative to common mid-range and enterprise database servers, e.g. Sun's SunFire or HP's Integrity servers, which are more commonly used with Oracle Database. The main goal is to demonstrate some advantages of this combination and to discuss typical implementation problems. The document is based on our own experience with a configuration of two Altix 350 servers in a database cluster.

2. Hardware


SGI (Silicon Graphics) has been known for years as a provider of computer graphics systems and technical servers. However, its servers, formerly running the IRIX operating system, could also run other enterprise IT applications. In recent years SGI moved its newly developed servers to Linux, and Linux made it practical to combine SGI hardware not only with technical workloads but also with applications such as Oracle Database. The hardware itself also provides good reasons to run a database on it, even if this is not a common choice.

Today SGI provides two lines of Linux servers: the Altix 350 (up to 16 CPUs per kernel image) and the Altix 3000 (currently up to 256 CPUs). We focus on the smaller and newer Altix 350, originally released in January 2004, but the principles apply to both.

Unlike other vendors' systems, the SGI Altix is based on the SGI NUMAflex architecture. The server is divided into physically separate modules; each module contains two CPUs, memory and/or an I/O subsystem for connecting other devices (PCI bus, network interfaces etc.). All modules are connected via special NUMAlink cables at the back of the servers so that all resources are shared across the whole system. In practice this means you can change your hardware configuration simply by adding modules to the system or disconnecting them from it, and the system can be expanded without buying expensive infrastructure (motherboards, cases) in advance. In Altix 350 systems the modules are connected directly in a ring topology, so the Altix 350 scales "only" up to 16 CPUs per shared memory (kernel image). We measured almost linear scalability up to 10 CPUs; we did not try more. Altix 3000 servers use the same technology, but their modules are connected via NUMAlink switches.
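Because all modules end up in one shared-memory kernel image, the topology the kernel actually sees can be inspected from Linux itself. A minimal sketch (not SGI-specific; it works on any Linux kernel that exports NUMA information through sysfs):

```shell
#!/bin/sh
# List the NUMA nodes the running kernel exports through sysfs,
# one line per node with its CPU list; fall back gracefully on
# kernels that do not export a NUMA topology.
numa_summary() {
    found=0
    for node in /sys/devices/system/node/node*; do
        [ -d "$node" ] || continue
        found=1
        printf '%s: CPUs %s\n' "$(basename "$node")" "$(cat "$node/cpulist")"
    done
    [ "$found" -eq 1 ] || echo "no NUMA topology exported by this kernel"
}

numa_summary
```

On a multi-module Altix you would expect one node per module; on a small or non-NUMA machine the fallback message is printed instead.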


Altix systems use full 64-bit Intel Itanium 2 processors at various clock speeds, so you can run any application built for the Linux IA-64 platform. Even though the operating system has special support for the SGI architecture, applications are the same as on other vendors' IA-64 servers (e.g. HP).

A Linux /proc/cpuinfo example:
processor  : 0
vendor     : GenuineIntel
arch       : IA-64
family     : Itanium 2
model      : 1
revision   : 5
archrev    : 0
features   : branchlong
cpu number : 0
cpu regs   : 4
cpu MHz    : 1400.000000
itc MHz    : 1400.000000
BogoMIPS   : 2076.18
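A quick way to check how many of these processors the kernel sees is to count the `processor` entries in the same file. A small sketch, portable to any Linux system, not only IA-64:

```shell
#!/bin/sh
# Count logical CPUs by counting "processor" entries in /proc/cpuinfo.
cpu_count() {
    grep -c '^processor' /proc/cpuinfo
}

echo "CPUs online: $(cpu_count)"
```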


SGI uses its own chipsets and I/O subsystems. Our version of the Altix uses IO9 cards with internal SCSI disk support (up to two disks per module); newer versions may also support SATA disks. This subsystem uses the QLogic ISP12160 Dual Channel Ultra3 SCSI processor. Other components are adopted from third-party vendors (e.g. Broadcom NetXtreme BCM5701 Gigabit Ethernet cards). With 64-bit PCI slots at 133 MHz and 66 MHz, additional components can easily be attached.

L1 and L2 controllers

Altix servers are equipped with a built-in L1 controller. This device provides full control of the computer (power, environment and error control) via a serial or USB line. SGI also offers an L2 controller, which can control more than one Altix 350 server from one place via USB. The L2 controller is sold either as a dedicated device or as a software emulation for Linux. In practice this means you have full control of all your Altix servers from a single point.


The system is delivered as a group of boxes (modules). Each is equipped with its own 500 W power supply (up to two independent supplies per module), and each supply has a standard computer connector for a 110 V or 230 V electrical network. You can install the modules in any standard rack, or buy the original rack designed especially for Altix 350 systems. This rack is equipped with a Power Distribution Unit (PDU) which powers all devices placed in the rack. The PDU has to be connected to a two-phase network with a rather uncommon (large) connector, so check your local environment before purchasing it.

Known Hardware Problems

There are no known unsolved hardware or firmware problems.

3. Operating System

Even though Itanium 2 processors are supported by several operating systems, including HP-UX and Windows Server, SGI supports only 64-bit Linux for the IA-64 platform. Unlike other Linux hardware vendors (IBM or HP), SGI has chosen Linux as its only (and therefore primary) operating platform, so Linux support is first-class.

There are two Linux distributions officially certified by SGI as an operating platform. SGI develops its own distribution, the SGI Advanced Linux Environment with ProPack, which is derived from and compatible with Red Hat Enterprise Linux. You can also buy Altix servers with Novell SuSE Linux Enterprise Server (SLES) 9, while SLES 8 is supported with Service Pack 3 as well. The best configuration for running Oracle Database seems to be SLES 9 with Service Pack 1 and the latest kernel updates; SLES is the only operating platform for the Altix 350 officially supported by Oracle Database 10g.

We purchased our Altix 350 in June 2004 with SLES 8 SP3, officially with the 2.4.21 kernel. In May 2005 we upgraded the system to SLES 9 SP1. Thanks to Oracle RAC we were able to upgrade without any outage.

SLES 9 SP1 fully supports the Altix 350. The installation of SLES 8 was less comfortable because you had to use a special version of the Linux kernel, but it was ultimately successful. The current version, SLES 9 SP3, installs without any problems, and upgrades from older patch levels (service packs) can be applied online.
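When juggling several service-pack levels it is handy to report the running kernel and distribution release in one step. A sketch, assuming the `/etc/SuSE-release` file that SLES of this era ships (the path is distro-specific):

```shell
#!/bin/sh
# Report the running kernel and, where present, the SuSE release file
# that SLES service packs update.
os_report() {
    echo "kernel: $(uname -r)"
    if [ -r /etc/SuSE-release ]; then
        cat /etc/SuSE-release
    else
        echo "not a SuSE system (no /etc/SuSE-release)"
    fi
}

os_report
```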

Known Problems with Linux Operating System

There are no known Linux problems at this time.
The current version of the SUSE Linux distribution is SLES 9 SP3.

The only problem with SLES 8 SP3, solved by SGI in the kernel, was incorrect behaviour of some SCSI (and network) kernel drivers. This problem led to a system hang a few times a week without any hardware or kernel error being logged. The affected driver was patched by SGI engineer Jesse Barnes, and the fix has been officially included in mainline kernels since version 2.6.10. Even though it is not supported by SGI or Oracle, you can run the 2.6.10 kernel on SLES 8 SP3 as well. With SLES 9 SP1 we did not encounter this problem. You can contact me for details about this issue.
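If you run a self-built kernel to pick up a fix like this, it helps to verify the version threshold programmatically rather than by eye. A sketch using `sort -V` (GNU coreutils) for the version comparison:

```shell
#!/bin/sh
# version_ge A B: succeed if version A is greater than or equal to B.
version_ge() {
    [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# Check whether the running kernel already carries the 2.6.10 fix;
# strip any "-flavor" suffix from uname -r before comparing.
kernel_at_least() {
    version_ge "$(uname -r | cut -d- -f1)" "$1"
}

if kernel_at_least 2.6.10; then
    echo "SCSI driver fix is in this mainline kernel"
else
    echo "kernel older than 2.6.10; the fix may be missing"
fi
```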

4. Database Software

As the database software we use Oracle Database 10g. We strongly recommend running Oracle Database 10g Enterprise Edition or newer on the Altix 350. This version is distributed as a release (not as a patchset like on other platforms), so you have to contact Oracle to receive the CDs or download it from OTN. This release is the only version certified with SLES 9; Service Pack 1 does not seem to affect the SLES 9 certification.

The installation and operation of the database proceed without any significant problems. You can use every feature of the database, including the Real Application Clusters option. The Altix 350 provides very good per-CPU performance, and the overall throughput scales almost linearly.

Known Problems with Oracle Database

There are no known unresolved problems at this time.
The current version of Oracle Database for Linux Itanium is (not tested here).

Older problems:

5. Cluster Configuration

Oracle Real Application Clusters is a feature of Oracle Database that allows the database to be served by more than one database instance, each running on a separate server. Oracle's idea is that you can run your application against a cluster of many small (1-2 CPU) servers instead of buying a large SMP or NUMA machine. Our experience is that sharing memory through SGI NUMAflex is still much faster than through Oracle Cache Fusion (the global cache between Oracle instances). The communication overhead is still very high for our type of application (a web-based information system with more than 1,000,000 complex online transactions of different types per day). It is still more efficient to connect another CPU and memory module to the existing NUMAflex system than to add another separate server connected via an Ethernet network.

However, Real Application Clusters can also bring new capabilities to an Altix 350 environment. For example, you can buy another Altix server (meaning at least one new Altix base module with an internal disk) and connect it into a two-node cluster; the second node will be available to serve your clients in case the first node fails. The second node can also provide extra capacity for special database clients (e.g. batch jobs, important clients, clients that would otherwise overload the first node, etc.).

In a cluster configuration it is necessary to store the database data on an external (usually Fibre Channel) disk array. Such an array typically includes two disk controllers, each with two host connectors. This configuration allows both nodes to be connected directly to the array, so the Oracle instances on all nodes can share all the data. As our disk array we use an SGI InfiniteStorage TP9300 (originally from LSI Logic) with up to 14 Fibre Channel disks per enclosure.
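With two controllers and a direct connection from each node, every host sees more than one path to the same LUNs; on Linux this is usually handled by device-mapper multipath. A hypothetical configuration fragment, a sketch only: the vendor and product strings below are illustrative, not taken from this installation, and must be checked against your array.

```
# Hypothetical /etc/multipath.conf fragment (device-mapper multipath).
defaults {
    user_friendly_names yes
}
devices {
    device {
        vendor  "SGI"
        product "TP9300"
        path_grouping_policy failover
    }
}
```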

Oracle Real Application Clusters 10g also provides cluster management software, so if you decide to use Automatic Storage Management you do not have to install any other cluster software or filesystem.

Typical configuration

A two-node cluster schema

6. References

7. Author

Miroslav Kripac, Masaryk University

Comments, suggestions and questions are welcome and can be sent to

Creation date: January 18, 2005,
Last modified: Tuesday, 31-Jan-2006 13:51:09 CET