This is a combination of two courses, “Oracle Grid Infrastructure 11g: Manage Clusterware” and “ASM and Oracle Database 11g: RAC Administration”. The two courses are normally taught in five days and three days respectively, but this combination was taught in five days by extending the classroom time (8:30 AM – 6:30 PM). Knowledge Services instructor Pat Frey taught the class in Columbus/Dublin, OH. There is some overlap between the two courses, so it’s not as bad as it sounds.
There are lots of good labs in these courses and we did all of them except the overlapping parts. I was surprised that the virtual machines for our three-node clusters worked so well. There were some performance issues and one student’s cluster became unresponsive on the last day. Mine stayed up and I always finished the labs in about half the time other students took.
I couldn’t understand why it took the other students so long to work through the labs until I realized how much more experience I have with Oracle. For some of them it was their first exposure to Clusterware and RAC. I got the training materials the week before and one of the exercises actually solved an issue I was having on our Exadata RAC systems. Also, I have been using RAC since it first came out on 9i. Here’s a review of the RAC class I took back in 2003. So I think my experience and knowledge of Oracle in general made a huge difference in my ability to perform the lab work efficiently and quickly. I also asked a lot more questions for clarification than anyone else because I was able to see how the different features were relevant to production uptime and performance.
This was a very good course for me and I recommend it to anyone who already has a basic grounding in single-instance Oracle.
I spent last week in downtown Chicago attending this class. It was a very good class and the instructor, Andy Fortunak, was excellent. We had lots of labs to work through. The computers we used ran virtual database servers and virtual storage servers. This setup caused some problems, mainly slowness. My virtual database machine’s clusterware failed towards the end of the week, but the virtual storage servers continued to work fine.
The main message was: invest the time needed to set up services and Database Resource Manager. I/O Resource Manager and Instance Caging depend on them.
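As a rough sketch of what that setup involves (the service and consumer group names here are made up, and the consumer group is assumed to already exist), tying a service to a Resource Manager consumer group looks something like this:
Run as the oracle OS user:
srvctl add service -d MYDB -s REPORTING -r MYDB1,MYDB2
Run as a DBA:
BEGIN
   DBMS_RESOURCE_MANAGER.create_pending_area;
   DBMS_RESOURCE_MANAGER.set_consumer_group_mapping (
      attribute      => DBMS_RESOURCE_MANAGER.service_name,
      value          => 'REPORTING',
      consumer_group => 'REPORTS_GROUP');
   DBMS_RESOURCE_MANAGER.submit_pending_area;
END;
/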
We also discussed consolidation and using RAC One Node. We are consolidating scores of databases on Exadata. Most of them do not require multiple RAC nodes for workload, but we do want high availability and load balancing. This week I tested manually moving One Node databases between nodes and it works great (the command is sketched below). The client does not get disconnected but does get an error message. Srvctl actually makes it a two-node RAC temporarily and then stops the original node. I tested automatic failover of RAC One Node and compared it with a full two-node RAC with one instance down. They both work the way I wanted: a second instance starts up. I can’t see any technical reason for going with One Node.
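For reference, the manual relocation I tested was along these lines (the database and node names are made up); in 11.2 the -w option is the timeout in minutes:
srvctl relocate database -d MYDB -n racnode2 -w 30 -v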
We recently bought another company and we are in the process of integrating our systems with theirs. One requirement was to import their daily business data into our Business Intelligence system for reporting. We agreed they would provide an Oracle datapump export file for us to import. We wanted to automate this and have a non-privileged application owner perform the import. I created a stored procedure that could run the import as the system user and granted execute on the procedure to the application owner. The dbms_datapump package has lots of options and it was fun exploring them to find the best way of doing this.
Run as sys:
grant execute on dbms_lock to system;
Run as system:
CREATE OR REPLACE DIRECTORY SOURCE AS '/nfsmount/exp/incoming';
CREATE OR REPLACE PROCEDURE SYSTEM.SCHEMA_IMPORT (
   dumpfilename IN VARCHAR2)   -- name of the export file in the SOURCE directory
IS
   h1            NUMBER;        -- data pump job handle
   job_state     VARCHAR2 (30);
   status        ku$_Status;    -- data pump status
   job_not_exist EXCEPTION;
   PRAGMA EXCEPTION_INIT (job_not_exist, -31626);
BEGIN
   h1 := DBMS_DATAPUMP.open (operation => 'IMPORT', job_mode => 'SCHEMA',
                             job_name => NULL);
   DBMS_DATAPUMP.set_parameter (h1, 'TABLE_EXISTS_ACTION', 'TRUNCATE');
   DBMS_DATAPUMP.add_file (h1, dumpfilename, 'SOURCE');
   DBMS_DATAPUMP.add_file (h1, dumpfilename || '.log', 'SOURCE',
                           filetype => DBMS_DATAPUMP.ku$_file_type_log_file);
   DBMS_DATAPUMP.start_job (h1);
   -- poll until the job completes or stops
   job_state := 'UNDEFINED';
   WHILE (job_state != 'COMPLETED') AND (job_state != 'STOPPED')
   LOOP
      DBMS_LOCK.sleep (10);   -- the reason for the dbms_lock grant above
      DBMS_DATAPUMP.get_status (
         handle    => h1,
         mask      => DBMS_DATAPUMP.ku$_status_job_error
                      + DBMS_DATAPUMP.ku$_status_job_status,
         timeout   => -1,
         job_state => job_state,
         status    => status);
      job_state := status.job_status.state;
   END LOOP;
   DBMS_DATAPUMP.detach (h1);
   DBMS_OUTPUT.put_line ('job finished');
EXCEPTION
   WHEN job_not_exist THEN DBMS_OUTPUT.put_line ('job finished');
END;
/
grant execute on schema_import to APP_OWNER;
Run as APP_OWNER:
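Something like this, with serveroutput on to see the completion message (the dump file name here is just an example):
SET SERVEROUTPUT ON
EXECUTE SYSTEM.SCHEMA_IMPORT ('acme_daily.dmp');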
Today we got a request to refresh a development database from a copy of production. I decided to use the canned clone process that ships with OEM 12c instead of the usual scripts or manual processes. I was pleasantly surprised with the results. There was a failure related to an NFS problem and I was able to restart the process easily.
Pros:
- Minimizes the amount of scratch disk space by backing up and restoring one datafile at a time.
- No scripting or manual process required.
- Uses standard RMAN under the covers.
- Works with older versions (This one was 10.2)
- Works with existing backups or directly from the source database (This database was not previously backed up and was not in archivelog mode)
- Can duplicate a live database if it is in archivelog mode and running at version 11
- Easy to restart after errors.
- Runs as a database job in the background, unattended.
- Works on the same machine or across the network.
Cons:
- More time required: one datafile at a time, no parallelization. 763 GB ran for 6.5 hours.
- No email notification for errors or completion.
- Black box – you really need to know how to do the steps manually and this will not teach you.
- Needs scratch space on local disks to avoid NFS performance problems (equal to the size of the largest datafile).
I recently moved a 2 TB database from Oracle 11.1 on Linux to Oracle 11.2 on AIX using impdp with the network_link and parallel options. The data was imported in about two hours and the indexes took another seven hours. There was no downtime, but we kept the users off the system while this was happening.
Here is the command I used on the new server:
nohup impdp system/secret NETWORK_LINK=olddb FULL=y PARALLEL=25 &
Then you can run this to see which parallel process is importing which table:
impdp system attach
Import> parallel=30
Entering parallel=30 at the Import> prompt increases the number of parallel processes if you want more.
I had created all the datafiles in the new database and created a db link named olddb beforehand. The servers were on the same local area network and the network was the bottleneck.
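The link itself was nothing fancy; a minimal sketch with made-up connection details:
CREATE DATABASE LINK olddb
CONNECT TO system IDENTIFIED BY secret
USING 'olddb_tns';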
Today while installing SQLDeveloper on OEL 6.3 I ran into a couple of issues I wasn’t expecting. The installation went fine using the Oracle rpm file. The first execution of the /opt/sqldeveloper/sqldeveloper.sh script prompted for the location of a jdk. A find turned up /etc/alternatives/java_sdk_openjdk and that worked fine. The next issue was “Can’t connect to X11 window server using ‘:0.0’ as the value of the DISPLAY variable”. I tried running xterm to see if it would work. Xterm was not installed on the system and I couldn’t find any of the other X11 stuff. So I ran “yum install xterm”. Now xterm works and so does SQLDeveloper.
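For anyone hitting the same thing, the sequence boiled down to something like this (the rpm file name will vary by version):
rpm -Uhv sqldeveloper-*.noarch.rpm
/opt/sqldeveloper/sqldeveloper.sh   # prompts for the jdk path; I gave it /etc/alternatives/java_sdk_openjdk
yum install xterm                   # as root; pulls in the missing X11 client pieces
/opt/sqldeveloper/sqldeveloper.sh   # works now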
I took the opportunity to attend the OEM 12c Workshop which was offered here in Indianapolis at the Oracle office last week. I had seen a presentation at the Oracle users group meeting in April. Greg Walters was the presenter at the INOUG meeting and he also ran the workshop. It was a very good hands-on workshop. Each attendee got their own servers. One of the exercises involved locking up a database and then connecting using Real-Time ADDM (Automatic Database Diagnostic Monitor). It connects directly to the shared memory on the server and reads the v$ information. It reported the specific Unix process that was causing the problem. I logged on to the server and ran a kill -9 on that process, which cleared up the database problem. Another good exercise involved generating an ADM (Application Data Model) and then using the model to subset the data on a new database. I typed in the SQL to select a subset of one table, using bind variables for future flexibility. When I ran the job it selected a subset for that table plus all the other tables in the schema, following the referential integrity rules the ADM had discovered. That was very cool and fast too.
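Outside of Real-Time ADDM, the manual way to find the OS process behind a problem session is something like this generic sketch (not the exact steps from the workshop; :suspect_sid is a placeholder):
SELECT s.sid, s.serial#, p.spid, s.username, s.program
FROM v$session s, v$process p
WHERE s.paddr = p.addr
AND s.sid = :suspect_sid;
Then, as a last resort on the server: kill -9 <spid>.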