I have been automating and centralizing DBA and system administration processes using Python recently. One of the problems I solved was how to create Bash and SQL scripts to run on the remote database servers, copy them to those servers, and then execute them as another Linux account. After I got my Python scripts working perfectly, I discovered that I had re-invented Ansible 🙂
My scripts would have been much less complex and easier to write if I had done them with Ansible in mind. So that’s how I will write them going forward. I also wanted to implement a DBA Data Warehouse so I added cx_Oracle to the mix and used it to store the admin data in a database. I have been using a local virtual machine for Python development and the database was just a local installation on the VM.
Before moving to production I needed to figure out an easy way to get this stuff installed on the production Linux box. Setting it up on my VM was an adventure in Googling and trial and error, so here's the documentation on how to set it up. You need to define the proxy-related shell variables used by the pip commands first.
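The pip commands below reference several shell variables for the corporate proxy. Here is a minimal sketch of the exports; the values are placeholders of my own, and the real ones are site-specific:

```shell
# Placeholder values only -- substitute your own proxy credentials and host.
export my_userid="jsmith"
export my_password="changeme"
export proxy_server="proxy.example.com"
export port="8080"
```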
wget https://bootstrap.pypa.io/ez_setup.py -O - | python
yum install python-devel
pip install --proxy=https://$my_userid:$my_password@$proxy_server:$port cx_Oracle
pip install --proxy=https://$my_userid:$my_password@$proxy_server:$port ansible
When I first heard about the virtual floppy bug called Venom, I wondered whether all Xen guests were affected. I quickly discovered that paravirtualized x86 guests are NOT affected. I was pretty sure the Linux guests running on our Exalogic were paravirtualized, so I didn't worry about it. Over the weekend I noticed more publicity about Venom and decided I should make sure. I don't have access to Dom0, so I needed to determine the virtualization mode from within the guest OS. After a little research I found that if you are using the PVHVM drivers (xen-blkfront for disk and xen-netfront for network), you are paravirtualized on Xen. I checked lsmod and verified that my initial assumption was correct.
> lsmod | grep -i xen
xen_netfront 16420 0
xen_blkfront 13602 7
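The same check can be scripted for running across many guests. A sketch, assuming lsmod-style text on stdin (the function name is my own):

```shell
# Decide from `lsmod` output whether the Xen frontend drivers
# (xen_blkfront / xen_netfront) are loaded in this guest.
check_xen_pv() {
    if grep -q -E 'xen_(blkfront|netfront)'; then
        echo "Xen frontend drivers loaded"
    else
        echo "no Xen frontend drivers found"
    fi
}

# On a real guest you would run:  lsmod | check_xen_pv
```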
I recently moved a 2 TB database from Oracle 11.1 on Linux to Oracle 11.2 on AIX using impdp with the NETWORK_LINK and PARALLEL options. The data was imported in about two hours, and the indexes took another seven hours. There was no formal downtime, but we kept the users off the system while this was happening.
Here is the command I used on the new server:
nohup impdp system/secret NETWORK_LINK=olddb FULL=y PARALLEL=25 &
Then you can attach to the running job to see which parallel process is importing which table, and increase the number of parallel processes if you want:
impdp system attach
Import> parallel=30
I had created all the datafiles in the new database and created a db link named olddb beforehand. The servers were on the same local area network and the network was the bottleneck.
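The link itself is a one-off SQL statement. A sketch of what I mean, wrapped in a shell function so it can be piped into sqlplus; the TNS alias is a placeholder of my own, and the credentials here just echo the ones used in the impdp command above:

```shell
# Emit the SQL that creates a database link named olddb pointing at the
# old server.  OLDDB_TNS_ALIAS is a placeholder, not a real alias.
emit_dblink_sql() {
    cat <<'SQL'
CREATE DATABASE LINK olddb
  CONNECT TO system IDENTIFIED BY secret
  USING 'OLDDB_TNS_ALIAS';
SQL
}

# On the new server you would run:  emit_dblink_sql | sqlplus system/secret
```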
Today while installing SQLDeveloper on OEL 6.3 I ran into a couple of issues I wasn't expecting. The installation itself went fine using the Oracle rpm file. The first execution of the /opt/sqldeveloper/sqldeveloper.sh script prompted for the location of a JDK. A find turned up /etc/alternatives/java_sdk_openjdk, and that worked fine. The next issue was "Can't connect to X11 window server using ':0.0' as the value of the DISPLAY variable". I tried running xterm to see if it would work, but xterm was not installed on the system and I couldn't find any of the other X11 pieces either. So I ran "yum install xterm". Now xterm works and so does SQLDeveloper.
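A quick pre-check for DISPLAY before launching any X11 application would have surfaced the problem sooner than the cryptic error did. A sketch (the function name is mine):

```shell
# Warn early if no X11 display is available, instead of letting the
# application fail with the "Can't connect to X11 window server" error.
check_display() {
    if [ -n "$DISPLAY" ]; then
        echo "DISPLAY is set to $DISPLAY"
    else
        echo "DISPLAY is not set; no X server available"
    fi
}
```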