I am glad that I had already done the GoldenGate tutorials before I attended this class. The labs were great and fun to do, so the class itself was OK. However, the instructor did not know the material and could not understand the students' questions. It was a very large Live Virtual Class spanning multiple time zones; the instructor was in India, so the class ran until 2:30 AM for her. I would recommend the class but not the instructor.
I recently got a request to load hundreds of database tables in a day and a half using csv files our developers created. I considered using SQL*Loader, but I didn't want to spend the time understanding the table structure and creating a control file for each table. Another alternative was to use SQL Developer to import the csv files. That goes against my nature because it would be a manual, repetitive and error-prone process.
The csv files were fairly small, so plain inserts would work. I looked at writing a Bash script to generate the insert statements, and it became very complex very quickly. PL/SQL was an option too, but I ruled it out because of the requirement to read flat files. I have been using the Python cx_Oracle module recently, so I decided to write a Python script for this task.
Python turned out to be the right choice, and the script was very easy to write. The only tricky part was recognizing the date fields. Those are unquoted like numeric fields but contain literals like JAN, FEB, etc. Python and cx_Oracle handled everything with a minimum of effort on my part.
Create the csv files in a dedicated subdirectory, named like tablename.csv. The first row of each csv file must contain the column names. The script truncates the target table before inserting the csv records. Any double-quoted field is treated as a literal; unquoted fields are numeric, except date fields, which must have the format 01-JAN-16.
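The conversion rules above are simple enough to sketch. This is a minimal illustration of the approach, not the actual script (see the GitHub links for that); the function names and connection string are hypothetical, and the line splitting is naive.

```python
import datetime
import re

DATE_RE = re.compile(r"^\d{2}-[A-Z]{3}-\d{2}$")  # e.g. 01-JAN-16

def convert(field):
    """Apply the loader's typing rules to one raw csv field:
    double-quoted fields are string literals, unquoted DD-MON-YY
    fields are dates, and everything else is numeric."""
    if field.startswith('"') and field.endswith('"'):
        return field[1:-1]
    if DATE_RE.match(field):
        return datetime.datetime.strptime(field, "%d-%b-%y").date()
    return float(field) if "." in field else int(field)

def build_insert(table, columns):
    """Build a bind-variable insert from the csv header row."""
    binds = ", ".join(":%d" % (i + 1) for i in range(len(columns)))
    return "insert into %s (%s) values (%s)" % (table, ", ".join(columns), binds)

def load_csv(path, connect_string):
    """Truncate the target table, then insert every data row.
    The table name comes from the file name (tablename.csv).
    Naive comma split: assumes no embedded commas in fields."""
    import cx_Oracle  # assumed installed, per the post
    table = path.rsplit("/", 1)[-1][:-len(".csv")]
    with open(path) as f:
        header = f.readline().strip().split(",")
        rows = [[convert(x) for x in line.strip().split(",")]
                for line in f if line.strip()]
    conn = cx_Oracle.connect(connect_string)
    cur = conn.cursor()
    cur.execute("truncate table " + table)
    cur.executemany(build_insert(table, header), rows)
    conn.commit()
```

cx_Oracle's executemany does the inserts in one round trip per batch, which is what makes plain inserts fast enough for small files.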
Use these GitHub links for more information: Readme, Source code
This was a four day course delivered online. The Oracle labs were inaccessible from my work laptop and from my work network so I used my personal laptop from home. This turned out to be a good thing because work interruptions were minimal and I could focus on the class.
The instructor Barry Isaacman has many years of experience with Sun hardware and has done many ZFS Appliance installations. So he had lots of tips on how to set up things optimally. He also provided documents outside the course materials to support further learning. I was really interested in the API and Python integration and he was very helpful.
The labs are excellent, and they utilize real ZFS Appliance hardware, so performance is pretty good. There is also a ZFS Appliance simulator here. The hardest part was interpreting the class-specific instructions for student IP address assignments. After getting the initial settings correct, everything worked fine.
I was able to apply my new knowledge as soon as I returned to work this week. We recently installed ZFS appliances for database backups over InfiniBand in two data centers, and the installer set up replication between them. I reviewed the configuration and was pleased to see the best practices from the class implemented in the new installation.
I’m just starting to experiment with the API using Python. Here are some links that are promising:
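My first experiments look something like the sketch below. Everything here is an assumption for illustration: the host name and credentials are hypothetical, and the URL shape (RESTful API under /api on the administrative port 215) is my reading of the appliance docs, not something from the class materials.

```python
def api_url(appliance, path):
    """Build an endpoint under the appliance's RESTful API root.
    Port 215 is an assumption based on the administrative interface."""
    return "https://%s:215/api%s" % (appliance, path)

# Usage (untested sketch; requires the requests module and a reachable appliance):
# import requests
# r = requests.get(api_url("zfssa1", "/storage/v1/pools"),
#                  auth=("admin", "password"), verify=False)
# print(r.json())
```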
I attended the Greater Cincinnati Oracle User’s Group meeting yesterday at the local Oracle office. I was expecting a good presentation because Maria Colgan (@sqlmaria) was presenting.
I was not disappointed: it was an excellent presentation and conversation with Maria, who used to head up the optimizer team and now runs the In-Memory team. She went into deep dives on many of the technical innovations they have used for performance and reliability. I took lots of notes.
All of the code is embedded in the Oracle Database software (12.1), so it's just a parameter change to turn it on. The only other step is to alter the table INMEMORY. Some of the In-Memory work can be offloaded to Exadata, but you have to pay for the license when you turn it on.
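As a sketch of those two steps (the table name and column store size below are illustrative, and the instance must be restarted for inmemory_size to take effect):

```python
def inmemory_statements(table, size="2G"):
    """The two statements described above: size the in-memory column
    store (instance restart required), then mark the table so it is
    populated into it. Table name and size are placeholders."""
    return ["alter system set inmemory_size=%s scope=spfile" % size,
            "alter table %s inmemory" % table]
```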
In Oracle 12.2, the whole thing can run in Exadata flash instead of in the database server's DRAM. Oracle 12.2 will be out "sometime this year".
I have been automating and centralizing DBA and system administration processes using Python recently. One of the problems I solved was how to create Bash and SQL scripts to run on the remote database servers, copy them to those servers, and then execute them as another Linux account. After I got my Python scripts working perfectly, I discovered that I had reinvented Ansible 🙂
My scripts would have been much less complex and easier to write if I had done them with Ansible in mind. So that’s how I will write them going forward. I also wanted to implement a DBA Data Warehouse so I added cx_Oracle to the mix and used it to store the admin data in a database. I have been using a local virtual machine for Python development and the database was just a local installation on the VM.
Before moving to production I needed to figure out an easy way to get this stuff installed on the production Linux box. Setting it up on my VM was an adventure in Googling and trial and error. So here’s the documentation on how to set it up. You need to define the shell variables first.
wget https://bootstrap.pypa.io/ez_setup.py -O - | python
yum install python-devel
pip install --proxy=https://$my_userid:$my_password@$proxy_server:$port cx_Oracle
pip install --proxy=https://$my_userid:$my_password@$proxy_server:$port ansible
When I first heard about the virtual floppy bug called Venom, I wondered if all Xen guests were affected. I quickly discovered that paravirtualized x86 guests are NOT affected. I was pretty sure the Linux guests running on our Exalogic were paravirtualized, so I didn't worry about it. Over the weekend I noticed more publicity about Venom and decided I should make sure. I don't have access to Dom0, so I needed to determine the virtualization mode from within the guest OS. After a little research, I found that if you are using the PVHVM drivers (xen-blkfront for disk and xen-netfront for network), you are paravirtualized on Xen. I checked lsmod and verified that my initial assumption was correct.
> lsmod | grep -i xen
xen_netfront 16420 0
xen_blkfront 13602 7
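The check is easy to script. Here is a minimal sketch that parses /proc/modules (the file lsmod reads); the function name is my own:

```python
def xen_frontend_drivers(modules_text):
    """Return the Xen front-end driver names found in /proc/modules
    content; a non-empty result means the PV drivers are loaded."""
    names = [line.split()[0] for line in modules_text.splitlines() if line.strip()]
    return [n for n in names if n in ("xen_netfront", "xen_blkfront")]

# Usage on a live guest:
# print(xen_frontend_drivers(open("/proc/modules").read()))
```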
Our Supply Chain application creates a lot of tablespaces when it is installed. Then it creates tables in those tablespaces. Over the course of time, we have received updates from the software vendor that create additional tables. Some of these table creation scripts did not specify a tablespace, so the new tables were created in the default tablespace for the schema owner. I wrote this package to move the tables (and indexes) to the correct tablespaces.
I did not consider "ALTER TABLE ... MOVE" because it locks the table while it runs and leaves the indexes unusable until they are rebuilt, and these databases are in continuous use by the development and testing teams. I happened to see an email on the oracle-l mailing list where Andrew Kerber was working on a similar issue. He posted his script, and that is where I got the inspiration to write this package. See: https://www.freelists.org/post/oracle-l/Dbms-metadata-experts,12
I had used dbms_redefinition in the past to redefine tables to use partitions. I had done all of that manually and kept the log files, so I was able to figure out the redefinition function quickly. The part that took more time was the dbms_metadata call to generate the interim table DDL.
I wanted this package to be easy to read and easy to understand. I used my favorite programming structure, a series of nested if statements that test the outcome of functions. I used error trapping so that the errors could be handled without stopping the procedure. I’m currently running this in our non-prod databases without any issues. Of course you should test it before running it in production.
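For readers who have not used dbms_redefinition, the call sequence the package wraps looks roughly like this. The owner, table, and interim names are placeholders, this sketch generates the anonymous blocks rather than executing them, and the real package adds the error trapping described above around each step.

```python
def redef_steps(owner, table, interim):
    """The basic DBMS_REDEFINITION sequence: check the table is
    redefinable, start, copy dependents (indexes, constraints,
    triggers), then finish. Illustrative only; names are placeholders."""
    args2 = "'%s', '%s'" % (owner, table)
    args3 = "'%s', '%s', '%s'" % (owner, table, interim)
    return [
        "begin dbms_redefinition.can_redef_table(%s); end;" % args2,
        "begin dbms_redefinition.start_redef_table(%s); end;" % args3,
        ("declare n pls_integer; begin "
         "dbms_redefinition.copy_table_dependents(%s, num_errors => n); "
         "end;" % args3),
        "begin dbms_redefinition.finish_redef_table(%s); end;" % args3,
    ]
```

Each string could be executed with cx_Oracle or in SQL*Plus as a privileged user; the package itself does this in PL/SQL with a nested-if structure checking the outcome of each step.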
Compile and run as SYS.
See the example in run_move_tables.sql
Source code: https://github.com/steveharville/move_tables
Blog post: https://steveharville.wordpress.com/2015/04/08/plsql-package-to-move-tables-to-new-tablespace/
Edit: Andrew has written a post about a similar procedure: