loopback iSCSI on Red Hat EL6 – For study only

Well, I needed to test something that uses Oracle ASM but had no test environment available, no possibility of using virtualization and, even worse, no autonomy to partition my work laptop. I know this is ugly and it is intended for STUDY ONLY…


If you read past the above disclaimer, let’s get this thing to work.


Install dependencies (your machine may need more or fewer packages)

rpm -Uvh perl-Config-General-2.52-1.el6.noarch.rpm
rpm -Uvh sg3_utils-libs-1.28-12.el6.x86_64.rpm
rpm -Uvh sg3_utils-1.28-12.el6.x86_64.rpm
rpm -Uvh scsi-target-utils-1.0.24-18.el6.x86_64.rpm
yum -y install iscsi-initiator-utils.x86_64

Enable tgtd and iscsid on system boot

chkconfig tgtd on
chkconfig iscsid on

Create the directory to host the luns

mkdir -p /disks


Create the luns (in the example below 10 disks of 2 GB each)

for i in 1 2 3 4 5 6 7 8 9 10
do
 dd if=/dev/zero of=/disks/disk$i.dat bs=1M count=2048
done
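Writing ten 2 GB files of zeros can take a while on an old laptop. As an alternative sketch (my own addition, not from the original setup), dd can allocate sparse backing files almost instantly by seeking to the target size instead of writing data; tgtd is generally happy to serve sparse files:

```shell
#!/bin/sh
# Create sparse backing files: seek to the desired size and write nothing,
# so each file reports its full size but occupies almost no real disk space.
# A temp dir and 16 MB are used here so the sketch is safe to run anywhere;
# for the real setup use /disks and 2 GB files (bs=1M count=0 seek=2048).
DIR=$(mktemp -d)
for i in 1 2 3 4 5 6 7 8 9 10
do
    dd if=/dev/zero of="$DIR/disk$i.dat" bs=1M count=0 seek=16 2>/dev/null
done
# -s shows allocated blocks: near zero, despite the 16 MB apparent size
ls -s "$DIR"
```

Keep in mind that sparse files will grow as ASM writes to them, so make sure the filesystem actually has room for the full size.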

Amend /etc/rc.d/rc.local and add the commands below to be executed on boot. These will simply add the target iqn.com.example.san, assign the luns to it and grant access to everyone… because I am running this on a very old laptop for testing purposes, I don’t care about security.

tgtadm --lld iscsi --op new --mode target --tid 1 -T iqn.com.example.san

for i in 1 2 3 4 5 6 7 8 9 10
do
 tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun $i -b /disks/disk$i.dat
done

tgtadm --lld iscsi --op bind --mode target --tid 1 -I ALL

Reboot and check everything

init 6
tgtadm --lld iscsi --op show --mode target


Connect to the target:

[root@rhel6 ~]# iscsiadm -m discovery -t sendtargets -p 127.0.0.1 -l
127.0.0.1:3260,1 iqn.com.example.san
Logging in to [iface: default, target: iqn.com.example.san, portal: 127.0.0.1,3260] (multiple)
Login to [iface: default, target: iqn.com.example.san, portal: 127.0.0.1,3260] successful.
[root@rhel6 ~]# fdisk -l | grep Disk
Disk /dev/sda: 320.1 GB, 320072933376 bytes
Disk identifier: 0x000b3048
Disk /dev/mapper/luks-1b0981f6-d6d5-466d-b1d5-ba49fef8cda1: 319.0 GB, 319020859392 bytes
Disk identifier: 0x00000000
Disk /dev/mapper/vg_oc4572715320-lv_root: 310.4 GB, 310412050432 bytes
Disk identifier: 0x00000000
Disk /dev/mapper/vg_oc4572715320-lv_swap: 8589 MB, 8589934592 bytes
Disk identifier: 0x00000000
Disk /dev/sdb: 2147 MB, 2147483648 bytes
Disk identifier: 0x00000000
Disk /dev/sdc: 2147 MB, 2147483648 bytes
Disk identifier: 0x00000000
Disk /dev/sdd: 2147 MB, 2147483648 bytes
Disk identifier: 0x00000000
Disk /dev/sde: 2147 MB, 2147483648 bytes
Disk identifier: 0x00000000
Disk /dev/sdf: 2147 MB, 2147483648 bytes
Disk identifier: 0x00000000
Disk /dev/sdg: 2147 MB, 2147483648 bytes
Disk identifier: 0x00000000
Disk /dev/sdi: 2147 MB, 2147483648 bytes
Disk identifier: 0x00000000
Disk /dev/sdj: 2147 MB, 2147483648 bytes
Disk identifier: 0x00000000
Disk /dev/sdk: 2147 MB, 2147483648 bytes
Disk identifier: 0x00000000
Disk /dev/sdh: 2147 MB, 2147483648 bytes
Disk identifier: 0x00000000

With that you should be good to go ahead and set up ASM.
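Before ASM can open the new /dev/sd* devices, the oracle (or grid) user needs ownership of them. One common approach is a udev rule; the sketch below is my own addition, assuming an oracle:dba owner and keying on the "IET" SCSI vendor string that tgt reports by default (you can verify yours with sg_inq from the sg3_utils package installed earlier):

```shell
#!/bin/sh
# Sketch: generate a udev rule granting the oracle user access to the iSCSI
# LUNs. The file is written to the current directory for review; copy it to
# /etc/udev/rules.d/ and then run:
#   udevadm control --reload-rules && udevadm trigger
cat > 99-iscsi-asm.rules <<'EOF'
KERNEL=="sd?", SUBSYSTEM=="block", ENV{ID_VENDOR}=="IET", OWNER="oracle", GROUP="dba", MODE="0660"
EOF
cat 99-iscsi-asm.rules
```

The rule name, match key and owner/group are illustrative choices; adjust them to your environment (for instance, grid:asmadmin on a role-separated install).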

Posted in Storage

ODC Appreciation Day! Some cool free stuff #ThanksODC

Reading Tim Hall’s blog at Oracle-Base, I saw this initiative to post, on a particular day (which is today), about things we enjoy while working with Oracle technology, under the hashtag #ThanksODC (previously #ThanksOTN). It is awesome to just talk about a few things we enjoy instead of technical posts for a change, so here are some cool and free things that I use and enjoy a lot, be it for study or for a production deployment.

Virtualbox CLI – It is no news that I use VirtualBox a lot, both for study and for work, and I really enjoy what the command line allows me to do. I can script tasks I perform often to reproduce them faster, I can share customized virtual machines with plenty of custom settings, and this lets me keep standards in a very easy way.

RMAN Active duplicate – I am too lazy to keep recreating standbys for a particular setup I have, so this functionality is a real life saver. I just keep my scripts ready and, whenever the time comes, I run them and a few minutes later I am all set.

Silent Installation – Scripting these repetitive steps is awesome. In this changing world, where we keep hearing about DevOps, automation, continuous delivery and so on, this is a big deal, and silent installation with response files really helps to get things done faster and consistently.

Data Guard Broker – Of course, if you have Data Guard, why not manage it with the broker? It comes at no extra cost with Data Guard, it keeps your environment consistent and it makes routine operations a lot easier.



Posted in ORACLE Database

Introduction to Disaster Recovery to the cloud with Oracle Secure Backup Cloud Module – AWS S3

Cloud computing has been around for years now and companies still struggle to make the move to the cloud… and they are not completely wrong: every major change must be well designed and tested. Setting up a roadmap is, in my opinion, a good approach, and why not start with Disaster Recovery?

Many companies rely on off-site backup storage as a cost-effective disaster recovery solution. That means they have to perform backups to tape, transport those tapes from the datacenter to the off-site storage provider and, when things go bad, request the tapes back to the disaster recovery site. So you have costs with tapes, logistics and the off-site storage vendor; the RPO (Recovery Point Objective) is big and the RTO (Recovery Time Objective) is hugely dependent on logistics, transit and so on. In this post I will demonstrate how to reduce the RTO and RPO while (possibly) also reducing operational costs $$$.

Offsite Storage - Traditional vs Cloud

Setup AWS credentials

Go to the IAM service, select the “Users” tab and click on “Add user” to create the user that will be used for Oracle backups. You should see something like below; this is the first page of the wizard, where you enter the desired user name and select the access type. Since I will use this account only to back up Oracle to AWS, I will not allow it to access the management console, therefore I picked only “Programmatic Access”.

Enter details AWS User Name

Now you will have to pick a group. Since I didn’t have any group providing just S3 access, I will create one for the cloud backups and grant the AmazonS3FullAccess policy to it.

Create the group

Once the group is created, I will select it and move on to the review tab.

Group selection

The review tab will only show a short summary of the operation; if you are happy with it, click on “Create User”.


Now you can download the Access key ID and Secret access key. That’s all.

Complete S3 Bucket creation

Create an AWS S3 Bucket

To create the bucket, you can use your root user or another user with privileges. If the user created is part of the same AWS Account, it will have privileges to read/write to the bucket. Go to AWS S3 service and click on “Create bucket”.


Give your bucket a name, if you want to comply with Oracle standards it should start with “oracle-data”. In my example I will name it “oracle-data-osbcm-demo” and place it on the region “US East (N. Virginia)”. Click Next.

Create bucket 1

Here I will leave versioning and logging disabled and simply add a “description” tag to make the bucket easy to identify. Click Next.

Create bucket 2 - Properties

Now, the permissions: if your credentials are under the same AWS account, you can simply hit Next… I got a little confused here and tried to add the user name in the account field; no problem with that, it just gave me a warning in the end and the access worked just fine.

Create bucket 3 - Permissions

The review page will display a summary of the operation. Just click on “Create bucket”.

Create bucket 4 - Review

Download the software

To download the OSB Cloud Module for AWS, follow the link below:



Unzip the contents of osbws_installer.zip. Important: You will need Java 1.7 or newer to perform the installation.

[oracle@mustang ~]$ unzip osbws_installer.zip
 Archive: osbws_installer.zip
 inflating: osbws_install.jar
 inflating: osbws_readme.txt

If you don’t have the osbws_wallet directory yet, create it like in the example below:

[oracle@mustang ~]$ mkdir -p $ORACLE_HOME/dbs/osbws_wallet

Install the module using your AWS credentials. For security reasons, since the AWS credentials are supplied on the command line, Oracle recommends creating a script file that only the oracle user can access and running the installation from it.

[oracle@mustang ~]$ touch osbws.sh
[oracle@mustang ~]$ chmod 700 osbws.sh

The contents of osbws.sh are as follows (replace the credential values with your own AWS access key ID and secret key):

[oracle@mustang ~]$ cat osbws.sh
java -jar osbws_install.jar \
-AWSID <your-access-key-id> \
-AWSKey kwywnVVRIi5HVwfk9CQNRPXwYn3kHpIdlY+myqMI \
-walletDir $ORACLE_HOME/dbs/osbws_wallet \
-libDir $ORACLE_HOME/lib/

Execute the script and the installation will proceed.

[oracle@mustang ~]$ ./osbws.sh 
Oracle Secure Backup Web Service Install Tool, build 2017-06-01
AWS credentials are valid.
Oracle Secure Backup Web Service wallet created in directory /u01/app/oracle/product/
Oracle Secure Backup Web Service initialization file /u01/app/oracle/product/ created.
Downloading Oracle Secure Backup Web Service Software Library from file osbws_linux64.zip.
Download complete.

At this point you are good to go. However, some additional configuration might be needed to satisfy your needs; in my example I am not interested in setting S3 as the default backup destination, since it will be used only to replace my off-site storage.
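For reference, if you did want S3 to act as the default destination, the channel could be configured persistently instead of allocated per run. This is a hedged sketch based on the standard RMAN SBT configuration pattern, not something from my setup; the SBT_LIBRARY and OSB_WS_PFILE paths are placeholders for your own libosbws.so and osbws*.ora locations:

```
RMAN> CONFIGURE CHANNEL DEVICE TYPE SBT
      PARMS 'SBT_LIBRARY=<path-to>/libosbws.so,
             ENV=(OSB_WS_PFILE=<path-to>/osbwsSTNT101.ora)';
RMAN> CONFIGURE DEFAULT DEVICE TYPE TO SBT;
```

With that in place, a plain BACKUP DATABASE would go straight to S3, which is exactly what I want to avoid in this scenario.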

Additional configuration

I have added my S3 bucket in the parameter file created for the osbcm as you can see below:

[oracle@mustang ~]$ cat $ORACLE_HOME/dbs/osbwsSTNT101.ora
OSB_WS_WALLET='location=file:/u01/app/oracle/product/ CREDENTIAL_ALIAS=bruno580_aws'

Tip: You can see the accepted parameters for this lib by running “strings” against it:

[oracle@mustang ~]$ strings $ORACLE_HOME/lib/libosbws.so | grep OSB_

Testing the offsite backup to AWS S3

Perform the backup by allocating the channel as SBT type, just as you would for a tape backup and provide the lib details.

RMAN> run {
allocate channel dev1 type sbt
parms='SBT_LIBRARY=/u01/app/oracle/product/';
backup recovery area;
release channel dev1;
}

allocated channel: dev1
channel dev1: SID=99 device type=SBT_TAPE
channel dev1: Oracle Secure Backup Web Services Library VER=

Starting backup at 01-OCT-17 20:04:41
specification does not match any datafile copy in the repository
channel dev1: starting archived log backup set
channel dev1: specifying archived log(s) in backup set
input archived log thread=1 sequence=44 RECID=65 STAMP=956255533
input archived log thread=1 sequence=45 RECID=66 STAMP=956255753
channel dev1: starting piece 1 at 01-OCT-17 20:04:41
channel dev1: finished piece 1 at 01-OCT-17 20:16:47
piece handle=07sfuomp_1_1 tag=TAG20171001T200441 comment=API Version 2.0,MMS Version
channel dev1: backup set complete, elapsed time: 00:12:06
channel dev1: input backup set: count=6, stamp=956255755, piece=1
channel dev1: starting piece 1 at 01-OCT-17 20:16:47
channel dev1: backup piece /u01/app/oracle/fast_recovery_area/STNT101/STNT101/autobackup/2017_10_01/o1_mf_s_956255755_dx2qxwkl_.bkp
piece handle=c-3200260673-20171001-00 comment=API Version 2.0,MMS Version
channel dev1: finished piece 1 at 01-OCT-17 20:18:52
channel dev1: backup piece complete, elapsed time: 00:02:05
Finished backup at 01-OCT-17 20:18:52

Starting Control File and SPFILE Autobackup at 01-OCT-17 20:18:52
piece handle=c-3200260673-20171001-01 comment=API Version 2.0,MMS Version
Finished Control File and SPFILE Autobackup at 01-OCT-17 20:20:57

released channel: dev1

Check the backups

RMAN> list backupset device type SBT_TAPE;

List of Backup Sets

BS Key Type LV Size Device Type Elapsed Time Completion Time 
------- ---- -- ---------- ----------- ------------ ------------------
6 Full 10.22M SBT_TAPE 00:00:01 01-OCT-17 18:35:56
 BP Key: 8 Status: AVAILABLE Compressed: NO Tag: TAG20171001T183555
 Handle: c-3200260673-20171001-00 Media: s3.amazonaws.com/oracle-data-osbcm-demo
 SPFILE Included: Modification time: 01-OCT-17 17:56:57
 SPFILE db_unique_name: STNT101
 Control File Included: Ckp SCN: 10507534 Ckp time: 01-OCT-17 18:35:55

BS Key Size Device Type Elapsed Time Completion Time 
------- ---------- ----------- ------------ ------------------
7 63.00M SBT_TAPE 00:00:05 01-OCT-17 20:04:46
 BP Key: 7 Status: AVAILABLE Compressed: NO Tag: TAG20171001T200441
 Handle: 07sfuomp_1_1 Media: s3.amazonaws.com/oracle-data-osbcm-demo

List of Archived Logs in backup set 7
 Thrd Seq Low SCN Low Time Next SCN Next Time
 ---- ------- ---------- ------------------ ---------- ---------
 1 44 10500175 30-SEP-17 23:10:27 10507356 01-OCT-17 18:32:11
 1 45 10507356 01-OCT-17 18:32:11 10507525 01-OCT-17 18:35:53

BS Key Type LV Size Device Type Elapsed Time Completion Time 
------- ---- -- ---------- ----------- ------------ ------------------
8 Full 10.25M SBT_TAPE 00:00:06 01-OCT-17 20:18:58
 BP Key: 9 Status: AVAILABLE Compressed: NO Tag: TAG20171001T201852
 Handle: c-3200260673-20171001-01 Media: s3.amazonaws.com/oracle-data-osbcm-demo
 SPFILE Included: Modification time: 01-OCT-17 17:56:57
 SPFILE db_unique_name: STNT101
 Control File Included: Ckp SCN: 10518161 Ckp time: 01-OCT-17 20:18:52

And now, just looking at S3, we can actually see the files there from the web interface.

S3 Bucket with Oracle backups


With cloud storage for disaster recovery, you can save on vendor contracts, logistics, tapes and possibly tape libraries. Of course you will pay a monthly fee for the cloud storage, but that is usually a lot cheaper. So, if you are considering cloud as part of your business, this is one way to get started and benefit not only from the possible cost savings but also from removing human interaction during restore and recovery. If you back up to disk and then to tape, you can back up the recovery area as I did in my example and you won’t have to go through the headache of asking the storage administrator to restore files 1 through 10, do some recovery, ask again for files 11 through 20… you get the idea, especially if you have ever depended on a storage administrator to assist you with restoring archived logs.

Downsides: you will have to ensure you have a dedicated link with good throughput. You will also need to encrypt your backups before moving them into S3, but I will just assume you already encrypt your backups before handing them over to logistics for transport to the off-site storage, right? 😉






Posted in Cloud Computing, ORACLE Database

ORA-29783: GPnP attribute SET failed with error [CLSGPNP_NOT_FOUND]

Today I will just leave a tip here on an issue I came across during the installation of Grid Infrastructure 12c Release 1 on Oracle Linux 7.4. The installation went fine until the execution of the root scripts, where it failed with the error below:

Creation of ASM spfile in disk group failed. Following error occurred: 
ORA-29783: GPnP attribute SET failed with error [CLSGPNP_NOT_FOUND]


In my particular case, the firewall service was up, and Oracle recommends disabling firewalld (the iptables replacement on OL7) during the installation.

[root@oldbs1 ~]# systemctl status firewalld
● firewalld.service - firewalld - dynamic firewall daemon
 Loaded: loaded (/usr/lib/systemd/system/firewalld.service; enabled; vendor preset: enabled)
 Active: active (running) since Sat 2017-09-16 21:26:11 -03; 1h 20min ago
 Docs: man:firewalld(1)
 Main PID: 664 (firewalld)
 CGroup: /system.slice/firewalld.service
 └─664 /usr/bin/python -Es /usr/sbin/firewalld --nofork --nopid


I just stopped the firewall and disabled it:

[root@oldbs1 ~]# systemctl stop firewalld
[root@oldbs1 ~]# systemctl disable firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.

I found the Metalink note (Doc ID 2180883.1), which also asks to check and disable SELinux.
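For the SELinux part, getenforce shows the current mode and "setenforce 0" relaxes the running system until the next reboot; making it persistent means editing /etc/selinux/config. A small sketch of the persistent change (my own, not from the note), demonstrated on a local copy so it is safe to try:

```shell
#!/bin/sh
# Sketch: persistently disable SELinux by rewriting the SELINUX= line.
# CFG defaults to a local sample file; point it at /etc/selinux/config
# (as root) to do it for real. The change takes effect after a reboot,
# so also run "setenforce 0" if you need the running system relaxed now.
CFG=${CFG:-./selinux-config.sample}
[ -f "$CFG" ] || printf 'SELINUXTYPE=targeted\nSELINUX=enforcing\n' > "$CFG"
sed -i 's/^SELINUX=.*/SELINUX=disabled/' "$CFG"
grep '^SELINUX=' "$CFG"
```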

Keep in mind that depending on the argument between brackets, it may be a different issue.

Posted in ORACLE Database

Port forward to NAT networks from different hosts on Virtualbox

Here is a quick tutorial on how to set up and use port forwarding with VirtualBox. Before anything, why use NAT (Network Address Translation) at all? Well, in my case, I like to keep things separate and recreate scenarios as close to a production system as possible. Using NAT with VirtualBox lets me have DHCP up and running in no time, and if I place a webserver in my public network and my database behind a NAT, I have just added an additional layer of security with almost no effort; more security is always good.

Back to the topic: in this example there are 3 virtual machines (RAC on OL 7) running on my host machine (hawkeye), which has a static IP. I will forward connections coming to ports 3122, 3222 and 3322 to the 3 virtual machines behind the NAT, so the setup will look like below:

NAT Diagram

The goal here is to be able to connect from my laptop directly into the virtual machines (RAC nodes). So I will first connect to the physical machine where VirtualBox is running and check the network setup. (If you want to know how to create such a setup, check this post: Virtualbox: How to setup NAT with DHCP.)

Check the existing NAT Networks

[hypervisor@hawkeye ~]$ vboxmanage natnetwork list
NAT Networks:

Name: NAT-
 IPv6: No
 Enabled: Yes

Setup port forwarding

Here is where we actually add the rules, using vboxmanage natnetwork modify. We specify the netname obtained from the previous command and set up the TCPv4 rule in the format “Rule_Name:protocol:[host_ip]:host_port:[vm_ip]:vm_port”.

[hypervisor@hawkeye ~]$ vboxmanage natnetwork modify --netname NAT- \
--port-forward-4 "SSH_oldbs1:tcp:[]:3122:[]:22"

[hypervisor@hawkeye ~]$ vboxmanage natnetwork modify --netname NAT- \
--port-forward-4 "SSH_oldbs2:tcp:[]:3222:[]:22"

[hypervisor@hawkeye ~]$ vboxmanage natnetwork modify --netname NAT- \
--port-forward-4 "SSH_oldbs3:tcp:[]:3322:[]:22"
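The rule string is easy to get wrong by hand; a tiny helper that assembles it from its six fields (my own sketch, not part of the original commands):

```shell
#!/bin/sh
# Assemble a VirtualBox NAT port-forwarding rule string:
#   Rule_Name:protocol:[host_ip]:host_port:[vm_ip]:vm_port
# An empty host_ip means "listen on all host interfaces".
make_pf_rule() {
    printf '%s:%s:[%s]:%s:[%s]:%s\n' "$1" "$2" "$3" "$4" "$5" "$6"
}
# Example with placeholder addresses of my choosing:
make_pf_rule SSH_oldbs1 tcp 192.168.0.10 3122 10.0.2.4 22
# → SSH_oldbs1:tcp:[192.168.0.10]:3122:[10.0.2.4]:22
```

The result plugs straight into the command above, e.g. vboxmanage natnetwork modify --netname <net> --port-forward-4 "$(make_pf_rule …)".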

Test the setup

From my laptop (mustang) I will connect directly to oldbs1 using the port forwarding rule SSH_oldbs1. I will hit hawkeye on port 3122 and be redirected to port 22 on oldbs1.

[bcarvalho@mustang ~]$ ssh oracle@ -p 3122
The authenticity of host '[]:3122 ([]:3122)' can't be established.
ECDSA key fingerprint is SHA256:rN8NXWQjNbzjtOMVyZr1K2+XUNHd2kbofq/O6FiDHk4.
ECDSA key fingerprint is MD5:6c:a5:c3:6e:cd:37:68:68:c2:b0:41:1e:74:eb:f3:32.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '[]:3122' (ECDSA) to the list of known hosts.
oracle@'s password: 
Last login: Sat Sep 16 10:04:48 2017 from
[oracle@oldbs1 ~]$ uname -a
Linux oldbs1.siselo.priv.net 4.1.12-103.3.8.el7uek.x86_64 #2 SMP Mon Aug 21 17:27:54 PDT 2017 x86_64 x86_64 x86_64 GNU/Linux
[oracle@oldbs1 ~]$ ip addr show | grep inet
 inet scope host lo
 inet6 ::1/128 scope host 
 inet brd scope global enp0s3
 inet6 fe80::a00:27ff:fe28:d0df/64 scope link 
 inet brd scope global enp0s8
 inet6 fe80::a00:27ff:fefe:5840/64 scope link 
 inet brd scope global virbr0

Check the rules from GUI

You can actually create the rules from the GUI (Graphical User Interface), but where is the fun in that? If you want to script things you will need the CLI (Command Line Interface). Click on File -> Preferences, select Network, select the NAT network and click the edit button; a new window will appear, then just click on “Port Forwarding“. You should see something like below:

Port Forwarding Rules

Virtualbox NAT Port Forwarding Rules table example.

Troubleshoot connectivity

You may run into issues depending on your current setup. Most common issues are:

Unsupported ports

Currently (September 2017), VirtualBox does not support binding ports lower than 1024, so you can’t use port forwarding with values lower than that (believe me, I tried!).


Firewall

You may need to add a firewall rule to allow the connection through. In my case, I am using firewalld, so it goes something like:

[hypervisor@hawkeye ~]$ sudo firewallctl new -p service -n ssh-oldbs1

[hypervisor@hawkeye ~]$ sudo firewallctl service "ssh-oldbs1" --permanent add port "3122/tcp"

[hypervisor@hawkeye ~]$ sudo firewallctl reload


SELinux

If SELinux is enabled on the host, you will probably need to add the port to the allowed list:

[hypervisor@hawkeye ~]$ sudo semanage port -a -t ssh_port_t -p tcp 3122

Hope that helps =)


Posted in ORACLE Database

GUOB Techday / LAD OTN Tour 2017

Last Saturday (Aug 5th), the 8th edition of GUOB Techday was held in São Paulo. This is one of the most important events for DBAs and Oracle professionals; it is part of the Latin America Oracle Technology Network Tour, which brings top professionals of different Oracle technologies to speak and exchange ideas with the Oracle community.

I will speak about the event in general and my experience throughout the day.

Arrival / Check-in

I arrived around 8:15 AM at Uninove Vergueiro, the university where the event was held. The organization was magnificent and I got my credentials ready in no time. I then went to the coffee area and immediately met a lot of colleagues, friends and speakers. The exchange of ideas and chatting was great while enjoying lots of awesome food and beverages. The networking coffees during the day were all pretty similar to this, so I will not cover each interval between sessions; they were all awesome.


At 8:30 I got a seat as the opening started. Eduardo Hahn thanked Uninove for providing the infrastructure for the event; this was the first edition held at a university instead of a hotel, due to the ever-growing number of participants and concurrent sessions. After a brief overview of the past and future of GUOB, Pablo Cicarello took the stage to talk about the Oracle ACE Program and encourage Oracle professionals to engage and participate more and more in the community. Eduardo Hahn then took the stage back for a few moments to say a few more words and call the first speaker of the day: Mike Dietrich (Master Product Manager, Database Upgrades & Migrations at Oracle).


Upgrade to 12.2 – Live and Uncensored – Mike Dietrich

This was the first session of the day, and everybody attended it as it was held in the main room on the first floor of the university. Mike introduced us to a few Oracle upgrade success stories, some challenging scenarios such as moving from 8i to 12c, and performed 2 live upgrades, one from the command line and one with DBUA (the graphical interface). I am biased when it comes to Mike’s presentations, since I watch them whenever possible, but I’ll try to summarize a few points I believe to be of great help: do not wait too long to upgrade, you will just trap yourself and make the whole process a lot harder and more painful; use the command line, which has awesome features like parallelism and a resume option; and finally, download the virtual machine from his website to practice and test the upgrades: https://mikedietrichde.com/hands-on-lab/

Ensure Performance Stability when upgrading Oracle Database – Mike Dietrich

The second session I attended was also Mike’s; the sessions had now moved up to the 8th floor of the university. Here Mike covered some key factors to ensure a successful upgrade. It is not rare that an upgrade ends with users complaining about how their systems were faster before, so be prepared: take a baseline snapshot of your workload data before the upgrade, patch your environment, take care with optimizer parameter settings and, most important: TEST! TEST! And… TEST! A cool tip about testing tools: since Enterprise Edition doesn’t include the SPA (SQL Performance Analyzer) license, it is possible to set it up in the Oracle Cloud, where the license is included. You can capture your SQL into SQL Tuning Sets, run them against SQL Performance Analyzer in the cloud, anticipate issues and fix them, shut down the cloud database (EE High Performance or Extreme Performance) and move on with your on-premises upgrade.

Database Security with Transparent Data Encryption – Adriano Bonacin

The following session, presented by Adriano Bonacin (Pagseguro UOL), went from the concepts of cryptography to the details of symmetric and asymmetric encryption, hashing algorithms, and what a salt is and what it is used for. Moving on to the practical side of TDE, he demonstrated the techniques and encryption types, comparing column encryption with tablespace encryption and showing how the different types of encryption might affect storage utilization. He also presented the procedures to implement and administer TDE, including the creation and maintenance of the wallet, how to change its keys, and how to set it up to auto-open so the encrypted data remains accessible after a database bounce.

Maximizing Oracle Cloud Buffer Cache Throughput – Craig Shallahamer

Craig Shallahamer from OraPub presented this very interactive session on how to optimize buffer cache throughput. Even though the title said “cloud”, the tips and techniques he presented can be applied to any Oracle database, both on-premises and in the cloud. He explained what goes on inside the buffer cache and in memory when we see the “free buffer” wait event and how that affects performance. A lot was covered, from when the database needs to do a physical I/O to how Oracle keeps or evicts blocks from memory using the MRU and LRU lists.

Getting the most out of Oracle Grid Infrastructure – Franky Weber

This session, presented by my friend Franky Weber (Pagseguro UOL), is really a thunderstorm of information about Grid Infrastructure, with great emphasis on the new features of Oracle 12c. Franky introduced us to good practices for maintaining ASM with ASM Filter Driver, diskgroup setup and a ton of information on Flex ASM, including new features such as ASM File Groups and Quota Groups. Practical examples are available on his blog at http://loredata.com.br/blog/, including the setup and configuration of ASMFD, diskgroups with Flex redundancy and much more. During the session he also presented a few parameters worth changing from their default values to ensure better response time on node failures in a RAC configuration, and explained what the GIMR is and how to move it to a different diskgroup. One of my favorite things in his presentation was the “explain work for” feature for ASM operations.

How to diagnose Random Oracle Cloud Performance Incidents using ASH – Craig Shallahamer

And we finally arrive at the last session of GUOB Techday 2017. Craig Shallahamer took over the main stage to demonstrate how we can use ASH to drill down into active session history and find the needle in a haystack. He used the bloodhound toolkit to find a particular case of a deadlock impacting a batch processing application, and took it a step further by dynamically generating a script to be consumed by R to produce a visual representation of that deadlock.

Closing and conclusion

A couple of lucky fellows won tablets in a raffle held by Eduardo Hahn with the speakers, after which the event finished; it can be considered a huge success. As always, it was great to be in the presence of such great professionals and have the opportunity to talk to them and learn from them.

I hope to see you there next year!

Posted in GUOB

Batch Script – Reading parameter file into variables

Hi folks, a quick tip on how to easily read a parameter file from a batch script. I had this idea to make life easier when troubleshooting connection issues from client to server, where I had to keep changing service names, server names, users, passwords and so on.

In the example I will have the parameter file (parameters.ini) and the script (testConn.bat)


# Parameter file for testConn.bat
# Adjust the parameters according to your
# needs.
# Important: Do not leave spaces after "="
SERVER         =example.localdomain
PORT           =1521


FOR /F "tokens=* USEBACKQ" %%F IN (`FINDSTR /C:"USERNAME" "%CD%\parameters.ini"`) DO (SET USERNAME=%%F)
FOR /F "tokens=* USEBACKQ" %%F IN (`FINDSTR /C:"PASSWORD" "%CD%\parameters.ini"`) DO (SET PASSWORD=%%F)
FOR /F "tokens=* USEBACKQ" %%F IN (`FINDSTR /C:"SERVER" "%CD%\parameters.ini"`) DO (SET SERVER=%%F)
FOR /F "tokens=* USEBACKQ" %%F IN (`FINDSTR /C:"PORT" "%CD%\parameters.ini"`) DO (SET PORT=%%F)
FOR /F "tokens=* USEBACKQ" %%F IN (`FINDSTR /C:"SERVICE_NAME" "%CD%\parameters.ini"`) DO (SET SERVICE_NAME=%%F)
FOR /F "tokens=* USEBACKQ" %%F IN (`FINDSTR /C:"TNSALIAS" "%CD%\parameters.ini"`) DO (SET TNSALIAS=%%F)

REM Each variable now holds the whole matching line; strip the first 16
REM characters (the name padded out to the "=" sign) to keep only the value:
SET SERVER=%SERVER:~16%
SET PORT=%PORT:~16%




As you may have noticed, the big trick here is to use the findstr command to filter the desired line into each variable and then read its value starting at a fixed position (16 in my case), which works because the parameter names are padded so that the “=” always falls in the same column.
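The same fixed-offset idea translates directly to other shells; here is a POSIX sh rendition (my own sketch, not part of the batch script) using grep in place of findstr and cut in place of the substring expansion:

```shell
#!/bin/sh
# Read fixed-width "NAME           =value" parameters: grep the wanted line,
# then cut from column 17 onward (names are padded so "=" sits in column 16).
cat > parameters.ini <<'EOF'
# Parameter file for testConn
SERVER         =example.localdomain
PORT           =1521
EOF
SERVER=$(grep '^SERVER' parameters.ini | cut -c17-)
PORT=$(grep '^PORT' parameters.ini | cut -c17-)
echo "server=$SERVER port=$PORT"
# → server=example.localdomain port=1521
```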

Posted in Scripts