GUOB Techday / LAD OTN Tour 2017

Last Saturday (Aug 5th), the 8th edition of GUOB Techday was held in São Paulo. This event is one of the most important events for DBAs and Oracle professionals; it is part of the Latin America Oracle Technology Network Tour, which brings top professionals across different Oracle technologies to speak and exchange ideas with the Oracle community.

I will speak about the event in general and my experience throughout the day.

Arrival / Check-in

I arrived around 8:15 AM at Uninove Vergueiro, the university where the event was held. The organization was magnificent and I got my credentials ready in no time. I then went ahead to the coffee area and met a lot of colleagues, friends and speakers. The exchange of ideas and chatting was great while enjoying a whole lot of awesome food and beverages. The networking coffees during the day were all pretty similar to this one, so I will not cover each interval between sessions; they were all awesome.


At 8:30 I got a seat as the opening started. Eduardo Hahn thanked Uninove for providing the infrastructure for the event, as this was the first time it was held at a university instead of a hotel, a change driven by the ever-growing number of participants and concurrent sessions. After a brief overview of the past and future of GUOB, Pablo Cicarello took the stage to talk about the Oracle ACE Program and to encourage Oracle professionals to engage and participate more and more in the community. Eduardo Hahn then took the stage back for a few moments to say a few more words and call the first speaker of the day: Mike Dietrich (Master Product Manager, Database Upgrades & Migrations @ Oracle).


Upgrade to 12.2 – Live and Uncensored – Mike Dietrich

This was the first session of the day; everybody attended it, as it was held in the main room on the first floor of the university. In this session, Mike introduced us to a few Oracle upgrade success stories and some challenging scenarios, such as moving from 8i to 12c, and performed two upgrades live, one from the command line and one from the DBUA (graphical interface). I’m biased when talking about Mike’s presentations, since I always watch them whenever possible. I’ll try to summarize a few interesting points that I believe to be of great help: do not wait too long to upgrade, as you will just trap yourself and make the whole process a lot harder and more painful. Use the command line; there are awesome features there, like parallelism and a resume option. Finally, to practice the upgrades and test them out, download the virtual machine from his website:

Ensure Performance Stability when upgrading Oracle Database – Mike Dietrich

The second session I attended was also Mike’s; the sessions had now moved up to the 8th floor of the university. In this session Mike covered some key factors to ensure a successful upgrade. It is not rare that, after a database upgrade, users complain about how their systems were faster before, so be prepared! Take a baseline snapshot of your workload data before the upgrade, patch your environment, be careful with optimizer parameter settings and, most important: TEST! TEST! And… TEST! A cool tip about tools for a great testing scenario: since Enterprise Edition doesn’t come with the SPA (SQL Performance Analyzer) license included, it is possible to set it up in the Oracle Cloud, where the license is included. You can then capture your SQL into SQL Tuning Sets and run them against the SQL Performance Analyzer in the cloud, anticipate issues and fix them, shut down the cloud database (EE High Performance or Extreme Performance) and move on with your on-premises upgrade.

Database Security with Transparent Data Encryption – Adriano Bonacin

The following session, presented by Adriano Bonacin (Pagseguro UOL), went from the concepts of cryptography to the details of symmetric and asymmetric encryption, hashing algorithms, and what a salt is and what it is used for. Moving on to the practical aspects of TDE, he demonstrated the techniques and encryption types, column encryption vs. tablespace encryption, and how the different types of encryption might affect storage utilization. He also presented the procedures to implement and administer TDE, including the creation and maintenance of the wallet, how to change its keys and how to set it up to auto-open, so encrypted data can be accessed after a database bounce.

Maximizing Oracle Cloud Buffer Cache Throughput – Craig Shallahamer

Craig Shallahamer from OraPub presented this very interactive session on how to optimize buffer cache throughput. Even though the title of the session said “cloud”, the tips and techniques he presented can be applied to any Oracle database, both on-premises and in the cloud. He explained the buffer cache internals behind the “free buffer” wait event, what is going on in memory and how that affects performance. A lot was said, from when the database needs to do a physical I/O to how Oracle keeps or evicts blocks from memory using the MRU and LRU lists.

Getting the most out of Oracle Grid Infrastructure – Franky Weber

This session, presented by my friend Franky Weber (Pagseguro UOL), was a real thunderstorm of information about Grid Infrastructure, with great emphasis on the new features of Oracle 12c. In this presentation, Franky introduced us to good practices for maintaining ASM with ASM Filter Driver, diskgroup setup, and a ton of information on Flex ASM, including new features such as ASM file groups and quota groups. Practical examples can be obtained from his blog, including the setup and configuration of ASMFD, diskgroups with Flex redundancy and much more. During the session he also presented a few parameters worth changing from their default values to ensure better response time on node failures in a RAC configuration, plus what the GIMR is and how to move it to a different diskgroup. One of my favorite things in his presentation was the “explain work for” feature for ASM operations.

How to diagnose Random Oracle Cloud Performance Incidents using ASH – Craig Shallahamer

And we finally arrive at the last session of GUOB Techday 2017. Craig Shallahamer took the main stage again to demonstrate how we can use ASH to drill down into active session history and find the needle in a haystack. He used the bloodhound toolkit to find a particular case of a deadlock impacting a batch processing application. He went a step further and dynamically generated a script to be consumed by R, producing a visual representation of that deadlock.

Closing and conclusion

A couple of lucky fellows won tablets in a raffle held by Eduardo Hahn together with the speakers, after which the event finished, and it can be considered a huge success. As always, it was great to be in the presence of such great professionals and to have the opportunity to talk to them and learn from them.

I hope to see you there next year!

Posted in GUOB

Batch Script – Reading parameter file into variables

Hi folks, a quick tip on how to easily read a parameter file from a batch script. This is an idea I had to make life easier when troubleshooting connection issues from client to server, where I had to frequently change service names, server names, users, passwords and so on.

In the example I will have the parameter file (parameters.ini) and the script (testConn.bat).


# Parameter file for testConn.bat
# Adjust the parameters according to your
# needs (the values below are just examples).
# Important: Do not leave spaces after "="
USERNAME       =scott
PASSWORD       =tiger
SERVER         =example.localdomain
PORT           =1521
SERVICE_NAME   =orcl
TNSALIAS       =orcl_alias


REM Each FINDSTR grabs its line from parameters.ini; the :~16 substring
REM then strips the padded "KEY            =" prefix, leaving only the value.
FOR /F "tokens=* USEBACKQ" %%F IN (`FINDSTR /B /C:"USERNAME" "%CD%\parameters.ini"`) DO (SET USERNAME=%%F)
SET USERNAME=%USERNAME:~16%
FOR /F "tokens=* USEBACKQ" %%F IN (`FINDSTR /B /C:"PASSWORD" "%CD%\parameters.ini"`) DO (SET PASSWORD=%%F)
SET PASSWORD=%PASSWORD:~16%
FOR /F "tokens=* USEBACKQ" %%F IN (`FINDSTR /B /C:"SERVER" "%CD%\parameters.ini"`) DO (SET SERVER=%%F)
SET SERVER=%SERVER:~16%
FOR /F "tokens=* USEBACKQ" %%F IN (`FINDSTR /B /C:"PORT" "%CD%\parameters.ini"`) DO (SET PORT=%%F)
SET PORT=%PORT:~16%
FOR /F "tokens=* USEBACKQ" %%F IN (`FINDSTR /B /C:"SERVICE_NAME" "%CD%\parameters.ini"`) DO (SET SERVICE_NAME=%%F)
SET SERVICE_NAME=%SERVICE_NAME:~16%
FOR /F "tokens=* USEBACKQ" %%F IN (`FINDSTR /B /C:"TNSALIAS" "%CD%\parameters.ini"`) DO (SET TNSALIAS=%%F)
SET TNSALIAS=%TNSALIAS:~16%




As you may have noticed, the big trick here is to use the FINDSTR command to filter each line of interest and then read its value starting at a fixed position (16 in my case, right after the padded “=”), assigning precisely what is wanted to each variable.
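As an aside, the same idea can be sketched on Linux in shell; this is a rough equivalent for comparison, not part of the original Windows tip, reusing the example file name and values:

```shell
# Recreate a small parameters.ini like the one above.
cat > parameters.ini <<'EOF'
# Parameter file for testConn
SERVER         =example.localdomain
PORT           =1521
EOF

# grep picks the line, cut strips everything up to the "=".
SERVER=$(grep '^SERVER' parameters.ini | cut -d= -f2)
PORT=$(grep '^PORT' parameters.ini | cut -d= -f2)
echo "Connecting to ${SERVER}:${PORT}"
```

Here the delimiter-based `cut` replaces the fixed-position substring, so the padding width no longer matters.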


GUOB Tech Day / LA OTN TOUR 2017

On Aug 5th we will have the GUOB Tech Day in São Paulo! This is an amazing opportunity to exchange knowledge and ideas with great Oracle professionals from Brazil and around the world. I strongly encourage everyone who can go to do so and support the event. I’m sure you will not regret it 😉


Here are a few of the great names in the Oracle community who will be present at the event: Mike Dietrich, Alex Gorbachev, Nirmala Sundarappa, Alex Zaballa, among others.

For more information and to register for the event go to:

See you there!


Oracle ASM on FreeNAS 8.3.0 x86

Hey there folks, just a very quick tip on how to set up iSCSI storage for your *crash and burn* environment… In this tutorial I will not care about network settings, security or anything else; I will just demonstrate the installation and configuration of the storage using the raw devices as LUNs. (They can be split within FreeNAS and used as extents, but that is for a later post.)

You can download this version of FreeNAS in both 32 and 64 bits from the links below:

Create virtual machine

Since the idea is to simulate a storage using as few resources as possible, I’m using FreeNAS 8.3.0 32-bit. Create a VM and set TYPE to BSD and VERSION to FreeBSD (32-bit), or 64-bit if that is the architecture you are using for the storage.


Set the memory to 512 MB; it will fire a warning during installation but does not block it.


Create a new virtual disk. I recommend leaving the default of 16 GB.


Leave the default disk type, VDI.


Dynamically allocated is fine for this setup.


This is the disk where the installation will reside. I’ll leave the default.


As the initiator (client machine) resides on my physical network, I will change the NIC to Bridged mode.


In the Storage tab, I added a SCSI controller and created 3 disks that will later be used as LUNs; I also inserted the ISO image in the CD drive.


Here is an overview of the machine ready to begin the installation.


FreeNAS Installation

When you boot the machine, you will be presented with the Console Setup; select option 1 and hit Enter.


Select the drive ada0, which is the IDE disk where the installation will reside.


Select NO when prompted if you want to preserve existing parameters, as this is a fresh install.


Select YES to format and proceed.


The installation takes 1 minute and a few seconds on my hardware (which is not that good).


After rebooting you will see the IP obtained from DHCP. You can set a static IP in option 1 but I am using the default here just to demonstrate the storage setup.


iSCSI Configuration

Browse to the IP address of FreeNAS and click on “Services”.


Locate iSCSI and click on the tool icon to configure it before turning it on.


This step is only necessary for older versions of FreeNAS: select “Enable LUC” and change the “Controller Auth Method” to “None”.


Go to the Portals tab and add the portal. I’ll leave the default IP address and the default TCP port 3260.


Move on to the Initiators tab and configure the initiators according to your needs. I will just leave ALL and ALL, because I don’t intend to set up authentication for this storage.


Now, in the Targets tab, create the target name which will serve the LUNs to the initiators. I will follow the standard naming convention (iqn.YYYY-MM, which is usually when the iSCSI target was created); in my example: iqn.2017-05 + storage + purpose. (I’m not sure if there’s a convention for what follows the month.)

Notice I have also disabled the authentication and set the Portal Group ID and Initiator Group ID to 1, which is what was created in previous tabs.


Now, in the Device Extents tab, just click the Add Device Extent button and assign the devices you want to be presented as LUNs.


Finally, on the “Associated Targets” tab, assign the LUNs to the target.


Back to Services, just change the switch from OFF to ON and you are good to go.


Present LUNs to the server

Now go to the server to which you want to present the LUNs and install the package iscsi-initiator-utils:

[oracle@mustang ~]$ su -
[root@mustang ~]# dnf -y install iscsi-initiator-utils
Fedora 25 - x86_64 - VirtualBox 243 kB/s | 33 kB 00:00 
Fedora 25 - x86_64 - Updates 1.1 MB/s | 23 MB 00:20 
google-chrome 24 kB/s | 3.8 kB 00:00 
Last metadata expiration check: 0:00:01 ago on Sun May 21 09:30:23 2017.
Package iscsi-initiator-utils- is already installed, skipping.
Dependencies resolved.
Nothing to do.

Now connect to the portal and request the target as in the example below:

[root@mustang ~]# iscsiadm -m discovery -t sendtargets -p,1 iqn.2017-05.freenas.oracleasm

Now if we look at the devices, we still can’t see the LUNs:

[root@mustang ~]# ls -la /dev/sd*
brw-rw----. 1 root disk 8, 0 May 19 15:34 /dev/sda
brw-rw----. 1 root disk 8, 1 May 19 15:35 /dev/sda1
brw-rw----. 1 root disk 8, 2 May 19 15:34 /dev/sda2
brw-rw----. 1 root disk 8, 3 May 19 15:34 /dev/sda3
brw-rw----. 1 root disk 8, 16 May 21 08:50 /dev/sdb
brw-rw----. 1 root disk 8, 17 May 21 08:58 /dev/sdb1

To finish up, log in to the target and the devices will be presented.

[root@mustang ~]# iscsiadm -m node --login
Logging in to [iface: default, target: iqn.2017-05.freenas.oracleasm, portal:,3260] (multiple)
Login to [iface: default, target: iqn.2017-05.freenas.oracleasm, portal:,3260] successful.

Now we can see the LUNs and you can proceed to configure them with udev rules, asmlib, etc.

[root@mustang ~]# ls -la /dev/sd*
brw-rw----. 1 root disk 8, 0 May 19 15:34 /dev/sda
brw-rw----. 1 root disk 8, 1 May 19 15:35 /dev/sda1
brw-rw----. 1 root disk 8, 2 May 19 15:34 /dev/sda2
brw-rw----. 1 root disk 8, 3 May 19 15:34 /dev/sda3
brw-rw----. 1 root disk 8, 16 May 21 08:50 /dev/sdb
brw-rw----. 1 root disk 8, 17 May 21 08:58 /dev/sdb1
brw-rw----. 1 root disk 8, 32 May 21 09:33 /dev/sdc
brw-rw----. 1 root disk 8, 48 May 21 09:33 /dev/sdd
brw-rw----. 1 root disk 8, 64 May 21 09:33 /dev/sde


Hope this helps.


AWS – Securing your cloud with IAM


Moving to the cloud equals concerns about security, as it should. You are putting your services up in a bunch of remote datacenters over which you have no control whatsoever. So let’s start securing what we can, beginning with our root credentials.

For the sake of understanding, let’s pretend you just started using AWS. Go ahead and connect to the management console with your root credentials, a.k.a. the ones tied to your credit card/billing information, which can do whatever you wish, and of course, screw you over by the end of the month 😉

All right, back to securing the account… find IAM (Identity and Access Management) under Security, Identity & Compliance as in the image below:


You will land on its dashboard, which initially will look something like the image below. The first thing I would do here is click on “Customize”, to get a custom URL for accessing the account, as you can see under “IAM users sign-in link”.


IAM Password Policy

Just like building a house, let’s start from the bottom up… so first thing here is to define and apply a password policy by expanding the last option and clicking on “Manage Password Policy“:


You can define your password policy as your business might require; one good configuration would look similar to this:


If you scroll down, you can disable the regions from which you don’t want temporary credentials to be requested. In my case I am only interested in the regions I intend to work with: North Virginia, Oregon and São Paulo.


Groups and permissions

Back on the dashboard, it is now time to create groups. For that, just expand the “Use groups to assign permissions” box and click on “Manage Groups”.


Just enter the group name you wish and hit “Next Step”. I will create an infrastructure group which will have admin privileges in this example.


Now you can choose from the existing policies and customize your group’s privileges. In my case, AdministratorAccess will be enough, as I am creating my own non-root account. This will allow me to use everything I need without having access to finance data.


The last step is a simple review window. If you are happy with your setup, just click on “Create Group” and you are done here.


Go back to the dashboard for the next step.

Create IAM users

Here is where you add users to access your cloud, including your own. Just expand “Create individual IAM users” and click on “Manage Users”.


On the next screen, just click on “Add user”.


Enter the user details. Please note that if you are creating a user account for automation or system integration, it is a good idea to change its access type to programmatic, which is not my case since I intend to use the management console. Click “Next step” to proceed.


Here we set the permissions for the user. I will add my user to the group “infrastructure”.


The last page is the review, just like for all the other stuff we’ve been doing. Review your settings and click on “Create user”.
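For those who prefer scripting, the same group and user setup can be sketched with the AWS CLI; this assumes the CLI is installed and configured with suitable credentials, and "myuser" is a placeholder name:

```shell
# Create the group and attach the managed AdministratorAccess policy.
aws iam create-group --group-name infrastructure
aws iam attach-group-policy --group-name infrastructure \
    --policy-arn arn:aws:iam::aws:policy/AdministratorAccess

# Create the user and put it in the group.
aws iam create-user --user-name myuser
aws iam add-user-to-group --group-name infrastructure --user-name myuser
```

Console (password) access would additionally require a login profile, created with `aws iam create-login-profile`.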


Multi Factor Authentication for root account

Now things will get more interesting: we will add a second layer of security to our root account. At this point I don’t have to tell you to go back to the dashboard, right? Moving on… expand “Activate MFA on your root account” and click on “Manage MFA”.


Spoiler alert: Currently supported MFA virtual devices:


I will use Google Authenticator because Google owns me, and probably owns you too…

Well, once you’ve clicked on Manage MFA, the next screen is where you actually pick which device you want to use. I will go with a virtual one, as stated in the spoiler above.


There’s a small disclaimer that can direct you to a link with compatible stuff, which is the spoiler image above. Click on “Next Step”.


Now you have to scan the QR code with your MFA app, in my case Google Authenticator.


Once you scan the QR code, you will see an image like the one below and the token will start changing every 30 seconds. Authentication Code 1 & 2 must be filled in with two sequentially generated numbers from your MFA app.


Having done so, just click on “Activate Virtual MFA” and if all went well, you should see the message below. Just click on “Finish”.


Delete root access keys

This is a best practice: you should delete your root access keys. You will understand why as soon as you expand the “Delete your root access keys” panel and read the disclaimer. Click on “Manage Security Credentials”.


You will be prompted with the following popup, just click on “Continue to Security Credentials”


Here you will see a lot of options and panes you can expand and collapse. Expand the “Access Keys (Access Key ID and Secret Access Key)” panel and you will find your root access key. Click on Delete under “Actions” column.


A prompt will appear so you can confirm the deletion of the access key. Click on Yes. (Be sure you are deleting the right access key if you already have other keys in place.)


Once you are back to the dashboard, your settings should appear all green. Looks a lot better, right?


Let’s test it out… the whole point of doing all that is to ensure my IT account doesn’t have privileges to access my finance data… let’s put it to the test.

Connecting to AWS with your IAM User

Now we should test the access. Just browse to the custom URL you set up at the beginning and enter the user id and password you created.


Once you are connected… expand the account options and click on “My Billing Dashboard”, which would take you to the finance stuff.


If all is set up well, you should receive the error below. Yay…


Now you can start creating different groups and granting different privileges to your technical team while leaving the finance data out of their reach… or kind of… we have limited their access so they can’t see or deal with finance data, but they can still use as many resources as they like in the cloud… which will incur charges and $$$$.

Stay tuned for the next chapters on how to avoid that. 😉


AWS – Disabling Termination Protection for unwanted instances and terminating them.

If you are new to cloud computing, especially with AWS, you will probably see an annoying message while trying to terminate instances you no longer need. The message will look like the one below: “These instances have Termination Protection and will not be terminated”.


Be sure that the instance can really be terminated (by that I mean deleted!).

In order to terminate such an instance, go to EC2 -> Instances, select the desired server and click on “Actions” -> Instance Settings -> Change Termination Protection, as demonstrated below.


You should see a prompt similar to the image below, just click on “Yes, Disable”.


Now you can terminate the instance by selecting “Actions” -> Instance State -> Terminate.


Confirm your choice by clicking on “Yes, Terminate”. (Again… be sure you can really remove this server)


That’s it… the server will still show up in the list with the status “Terminated” and will disappear in about 20 minutes. You will no longer be able to start that server.
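The same two steps can also be done from the AWS CLI; this is a sketch assuming a configured CLI, and the instance id is a placeholder:

```shell
# Lift the termination protection on the instance…
aws ec2 modify-instance-attribute --instance-id i-0123456789abcdef0 \
    --no-disable-api-termination

# …then terminate it (same warning applies: be sure it is the right one).
aws ec2 terminate-instances --instance-ids i-0123456789abcdef0
```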

Hope this helps…



Virtualbox: How to setup NAT with DHCP

Hi folks, in this article I will demonstrate how easy it is to set up NAT (Network Address Translation) on VirtualBox. I use this very often to isolate my virtual servers from my physical network while still having internet access from the virtual servers. Another great thing about this setup on VirtualBox is the native DHCP for the NAT network, allowing you to perform a broader range of tests.

Let’s get started! Open up VirtualBox, go to File -> Preferences and, in the newly opened window, select Network and click on the add icon, as illustrated in the image below:


The new network will be added to the list; double-click it and edit the network name, IP range and mask, and enable DHCP. Here is an example of how to set up a NAT network with its IP range, network mask and default gateway:


Now you can go to any of your VMs, change their NIC to NAT Network and select the entry nat_net1.
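The same setup can be scripted with VBoxManage; nat_net1 is the network name used in this post, while the CIDR and VM name are placeholders to adjust:

```shell
# Create a NAT network with built-in DHCP enabled…
VBoxManage natnetwork add --netname nat_net1 --network "10.0.5.0/24" \
    --enable --dhcp on

# …and attach a VM's first NIC to it.
VBoxManage modifyvm "myvm" --nic1 natnetwork --nat-network1 nat_net1
```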


That’s it, simple and easy. Hope this helps.
