Missing peaks in ASH results

The ASH charts in OEM are great utilities for getting a quick summary of your system’s activity. However, the results can be misleading because of how the data is represented on screen. First, ASH data is collected by sampling, so it’s not a complete picture of everything that runs. Second, the charting in OEM doesn’t plot every ASH data point; instead, it averages them across time slices. In the Top Activity and ASH Analytics summary charts those points are then connected by curves or straight lines, which further dilutes the results.

Some example snapshots will help illustrate these issues.

The OEM Top Activity screen may produce a chart like this…
Top Activity

First, note the large spike around 1:30am on the 16th. This spike consisted largely of RMAN backups and represents a significant increase in overall activity on the server, with approximately 9 active sessions at its peak and a sustained activity level of 8 for most of that period.

Next, let’s look at that same database using ASH Analytics and note how that spike is drawn as a pyramid of activity. While the slope of the sides is fairly steep, it’s still significantly more gradual than what the Top Activity chart illustrated. The peak is still approximately 9 active sessions at its highest, but it’s harder to determine when and where the activity tapers off because the charting simply draws straight lines between time slices.

ASH Analytics

ASH Analytics does, however, offer a zoom window, and using it to highlight the 1am-2am hour gives a different picture, one that more closely reflects the story told in the Top Activity chart. Note the sharp increase at 1:30, as seen in Top Activity. Also note the higher peaks approaching and exceeding 12 active sessions, whereas each of the previous charts indicated a peak of 9. The last curiosity is the decline in activity: it is more gradual than in Top Activity but steeper than in the overall ASH Analytics chart.

ASH Analytics wall

The charts above demonstrate the ambiguities of relying on any one visualization. In those examples the data was mostly consistent in magnitude but differed in rate of change because of the resolution of the time slices.

Another potential problem with the averaging is losing accuracy by dropping information. For instance, in the first chart above, note the brief IO spike around 9:30am with a peak of 6 active sessions. The ASH Analytics summary chart averages that curve down to approximately 2 active sessions. If we now go to the ASH Analytics page and zoom in to only the 9am-10am hour, we see the spike was in fact much larger: 24 active sessions! That is 4 to 12 times the previous values and, more importantly, twice the number of available processors. It was a brief surge and the system recovered fine, but if you were looking for potential areas of resource contention, the first two charts could be misleading.

ASH Analytics peak

I definitely don’t want to discourage readers from using OEM’s ASH tools, nor do I want to suggest you need to zoom in on every single time range to get the most accurate picture. Instead, I want readers to be aware of the limitations inherent in data averaging. If you do have reason to inspect activity in a narrow time range, then by all means zoom in with ASH Analytics to get the best picture. If you need larger-scale summary views, consider querying the ASH data yourself to find extreme values that may have been hidden by the averaging.
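For example, here is a sketch of that kind of query against the AWR copy of ASH. Each row in DBA_HIST_ACTIVE_SESS_HISTORY represents one active session at a 10-second sample, so counting rows per sample time exposes the true per-sample peaks that chart averaging can smooth away (the time window below is hypothetical):

```sql
-- count active sessions at each retained ASH sample;
-- the top rows are the real peaks hidden by chart averaging
SELECT sample_time, COUNT(*) AS active_sessions
  FROM dba_hist_active_sess_history
 WHERE sample_time >= TIMESTAMP '2016-04-16 09:00:00'
   AND sample_time <  TIMESTAMP '2016-04-16 10:00:00'
 GROUP BY sample_time
 ORDER BY active_sessions DESC;
```

For still-in-memory activity, the same idea works against V$ACTIVE_SESSION_HISTORY, which holds the raw one-second samples.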

The Curse of “Expertise”

Like everyone else, I make mistakes. The results can sometimes be unfortunate, but that’s a truth that shouldn’t be ignored. A recurring problem, though, is that because I’m a designated “expert,” sometimes people don’t bother to test what I’ve given them. They just roll with it and then are surprised when their production installation goes awry.

I just ran into this situation again a few days ago. I was asked to help with a query that never finished. I worked on it for a little while and came up with something that finished in a few seconds. Since the original never finished, I didn’t have a predetermined set of results to test against. I manually walked through some sample data and my results seemed to tie out, so it appeared I was on the right track. I showed the client what I had and they were elated with the speed improvement.

I gave a brief description of what I had attempted to do and why it ran quickly. Then I asked them to test and contact me again if there were any questions.

The next day I got a message that they were very happy with the speed and were already using the query. I was glad to hear that, but I had also been thinking that my query was extremely complicated; so even though it had apparently passed inspection, I spent a few more minutes on it and came up with a simpler approach. This new method was almost as fast as the other one but, more significantly, it returned more rows than my previous version. Clearly, at least one of them was incorrect.

With the simplified logic of the new version, it was much easier to verify that the second attempt was correct and the older, more complicated version was wrong. I reached out to my client again and notified them of the change in the query and the problem I had found. I then suggested they rerun more extensive tests anyway, because I could still be wrong.

Fortunately, this second attempt did appear to be truly correct and the performance was still more than adequate.

Finding the name of an Oracle database

Oracle offers several methods for finding the name of a database.

More significantly, 12c introduces the multi-tenant feature, which may change the value returned by some of the old methods.

Here are 11 methods for finding the name of a database.

SELECT 'ora_database_name' method, ora_database_name VALUE FROM DUAL
UNION ALL
SELECT 'SYS_CONTEXT(userenv,db_name)', SYS_CONTEXT('userenv', 'db_name') FROM DUAL
UNION ALL
SELECT 'SYS_CONTEXT(userenv,db_unique_name)', SYS_CONTEXT('userenv', 'db_unique_name') FROM DUAL
UNION ALL
SELECT 'SYS_CONTEXT(userenv,con_name)', SYS_CONTEXT('userenv', 'con_name') FROM DUAL
UNION ALL
SELECT 'SYS_CONTEXT(userenv,cdb_name)', SYS_CONTEXT('userenv', 'cdb_name') FROM DUAL
UNION ALL
SELECT 'V$DATABASE name', name FROM v$database
UNION ALL
SELECT 'V$PARAMETER db_name', VALUE
FROM v$parameter
WHERE name = 'db_name'
UNION ALL
SELECT 'V$PARAMETER db_unique_name', VALUE
FROM v$parameter
WHERE name = 'db_unique_name'
UNION ALL
SELECT 'GLOBAL_NAME global_name', global_name FROM global_name
UNION ALL
SELECT 'DATABASE_PROPERTIES GLOBAL_DB_NAME', property_value
FROM database_properties
WHERE property_name = 'GLOBAL_DB_NAME'
UNION ALL
SELECT 'DBMS_STANDARD.database_name', DBMS_STANDARD.database_name FROM DUAL;

The results of these will vary by version, by whether the db is a container, and, if it is a container, by whether the query runs within a pluggable database or the container root database.
Note that the con_name and cdb_name options for the SYS_CONTEXT function do not exist in 11g or lower, so those queries must be removed from the union to execute in an 11g database. Within a pluggable database, some of the methods recognize the PDB as the database, while others recognize the container as the database.

So, if you are using any of these methods in an 11g database and you upgrade to a 12c pluggable db, you may expect the PDB name to be returned, but you’ll get the CDB name instead.
Also note that some of the methods always return the name in capital letters, while others return the exact value used to create the database.


Sample results from four databases (a 12c non-CDB, a 12c container root, a 12c PDB, and an 11g database):

Method                                SDS12CR1 (12c)   SDSCDB1 (root)   SDSPDB1 (PDB)   SDS11GR2 (11g)
ora_database_name                     SDS12CR1         SDSCDB1          SDSPDB1         SDS11GR2
SYS_CONTEXT(userenv,db_name)          sds12cr1         sdscdb1          sdscdb1         sds11gr2
SYS_CONTEXT(userenv,db_unique_name)   sds12cr1         sdscdb1          sdscdb1         sds11gr2
SYS_CONTEXT(userenv,con_name)         sds12cr1         CDB$ROOT         SDSPDB1         n/a
SYS_CONTEXT(userenv,cdb_name)                          sdscdb1          sdscdb1         n/a
V$PARAMETER db_name                   sds12cr1         sdscdb1          sdscdb1         sds11gr2
V$PARAMETER db_unique_name            sds12cr1         sdscdb1          sdscdb1         sds11gr2

On a related note, only the container of a multi-tenant database has instances. So, while some of the methods above report a PDB’s own name at the database level, there is no corresponding PDB-level instance name functionality.
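A quick way to see this is to compare the container name with the instance name from within a PDB session; the con_name reflects the PDB, while the instance belongs to the container (a sketch, assuming a 12c PDB connection):

```sql
-- from within a PDB session: con_name reports the PDB,
-- but the instance name belongs to the container database
SELECT SYS_CONTEXT('userenv', 'con_name')      AS container_name,
       SYS_CONTEXT('userenv', 'instance_name') AS instance_name
  FROM DUAL;
```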

See you in Las Vegas!

I’m flying out tomorrow for Collaborate 16.
Looking forward to another great conference.

I’m presenting again this year.
I’ll be speaking on Tuesday at 2:15.
“Why Developers Need to Think like DBAs, Why DBAs Need to Think like Developers”
Session 1355 in Jasmine C

How Oracle Stores Passwords

Several years ago I wrote a small summary of the Oracle password hashing and storage for versions up to 11g.

Today I completed my update of that article, including code to mimic generation of the password hashes given the appropriate salts.
The initial publication is in PDF format; I may convert and reformat it to other forms for better distribution.

The pdf file can be downloaded from my dropbox here.

It was interesting and enjoyable digging into the details of the hashes and how they change between versions and interact with the case-sensitivity settings.

I hope you enjoy reading it as much as I enjoyed writing it.
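As a small taste of what the article covers, the 11g-style “S:” verifier stored in SYS.USER$.SPARE4 is a SHA-1 hash of the password bytes followed by a 10-byte salt, stored as 'S:' plus 40 hex characters of hash plus 20 hex characters of salt. Here is a sketch of reproducing one with DBMS_CRYPTO (the password and salt below are made up for illustration):

```sql
DECLARE
    -- hypothetical password and salt, for illustration only;
    -- a real salt would come from the last 20 hex chars of spare4
    v_password   VARCHAR2(30) := 'SecretPW1';
    v_salt       RAW(10) := HEXTORAW('0123456789ABCDEF0123');
    v_hash       RAW(20);
BEGIN
    -- 11g "S:" verifier: SHA-1 over password bytes followed by the salt
    v_hash := DBMS_CRYPTO.hash(UTL_RAW.concat(UTL_RAW.cast_to_raw(v_password), v_salt),
                               DBMS_CRYPTO.hash_sh1);

    -- assemble the value as stored in sys.user$.spare4
    DBMS_OUTPUT.put_line('S:' || RAWTOHEX(v_hash) || RAWTOHEX(v_salt));
END;
/
```

Note that DBMS_CRYPTO requires an explicit execute grant, and reading USER$ requires SYS access.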

Splitting a clob into rows

I’ve used this tool for a wide variety of parsing projects. One of the interesting tuning techniques I used was to pull the clob apart into 32K varchar2 chunks.
It is possible to split the clob directly using the DBMS_LOB package or the overloaded SQL functions, but clobs are expensive objects. Varchar2 variables, on the other hand, are relatively lightweight, making the sub-parsing within them much faster. Doing this does take a little care, though, to make sure the chunks don’t accidentally split a line in two.

Also, I assume that no single line will be more than 32K long, which is fine for this function anyway since the output is a SQL collection with a varchar2 limit of 4000 bytes.
The returned VCARRAY type is a simple table collection type.

CREATE OR REPLACE TYPE VCARRAY AS TABLE OF VARCHAR2(4000);

I wrote this originally in 9i. With 12c support for 32K varchar2 in SQL, I may need to revisit it and make a new version.

CREATE OR REPLACE FUNCTION split_clob(p_clob IN CLOB, p_delimiter IN VARCHAR2 DEFAULT CHR(10))
    RETURN vcarray
    PIPELINED
IS
    --                    .///.
    --                   (0 o)
    --  Sean D. Stuber
    --  sean.stuber@gmail.com
    --             oooO      Oooo
    --------------(   )-----(   )---------------
    --             \ (       ) /
    --              \_)     (_/

    c_chunk_limit   CONSTANT INTEGER := 32767;
    v_clob_length            INTEGER := DBMS_LOB.getlength(p_clob);
    v_clob_index             INTEGER;
    v_chunk                  VARCHAR2(32767);
    v_chunk_end              INTEGER;
    v_chunk_length           INTEGER;
    v_chunk_index            INTEGER;
    v_delim_len              INTEGER := LENGTH(p_delimiter);
    v_line_end               INTEGER;
BEGIN
    v_clob_index := 1;

    WHILE v_clob_index <= v_clob_length
    LOOP
        -- Pull one 32K chunk off the clob at a time.
        -- This is because it's MUCH faster to use built in functions
        -- on a varchar2 type than to use dbms_lob functions on a clob.
        v_chunk := DBMS_LOB.SUBSTR(p_clob, c_chunk_limit, v_clob_index);

        IF v_clob_index > v_clob_length - c_chunk_limit
        THEN
            -- if we walked off the end of the clob,
            -- then the chunk is whatever we picked up at the end
            -- delimited or not
            v_clob_index := v_clob_length + 1;
        ELSE
            -- otherwise trim the chunk back to the last delimiter
            -- so we don't accidentally split a line in two
            v_chunk_end := INSTR(v_chunk, p_delimiter, -1);

            IF v_chunk_end = 0
            THEN
                -- a full 32K chunk with no delimiter violates
                -- the assumption that no line exceeds 32K
                DBMS_OUTPUT.put_line('No delimiters found!');
                EXIT;
            END IF;

            v_chunk := SUBSTR(v_chunk, 1, v_chunk_end);
            v_clob_index := v_clob_index + v_chunk_end + v_delim_len - 1;
        END IF;

        -- Given a varchar2 chunk, split it into lines

        v_chunk_index := 1;
        v_chunk_length := NVL(LENGTH(v_chunk), 0);

        WHILE v_chunk_index <= v_chunk_length
        LOOP
            v_line_end := INSTR(v_chunk, p_delimiter, v_chunk_index);

            IF v_line_end = 0 OR (v_line_end - v_chunk_index) > 4000
            THEN
                -- no delimiter ahead, or the line is too long for the
                -- collection type; pipe the next 4000 characters
                PIPE ROW (SUBSTR(v_chunk, v_chunk_index, 4000));
                v_chunk_index := v_chunk_index + 4000;
            ELSE
                PIPE ROW (SUBSTR(v_chunk, v_chunk_index, v_line_end - v_chunk_index));
                v_chunk_index := v_line_end + v_delim_len;
            END IF;
        END LOOP;
    END LOOP;

    RETURN;
EXCEPTION
    WHEN no_data_needed
    THEN
        NULL;
END split_clob;
/
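With the type and function in place, usage is a simple table-function query; for example, with an inline test clob:

```sql
-- each delimited line of the clob comes back as one row
SELECT COLUMN_VALUE AS line
  FROM TABLE(split_clob(TO_CLOB('alpha' || CHR(10) || 'beta' || CHR(10) || 'gamma'),
                        CHR(10)));
```

Because the function is pipelined, rows are returned as they are parsed, so a consumer that stops fetching early (triggering NO_DATA_NEEDED) doesn’t force the whole clob to be processed.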

Thank you, thank you, thank you!

A little while ago Oracle announced the winners of the Oracle Database Developer Choice Awards, and I was a winner in both of the categories in which I was nominated.


I was surprised and overjoyed when I was notified that I had not only been nominated, but named a finalist.
I’m truly humbled by the supportive votes I received.

I’m also inspired to try to give back even more, and I’ve got a few ideas brewing for my next few articles.

Thank you again!
