Wednesday, 19 March 2014

Observations on an Oracle bug: You can't CTAS an IOT with Timestamp(n)!

If you are planning on loading data into a partitioned IOT with a "Timestamp(n)" field using CTAS, well, you can't!

I came across the issue a while ago, and had it confirmed by Oracle as a bug, which manifests itself even in 12cR1.

Here is the scenario:

You need to load an IOT (named EMP) by adding new partitions and exchanging them with another IOT that is produced using a CTAS.

This is basically what you would need to do:
CREATE TABLE EMP(
DeptID NUMBER,
EmpID   NUMBER,
HireDate TIMESTAMP(3),
CONSTRAINT pk_emp PRIMARY KEY(DeptID, EmpID))
 ORGANIZATION INDEX COMPRESS 1
 PARTITION BY LIST(DeptID)
 (PARTITION DEPT_0 VALUES(0));

CREATE TABLE Temp(
DeptID,
EmpID,
HireDate,
CONSTRAINT pk_temp PRIMARY KEY(DeptID, EmpID)) ORGANIZATION INDEX COMPRESS 1 AS
SELECT 1,1,CAST(SYSTIMESTAMP AS TIMESTAMP(3))
FROM dual;

ALTER TABLE Emp ADD PARTITION DEPT_1 VALUES(1);

ALTER TABLE Emp EXCHANGE PARTITION DEPT_1 WITH TABLE Temp;
And that's when you're hit in the face by a:
SQL Error: ORA-14097: column type or size mismatch in ALTER TABLE EXCHANGE PARTITION
14097. 00000 -  "column type or size mismatch in ALTER TABLE EXCHANGE PARTITION"
*Cause:    The corresponding columns in the tables specified in the
           ALTER TABLE EXCHANGE PARTITION are of different type or size
*Action:   Ensure that the two tables have the same number of columns
           with the same type and size.
Obviously you will check:
desc emp
Name     Null     Type      
-------- -------- ------------
DEPTID   NOT NULL NUMBER    
EMPID    NOT NULL NUMBER    
HIREDATE          TIMESTAMP(3)

desc temp
Name     Null     Type      
-------- -------- ------------
DEPTID   NOT NULL NUMBER    
EMPID    NOT NULL NUMBER    
HIREDATE          TIMESTAMP(3)
To prove that it is actually possible to exchange a partition with the IOT:
CREATE TABLE Temp1(
DeptID NUMBER,
EmpID   NUMBER,
HireDate TIMESTAMP(3),
CONSTRAINT pk_temp1 PRIMARY KEY(DeptID, EmpID))
 ORGANIZATION INDEX COMPRESS 1;

INSERT INTO Temp1 VALUES(1,2,SYSTIMESTAMP);
COMMIT;

ALTER TABLE Emp EXCHANGE PARTITION DEPT_1 WITH TABLE Temp1;
And you're politely made aware of the fact that:
"table EMP altered."

What's going on here?

Well, the following simple query would shine enough light on the issue:

SELECT TABLE_NAME, COLUMN_NAME, COLUMN_ID, DATA_TYPE, DATA_LENGTH
FROM USER_TAB_COLS
WHERE TABLE_NAME IN ('EMP','TEMP','TEMP1')
ORDER BY 3,1;

TABLE_NAME COLUMN_NAME  COLUMN_ID DATA_TYPE    DATA_LENGTH
---------- ----------- ---------- ------------ -----------
EMP        DEPTID               1 NUMBER                22
TEMP       DEPTID               1 NUMBER                22
TEMP1      DEPTID               1 NUMBER                22
EMP        EMPID                2 NUMBER                22
TEMP       EMPID                2 NUMBER                22
TEMP1      EMPID                2 NUMBER                22
EMP        HIREDATE             3 TIMESTAMP(3)          11
TEMP       HIREDATE             3 TIMESTAMP(3)          20
TEMP1      HIREDATE             3 TIMESTAMP(3)          11 

As you can see, CAST(column AS TIMESTAMP(n)) merely reports itself as a TIMESTAMP(n); in fact (note the differing DATA_LENGTH), it is no different from a plain "TIMESTAMP!"

Solution:

Modify the source of your CTAS operation so that it actually holds a TIMESTAMP(n) value, and get rid of the CAST function (which works fine for other data types).
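
For illustration, here is a minimal sketch of the workaround; the staging table Emp_Stage and the table Temp2 are names invented for this example, and partition DEPT_1 is reused purely to mirror the scenario above. Because the source column is declared as TIMESTAMP(3), the CTAS no longer needs a CAST:

-- stage the data in a table whose column is declared TIMESTAMP(3)
CREATE TABLE Emp_Stage(
DeptID NUMBER,
EmpID   NUMBER,
HireDate TIMESTAMP(3));

INSERT INTO Emp_Stage VALUES(1, 1, SYSTIMESTAMP);
COMMIT;

-- CTAS the IOT from the staging table, with no CAST involved
CREATE TABLE Temp2(
DeptID,
EmpID,
HireDate,
CONSTRAINT pk_temp2 PRIMARY KEY(DeptID, EmpID)) ORGANIZATION INDEX COMPRESS 1 AS
SELECT DeptID, EmpID, HireDate
FROM Emp_Stage;

ALTER TABLE Emp EXCHANGE PARTITION DEPT_1 WITH TABLE Temp2;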

Sunday, 2 February 2014

The challenging journey to overcome lack of knowledge

It should be called a crime when people liken the process of "gaining knowledge" to simply "solving a puzzle."

At the very least we have to agree that it's an extremely patronising way of describing the process, mainly because the person in the position of power (in this case, "knowledge"), the very person describing the process of "gaining knowledge," fails to address the characteristics of that "puzzle" honestly or correctly.

Knowledge has no shape

This is for people like me, who aspire to be good at what they do; those who dream of being true professionals -whether or not they put in enough effort- and who are very well aware that "getting better" is directly associated with gaining more knowledge.

This specifically holds for people who aspire to carry the title of "Oracle Scientist."

The fact of the matter is: knowledge is a puzzle; but with no shape or specific boundaries.

You keep discovering new pieces but you have no idea where -in the broad context of this shapeless puzzle- the pieces sit.

And in the case of Oracle, most of the time the pieces can sit right in the middle of the puzzle!

Embrace the pilgrimage!

I have to apologise for committing "the crime" too, because the process of gaining knowledge is no journey.

It's a pilgrimage.

You know it's a pilgrimage when you overcome the initial fear that takes over you on realising that the "puzzle piece" you just found can ONLY fit in the middle: not directly connected to any of the pieces you've previously found, but potentially connected to any of them indirectly!

Easy answers are rare, the path is long, and above all -as Connor McDonald rightfully quotes- "Learning is not a spectator sport!"

Find a guiding light!

Nobody has done it on his own, EVER.

I've been reminded of this time and time again during the course of my life, most recently by Kirby Ferguson.

You need to be surrounded by knowledgeable people; the kind of people who don't give you the answer, but are patient enough with you that you learn the path to acquiring that answer yourself.

There's no shortage of such people in the Oracle community, if you've got the will to find them.

Wednesday, 25 December 2013

IMPDP/EXPDP, Virtual Columns, and user-defined functions

If you have virtual columns based on user-defined functions in a schema, you cannot use IMPDP/EXPDP for that schema; not that easily, at least.

In this post I lay out a test scenario and propose a successfully tested workaround.

Required material for the scenario:

- A schema named VCOL_TEST, containing:
  - A function named VCOL_FUNCTION
  - A table named VCOL_TABLE

The script below can be used to create such a schema:

create user vcol_test identified by vcol_test;

grant   create session,
        select any dictionary,
        create procedure,
        create table,
        debug connect session,
        debug any procedure
to vcol_test;

conn vcol_test/vcol_test;

create or replace function vcol_function(p_parameter number)
return number
deterministic
is
begin
  if p_parameter > 10 then
    return 10;
  else
    return 1;
  end if;
end vcol_function;
/

show errors;

create table vcol_table(id number, vcol as(vcol_function(id)));

exit;

What's the problem?

If you try to use IMPDP to create a "remapped" schema (e.g. VCOL_TEST2) on a different server, you will come across this:

Processing object type SCHEMA_EXPORT/TABLE/TABLE
ORA-39083: Object type TABLE:"VCOL_TEST2"."VCOL_TABLE" failed to create with error:
ORA-00904: "VCOL_TEST"."VCOL_FUNCTION": invalid identifier
Failing sql is:
CREATE TABLE "VCOL_TEST2"."VCOL_TABLE" ("ID" NUMBER, "VCOL" NUMBER GENERATED ALWAYS AS ("VCOL_TEST"."VCOL_FUNCTION"("ID")) VIRTUAL VISIBLE ) SEGMENT CREATION DEFERRED PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING TABLESPACE "USERS"

Processing object type SCHEMA_EXPORT/FUNCTION/ALTER_FUNCTION
Job "SYSTEM"."SYS_IMPORT_SCHEMA_01" completed with 1 error(s)
What's happening here?

Checking the "Failing sql" reveals the bug: IMPDP is trying to create table VCOL_TEST2.VCOL_TABLE referring to function VCOL_TEST.VCOL_FUNCTION (as opposed to VCOL_TEST2.VCOL_FUNCTION).

i.e. IMPDP is not able to remap the owner of the function used for the virtual column to the new schema name.
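
You can also see where the schema name comes from by looking at how the virtual column definition is stored. The following is just an illustrative check (run as VCOL_TEST); DATA_DEFAULT typically holds the expression with the owning schema baked into it:

SELECT COLUMN_NAME, DATA_DEFAULT
FROM USER_TAB_COLS
WHERE TABLE_NAME = 'VCOL_TABLE'
AND VIRTUAL_COLUMN = 'YES';
-- DATA_DEFAULT comes back along the lines of "VCOL_TEST"."VCOL_FUNCTION"("ID")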

What does Oracle say about this?

Officially, nothing.

A consulting member of technical staff at Oracle was, however, kind enough to send the following response:

"In releases prior to 11.2.0.4, a table with a virtual column based on the result of a PL/SQL function would not be recreated successfully on Import, even when REMAP_SCHEMA was not used. This is because the necessary PL/SQL function would not have been created before the table was created.

The problem was addressed by modifying Import to create the table with a temporary definition for virtual columns, then creating the function, then modifying the definition of the virtual columns to use the correct function.

The initial fix was in BUG#10186633 and this was later extended in BUG#14506804. The extended fix is also included in certain patch set updates and one-off patches for 11.2.0.2 and 11.2.0.3.

Unfortunately this fix does not address the problem of REMAP_SCHEMA. Even in 11.2.0.4.0 or 12.1.0.1.0, it is still possible to encounter a failure during Import, or a failure during runtime, depending on the privileges of the owner of the target schema and the existence of the source schema."

The work-around

Since re-importing and remapping existing test schemas is part of our daily tasks, I have developed a bash script which follows these steps:

In Export:
1. Create a list of the user-defined functions that are used in virtual columns and, following their dependencies, use DBMS_METADATA to generate a SQL script that can recreate those functions with no reference to the schema name (a sketch of this step follows the Export list).

2. Create a list of tables whose virtual columns rely on the exported functions, and (using DBMS_METADATA, again) build a schema-independent SQL script to recreate them.

3. You also need to build a script that grants the new user/schema quotas on the tablespaces that will hold the tables concerned.
4. Export as usual, using EXPDP.
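
As an illustration of step 1, here is a minimal sketch (not the actual script) of pulling schema-independent DDL for the functions with DBMS_METADATA; it assumes the EMIT_SCHEMA transform parameter is available in your release, and the dependency query is deliberately simplified:

SET LONG 100000 PAGESIZE 0

BEGIN
  -- terminate each generated statement and strip the schema name from the DDL
  DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM, 'SQLTERMINATOR', TRUE);
  DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM, 'EMIT_SCHEMA', FALSE);
END;
/

-- functions referenced by tables in this schema (e.g. through virtual columns)
SELECT DBMS_METADATA.GET_DDL('FUNCTION', f.object_name)
FROM (SELECT DISTINCT referenced_name AS object_name
      FROM user_dependencies
      WHERE type = 'TABLE'
      AND referenced_type = 'FUNCTION') f;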

In Import:
1. Create the new schema, grant all the privileges and quotas according to the source schema.
2. Use the script created in step 1, above, to create the functions.
3. Use the script created in step 2, above, to create the concerning tables.
4. IMPDP everything EXCLUDING the tables mentioned in the previous step. Ignore the "already exists" messages during the process.
5. IMPDP with DATA_ONLY for the tables mentioned in the previous step.

IMPDP is done in two passes because the tables pre-created in step 3 of the Import phase would, during a normal import, raise "already exists" errors, which would prevent their data from being imported.

Note: The method above works only if -like us- the tables concerned do not hold any referential constraints to other tables (our "solution" is a data warehouse and the tables concerned are fact tables whose integrity constraints -if any- are enforced through the application).

If they did, you might have to make sure that their "child" tables -if any- are imported after them.

Final Note

The same problem can occur with function-based indexes as well, and it does not seem to be an issue that Oracle DBAs can overlook.

Virtual columns are extremely helpful but, as Matthias Rogel explains in an example, their implementation has not been without issues.

Fortunately, this time -according to an Oracle insider- the Data Pump team is aware of the bug.

There's still hope!

Sunday, 8 December 2013

Real World Performance; A course in its infancy

Clarity is the most vital element of the "learning" process.

If both parties involved in the process (i.e. teachers and students) do not have a clear idea about the subjects of study, the ideal target audience, and the presentation methods, the ultimate goal -effective knowledge transfer, that is- will not be fully realized.

At this moment (December 2013), the Real World Performance training sessions offered by Oracle do not seem to be clear about some of these factors.

Belated disclaimer: this is a long post! But if you are an anti-corporation (specifically Oracle) kind of person, you will find some interesting material along the way!

The Target Audience

The 3.5-day course was designed to cover three general areas -SQL, OLTP, and Data Warehousing- so that the audience could learn to "apply the correct performance techniques to get the best Oracle Database and Exadata performance."

Although I could not find a member of the audience who assumed Exadata would not be central to this course, not even a five-minute compact presentation of the Exadata architecture was offered; and that's despite the fact that the level of exposure to Exadata was one of the earliest questions asked of the audience, to which some -including yours truly- responded: "not much."

Two days after the course, I'm still struggling to work out whether the course designers knew whom they wanted as an audience, and what technical abilities/qualities their ideal audience should have.

Subjects of Study

The SQL part started with clarifying different elements in the structure of a SELECT statement, followed by joins and analytical/window functions, and then went straight into an examination of a parallel query's execution plan, while completely skipping an explanation of how to interpret an execution plan.

(How many people are unclear on what "Row Source" means, yet have a clear knowledge of how to interpret an execution plan?!)

A presentation titled "Why my SQL is slow" followed, with emphasis on the importance of the CBO having access to correct cardinality estimates, choosing the correct degree of parallelism, knowing when to use indexes and histograms, the effect of using flash temp, choosing the right PGA size, picking the correct parallel distribution method (see Parallel Execution), seeding column usage, and using extended stats!

All in about an hour (!), with not-so-clear presentation slides (see Demo Tools). The presentation that followed, on "SQL Tuning tools" (see A Presenter's Cardinal Sin, below), was definitely not the highlight of the course!

OLTP

Graham Wood's presentation on OLTP had a major point of emphasis, based on the graph presented below:

[Graph: min/max/avg number of transactions against number of processes per CPU (Copyright Oracle, 2012)]

The test case clearly showed that increasing the number of parallel threads hitting the DB leads to a reduction in throughput, not an increase => control the number of connections from the middle tier to the database.

(This often-overlooked point was also covered in a presentation by Andrew Holdsworth -known as "The Boss" by the Real World Performance team members!)

Graham Wood warned against using dynamic connection pools, and referred to the following as possible culprits in poor-performance scenarios (a diagnostic sketch follows the list):
  • Excessive logon/logout
  • Invalid SQL (leads to "SQL*Net Break/Reset")
  • Cursor leaks (if you need to increase "open_cursors", something is wrong)
  • Session leaks (exceptions happen, but middle tier doesn't close session => high # of idle sessions, free mem is reduced, idle/leaked session might hold a lock ...)
  • Lock leaking (side effect of session leak, no rollback/commit in exception handler)
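
As a quick illustration (not material from the course), the kind of check the last three points call for can be as simple as looking for long-idle user sessions, which may well be leaked connections:

-- summarise user sessions and how long the most idle one has been sitting there
SELECT username, status,
       COUNT(*) AS sessions,
       ROUND(MAX(last_call_et)/60) AS max_idle_minutes
FROM v$session
WHERE type = 'USER'
GROUP BY username, status
ORDER BY max_idle_minutes DESC;
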
Data Warehousing

Although the Real World Performance team admitted that this was the first time the subject had been included in the course, Mike Hallas made a great presentation out of it.

He was knowledgeable and experienced, so much so that you wished he had an alternative outlet (blog, twitter, etc.) to share his experience outside of the framework of OOW or Oracle's training courses.

He compared row-by-row, array-based, home-grown-parallelism, and set-based processing during ETL/ELT operations.

Based on the available slides, though, it appears that the course simply ran out of time to talk specifically about different indexing strategies for OLTP and on Exadata, and about partitioning pitfalls.

Parallel Execution

This was the ever-present concept throughout the course, and rightly so, considering the architecture of Exadata and its capabilities, and also the globally dominant thirst to achieve 100% efficiency from any Oracle box.

A unique real-world demonstration was staged with members of the audience collaborating, to show the difference between the Hash and Broadcast distribution methods used between parallel producers and consumers (while Bjorn Engsig acted as Query Coordinator!), although it could have been rounded off by tying it back to an actual query.

The clear distinction between the use cases of the two methods, and how to influence either, was one of the main plus points of the course.

But considering the level of emphasis on the subject, perhaps a compact and objective presentation on the following points would be beneficial (a sketch of the kind of statements involved follows the list):
  • How to enable parallel processing for an object
  • How to ensure read/write in parallel
  • What could disable/block parallel operations
  • Effects of parallel write on compression
  • Clarification on parallel operations and logging
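
For context, here is a minimal sketch of the kind of statements those points revolve around; the table names are invented for illustration and this is not material from the course:

ALTER TABLE sales PARALLEL 8;          -- declare a default degree of parallelism on the object
ALTER SESSION ENABLE PARALLEL DML;     -- parallel DML has to be enabled per session
ALTER TABLE sales NOLOGGING;           -- direct-path parallel loads can then generate minimal redo

INSERT /*+ APPEND PARALLEL(s, 8) */ INTO sales s
SELECT /*+ PARALLEL(st, 8) */ *
FROM sales_staging st;
COMMIT;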

Impressive Demo Tools

The presenters of Oracle's Real World Performance group were using a number of fantastic demo platforms (apparently home-grown) to showcase different performance tuning scenarios.

Graham Wood used one of the more effective ones for a presentation on how to balance the workload, throughput, and the number of connections to the DB, during the day of talks on OLTP.


A very informative presentation on the factors contributing to "Why is my SQL Slow" used a less effective platform, mainly because the SQL Monitor output had much more detail and was not completely readable/comprehensible at times, simply because the words and characters were too small.

[Screenshot: presentation demo tool]

Mike Hallas, who served as the main presenter of the course, used another of the more effective platforms to present a comparison of different operations on a data warehouse.


A Presenter's Cardinal Sin!

The fact that you would never hear a sentence like "here's the boring bit!" in an OakTable presentation proves that you don't need to have spent a decade in journalism -as I have- to know that you should NEVER EVER under-sell your material!

And that's the sentence we heard when the talk on "SQL Tuning tools" began.

Such a sentence is an indication of a lack of enthusiasm, and if you do not feel enthusiastic enough about a subject you're presenting, then why are you presenting it?!

That became a double blow for members of the audience like me, who were expecting and hoping that the course would train a DBA's hands as much as his/her mind, and that a sufficient amount of time would be spent trying our hands at different tuning tools under guidance from the Oracle guys.

Perhaps the false expectations were due to a lack of clarity from Oracle regarding which concepts were to be covered in the hands-on sessions and how they were to be covered.

But the "SQL Tuning tools" remained "the boring bit" till the end.

Observations of an Oracle newbie

All in all, the course seemed to cover too many subjects in too little time, at least for hands-on experiments.

Also, the distinction between the general behaviour of an Exadata system and a normal Oracle box was drawn in an ad-hoc way rather than a systematic one; there was no plan to present a "these operations (and only these) would be different on Exadata" talk.

There was no mention of Exadata pitfalls, or "what Exadata isn't good for." (Am I too naive to expect such a discussion in an official Oracle training course? By the way, if you're interested in the subject, you can watch this OakTable presentation by Jonathan Lewis.)

And the final observation (anti-Oracle corporation readers would love this):

EVERYBODY quotes "Tom's" quotes, expressions ("slow-by-slow"), jokes, stories, ... and only "Tom's."

Or maybe I'm wrong.

Thursday, 7 November 2013

DBA_SEGMENTS Vs. DBA_EXTENTS for calculating data size

If you are dealing with a single-datafile tablespace and you need to know the size of a specific chunk of data, use DBA_SEGMENTS!

Here's a simple experiment on tablespaces with 32K Blocks, on ASM and 11.2.0.3, and PARALLEL_DEGREE_POLICY set to MANUAL:

SELECT /*+ PARALLEL(DE,4) */ SUM(BYTES)
FROM DBA_EXTENTS DE
WHERE TABLESPACE_NAME='ACTIX_DATAPART_0001';
-- Result: 8346468352 IN 162.67 SECs

SELECT /*+ PARALLEL(DS,4) */ SUM(BYTES)
FROM DBA_SEGMENTS DS
WHERE TABLESPACE_NAME='ACTIX_DATAPART_0001';
-- Result: 8346468352 IN 3.687 SECs
The SQL statements were run in exactly the order displayed, but to make sure that caching of the underlying objects (like the SYS_OBJECTS view or any other SYS schema object) was not contributing to the timings, the shared pool was flushed and the statements were run in reverse order. Same results.

The tablespace used has two datafiles, but the same statements were also tried on single-file tablespaces and, again, the same results.

Obviously DBA_EXTENTS allows you to query based on FILE_ID, which -unfortunately- is crucial when you are interested in the segments/extents that sit in one particular file. In my case, I need it for a "datafile defragmentation" procedure, which requires the BLOCK_ID of the extents; hence DBA_EXTENTS.

Side note: SUM(BYTES) returns the total size of the blocks allocated to the segments, not the actual size of the data they hold; although I'm not sure whether that matters or not!

Post-publish newbie observation: I just noticed that DBA_SEGMENTS is entirely based on SYS.SYS_DBA_SEGS, whose HEADER_FILE column represents the FILE_ID.

Also, the BYTES column in DBA_EXTENTS/DBA_SEGMENTS is calculated from SYS_DBA_SEGS using the following (ugly-looking, but time-saving!) formula:

decode(bitand(segment_flags, 131072), 131072,
       blocks,
       (decode(bitand(segment_flags, 1), 1,
               dbms_space_admin.segment_number_blocks(tablespace_id, relative_fno,
                                                      header_block, segment_type_id,
                                                      buffer_pool_id, segment_flags,
                                                      segment_objd, blocks),
               blocks))) * blocksize

Regarding the EXTENT_ID: what you need is in sys.x$ktfbue.ktfbuebno; that is, if you need a precise BLOCK_ID. But if -like me, using a 5MB margin for resizing datafiles- a more-or-less precise BLOCK_ID does the job, then your answer lies in SYS.SYS_DBA_SEGS.HEADER_BLOCK.

It is either the same as the BLOCK_ID you'd get from sys.x$ktfbue.ktfbuebno and sys.uet$.block# (that is, DBA_EXTENTS.BLOCK_ID) or a mere 2 blocks higher!
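
As a quick illustration of that claim, here is a sketch comparing DBA_SEGMENTS.HEADER_BLOCK with the BLOCK_ID of each segment's first extent (using the same tablespace as above; the column list is trimmed for readability):

SELECT s.segment_name,
       s.partition_name,
       s.header_block,
       e.block_id AS first_extent_block_id,
       s.header_block - e.block_id AS difference
FROM dba_segments s
JOIN dba_extents e
  ON e.owner = s.owner
 AND e.segment_name = s.segment_name
 AND NVL(e.partition_name, '-') = NVL(s.partition_name, '-')
 AND e.extent_id = 0
WHERE s.tablespace_name = 'ACTIX_DATAPART_0001';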

The king is dead! All hail the king!

Sunday, 3 November 2013

Oracle and releasing documentation: Do it, or don't!

Something is fundamentally wrong here!

I was watching a great presentation by Carlos Sierra on Adaptive Cursor Sharing, similar to the one he gave at OakTable 2013, where I was present (yes, I chose Carlos' talk over Jonathan Lewis' on 12c histograms! What are you gonna do about it?!), just to remind myself of the critical points made about the subject.

If you have seen the presentation (around 01:05:45+) or were present at his talk, you'll know that there is a certain point in the presentation at which EVERYBODY seems to be confused.

The confusion is about how the ranges of row counts that define the three buckets are calculated. Carlos has explicitly acknowledged the issue in his article: "Observations on a test case built on 11.2.0.3 seem to indicate that boundaries for these 3 buckets are: less than 1,000; between 1,000 and 1,000,000; and more than 1,000,000."

Carlos is a person who rightly praises OakTable members using a definition that the members -rightly, again- attribute to themselves; i.e. Oracle Scientists. But isn't it a bit strange to call people who study the behaviour of a man-made entity scientists?! Would you call the greatest mechanics you know (for example) Ford Scientists?!

And that brings me to my point: I was privileged to be among the members of this group in the Summer of 2013, and I know that "Scientist" is only one of many titles you could choose to describe them with. But the absurdity of the title arises from the unprofessional behaviour of Oracle itself.

I have absolutely no problem with Oracle keeping its documentation secret; after all, they've spent millions of man-hours developing a capable piece of software. But you have to make a choice here: either -for whatever reason you choose to do so- you release ALL your documentation (the likes of which you can find here), or you don't release any of it.

The fact that many professionals, like OakTable members, have to do research to find out about the behaviour of a man-made product (as opposed to a natural phenomenon) whose creator possesses the information about its inner workings can only make an independent observer more inclined to judge the creator as "yet another corporation" or -in a best-case scenario- as "unable to establish an association between the size of the company and the quality of its product," when it could instead be regarded as an actual player in the field of data science.

Ambiguities such as the ones about ACS can only bolster the position of those who might ask: is Oracle releasing its documentation -the way it has- merely as a PR move?

Sunday, 20 October 2013

How to start a career in Oracle at the age of 38

Getting back to IT could be a "bad decision" for me.

After all, I could deceive myself with the illusion of "being an established journalist in a globally known media organisation," the same as many of my previous colleagues did and still do.

But you cannot be dishonest with yourself; being "naturally" good at something is not the best that one can be. There has to be an element of "effort to fulfil a potential" involved.

So, with the help of a true friend, I left it all at the age of 38 and started at a telecom software company as an Oracle Developer/Admin ... well, "to-be admin" to be precise!

I have to say that it didn't start out of the blue; I used to be an Oracle developer (8i) just before 2000 and I have an MSc in Database Systems (2009), so it wasn't a field that I was completely out of touch with.

But in one sentence: it was extremely hard; so much so that it dragged depression's foot into my life and started to threaten my personal life. And that was despite being blessed with great colleagues and understanding managers.

What made it hard?

1. Myself! Obviously I was trying too hard to make up the lost ground in learning, so I felt obliged to pay attention to EVERY single thing that was being said around me. Also, coming from the public sector, I was wrongly convinced that I'd be kicked out ASAP if I under-performed.

2. Having extremely talented and intelligent colleagues, and the fact that I wanted to perform like them there and then, even though I had been away from Oracle for a decade.

3. Not getting enough clear communication from my managers about what was expected of me, mainly due to the pressure of -always imminent- release deadlines. I could also blame the first point (above) for my not being able to fully absorb the communication that did come from my manager, which generally conveyed a message of sympathy.

4. Oracle is not the most straightforward software to deal with. A HUGE number of factors influence its performance, and the presence of mind to pinpoint "the" main one in any circumstance stems mainly from experience rather than intelligence.

How can I survive?

1. Refrain from setting vague, all-encompassing goals (such as "being a great DBA") for yourself! Start with small, clear goals, coordinated with your manager or senior members of your team.

2. Serialize your tasks. What you need most at the start of this career is focus, and trying to do more than one thing at a time will deprive you of that. Just remember that being focused is not generally easy when you're 38, anyway!

3. Communicate! Communicate! Communicate! Try to clarify what's being asked of you, and find out if there is a "preferred" way in the company of doing such a task. Try to establish the appropriate/expected deadlines explicitly, and eventually ask for feedback; whether it's task by task, day by day, or week by week.

4. If you haven't claimed in your job interview that you've got "10 years of Oracle DBA experience" while all you've done is install an Oracle database once or twice, you do not need to compete with your more experienced colleagues. Truly experienced Oracle people tend to be humble and decent personalities (I had a great experience at OakTable World 2013 -thanks to Kyle Hailey!- and I shouldn't forget my own colleagues). Ask them to guide you, and they will. In general, it is extremely rare to come across genuinely knowledgeable people who are jerks.

5. Do not expect yourself to know everything about Oracle; in fact, the existence of ORA-07445s tells you that even Oracle doesn't know everything about Oracle! Looking at the manner of the big names in Oracle teaches you that even they (some with more than three decades of experience) still regard themselves as students of Oracle.

Be prepared to be a scientist; not a technician!

P.S. Do not forget about the great Oracle community. What inspired me to revisit my decision to change career paths was a tweet by Jonathan Lewis, who asked:


Those are the questions that keep your brain alive, and remind you that you actually enjoy dealing with this thing the world calls "Oracle."

Thanks Jonathan Lewis.