Tuesday, June 17, 2008

Tuning PL/SQL File I/O

This is a good article from http://www.oracle-developer.net/display.php?id=425

tuning pl/sql file i/o
Unloading Oracle data to flat-files is still very common. There are numerous "unloader" utilities on the web for this purpose and there are also many related topics in the Oracle forums. Methods for writing data to flat-files vary, but strangely Oracle has never provided a tool to do this for us. The tools that are provided by Oracle (export, Data Pump, writeable external tables) write data quickly, but in a proprietary format, so for true ASCII flat-files, we have to resort to our own homegrown utilities.
There are several ways to generate flat-files from data stored in Oracle. We can use Perl, C, C++, Pro*C, Java and other languages that interface with Oracle but it is far more common for us to use SQL or PL/SQL to perform this operation. For example, sqlplus can spool data to flat-files very quickly, but it is a command utility and not part of the database. This makes it generally unsuited to being part of a robust database application.
UTL_FILE, on the other hand, is a built-in package around which we can very easily base our unloader utilities. The main issue with UTL_FILE, however, is that it is a relatively slow tool for unloading data, but in this article we will demonstrate some methods for speeding this up. We will see that with simple techniques we can achieve significant performance gains for our data unloads.
setup
We are going to test several versions of a standard data-unload routine. We will run these on an 11g Release 1 database, but all examples are compatible with 10g and most of them with 9i. The data to be dumped to flat-file will be sourced from a single table of 1 million rows, which we will create as follows.
SQL> CREATE TABLE source_data
2 ( x, y, z
3 , CONSTRAINT source_data_pk
4 PRIMARY KEY (x,y,z)
5 )
6 ORGANIZATION INDEX
7 AS
8 SELECT ROWNUM AS x
9 , RPAD('x',50,'x') AS y
10 , RPAD('y',50,'y') AS z
11 FROM dual
12 CONNECT BY ROWNUM <= 1000000;
Table created.
We have created an IOT (index-organized table) to enable us to fully cache the data and eliminate any physical I/O. Each test we run will therefore incur roughly the same I/O costs. Using Autotrace, we will run a couple of full scans of this data until it is all in the buffer cache, as follows.
SQL> set autotrace traceonly statistics
SQL> SELECT * FROM source_data;
1000000 rows selected.
Statistics
----------------------------------------------------------
88 recursive calls
0 db block gets
17779 consistent gets
15647 physical reads
0 redo size
108077685 bytes sent via SQL*Net to client
16231 bytes received via SQL*Net from client
2001 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1000000 rows processed
SQL> SELECT * FROM source_data;
1000000 rows selected.
Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
17668 consistent gets
0 physical reads
0 redo size
108077883 bytes sent via SQL*Net to client
16231 bytes received via SQL*Net from client
2001 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1000000 rows processed
We have reduced the physical I/O as intended, so to complete our setup we will create an Oracle directory below. This is where we will write our flat-files.
SQL> CREATE DIRECTORY dump_dir AS '/u01/app/oracle/dir';
Directory created.
baseline utl_file performance
Before we can tune our code, we need to know our baseline performance. In this case, it will be the time it takes to unload 1 million rows to a flat-file using UTL_FILE.PUT_LINE calls, using the following PL/SQL block.
SQL> DECLARE
2
3 v_file UTL_FILE.FILE_TYPE;
4 v_name VARCHAR2(128) := 'utl_file_untuned.txt';
5 v_lines PLS_INTEGER := 0;
6
7 BEGIN
8
9 v_file := UTL_FILE.FOPEN('DUMP_DIR',v_name,'W',32767);
10
11 FOR r IN (SELECT x || ',' || y || ',' || z AS csv
12 FROM source_data)
13 LOOP
14 UTL_FILE.PUT_LINE(v_file, r.csv);
15 v_lines := v_lines + 1;
16 END LOOP;
17
18 UTL_FILE.FCLOSE(v_file);
19
20 DBMS_OUTPUT.PUT_LINE('File=' || v_name || '; Lines=' || v_lines);
21
22 END;
23 /
File=utl_file_untuned.txt; Lines=1000000
PL/SQL procedure successfully completed.
Elapsed: 00:01:06.53
Our baseline performance is approximately 66 seconds to write the data to file. Note the following:
Line 11: each record is pre-formatted with a delimiter in the SQL cursor for convenience. A dump utility would usually accept the delimiter as a parameter and concatenate a record piecemeal (especially if using dynamic SQL);
Lines 11-13: we are using an implicit cursor-for-loop. From Oracle 10g Release 1, this will be optimised by the PL/SQL compiler into bulk collects with an array-size of 100 rows. All examples in this article will use either implicit or explicit bulk fetches of the same size. Note that readers on 9i databases will need to convert this and other implicit cursor examples to use explicit array fetches with BULK COLLECT to ensure that all examples are comparable (a sketch of such a conversion is shown below).
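For 9i readers, a minimal sketch of such a conversion might look like the block below. It is an assumed rewrite of the baseline, with the 100-row LIMIT chosen to match the 10g compiler optimisation; the file name is illustrative.
DECLARE
   TYPE csv_ntt IS TABLE OF VARCHAR2(32767);
   v_rows  csv_ntt;
   v_file  UTL_FILE.FILE_TYPE;
   v_lines PLS_INTEGER := 0;
   CURSOR c_src IS
      SELECT x || ',' || y || ',' || z AS csv
      FROM   source_data;
BEGIN
   v_file := UTL_FILE.FOPEN('DUMP_DIR', 'utl_file_9i.txt', 'W', 32767);
   OPEN c_src;
   LOOP
      FETCH c_src BULK COLLECT INTO v_rows LIMIT 100;  -- explicit array fetch of 100 rows
      FOR i IN 1 .. v_rows.COUNT LOOP
         UTL_FILE.PUT_LINE(v_file, v_rows(i));
      END LOOP;
      v_lines := v_lines + v_rows.COUNT;
      EXIT WHEN c_src%NOTFOUND;
   END LOOP;
   CLOSE c_src;
   UTL_FILE.FCLOSE(v_file);
   DBMS_OUTPUT.PUT_LINE('File=utl_file_9i.txt; Lines=' || v_lines);
END;
/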
tuning (1): buffering to reduce utl_file i/o
Our baseline code has two repetitive operations. First there is an incremented counter which is used to instrument the example. This has a negligible impact overall. More importantly there are 1 million UTL_FILE I/O operations and these have a far more significant impact. Tracing the baseline example with the PL/SQL Profiler and the new PL/SQL Hierarchical Profiler clearly shows that 50% of the elapsed time is accounted for by our 1 million UTL_FILE.PUT_LINE calls.
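As an aside, a minimal sketch of how the hierarchical profiler might be wrapped around the baseline block is shown below. This assumes DBMS_HPROF is available (11g) and that the DUMP_DIR directory can also receive the trace file; the trace file name is illustrative.
BEGIN
   DBMS_HPROF.START_PROFILING('DUMP_DIR', 'utl_file_untuned.trc');
END;
/
-- ... run the baseline unload block here ...
BEGIN
   DBMS_HPROF.STOP_PROFILING;
END;
/
-- the raw trace can then be analysed with DBMS_HPROF.ANALYZE or the plshprof command-line utility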
Our first technique for tuning this, therefore, will be to reduce the number of I/O operations by buffering the output data. This is quite a simple method. We will use a local variable to buffer up to 32K of data before writing it to file, as follows.
SQL> DECLARE
2
3 v_file UTL_FILE.FILE_TYPE;
4 v_buffer VARCHAR2(32767);
5 v_name VARCHAR2(128) := 'utl_file_buffered.txt';
6 v_lines PLS_INTEGER := 0;
7 c_eol CONSTANT VARCHAR2(1) := CHR(10);
8 c_eollen CONSTANT PLS_INTEGER := LENGTH(c_eol);
9 c_maxline CONSTANT PLS_INTEGER := 32767;
10
11 BEGIN
12
13 v_file := UTL_FILE.FOPEN('DUMP_DIR',v_name,'W',32767);
14
15 FOR r IN (SELECT x || ',' || y || ',' || z AS csv
16 FROM source_data)
17 LOOP
18
19 IF LENGTH(v_buffer) + c_eollen + LENGTH(r.csv) <= c_maxline THEN
20 v_buffer := v_buffer || c_eol || r.csv;
21 ELSE
22 IF v_buffer IS NOT NULL THEN
23 UTL_FILE.PUT_LINE(v_file, v_buffer);
24 END IF;
25 v_buffer := r.csv;
26 END IF;
27
28 v_lines := v_lines + 1;
29
30 END LOOP;
31
32 UTL_FILE.PUT_LINE(v_file, v_buffer);
33 UTL_FILE.FCLOSE(v_file);
34
35 DBMS_OUTPUT.PUT_LINE('File=' || v_name || '; Lines=' || v_lines);
36
37 END;
38 /
File=utl_file_buffered.txt; Lines=1000000
PL/SQL procedure successfully completed.
Elapsed: 00:00:30.06
The algorithm is reasonably self-explanatory, but we will describe it anyway:
Lines 19-20: UTL_FILE has a maximum write-size of 32,767 bytes. If there is enough room in the buffer for a new record to be added (including newline), then we simply append the new data together with a newline;
Lines 21-26: if we can't fit the new record into the buffer, or the buffer is null (in which case the buffer-length calculation will also evaluate to null), then we must either flush it or start it. The buffer will only be null on the very first entry to the loop, so we make it the very last test;
Lines 22-24: when the buffer is too full to accept any new data, we need to write it to file with UTL_FILE;
Line 25: regardless of whether the buffer has been written or hasn't been started yet, we reset/start it with the current data record;
Line 32: once the loop is complete, we write the remaining buffered data to the flat-file.
This technique is simple yet extremely effective. We have reduced our elapsed time to less than half the baseline: in other words, this unloading routine is now twice as fast (for this dataset). With an average record size of 105 bytes, we can buffer approximately 310 records, which means we reduce our UTL_FILE calls to roughly 3,200.
Given the effectiveness of the buffering technique, we will continue to use it for the remaining examples in this article.
a note on utl_file buffering
It is possible to replicate this buffering algorithm using the UTL_FILE.PUT, UTL_FILE.NEW_LINE and UTL_FILE.FFLUSH procedures, but a test with our sample data took over 85 seconds, which is considerably slower than our baseline. For this reason, the example is omitted from this article, but is included in the accompanying download.
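For reference, a minimal sketch of how that variant's main loop might look is given below. This is an assumed reconstruction of the omitted example, flushing every 1,000 lines for illustration; the declarations are as in the baseline block.
FOR r IN (SELECT x || ',' || y || ',' || z AS csv FROM source_data)
LOOP
   UTL_FILE.PUT(v_file, r.csv);   -- write without a newline; UTL_FILE buffers internally
   UTL_FILE.NEW_LINE(v_file);     -- terminate the line
   v_lines := v_lines + 1;
   IF MOD(v_lines, 1000) = 0 THEN
      UTL_FILE.FFLUSH(v_file);    -- flush periodically rather than per record
   END IF;
END LOOP;
UTL_FILE.FFLUSH(v_file);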
tuning (2): using temporary clobs
From Oracle 10g onwards, it is possible to write a CLOB to a file with a single call, using the DBMS_XSLPROCESSOR.CLOB2FILE procedure. In the following example, we will prepare a temporary CLOB with our data instead of writing it with UTL_FILE. When all source data has been added to the CLOB, we will write it to a flat-file in a single call.
SQL> DECLARE
2
3 v_file CLOB;
4 v_buffer VARCHAR2(32767);
5 v_name VARCHAR2(128) := 'clob2file_buffered.txt';
6 v_lines PLS_INTEGER := 0;
7 v_eol VARCHAR2(2);
8 v_eollen PLS_INTEGER;
9 c_maxline CONSTANT PLS_INTEGER := 32767;
10
11 BEGIN
12
13 v_eol := CASE
14 WHEN DBMS_UTILITY.PORT_STRING LIKE 'IBMPC%'
15 THEN CHR(13) || CHR(10)
16 ELSE CHR(10)
17 END;
18 v_eollen := LENGTH(v_eol);
19
20 DBMS_LOB.CREATETEMPORARY(v_file, TRUE);
21
22 FOR r IN (SELECT x || ',' || y || ',' || z AS csv
23 FROM source_data)
24 LOOP
25
26 IF LENGTH(v_buffer) + v_eollen + LENGTH(r.csv) <= c_maxline THEN
27 v_buffer := v_buffer || v_eol || r.csv;
28 ELSE
29 IF v_buffer IS NOT NULL THEN
30 DBMS_LOB.WRITEAPPEND(
31 v_file, LENGTH(v_buffer) + v_eollen, v_buffer || v_eol
32 );
33 END IF;
34 v_buffer := r.csv;
35 END IF;
36
37 v_lines := v_lines + 1;
38
39 END LOOP;
40
41 IF LENGTH(v_buffer) > 0 THEN
42 DBMS_LOB.WRITEAPPEND(
43 v_file, LENGTH(v_buffer) + v_eollen, v_buffer || v_eol
44 );
45 END IF;
46
47 DBMS_XSLPROCESSOR.CLOB2FILE(v_file, 'DUMP_DIR', v_name);
48 DBMS_LOB.FREETEMPORARY(v_file);
49
50 DBMS_OUTPUT.PUT_LINE('File=' || v_name || '; Lines=' || v_lines);
51
52 END;
53 /
File=clob2file_buffered.txt; Lines=1000000
PL/SQL procedure successfully completed.
Elapsed: 00:00:28.65
The CLOB-specific code is self-explanatory (perhaps with the exception of the end-of-line character assignment on lines 13-17, which differs for Windows: UTL_FILE manages the port-specific end-of-line conversions for us, but with CLOBs we must manage this ourselves).
Of particular interest is the DBMS_XSLPROCESSOR call on line 47, which is our only write operation to the destination flat-file. We can see overall that this technique is similar in performance to our buffered UTL_FILE mechanism (the CLOB method was slightly quicker). We therefore have an alternative method for writing data, but there will be additional costs associated with using CLOBs (for example, temporary tablespace and buffer cache). If the volume of data to be dumped is high, then this method might put too much stress on our temporary tablespace and cause problems for other users (large sort operations, hash joins, global temporary tables etc). Care should therefore be taken when using this method.
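If the CLOB method is used, it can be worth keeping an eye on temporary space while the dump runs; a simple monitoring query along these lines (an assumed sketch, requiring access to the V$ views) could be used.
SELECT s.sid
,      s.username
,      u.tablespace
,      u.segtype
,      ROUND(u.blocks * t.block_size / 1024 / 1024) AS mb_used
FROM   v$tempseg_usage u
,      v$session       s
,      dba_tablespaces t
WHERE  u.session_addr = s.saddr
AND    u.tablespace   = t.tablespace_name;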
tuning (3): parallel execution with pipelined functions
Our final technique for quicker data-dumping from PL/SQL will be to split the workload and have the data written in separate streams. We can do this quite simply by using multiple sessions, with each reading and dumping different ranges of source data (Tom Kyte calls this "DIY parallelism"). A simpler (and possibly more elegant) method is to use Oracle's parallel query (PQ) mechanism and create a parallel pipelined function to split the source data and dump it using multiple PQ slaves.
For this test, we will create a parallel pipelined function that writes the source data to flat-file and returns a single summary record per session. This record will include details of the file and the session that wrote it. It is assumed that readers are familiar with the concept of parallel pipelined functions (some background reading is available if required).
required types
We will begin by creating an object type to define the structure of our pipelined function's return data, as follows.
SQL> CREATE TYPE dump_ot AS OBJECT
2 ( file_name VARCHAR2(128)
3 , no_records NUMBER
4 , session_id NUMBER
5 );
6 /
Type created.
Pipelined functions return collections of data, so we will create a nested table type based on our new object.
SQL> CREATE TYPE dump_ntt AS TABLE OF dump_ot;
2 /
Type created.
We are now ready to create our parallel pipelined function. We will create and test two versions: one with UTL_FILE and one with a CLOB, because there has been little to distinguish them in the timed comparisons so far.
tuning (3a): parallel utl_file solution
We will begin by creating a parallel pipelined function that uses UTL_FILE to write data, as follows.
SQL> CREATE FUNCTION parallel_dump (
2 p_source IN SYS_REFCURSOR,
3 p_filename IN VARCHAR2,
4 p_directory IN VARCHAR2
5 ) RETURN dump_ntt
6 PIPELINED
7 PARALLEL_ENABLE (PARTITION p_source BY ANY) AS
8
9 TYPE row_ntt IS TABLE OF VARCHAR2(32767);
10 v_rows row_ntt;
11 v_file UTL_FILE.FILE_TYPE;
12 v_buffer VARCHAR2(32767);
13 v_sid NUMBER;
14 v_name VARCHAR2(128);
15 v_lines PLS_INTEGER := 0;
16 c_eol CONSTANT VARCHAR2(1) := CHR(10);
17 c_eollen CONSTANT PLS_INTEGER := LENGTH(c_eol);
18 c_maxline CONSTANT PLS_INTEGER := 32767;
19
20 BEGIN
21
22 SELECT sid INTO v_sid FROM v$mystat WHERE ROWNUM = 1;
23 v_name := p_filename || '_' || TO_CHAR(v_sid) || '.txt';
24 v_file := UTL_FILE.FOPEN(p_directory, v_name, 'w', 32767);
25
26 LOOP
27 FETCH p_source BULK COLLECT INTO v_rows LIMIT 100;
28
29 FOR i IN 1 .. v_rows.COUNT LOOP
30
31 IF LENGTH(v_buffer) + c_eollen + LENGTH(v_rows(i)) <= c_maxline THEN
32 v_buffer := v_buffer || c_eol || v_rows(i);
33 ELSE
34 IF v_buffer IS NOT NULL THEN
35 UTL_FILE.PUT_LINE(v_file, v_buffer);
36 END IF;
37 v_buffer := v_rows(i);
38 END IF;
39
40 END LOOP;
41
42 v_lines := v_lines + v_rows.COUNT;
43
44 EXIT WHEN p_source%NOTFOUND;
45 END LOOP;
46 CLOSE p_source;
47
48 UTL_FILE.PUT_LINE(v_file, v_buffer);
49 UTL_FILE.FCLOSE(v_file);
50
51 PIPE ROW (dump_ot(v_name, v_lines, v_sid));
52 RETURN;
53
54 END parallel_dump;
55 /
Function created.
Before we run a timed test using this function, note the following:
Line 7: we parallel-enable the function and state that Oracle can divide the source data amongst the PQ slaves any way it wishes;
Lines 22-23: each PQ slave will write to its own file, so we append the SID to the filename to make each one unique;
Lines 26-27: the source cursor will be passed as a refcursor parameter, so we must explicitly code a bulk fetch of 100 records to emulate the other examples in this article (i.e. those that benefit from 10g PL/SQL optimisation);
Line 51: on completion of the flat-file, we return a single record summarising the file details.
We will execute this function, as follows.
SQL> SELECT *
2 FROM TABLE(
3 parallel_dump(
4 CURSOR(SELECT /*+ PARALLEL(s,4) */
5 x || ',' ||
6 y || ',' ||
7 z AS csv
8 FROM source_data s),
9 'utl_file_parallel_pipelined',
10 'DUMP_DIR'
11 )) nt;
FILE_NAME NO_RECORDS SESSION_ID
-------------------------------------------------- ---------- ----------
utl_file_parallel_pipelined_136.txt 190758 136
utl_file_parallel_pipelined_135.txt 192640 135
utl_file_parallel_pipelined_117.txt 288960 117
utl_file_parallel_pipelined_121.txt 327642 121
4 rows selected.
Elapsed: 00:00:10.26
This is impressive! We have reduced the elapsed time of our write operation from a baseline of 66 seconds to just 10 seconds. Of course, we now have four files instead of one, but we can easily append these files together with a script of some description (Perl, shell etc). Preferably, we would leave them as separate files and simply read them as though they were a single file by using an external table with an appropriate LOCATION setting (i.e. one that includes all four files).
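A minimal sketch of such an external table is given below. The LOCATION clause lists the four file names generated by the run above; these names change from run to run, so in practice the LOCATION would be amended after each dump.
CREATE TABLE source_data_xt
( x NUMBER
, y VARCHAR2(50)
, z VARCHAR2(50)
)
ORGANIZATION EXTERNAL
( TYPE ORACLE_LOADER
  DEFAULT DIRECTORY dump_dir
  ACCESS PARAMETERS
  ( RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ','
  )
  LOCATION ( 'utl_file_parallel_pipelined_136.txt'
           , 'utl_file_parallel_pipelined_135.txt'
           , 'utl_file_parallel_pipelined_117.txt'
           , 'utl_file_parallel_pipelined_121.txt' )
)
REJECT LIMIT UNLIMITED;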
tuning (3b): parallel temporary clob solution
To complete our tuning tests, we will re-create the parallel pipelined function to use the CLOB method (as noted earlier, this requires at least 10g).
SQL> CREATE OR REPLACE FUNCTION parallel_dump (
2 p_source IN SYS_REFCURSOR,
3 p_filename IN VARCHAR2,
4 p_directory IN VARCHAR2
5 ) RETURN dump_ntt
6 PIPELINED
7 PARALLEL_ENABLE (PARTITION p_source BY ANY) AS
8
9 TYPE row_ntt IS TABLE OF VARCHAR2(32767);
10 v_rows row_ntt;
11 v_file CLOB;
12 v_buffer VARCHAR2(32767);
13 v_sid NUMBER;
14 v_name VARCHAR2(128);
15 v_lines PLS_INTEGER := 0;
16 v_eol VARCHAR2(2);
17 v_eollen PLS_INTEGER;
18 c_maxline CONSTANT PLS_INTEGER := 32767;
19
20 BEGIN
21
22 v_eol := CASE
23 WHEN DBMS_UTILITY.PORT_STRING LIKE 'IBMPC%'
24 THEN CHR(13) || CHR(10)
25 ELSE CHR(10)
26 END;
27 v_eollen := LENGTH(v_eol);
28
29 SELECT sid INTO v_sid FROM v$mystat WHERE ROWNUM = 1;
30 v_name := p_filename || '_' || TO_CHAR(v_sid) || '.txt';
31
32 DBMS_LOB.CREATETEMPORARY(v_file, TRUE);
33
34 LOOP
35 FETCH p_source BULK COLLECT INTO v_rows LIMIT 100;
36
37 FOR i IN 1 .. v_rows.COUNT LOOP
38
39 IF LENGTH(v_buffer) + v_eollen + LENGTH(v_rows(i)) <= c_maxline THEN
40 v_buffer := v_buffer || v_eol || v_rows(i);
41 ELSE
42 IF v_buffer IS NOT NULL THEN
43 DBMS_LOB.WRITEAPPEND(
44 v_file, LENGTH(v_buffer) + v_eollen, v_buffer || v_eol
45 );
46 END IF;
47 v_buffer := v_rows(i);
48 END IF;
49
50 END LOOP;
51
52 v_lines := v_lines + v_rows.COUNT;
53
54 EXIT WHEN p_source%NOTFOUND;
55 END LOOP;
56 CLOSE p_source;
57
58 IF LENGTH(v_buffer) > 0 THEN
59 DBMS_LOB.WRITEAPPEND(
60 v_file, LENGTH(v_buffer) + v_eollen, v_buffer || v_eol
61 );
62 END IF;
63
64 DBMS_XSLPROCESSOR.CLOB2FILE(v_file, p_directory, v_name);
65 DBMS_LOB.FREETEMPORARY(v_file);
66
67 PIPE ROW (dump_ot(v_name, v_lines, v_sid));
68 RETURN;
69
70 END parallel_dump;
71 /
Function created.
We have already described the CLOB and parallel pipelined function techniques, so we will proceed with our timing test, as follows.
SQL> SELECT *
2 FROM TABLE(
3 parallel_dump(
4 CURSOR(SELECT /*+ PARALLEL(s,4) */
5 x || ',' ||
6 y || ',' ||
7 z AS csv
8 FROM source_data s),
9 'clob2file_parallel_pipelined',
10 'DUMP_DIR'
11 )) nt;
FILE_NAME NO_RECORDS SESSION_ID
-------------------------------------------------- ---------- ----------
clob2file_parallel_pipelined_200.txt 248504 200
clob2file_parallel_pipelined_196.txt 232768 196
clob2file_parallel_pipelined_198.txt 248192 198
clob2file_parallel_pipelined_192.txt 270536 192
4 rows selected.
Elapsed: 00:00:27.84
The CLOB technique does not scale at all. It takes approximately the same time in serial or parallel mode, meaning that we should probably avoid this technique and use UTL_FILE instead.

Tuesday, June 10, 2008

Important Oracle PL/SQL Interview Questions

I have taken more than 100 interviews for the Oracle PL/SQL developer position, and below are a few of the questions I asked in those interviews.


RE: What is Pragma EXCEPTION_INIT? Explain...

Pragma EXCEPTION_INIT allows you to handle an Oracle predefined error with your own named exception. It instructs the compiler to associate a specific exception name with an Oracle error code at compile time. This improves the readability of your program and lets you handle the error in your own way.
It should be declared in the DECLARE section.
example
declare
salary number;
FOUND_NOTHING exception;
Pragma exception_init(FOUND_NOTHING ,100);
begin
select sal into salary from emp where ename = 'ANURAG';
dbms_output.put_line(salary);
exception
WHEN FOUND_NOTHING THEN
dbms_output.put_line(SQLERRM);
end;


RE: What is Data Concurrency and Consistency?

Concurrency: how well multiple sessions can access the same data simultaneously.
Consistency: how consistent the view of the data is between and within multiple sessions, transactions or statements.



RE: What is bulk binding please explain me in brief ?

Bulk binds (BULK COLLECT, FORALL) are a PL/SQL technique where, instead of multiple individual SELECT, INSERT, UPDATE or DELETE statements being executed to retrieve data from, or store data in, a table, all of the operations are carried out at once, in bulk.
This avoids the context switching you get when the PL/SQL engine has to pass over to the SQL engine, then back to the PL/SQL engine, and so on, when you access rows one at a time. To do bulk binds with INSERT, UPDATE and DELETE statements, you enclose the SQL statement within a PL/SQL FORALL statement.
To do bulk binds with SELECT statements, you include the BULK COLLECT INTO a collection clause in the SELECT statement instead of a simple INTO.
Collections, BULK COLLECT and FORALL are features of Oracle 8i, 9i and 10g PL/SQL that can really make a difference to your PL/SQL performance.
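A minimal sketch of both constructs, using the emp table as an assumed example:
DECLARE
   TYPE num_ntt IS TABLE OF emp.empno%TYPE;
   v_empnos num_ntt;
BEGIN
   -- bulk fetch: one context switch for the whole result set
   SELECT empno BULK COLLECT INTO v_empnos FROM emp;

   -- bulk DML: all updates are bound and executed in a single pass
   FORALL i IN 1 .. v_empnos.COUNT
      UPDATE emp
      SET    sal = sal * 1.1
      WHERE  empno = v_empnos(i);
END;
/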


RE: What is p-code and source code?

P-code is the pre-compiled code held in the library cache (shared pool) of the System Global Area after the Oracle instance is started, whereas source code is the plain code of a stored procedure, package, trigger, function, etc., which is stored in the Oracle data dictionary. Every session executes the p-code of objects on which it has EXECUTE permission.
Source code for user-defined stored procedures, triggers, packages and functions can be viewed through the USER_SOURCE data dictionary view, and USER_OBJECTS lists the objects and their status. DBA_OBJECTS shows all objects in the database, and ALL_OBJECTS shows all objects accessible to the current schema.
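For example, the stored source text and the compiled status can be inspected with queries such as these (the object name is illustrative):
-- source text of a stored unit
SELECT text
FROM   user_source
WHERE  name = 'PKG_MANUAL_BILLS'
ORDER  BY type, line;

-- compiled status (VALID/INVALID) of stored objects
SELECT object_name, object_type, status
FROM   user_objects
WHERE  object_type IN ('PROCEDURE','FUNCTION','PACKAGE','PACKAGE BODY','TRIGGER');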


According to the O'Reilly book, the main differences between index-by tables (PL/SQL tables), nested tables and VARRAYs are as follows:

Dimensionality: Index-By Table - Single; Nested Table - Single; VARRAY - Single
Usable in SQL?: Index-By Table - No; Nested Table - Yes; VARRAY - Yes
Usable as column datatype in a table?: Index-By Table - No; Nested Table - Yes, data stored "out of line" (in a separate table); VARRAY - Yes, data stored "in line" (in the same table)
Uninitialized state: Index-By Table - Empty (cannot be null), elements undefined; Nested Table - Atomically null, illegal to reference elements; VARRAY - Atomically null, illegal to reference elements
Initialization: Index-By Table - Automatic, when declared; Nested Table - Via constructor, fetch or assignment; VARRAY - Via constructor, fetch or assignment
In PL/SQL, elements referenced via: Index-By Table - BINARY_INTEGER (-2,147,483,647 .. 2,147,483,647); Nested Table - Positive integer between 1 and 2,147,483,647; VARRAY - Positive integer between 1 and 2,147,483,647
Sparse?: Index-By Table - Yes; Nested Table - Initially no, after deletions yes; VARRAY - No
Bounded?: Index-By Table - No; Nested Table - Can be extended; VARRAY - Yes
Can assign a value to any element at any time?: Index-By Table - Yes; Nested Table - No, may need to EXTEND first; VARRAY - No, may need to EXTEND first, and cannot EXTEND past the upper bound
Means of extending: Index-By Table - Assign a value to an element with a new subscript; Nested Table - Use the built-in EXTEND procedure (or TRIM to condense), with no predefined maximum; VARRAY - EXTEND (or TRIM), but only up to the declared maximum size
Can be compared for equality?: Index-By Table - No; Nested Table - No; VARRAY - No
Retains ordering and subscripts when stored in and retrieved from the database?: Index-By Table - N/A; Nested Table - No; VARRAY - Yes



RE: What is an autonomous transaction? Where are they used...

An autonomous transaction is a transaction that acts independently of the calling transaction and can commit its own work.
An example is using PRAGMA AUTONOMOUS_TRANSACTION when a mutating-table problem occurs in a trigger.


RE: Why do we need to create a force view? What is its ...

It is sometimes necessary to create a view even if the base table does not exist, for example when we are planning to create the table but have not yet decided its structure; in that case we can create a FORCE view. This view cannot be accessed and will be in an 'invalid' state unless and until you create the base table and compile the force view.
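A minimal example (table and column names are assumed):
-- the base table my_table does not exist yet
CREATE FORCE VIEW v_my_table AS
   SELECT col1, col2 FROM my_table;
-- the view is created INVALID; once my_table exists,
-- ALTER VIEW v_my_table COMPILE makes it usable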

RE: Explain materialized views and how they are used.
Materialized views are objects that are reduced sets of information that have been summarized, grouped, or aggregated from base tables. They are typically used in data warehouse or decision support systems.
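A minimal sketch (the emp table and the refresh options are assumed for illustration):
CREATE MATERIALIZED VIEW mv_dept_salaries
BUILD IMMEDIATE
REFRESH COMPLETE ON DEMAND
AS
SELECT deptno, SUM(sal) AS total_sal, COUNT(*) AS emp_count
FROM   emp
GROUP  BY deptno;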

RE: When a user process fails, what background process cleans up after it?
PMON

RE: How can you enable a trace for a session?
Use the DBMS_SESSION.SET_SQL_TRACE or
Use ALTER SESSION SET SQL_TRACE = TRUE;

Few other interview questions

1.What are the various types of Exceptions ?
User defined and Predefined Exceptions.
2.Can we define exceptions twice in same block ?
No.
3.What is the difference between a procedure and a function ?
Functions return a single variable by value whereas procedures do not return any variable by value. Rather they return multiple variables by passing variables by reference through their OUT parameter.

4.Can you have two functions with the same name in a PL/SQL block ?
Yes.

5.Can you have two stored functions with the same name ?
Yes.

6.Can you call a stored function in the constraint of a table ?
No.

7.What are the various types of parameter modes in a procedure ?

IN, OUT and IN OUT.

8.What is Over Loading and what are its restrictions ?
OverLoading means an object performing different functions depending upon the no. of parameters or the data type of the parameters passed to it.

9.Can functions be overloaded ?
Yes.

10.Can 2 functions have same name & input parameters but differ only by return datatype
No.

11.What are the constructs of a procedure, function or a package ?

The constructs of a procedure, function or a package are: variables and constants, cursors, and exceptions.

12.Why Create or Replace and not Drop and recreate procedures ?
So that Grants are not dropped.

13.Can you pass parameters in packages ? How ?

Yes. You can pass parameters to procedures or functions in a package.

14.What are the parts of a database trigger ?
The parts of a trigger are: a triggering event or statement, a trigger restriction, and a trigger action.

15.What are the various types of database triggers ?
There are 12 types of triggers; they are combinations of: Insert, Delete and Update triggers; Before and After triggers; Row and Statement triggers (3*2*2 = 12).

16.What is the advantage of a stored procedure over a database trigger ?
We have control over the firing of a stored procedure but we have no control over the firing of a trigger.

17.What is the maximum no. of statements that can be specified in a trigger statement ?
One.

18.Can views be specified in a trigger statement ? No.

19.What are the values of :new and :old in Insert/Delete/Update Triggers ?

INSERT: :new = new value, :old = NULL
DELETE: :new = NULL, :old = old value
UPDATE: :new = new value, :old = old value
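A minimal sketch of a row-level trigger using :new and :old (the emp and emp_sal_audit tables are assumed):
CREATE OR REPLACE TRIGGER trg_emp_sal_audit
BEFORE UPDATE OF sal ON emp
FOR EACH ROW
BEGIN
   -- :old holds the pre-update value, :new the post-update value
   INSERT INTO emp_sal_audit (empno, old_sal, new_sal, changed_on)
   VALUES (:old.empno, :old.sal, :new.sal, SYSDATE);
END;
/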

20.What are cascading triggers? What is the maximum no of cascading triggers at a time?

When a statement in a trigger body causes another trigger to be fired, the triggers are said to be cascading. Max = 32.

21.What are mutating triggers ?

A mutating trigger is a row-level trigger that selects from (or modifies) the table on which the trigger is written, which raises the mutating-table error.

22.What are constraining triggers ?
A trigger performing an Insert/Update on a table that has a referential integrity constraint on the triggering table.

23.Describe Oracle database's physical and logical structure ?
Physical: data files, redo log files, control files.
Logical: tables, views, tablespaces, etc.

24.Can you increase the size of a tablespace ? How ?
Yes, by adding datafiles to it.

25.Can you increase the size of datafiles ? How ?
No for Oracle 7.0; yes from Oracle 7.3, by using the RESIZE clause.

26.What is the use of Control files ?
Contains pointers to locations of various data files, redo log files, etc.

27.What is the use of Data Dictionary ?
Used by Oracle to store information about various physical and logical Oracle structures e.g. Tables, Tablespaces, datafiles, etc

28.What are the advantages of clusters ?
Access time reduced for joins.

29.What are the disadvantages of clusters ?
The time for Insert increases.

30.Can Long/Long RAW be clustered ? No.

31.Can null keys be entered in cluster index, normal index ? Yes.

32.Can Check constraint be used for self referential integrity ? How ?
Yes. In the CHECK condition for a column of a table, we can reference some other column of the same table and thus enforce self referential integrity.
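For example (column names assumed), a CHECK constraint can compare two columns of the same row:
ALTER TABLE emp
   ADD CONSTRAINT ck_comm_lt_sal
   CHECK (comm IS NULL OR comm < sal);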

33.What are the min. extents allocated to a rollback extent ?
Two

34.What are the states of a rollback segment ? What is the difference between partly available and needs recovery ?
The various states of a rollback segment are: ONLINE, OFFLINE, PARTLY AVAILABLE, NEEDS RECOVERY and INVALID. PARTLY AVAILABLE means the segment still contains extents used by an in-doubt or recovering transaction, while NEEDS RECOVERY means it contains transactions that could not be rolled back (for example because a datafile is inaccessible).

35.What is the difference between unique key and primary key ?
Unique key can be null; Primary key cannot be null.

36.An insert statement followed by a create table statement followed by rollback ? Will the rows be inserted ?
Yes. CREATE TABLE is DDL and issues an implicit commit, so the inserted rows are committed before the rollback and the rollback has no effect on them.

37.Can you define multiple savepoints ?
Yes.

38.Can you Rollback to any savepoint ? Yes.

40.What is the maximum no. of columns a table can have ?
254 (for Oracle 7; from Oracle 8 onwards the limit is 1,000).

41.What is the significance of the & and && operators in PL SQL ?

The & operator means that the script requires user input for a substitution variable. The && operator also prompts for a value, but then retains it, so that later references to the same variable reuse the value entered previously.
If a transaction is very large, and the rollback segment is not able to hold the rollback information, will the transaction span different rollback segments or will it terminate? It will terminate; a single transaction cannot span multiple rollback segments.

42.Can you pass a parameter to a cursor ?
Explicit cursors can take parameters, as the example below shows. A cursor parameter can appear in a query wherever a constant can appear.
CURSOR c1 (median IN NUMBER) IS
SELECT job, ename FROM emp WHERE sal > median;
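The cursor is then opened with an actual value for the parameter, for example (the value 2500 is illustrative):
DECLARE
   CURSOR c1 (median IN NUMBER) IS
      SELECT job, ename FROM emp WHERE sal > median;
BEGIN
   FOR r IN c1(2500) LOOP
      DBMS_OUTPUT.PUT_LINE(r.ename || ' - ' || r.job);
   END LOOP;
END;
/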

43.What are the various types of RollBack Segments ?
Public: available to all instances.
Private: available to a specific instance.

44.Can you use %RowCount as a parameter to a cursor ? Yes.

45.Is the query below allowed (where x is a record of NUMBER(4) and CHAR(15))?
SELECT sal, ename INTO x FROM emp WHERE ename = 'KING';

Yes.

Script to Compile All invalid Objects in a Database Schema

Many times we face a situation where lots of interdependent components end up in an invalid state after compiling a new or existing Oracle object. The script below can be used to compile all invalid objects.

set serveroutput on size 1000000
DECLARE
CURSOR invalidcurs_1 IS select * from user_objects where status != 'VALID' and object_type not in ('TRIGGER','VIEW');

CURSOR invalidcurs_2 IS select * from user_objects where status != 'VALID' and object_type in ('TRIGGER','VIEW');

sql_str VARCHAR2(100);
ret_code INTEGER;
m_cur INTEGER;
l_owner varchar2(10);

BEGIN

dbms_output.enable(1000000);
l_owner := '&owner';
FOR rec IN invalidcurs_1
LOOP
dbms_ddl.alter_compile( rec.object_type, l_owner, rec.object_name);
dbms_output.put_line( 'Compiled: ' || l_owner || '.' || rpad(rec.object_name, 30) || ' => ' || rec.object_type);
END LOOP;

FOR rec IN invalidcurs_2
LOOP
sql_str := 'ALTER ' || rec.object_type || ' ' || rec.object_name || ' COMPILE';
m_cur := dbms_sql.open_cursor;
dbms_sql.parse (m_cur, sql_str, dbms_sql.native);
ret_code := dbms_sql.execute (m_cur);
dbms_sql.close_cursor (m_cur);
dbms_output.put_line( 'Compiled: ' || l_owner || '.' || rpad(rec.object_name, 30) || ' => ' || rec.object_type);
END LOOP;
EXCEPTION
WHEN others THEN
dbms_output.put_line ('Error: ' || SQLERRM);
END;

Oracle Coding Standards

Below are a few Oracle coding standards I have been using on various projects.


1. Header
Script Files

Author: the name of the person who created it
Date Written: the completion date for the initial code release / file creation
Description: a brief explanation of the task the code performs
Version Number: the present version number

Example: Script file insertscript will contain:
Rem Author : Rhodes G.
Rem Created On : 09/09/2004
Rem Purpose : to populate master data
Rem Version No : v1.0

Triggers

Name: the name of the trigger
Author: the name of the person who created it
Date Written: the completion date for the initial code release / file creation
Description: a brief explanation of the task the code performs
Calls: (optional) a list of procedures and functions called by the code
Updates: a history of code revisions listing author, date and revision description
Version Number: the present version number

Example:
/*
Name : tg_bl_updatePolReg
Description : to maintain history
Author : Rhodes G.
Date Written : 09/09/2004
Calls : pr_insert_his
Change Register :
-------------------------------------------------------------------------------------------------------------
Change # Date Author Description Review by
-------------------------------------------------------------------------------------------------------------
9999 99/99/9999 XXXXXXX XXXXXXXXXXXXXXXXXX XXXXXXX
XXXXXXXXXXXXXXXXXX
-------------------------------------------------------------------------------------------------------------
Version Number : v1.0
*/

Procedures and Functions

Name: the name of the procedure, function or package
Description: a brief explanation of the task the code performs
Author: the name of the person who created it
Date Written: the completion date for the initial code release / file creation
Inputs: a list of incoming parameters with a brief description of each
Outputs: a list of output parameters with a brief description of each
Tables Accessed: a list of tables on which insert, update and delete are performed
Calls / Called By: (optional) a list of procedures and functions called by the code
Change Register: a history of code revisions listing author, date and revision description
Version Number: the present version number

Example:
/*
Name : pkg_manual_bills
Description : to generate manual bills
Author : Rhodes G.
Date Written : 09/09/2004
Inputs : policy_no – Policy no for which bills to be generated
Period - Billing Year & month YYYYMM
Outputs : Bill_no - System generated Bill no.
Table Accessed : Insert on –
Update on -
Delete on -
Calls :
Change Register :
-------------------------------------------------------------------------------------------------------------
Change # Date Author Description Review by
-------------------------------------------------------------------------------------------------------------
9999 99/99/9999 XXXXXXX XXXXXXXXXXXXXXXXXX XXXXXXX
XXXXXXXXXXXXXXXXXX
-------------------------------------------------------------------------------------------------------------
Version Number : v1.0
*/

2. Comments

There are a number of locations in PL/SQL code where comments are expected, including the following instances:

Before each Loop structure.
Before each Begin….End sub-block.
Before each conditional logic expression (IF THEN).
Before any other logically significant statements.

If a comment is required, place it on the line immediately preceding the line of code; do not append comments to the end of a line of code. If a comment is warranted by the complexity of the code, and you have used meaningful identifiers, the situation is usually complicated enough that it needs more than one or two words of explanation.

For example

Correct way:
--
--If the balance is over the maximum balance allowed,
--Do not allow further charges.
--

IF (n_account_balance > MAX_BALANCE) THEN
…………

…………

Incorrect way:

IF (n_account_balance > MAX_BALANCE) THEN
…………
---No Charges
…………


3. Indentation

Indentation is the most crucial factor affecting the readability of the code. Consistent indentation makes the code appear structured and makes the code more readable by showing the levels of logic with in the code.

These are the guidelines for indenting code:
· Be consistent, i.e. use the same number of spaces for indentation throughout your code.
· Don't use too much or too little indentation. Four spaces is the most effective level of indentation. When the code is long, a TAB may be used as the standard. Less indentation is not very useful, and more wastes space. Remember that 80 characters is the line limit.
· Indent the variable and constant definitions in the variable declaration portion of a PL/SQL block.
· Indent the WHEN clause following an exception handler to the first level of indentation. Indent the statement following the WHEN clause to the second level of indentation.

EXCEPTION
WHEN NO_DATA_FOUND THEN

The following are guidelines for indenting that relate directly to stored procedures and functions:
· When invoking a stored procedure or function, begin the indentation of the parameters at the end of the procedure or function name. Place only one parameter per line and align all the parameters in a columnar fashion.
For example,
Pr_cal_billdue_dt (prm_policy_no,
prm_policy_effdt);
· When defining the parameters for a stored procedure or function, align the IN, OUT and IN OUT parameter type definitions vertically, leaving space for both keywords even if only one keyword is used.
PROCEDURE sample_proc (prm_1 IN number,
prm_2 IN OUT number,
prm_3 OUT varchar2);

Line Length

Avoid lines longer than 80 characters, since they are not handled well by many terminals and tools.

Wrapping Lines

When an expression will not fit on a single line, break it according to these general principles:
· Break after a comma.
· Break after an operator.
· Prefer higher level breaks to lower level break.
· Align the new line with the beginning of the expression at the same level on the previous line.

If the above rules lead to confusing code, or to code that's squished up against the right margin, just indent by a TAB instead.

4. Naming Conventions

Variables

Local variables: prefix mmy_ (e.g. mmy_policy_no, mmy_policy_renew_no)
Package variables: prefix cmn_ (e.g. cmn_policy_no, cmn_policy_renew_no)
Parameters: prefix prm_ (e.g. prm_policy_no, prm_policy_renew_no)
Global variables: prefix glb_ (e.g. :GLOBAL.glb_policy_no, :GLOBAL.glb_policy_renew_no)

Objects

Primary key constraint: Pk_
Unique key constraint: Uk_
Foreign key constraint: Fk_
Check constraint: Ck_
Foreign key index: Fki_
Indexes: I_
Views: V_
Sequences: Seq_
Snapshots: Sn_
Tablespaces: Ts_
Clusters: Clu_

Using %TYPE and %ROWTYPE

The use of %TYPE is recommended when declaring variables and constants that refer to a column in a table. Declaration using %TYPE is also useful when dependencies are desired between database objects and variables or parameters.

Because of the decrease in readability when using %TYPE, it is essential that meaningful identifier names are used.

The use of %ROWTYPE is encouraged, provided comments are used to improve the readability of the code. Due to the likelihood of changes in database structure, declarations using %ROWTYPE will help streamline the maintenance of existing procedures during the development process.
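For example, a minimal sketch using the policy_reg table named elsewhere in these standards (the policy number value is illustrative):
DECLARE
   -- anchored to a single column: picks up datatype changes automatically
   mmy_policy_no  policy_reg.policy_no%TYPE := 'VG01';
   -- anchored to the whole row: one field per column of policy_reg
   mmy_policy_rec policy_reg%ROWTYPE;
BEGIN
   SELECT *
   INTO   mmy_policy_rec
   FROM   policy_reg
   WHERE  policy_no = mmy_policy_no;
END;
/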

5. Programming Practices
Coding Conventions

Coding conventions are suggestions to help standardize the structure and coding style of a script or set of scripts so that you and others can easily read and understand the code. Using proper coding conventions results in precise, readable, and unambiguous source code that is consistent with other language conventions and is as intuitive as possible.

By adopting proper coding conventions early in the development cycle, you greatly increase the effectiveness of your development staff throughout the product cycle.

Naming

A variable name should be as complete as necessary to describe its purpose. For frequently used or long terms, standard abbreviations are recommended to help keep name lengths reasonable. In general, variable names greater than 32 characters can be difficult to read. When using abbreviations, make sure they are consistent throughout the entire script.
For example, randomly switching between the variable names cnt and count within a script may lead to confusion.

Declarations

One declaration per line is recommended since it encourages commenting.
For example,
mmy_policy_no policy_reg.policy_no%type; --Policy number
mmy_policy_effdt policy_reg.policy_effdt%type; --Eff. Date

Capitalization

An important aspect of how code is presented is capitalization. This section outlines the rules for both partially and fully capitalized identifiers within code,
· All reserved words, built-in functions, pre-defined types and SQL keywords should always be presented in upper case.
· The names of all the standard and user-defined exceptions should be listed in full uppercase.
· Below is the list of all the standard exceptions provided by PL/SQL
CURSOR_ALREADY_OPEN
DUP_VAL_ON_INDEX
INVALID_CURSOR
INVALID_NUMBER
LOGIN_DENIED
NO_DATA_FOUND
OTHERS
PROGRAM_ERROR
STORAGE_ERROR
TIMEOUT_ON_RESOURCE
TOO_MANY_ROWS
TRANSACTION_BACKED_OUT
VALUE_ERROR
ZERO_DIVIDE
· The names of all the database triggers should be listed in full upper case.
· All user-defined and other datatypes will be listed in lower case.
· The names of all the tables, views, snapshots, synonyms will be listed in lowercase

All text that is not covered by these rules will be presented in lower case.

6. Errors/ Exceptions Handling

Errors and exceptions should be handled at every level of the application.

This control should be incorporated in the following components,
· Scripts
· Functions
· Procedures
· Triggers

a. In any database application, DML operations such as SELECT, UPDATE, INSERT and DELETE return an error code if they fail due to violation of constraints.
b. The constraints can be foreign key violations, primary key violations, column size mismatches, datatype mismatches, etc.
c. Typically, these error codes or the text populated by the system do not make sense to the end user. The error handling system should intelligently interpret the error code or the text associated with it and return a clear and simple error message to the end user.
In case of error, the error trapping module should preferably identify the error and populate the best-suited context-sensitive message, such as "policy no does not exist" when a policy number referenced through a foreign key is absent.

Make it a practice to have an EXCEPTION section in all blocks, even if the exceptions/errors have no significance.

In cases where the exception need not be handled specifically, use WHEN OTHERS. Ensure that not even a single DML statement is left without provision for exception handling.

e.g.

SELECT policy_effdt
INTO mmy_polieffdt
FROM policy_reg
WHERE policy_no = 'VG01';
EXCEPTION
WHEN NO_DATA_FOUND THEN


Following are the guidelines for EXCEPTION handling.

1. Every subroutine (procedure / function / trigger) must have a WHEN OTHERS handler at the end. This will take care of any unexpected error.
2. Every procedure / function must have two parameters, i.e. prm_success_failure and prm_error_message. If a logical error occurs (due to missing information or a business rule violation), then set prm_success_failure to '99'. The error message should be driven from the error master table; do not hardcode the error message in the subroutine, rather pick the error description from the error master table using the error code.
3. For any program error due to unhandled exceptions, which are trapped in the final block of the routine using WHEN OTHERS, set prm_success_failure to '99', and also track the proper type of error using SQLCODE and SQLERRM (a sketch of such a handler is given after this list).
4. Log each and every error in the error log table. Use the Diag package to log the error. Use diag.ins_error_log and diag.ins_edit_log to log the errors. Avoid using diag.pr for error logging.
5. Always log the EL_SOURCE and EL_ADDITIONAL_REFERENCE while logging any error in the error log. The el_source should be the name of the package, and el_additional_reference should be the name of the procedure/function inside the package. In case it is a standalone procedure/function, keep el_source and el_additional_reference the same.
6. If the error is in a batch object, while logging the error in the error log ensure that you populate the EL_BATCH_EXEC_NO column of the error log with the be_sequence.
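A minimal sketch of a handler that follows guidelines 1 to 4 is given below. The parameter names follow the conventions above; the diag call is left commented out because its exact signature is project-specific and assumed here.
CREATE OR REPLACE PROCEDURE pr_sample (prm_policy_no       IN  policy_reg.policy_no%TYPE,
                                       prm_success_failure OUT VARCHAR2,
                                       prm_error_message   OUT VARCHAR2) IS
BEGIN
   prm_success_failure := '0';   -- '0' is an assumed success code
   -- ... business logic ...
EXCEPTION
   WHEN OTHERS THEN
      prm_success_failure := '99';
      prm_error_message   := SQLCODE || ' - ' || SQLERRM;
      -- log the error with the project's Diag package, e.g. (illustrative call):
      -- diag.ins_error_log(el_source => 'PR_SAMPLE', el_additional_reference => 'PR_SAMPLE', el_error_text => prm_error_message);
END pr_sample;
/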