Here are some application programming and Oracle SQL tips. This chapter shows you how to write efficient code and speed up existing code.
1. CLOB argument in Oracle 11g
We know that the EXECUTE IMMEDIATE statement implements dynamic SQL in Oracle. It provides end-to-end support for executing a dynamic SQL statement or an anonymous PL/SQL block. Before Oracle 11g, EXECUTE IMMEDIATE supported SQL string statements only up to 32K in length.
Oracle 11g permits a CLOB to be passed as the argument, which removes the constraint on the length of the string passed to EXECUTE IMMEDIATE.
The example below shows how EXECUTE IMMEDIATE fails for strings larger than 32K:
DECLARE
   var   VARCHAR2 (32767);
BEGIN
   var := 'create table temp_a(a number(10))';
   WHILE LENGTH (var) < 32767
   LOOP
      var := var || CHR (10) || '--comment';
   END LOOP;
   DBMS_OUTPUT.put_line (LENGTH (var));
   EXECUTE IMMEDIATE var;
END;
/
It throws the error: ORA-06502: PL/SQL: numeric or value error: character string buffer too small.
A CLOB argument to EXECUTE IMMEDIATE handles these scenarios.
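For instance, here is a minimal sketch of the same scenario in Oracle 11g, with the statement built in a CLOB (reusing the temp_a table name from the example above):

DECLARE
   var   CLOB;
BEGIN
   var := 'create table temp_a(a number(10))';
   WHILE LENGTH (var) < 70000
   LOOP
      var := var || CHR (10) || '--comment';
   END LOOP;
   DBMS_OUTPUT.put_line (LENGTH (var));
   EXECUTE IMMEDIATE var;   -- accepted in 11g because the argument is a CLOB
END;
/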
Before 11g, DBMS_SQL and its built-in subprograms were used to deal with dynamic SQL; its PARSE procedure could handle dynamic SQL of 64K size. But it had certain drawbacks:
- PARSE() could not handle a CLOB argument
- A REF CURSOR could not be converted to a DBMS_SQL cursor, or vice versa, so the two did not interoperate
- DBMS_SQL did not support the full range of data types (including collections and object types)
- DBMS_SQL did not allow bulk binds using user-defined collection types
The example below shows how DBMS_SQL was used to handle long strings:
DECLARE
   var_a       VARCHAR2 (32767);
   var_b       VARCHAR2 (32767);
   lt_sql      DBMS_SQL.varchar2a;
   ln_cursor   NUMBER;
   ln_result   NUMBER;
BEGIN
   ln_cursor := DBMS_SQL.open_cursor;

   var_a := 'create table testa( a number(10),';
   WHILE LENGTH (var_a) < 32000
   LOOP
      var_a := var_a || CHR (10) || '--comment';
   END LOOP;

   var_b := ' b number(10))';
   WHILE LENGTH (var_b) < 32000
   LOOP
      var_b := var_b || CHR (10) || '--comment';
   END LOOP;

   DBMS_OUTPUT.put_line (LENGTH (var_a) || ' and ' || LENGTH (var_b));

   -- A single VARCHAR2 cannot hold the full statement, so pass it to PARSE
   -- as a collection of string fragments.
   lt_sql (1) := var_a || CHR (10);
   lt_sql (2) := var_b;

   DBMS_SQL.parse (ln_cursor, lt_sql, lt_sql.FIRST, lt_sql.LAST, FALSE, DBMS_SQL.native);
   ln_result := DBMS_SQL.EXECUTE (ln_cursor);
   DBMS_SQL.close_cursor (ln_cursor);
END;
/
Oracle Database 11g removes these DBMS_SQL restrictions, making support for dynamic SQL from PL/SQL functionally complete.
In Oracle 11g, DBMS_SQL.PARSE also accepts a CLOB directly, so the statement no longer has to be split into fragments; the only real change in the next example is that the variable is declared as a CLOB.
DECLARE
   var_a       CLOB;
   ln_cursor   NUMBER;
   ln_result   NUMBER;
BEGIN
   ln_cursor := DBMS_SQL.open_cursor;

   var_a := 'create table testa( a number(10))';
   WHILE LENGTH (var_a) < 70000
   LOOP
      var_a := var_a || CHR (10) || '--comment';
   END LOOP;

   DBMS_OUTPUT.put_line (LENGTH (var_a));

   DBMS_SQL.parse (ln_cursor, var_a, DBMS_SQL.native);
   ln_result := DBMS_SQL.EXECUTE (ln_cursor);
   DBMS_SQL.close_cursor (ln_cursor);
END;
/
Both native dynamic SQL and DBMS_SQL now support SQL strings stored in CLOBs. In addition, DBMS_SQL still offers an overload of PARSE that accepts a collection of SQL string fragments.
2. Index usage with LIKE operator in Oracle & Domain Indexes
Many developers are confused about whether an index can be used when the LIKE operator contains the % wildcard.
LIKE operator:
- LIKE determines whether a character string matches a specified pattern.
- % matches any string of any length (including zero characters).
We can use the LIKE operator in four ways:
1. SEARCH-STRING%
A pattern like SEARCH-STRING% lets Oracle perform an INDEX RANGE SCAN and return the result in the least possible time:
SELECT * FROM sac WHERE object_type LIKE 'TAB%' ;
Here the optimizer knows where the string starts, so it uses an INDEX RANGE SCAN.
2. %SEARCH-STRING
When using %SEARCH-STRING, Oracle accesses the FULL table:
SELECT * FROM sac WHERE object_type LIKE '%TAB' ;
In this case the LIKE expression starts with a wildcard, so it cannot serve as an access predicate.
3. %SEARCH-STRING%
When using %SEARCH-STRING%, Oracle again accesses the FULL table:
SELECT * FROM sac WHERE object_type LIKE '%TAB%' ;
Here the optimizer does not know which letter the string begins with, so it scans the whole table.
4. SEARCH%STRING
A SEARCH%STRING pattern performs an INDEX RANGE SCAN to produce an initial result set containing the values that match the leading part (SEARCH%), and then filters those values against the trailing part (%STRING):
SELECT * FROM sac WHERE object_type LIKE 'TA%BLE' ;
This is how the ordinary LIKE operator works. But what happens when you want to use an index in the second and third cases above?
The Oracle Text utility (formerly Oracle ConText) allows us to parse a large text column and index the words within the column.
Unlike ordinary B-tree or bitmap indexes, Oracle Text CONTEXT, CTXCAT and CTXRULE indexes can be set not to update as content is changed. Oracle provides the SYNC parameter for this; the default is SYNC=MANUAL, and you must then synchronize the index manually with CTX_DDL.SYNC_INDEX.
SYNC (MANUAL | EVERY "interval-string" | ON COMMIT)
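For example (a minimal sketch; sac_indx is the text index created later in this section), the index can be synchronized manually after DML, or created so that it synchronizes on commit:

BEGIN
   CTX_DDL.SYNC_INDEX ('sac_indx');   -- refresh the text index after DML
END;
/

-- Alternatively, request synchronization at commit time when creating the index:
-- CREATE INDEX sac_indx ON sac(object_type)
--    INDEXTYPE IS ctxsys.context PARAMETERS ('sync (on commit)');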
Hence, Oracle Text indexes are only useful for removing full-table scans when the tables are largely read-only and/or the end-users don’t mind not having 100% search recall.
Let's take an example:
SQL> CREATE TABLE sac AS SELECT * FROM all_objects;

Table created.

SQL> CREATE INDEX sac_indx ON sac(object_type);

Index created.

SQL> set autotrace trace explain
SQL> select * from sac where object_type LIKE 'TAB%';
The example above demonstrates that a % wildcard at the end of the search string still allows an index scan.
But if the wildcard is at the start, the index is not used, and sensibly so: Oracle does not know where the data starts, since it could begin with anything from 'A' to 'Z', 'a' to 'z', or even a digit.
See this :
SQL> SELECT * FROM sac WHERE object_type LIKE '%ABLE';

Now, how do we use an index with this kind of LIKE search? The answer is domain indexes.

SQL> connect / as sysdba
Connected.

SQL> grant execute on ctx_ddl to public;

Grant succeeded.

SQL> connect sac
Connected.

SQL> begin
       ctx_ddl.create_preference('SUBSTRING_PREF', 'BASIC_WORDLIST');
       ctx_ddl.set_attribute('SUBSTRING_PREF', 'SUBSTRING_INDEX', 'TRUE');
     end;
     /
- ctx_ddl.create_preference: Creates a preference in the Text data dictionary.
- ctx_ddl.set_attribute: Sets a preference attribute.
SQL> drop index sac_indx;

Index dropped.

SQL> create index sac_indx on sac(object_type)
     indextype is ctxsys.context
     parameters ('wordlist SUBSTRING_PREF memory 50m');

Index created.

SQL> set autotrace trace explain
SQL> select * from sac where contains(object_type, '%PACK%') > 0;
In this case the domain index is used.
3. Passing Array to Oracle Stored Procedure
Not every PL/SQL array can be called from Java. The collection must be created as a TYPE at SCHEMA level in the database; it can then be used with an ArrayDescriptor in Java, because the oracle.sql.ArrayDescriptor class cannot access a type declared at package level.
Database Code
First, create the array types at SCHEMA level:
CREATE TYPE array_table AS TABLE OF VARCHAR2 (50);   -- array of strings
CREATE TYPE array_int AS TABLE OF NUMBER;            -- array of integers
Next, create a procedure that takes an array as an input parameter and returns an array as its OUT parameter.
CREATE OR REPLACE PROCEDURE SchemaName.proc1 (p_array     IN     array_table,
                                              len            OUT NUMBER,
                                              p_arr_int      OUT array_int)
AS
   v_count   NUMBER;
BEGIN
   p_arr_int := array_int ();
   p_arr_int.EXTEND (p_array.COUNT);   -- size the OUT collection to the input

   len := p_array.COUNT;
   v_count := 0;

   FOR i IN 1 .. p_array.COUNT
   LOOP
      DBMS_OUTPUT.put_line (p_array (i));
      p_arr_int (i) := v_count;
      v_count := v_count + 1;
   END LOOP;
END;
/
After this, grant execute permission on the procedure to the user that will call it:

GRANT EXECUTE ON SchemaName.proc1 TO UserName;
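Before moving to Java, the procedure can be smoke-tested from an anonymous PL/SQL block (a minimal sketch; SchemaName is the same placeholder used above):

DECLARE
   l_in    array_table := array_table ('one', 'two', 'three');
   l_len   NUMBER;
   l_out   array_int;
BEGIN
   SchemaName.proc1 (l_in, l_len, l_out);
   DBMS_OUTPUT.put_line ('IN count = ' || l_len || ', OUT count = ' || l_out.COUNT);
END;
/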
Java Code
The example below contains the whole flow: creating a connection to the database, calling the stored procedure, passing an array to the Oracle procedure, retrieving an array back from it, and displaying the result.
import java.math.BigDecimal;
import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Types;

import oracle.jdbc.OracleCallableStatement;
import oracle.jdbc.OracleTypes;
import oracle.sql.ARRAY;
import oracle.sql.ArrayDescriptor;

public class TestDatabase {

    public static void passArray() {
        try {
            Class.forName("oracle.jdbc.OracleDriver");
            Connection con = DriverManager.getConnection("jdbc:oracle:thin:url", "UserName", "Password");

            String[] array = {"one", "two", "three", "four"};
            ArrayDescriptor des = ArrayDescriptor.createDescriptor("SchemaName.ARRAY_TABLE", con);
            ARRAY array_to_pass = new ARRAY(des, con, array);

            CallableStatement st = con.prepareCall("call SchemaName.proc1(?,?,?)");

            // Passing an array to the procedure
            st.setArray(1, array_to_pass);
            st.registerOutParameter(2, Types.INTEGER);
            st.registerOutParameter(3, OracleTypes.ARRAY, "SchemaName.ARRAY_INT");
            st.execute();

            System.out.println("size : " + st.getInt(2));

            // Retrieving the array from the procedure's OUT parameter
            ARRAY arr = ((OracleCallableStatement) st).getARRAY(3);
            BigDecimal[] receivedArray = (BigDecimal[]) arr.getArray();

            for (int i = 0; i < receivedArray.length; i++)
                System.out.println("element" + i + ":" + receivedArray[i] + "\n");
        } catch (Exception e) {
            System.out.println(e);
        }
    }

    public static void main(String[] args) {
        passArray();
    }
}
4. Oracle SQL Tips in Query Optimization
Consider the following query:

SELECT nvl(a.FLD7,'x'), nvl(a.FLD11,'x'), nvl(a.FLD14,'x'), nvl(a.FLD25,'x'),
       nvl(a.FLD30,'x'), nvl(a.FLD32,'x'),
       min(decode(d.FLD11, '8', d.FLD15, '999999')) tfmin15,
       min(a.id) ID
  FROM (select * from partial_cdr where leg_id = 1) a,
       (select * from partial_cdr where leg_id = 3) b,
       partial_cdr c,
       (select * from partial_cdr where fld11 = '8') d
 WHERE nvl(a.FLD7,'x')  = nvl(b.FLD7,'x')
   AND nvl(a.FLD11,'x') = nvl(b.FLD11,'x')
   ...
   AND nvl(a.FLD32,'x') = nvl(b.FLD32,'x')
 GROUP BY nvl(a.FLD7,'x'), nvl(a.FLD11,'x'), nvl(a.FLD14,'x'), nvl(a.FLD25,'x'),
          nvl(a.FLD30,'x'), nvl(a.FLD32,'x')
HAVING max(b.fld15) = count(distinct c.fld15);
The above query takes several minutes to return a result even on only 1000 rows. How can it be optimized to run faster?
The first thing to notice about this SQL statement is that the table partial_cdr is joined to itself four times, so it is better to combine the subqueries on partial_cdr.
The next thing to notice is the use of the NVL function on every column in the WHERE clause, which prevents any indexes on those columns from being used. Each clause can be rewritten like this: WHERE (a.FLD7 = b.FLD7 OR (a.FLD7 IS NULL AND b.FLD7 IS NULL)), which allows indexes to be used when the fields are not null.
5. Reducing Loop Overhead for DML Statements and Queries
PL/SQL sends SQL statements such as DML and queries to the SQL engine for execution, and the SQL engine returns the result data to PL/SQL. You can minimize the performance overhead of this communication between PL/SQL and SQL by using the PL/SQL language features known collectively as bulk SQL. The FORALL statement sends INSERT, UPDATE, or DELETE statements in batches, rather than one at a time. The BULK COLLECT clause brings back batches of results from SQL. If a DML statement affects four or more database rows, the use of bulk SQL can improve performance considerably.
The assigning of values to PL/SQL variables in SQL statements is called binding. PL/SQL binding operations fall into three categories (illustrated in the sketch after this list):
- In Bind : When a PL/SQL variable is stored in the database by an INSERT or UPDATE statement.
- Out Bind : When a database value is assigned to a PL/SQL variable by the RETURNING clause of an INSERT, UPDATE, or DELETE statement.
- Define : When a database value is assigned to a PL/SQL variable by a SELECT or FETCH statement.
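Here is a minimal sketch of all three kinds of binding, using a hypothetical table demo_tab (id NUMBER, name VARCHAR2(50)):

DECLARE
   -- demo_tab is a hypothetical table: (id NUMBER, name VARCHAR2(50))
   l_name   demo_tab.name%TYPE := 'Scott';
   l_id     demo_tab.id%TYPE;
BEGIN
   INSERT INTO demo_tab (id, name)
        VALUES (1, l_name)          -- in bind: the PL/SQL value l_name is stored
     RETURNING id INTO l_id;        -- out bind: the database value is returned

   SELECT name
     INTO l_name                    -- define: the database value is fetched
     FROM demo_tab
    WHERE id = l_id;
END;
/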
Bulk SQL uses PL/SQL collections, such as varrays or nested tables, to pass large amounts of data backward and forward in a single operation. This process is known as bulk binding. If the collection has 20 elements, bulk binding lets you perform the equivalent of 20 SELECT, INSERT, UPDATE, or DELETE statements using a single operation. Queries can pass back any number of results, without requiring a FETCH statement for each row.
To speed up INSERT, UPDATE, and DELETE statements, enclose the SQL statement within a PL/SQL FORALL statement instead of a loop construct.
To speed up SELECT statements, include the BULK COLLECT INTO clause in the SELECT statement instead of using INTO.
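For example, here is a minimal sketch combining both, assuming the familiar SCOTT.EMP demo table (EMPNO, SAL, DEPTNO):

DECLARE
   -- assumes the SCOTT.EMP demo table is available
   TYPE t_empno_tab IS TABLE OF emp.empno%TYPE;
   l_empnos   t_empno_tab;
BEGIN
   -- one round trip to fetch all matching rows
   SELECT empno
     BULK COLLECT INTO l_empnos
     FROM emp
    WHERE deptno = 20;

   -- one round trip to update all of them
   FORALL i IN 1 .. l_empnos.COUNT
      UPDATE emp
         SET sal = sal * 1.1
       WHERE empno = l_empnos (i);

   COMMIT;
END;
/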
6. Export Data Into CSV File
I have a table with 5 columns and 1 million records. Now, how do I export the data into a CSV file with a delimiter?
See this example:
CREATE TABLE mytable (
   c1   number,
   c2   varchar2(40),
   c3   varchar2(80),
   c4   number,
   c5   varchar2(8)
);

BEGIN
   FOR i IN 1..1000 LOOP
      INSERT INTO mytable VALUES (i, 'Test '||i, 'Data for test row '||i, mod(i,43)+1, 'Final');
   END LOOP;
   COMMIT;
END;
/

Spool c:\documents\data.csv

SELECT c1||','||c2||','||c3||','||c4||','||c5 FROM mytable;

Spool OFF
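Alternatively, for very large tables the export can be driven from PL/SQL with UTL_FILE (a minimal sketch; it assumes a directory object named DATA_DIR has already been created and granted to the user):

DECLARE
   l_file   UTL_FILE.file_type;
BEGIN
   -- DATA_DIR is an assumed directory object pointing at the target folder
   l_file := UTL_FILE.fopen ('DATA_DIR', 'data.csv', 'w', 32767);

   FOR r IN (SELECT c1, c2, c3, c4, c5 FROM mytable)
   LOOP
      UTL_FILE.put_line (l_file,
                         r.c1 || ',' || r.c2 || ',' || r.c3 || ',' || r.c4 || ',' || r.c5);
   END LOOP;

   UTL_FILE.fclose (l_file);
END;
/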
7. Update Statements and Correlated Updates
One of the slowest commands in SQL is UPDATE. This is because most correlated updates require a full table scan, which results in very slow performance when the table is extremely large.
The following update statement is typical of correlated updates:
UPDATE Target_Table
   SET Target_Field = (SELECT Source_Information
                         FROM Source_Table
                        WHERE Source_Table.Key = Target_Table.Key)
 WHERE EXISTS (SELECT 'x'
                 FROM Source_Table
                WHERE Source_Table.Key = Target_Table.Key);
Performance problems arise because there is no way to eliminate rows in the Target_Table based on information in the Source_Table. If other conditions in the UPDATE's WHERE clause do not filter out most of the rows in the Target_Table, this update requires substantial processing time.
The following PL/SQL code successfully utilizes an index on the Key field to get only the proper records in the Target_Table:
DECLARE
   CURSOR source IS SELECT * FROM Source_Table;
BEGIN
   FOR src IN source
   LOOP
      UPDATE Target_Table
         SET Target_Field = src.Source_Information
       WHERE Key = src.Key;
   END LOOP;
EXCEPTION
   WHEN OTHERS THEN
      NULL;
END;
/
This PL/SQL script loops through each of the records in the Source_Table and updates the appropriate row in the Target_Table, if any. Basically, this transfers the full table scan to the Source_Table and allows the index on the Target_Table to be used effectively.
8. Bulk Delete of Records in Table
I need to delete 3 crore (30 million) records out of 10 crore (100 million) in a table, using a WHERE condition, and there is no index on the search column. I need to write a procedure that deletes the records and commits after every 1000 rows.
Solution :
We can repeatedly run the following command until no rows remain:
DELETE FROM table_name WHERE Column_Name = 'condition' AND rownum < 1001;
COMMIT;
or
Here is a sample procedure:
CREATE OR REPLACE PROCEDURE clean_temp
IS
BEGIN
   LOOP
      -- ROWNUM restarts at 1 for every statement, so delete the "first"
      -- 1000 rows repeatedly until no rows are left.
      DELETE FROM temp
       WHERE rownum <= 1000;

      EXIT WHEN SQL%ROWCOUNT = 0;

      DBMS_OUTPUT.put_line ('Deleted ' || SQL%ROWCOUNT || ' rows');
      COMMIT;
   END LOOP;

   COMMIT;
EXCEPTION
   WHEN OTHERS THEN
      RAISE;
END clean_temp;
/
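The procedure can then be run from SQL*Plus (enable server output to see the progress messages):

SET SERVEROUTPUT ON
EXEC clean_temp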
9. Control Files Backup and Recovery Tips
A helpful procedure for ending the backup of tablespaces that are still in online (hot) backup mode:
1. Generate a script with the following commands:
Spool endhotbackup.sql
SELECT 'ALTER DATABASE DATAFILE ''' || name || ''' END BACKUP;' FROM V$DATAFILE;

Spool off
2. Clean up endhotbackup.sql in an editor such as vi (remove the SQL prompts and headers)
3. Run @endhotbackup.sql
4. Start up (open) the database (see the sketch after this list)
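If the instance was shut down while tablespaces were still in backup mode, the overall sequence (after generating and cleaning up the script in steps 1 and 2) might look like this sketch:

SQL> STARTUP MOUNT
SQL> @endhotbackup.sql
SQL> ALTER DATABASE OPEN;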