Oracle 12c allows a sequence to be used as the default value for a column. This video shows the behaviour in pre-12c versions and from 12c onwards.
-- Behaviour from 12c onwards ( USAGE OF SEQUENCE FOR DEFAULT VALUE )
drop table stu_details;
create sequence stu_id_seq;

CREATE TABLE stu_details (
  stu_id       NUMBER DEFAULT stu_id_seq.nextval,
  stu_name     VARCHAR2(30),
  date_of_join DATE DEFAULT sysdate
);

insert into stu_details(stu_name,date_of_join) values('RAGU',sysdate);
insert into stu_details(stu_id,stu_name,date_of_join) values(201,'SCOTT',sysdate);
insert into stu_details(stu_id,stu_name,date_of_join) values(NULL,'SMITH',sysdate);
select * from stu_details;

-- Behaviour from 12c onwards ( USAGE OF "DEFAULT ON NULL" )
drop table stu_details;
drop sequence stu_id_seq;
create sequence stu_id_seq;

CREATE TABLE stu_details (
  stu_id       NUMBER DEFAULT ON NULL stu_id_seq.nextval,
  stu_name     VARCHAR2(30),
  date_of_join DATE DEFAULT sysdate
);

insert into stu_details(stu_name,date_of_join) values('RAGU',sysdate);
insert into stu_details(stu_id,stu_name,date_of_join) values(100,'SCOTT',sysdate);
insert into stu_details(stu_id,stu_name,date_of_join) values(NULL,'SMITH',sysdate);
select * from stu_details;
Oracle introduced a new feature called the "Memoptimized Rowstore", designed for fast query performance. When accessing key-value style data, we usually look up a row by the table's primary key value; these types of queries benefit the most from this feature.
"MEMOPTIMIZE FOR READ" enabled tables are pinned in a new memory pool, the memoptimize pool, along with a new hash index. Queries accessing the table by primary key bypass the SQL execution layer and read the data directly through the in-memory hash index.
To manage the Memoptimized Rowstore storage, Oracle introduced this new memory pool together with a new sizing parameter, MEMOPTIMIZE_POOL_SIZE.
From Oracle 18c, the new access paths "TABLE ACCESS BY INDEX ROWID READ OPTIM" and "INDEX UNIQUE SCAN READ OPTIM" indicate access through the in-memory hash index.
Rules for new access path:
Tables accessed using the primary key alone in the WHERE clause (no other condition may exist) will use the two new access paths "TABLE ACCESS BY INDEX ROWID READ OPTIM" and "INDEX UNIQUE SCAN READ OPTIM". Seeing these paths confirms that the data was accessed via the "READ OPTIM" path (i.e. using the hash index from the in-memory cache).
Only an equality condition on the primary key will use this new access path.
Even multiple primary key values supplied via an "IN" clause will not use this access path.
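The rules above can be sketched end to end. This is a hedged illustration: the table name, schema name and pool size are assumptions, not from the original notes, and the MEMOPTIMIZE_POOL_SIZE change needs an instance restart.

```sql
-- Illustrative sketch only: object names and sizes are assumed.
-- The memoptimize pool must be sized before tables can use it (restart required).
ALTER SYSTEM SET memoptimize_pool_size = 100M SCOPE=SPFILE;

CREATE TABLE cust_fast_lookup (
  cust_id   NUMBER PRIMARY KEY,   -- primary key is mandatory for this feature
  cust_name VARCHAR2(100)
) SEGMENT CREATION IMMEDIATE
  MEMOPTIMIZE FOR READ;

-- Load the table into the memoptimize pool and build the in-memory hash index
EXEC DBMS_MEMOPTIMIZE.POPULATE(schema_name => 'SCOTT', table_name => 'CUST_FAST_LOOKUP');

-- Equality on the primary key alone can use the READ OPTIM access paths:
SELECT cust_name FROM cust_fast_lookup WHERE cust_id = 1;

-- An IN list of primary key values will NOT use the READ OPTIM paths:
SELECT cust_name FROM cust_fast_lookup WHERE cust_id IN (1, 2);
```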
CREATE TABLE cust_details (
  cust_id             NUMBER,
  cust_name           VARCHAR2(100),
  cust_address        VARCHAR2(255),
  cust_account_number NUMBER
);

insert into cust_details values(1,'SCOTT','500,1A,california',11111111);
insert into cust_details values(2,'KING','120,1B,california',22222222);
insert into cust_details values(3,'JAMES','400,1A,california',33333333);
insert into cust_details values(4,'JHON','300,1A,california',44444444);
commit;

desc cust_details;
select * from cust_details;

alter table cust_details set unused(cust_account_number);

select * from cust_details;
desc cust_details;
select * from USER_TAB_COLUMNS where table_name = 'CUST_DETAILS';

alter table cust_details drop unused columns;
CREATE TABLE cust_details (
  cust_id             NUMBER,
  cust_name           VARCHAR2(100),
  cust_address        VARCHAR2(255),
  cust_account_number NUMBER INVISIBLE
);

desc cust_details;
SET COLINVISIBLE ON
desc cust_details;

insert into cust_details values(1,'SCOTT','500,1A,california');
insert into cust_details(cust_id,cust_name,cust_address,cust_account_number)
  values(2,'JAMES','101,1A,Atlantic,08201',1234123412341234);

SELECT * FROM cust_details;
SELECT cust_id, cust_name, cust_address, cust_account_number FROM cust_details;

update cust_details set cust_account_number = -9999 where cust_account_number is null;

alter table cust_details modify cust_account_number visible;
desc cust_details;
alter table cust_details modify cust_account_number invisible;
desc cust_details;

SELECT column_id, column_name, hidden_column
FROM user_tab_cols
WHERE table_name = 'CUST_DETAILS'
ORDER BY column_id;
Notes:
1. Indexes can be defined on virtual columns; these are equivalent to function-based indexes.
2. Virtual columns can be referenced anywhere in a SELECT statement, such as in the WHERE clause; however, DML is not allowed on virtual columns.
3. Virtual columns can be defined on heap-organized tables only; they are not supported for external, index-organized (IOT), clustered or temporary tables.
4. A virtual column cannot refer to another virtual column; it can only refer to columns of the same table.
5. The expression for a virtual column must evaluate to a scalar data type.
6. A virtual column can be used as a partition key.
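The INSERT statements below assume a loan_details table whose DDL is not shown. A minimal sketch follows, with the visible column names inferred from the INSERTs; the virtual column name and its simple-interest expression are illustrative assumptions:

```sql
CREATE TABLE loan_details (
  sno              NUMBER,
  principal_amount NUMBER,
  rate_of_interest NUMBER,
  number_of_years  NUMBER,
  -- illustrative virtual column: simple interest derived from the other columns
  interest_amount  NUMBER GENERATED ALWAYS AS
    ( principal_amount * rate_of_interest * number_of_years / 100 ) VIRTUAL
);
```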
insert into loan_details(sno,principal_amount,rate_of_interest,number_of_years) values(1,5000,8,1);
insert into loan_details(sno,principal_amount,rate_of_interest,number_of_years) values(2,5000,10,1);
insert into loan_details(sno,principal_amount,rate_of_interest,number_of_years) values(3,5000,12,1);
select * from loan_details;
desc loan_details;
select * from USER_TAB_COLUMNS where table_name = 'LOAN_DETAILS';
1. A "GROUP BY" clause is mandatory when using the approx_* functions.
2. APPROX_COUNT and APPROX_SUM must be used with a "HAVING" clause containing the APPROX_RANK function.
3. approx_* functions and regular (non-approximate) aggregate functions are not allowed in the same SELECT statement.
4. The "ORDER BY" clause within APPROX_RANK may use approx_* functions only, and only with the "DESC" keyword.
5. The "HAVING" clause must contain only "<= number" or "< number" conditions, following this format: 'APPROX_RANK(PARTITION BY ... ORDER BY ... DESC) <(=) n'.
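A query obeying all five rules can be sketched as follows. This is an illustration only; it assumes the emp table (deptno, sal columns) used elsewhere in these notes:

```sql
-- Top-3 departments by approximate total salary:
-- GROUP BY present, APPROX_SUM paired with APPROX_RANK in HAVING,
-- ORDER BY inside APPROX_RANK uses an approx_* function with DESC,
-- and the HAVING condition is of the form "<= n".
SELECT deptno,
       APPROX_SUM(sal) AS approx_total_sal
FROM   emp
GROUP  BY deptno
HAVING APPROX_RANK(ORDER BY APPROX_SUM(sal) DESC) <= 3;
```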
There are many situations where data from external sources (text, CSV, JSON, XML, etc.) needs to be loaded into the database. This can be achieved in several ways, but each option suits particular scenarios only, mainly because of the differences between the source and target data structures.
For example, reading a text file may need a different mechanism than reading an XML file; the mechanism should be chosen based on the nature/type of the source data.
Given below are the possible ways to load data into an Oracle database.
Database client tools (SQL Developer, TOAD, etc. support loading data from files into tables)
Program/script (using any programming language, such as Java, or a scripting language, such as shell script or Python)
SQL Loader (Oracle provided utility )
External tables (an Oracle-provided option to read data from external files as if it were present in a table)
PL/SQL program (the Oracle-supplied UTL_FILE package supports file-handling operations such as read, write, copy and remove on operating system files)
ETL tools (many ETL tools are available to map, extract and load data from external files into an Oracle database)
Of the options above, SQL*Loader, external tables and UTL_FILE are the Oracle-provided mechanisms for loading data into an Oracle database.
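The external-table option, for instance, can be sketched as below. The directory path, directory object name and file name are assumptions for illustration; the CREATE DIRECTORY statement requires the appropriate privilege and the file must exist on the database server:

```sql
-- Assumed directory object and CSV file; adjust to your environment.
CREATE OR REPLACE DIRECTORY data_dir AS '/tmp/data';

CREATE TABLE stu_ext (
  stu_id   NUMBER,
  stu_name VARCHAR2(30)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY data_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ','
  )
  LOCATION ('students.csv')
);

-- The file contents can now be queried as if they were rows in a table.
SELECT * FROM stu_ext;
```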
Here is a course on Udemy that teaches the Oracle-supplied options for loading data from external sources into an Oracle database. The course covers SQL*Loader, UTL_FILE and external tables in detail, with examples.
From 12.1 onwards, functions can be defined in the "WITH" clause and used in the SELECT statement.
WITH
FUNCTION fn_cube (p_no NUMBER) RETURN NUMBER
AS
BEGIN
RETURN power(p_no,3);
END;
SELECT
100 AS num,
fn_cube(10)
FROM
dual;
/
Another example of a "WITH" clause function:
WITH
FUNCTION avg_sal (p_deptno NUMBER) RETURN NUMBER AS
v_avg_sal NUMBER;
BEGIN
SELECT
AVG(sal)
INTO v_avg_sal
FROM
emp
WHERE
deptno = p_deptno;
RETURN v_avg_sal;
END avg_sal;
SELECT
deptno,
dname,
loc,
round(avg_sal(deptno) ) avg_sal -- Calling the function
FROM
dept;
/
DML is not allowed in a function while it is being used in a SELECT statement. However, DML can be performed if the function is declared with PRAGMA AUTONOMOUS_TRANSACTION.
CREATE TABLE temp_1 (
col1 NUMBER
);
WITH
FUNCTION avg_sal (p_deptno NUMBER) RETURN NUMBER AS
PRAGMA autonomous_transaction;
v_avg_sal NUMBER;
BEGIN
SELECT
AVG(sal)
INTO v_avg_sal
FROM
emp
WHERE
deptno = p_deptno;
INSERT INTO temp_1 VALUES ( 1 );
COMMIT;
RETURN v_avg_sal;
END avg_sal;
SELECT
deptno,
dname,
loc,
round(avg_sal(deptno) ) avg_sal -- Calling the function
FROM
dept;
/
Example of using a procedure and a function in the WITH clause. Procedures cannot be called directly from SELECT statements; a procedure can, however, be invoked via a calling function, as shown below.
WITH
PROCEDURE pr_cube (p_no IN OUT NUMBER)
AS
BEGIN
p_no := power(p_no,3);
END;
FUNCTION fn_cube (p_no NUMBER) RETURN NUMBER AS
lv_num NUMBER := p_no;
BEGIN
pr_cube(lv_num);
RETURN lv_num;
END;
SELECT
100 AS num,
fn_cube(10)
FROM
dual;
/
WITH
PROCEDURE pr_cube (p_no IN OUT NUMBER) AS
PRAGMA autonomous_transaction;
BEGIN
p_no := power(p_no,3);
INSERT INTO temp_1 VALUES ( 1 );
COMMIT;
END;
FUNCTION fn_cube (p_no NUMBER) RETURN NUMBER AS
lv_num NUMBER := p_no;
BEGIN
pr_cube(lv_num);
RETURN lv_num;
END;
SELECT
100 AS num,
fn_cube(10)
FROM
dual;
/
select case when json_equal('{}','{ }') then 'SAME JSON' else 'DIFFERENT JSON' end JSON_COMP_RESULT
from dual;
select case when json_equal('{"A":null}','{"A":""}') then 'SAME JSON' else 'DIFFERENT JSON' end JSON_COMP_RESULT
from dual;
select case when json_equal('{"EMPNAME":"SCOTT"}','{"EMPNAME":"SCOTT"}') then 'SAME JSON' else 'DIFFERENT JSON' end JSON_COMP_RESULT
from dual;
select case when json_equal('{"EMPNAME":"SCOTT"}','{"EMPNAME":"Scott"}') then 'SAME JSON' else 'DIFFERENT JSON' end JSON_COMP_RESULT
from dual;
select case when json_equal('{"EMPID":"1000","EMPNAME":"KING"}','{"EMPID":"1000","EMPNAME":"KING"}') then 'SAME JSON' else 'DIFFERENT JSON' end JSON_COMP_RESULT
from dual;
select case when json_equal('{"EMPID":"1000","EMPNAME":"KING"}','{"EMPNAME":"KING","EMPID":"1000"}') then 'SAME JSON' else 'DIFFERENT JSON' end JSON_COMP_RESULT
from dual;
select case when json_equal('{"NUMLIST":[1,2,3,4,5,6]}','{"NUMLIST":[1,2,3,4,5,6]}') then 'SAME JSON' else 'DIFFERENT JSON' end JSON_COMP_RESULT
from dual;
select case when json_equal('{"NUMLIST":[1,2,3,4,5,6]}','{"NUMLIST":[6,5,4,3,2,1]}') then 'SAME JSON' else 'DIFFERENT JSON' end JSON_COMP_RESULT
from dual;
CREATE TABLE json_sample_data_table (
s_no NUMBER NOT NULL,
json_1 VARCHAR2(100),
json_2 VARCHAR2(100),
CONSTRAINT json_chk_1 CHECK ( json_1 IS JSON ),
CONSTRAINT json_chk_2 CHECK ( json_2 IS JSON )
);
insert into json_sample_data_table values(1,'{}','{ }');
insert into json_sample_data_table values(2,'{"A":null}','{"A":""}');
insert into json_sample_data_table values(3,'{"A":""}','{"A":""}');
insert into json_sample_data_table values(4,'{"EMPNAME":"SCOTT"}','{"EMPNAME":"SCOTT"}');
insert into json_sample_data_table values(5,'{"EMPID":"1000","EMPNAME":"KING"}','{"EMPID":"1000","EMPNAME":"KING"}');
insert into json_sample_data_table values(6,'{"EMPID":"1000","EMPNAME":"KING"}','{"EMPNAME":"KING","EMPID":"1000"}');
insert into json_sample_data_table values(7,'{"NUMLIST":[1,2,3,4,5,6]}','{"NUMLIST":[1,2,3,4,5,6]}');
insert into json_sample_data_table values(8,'{"NUMLIST":[1,2,3,4,5,6]}','{"NUMLIST":[6,5,4,3,2,1]}');
commit;
select * from json_sample_data_table
where json_1 = json_2;
select * from json_sample_data_table
where json_equal(json_1,json_2);
select * from json_sample_data_table
where not json_equal(json_1,json_2);
select s_no, json_1,json_2, case when json_equal(json_1,json_2) then 'SAME JSON' else 'DIFFERENT JSON' end JSON_COMP_RESULT
from json_sample_data_table;
I am Siva Jeevan.LK
[LEARN | CODE | TRAIN | SHARE]
Passionate about answering Oracle SQL & PL/SQL related questions.
In my day-to-day job, I work with technologies including Oracle, Java, Python, MongoDB, Talend and UNIX. I am passionate about data, SQL, coding & training.
In my spare time, I teach database technologies, ETL, etc. I am very glad that you are reading my profile, and I value your time as well as mine. Looking forward to seeing you in my videos at Siva Academy.
You can contact me via my YouTube channel (Siva Academy) or drop an email at siva.k.academy@gmail.com
Happy to Learn & Help :-)