Tom Kyte

Subscribe to the Tom Kyte feed
These are the most recently asked questions on Ask Tom
Updated: 16 hours 14 min ago

PLSQL_WARNINGS parameter doesn't stick

Fri, 2023-02-03 07:06
I'm an 'accidental' DBA who has also been tasked with PL/SQL development work. Ours is a small environment: I'm the only DBA, plus another developer who is completely new to PL/SQL. While working on a basic package in our development environment, I hit some compilation errors, discovered the PLSQL_WARNINGS parameter, and found it was set to:
<code>
NAME            TYPE    VALUE
--------------  ------  --------------------------------------------------------
plsql_warnings  string  ERROR:INFORMATIONAL, DISABLE:PERFORMANCE, DISABLE:SEVERE
</code>
Not quite what we want, so I ran this in SQL*Plus as SYS:
<code>ALTER SYSTEM SET PLSQL_WARNINGS = 'ENABLE:ALL' SCOPE=BOTH;</code>
which returned:
<code>System SET altered.</code>
To confirm it:
<code>show parameter PLSQL_WARNINGS;</code>
returned:
<code>
NAME            TYPE    VALUE
--------------  ------  ----------
plsql_warnings  string  ENABLE:ALL
</code>
So far, so good. Later that day I was demonstrating this setting to my boss and found it had reverted to the previous values. This was not only awkward, it was confusing. MOS tells me there is no known bug or other explanation for this behavior. The system is not being restarted, and the developer does not have privileges to change this parameter. What gives?
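Not an answer, but a quick checklist sketch that might help narrow it down: compare the in-memory value with the spfile value, and look for session-level overrides such as a logon trigger. These are standard dictionary views; nothing here is specific to this system.
<code>
-- In-memory value vs. value stored in the spfile
SELECT name, value FROM v$parameter   WHERE name = 'plsql_warnings';
SELECT name, value FROM v$spparameter WHERE name = 'plsql_warnings';

-- Any logon triggers that might issue ALTER SESSION and override the instance setting?
SELECT owner, trigger_name, triggering_event
FROM   dba_triggers
WHERE  triggering_event LIKE '%LOGON%';
</code>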
Categories: DBA Blogs

Oracle dump into CSV file has auto-padded field values

Fri, 2023-02-03 07:06
Hi, I am getting a table dump using a SQL*Plus client; the code is below. The problem is that the value of any VARCHAR2 field in the dumped table is saved in the dump file with padding spaces. For example, if f1 is a VARCHAR2(6) column in a table and holds the value "API", then when we take a dump of that table the same field value is saved as "API " in the dump file. Code snapshot:
<code>
sqlplus -s ${DB_USERNAME}/${DB_PASSWORD}@'(DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = '"${CONN_STR}"')(PORT = '"${CONN_PORT}"'))(CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = '"${DB_NAME}"')))' << !EOF!
set echo off
set feedback off
set linesize 4000
set newpage NONE
set pagesize 0
set serveroutput off
set termout off
set flush off
set NUMWIDTH 128
set COLSEP ","
set markup csv on
set trimspool on
set sqlprompt ''
set long 9999999
set VERIFY OFF
set TERM OFF
set headsep off
set TRIMOUT ON
--/**** Clear Screen & Buffer ****/--
clear screen
--/**** Start spooling file ****/--
--/* column record_count format 9999999 */--
set heading off
SPOOL $FILE_COUNT_FILE
select count(*) as record_count from ${SCHEMA_NAME}.$TABLE_NAME;
SPOOL OFF
set heading on
set pagesize 0 embedded on
SPOOL $FILE
select * from ${SCHEMA_NAME}.$TABLE_NAME;
SPOOL OFF
EXIT
!EOF!
</code>
The dump file looks like this:
<code>
"RATE_PLAN_CD","EFFECTIVE_DATE","SYS_CREATION_DATE","SYS_UPDATE_DATE","OPERATOR_ID","APPLICATION_ID","DL_SERVICE_CODE","DL_UPDATE_STAMP","BUNDLE_VALUE","DWN_RND_CP_IND","TIER_CHRG_IND","TENANT_CD","EXPIRATION_DATE"
"SO_PAYG_DFL_ALL_1911","01-JAN-09","21-JAN-16",,460,,"API ",,0,"N","FL","M2M","31-DEC-00"
"SI_MT_PAYG_ALL_1793","01-JAN-09","21-JAN-16",,460,,"API ",,0,"N","FL","M2M","31-DEC-00"
</code>
Please guide me on how to solve this.
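One hedged workaround sketch while the root cause is investigated: trim the trailing blanks explicitly in the SELECT that feeds the spool. The column names below come from the dump header above; the schema/table variables are the same ones used in the script.
<code>
set markup csv on quote on
SPOOL $FILE
select rate_plan_cd,
       effective_date,
       rtrim(dl_service_code) as dl_service_code,  -- strips the trailing blanks seen in the dump
       tenant_cd
from   ${SCHEMA_NAME}.$TABLE_NAME;
SPOOL OFF
</code>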
Categories: DBA Blogs

Altering a table to enable a foreign key constraint doesn't use parallel query when desired

Fri, 2023-02-03 07:06
If I trace an execution of this statement
<code>alter table TAB1 enable foreign_key_cons_on_tab2</code>
then a recursive query similar to the following attempts to find rows in TAB1 with no matching row in TAB2. If it fetches zero rows, the constraint is enabled.
<code>
select /*+ all_rows ordered dynamic_sampling(2) */ A.rowid
from   TAB1 A, TAB2 B
where  (A.KEY1 is not null)
and    (B.KEY1 (+) = A.KEY1)
and    (B.KEY1 is null)
</code>
When I look at the recursive statement's execution plan, I see that it is serial. If I manipulate some parameters I can coerce a parallel execution plan, but the execution of that plan remains serial: no parallel query slaves are enlisted, and the query ends up taking longer to finish than the serial plan. If I take that recursive query and execute it myself in SQL*Plus (at depth 0), it executes in parallel and uses all the PQ slaves I expected. Why? Since exclusive locks are taken, there should be no reason to preclude parallel execution. I tried the following with no success:
- alter session force parallel ddl
- alter session force parallel query
- creating a SQL profile to add a parallel hint
- increasing the table's degree of parallelism
UPDATE #1: Thanks to Tanel Poder's advice, I was able to make this work:
1. enable novalidate
2. enable validate
I would be lying if I said I wasn't disappointed in having to cache this kind of information.
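For reference, a sketch of the two-step sequence described in UPDATE #1, using the placeholder table and constraint names from the question:
<code>
alter session force parallel query parallel 8;

-- step 1: enable without validating existing rows
alter table TAB1 enable novalidate constraint foreign_key_cons_on_tab2;

-- step 2: validate the already-enabled constraint
alter table TAB1 enable validate constraint foreign_key_cons_on_tab2;
</code>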
Categories: DBA Blogs

Collection Variable Types

Fri, 2023-02-03 07:06
Tom: What is really the difference between these Oracle collection types, and when would you use each?
1. Index-by tables
2. Nested tables
3. Varrays
My understanding is that index-by tables are for data of the same type that is stored in memory. With nested tables you can store the variable's values in an Oracle table. Varrays are the same as nested tables except they are confined to a certain number of elements. Am I correct? Would you use a nested table for two tables, like a PO table and an items table, instead of referring to two tables? Thank you.
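A minimal sketch of the three declarations being compared (anonymous block, illustrative values only):
<code>
declare
  -- 1. Index-by table (associative array): PL/SQL only, sparse, lives in memory
  type t_assoc is table of varchar2(30) index by pls_integer;
  v_assoc t_assoc;

  -- 2. Nested table: can also be created as a SQL type and stored in a table column
  type t_nested is table of varchar2(30);
  v_nested t_nested := t_nested('A', 'B');

  -- 3. Varray: like a nested table, but with a fixed maximum size (here 3)
  type t_varray is varray(3) of varchar2(30);
  v_varray t_varray := t_varray('A');
begin
  v_assoc(1) := 'A';      -- no EXTEND needed for an associative array
  v_nested.extend;
  v_nested(3) := 'C';
  v_varray.extend;
  v_varray(2) := 'B';
end;
/
</code>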
Categories: DBA Blogs

SQL*Plus 'set feedback off' is not working as expected with Oracle client 19.14

Tue, 2023-01-31 05:46
Hi Tom, I have recently upgraded a couple of databases (12.1 -> 19.17) and the remote Oracle client (19.3 -> 19.14) on the application servers connecting to the upgraded databases. There is only one case where 'set feedback off' is not working as expected: when I commit from the SQL*Plus 19.14 client to just one of those upgraded 19.17 DBs. How can I trace SQL*Plus to find the reason for the unexpected behaviour of 'set feedback off'? Here is the test case I produced.
<code>
######### Connect to a database (test1) on the same db_host, using the same db_oracle_home, where "SET FEEDBACK OFF" works as EXPECTED.

infa64@dwhsas15> sqlplus test_user@dwhpx-scan:1521/test1

SQL*Plus: Release 19.0.0.0.0 - Production on Wed Jan 25 09:47:34 2023
Version 19.14.0.0.0
Copyright (c) 1982, 2021, Oracle.  All rights reserved.

Enter password:
Last successful login time: Wed Jan 25 2023 09:46:38 +01:00

Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.17.0.0.0

SQL> commit;

Commit complete.

SQL> set feedback off
SQL> commit;
SQL> exit
Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.17.0.0.0

######### Connect to a different database (test2) on the same db_host, using the same db_oracle_home, where "SET FEEDBACK OFF" is NOT working as EXPECTED.

infa64@dwhsas15> sqlplus test_user@dwhpx-scan:1521/sim

SQL*Plus: Release 19.0.0.0.0 - Production on Wed Jan 25 09:48:01 2023
Version 19.14.0.0.0
Copyright (c) 1982, 2021, Oracle.  All rights reserved.

Enter password:
Last successful login time: Wed Jan 25 2023 06:01:50 +01:00

Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.17.0.0.0

SQL> commit;

Commit complete.

SQL> set feedback off
SQL> commit;

Commit complete.    --<< UNEXPECTED >>--

SQL>
</code>
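Before any deeper tracing, a quick sanity check worth doing in the session that misbehaves (nothing here is specific to this setup):
<code>
SQL> show feedback
-- also check whether a profile script could be changing it on connect:
--   site profile: $ORACLE_HOME/sqlplus/admin/glogin.sql
--   user profile: login.sql, located via ORACLE_PATH (Linux/Unix) or SQLPATH (Windows)
</code>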
Categories: DBA Blogs

How to convert a collection into json_object

Tue, 2023-01-31 05:46
Hi, is it possible to convert a collection into a JSON object? I have a collection named code1 of type codelist, which is a table of VARCHAR2(2):
<code>code1 := codelist('AB', 'CD', 'EF');</code>
I want the above in JSON format as below. Is it possible? Please help.
<code>{ "codelist" : ["AB", "CD", "EF"] }</code>
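If CODELIST is (or can be) a SQL-level type, e.g. <code>create or replace type codelist as table of varchar2(2);</code>, then one sketch using the SQL/JSON functions (12.2 or later) is:
<code>
select json_object(
         'codelist' value json_arrayagg(t.column_value)
       ) as json_doc
from   table(codelist('AB', 'CD', 'EF')) t;

-- expected shape: {"codelist":["AB","CD","EF"]}
</code>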
Categories: DBA Blogs

ODBC Transparent Gateway

Tue, 2023-01-31 05:46
Hi, I have <b>Oracle 12c Enterprise on Windows 64-bit</b> and MariaDB 10.2 on Ubuntu, and I have been using the ODBC transparent gateway to send data from Oracle to MariaDB. I am in the process of transitioning to <b>Oracle Enterprise 19c on Solaris</b> and MariaDB 10.4 on RHEL 8. It looks like there are no native MariaDB ODBC drivers for Solaris. What is your recommendation for connecting MariaDB (RHEL) to Oracle 19c (Solaris)? With the current licensing model, is only the ODBC Transparent Gateway covered by regular licensing? This data transfer happens once a day (via dbms_scheduler); at most we will have 2,000 records transferred between these DBMSes. What is the best approach? Thanks very much.
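For what it's worth, once a DG4ODBC gateway is configured on some host that does have a MariaDB ODBC driver, the Oracle side is just a database link; the names below are placeholders, not a recommendation on the licensing question.
<code>
-- 'MARIADB_DSN' would be a tnsnames entry pointing at the gateway listener with (HS = OK)
create database link mariadb_link
  connect to "app_user" identified by "app_password"
  using 'MARIADB_DSN';

-- the daily dbms_scheduler job can then push rows over the link
insert into remote_table@mariadb_link (col1, col2)
select col1, col2
from   local_table;
</code>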
Categories: DBA Blogs

Poorly Performing Query after RLS / VPD Policy

Mon, 2023-01-30 11:26
We have an old query that was running fine (though it could use improvements in general), but after a VPD policy was put in place it started to run long and never complete. The difference is between 1 to 1.5 hours and not completing after running for well over 48 hours. After digging into the VPD policy, it turned out that for this application ID they did not want to apply the policy, so the predicate being added was just "1 = 1". From what I have experienced (and after doing some research) I haven't really found anything showing how "1 = 1" could affect a query. In this case, when we run the explain plan for the query in an environment where the new policies are not present, we get a different explain plan when "1 = 1" is added into the query. In a nutshell, this query selects against a large detail table, but it must first check whether the detail has been completed (subquery on the vw_header_detail view) and then check whether the order was modified in the window of time we are looking at (second subquery doing an EXISTS against 4 different tables). Note: these are all views that reference tables. If it is completed and has been modified in the time window, it goes back to the detail table, which is very large, and pulls all the applicable rows for that seq_id.
<code>
SELECT *
FROM   vw_detail dtl
WHERE  EXISTS (SELECT 1
               FROM   vw_header_detail sts1
               WHERE  ((sts1.posted_date IS NOT NULL) OR (sts1.secondary_posted_date IS NOT NULL))
               AND    sts1.seq_id = dtl.seq_id
               AND    sts1.c_status IN ('P','D','C','G','K','A')
               AND    sts1.p_status IN ('S','P','F','O'))
AND   (EXISTS (SELECT 1
               FROM   vw_header hdr
               WHERE  nvl(update_datetime, :out_of_range) BETWEEN :from_date AND :to_date
               AND    hdr.seq_id = dtl.seq_id)
    OR EXISTS (SELECT 1
               FROM   vw_detail dtl2
               WHERE  nvl(update_datetime, :out_of_range) BETWEEN :from_date AND :to_date
               AND    dtl2.seq_id = dtl.seq_id)
    OR EXISTS (SELECT 1
               FROM   vw_secondary_header sech
               WHERE  nvl(update_datetime, :out_of_range) BETWEEN :from_date AND :to_date
               AND    sech.seq_id = dtl.seq_id)
    OR EXISTS (SELECT 1
               FROM   vw_secondary_detail secd
               WHERE  nvl(update_datetime, :out_of_range) BETWEEN :from_date AND :to_date
               AND    secd.seq_id = dtl.seq_id))
ORDER BY dtl.seq_id, dtl.line_number
</code>
The existing query above returns this explain plan:
<code>
------------------------------------------------------------------------------------------------------------------
| Id  | Operation                         | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                  |            |    21 |  9282 |  6282K  (1)| 00:04:06 |
|   1 |  SORT ORDER BY                    |            |    21 |  9282 |  6282K  (1)| 00:04:06 |
|   2 |   NESTED LOOPS SEMI               |            |    21 |  9282 |  6282K  (1)| 00:04:06 |
|   3 |    NESTED LOOPS                   |            |    21 |  9240 |  6281K  (1)| 00:04:06 |
|   4 |     VIEW                          | VW_SQ_1    |   945K|  6463K|  6278K  (1)| 00:04:06 |
|   5 |      HASH UNIQUE ...
</code>
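For readers unfamiliar with the setup, the RLS policy function shape being described is roughly the following (the names and the exemption test are made up); the puzzle is why even the branch that returns "1 = 1" changes the plan.
<code>
create or replace function vpd_detail_policy (
  p_schema in varchar2,
  p_object in varchar2
) return varchar2
as
begin
  if sys_context('userenv', 'client_identifier') = 'EXEMPT_APP' then
    return '1 = 1';   -- "no restriction", but still a predicate the optimizer must merge in
  end if;
  return 'app_id = sys_context(''userenv'', ''client_identifier'')';
end vpd_detail_policy;
/
</code>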
Categories: DBA Blogs

Generate N TIMESTAMPs

Mon, 2023-01-30 11:26
I need to create some test data where I generate N timestamps for every 15-minute interval between two times, including the start time and less than the end time. N would be defined with dbms_random between a low_number and a high_number. I want to run something like this:
<code>get_dates(TIMESTAMP '2023-01-25 03:00:00', TIMESTAMP '2023-01-27 13:30:00', 10, 25)</code>
Any help would be greatly appreciated.
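One possible shape for the generator, written as plain SQL (the bounds 10 and 25 are the example low/high values from the question):
<code>
with slots (slot_ts) as (
  -- every 15-minute slot >= start time and < end time
  select timestamp '2023-01-25 03:00:00'
         + numtodsinterval((level - 1) * 15, 'minute')
  from   dual
  connect by timestamp '2023-01-25 03:00:00'
             + numtodsinterval((level - 1) * 15, 'minute')
           < timestamp '2023-01-27 13:30:00'
),
slot_counts as (
  -- pick N once per slot; materialize so the random value is not re-evaluated later
  select /*+ materialize */
         slot_ts,
         trunc(dbms_random.value(10, 26)) as n   -- N between 10 and 25 for this slot
  from   slots
)
select sc.slot_ts
from   slot_counts sc
join  (select level as k from dual connect by level <= 25) gen   -- 25 = high_number
  on   gen.k <= sc.n
order by sc.slot_ts;
</code>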
Categories: DBA Blogs

SYS.DBMS_NETWORK_ACL_ADMIN

Mon, 2023-01-30 11:26
I am getting ORA-04063: package body "SYS.DBMS_NETWORK_ACL_ADMIN" has errors. How do I reinstall this package?
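Before reinstalling anything, a first-diagnostics sketch using standard dictionary views plus the stock recompile script:
<code>
select object_type, status
from   dba_objects
where  owner = 'SYS'
and    object_name = 'DBMS_NETWORK_ACL_ADMIN';

select line, position, text
from   dba_errors
where  owner = 'SYS'
and    name  = 'DBMS_NETWORK_ACL_ADMIN'
order  by sequence;

-- then try recompiling invalid objects:
-- @?/rdbms/admin/utlrp.sql
</code>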
Categories: DBA Blogs

Bind variables in Live SQL

Mon, 2023-01-30 11:26
Dear Tom, how can I create a bind variable in Oracle Live SQL?
<code><variable variable_name data_type;></code>
for example
<code><variable dusko number;></code>
just doesn't work; I get an "Unsupported command" message... Thanks and best regards
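For comparison, the client-side (SQL*Plus / SQLcl) form has no angle brackets or trailing semicolon on the VARIABLE command; whether Live SQL accepts the VARIABLE command at all is part of the question, so treat this purely as the syntax reference:
<code>
variable dusko number
exec :dusko := 42
select :dusko as dusko from dual;
print dusko
</code>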
Categories: DBA Blogs

Grouping and updating column with equal parts

Mon, 2023-01-30 11:26
Hi, I have a table that holds about 2 million records. No column in the table can be used to divide the rows equally. When the data is initially loaded into the table, the bucket_number column is null. After the data is loaded, I would like to divide the total number of records equally and update the bucket_number column with a bucket number. Later the buckets will be processed in parallel by another batch process. I could use NTILE to bucketize the data, but I want to persist the bucket number in the table. What is the best way to bucketize and update the column in Oracle?
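A sketch of one way to persist the NTILE result (the table name and the bucket count of 16 are placeholders):
<code>
merge into my_big_table t
using (
  select rowid as rid,
         ntile(16) over (order by rowid) as bkt   -- 16 roughly equal-sized buckets
  from   my_big_table
) s
on (t.rowid = s.rid)
when matched then
  update set t.bucket_number = s.bkt;
</code>
An alternative worth weighing is not persisting anything and deriving the bucket on the fly, e.g. ora_hash(rowid, 15), or letting DBMS_PARALLEL_EXECUTE do both the chunking and the parallel batch run.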
Categories: DBA Blogs

Oracle Materialized View - adding a Where clause

Mon, 2023-01-30 11:26
I have a materialized view in my own schema which is based on a table in another schema. However, I want to modify the materialized view to add a WHERE clause which will never be true, e.g. WHERE 1=2. Can I alter the existing materialized view to add this WHERE clause to the MView query? Will an ALTER command work, or do I have to drop and recreate the MView (which I am reluctant to do)?
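If it does come down to drop-and-recreate, a sketch of that route (the MV name is a placeholder; the SELECT is whatever dbms_metadata returns for the current definition, plus the new predicate):
<code>
-- capture the current definition first
select dbms_metadata.get_ddl('MATERIALIZED_VIEW', 'MY_MV') from dual;

drop materialized view my_mv;

create materialized view my_mv
build immediate
as
select *                          -- original select list goes here
from   other_schema.base_table    -- placeholder for the real base table
where  1 = 2;                     -- the never-true predicate from the question
</code>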
Categories: DBA Blogs

Trying to Connect to a Local Oracle inside a Container

Mon, 2023-01-30 11:26
I have a Docker container with Oracle Database Enterprise Edition (I followed the basic instructions on container-registry.oracle.com), and I'm trying to create a Node.js application that connects to the Oracle database inside the container (I tried the how-to basics on node-oracledb.readthedocs.io), but without success. My connection object is the following (using the same credentials and connectString I am able to connect in SQL Developer):
<code>
{
  user: 'SYS',
  password: 'Paulo@oracle123',
  connectString: 'localhost/OracleSID '
}
</code>
I've been receiving the following error:
<code>
Error: DPI-1047: Cannot locate a 64-bit Oracle Client library: "The specified module could not be found".
See https://oracle.github.io/node-oracledb/INSTALL.html for help
Node-oracledb installation instructions: https://oracle.github.io/node-oracledb/INSTALL.html
You must have 64-bit Oracle Client libraries in your PATH environment variable.
If you do not have Oracle Database on this computer, then install the Instant Client Basic or Basic Light package from
https://www.oracle.com/database/technologies/instant-client/winx64-64-downloads.html
A Microsoft Visual Studio Redistributable suitable for your Oracle client library version must be available.
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async connectToDb (D:\PersonalDev\Migrate Oracle to MSSQL\index.js:10:18)
  { errorNum: 0, offset: 0 }
</code>
But I couldn't figure it out. Please help!
Categories: DBA Blogs

Directing trigger output to queue

Mon, 2023-01-30 11:26
We currently use triggers on a subset of tables in our transactional database (on-premises) to capture row/column level changes and insert them into a separate table. With the availability of the DBMS_CLOUD package for our product version, we are exploring the option to rewrite some of the existing triggers and push the row/column changes to a native queueing service in the cloud instead of writing to another table in the database. The idea is then to have an asynchronous process subscribe to the queue and push the data to a downstream data store. The developer blog below has some very relevant information, but it is written for Autonomous Database and Oracle Cloud: https://blogs.oracle.com/developers/post/back-to-the-database-part-3-publishing-database-changes-to-a-stream Has anyone had success using the DBMS_CLOUD package on 19c to work with streaming/queuing services on the Azure or AWS clouds? In the documentation (https://docs.oracle.com/en/database/oracle/oracle-database/19/arpls/DBMS_CLOUD.html), I see that it can interact with an object store, but unlike the documentation for Autonomous Database there is no mention of REST API capabilities. Is that an undocumented feature in 19c? I am looking for some clarity before embarking on a proof of concept. Thank you.
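For comparison only, the purely in-database version of "trigger pushes a change message onto a queue" looks like this with AQ (table, queue, and payload details are hypothetical, and it assumes a queue created on an AQ$_JMS_TEXT_MESSAGE queue table); the question is essentially about replacing this enqueue/insert step with a call out to a cloud queue.
<code>
create or replace trigger trg_orders_changes
after insert or update on orders
for each row
declare
  l_opts    dbms_aq.enqueue_options_t;
  l_props   dbms_aq.message_properties_t;
  l_msgid   raw(16);
  l_payload sys.aq$_jms_text_message;
begin
  l_payload := sys.aq$_jms_text_message.construct;
  l_payload.set_text(json_object('order_id' value :new.order_id,
                                 'status'   value :new.status));
  dbms_aq.enqueue(queue_name         => 'change_events_q',
                  enqueue_options    => l_opts,
                  message_properties => l_props,
                  payload            => l_payload,
                  msgid              => l_msgid);
end;
/
</code>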
Categories: DBA Blogs

Materialize Hint used in almost all queries

Mon, 2023-01-30 11:26
Hello. Thanks to this forum, a lot of my questions have been answered. I noticed that most of the queries in an application I am working on use the MATERIALIZE hint in their subqueries. Our queries heavily use CTEs and rely on multiple joins, and our query performance is not great. In OEM I noticed all these queries have high user I/O, and I am guessing it's because all the queries that hit tables of hundreds of millions of rows are materialized. Is that true? Is it better to keep the hint there rather than remove it? When should I REALLY be using the MATERIALIZE hint? The queries also use predicates like "where column1 = CASE WHEN this THEN 1 WHEN THAT THEN 2 WHEN OTHER_THAT THEN 3" and expressions like "select NVL(this, NVL(that, 99)), column, column from table". Does the above impact performance? If so, how can I improve it? I also noticed the use of the "use_concat materialize" hints. For some of the queries, when I execute them without "use_concat" they run almost 300% faster. Under what conditions should this hint be used? What would help with performance?
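One simple way to answer "is it better to keep it there?" for a given statement is an A/B test of the same CTE with MATERIALIZE versus INLINE (both are undocumented hints; the table and column names below are placeholders):
<code>
-- version A: force the CTE to be materialized into a temp segment
with big_cte as (
  select /*+ MATERIALIZE */ customer_id, sum(amount) as total
  from   sales
  group  by customer_id
)
select * from big_cte where total > 1000;

-- version B: force the CTE to be merged into the main query
with big_cte as (
  select /*+ INLINE */ customer_id, sum(amount) as total
  from   sales
  group  by customer_id
)
select * from big_cte where total > 1000;
</code>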
Categories: DBA Blogs

Different number of WHERE-condition function calls between direct and indirect calls

Fri, 2023-01-20 13:06
For the code and test output, please see the LiveSQL link: https://livesql.oracle.com/apex/livesql/s/onh4jsczpzaa2bp9t0r9yfkxh When called with <code>exec test_proc(1000, 1, 5)</code> the output shows 500 vs. 77. Why is the indirect count 77 calls?
<code>
Statement processed.
---------- Compare test_proc(1000, 1, 5) Function-Calls -------------
Direct   Count = 500
InDirect Count = 77
</code>
In the following calls, we can see that between test_proc(183, 0, 5) and test_proc(184, 0, 5) the InDirect Count changes from 50 to 51, whereas between test_proc(186, 1, 5) and test_proc(187, 1, 5) the InDirect Count changes from 50 to 51. Why is there such a difference in the call counts?
Categories: DBA Blogs

Recursion - Why is anchor value not repeated

Thu, 2023-01-19 18:46
In the first recursion example of <b>Hierarchical Queries: Databases for Developers</b>, we run the query for the initial case in the recursive loop:
<code>
select employee_id, first_name, last_name, manager_id
from   employees
where  manager_id is null
</code>
before a <code>union all</code>, and then run the part that repeats until it hits its termination condition:
<code>
  select e.employee_id, e.first_name, e.last_name, e.manager_id
  from   org_chart oc
  join   employees e on e.manager_id = oc.employee_id
)
</code>
My question is: <b>why is the initial anchor value not repeated?</b> Is it not searching for cases where manager_id is null for every line? I would have expected the root to be repeated between every other entry of the data.
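For reference, here are the two fragments from the question assembled into the complete recursive query (column list and final SELECT added to make it runnable):
<code>
with org_chart (employee_id, first_name, last_name, manager_id) as (
  -- anchor member: evaluated exactly once, producing the root row(s)
  select employee_id, first_name, last_name, manager_id
  from   employees
  where  manager_id is null
  union all
  -- recursive member: evaluated against the rows produced by the previous iteration only
  select e.employee_id, e.first_name, e.last_name, e.manager_id
  from   org_chart oc
  join   employees e on e.manager_id = oc.employee_id
)
select * from org_chart;
</code>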
Categories: DBA Blogs

Write a SQL Query to Calculate New Salaries

Thu, 2023-01-19 00:46
Assume that the company you work for wants to increase salaries by 20% this year, and also wants to add 1000 as a bonus to each salary after the 20% increase. Based on this information, write a SQL query that returns the new salaries from the EMPLOYEES table. Your query should return the employee_id, salary, and new_salary columns (new_salary should be the alias for the calculated salary).
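One way to write it against a standard HR-style EMPLOYEES table:
<code>
select employee_id,
       salary,
       salary * 1.2 + 1000 as new_salary   -- 20% increase first, then the 1000 bonus
from   employees;
</code>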
Categories: DBA Blogs

Rebuilding text index

Thu, 2023-01-19 00:46
Hello. I create a text index:
<code>
CREATE INDEX IDXC_SEARCH_FULL_NAME ON T_SEARCH (main.name_full)
INDEXTYPE IS CTXSYS.CONTEXT
PARAMETERS ('
  datastore     main.name_search_ds
  lexer         main.name_search_lexer
  wordlist      main.name_search_wl
  section group main.name_search_sg
  stoplist      CTXSYS.empty_stoplist
  storage       main.name_store
  sync (ON COMMIT)
');
</code>
After that,
<code>SELECT dbms_metadata.get_ddl(object_type => 'INDEX', NAME => 'IDXC_SEARCH_FULL_NAME', SCHEMA => 'MAIN') FROM dual;</code>
returns the correct PARAMETERS string, and it is also shown by
<code>SELECT parameters FROM dba_indexes WHERE index_name = 'IDXC_SEARCH_FULL_NAME';</code>
Then, when I do
<code>ALTER INDEX IDXC_SEARCH_FULL_NAME REBUILD;</code>
the parameters disappear:
<code>SELECT parameters FROM dba_indexes WHERE index_name = 'IDXC_SEARCH_FULL_NAME';</code>
returns NULL. Why is that? Should I add PARAMETERS with REPLACE when rebuilding, even if I don't change the parameters?
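For reference, a hedged sketch of the rebuild-with-parameters form being asked about (whether a bare REBUILD should clear the stored parameter string is the question itself; the preference names come from the CREATE INDEX above, and the sync clause is omitted here):
<code>
alter index IDXC_SEARCH_FULL_NAME rebuild
  parameters ('replace
    datastore     main.name_search_ds
    lexer         main.name_search_lexer
    wordlist      main.name_search_wl
    section group main.name_search_sg
    stoplist      CTXSYS.empty_stoplist
    storage       main.name_store');
</code>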
Categories: DBA Blogs
