
About jjeffman

  • Birthday 08/04/1952

Profile Information

  • Location
    Porto Alegre, Brazil
  • Interests
    Database applications, SQL tuning
  1. Hello Boobal Ganesan, Thank you very much for answering me. No, it is not a single delete. Since I may have around 10 million records to delete, and that could cause a temporary tablespace error, I am committing the transaction every 100 thousand deletions. At each commit I would like to tell the user that "x thousand records have been deleted so far". I have already thought about writing a function that returns a table of VARCHAR2, but again, it seems the table of records only becomes available after the process finishes. So I must accept that I cannot let the user know that the process is still running fine. The only alternative is to raise the logic up a level and let the interface manage the loop of records to delete, showing a message at each commit, but this approach performs worse than the pure PL/SQL code. Best regards. Jayme Jeffman
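The progress reporting asked for above can be done without returning data from the block: a session can publish counters through DBMS_APPLICATION_INFO while it runs, and any other session (or the client) can poll V$SESSION_LONGOPS. A minimal sketch of the batched delete; the table name, predicate, and row totals are illustrative, not the poster's actual code:

```sql
DECLARE
  l_rindex BINARY_INTEGER := DBMS_APPLICATION_INFO.set_session_longops_nohint;
  l_slno   BINARY_INTEGER;
  l_total  PLS_INTEGER := 0;
  l_batch  PLS_INTEGER;
BEGIN
  LOOP
    -- hypothetical predicate; delete in batches of 100,000 rows
    DELETE FROM million_data
     WHERE mytimestamp < :cutoff
       AND ROWNUM <= 100000;
    l_batch := SQL%ROWCOUNT;        -- capture before COMMIT resets it
    EXIT WHEN l_batch = 0;
    l_total := l_total + l_batch;
    COMMIT;
    -- publish progress; visible to other sessions in V$SESSION_LONGOPS
    DBMS_APPLICATION_INFO.set_session_longops(
      rindex    => l_rindex,
      slno      => l_slno,
      op_name   => 'batched delete',
      sofar     => l_total,
      totalwork => 10000000,        -- estimated total, illustrative
      units     => 'rows');
  END LOOP;
END;
/
```

A monitoring session would then run `SELECT opname, sofar, totalwork FROM v$session_longops` to show the user how far the delete has progressed.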
  2. Hello, I wonder whether there is any way of sending a message or data to the requesting session while a PL/SQL block is running. I have a process that deletes a very large amount of data, and it takes longer than today's users are willing to wait, so I would like to show them that the process has already deleted x thousand records and is still working. Is that possible? Does TNS implement anything of the kind? Thank you very much. Best regards. Jayme Jeffman
  3. jjeffman

    Index performance on several million records

    Hi Burleson, Thank you very much for answering me. Since this Oracle software is installed on my client's server and I have no access to the machine itself, I do not know how many CPUs it has, but I guess it has at least four. I still have a question, though. Suppose I build a query that compares the mytimestamp column to a value: is the composite index on mytimestamp and table2_key enough for joining the tables, or would it be better for the million_data table to have two different indexes, one on each column, instead of the composite index? I do not know how Oracle uses the table indexes. Thank you very much. Best regards. Jayme Jeffman
  4. Hello, I have two tables:

        Create table "million_data" (
          "MYTIMESTAMP" NUMBER,
          "TABLE2_KEY"  NUMBER,
          "QUALITY"     NUMBER,
          "THEVALUE"    NUMBER
        )
        /
        Create Index "million_data_tst_key2" ON "million_data" ("MYTIMESTAMP","TABLE2_KEY") TABLESPACE TBS_IDX
        /
        Create Index "million_data_key2_tst" ON "million_data" ("TABLE2_KEY","MYTIMESTAMP") TABLESPACE TBS_IDX
        /
        Create table "TABLE2" (
          "TABLE2_KEY" Number,
          "RECTITLE"   Varchar2(200),
          primary key ("TABLE2_KEY") USING INDEX TABLESPACE TBS_IDX
        )

    The "million_data" table has around 5,000,000,000 records. TABLE2 has only 10,000 records. Counting the records takes 500 seconds. The question is: how can I improve the performance of the SQL that joins the two tables? Which is the best approach to creating the indexes on "million_data"? Is the composite index a good idea? If I need to run queries limiting MYTIMESTAMP to a range between two values, would an index on just the MYTIMESTAMP column give better performance? Thank you very much. Best regards.
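The kind of query the two posts above are discussing can be sketched like this (bind variable names are illustrative):

```sql
SELECT t2."RECTITLE", m."THEVALUE"
  FROM "million_data" m
  JOIN "TABLE2" t2
    ON t2."TABLE2_KEY" = m."TABLE2_KEY"
 WHERE m."MYTIMESTAMP" BETWEEN :t_from AND :t_to;
```

As a rule of thumb, a composite B-tree index supports a range scan only when its leading column appears in the predicate. For the query above, the index on ("MYTIMESTAMP","TABLE2_KEY") can cover both the range filter and the join column in one scan, while the one on ("TABLE2_KEY","MYTIMESTAMP") suits queries that first fix a TABLE2_KEY value. Two separate single-column indexes would generally serve this query less well than the matching composite.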
  5. jjeffman

    Table full access

    Hello, I have a table whose primary key is a foreign key from another table. The table data can be seen in the attached image, along with the query plan I obtained from the Oracle database. Every table involved in the query has indexes defined on its primary and foreign keys. The last analyzed date is yesterday, and all indexes are valid. The inner joins performed in the query are all based on primary keys. So why is the Oracle database performing a full scan on tables when there are valid indexes it could use instead? The resulting cost equals the number of records in the first table of the query. Thank you very much.
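A standard first step for investigating a surprise full scan is to display the optimizer's plan and refresh the statistics; the table names below are hypothetical stand-ins for those in the post:

```sql
-- show the plan Oracle chose for a primary-key join (illustrative query)
EXPLAIN PLAN FOR
SELECT p.id, c.payload
  FROM parent_tab p
  JOIN child_tab c ON c.parent_id = p.id;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

-- stale or missing statistics are a common cause of unexpected full scans
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(ownname => USER,
                                tabname => 'CHILD_TAB',
                                cascade => TRUE);
END;
/
```

Note also that a full scan is not automatically wrong: when a join must visit most of a table's rows anyway, the optimizer may correctly price a full scan below an index access.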
  6. jjeffman

    Create index on a huge table

    Hello, I have a table with millions of records. How can I create an index on it without running into temporary tablespace problems? Thank you very much.
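A common pattern for building an index on a very large table is a parallel, NOLOGGING build with a manually enlarged sort area, so more of the sort happens in memory instead of in the temporary tablespace. All names and sizes below are illustrative, not a tested recommendation for this system:

```sql
-- give the session a large manual sort area for the duration of the build
ALTER SESSION SET workarea_size_policy = MANUAL;
ALTER SESSION SET sort_area_size = 1073741824;   -- 1 GB, illustrative

-- build in parallel without redo, then restore normal attributes
CREATE INDEX big_tab_ix ON big_tab (some_col)
  TABLESPACE tbs_idx NOLOGGING PARALLEL 8;
ALTER INDEX big_tab_ix LOGGING NOPARALLEL;
```

Keep in mind that each parallel server gets its own sort area, so a high degree of parallelism multiplies memory use; the alternative, if temp space is the hard limit, is simply to add a tempfile to the temporary tablespace before the build.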
  7. jjeffman

    ora-01460 error on Oracle 11g

    Hello Boobal. I have found out that the problem is related to the connection character set. The database is configured with the WE8ISO8859P1 character set, and I am passing Unicode values with special Portuguese characters to store in NVARCHAR2 columns. I hope I can get a solution from the makers of the component I am using to connect to Oracle.
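The two character sets involved can be confirmed directly from the data dictionary (NVARCHAR2 columns use the national character set, not the database character set):

```sql
SELECT parameter, value
  FROM nls_database_parameters
 WHERE parameter IN ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');
```

On the instance described above, this would be expected to show WE8ISO8859P1 for NLS_CHARACTERSET, with the national character set (typically AL16UTF16) governing what the NVARCHAR2 columns can hold.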
  8. jjeffman

    ora-01460 error on Oracle 11g

    Hello Boobal, Thank you very much for answering me. I am going to try to execute the procedure in SQL Developer or Toad to check whether the problem is in the database itself or inside my C++ application. I will also check the NLS_LANGUAGE parameters. Since I am setting up the connection to use Unicode, I might be having problems when passing parameters from the C++ application.
  9. jjeffman

    ora-01460 error on Oracle 11g

    Hello Boobal, Thank you very much for answering me. This is the banner of the instance where the error was caught:

        Oracle Database 11g Release - 64bit Production
        PL/SQL Release - Production
        CORE Production
        TNS for Linux: Version - Production
        NLSRTL Version - Production

    Since I have a C++ application filling in the parameters, I might be passing parameter data in a wrong way.
  10. jjeffman

    ora-01460 error on Oracle 11g

    Hello, I have built a procedure inside a package which is responsible for adding records to a table that has a CLOB column.

        Create table S4MED_EVENTO_MDM (
          EVENTO_ID    Number NOT NULL,
          DATAHORA     Timestamp(3) NOT NULL,
          MENSAGEM     NVarchar2(600) NOT NULL,
          SEVERIDADE   Number(5,0) Default 0 NOT NULL,
          GRUPO        NVarchar2(20),
          TIPOEVENTO   NVarchar2(15),
          ACKTIME      Date,
          NOMEARQ      NVarchar2(50),
          TEXTOARQUIVO Clob,
          APUSER       Number(22,0),
          MEDICAO      Number(22,0),
          Constraint pk_S4MED_EVENTO_MDM primary key (EVENTO_ID) USING INDEX TABLESPACE yyyyyy
        ) TABLESPACE XXXXXX
        /

    The procedure code is:

        PROCEDURE addEventoMed(
            p_text    IN NVARCHAR2  -- message text
          , p_sever   IN INTEGER    -- severity level
          , p_group   IN NVARCHAR2  -- group the event belongs to
          , p_tipo    IN NVARCHAR2  -- event type
          , p_file    IN NVARCHAR2  -- name of the file read
          , p_ftext   IN NVARCHAR2  -- content of the read file associated with the event
          , p_medicao IN INTEGER    -- measurement the event refers to
          , p_result  OUT NUMBER
        ) AS
          lob_loc CLOB;
        BEGIN
          BEGIN
            INSERT INTO S4MED_EVENTO_MDM
              (DataHora, Mensagem, Severidade, Grupo, TipoEvento, NomeArq, TextoArquivo, MEDICAO)
            VALUES
              (SYSTIMESTAMP, p_text, p_sever, p_group, p_tipo, p_file, EMPTY_CLOB(), p_medicao)
            RETURNING TextoArquivo INTO lob_loc;
            IF p_ftext IS NOT NULL THEN
              DBMS_LOB.WRITE(lob_loc, LENGTH(p_ftext), 1, p_ftext);
            END IF;
            SELECT seq_eventomdm.currval INTO p_result FROM sys.dual;
          EXCEPTION
            WHEN OTHERS THEN
              RAISE;
          END;
        END addEventoMed;

    This procedure works fine on my local Oracle 11g instance but not on my client's 11g system. It raises an exception with code ORA-01460, and I have no explicit date or number conversions. Is there any reason for this exception? Thank you very much.
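Given that the file text is bound from C++, one hedged guess consistent with ORA-01460 (and with the character-set finding later in the thread) is that the large `p_ftext` bind exceeds what a plain string parameter can carry. A trimmed, hypothetical variant that accepts the text as a CLOB so the client can bind it as a LOB (columns reduced for brevity; this is a sketch, not the thread's confirmed fix):

```sql
PROCEDURE addEventoMed(
    p_text   IN NVARCHAR2   -- message text
  , p_ftext  IN CLOB        -- was NVARCHAR2; a LOB bind avoids string length limits
  , p_result OUT NUMBER
) AS
BEGIN
  -- a CLOB parameter can be inserted directly, no EMPTY_CLOB()/DBMS_LOB.WRITE dance
  INSERT INTO S4MED_EVENTO_MDM (DataHora, Mensagem, TextoArquivo)
  VALUES (SYSTIMESTAMP, p_text, p_ftext);
  SELECT seq_eventomdm.currval INTO p_result FROM dual;
END addEventoMed;
```

The client side would then need to bind `p_ftext` with the driver's LOB binding mechanism rather than as a string.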
  11. jjeffman

    Single column index versus multi-column

    Hi Burleson,

    >>>> very high throughput so I should not have too many indexes.
    >> By "throughput", do you mean DML? If so, you are correct.

    Yes, "insert" SQL commands.

    >> Do you have a test environment, where you can experiment?

    Yes, I have a test environment. I am reading the articles you have pointed me to. Thank you very much.
  12. Hello, this is a question I still have doubts about. The column group does not form a unique index, so there is no need to create the index to act as a constraint. The "where" clauses mostly use the timestamp column to limit the number of records, and sometimes I am interested in having other attributes as filters. The system also has to sustain a very high insert throughput, so I should not have too many indexes. What is the best choice: one index for each column, or one index spanning two or more columns? Does the column order influence the performance of the "select" command? Thank you very much.
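Column order in a composite index does matter for selects. A sketch with hypothetical table and column names, illustrating which predicates a composite index can serve efficiently:

```sql
CREATE INDEX ix_readings_ts_attr ON readings (sample_ts, attr_code);

-- served well: the leading column is constrained by the range predicate,
-- and attr_code further narrows the scan within that range
SELECT sample_ts, reading_value
  FROM readings
 WHERE sample_ts BETWEEN :t1 AND :t2
   AND attr_code = :a;

-- served poorly: without a predicate on the leading column, this index
-- is usable only via an index skip scan at best
SELECT sample_ts, reading_value
  FROM readings
 WHERE attr_code = :a;
```

Since every extra index slows inserts, a workable compromise for the workload described is one composite index led by the timestamp column, adding single-column indexes on other attributes only if queries that omit the timestamp prove common.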
  13. jjeffman

    join two tables using timestamp columns

    Hi burleson, Thank you very much for answering me. My pleasure to be in this forum. No, I am not interested in the elapsed time between samples. Since the data is filled in by a continuous, real-time process that may leave gaps, I need to know which dates and times have no value. Of course, it would be very nice to know when each gap begins and ends, so I can issue a command asking the real-time process to retry fetching the missing data. I will try the approach described in the article you linked to. Thank you very much.
  14. Hello, I have a table into which data acquired every 5 minutes from a data collector system is inserted. Its structure is:

        Create table MEDIDA (
          DataHora Timestamp(3) NOT NULL,
          Contagem Number(5,0) Default 0 NOT NULL,
          Valor    Number(15,5) NOT NULL,
          FALHA    Char(1) Default 'N' NOT NULL,
          CANAL    Number(22,0) NOT NULL
        ) TABLESPACE XXXXX
        /
        Create UNIQUE Index medida_un_canal_dthr ON MEDIDA (DataHora,CANAL) TABLESPACE IIIIII
        /

    The data collector system may fail to get values from its source, so there will be holes in the time series. I am trying to determine whether and where there are holes by joining the MEDIDA table with another table that holds the complete time series:

        CREATE TABLE DATA_CHECK (
          "DATA"  TIMESTAMP(3) NOT NULL,
          "DUMMY" VARCHAR2(20 BYTE),
          CONSTRAINT "DATA_CHECK_PK" PRIMARY KEY ("DATA")
        ) TABLESPACE xxxx;

    I have made a function which first truncates the DATA_CHECK table and then fills it with timestamp values every five minutes over the period of interest, let's say a month. When I perform the left or right join using "data_check.data = medida.datahora", the number of records is always the same and equal to the number of records in MEDIDA. I cannot figure out what I am doing wrong. If I add a where clause to filter records where medida.datahora is null, I get an empty record set. Please help me. What am I doing wrong? Kind regards. Jayme.
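For the result the post is after, the driving (preserved) table of the outer join must be the complete series, DATA_CHECK, with the IS NULL test on the optional side. A sketch using the tables defined above:

```sql
-- rows of the complete series that have no matching sample: the "holes"
SELECT dc."DATA" AS missing_ts
  FROM data_check dc
  LEFT JOIN medida m
    ON m.datahora = dc."DATA"
 WHERE m.datahora IS NULL
 ORDER BY dc."DATA";
```

If the join is written the other way round (MEDIDA LEFT JOIN DATA_CHECK), every result row necessarily comes from MEDIDA, which matches the symptom described: the count always equals MEDIDA's row count, and filtering for NULL timestamps yields an empty set.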