About corrado.perucca

  1. ORA-12170: TNS:Connect timeout occurred

    Great! You're welcome.
  2. ORA-12170: TNS:Connect timeout occurred

    Hi Gvk, have you tried testing whether port 1521 answers on that IP address? telnet xx.xxx.xxx.xxx 1521 If the test is successful, then the timeout may be caused by an antivirus. Bye
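If telnet is not available on the client machine, the same reachability check can be scripted. The sketch below is only an assumption about how one might test it: it uses a plain TCP connect with a placeholder address, and it only tells you whether something is listening on the port, not whether it is the Oracle listener.

```python
import socket

def port_is_open(host, port, timeout=5.0):
    """Try a plain TCP connect; True means something is listening."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example with the placeholder address from the post:
# port_is_open("xx.xxx.xxx.xxx", 1521)
```

If this returns True but SQL*Net still times out, the block is likely above the TCP layer (antivirus, firewall inspection, or listener configuration).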
  3. Hello, I'm experiencing a strange issue with the refresh procedure for a materialized view. The table on the master site is big (28 GB plus 20 GB of indexes), and a scheduled job on the view site launches a PL/SQL procedure that refreshes the view every minute. Both sites run Oracle Enterprise on HP-UX: the view site on HP-UX 11.23 and the master site on HP-UX 11.31.

     The procedure works well for most of the year, but from time to time the job takes a very long time to complete, and the longer it takes, the more the materialized view log grows, until it is no longer able to refresh all the changes, which is the situation right now. I ran some AWR reports to see whether anything strange was going on: on the day of the hang (09-23-2015) the DB Time on the mview site was 2400 minutes, against 60 minutes for the same hour's snapshot the day before (09-22-2015); another snapshot showed 4130 minutes against 60. So it does not look like a database workload problem.

     As additional information, our disks are not performant and we are preparing to migrate to a new server with 3PAR storage, which should improve performance. In the meantime, however, I have to recreate the view with ON PREBUILT TABLE, which takes more than 5 hours anyway.

     I would like to understand how to investigate this kind of issue, since it happens at random. I also ran AWR on the master site, but found nothing strange as far as my knowledge goes. I tried leaving a remote connection open for several hours and it kept working, so it is not a network connection problem. My feeling is that some event slows down the network from time to time and freezes the refresh job, but it is only a feeling and I cannot prove it. I'm open to any suggestion; thank you for the time dedicated. Best regards
  4. split and repeat rows based on a column value

    Hello Jordan, I don't know exactly what your scope is, but I think this can be useful for you:

    select substr(','||RIGA||','
                 ,instr(','||RIGA||',', ',', 1, rownum) + 1
                 ,instr(','||RIGA||',', ',', 1, rownum + 1)
                  - instr(','||RIGA||',', ',', 1, rownum) - 1) avalue
      from (select TRANSLATE('AB(Manager) CD(Manager) EF(Senior Supervisor) GH(Supervisor) IJ(Staff)', ' ', ',') RIGA
              from dual)
    connect by level <= length(RIGA) - length(replace(RIGA, ',')) + 1
    /

    Regards and good luck. Cor71
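As a side note, the translate-then-split idea in that query can be previewed outside the database. This little sketch mimics it in Python (and shows that multi-word names such as "Senior Supervisor" are split too, exactly as translating space to comma implies):

```python
# Mimic TRANSLATE(riga, ' ', ',') followed by the substr/instr row split.
riga = 'AB(Manager) CD(Manager) EF(Senior Supervisor) GH(Supervisor) IJ(Staff)'
translated = riga.replace(' ', ',')   # TRANSLATE ' ' -> ','
values = translated.split(',')        # one "row" per comma-separated token
for v in values:
    print(v)
```

If the multi-word split is unwanted, a different separator than the blank would have to be chosen before translating.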
  5. Oracle ODI 11g

    Good morning (here). I have a problem in ODI 11g loading a fixed-width file into an Oracle table. I configured the whole datastore in ODI, and when I do "view data" everything looks correct; the end of each record is marked as "0D0A". But when I run my load interface I get a message that my last field is bigger than the declared one. The file has a header of fields, and the last field is a data field of 2000 characters. I checked, and the length really is fixed, because it is a COBOL file coming from a mainframe. So it looks like ODI does not recognize the end of that field and reads on into the next one; I tried enlarging the limit, but the field is always too big, as if the file were shifting to the right. Have I forgotten some configuration somewhere? The file definition presents the end of record as the Microsoft hexadecimal \u000D\u000A; I tried all the combinations there, but no way to avoid this problem. If someone has an idea and can give me some help, I'll be thankful. Thank you in advance. Corrado
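One way to narrow this kind of problem down is to parse a sample record outside ODI and confirm where the 0D0A terminator really falls. The sketch below uses an invented field layout (10 + 5 + 20 characters), not the real COBOL copybook, so treat the names and offsets as placeholders; the point is only that if the CR/LF pair is counted as data, every later field appears shifted to the right.

```python
# Sketch: parse one fixed-width record, stripping the 0D0A terminator first.
# The layout (name 10 chars, code 5 chars, data 20 chars) is invented for
# illustration; substitute the real copybook offsets.
record = b"SMITH     00042SOME DATA FIELD     \r\n"

def parse(rec: bytes):
    rec = rec.rstrip(b"\r\n")          # drop the 0D0A end-of-record marker
    return {
        "name": rec[0:10].decode().rstrip(),
        "code": rec[10:15].decode(),
        "data": rec[15:35].decode().rstrip(),
    }

fields = parse(record)
```

If a script like this slices the sample file cleanly but ODI does not, the record-terminator setting in the file datastore is the first thing to re-check.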
  6. Oracle Data Integrator 11g

    Hello everyone, I have a problem with ODI 11g while loading a ";"-delimited file. The record is this one:

    Header:
    PIATTAFORMA; FUNZIONE; OPERAZIONE; SUB_OPERAZIONE; DESC_FUNZIONE; SYSTEM; SUBSYSTEM; DES_OPERAZIONE; PROGRESSIVO; TIPOLOGIA; COMMIT; LOGGATA; MOTIVO_ESCLUSIONE

    Data:
    UCAMP-PWS;CDBXF001;;;Inquiry su GEBA;CD;BX;;;Lettura;NO;NO;Funzione senza dati cliente

    The field DES_OPERAZIONE is empty for that set of records, but that is not a problem, because it is only a description. My target table is created with all VARCHAR2 columns except one NUMBER field (PROGRESSIVO in the example), which is empty too; still not a problem, since it is not a key or anything like that. What happens is that if the field DES_OPERAZIONE is empty, the record is simply not loaded, without any error or warning message. I used NVL on the field to say "if empty then 'ND'", but it doesn't work. The strange thing is that leaving the previous field empty doesn't matter, and if I use NVL on the previous field, it works. On the field DES_OPERAZIONE it does not! That drove me mad this morning. If someone can explain why this is happening, I'll be very glad. Thank you in advance. Bye, Cor71
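To rule out the file itself, the record can be re-parsed outside ODI. This sketch reads the same header and data line with a ";" delimiter (the spaces after ";" in the header are omitted here for brevity) and applies the "ND" default in code, just to show that the empty fields are perfectly parseable:

```python
import csv
import io

# Sketch: re-read the ';'-delimited record outside ODI to confirm the file
# is well-formed, applying an 'ND' default wherever a field is empty.
raw = ("PIATTAFORMA;FUNZIONE;OPERAZIONE;SUB_OPERAZIONE;DESC_FUNZIONE;SYSTEM;"
       "SUBSYSTEM;DES_OPERAZIONE;PROGRESSIVO;TIPOLOGIA;COMMIT;LOGGATA;"
       "MOTIVO_ESCLUSIONE\n"
       "UCAMP-PWS;CDBXF001;;;Inquiry su GEBA;CD;BX;;;Lettura;NO;NO;"
       "Funzione senza dati cliente\n")

reader = csv.DictReader(io.StringIO(raw), delimiter=';')
rows = [{k: (v if v else 'ND') for k, v in row.items()} for row in reader]
```

If the file parses cleanly like this, the silent drop is happening inside the interface flow (for example in the staging step), not in the source data.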
  7. Oracle Parallel Insert in Subpartitioned table

    Thank you very much for your fast answer; I found your tips and suggestions really good, and I'll try to convince my manager to proceed in some way. One example of the problems with that DB: a PL/SQL procedure that used to run in a few minutes (9 min) went, from one day to the next, to an hour and a half, and then back to a few minutes after a week. The DBA team was certainly tuning the DB for other problems, but I have to suppose that someone is trying to tune it without doing the right things. And it is not the only case like this: we create an index on a table precisely for the way we query the DB, and the DB ignores that index without explanation. The answer of our DBA team was: force the index you need. Once we did that with hints, long operations appeared; after that the DBA team gathered the table statistics and everything was good for a while. But this system is so unstable that in the end I am not able to understand exactly where the problem is. For me it is not normal that I have an index on a table, I query that table exactly the way the index was designed for, and the DB ignores it; I have never had to force a well-built index before. Anyway, thank you very much for the moment. How do I perform the step "2 - Run it through http://www.statspackanalyzer.com and see the bottleneck!" on our DB? Can it be done online, or do I have to install something? As you can imagine, I need special authorization to do it, but for me it would be a really good test to analyze our DB, so I'll suggest it. Thank you again
  8. Hi everyone, I'm writing just to discuss and collect more ideas than the ones I already have. At work we have a Linux server with Oracle, and to save time we need to use parallel processes launched in the background (with "&" or "at now"). The table we insert into is partitioned by date and subpartitioned by source file name, and we are having a lot of problems with this. Our DBA staff say it is impossible to work this way, but after ten years of Oracle work that sounds strange to me; maybe we are not using this procedure perfectly, but I assume Oracle is able to receive 20 concurrent insert requests on the same table. We used the same procedure on another server and it works; our DBA staff's answer is that here we have less data storage, and so on. In the end, sometimes these procedures run quickly and sometimes they do not, and this goes beyond my ability to find the problem.

     We have a DWH with a staging area and ODS, DDS, and DM levels. The staging area is partitioned by file_source; the ODS area is partitioned by range (date) and subpartitioned by name (file_source); the DDS area is partitioned by range (date) and subpartitioned by origin system (which includes all the file_sources); and the DM area is partitioned by date only. The biggest problems we meet are between the staging area and the ODS area, and between the ODS area and the DDS area. The most important thing is that the DDS table is a monster of nearly 500,000,000 rows (about 1 TB of data), even though we only look at the date when we process it. The solution is clear: split this table in two, one online and one for history, as a normal situation would correctly require. Unfortunately we inherited this situation from an old system, and at the moment the change request has not been approved. The really strange thing is that sometimes it works and sometimes it does not, without our understanding the cause.

     My opinion is that the DB is not correctly configured, but the system staff say that everything is correct and there is no problem. My first question is to understand, if possible, what the limit of this way of operating is: can I insert with twenty parallel processes at the same time, all writing to the same partition but different subpartitions? Is it correct to act this way to save time on the data load, or is it better to do it one by one? In my experience Oracle can manage a lot of requests at the same time (that is its job), but on this DB I keep seeing problems that make it feel as if we are using the tool in the wrong way... I'm out of ideas. Maybe we went beyond some limit, but in the end it is fewer than 5,000,000 records per day that we move, and I think a DWH has to support more than that. Thank you, everyone
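For what it's worth, the write pattern described above (twenty parallel loaders, same table, one subpartition each) can be modeled in a few lines. This is a plain in-memory sketch, not Oracle, so it says nothing about Oracle's locking, redo, or storage behavior; it only illustrates that the writers target disjoint (date, file_source) buckets and never overwrite each other.

```python
import concurrent.futures
import threading
from collections import defaultdict

# Sketch of the load pattern: 20 parallel workers writing into the same
# "table", each targeting its own (load_date, file_source) subpartition.
# A plain in-memory model, not Oracle; dates and file names are invented.
table = defaultdict(list)            # key: (load_date, file_source)
lock = threading.Lock()

def load_file(load_date, file_source, rows_in):
    with lock:                       # the database would do its own locking
        table[(load_date, file_source)].extend(rows_in)
    return len(rows_in)

with concurrent.futures.ThreadPoolExecutor(max_workers=20) as pool:
    futures = [pool.submit(load_file, "2015-09-23", f"FILE_{i:02d}",
                           [f"row_{i}_{j}" for j in range(1000)])
               for i in range(20)]
    total = sum(f.result() for f in futures)
```

At the application level this is unproblematic; whether it scales inside the database depends on things the model does not capture, such as index maintenance, ITL contention, and I/O bandwidth.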