Showing results for tags 'sql'.

Found 4 results

  1. sundara

    Huge SQL data Insert

    I have a couple of tables that I need to use to load a third table. The total number of rows to load comes to around 30 million. My insert query is an INSERT-INTO-<TABLE>-SELECT. The INSERT hangs (I waited 30 minutes before killing the session). But when I added ROWNUM to the SELECT, with the ROWNUM value equal to the total number of result rows, the INSERT ran in 43 seconds, and both runs produce consistent results. I am completely lost here. Can you please share your expert opinion on why the INSERT without ROWNUM hangs while the INSERT with ROWNUM runs so much faster?

    Details:

    TABLE A: 50,000,000 rows in total; no index.
    TABLE B: 200,000 rows in total; index on COLUMN1 (the relation to TABLE A is one to many).
    TABLE C: output table.

    Original INSERT query:

    INSERT INTO C
    SELECT COLUMN1 FROM A
    WHERE COLUMN2 IN (SELECT COLUMN1 FROM B);

    INSERT query with ROWNUM:

    SELECT COUNT(*) FROM A
    WHERE COLUMN2 IN (SELECT COLUMN1 FROM B);  -- no issues with this query

    INSERT INTO C
    SELECT COLUMN1 FROM A
    WHERE COLUMN2 IN (SELECT COLUMN1 FROM B)
    AND ROWNUM <= 30000000;
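
    One low-risk way to investigate this is to compare the optimizer plans of the two statements, since a ROWNUM predicate often changes the join strategy the optimizer picks. A minimal diagnostic sketch, assuming EXPLAIN PLAN and DBMS_XPLAN are available and reusing the table and column names from the post:

    EXPLAIN PLAN FOR
    INSERT INTO C
    SELECT COLUMN1 FROM A
    WHERE COLUMN2 IN (SELECT COLUMN1 FROM B);

    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

    EXPLAIN PLAN FOR
    INSERT INTO C
    SELECT COLUMN1 FROM A
    WHERE COLUMN2 IN (SELECT COLUMN1 FROM B)
    AND ROWNUM <= 30000000;

    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);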
  2. Hello... I want to change "stale_percent" to 3 and "concurrent" to true, but I need to test it first. I need to improve statistics gathering on very large, fast-changing tables. What is the best way to check, before and after the change is set, whether it makes any difference? I read that I need to check the parameters JOB_QUEUE_PROCESSES and parallel_adaptive_multi_user before I set "concurrent". Could you advise what values are safe when setting it? Thank you in advance.
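
    A minimal before/after sketch for testing this, assuming the DBMS_STATS preference calls and access to v$parameter; the owner and table names (APP_OWNER, BIG_TABLE) are placeholders, not from the post:

    -- current preference values, before any change
    SELECT DBMS_STATS.GET_PREFS('STALE_PERCENT', 'APP_OWNER', 'BIG_TABLE') AS stale_percent,
           DBMS_STATS.GET_PREFS('CONCURRENT') AS concurrent
    FROM dual;

    -- instance parameters mentioned in the post
    SELECT name, value
    FROM v$parameter
    WHERE name IN ('job_queue_processes', 'parallel_adaptive_multi_user');

    -- proposed change: STALE_PERCENT scoped to one large table for testing,
    -- CONCURRENT set globally (it is a global preference)
    BEGIN
      DBMS_STATS.SET_TABLE_PREFS('APP_OWNER', 'BIG_TABLE', 'STALE_PERCENT', '3');
      DBMS_STATS.SET_GLOBAL_PREFS('CONCURRENT', 'TRUE');
    END;
    /

    After a gathering run, comparing elapsed times and LAST_ANALYZED in DBA_TAB_STATISTICS before and after the change is one simple way to see whether it makes a difference.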
  3. A+++500+read++ I want the occurrence of the last '+', or in other words the total count of '+' before the last value 'read', through a PL/SQL query. The first value is 'A', the second value is '500', the third value is 'read'. Here the last value is 'read', and the '+' at which the last value 'read' occurs is the fourth one, so the output I want from the query is 4: the ordinal of that '+', in other words the total count of '+' before the last value 'read'. Second, I want the total number of values that occur between the '+' characters. Here there are three values, namely A, 500 and read; I want the count of how many values exist between the '+' characters, so the output should be 3.
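
    One possible way to get both numbers, shown only as a sketch against the sample string from the post and using Oracle regular-expression functions (11g or later):

    WITH t AS (SELECT 'A+++500+read++' AS str FROM dual)
    SELECT REGEXP_COUNT(
             SUBSTR(str, 1, REGEXP_INSTR(str, '[^+]+\+*$') - 1),  -- text before the last value: 'A+++500+'
             '\+') AS plus_count_before_last_value,               -- expected 4
           REGEXP_COUNT(str, '[^+]+') AS value_count              -- expected 3
    FROM t;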
  4. Hello everyone. Please suggest a way to handle this. I have a CSV file which I am uploading to a table using UTL_FILE. This activity runs on a daily basis; the file name stays the same but the data in it changes (insertions or updates). I need to record the particular row, with the required columns, that was changed by the CSV file. I created a procedure and used the statement below, which works for the upload part:

    merge into test using dual on (Id = v_Id)
    when not matched then
      insert (Id, name, address) values (v_Id, v_name, v_address)
    when matched then
      update set name = v_name, address = v_address;

    To record the updates I used a trigger on this table (test):

    create or replace trigger test_trg
    after update on test
    for each row
    begin
      if updating then
        insert into test_log (name, address) values (:new.name, :new.address);
      end if;
    end;
    /

    The trouble I am facing is that instead of recording only the single record that was changed in table test by the UTL_FILE upload, it records the whole table's values for the required columns. The trigger works fine if I manually edit or update the test table, but with the UTL_FILE upload procedure the trigger records the whole table's data for the required columns.
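
    If the MERGE re-updates every matched row even when nothing has changed, an AFTER UPDATE row trigger will fire for all of those rows. A sketch of one way to limit the log to real changes, assuming the same test and test_log columns as in the post (the '~' sentinel in NVL is only a simple placeholder for NULL handling):

    create or replace trigger test_trg
    after update on test
    for each row
    begin
      -- log only when a tracked column actually changed
      if nvl(:new.name, '~') <> nvl(:old.name, '~')
         or nvl(:new.address, '~') <> nvl(:old.address, '~') then
        insert into test_log (name, address) values (:new.name, :new.address);
      end if;
    end;
    /

    An alternative along the same lines would be to add a WHERE clause to the MERGE's UPDATE branch so that rows are only updated when a value really differs.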