
ks.sivakumar

Members
  • Content count

    100
  • Joined

  • Last visited

Community Reputation

0 Neutral

About ks.sivakumar

  • Rank
    Advanced Member
  1. ks.sivakumar

    RMAN/output log generating with zero bytes

    Thanks for your reply. After adding ". $HOME/.bash_profile" as the first line of the script, it is working fine now. Siva.
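A minimal sketch of why sourcing the profile fixes this: cron starts jobs with a stripped-down environment, so variables such as ORACLE_HOME that are set in ~/.bash_profile are invisible to the script unless it sources the profile itself. The profile path and ORACLE_HOME value below are hypothetical stand-ins, not taken from the original post.

```shell
#!/bin/sh
# Simulate cron's minimal environment: ORACLE_HOME is not set.
unset ORACLE_HOME

# Stand-in for $HOME/.bash_profile (hypothetical path and value).
profile=/tmp/demo_bash_profile
echo 'ORACLE_HOME=/u01/app/oracle/product/12.1.0; export ORACLE_HOME' > "$profile"

# Before sourcing, the script sees nothing (as it would under cron).
echo "before sourcing: ${ORACLE_HOME:-unset}"

# This mirrors putting ". $HOME/.bash_profile" as the first line of the
# backup script: the environment the RMAN commands need is now present.
. "$profile"
echo "after sourcing: $ORACLE_HOME"
```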
  2. ks.sivakumar

    RMAN/output log generating with zero bytes

    Hello Burleson, the crontab entries redirect output into log files as shown below; in my previous post I listed only the log files along with their sizes.

    00 02,08,16 * * * sh /app/RMAN_BACKUP/Backup_Script/RMAN_arch_bkp_siebel.sh > /app/RMAN_BACKUP/Backup_log/RMAN_arch_bkp_siebel_$(date +\%d_\%m_\%y-\%H-\%M-\%S).log
    30 20 * * * sh /app/RMAN_BACKUP/Backup_Script/RMAN_Backup.sh > /app/RMAN_BACKUP/Backup_log/RMAN_Backup_$(date +\%d-\%m-\%y-\%H-\%M-\%S).log
    #00 05,16 * * * sh /app/RMAN_BACKUP/Backup_Script/RMAN_arch_bkp_PRWH.sh > /app/RMAN_BACKUP/Backup_log/RMAN_arch_bkp_PRWH-$(date +\%d-\%m-\%y-\%H-\%M-\%S).log
    00 03,12 * * * sh /app/RMAN_BACKUP/Backup_Script/RMAN_arch_bkp_PRWH.sh > /app/RMAN_BACKUP/Backup_log/RMAN_arch_bkp_PRWH-$(date +\%d-\%m-\%y-\%H-\%M-\%S).log

    As per the above schedule, the 'PRWH' job should run twice, and each run should generate a log with content; instead one log has content and the other is zero bytes. Both logs are pasted below for reference, and the same happens for the other databases as well.

    -rw-r----- 1 oracle dba     0 09/19/18 RMAN_arch_bkp_PRWH-19-09-18-03-00-01.log
    -rw-r--r-- 1 oracle dba 57608 09/19/18 RMAN_arch_bkp_PRWH-19-09-18-03-18-59.log

    What could be the reason behind this issue?
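One hedged observation on the crontab entries above: a plain ">" redirection captures only stdout, so if a run fails before producing any normal output, its error messages go to stderr (which cron mails or discards) and the log file is left at zero bytes. Adding "2>&1" captures both streams. A small runnable demo (the log path is a hypothetical stand-in):

```shell
#!/bin/sh
log=/tmp/demo_cron.log

# With ">" alone, only stdout lands in the log; the stderr line is lost
# to the log file (under cron it would be mailed or discarded).
{ echo "normal output"; echo "some error" >&2; } > "$log"
echo "stdout-only capture: $(grep -c '' "$log") line(s) in log"

# With "> log 2>&1", both streams are captured, so even a failing run
# leaves evidence. In the crontab this would look like:
#   ... RMAN_arch_bkp_PRWH.sh > /app/RMAN_BACKUP/Backup_log/...log 2>&1
{ echo "normal output"; echo "some error" >&2; } > "$log" 2>&1
echo "full capture: $(grep -c '' "$log") line(s) in log"
```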
  3. ks.sivakumar

    ASM Diskgroup Creation Steps from LUN

    Hi Team, we requested two different 100 GB LUNs (200 GB total) from the storage team, and they have now updated us as follows:

    LUN Path: /vol/STFNNIODBS0102VOL03/STFNNIODBS01VOL03_qtree01/LUN03  Size: 100GB  Host mapped: STFNNIODBS02 and STFNNIODBS01  Status: Online and mapped
    LUN Path: /vol/STFNNIODBS0102VOL18/qtree01/LUN18  Size: 100GB  Host mapped: STFNNIODBS02 and STFNNIODBS01  Status: Online and mapped

    1) Since the above allocation has not yet been converted into a file system, the storage is not yet visible in our mounts. From here, what are the steps (step by step) needed to create two different diskgroups of 100 GB each, so that one diskgroup is allocated for archive logs and the other for future data growth?
    2) After creating the two diskgroups, I would like to point the archive log destination to the new diskgroup. Please advise the steps for this: where and which parameters do I need to change, and to what values?
    3) Assuming that in future we want to move 50 GB of storage from one diskgroup to the other, that should also be possible, shouldn't it?
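Once the OS/grid side is done (partition the LUNs and stamp them as ASM disks, e.g. via ASMLib or udev rules), the diskgroup creation and archive-destination change might look like the sketch below. The diskgroup names and device paths are hypothetical placeholders, not values from the post.

```sql
-- On the ASM instance (disk paths are hypothetical placeholders):
CREATE DISKGROUP ARCH_DG EXTERNAL REDUNDANCY
  DISK '/dev/oracleasm/disks/LUN03';

CREATE DISKGROUP DATA_DG2 EXTERNAL REDUNDANCY
  DISK '/dev/oracleasm/disks/LUN18';

-- On the database instance: point the archive destination at the new
-- diskgroup (or set db_recovery_file_dest instead, if using the FRA).
ALTER SYSTEM SET log_archive_dest_1 = 'LOCATION=+ARCH_DG' SCOPE=BOTH;
```

On question 3: ASM does not move raw space between diskgroups directly; the usual approach is to drop a disk from one diskgroup and add it to the other, letting ASM rebalance, provided each diskgroup retains enough free space for its contents.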
  4. ks.sivakumar

    DNS Configuration Details / Steps.

    Hi Team, as per my understanding we may need to configure a DNS server along with 3 SCAN IPs so that we can use the SCAN name in tnsnames.ora, which in turn will help load-balance user connections between the 2 nodes in our RAC cluster. Could you help by providing or referring me to some documents that explain how to configure this? Also, I believe that using both node VIPs in our TNS entries along with the LOAD_BALANCE option, which was supported in the earlier 11.0.2 version, would also help balance user connections between the 2 nodes, so please reply with the syntax for that as well. Apart from the above options, do we have any other way to balance user connections between the 2 nodes without using a DNS server?
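For the VIP-based alternative, a client-side tnsnames.ora entry with LOAD_BALANCE might look like the sketch below. The alias, hostnames, port, and service name are hypothetical placeholders.

```
MYRAC =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (LOAD_BALANCE = ON)
      (FAILOVER = ON)
      (ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = myservice)
    )
  )
```

With LOAD_BALANCE = ON the client picks an address at random (client-side balancing); server-side balancing based on actual node load still requires the listeners to receive load statistics, which is one of the reasons SCAN plus a clusterware-managed service is the usual recommendation.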
  5. Hi Team, in our production environment a cron job is scheduled twice a day for RMAN backup against a few of our DBs, with archive-log backup and purging as well. For some DBs, the first run of RMAN executes and its output log is generated with content (backup, archive purging, etc.), but on the second scheduled run of the same cron job, RMAN executes and the output log is zero bytes (an empty log file is generated). Why? Is this scheduled cron job not executing at all, is it a log-generation issue, or is it some other technical issue? FYI, the log files along with their sizes are pasted below for understanding.

    -rw-r----- 1 oracle dba     0 09/17/18 RMAN_arch_bkp_PRINFREP-17-09-18-03-30-01.log
    -rw-r----- 1 oracle dba     0 09/17/18 RMAN_arch_bkp_PRWH-17-09-18-03-00-01.log
    -rw-r--r-- 1 oracle dba 51892 09/17/18 RMAN_arch_bkp_PRWH-17-09-18-03-28-15.log

    I hope my query is clear. Please help me resolve the above ASAP. Thanks in advance.
  6. ks.sivakumar

    Archive log generation more than usual

    Many Thanks for your reply.
  7. Hi Team, in our production database archive logs are being generated more than usual; in earlier days archive log generation was much lower than it is now. Using queries I can identify how many archive logs were generated during a particular interval of time, but I need to know which particular process is generating them, and also whether there is any setting in Oracle that can reduce the volume of archive logs generated. Could you guys help me with this issue? Siva.
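A hedged sketch of two standard queries for this kind of investigation: the first shows hourly archive volume from v$archived_log, and the second ranks current sessions by the 'redo size' statistic to suggest which processes are generating the redo. Neither is authoritative for historical attribution (LogMiner would be needed for that), and column arithmetic gives only an approximate volume.

```sql
-- Hourly archive generation over the last 7 days (approximate MB).
SELECT TRUNC(completion_time, 'HH24')                AS hour,
       COUNT(*)                                      AS archives,
       ROUND(SUM(blocks * block_size) / 1024 / 1024) AS mb
FROM   v$archived_log
WHERE  completion_time > SYSDATE - 7
GROUP  BY TRUNC(completion_time, 'HH24')
ORDER  BY 1;

-- Top 10 current sessions by redo generated since logon.
SELECT *
FROM  (SELECT s.sid, s.username, s.program, t.value AS redo_bytes
       FROM   v$sesstat  t
       JOIN   v$statname n ON n.statistic# = t.statistic#
       JOIN   v$session  s ON s.sid = t.sid
       WHERE  n.name = 'redo size'
       ORDER  BY t.value DESC)
WHERE ROWNUM <= 10;
```

On reducing volume: redo reflects the work the database does, so the usual levers are application-side (batching, avoiding unnecessary updates, NOLOGGING for eligible bulk operations where recoverability allows), not an instance parameter.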
  8. Hello Guru, when logging into the database through the server, Toad, or an application connection, it takes almost 1 minute to get into the DB, even though there are no heavy/expensive transactions in the DB and no CPU pressure. FYI, SGA is 5 GB and PGA is 1 GB. Moreover, on the same server we have 3 other databases, and all of them connect immediately via sqlplus, Toad, or the application. Why does this slowness happen for DB connections? Some job processing is also very slow, and some modules are timing out. Could you please guide us on where to troubleshoot the problem? FYI, physical memory of the server is Memory: 130938 MB (127.87 GB). I analysed the alert log as well and do not see any ORA- errors. Please guide on the above ASAP. Siva.
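A hedged first check when exactly one database on a healthy server connects slowly: database-level LOGON triggers and connect auditing are common culprits, and both are visible in the dictionary. This is a starting point, not a diagnosis.

```sql
-- Any LOGON triggers that run on every connection?
SELECT owner, trigger_name, status
FROM   dba_triggers
WHERE  triggering_event LIKE '%LOGON%';

-- Is connect auditing enabled (adds per-logon overhead)?
SELECT value FROM v$parameter WHERE name = 'audit_trail';
```

If neither explains it, comparing a local bequeath connection on the server ("sqlplus / as sysdba") with a listener connection can separate database-side delay from network/DNS resolution delay, and a SQL trace of a fresh session would show where the minute is spent.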
  9. Hi guys, I have a big database that is not in sync with its standby. For quite a few days archive logs have been missing on the primary, and we do not have enough space at the OS level on the primary to keep RMAN backup sets. Hence we need to take an incremental backup of a set of (a few) datafiles from the primary and restore it into the standby, then take another set of incremental backups from the primary and restore those on the standby, and then perform recovery on the standby to bring it in sync using RMAN. Could you please guide me through this process step by step with syntax? Also, is there any other approach for my situation (limited OS-level storage, not able to keep incremental backups of all the datafiles)? Please advise those steps as well, and also how I can estimate the backup space requirement. Thanks for your valuable support. Siva.
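A hedged outline of the standard roll-forward technique for this situation (SCN-based incremental backup). The SCN, paths, and names below are placeholders, and the exact steps should be validated against the documentation for your version:

```sql
-- 1) On the standby: find the SCN to roll forward from.
SELECT TO_CHAR(current_scn) FROM v$database;

-- 2) On the primary, in RMAN: take an incremental backup from that SCN.
--    It can be split into subsets of datafiles if space is tight.
-- RMAN> BACKUP INCREMENTAL FROM SCN 1234567 DATABASE
--         FORMAT '/app/RMAN_BACKUP/fwd_%U';

-- 3) Transfer the pieces to the standby, then in RMAN connected to the
--    standby:
-- RMAN> CATALOG START WITH '/app/RMAN_BACKUP/';
-- RMAN> RECOVER DATABASE NOREDO;

-- 4) A fresh standby controlfile from the primary is usually also
--    restored before restarting managed recovery.
```

On space estimation: the incremental only contains blocks changed since the SCN, so the sum of the datafile sizes is an upper bound; with block change tracking enabled the actual size is typically far smaller, but there is no exact prediction before the backup runs.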
  10. Hi Team, I need a query to find out how many days of lag exist for my standby against the primary DB. Also, on the primary, due to a space constraint, we force-deleted some archive logs without shipping them to the standby. Please provide step-by-step instructions on how to bring my standby DB in sync with the primary DB. Thanks in advance. Siva.
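Two hedged sketches for measuring the lag, both run on the standby:

```sql
-- Data Guard's own view of the lag:
SELECT name, value
FROM   v$dataguard_stats
WHERE  name IN ('apply lag', 'transport lag');

-- Or the highest applied log sequence per thread, to compare with
-- the primary's current sequence:
SELECT thread#, MAX(sequence#) AS last_applied
FROM   v$archived_log
WHERE  applied = 'YES'
GROUP  BY thread#;
```

Note that once archive logs were deleted on the primary before shipping, the gap cannot be filled from archives; an SCN-based incremental backup of the primary restored to the standby (or a standby rebuild) is the usual way to re-synchronize.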
  11. Hi Techgigs, we have only an 8-day retention period for AWR, but v$diag_alert_ext still has error-log data from 3rd Apr 2015 to date. 1) Since v$diag_alert_ext has millions of records, is there any way to retain only 10 to 15 days of error-log data in v$diag_alert_ext, so that whatever query we run against it does not consume excessive system resources? 2) In a RAC environment, is there any specific metric to determine how many SCAN listeners should be configured? For example, with a 2-node RAC, should we configure 2 SCAN listeners or 3? In our production environment node1 has 2 SCAN listeners configured but node2 has only one, and I don't know why. Please guide on the above. Thanks in advance, Siva.
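On question 1, a hedged sketch: v$diag_alert_ext reads the XML alert log in the ADR, which AWR retention does not control; the ADRCI purge policies do. The homepath below is a hypothetical placeholder, and the exact policy names should be checked against your version's documentation.

```
adrci> show homes
adrci> set homepath diag/rdbms/prwh/PRWH
adrci> set control (SHORTP_POLICY = 360, LONGP_POLICY = 360)
adrci> purge -age 21600 -type ALERT
```

The SHORTP_POLICY/LONGP_POLICY values are in hours (360 hours = 15 days), while purge -age is in minutes (21600 minutes = 15 days). On question 2, SCAN is a cluster-wide resource: there are normally 3 SCAN listeners for the whole cluster regardless of node count, and clusterware distributes them across the available nodes, which is why an uneven spread (2 on one node, 1 on the other) is expected on a 2-node cluster.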
  12. ks.sivakumar

    sql bind variable History

    Thanks a lot Burleson. Siva...
  13. ks.sivakumar

    sql bind variable History

    Hi Team, I ran an AWR report and from it got the sql_id causing the most I/O (the SQL also ran this morning between 7:30 and 8:30 AM with the same sql_id). Now I am trying to find the bind variable values that SQL used from gv$sql_bind_capture, but the values are not available in that view. FYI, we have 14 days of AWR retention. Is there any way to find the bind variable values from other dictionary views, or by other means? Please guide me on the above. Thanks in advance, Siva.
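A hedged sketch: AWR persists captured binds in DBA_HIST_SQLBIND, so values aged out of gv$sql_bind_capture may still be there within the 14-day retention. The &sql_id substitution variable is a placeholder for the sql_id from the AWR report.

```sql
-- Captured bind values persisted by AWR snapshots for one statement:
SELECT snap_id, position, name, datatype_string, value_string, last_captured
FROM   dba_hist_sqlbind
WHERE  sql_id = '&sql_id'
ORDER  BY snap_id, position;
```

Worth noting: bind capture is sampled (by default roughly every 15 minutes per cursor), so not every execution's bind values will have been recorded.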
  14. Hi Burleson, thanks for the update. Since ANALYZE TABLE is deprecated, just for identification: if some tables are still being analyzed with it, we need to move them to the Oracle-recommended method, the DBMS_STATS.GATHER_SCHEMA_STATS package. Siva.
  15. Hi, is there any column in the Oracle data dictionary that shows whether the statistics on a table were collected with ANALYZE TABLE or with the Oracle-recommended DBMS_STATS.GATHER_SCHEMA_STATS package? Thanks in advance, Siva.
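There is no single dictionary flag that records which method was used, but a commonly cited heuristic is that DBMS_STATS does not populate the ANALYZE-only columns AVG_SPACE, CHAIN_CNT, and EMPTY_BLOCKS in DBA_TABLES, so non-zero values there alongside a LAST_ANALYZED date suggest ANALYZE was used at some point. A hedged sketch (the schema name in the refresh call is a placeholder):

```sql
-- Tables whose stats look ANALYZE-sourced (heuristic, not authoritative):
SELECT owner, table_name, last_analyzed, avg_space, chain_cnt, empty_blocks
FROM   dba_tables
WHERE  last_analyzed IS NOT NULL
AND   (NVL(avg_space, 0) > 0 OR NVL(chain_cnt, 0) > 0)
AND    owner NOT IN ('SYS', 'SYSTEM');

-- Refresh any such tables with the recommended method:
-- EXEC DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'SCOTT', cascade => TRUE);
```

Because ANALYZE-era values persist until the segment is reorganized, this identifies tables ever analyzed, not necessarily the most recent method.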