Monthly Archives: June 2016

ORA-16016: archived log for thread 2 sequence# 221264 unavailable

Wed Jun 15 09:20:59 2016
alter database open
AUDIT_TRAIL initialization parameter is changed to OS, as DB is NOT compatible for database opened with read-only access
Signalling error 1152 for datafile 1!
Beginning Standby Crash Recovery.
Serial Media Recovery started
Managed Standby Recovery starting Real Time Apply
Media Recovery Log /u01/archivelog/1_221255_847215117.dbf
Media Recovery Waiting for thread 2 sequence 221264
Fetching gap sequence in thread 2, gap sequence 221264-221264
Wed Jun 15 09:22:01 2016
Standby crash recovery need archive log for thread 2 sequence 221264 to continue.
Please verify that primary database is transporting redo logs to the standby database.
Wait timeout: thread 2 sequence 221264
Standby Crash Recovery aborted due to error 16016.
Errors in file /u01/app/oracle/diag/rdbms/agdc170/agdc/trace/agdc_ora_29744.trc:
ORA-16016: archived log for thread 2 sequence# 221264 unavailable
Recovery interrupted!
Completed Standby Crash Recovery.
Signalling error 1152 for datafile 1!
Errors in file /u01/app/oracle/diag/rdbms/agdc170/agdc/trace/agdc_ora_29744.trc:
ORA-10458: standby database requires recovery
ORA-01152: file 1 was not restored from a sufficiently old backup
ORA-01110: data file 1: '/u01/app/oracle/oradata/agdc/system01.dbf'
ORA-10458 signalled during: alter database open ...
SQL> select process,status,pid from v$managed_standby;

PROCESS STATUS PID
--------- ------------ ----------
ARCH CLOSING 15621
ARCH CLOSING 15625
ARCH CONNECTED 15627
ARCH CLOSING 15629
RFS IDLE 29800
RFS IDLE 29797
MRP0 WAIT_FOR_GAP 29862

7 rows selected.
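At this point it can also help to confirm the highest sequence applied on each thread; a standard diagnostic query (not part of the original output) would be something like:

SQL> select thread#, max(sequence#) last_applied
     from v$archived_log
     where applied = 'YES'
     group by thread#;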

Wed Jun 15 10:30:03 2016
Managed Standby Recovery not using Real Time Apply
Parallel Media Recovery started with 16 slaves
Waiting for all non-current ORLs to be archived…
All non-current ORLs have been archived.
Media Recovery Log /u01/archivelog/1_221255_847215117.dbf
Media Recovery Waiting for thread 2 sequence 221264
Fetching gap sequence in thread 2, gap sequence 221264-221264
Completed: alter database recover managed standby database disconnect from session
Wed Jun 15 10:31:57 2016
FAL[client]: Failed to request gap sequence
GAP - thread 2 sequence 221264-221264
DBID 3425128408 branch 847215117
FAL[client]: All defined FAL servers have been attempted.
------------------------------------------------------------
Check that the CONTROL_FILE_RECORD_KEEP_TIME initialization
parameter is defined to a value that's sufficiently large
enough to maintain adequate log switch information to resolve
archivelog gaps.
------------------------------------------------------------
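One way to confirm and clear such a gap manually is sketched below. This is only an outline: the archived log name is constructed from the naming pattern seen in the log above, and the file would first have to be copied over from the primary (or restored from an RMAN backup) before being registered:

SQL> select thread#, low_sequence#, high_sequence# from v$archive_gap;
SQL> alter database register logfile '/u01/archivelog/2_221264_847215117.dbf';
SQL> alter database recover managed standby database disconnect from session;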

DataPump export of SYS.AUD$ fails with ORA-39166 and ORA-31655

SQL> create directory DIR_BACKUP as '/u01/backup';
Directory created.
[oracle@test143 ]$ expdp '/ as sysdba' tables=aud$ DIRECTORY=DIR_BACKUP dumpfile=aud0614

Export: Release 11.2.0.4.0 - Production on Tue Jun 14 11:29:38 2016

Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SYS"."SYS_EXPORT_TABLE_01": "/******** AS SYSDBA" tables=aud$ DIRECTORY=DIR_BACKUP dumpfile=aud0614
Estimate in progress using BLOCKS method...
Total estimation using BLOCKS method: 0 KB
ORA-39166: Object SYS.AUD$ was not found.
ORA-31655: no data or metadata objects selected for job
Job "SYS"."SYS_EXPORT_TABLE_01" completed with 2 error(s) at Tue Jun 14 11:29:41 2016 elapsed 0 00:00:03

DataPump export of table SYS.AUD$ fails with errors:

ORA-39165: Schema SYS was not found.
ORA-39166: Object AUD$ was not found.
ORA-31655: no data or metadata objects selected for job
Job "SYS"."SYS_EXPORT_TABLE_01" completed with 3 error(s) at 13:49:47

There is a restriction on Data Pump export: it cannot export schemas such as SYS, ORDSYS, EXFSYS, MDSYS, DMSYS, CTXSYS, ORDPLUGINS, LBACSYS, XDB, SI_INFORMTN_SCHEMA, DIP, DBSNMP and WMSYS in any mode. The Utilities Guide documents the restriction only for full export mode, but it actually applies to all modes.

Export the table SYS.AUD$ using the traditional export utility instead; for example (connecting as SYSDBA here is just one option):

$ exp \"/ as sysdba\" file=dumpfile.dmp log=logfile.log tables=sys.aud$
Other options are:

- Use CTAS (Create Table As Select) to copy the SYS.AUD$ data into a non-restricted schema and export the copy instead (see the sketch after this list).

- Use the DBMS_AUDIT_MGMT package of Audit Vault to manage and purge audit data (see Note 731908.1). This provides the facility to move the AUD$ table out of the SYSTEM tablespace, which can remove the need to export the table.
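A sketch of the CTAS workaround, using made-up schema and table names (the target schema just needs to be outside the restricted list above):

SQL> create table scott.aud_copy_0614 as select * from sys.aud$;
$ expdp '/ as sysdba' tables=SCOTT.AUD_COPY_0614 DIRECTORY=DIR_BACKUP dumpfile=aud_copy_0614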

NOTE: This issue also applies to other SYS-owned auditing tables, such as FGA_LOG$.

References

BUG:5647223 – DATAPUMP EXPORT FAILS ORA-31655
NOTE:731908.1 – New Feature DBMS_AUDIT_MGMT To Manage And Purge Audit Information

SQL taking very long time to execute

We have a SQL query that is taking a very long time to execute, and below are the details of the SQL.
Currently I have included the /*+ gather_plan_statistics */ hint to collect actual and estimated cardinalities, but the SQL is still running after about an hour and is waiting on the db file sequential read wait event.

How can I find the actual and estimated cardinality for this SQL without executing it, as it is taking a very long time to complete?

1. Re: SQL taking very long time to execute

YasserRACDBA wrote:
Hi All,

We have a SQL query that is taking a very long time to execute, and below are the details of the SQL. Currently I have included the /*+ gather_plan_statistics */ hint to collect actual and estimated cardinalities, but the SQL is still running after about an hour and is waiting on the db file sequential read wait event.

How can I find the actual and estimated cardinality for this SQL without executing it, as it is taking a very long time to complete?

-Yasser

Hopefully, the query has completed by now. You have at least one implicit (character set to character set) data type conversion happening, as indicated by the SYS_OP_C2C function in the predicates. I have not yet determined the meaning of the U in the Predicate Information section. If you know the SQL_ID for the SQL statement, you might be able to monitor the view V$SQL_PLAN_STATISTICS_ALL to determine the actual cardinality to this point (not verified).
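For example, a query along these lines (only a sketch: &sql_id is a placeholder, and the actual-rows columns are populated only when the statement runs with the gather_plan_statistics hint or STATISTICS_LEVEL set to ALL) compares per-operation estimated and actual rows while the cursor is still in the shared pool:

SQL> select id, operation, options, object_name,
            cardinality      estimated_rows,
            last_output_rows actual_rows
     from   v$sql_plan_statistics_all
     where  sql_id = '&sql_id'
     order  by child_number, id;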

Implicit Character Set to Character Set References:
http://jonathanlewis.wordpress.com/2007/07/29/nls/
http://joze-senegacnik.blogspot.com/2009/12/what-is-purpose-of-sysopc2c-internal.html

The A, B, C, D aliases for the tables really make it hard to understand the SQL statement and the execution plan; please see:
http://jonathanlewis.wordpress.com/2010/01/26/aliases/

Charles Hooper
Co-author of "Expert Oracle Practices: Oracle Database Administration from the Oak Table"
http://hoopercharles.wordpress.com/
IT Manager/Oracle DBA
K&M Machine-Fabricating, Inc.

2. Re: SQL taking very long time to execute

I was eagerly waiting for someone's response; thanks for getting involved in this thread.

You are correct, implicit conversion is happening, as indicated by the SYS_OP_C2C function; the NVARCHAR data is getting converted according to the CM_CACHE_BROKER table definition. But I am totally confused by the meaning of "U" in the predicate section; do you have any idea on this?

Unfortunately, the query was interrupted, and I am executing it again. I will monitor V$SQL_PLAN_STATISTICS_ALL during the next execution.

I will post the details once the query completes. But what if the query takes an abnormal time, like one or two days? How do I find the actual and estimated cardinality? How do I perform Cardinality Feedback Tuning?

3. Re: SQL taking very long time to execute

YasserRACDBA wrote:
I was eagerly waiting for someone's response; thanks for getting involved in this thread.

You are correct, implicit conversion is happening, as indicated by the SYS_OP_C2C function; the NVARCHAR data is getting converted according to the CM_CACHE_BROKER table definition. But I am totally confused by the meaning of "U" in the predicate section; do you have any idea on this?
Yasser,

The 'U' means the column is of NVARCHAR2 (or NCHAR) type and the passed string is a VARCHAR2. The above means D.TAG_LIST_CODE is of NVARCHAR2 type, whereas B.TAG_VALUE and A.PARENT_BROKER_CODE are of NVARCHAR2 and VARCHAR2 (or the corresponding NCHAR and CHAR) datatypes respectively.

4. Re: SQL taking very long time to execute

I’ve not seen this U notation before.

I would guess that, where the N notation indicates that the following string literal is an NCHAR string literal, the U would indicate Unicode?
However, unlike N (and the Q notation for quotes), I have not seen this U documented anywhere.

Maybe there is a similar distinction between U and UNISTR?

Confirmation is not helped by the inconsistency of the terms used to describe N, which is referred to in the official documentation as a notation, a function, and the "nchar capability".
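For illustration, a small test case along these lines (the table and literal are made up here, and the exact Predicate Information text can vary by version) tends to reproduce both SYS_OP_C2C and the U notation:

SQL> create table t_c2c_demo ( v_col varchar2(30) );
SQL> explain plan for select * from t_c2c_demo where v_col = N'ABC';
SQL> select * from table(dbms_xplan.display);
-- the Predicate Information section then typically shows something like:
--   filter(SYS_OP_C2C("V_COL")=U'ABC')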

5. Re: SQL taking very long time to execute

YasserRACDBA wrote:
I will post the details once the query completes. But what if the query takes an abnormal time, like one or two days? How do I find the actual and estimated cardinality? How do I perform Cardinality Feedback Tuning?
I am sure Charles will be able to guide you on Cardinality Feedback Tuning, but you may want to try executing the query without the ORDER BY clause to see if it completes in a reasonable time. If it does, you have two leads to follow:
1) You can use the GATHER_PLAN_STATISTICS hint on the query without the ORDER BY and (hopefully) it will also complete in a reasonable time (see the sketch after this list), and
2) You will have identified that the SORT operation is one of the culprits for the poor performance.
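A sketch of the GATHER_PLAN_STATISTICS approach referred to in lead 1) above (the two statements must run in the same session; the E-Rows and A-Rows columns in the output then give the estimated and actual cardinality per plan step):

SQL> select /*+ gather_plan_statistics */ ... your query without the ORDER BY ...
SQL> select * from table(dbms_xplan.display_cursor(null, null, 'ALLSTATS LAST'));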

Your EXPLAIN PLAN suggests that your SORT will be spilling to disk (although I am not sure why sorting about 7K rows should not be possible as a completely in-memory sort).

p.s. I had a second look at my post (should have done that before posting) and I guess the ORDER BY argument is not valid, since the plan does not have an explicit ORDER BY step but a SORT GROUP BY step. But I would still be interested in finding out:
a) why did Oracle use SORT GROUP BY instead of HASH GROUP BY?
b) your SORT GROUP BY step and the HASH JOIN step just below it are expected to generate the same number of rows. This looks unusual, and the only reasons for it that I can think of are:
i) some tables do not have up-to-date statistics, OR
ii) the GROUP BY in your query does not matter (i.e., it does not actually reduce the row count)

You may want to look into this.

Edited by: user503699 on Feb 2, 2010 7:57 PM

p.s. One more update: it turns out ORDER BY does change things. It seems the answer to my first question above is that Oracle decides to use SORT GROUP BY in order to avoid the additional SORT operation needed by the ORDER BY clause. I remember reading somewhere that the HASH GROUP BY operation is more efficient than the SORT GROUP BY operation, so you may indeed want to test the query without the ORDER BY clause to see if things change (for the better). I know you need the ORDER BY clause, but if you can run the query without it in a reasonable time, then you can at least perform Cardinality Feedback Tuning using the GATHER_PLAN_STATISTICS output.
(And now I will shut up and not edit this post 🙂 )

Edited by: user503699 on Feb 2, 2010 8:17 PM


More Netezza Questions and Answers (Part II)

This is an extension of my previous post (“Netezza Questions and Answers”).

What data type is most suited for zone maps?
Zone maps are typically useful for integer, date, and time data types and their variations. Zone maps are most useful for ordered data, where the ordering is usually built into the data as it is loaded; for example, phone call logs. [Also check more on zone maps in my original post.]

How can we find log (SQL) activity for a given day?
We can find this in the /nz/kit.x.y/log/postgres/pg.log file. Older files are named pg.log.N (where N starts from 1; pg.log itself is the most recent file). Assuming the days we are looking for are covered by pg.log, we may run
$ cat pg.log | sed -n "/2010-02-01 00/,/2010-02-05 23:59/p" > pg.firstweekFeb2010.log
If this produces no data, look for the corresponding older log file based on the last update timestamp (ls -ltr sorts them in reverse timestamp order).

Two important bin locations for Netezza are /nz/kit.x.y/sbin and /nz/kit.x.y/bin.
Some configuration files are located in the /nz/data directory.

How can we remove formatting with NZSQL?
By default, nzsql shows formatted (aligned) text output. In cases in which we do not need the white space, use the "nzsql -A" option. For example, nzsql -A -c "select count(*) from hertz.daily_bookings;" lets us run a SQL command directly without logging into nzsql interactively.

Is there a way to stop an nzsql run if one of the SQL commands fails?
Yes; a similar feature exists in other databases too. Set ON_ERROR_STOP=true on the nzsql command so that the remaining commands do not get executed. For instance, suppose we want to create a new table (CTAS) and later drop the old table. If the new table creation fails, we certainly do not want to drop the old table; this option is very useful in that case.
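A minimal sketch of how this might look (the database and script names are made up, and nzsql follows the psql-style -v variable assignment, so check the exact syntax on your release):

$ cat rebuild_bookings.sql
create table daily_bookings_new as select * from daily_bookings;
drop table daily_bookings;
$ nzsql -d hertz -v ON_ERROR_STOP=1 -f rebuild_bookings.sql

If the CTAS fails, nzsql stops before reaching the DROP TABLE.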
Can we access data in other databases with the same NZSQL?
This depends on the version. Version 5 and upwards supports selects against database.owner.table, provided the user has the same login and password access. Inserts are allowed into the current database using data from other databases, but not the other way around (meaning we cannot insert into another database from the database where we are logged in).
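For example (the database, schema and table names here are hypothetical), while connected to the current database we could pull data in from another database, but not push data out to it:

insert into daily_bookings_archive
select * from archive_db.admin.daily_bookings_2009;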

How can we view plans and the corresponding CPP files?
We can just run 'explain' on a query to see what the plan looks like. At run time, plans are created under /nz/data/plans, where we will see the plans generated for each query. The corresponding CPP code is located under /nz/data/cache/.
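For example, a quick way to look at a plan without actually running the query (table name is hypothetical, and the verbose option is assumed to be available on your release):

$ nzsql -d hertz -c "explain verbose select count(*) from daily_bookings;"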
