2025-01-16 Update From: SLTechnology News&Howtos > Database
Shulou (Shulou.com) 06/01 Report --
1. A while back I installed a production Oracle database for colleagues in Anhui. Recently the database connection keeps dropping between 2:00 and 10:00. The Tomcat application log shows:
08:58:09 ERROR c.d.web.controller.DBAppController - query update version request exception org.springframework.dao.DataAccessResourceFailureException:
# Error querying database. Cause: java.sql.SQLException: Io exception: Connection timed out
# The error may exist in file [/usr/local/tomcat/xx/WEB-INF/classes/mapper/DBAppMapper.xml]
# The error may involve com.dabay.web.dao.DBAppDao.selectProperties-Inline
# The error occurred while setting parameters
# SQL: SELECT KEY,VALUE,DESCRIPTION FROM APP_PROPERTIES WHERE KEY=? AND STATUS = '9'
# Cause: java.sql.SQLException: Io exception: Connection timed out
; SQL []; Io exception: Connection timed out; nested exception is java.sql.SQLException: Io exception: Connection timed out
08:58:09 ERROR c.d.web.controller.DBAppController - DGW_0922084243406: query broadcast image request exception org.springframework.dao.DataAccessResourceFailureException:
# Error querying database. Cause: java.sql.SQLException: Io exception: Connection timed out
# The error may exist in file [/usr/local/tomcat/xx/WEB-INF/classes/mapper/DBAppMapper.xml]
# The error may involve defaultParameterMap
# The error occurred while setting parameters
# SQL: SELECT TITLE, URL, REMARKS PNGURL FROM INFO_BANNER WHERE STATUS = '9' AND ROWNUM

Next, find where the alert log lives:

SQL> show parameter background_dump

NAME                  TYPE    VALUE
--------------------- ------- ------------------------------------------------
background_dump_dest  string  /u01/app/oracle/diag/rdbms/just_test/test/trace

Then check the alert log, which contains the following:

db_recovery_file_dest_size of 3882 MB is 45.88% used. This is a
user-specified limit on the amount of space that will be used by this
database for recovery-related files, and does not reflect the amount of
space available in the underlying filesystem or ASM diskgroup.
Fri Sep 22 02:01:05 2017
Starting background process CJQ0
Fri Sep 22 02:01:05 2017
CJQ0 started with pid=22, OS id=6797
Fri Sep 22 02:06:05 2017
Starting background process SMCO
Fri Sep 22 02:06:05 2017
SMCO started with pid=32, OS id=7393
Fri Sep 22 04:21:10 2017
Thread 1 cannot allocate new log, sequence 221
Private strand flush not complete
  Current log# 1 seq# 220 mem# 0: /u01/app/oracle/oradata/hsrs_pro/redo01.log
Thread 1 advanced to log sequence 221 (LGWR switch)
  Current log# 2 seq# 221 mem# 0: /u01/app/oracle/oradata/hsrs_pro/redo02.log
Fri Sep 22 09:00:35 2017

The first thing I noticed was "Thread 1 cannot allocate new log, sequence 221". A search turned up the following explanation (excerpted from http://blog.csdn.net/zonelan/article/details/7613519); it is actually a common issue. Generally, when a redo log group fills up, the database switches to the next group, which triggers a checkpoint, and DBWR writes the dirty blocks in memory to the data files. Until that checkpoint completes, the old log group cannot be released for reuse. If archive log mode is enabled, the ARCH process must also finish writing the archive. If redo is generated faster than the checkpoint or archiving can complete, LGWR eventually wraps around to a log group that is still needed; a conflict occurs, the database hangs briefly, and error messages like the ones above are written to the alert log.
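The recycling behavior described above can be sketched with a toy simulation. This is purely illustrative, not Oracle's actual algorithm; the `simulate` function and its `checkpoint_lag` parameter are invented here. Groups cycle CURRENT → ACTIVE → INACTIVE, and LGWR stalls whenever the next group is still ACTIVE:

```python
# Toy model of redo log group recycling (illustrative only, not real Oracle code).
# A group is CURRENT while being written, ACTIVE until its checkpoint/archive
# completes, and INACTIVE (reusable) afterwards.

def simulate(groups, checkpoint_lag, switches):
    """Return how many times LGWR had to wait ("cannot allocate new log").

    groups         -- number of redo log groups
    checkpoint_lag -- log switches needed before a group becomes INACTIVE (>= 1)
    switches       -- total log switches to simulate
    """
    status = ["INACTIVE"] * groups
    status[0] = "CURRENT"
    pending = {}          # group index -> switches remaining until INACTIVE
    current = 0
    waits = 0
    for _ in range(switches):
        # Checkpoint/archiver progress: pending groups move toward INACTIVE.
        for g in list(pending):
            pending[g] -= 1
            if pending[g] == 0:
                status[g] = "INACTIVE"
                del pending[g]
        nxt = (current + 1) % groups
        if status[nxt] == "ACTIVE":
            waits += 1                      # LGWR must wait for the checkpoint
            status[nxt] = "INACTIVE"        # assume it eventually completes
            pending.pop(nxt, None)
        status[current] = "ACTIVE"
        pending[current] = checkpoint_lag
        status[nxt] = "CURRENT"
        current = nxt
    return waits

# With 3 small groups and a slow checkpoint LGWR stalls on almost every switch;
# adding groups (3 -> 5) gives DBWR/ARCH enough headroom to remove the stalls.
print(simulate(groups=3, checkpoint_lag=3, switches=100))  # -> 98
print(simulate(groups=5, checkpoint_lag=3, switches=100))  # -> 0
```

This is also why the fix below (adding log groups 4 and 5, and making them larger) relieves the pressure: LGWR gets more runway before it catches up with an unfinished checkpoint.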
So I took the following actions:

SQL> select group#,sequence#,bytes,members,status from v$log;   -- check the status of each log group

GROUP# SEQUENCE#  BYTES      MEMBERS  STATUS
------ ---------- ---------- -------- --------
1      220        52428800   1        INACTIVE   -- free
2      221        52428800   1        CURRENT    -- current
3      219        52428800   1        INACTIVE   -- free

SQL> alter database add logfile group 4 ('/u01/app/oracle/oradata/hsrs_pro/redo04.log') size 500M;   -- add a log group
Database altered.
SQL> alter database add logfile group 5 ('/u01/app/oracle/oradata/hsrs_pro/redo05.log') size 500M;
Database altered.
SQL> alter system switch logfile;
SQL> select group#,sequence#,bytes,members,status from v$log;   -- check the status again; note the difference

GROUP# SEQUENCE#  BYTES      MEMBERS  STATUS
------ ---------- ---------- -------- --------
1      220        52428800   1        INACTIVE
2      221        52428800   1        ACTIVE
3      219        52428800   1        INACTIVE
4      222        524288000  1        ACTIVE
5      223        524288000  1        CURRENT

Then I suddenly noticed the word "recovery" in both the alert log and the Tomcat application log, so I searched again and ran the following commands (whose meaning I did not fully understand at the time):

SQL> select * from v$recovery_file_dest;   -- check the actual size of the recovery area

NAME                           SPACE_LIMIT SPACE_USED SPACE_RECLAIMABLE NUMBER_OF_FILES
------------------------------ ----------- ---------- ----------------- ---------------
/u01/app/oracle/recovery_area  4070572032  3926630400 2059067392        41

SQL> select * from v$flash_recovery_area_usage;

FILE_TYPE            PERCENT_SPACE_USED PERCENT_SPACE_RECLAIMABLE NUMBER_OF_FILES
-------------------- ------------------ ------------------------- ---------------
CONTROL FILE         0                  0                         0
REDO LOG             0                  0                         0
ARCHIVED LOG         0                  0                         0
BACKUP PIECE         53.96              50.58                     37
IMAGE COPY           42.5               0                         4
FLASHBACK LOG        0                  0                         0
FOREIGN ARCHIVED LOG 0                  0                         0

7 rows selected.

SQL> show parameter db_recovery_file_dest_size;   -- finally, what I was looking for: the current recovery area limit

NAME                       TYPE        VALUE
-------------------------- ----------- ------
db_recovery_file_dest_size big integer 3882M

SQL> alter system set db_recovery_file_dest_size=5882M scope=spfile;   -- make it a little bigger?
System altered.
SQL> show parameter db_recovery_file_dest_size;   -- but it doesn't seem to work; still the same size

NAME                       TYPE        VALUE
-------------------------- ----------- ------
db_recovery_file_dest_size big integer 3882M

Another search suggested the following command, which did seem to work:

SQL> alter system set db_recovery_file_dest_size=10G;
System altered.
SQL> show parameter db_recovery_file_dest_size;

NAME                       TYPE        VALUE
-------------------------- ----------- ------
db_recovery_file_dest_size big integer 10G
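Plugging the v$recovery_file_dest numbers above into a quick calculation shows how close the recovery area really was to its limit. The `fra_usage` helper below is invented here; it is just arithmetic on the figures shown:

```python
# Quick arithmetic on the v$recovery_file_dest output above.
# SPACE_LIMIT, SPACE_USED and SPACE_RECLAIMABLE are reported in bytes.

def fra_usage(space_limit, space_used, space_reclaimable):
    """Return (percent used, percent reclaimable, percent genuinely free)."""
    pct_used = 100.0 * space_used / space_limit
    pct_reclaimable = 100.0 * space_reclaimable / space_limit
    pct_free = 100.0 - pct_used
    return round(pct_used, 2), round(pct_reclaimable, 2), round(pct_free, 2)

used, reclaimable, free = fra_usage(4070572032, 3926630400, 2059067392)
print(f"used={used}%  reclaimable={reclaimable}%  free={free}%")
# -> used=96.46%  reclaimable=50.58%  free=3.54%
```

So by the time of the errors the area was about 96% full with only ~3.5% genuinely free, which is consistent with the "recovery" warnings even though the alert log message earlier in the night still reported 45.88% used. Raising db_recovery_file_dest_size to 10G restores headroom.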
Let's watch it for now; as of 10:00 the application log has not reported any more timeouts. End.
To add, here is the difference between the following two options:
scope=both vs. scope=spfile

The Oracle spfile is a dynamic parameter file that stores the database's initialization parameters. "Dynamic" means a parameter can be changed and recorded in the spfile without shutting down the database. When changing a parameter there are three scope options (scope being the range the change applies to):

- scope=spfile changes only the record in the spfile, not memory; the change does not take effect immediately but waits for the next database startup. Some parameters may only be changed this way.
- scope=memory changes only memory, not the spfile, so the change is lost at the next startup.
- scope=both changes both memory and the spfile; omitting scope is equivalent to scope=both.

Well, the problem seems to be solved. Logging in to EM showed a job with an automatic backup policy at 2:00 in the morning. The specific steps:

1. su oracle
2. source .bash_profile
3. sqlplus /nolog
4. conn / as sysdba
5. emctl status dbconsole (check whether EM is started; if not, run emctl start dbconsole)

[oracle@xx ~]$ emctl status dbconsole
Oracle Enterprise Manager 11g Database Control Release 11.2.0.1.0
Copyright (c) 1996, 2009 Oracle Corporation.  All rights reserved.
https://xx:1158/em/console/aboutApplication
Oracle Enterprise Manager 11g is running.
------------------------------------------------------------------
Logs are generated in directory /u01/app/oracle/product/11.2.0/dbhome_1/xx/sysman/log

6. Take the address printed above (https://xx:1158/em/console/aboutApplication) and open it in a browser
7. Click on Jobs
8. Delete the job
Let's leave it like this and see the results tomorrow. Alternatively, the backup strategy can be modified instead of deleted, via the job steps above.
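The scope semantics described in the addendum above can be modeled with a small sketch. The `Instance` class is invented here for illustration and is not an Oracle API: one dict stands in for the running instance's memory and one for the spfile, and a restart reloads memory from the spfile:

```python
# Toy model of ALTER SYSTEM SET ... SCOPE=... semantics (illustrative only).

class Instance:
    def __init__(self, spfile):
        self.spfile = dict(spfile)   # persisted parameters
        self.memory = dict(spfile)   # in-memory parameters, loaded at startup

    def alter_system_set(self, name, value, scope="both"):
        if scope not in ("memory", "spfile", "both"):
            raise ValueError("invalid scope")
        if scope in ("memory", "both"):
            self.memory[name] = value    # takes effect immediately
        if scope in ("spfile", "both"):
            self.spfile[name] = value    # survives the next startup

    def restart(self):
        self.memory = dict(self.spfile)  # memory is reloaded from the spfile


db = Instance({"db_recovery_file_dest_size": "3882M"})

# scope=spfile: not visible until restart (as observed above)
db.alter_system_set("db_recovery_file_dest_size", "5882M", scope="spfile")
print(db.memory["db_recovery_file_dest_size"])   # still 3882M
db.restart()
print(db.memory["db_recovery_file_dest_size"])   # now 5882M

# default scope=both: visible immediately and persisted
db.alter_system_set("db_recovery_file_dest_size", "10G")
print(db.memory["db_recovery_file_dest_size"])   # 10G
```

This matches what happened in the session above: the scope=spfile change appeared to "not work" because show parameter reads the in-memory value, while the plain alter system set (implicitly scope=both) took effect at once.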