
Cisco switch Port err-disable solution

2025-03-29 Update From: SLTechnology News&Howtos


One of my Cisco 2960G switches is connected to the core 3850 switch through a multimode 10G SFP-10GBase-LRM fiber link. The network went down this morning, and the SFP module lights on both the 3850 and the 2960 were off. Checking the port on the Cisco 2960G gave the following output:

# show int status err-disabled

Port      Name                  Status          Reason          Err-disabled Vlans
Te6/0/1   [TRUNK] swi-core01    err-disabled    link-flap

The solution is as follows:

conf t
 int Te6/0/1
 shut
 no shut
 end

OK!!!
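Before moving on, it is worth confirming that the port really came back up; a quick check, using the same interface name from my case, looks like this:

show int te6/0/1 status
show int status err-disabled

The first command should now show the port as connected, and the second should no longer list Te6/0/1.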

More information on link-flap and err-disable is available here: http://www.net130.com/cms/Pub/Tech/tech_zh/2010_11_07_20606.htm and http://shanliren.blog.51cto.com/159454/165595

Notes on link-flap on the Cisco website:

Link-flap error

Link flap means that the interface continually goes up and down. The interface is put into the errdisabled state if it flaps more than five times in 10 seconds. The common cause of link flap is a Layer 1 issue such as a bad cable, duplex mismatch, or bad Gigabit Interface Converter (GBIC) card. Look at the console messages or the messages that were sent to the syslog server that state the reason for the port shutdown.

My translation:

Link flap means that the interface continually goes up and down. If an interface goes up/down more than 5 times in 10 seconds, it is put into the errdisable state. Common causes of link flap are Layer 1 problems such as a bad network cable, a duplex mismatch, or a faulty GBIC card. You can check the logs on the console or the syslog server to find the reason for the port shutdown. With this kind of problem, we need to watch for switch ports "playing dead" and find a way to "rescue" the port without rebooting the switch.

Rescue step 1: check the log and the port status

After logging in to the switch, execute show log and you will see messages like the following:

21w6d: %ETHCNTR-3-LOOP_BACK_DETECTED: Keepalive packet loop-back detected on FastEthernet0/20.
21w6d: %PM-4-ERR_DISABLE: loopback error detected on Fa0/20, putting Fa0/20 in err-disable state

The messages clearly indicate that port Fa0/20 was placed in the err-disable state because a loop was detected.

View the status of the port:

Switch# show int te6/0/1 status

Port      Name                  Status          Vlan    Duplex    Speed    Type
Te6/0/1   [TRUNK] swi-core01    err-disabled    1       full      10G      SFP-10GBase-LRM

This message indicates more clearly that the port is in the err-disabled state.

Now that we can see that the port is in an error state, we need a way to restore it to normal.

Rescue step 2: recover the port from the error state

Enter the switch's global configuration mode and execute errdisable recovery cause ?; you will see the following output:

Switch(config)# errdisable recovery cause ?
  all                  Enable timer to recover from all causes
  bpduguard            Enable timer to recover from BPDU Guard error disable state
  channel-misconfig    Enable timer to recover from channel misconfig disable state
  dhcp-rate-limit      Enable timer to recover from dhcp-rate-limit error disable state
  dtp-flap             Enable timer to recover from dtp-flap error disable state
  gbic-invalid         Enable timer to recover from invalid GBIC error disable state
  l2ptguard            Enable timer to recover from l2protocol-tunnel error disable state
  link-flap            Enable timer to recover from link-flap error disable state
  loopback             Enable timer to recover from loopback detected disable state
  pagp-flap            Enable timer to recover from pagp-flap error disable state
  psecure-violation    Enable timer to recover from psecure violation disable state
  security-violation   Enable timer to recover from 802.1x violation disable state
  udld                 Enable timer to recover from udld error disable state
  unicast-flood        Enable timer to recover from unicast flood disable state
  vmps                 Enable timer to recover from vmps shutdown error disable state

From the options listed, we can see that there are many reasons a port can be placed in the err-disabled state. Since we know exactly what disabled the port (link-flap in my case, loopback in the log example above), we can type the command directly:

Switch(config)# errdisable recovery cause link-flap

Yes, it really is that simple: one short command solves a problem that had been troubling us for a long time. So how do we verify that the command has taken effect?
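For reference, a minimal sketch of the whole sequence as I ran it (the cause and interface are from my case; adjust them to yours):

conf t
 errdisable recovery cause link-flap
 end
show errdisable recovery

The last command, covered in the next step, confirms that the recovery timer is now Enabled for link-flap.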

Rescue step 3: show the recovery of the port that has been placed in the error state

Switch# show errdisable recovery

ErrDisable Reason            Timer Status
-----------------            ------------
arp-inspection               Disabled
bpduguard                    Disabled
channel-misconfig (STP)      Disabled
dhcp-rate-limit              Disabled
dtp-flap                     Disabled
gbic-invalid                 Disabled
inline-power                 Disabled
link-flap                    Enabled
mac-limit                    Disabled
loopback                     Disabled
pagp-flap                    Disabled
port-mode-failure            Disabled
pppoe-ia-rate-limit          Disabled
psecure-violation            Disabled
security-violation           Disabled
sfp-config-mismatch          Disabled
small-frame                  Disabled
storm-control                Disabled
udld                         Disabled
vmps                         Disabled
psp                          Disabled
dual-active-recovery         Disabled
evc-lite input mapping fa    Disabled
Recovery command: "clear     Disabled

Timer interval: 300 seconds

Interfaces that will be enabled at the next timeout:

Interface      Errdisable reason      Time left (sec)
Te6/0/1        link-flap              222

As the output above shows, the port on this switch (Te6/0/1) will return to normal after 222 seconds, and that is indeed what happened after waiting a few minutes. In the end, the ports that were "playing dead" were rescued without rebooting the switch.


The above is one netizen's solution to Cisco switch ports "playing dead". The method works, but it is tedious to recover manually every time it happens.

To let the switch recover automatically from this kind of failure, there is a corresponding configuration that helps.

Here I would like to add to the netizen's content the configuration that lets a Cisco switch automatically recover ports that have "played dead".

The configuration, entered in global configuration mode, is as follows:

errdisable recovery cause udld
errdisable recovery cause bpduguard
errdisable recovery cause security-violation
errdisable recovery cause channel-misconfig
errdisable recovery cause pagp-flap
errdisable recovery cause dtp-flap
errdisable recovery cause link-flap
errdisable recovery cause sfp-config-mismatch
errdisable recovery cause gbic-invalid
errdisable recovery cause l2ptguard
errdisable recovery cause psecure-violation
errdisable recovery cause dhcp-rate-limit
errdisable recovery cause unicast-flood
errdisable recovery cause vmps
errdisable recovery cause storm-control
errdisable recovery cause inline-power
errdisable recovery cause arp-inspection
errdisable recovery cause loopback

These are the causes that can put a port into the "playing dead" (err-disabled) state; you can configure automatic recovery for all of the above.
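As a shortcut, the help output shown earlier also lists an all keyword, so roughly the same effect can be had with a single line; a minimal sketch, with the interval value given only as an example:

errdisable recovery cause all
errdisable recovery interval 300

Listing the causes one by one, as above, has the advantage that you can leave out causes (for example psecure-violation) that you would rather investigate manually.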

When link flap triggers err-disable, a log message like the following is generated:

%PM-4-ERR_DISABLE: link-flap error detected on Gi4/1, putting Gi4/1 in err-disable state

Issue this command in order to view the flap values:

Cat6knative# show errdisable flap-values

!--- Refer to show errdisable flap-values for more information on the command.

ErrDisable Reason    Flaps    Time (sec)
-----------------    ------   ----------
pagp-flap            3        30
dtp-flap             3        30
link-flap            5        10

The above lets the port recover automatically after it "plays dead". Next: troubleshooting an interface in the err-disable state.

Symptoms of failure:

The link is down, and the port LED is off or shows orange (LED behavior differs between platforms).

The show interface output shows the interface status:

FastEthernet0/47 is down, line protocol is down (err-disabled)

The interface status is err-disable.

Sw1# show interfaces status

Port      Name    Status          Vlan    Duplex    Speed    Type
Fa0/47            err-disabled    1       auto      auto     10/100BaseTX

If an interface shows a status of err-disabled, the command show interfaces status err-disabled will tell you why err-disable was triggered.

In the following example the reason is bpduguard, because spanning-tree bpduguard enable is configured on the port that connects to another switch.

Sw1# show interfaces status err-disabled

Port      Name    Status          Reason
Fa0/47            err-disabled    bpduguard

The reasons that can produce err-disable on an interface can be seen with the following command. In the default configuration, all of the listed reasons can cause an interface to be set to err-disable.

Sw1# show errdisable detect

ErrDisable Reason    Detection status
-----------------    ----------------
udld                 Enabled
bpduguard            Enabled
security-violatio    Enabled
channel-misconfig    Enabled
psecure-violation    Enabled
dhcp-rate-limit      Enabled
unicast-flood        Enabled
vmps                 Enabled
pagp-flap            Enabled
dtp-flap             Enabled
link-flap            Enabled
l2ptguard            Enabled
gbic-invalid         Enabled
loopback             Enabled

From the list we can see that common reasons are udld, bpduguard, link-flap, loopback, and so on. Exactly what caused the current interface's err-disable can be seen with show interfaces status err-disabled.

To manually re-enable the port, issue shutdown followed by no shutdown in interface configuration mode.
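For the Fa0/47 example above, the manual recovery is a sketch like this (substitute your own interface name):

conf t
 interface fa0/47
 shutdown
 no shutdown
 end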

In the default configuration, once an interface is set to err-disable, IOS makes no attempt to restore it.

This can be checked with show errdisable recovery: all values under Timer Status are Disabled.

In the following example, the Timer Status for bpduguard is Enabled because bpduguard recovery has been configured manually.

Sw1# show errdisable recovery

ErrDisable Reason    Timer Status
-----------------    ------------
udld                 Disabled
bpduguard            Enabled
security-violatio    Disabled
channel-misconfig    Disabled
vmps                 Disabled
pagp-flap            Disabled
dtp-flap             Disabled
link-flap            Disabled
l2ptguard            Disabled
psecure-violation    Disabled
gbic-invalid         Disabled
dhcp-rate-limit      Disabled
unicast-flood        Disabled
loopback             Disabled

Timer interval: 300 seconds

Interfaces that will be enabled at the next timeout:

Interface      Errdisable reason      Time left (sec)
Fa0/47         bpduguard              217

To configure IOS to reactivate err-disabled interfaces, use the following command:

Sw1(config)# errdisable recovery cause bpduguard
Sw1(config)# errdisable recovery cause ?
  all                  Enable timer to recover from all causes
  bpduguard            Enable timer to recover from BPDU Guard error disable state
  channel-misconfig    Enable timer to recover from channel misconfig disable state
  dhcp-rate-limit      Enable timer to recover from dhcp-rate-limit error disable state
  dtp-flap             Enable timer to recover from dtp-flap error disable state
  gbic-invalid         Enable timer to recover from invalid GBIC error disable state
  l2ptguard            Enable timer to recover from l2protocol-tunnel error disable state
  link-flap            Enable timer to recover from link-flap error disable state
  loopback             Enable timer to recover from loopback detected disable state
  pagp-flap            Enable timer to recover from pagp-flap error disable state
  psecure-violation    Enable timer to recover from psecure violation disable state
  security-violation   Enable timer to recover from 802.1x violation disable state
  udld                 Enable timer to recover from udld error disable state
  unicast-flood        Enable timer to recover from unicast flood disable state
  vmps                 Enable timer to recover from vmps shutdown error disable state

After the above command is configured, IOS attempts to restore an interface that has been set to err-disable after a period of time, 300 seconds by default.

However, if the underlying cause of the err-disable is not fixed, the interface will be set to err-disable again after it comes back up.

To adjust the err-disable recovery timeout, use the following command:

Sw1(config)# errdisable recovery interval ?

The timer interval (in seconds) can be set between 30 and 86400 seconds; the default is 300 seconds.
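For example, to shorten the recovery timer to 60 seconds (the value here is only an illustration; anything from 30 to 86400 is accepted):

Sw1(config)# errdisable recovery interval 60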

If the cause of the err-disable is udld, here is a command that works very well:

sw1# udld reset
No ports are disabled by UDLD.

At the same time, a series of log messages is usually generated when a port is set to err-disable, for example:

*Mar 15 15:47:19.984: % Received BPDU on port FastEthernet0/47 with BPDU Guard enabled. Disabling port.
*Mar 15 15:47:19.984: %PM-4-ERR_DISABLE: bpduguard error detected on Fa0/47, putting Fa0/47 in err-disable state
*Mar 15 15:47:21.996: % Interface FastEthernet0/47, changed state to down

Collecting these logs is also very useful.

Therefore, it is recommended to configure a syslog server to collect log information.
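A minimal syslog configuration sketch, assuming a collector at 192.0.2.10 (a placeholder address) and the default logging levels:

conf t
 logging host 192.0.2.10
 logging trap informational
 end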

Sw1# show interfaces status

Port Name Status Vlan Du...

Keep errdisable detection enabled so that you can use the show errdisable commands to see what caused the err-disable and then troubleshoot with that information.

If you do not want to affect normal use, first run no errdisable detect cause loopback, then no shut the "dead" port once; if the port then works without problems, the cause must have been a loop. Find time to unplug the suspected switch, or pull the network cables one by one to check. A more efficient way is to check the status of all RJ45 and Gi ports on the switch in question; any port that shows errdisable information is a problem port.

Switch# show interfaces status err-disabled

Port      Name                    Status          Reason
Fa0/22                            err-disabled    link-flap
Fa0/37    For office in 100K      err-disabled    link-flap
Fa0/41    unknow                  err-disabled    link-flap
Fa0/42    Training Dc066          err-disabled    link-flap
Fa0/45    Production line VM      err-disabled    link-flap

Switch# show errdisable flap-values

ErrDisable Reason    Flaps    Time (sec)
-----------------    ------   ----------
pagp-flap            3        30
dtp-flap             3        30
link-flap            5        10

(Link-flap here is usually caused by poor link quality.)

To turn off errdisable detection entirely:

Switch(config)# no errdisable detect cause all

Several common reasons for err-disable on switch interfaces:

1. EtherChannel misconfiguration

2. Duplex mismatch

Style= "TEXT-INDENT: 2em" > 3. BPDU port guard

4. UDLD

5. Link-flap error

6. Loopback error

7. Port security violation

First, err-disable appears when the EtherChannel (FEC) configuration does not match. Suppose Switch A has its FEC mode set to on: Switch A then does not send PAgP packets to negotiate FEC with the connected Switch B, because it assumes Switch B has already been configured for FEC. If in fact Switch B has no FEC configured, then after Switch B has stayed in this state for more than about a minute, the STP on Switch A concludes there is a loop, and the port goes into err-disable. The solution is to configure the FEC mode with channel-group 1 mode desirable non-silent, which means the channel is not established until both sides successfully negotiate FEC; until then the interfaces remain in the normal state.
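A sketch of the recommended EtherChannel configuration, applied on both switches (the channel-group number and interface range are only examples):

interface range gi1/0/1 - 2
 channel-group 1 mode desirable non-silent

With desirable non-silent, the ports keep behaving as ordinary links until PAgP negotiation succeeds on both sides, so a misconfigured peer no longer triggers err-disable.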

The second reason is a duplex mismatch. The end configured as half-duplex detects whether the peer is transmitting and only sends a frame to bring the link up when the peer stops transmitting, whereas the end configured as full-duplex keeps sending regardless of whether the link is idle. If this goes on, the link ends up in the err-disable state.
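To rule this cause out, one sketch is to hard-code speed and duplex identically on both ends (the values are only examples; leaving both sides on auto is also consistent):

interface fa0/47
 speed 100
 duplex full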

The third reason is BPDU guard, which is related to portfast. If an interface is configured with portfast, it is supposed to connect to a PC, and a PC does not send spanning-tree BPDU frames, so the port should never receive BPDUs. To enforce this, the administrator also configures BPDU guard on the same interface to block unexpected BPDU frames. If a switch is then accidentally connected to that portfast plus BPDU guard interface, the interface receives BPDU frames and, because BPDU guard is configured, naturally enters the err-disable state. Solution: no spanning-tree portfast bpduguard default, or simply turn off portfast on that interface.
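A sketch of the two fixes mentioned above, applied per interface (the interface name is only an example):

interface fa0/47
 spanning-tree bpduguard disable
! or remove portfast so normal spanning-tree runs on the port:
 no spanning-tree portfast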

The fourth reason is UDLD. UDLD is a Cisco proprietary Layer 2 protocol used to detect unidirectional links. Sometimes the physical layer is up but the link only works in one direction, so UDLD is needed to check whether the link is really up in both directions. When both ends A and B are configured with UDLD, A sends B a UDLD frame containing its own port ID; when B receives it, B returns a UDLD frame that also contains A's port ID. When A receives this frame and finds its own port ID inside, it considers the link healthy; otherwise the port goes into err-disable. Suppose A is configured with UDLD and B is not: A sends B a frame containing its own port ID, but B does not understand the frame and never returns a UDLD frame containing A's port ID, so A concludes the link is unidirectional and the port naturally enters the err-disable state.
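If you do want UDLD protection, it has to run on both ends of the link; a minimal sketch (aggressive mode is optional, and the interface is only an example):

udld enable
interface te6/0/1
 udld port aggressive

And, as noted earlier, udld reset re-enables ports that UDLD has err-disabled.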

The fifth reason is link flapping. When a link goes up and down five times within 10 seconds, the port enters the err-disable state.

The sixth reason is a keepalive loopback. Before 12.1EA, the switch sent keepalive messages on all interfaces by default. Because some connected switches may have problems negotiating spanning-tree, an interface can receive its own keepalive back, and the interface then becomes err-disabled. The solution is to turn off keepalives, or upgrade IOS to 12.2SE.
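A sketch of the keepalive workaround on an affected port (keepalives only need to be disabled where the problem occurs; Fa0/20 is the port from the earlier log example):

interface fa0/20
 no keepalive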

The last reason, which is relatively simple, is that port-security is configured with the violation action set to shutdown.
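A sketch of a typical port-security configuration whose violation action leads to err-disable, together with the matching recovery cause (the interface and the maximum value are only examples):

interface fa0/10
 switchport mode access
 switchport port-security
 switchport port-security maximum 2
 switchport port-security violation shutdown
!
errdisable recovery cause psecure-violation

If you prefer the port never to be shut down, violation restrict or violation protect drops the offending traffic without err-disabling the port.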
