* rt_netlink.c: (netlink_link_change) If the status of an
operative interface changes (e.g. MTU changes), the client
daemons should be notified by calling zebra_interface_up_update.
Previously, the information was being updated in zebra's
interface structure, but the clients were not notified of
changes to an operative interface.
* ospf_zebra.c: (ospf_interface_state_up) If the MTU of an operative
interface changes, print a debug message and call ospf_if_reset()
to simulate down/up on the interface.
* ospf_interface.h: Declare new function ospf_if_reset().
* ospf_interface.c: (ospf_if_reset) New function to call ospf_if_down
and ospf_if_up for all ospf interfaces attached to an interface.
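A minimal sketch of what ospf_if_reset() does (simplified data structures; the real ospfd walks its per-interface list of OSPF interfaces rather than the hypothetical linked list used here):

  #include <stddef.h>

  struct ospf_interface
  {
    struct ospf_interface *next;  /* hypothetical linkage of OSPF interfaces on one ifp */
  };

  struct interface
  {
    struct ospf_interface *oifs;  /* hypothetical list of attached OSPF interfaces */
  };

  /* The real ospf_if_down()/ospf_if_up() live in ospf_interface.c. */
  extern void ospf_if_down (struct ospf_interface *oi);
  extern void ospf_if_up (struct ospf_interface *oi);

  void
  ospf_if_reset (struct interface *ifp)
  {
    struct ospf_interface *oi;

    for (oi = ifp->oifs; oi != NULL; oi = oi->next)
      {
        ospf_if_down (oi);      /* drop adjacencies built on the old parameters */
        ospf_if_up (oi);        /* re-establish them with the new MTU etc. */
      }
  }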
|
|
* rt_netlink.c: (netlink_route_multipath) don't set the equalise flag.
No stock Linux kernel has ever supported it, and even if it had
|
|
if we are not debugging.
|
|
* Not all Linux netlink systems have IFLA_WIRELESS
|
|
* rt_netlink.c: (netlink_socket,netlink_request,netlink_parse_info,
netlink_talk) Save errno before calling zserv_privs.change.
[backport candidate]
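The pattern being fixed, as a sketch (raise_privs/lower_privs are stand-ins for zserv_privs.change; the point is that the privilege-change call can itself clobber errno, so the value from the failed system call must be captured first):

  #include <errno.h>
  #include <stdio.h>
  #include <string.h>
  #include <sys/socket.h>

  /* Stand-ins for zserv_privs.change (ZPRIVS_RAISE / ZPRIVS_LOWER); a real
   * implementation may call setuid()/capset() and overwrite errno. */
  static void raise_privs (void) { }
  static void lower_privs (void) { }

  static int
  netlink_send_sketch (int fd, const void *buf, size_t len)
  {
    int save_errno;
    ssize_t ret;

    raise_privs ();
    ret = send (fd, buf, len, 0);
    save_errno = errno;         /* capture before changing privileges back */
    lower_privs ();             /* may itself modify errno */

    if (ret < 0)
      {
        fprintf (stderr, "netlink send failed: %s\n", strerror (save_errno));
        return -1;
      }
    return 0;
  }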
|
|
* zebra/rt_netlink.c: ignore wireless newlink netlink messages.
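A sketch of the check (simplified; tb[] is assumed to be the rtattr table already parsed from the RTM_NEWLINK message, and on systems whose headers lack IFLA_WIRELESS - see the note above - this test would need to be guarded out):

  #include <linux/rtnetlink.h>
  #include <stddef.h>

  static int
  link_change_is_wireless (struct rtattr **tb)
  {
    if (tb[IFLA_WIRELESS] != NULL)
      return 1;                 /* wireless-extension event: nothing for zebra to do */
    return 0;
  }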
|
|
* *.c: Change level of debug messages to LOG_DEBUG.
|
|
* global: Replace strerror with safe_strerror. And vtysh/vtysh.c
needs to include "log.h" to pick up the declaration.
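A sketch of the idea behind safe_strerror() (not the actual lib implementation): always return a printable string, even for error numbers that strerror() has no text for on some systems.

  #include <stdio.h>
  #include <string.h>

  const char *
  safe_strerror_sketch (int errnum)
  {
    static char unknown[32];
    const char *s = strerror (errnum);

    if (s != NULL)
      return s;
    snprintf (unknown, sizeof (unknown), "Unknown error %d", errnum);
    return unknown;
  }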
|
|
ripd might need some more testing though.
|
|
daemon to change netlink receive buffer size.
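A sketch of what such an option ends up doing (assumed shape, not the actual zebra code): enlarge SO_RCVBUF on the already-open netlink socket so bursts of kernel messages are not dropped. The kernel may cap the value at net.core.rmem_max unless SO_RCVBUFFORCE and CAP_NET_ADMIN are used.

  #include <stdio.h>
  #include <sys/socket.h>

  /* fd is assumed to be an already-open netlink socket. */
  static int
  netlink_set_rcvbuf (int fd, int bytes)
  {
    if (setsockopt (fd, SOL_SOCKET, SO_RCVBUF, &bytes, sizeof (bytes)) < 0)
      {
        perror ("setsockopt SO_RCVBUF");
        return -1;
      }
    return 0;
  }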
|
|
* ioctl.c: (if_get_mtu) set mtu6 to mtu
* mtu_kvm.c: (if_kvm_get_mtu) set mtu6 to mtu
* rt_netlink.c: (netlink_interface) set mtu6 to mtu
(netlink_link_change) ditto
2004-05-09 Sowmini Varadhan <sowmini.varadhan@sun.com>
* interface.c: (if_delete_update) only used with HAVE_NETLINK
and RTM_IFANNOUNCE.
(if_flag_dump_vty) handle the Solaris IFF_IPV4 and IFF_IPV6 interface flags
(if_dump_vty) print mtu6 if not same as mtu
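A sketch of the resulting vty output rule (simplified structures, not the real struct interface): always show the MTU, and show mtu6 only when it differs.

  #include <stdio.h>

  struct iface_sketch
  {
    unsigned int mtu;           /* IPv4 MTU, as read from the kernel */
    unsigned int mtu6;          /* IPv6 MTU; set to mtu where the OS gives no separate value */
  };

  static void
  if_dump_mtu (const struct iface_sketch *ifp)
  {
    printf ("  mtu %u", ifp->mtu);
    if (ifp->mtu6 != ifp->mtu)
      printf (" mtu6 %u", ifp->mtu6);
    printf ("\n");
  }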
|
|
* zebra/rt_netlink.c: netlink_parse_info() ignore messages which are
not from the kernel. Reported to RH by Herbert Xu. See
http://rhn.redhat.com/errata/RHSA-2003-307.html and CAN-2003-0858.
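A sketch of the check (simplified receive path, not the actual netlink_parse_info()): a netlink message whose sender PID is non-zero came from another userspace process, not from the kernel, and must not be trusted as interface or routing state.

  #include <linux/netlink.h>
  #include <stdio.h>
  #include <string.h>
  #include <sys/socket.h>
  #include <sys/uio.h>

  static ssize_t
  netlink_recv_from_kernel (int fd, void *buf, size_t len)
  {
    struct sockaddr_nl snl;
    struct iovec iov = { buf, len };
    struct msghdr msg;
    ssize_t n;

    memset (&msg, 0, sizeof (msg));
    msg.msg_name = &snl;
    msg.msg_namelen = sizeof (snl);
    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;

    n = recvmsg (fd, &msg, 0);
    if (n <= 0)
      return n;

    /* nl_pid == 0 identifies the kernel; anything else is another process. */
    if (msg.msg_namelen != sizeof (snl) || snl.nl_pid != 0)
      {
        fprintf (stderr, "ignoring netlink message not from the kernel (pid %u)\n",
                 (unsigned int) snl.nl_pid);
        return 0;               /* caller treats this as nothing to process */
      }
    return n;
  }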
|
|
* zebra/connected.c: revert the 'generic PtP' patch as it causes
far too many problems. People who use FreeSWAN should investigate
native Linux IPsec.
* zebra/rt_netlink.c: ditto
* lib/if.c: ditto
* ripd/ripd.h: ditto
* ripd/ripd.c: ditto
* ripd/rip_interface.c: ditto
* ospfd/ospfd.c: ditto
* ospfd/ospf_snmp.c: ditto
* bgpd/bgp_nexthop.c: ditto
|
|
* lib/version.h: add ZEBRA_URL (unused for now)
* lib/vty.c: CMD_ERR_NOTHING_TODO when reading the conf file should not
be fatal. Slight reformatting.
* ospfd/ospf_zebra.c: ignore reject/blackhole routes if zebra sends
these types of routes (see the sketch after this entry). There should
probably be a new type of route, rather than just a flag (e.g. for
reject/blackhole), so that daemons can more easily choose whether to
redistribute them.
Reorder the is_prefix_default test for ZEBRA_IPV4_ROUTE_DELETE to
avoid the inverted test - slightly more readable.
* redhat/zebra.spec.in: Add ospfapi port to services file, if
with_ospfapi.
* zebra/rib.h: Change nexthop types to an enum.
* zebra/rt_netlink.c: run it through indent -nut.
Add nexthop_types_desc[] descriptive array for nexthop types.
(netlink_route_multipath) debug statements indicate which branch
they are in and print out nexthop type.
* zebra/zebra_rib.c: slight reformatting.
* zebra/zebra_vty.c: Pass the ZEBRA_FLAG_BLACKHOLE flag to
static_add_ipv4() if a Null0 route is configured. Print Null0 for a
STATIC_IPV4_BLACKHOLE route, and ignore flags (it shouldn't be
possible to set flags from the vty) for config and show route.
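A sketch of the ospf_zebra.c behaviour described above (names and flag values are illustrative, not the real ZEBRA_FLAG_* definitions): reject/blackhole routes handed over by zebra are simply dropped instead of being redistributed.

  #include <stdio.h>

  /* Illustrative flag values only. */
  #define FLAG_BLACKHOLE  0x01
  #define FLAG_REJECT     0x02

  static int
  ospf_should_redistribute (unsigned int flags)
  {
    if (flags & (FLAG_BLACKHOLE | FLAG_REJECT))
      {
        printf ("ignoring reject/blackhole route from zebra\n");
        return 0;
      }
    return 1;
  }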
|
|
* zebra/rt_netlink.c: Debug statements added to
netlink_route_multipath()
* zebra/zebra_rib.c: If route has a gateway, delete only existing
route with that specified gateway.
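A sketch of the matching rule (IPv4 only, simplified types): when the delete request names a gateway, only an entry whose nexthop uses that gateway is considered a match; other routes for the prefix are left alone.

  #include <netinet/in.h>
  #include <stddef.h>
  #include <string.h>

  /* nh_gate is the gateway of an installed route's nexthop; gate is the
   * gateway given in the delete request (NULL if none was given). */
  static int
  rib_delete_gateway_matches (const struct in_addr *nh_gate,
                              const struct in_addr *gate)
  {
    if (gate == NULL)
      return 1;                 /* no gateway specified: any route matches */
    return memcmp (nh_gate, gate, sizeof (*gate)) == 0;
  }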
|
|
* lib/vty.{c,h}: Remove the vty layer's dependence on a 'master' global;
pass the thread master explicitly to vty_init() (see the sketch below).
Sort out some header dependency problems with lib/command.h.
* zebra/: Move globals to struct zebrad. Update vty_init().
* (.*)/\1_main.c: update call to vty_init().
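A sketch of the resulting call shape in a daemon's startup code (the exact prototypes in lib/thread.h and lib/vty.h may differ):

  struct thread_master;                 /* opaque to this sketch */

  extern struct thread_master *thread_master_create (void);
  extern void vty_init (struct thread_master *);

  static struct thread_master *master;

  static void
  daemon_startup_sketch (void)
  {
    master = thread_master_create ();
    vty_init (master);          /* previously vty_init() relied on a global */
  }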
|
|
* Merge of zebra privileges
|
|
|
Subject: [zebra 18677] zebra initialisation bug and patch
Hi All,
I have found a bug in zebra that prevents its routing table and
interface database from being initialised properly. The problem occurs
when a request is made via the netlink socket but the kernel produces an
EWOULDBLOCK/EAGAIN when the result is retrieved via recvmsg(). Zebra does
not do anything about this and continues to function (with an empty
routing table and interface list) as if nothing had happened. With no
such information the routing protocol doesn't work!
Two functions are called during the initialisation of Zebra:
interface_lookup_netlink() and netlink_route_read() - obtaining the
interfaces and the routing table from the kernel respectively. These are
the only times these functions are called.
These functions, interface_lookup_netlink() and netlink_route_read(),
use netlink_parse_info() to receive the data from the netlink socket.
The problem is, netlink_parse_info() returns (without error) when its
call to recvmsg() results in an errno of EWOULDBLOCK/EAGAIN. This
behaviour is expected by other functions calling netlink_parse_info() -
netlink_parse_info() is simply called again at a later stage. However, on
initialisation it is never called again.
Since zebra is expected to do nothing else during initialisation, it was
easiest to temporarily change the netlink socket to blocking and wait
indefinitely until the kernel responds with the required information.
Attached is a patch with these changes (the approach is sketched after
this message).
Comments and questions are welcome.
Please inform me if this patch is added to the Zebra source.
--israel
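A sketch of the workaround described above (fcntl-based, simplified; dump() stands in for the initial interface_lookup_netlink()/netlink_route_read() pass):

  #include <fcntl.h>

  static int
  with_blocking_netlink (int fd, int (*dump) (int))
  {
    int flags, ret;

    flags = fcntl (fd, F_GETFL, 0);
    if (flags < 0)
      return -1;
    if (fcntl (fd, F_SETFL, flags & ~O_NONBLOCK) < 0)
      return -1;                /* make recvmsg() wait during startup */

    ret = dump (fd);            /* read the full interface or route table */

    if (fcntl (fd, F_SETFL, flags) < 0)
      return -1;                /* restore non-blocking operation */
    return ret;
  }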
|
|
Subject: [zebra 17290] [PATCHES] - Fixes for problems in 0.93b
Added ifupstaticfix
|
|
addresses.
It seems so far that netlink only ever returns IFA_ADDRESS for IPv6 interfaces
and never IFA_LOCAL, regardless of whether it is PtP or not. Need to investigate
precisely how IPv6 and netlink are supposed to behave wrt broadcast vs
PtP links.
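A sketch of a defensive way to read the attributes, given the behaviour described above (simplified; tb[] is the parsed rtattr table of an RTM_NEWADDR message): prefer IFA_LOCAL when present and fall back to IFA_ADDRESS otherwise.

  #include <linux/rtnetlink.h>
  #include <stddef.h>

  static void *
  ifaddr_pick_local (struct rtattr **tb)
  {
    if (tb[IFA_LOCAL] != NULL)
      return RTA_DATA (tb[IFA_LOCAL]);    /* PtP IPv4: the local end of the link */
    if (tb[IFA_ADDRESS] != NULL)
      return RTA_DATA (tb[IFA_ADDRESS]);  /* the only attribute seen for IPv6 so far */
    return NULL;
  }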
|
|
Date: Sat, 11 Jan 2003 23:26:28 +0100 (CET)
From: Yon Uriarte <havanna_moon@gmx.net>
To: "the list(tm) Zebra" <zebra@zebra.org>
Subject: [zebra 17217] [PATCH] show thread CPU
Hi,
a little patch from the 'stupid preprocessor tricks' collection to record
thread statistics.
Usage: "show thread cpu [r][w][t][e][x]"
Output fields: self-explanatory, I hope. Type is one of RWTEX for:
Read, Write (fd threads), Timer, Event, Execute.
Overhead vs. vanilla zebra: almost nothing. Vanilla CVS zebra already
collects thread run times.
Caveats: Under Linux getrusage() has a granularity of 10ms, which is almost
useless in this case. Run ./configure, edit config.h and comment out
"#define HAVE_RUSAGE"; this way it will use gettimeofday(), which has a much
better granularity. IMHO this is better, as cooperative threads are
effectively running during all that wall time (don't care whether CPU
utilization was 3% or 99% during the time the thread was running; an
effective rusage combined with gettimeofday could give that info).
Maybe someone can give tips for other platforms on API granularity.
TODO: change some of the calls to thread_add_$KIND to
funcname_thread_add_$KIND with a meaningful funcname, so users will get a
better idea of what's going on (see the sketch after the example output below).
F.ex. (AFAIK):
ospf_spf_calculate_timer -> "Routes Step 1, areas SPF"
ospf_ase_calculate_timer -> "Routes Step 2, externals"
Could this be added to the unofficial patch collection?
Could someone with BGP keepalive problems run their bgpd with this patch
and post the results?
TIA, HTH, HAND, regards
yon
Example output:
--------------------------------
ospfd# show thread cpu
Runtime(ms) Invoked Avg uSecs Max uSecs Type Thread
14.829 31 478 585 T ospf_ase_calculate_timer
82.132 9838 8 291 EX ospf_nsm_event
0.029 1 29 29 E ospf_default_originate_timer
0.254 9 28 34 T ospf_db_desc_timer
0.026 7 3 11 T ospf_wait_timer
669.015 523 1279 490696 R vty_read
4.415 45 98 173 TE ospf_network_lsa_refresh_timer
15.026 31 484 588 T ospf_spf_calculate_timer
29.478 1593 18 122 E ospf_ls_upd_send_queue_event
0.173 1 173 173 T vty_timeout
4.173 242 17 58 E ospf_ls_ack_send_event
637.767 121223 5 55 T ospf_ls_ack_timer
39.373 244 161 2691 R zclient_read
12.169 98 124 726 EX ospf_ism_event
0.226 2 113 125 R vty_accept
537.776 14256 37 3813 W ospf_write
4.967 41 121 250 T ospf_router_lsa_timer
0.672 1 672 672 E zclient_connect
7.901 1658 4 26 T ospf_ls_req_timer
0.459 2 229 266 E ospf_external_lsa_originate_timer
3.203 60 53 305 T ospf_maxage_lsa_remover
108.341 9772 11 65 T ospf_ls_upd_timer
33.302 525 63 8628 W vty_flush
0.101 1 101 101 T ospf_router_lsa_update_timer
0.016 1 16 16 T ospf_router_id_update_timer
26.970 407 66 176 T ospf_lsa_maxage_walker
381.949 12244 31 69 T ospf_hello_timer
0.114 22 5 14 T ospf_inactivity_timer
34.290 1223 28 310 T ospf_lsa_refresh_walker
470.645 6592 71 665 R ospf_read
3119.791 180693 17 490696 RWTEX TOTAL
ospfd#
bgpd# sh t c TeX
Runtime(ms) Invoked Avg uSecs Max uSecs Type Thread
21.504 476 45 71 T bgp_keepalive_timer
17.784 1157 15 131 T bgp_reuse_timer
29.080 193 150 249 T bgp_scan
23.606 995 23 420 E bgp_event
317.734 28572 11 69 T bgp_routeadv_timer
0.084 1 84 84 E zlookup_connect
0.526 1 526 526 E zclient_connect
1.348 13 103 147 T bgp_start_timer
19.443 142 136 420 T bgp_connect_timer
16.032 772 20 27 T bgp_import
447.141 32322 13 526 TEX TOTAL
bgpd#
bgpd# show thread cpu rw
Runtime(ms) Invoked Avg uSecs Max uSecs Type Thread
155.043 7 22149 150659 R bgp_accept
129.638 180 720 53844 R vty_read
1.734 56 30 129 R zclient_read
0.255 2 127 148 R vty_accept
58.483 983 59 340 R bgp_read
171.495 29190 5 245 W bgp_write
13.884 181 76 2542 W vty_flush
530.532 30599 17 150659 RW TOTAL
bgpd#
--------------------------------
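A generic illustration of the preprocessor trick behind the funcname_thread_add_$KIND TODO above (not the quagga thread.c API; names are hypothetical): a wrapper macro stringifies the callback, so the statistics can be keyed by a readable name while callers keep writing the short form.

  #include <stdio.h>

  struct thread_sketch
  {
    int (*func) (void *);
    void *arg;
    const char *funcname;       /* recorded for "show thread cpu"-style output */
  };

  static struct thread_sketch
  funcname_thread_add_event (int (*func) (void *), void *arg, const char *funcname)
  {
    struct thread_sketch t = { func, arg, funcname };
    printf ("registered event thread: %s\n", funcname);
    return t;
  }

  /* Callers keep writing thread_add_event(cb, arg); the macro records "cb". */
  #define thread_add_event(func, arg) \
    funcname_thread_add_event (func, arg, #func)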
|
|